09:00 - 10:30
Parallel sessions 4
Room: C-Building - N14
Chairs:
Kathrin Finke, Ingrid Scharlau, Jan Tünnermann
The Theory of Visual Attention (TVA) continues to be developed as a powerful quantitative framework for understanding attentional selection and capacity. Current research applies TVA across diverse contexts and methodologies. This symposium presents new theoretical developments, methodological advances, and applied perspectives that extend TVA’s reach and precision, and sometimes challenge its present state. Estela Carmona investigates how self-relevant information shapes attentional parameters, offering insights into the role of personal significance in visual selection. Anders Petersen follows by introducing advances in modeling enumeration data within the TVA framework, extending the set of tasks that can be used with TVA. Kai Biermeier, Ngoc Chi Banh, and Ingrid Scharlau test the applicability of TVA to online scenarios and identify methodological challenges in the online estimation of attentional processing speed. Tobias Peters, Kai Biermeier, and Ingrid Scharlau apply TVA measures of attention to human–AI interaction, examining whether attentional signatures can indicate adaptive distrust. Finally, Jan Tünnermann and Ingrid Scharlau revisit lateral asymmetries in visual processing, presenting updated findings on left–right visual field differences within TVA. Together, these contributions demonstrate the continuing vitality of TVA research and its capacity to inform theoretical, methodological, and applied perspectives on visual attention, while also highlighting challenges that remain to be addressed. Part 2 of the symposium will turn to attentional changes in diverse populations.
Submission 438
Interpreting Changes in Visual Attention: Can It Indicate Healthy Distrust in Human-AI Interaction?
SymposiumTalk-04
Presented by: Tobias Peters
Tobias Peters, Kai Biermeier, Ingrid Scharlau
Paderborn University, Germany
The Theory of Visual Attention (TVA) is a well-established approach to quantifying visual attention in terms of theoretically meaningful parameters. Beyond fundamental research, e.g., on saliency, TVA has been successfully applied in diverse contexts, including clinical populations, virtual reality, and traffic situations. In this talk, we present an experimental approach that takes the application of TVA even further. Motivated by recent developments in artificial intelligence (AI), we use TVA to assess visual attention during mock-up human-AI interaction. We hypothesized that visual attention could serve as an indicator of healthy distrust towards AI.

Healthy distrust describes a reasonable, careful stance towards AI that aims to mitigate overreliance. Crucially, this stance and the related notion of appropriate trust are informed not only by actual AI errors, but also by the mere intuition that errors are possible. The typical measurements of these notions – self-reported (dis)trust and reliance – only indirectly capture whether participants notice (potential) errors. We propose that visual attention complements self-report and behavioral measures by capturing other cognitive aspects, such as vigilance.

We compared participants’ attentional capacity and attentional weights for correct and incorrect mock-up AI classifications. While misclassifications reduced attentional capacity, this reduction was not beneficial for subsequent judgments of the classifications. Furthermore, attentional weighting was not affected by the correctness of the classifications, but only by the difficulty of categorizing the stimuli themselves. We will thus discuss the advantages and disadvantages of using visual attention as an indicator of appropriate trust and healthy distrust.