Submission 438
Interpreting Changes in Visual Attention: Can It Indicate Healthy Distrust in Human-AI Interaction?
SymposiumTalk-04
Presented by: Tobias Peters
The Theory of Visual Attention (TVA) is a well-established approach to quantifying visual attention through theoretically meaningful parameters. Beyond fundamental research, e.g., on saliency, TVA has been successfully applied in diverse contexts, including clinical populations, virtual reality, and traffic situations. In this talk, we present an experimental approach that takes the application of TVA even further. Motivated by recent developments in artificial intelligence (AI), we use TVA to assess visual attention during mock-up human-AI interaction. We hypothesized that visual attention could serve as an indicator of healthy distrust towards AI.
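For orientation, a minimal sketch of the parameters in question, assuming the standard TVA formulation (Bundesen, 1990); the exact parameterization used in this study may differ. The rate v(x, i) at which an object x in a display S is encoded as belonging to category i is

v(x, i) = \eta(x, i)\, \beta_i \, \frac{w_x}{\sum_{z \in S} w_z},

where \eta(x, i) is the sensory evidence that x belongs to i, \beta_i is a decision bias, and w_x is the attentional weight of object x relative to all objects in S. The overall processing capacity is C = \sum_{x \in S} \sum_{i} v(x, i). The "attentional capacity" and "attentional weight" discussed below correspond to the C and w parameters.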
Healthy distrust describes a reasonable, careful stance towards AI with the aim of mitigating overreliance. Crucially, this stance and the related notion of appropriate trust are informed not only by actual AI errors, but also by the mere intuition that errors are possible. The typical measurements of these notions – self-reported (dis)trust and reliance – capture only indirectly whether participants notice (potential) errors. We propose that visual attention complements self-report and behavioral measures by capturing additional cognitive aspects, such as vigilance.
We compared participants’ attentional capacity and attentional weights for correct and incorrect mock-up AI classifications. While misclassifications reduced attentional capacity, this capacity reduction was not beneficial for participants’ subsequent judgments of the classifications. Furthermore, attentional weighting was not affected by the correctness of the classifications, but only by the difficulty of categorizing the stimuli themselves. Based on these findings, we will discuss the advantages and disadvantages of using visual attention as an indicator of appropriate trust and healthy distrust.