09:00 - 10:30
Parallel sessions 1
Room: HSZ - N2
Chair/s:
Lea Marie Petrasch
As artificially intelligent systems become embedded in daily life, understanding the cognitive foundations of our interactions with them is essential for shaping the future of human-technology relations. This symposium brings together complementary perspectives that examine how humans think about, perceive, and interact with intelligent systems, focusing on social robots and large language models (LLMs). The studies contribute to a deeper understanding of the contexts and cues under which we perceive and act toward AI as social units or actors (Gambino et al., 2020; Nass et al., 1994). The first contribution, by Katharina Kühne, compared the perception of robotic and human agents through motor resonance, finding that both evoke comparable implicit motor responses irrespective of anthropomorphic detail or biomechanical feasibility. These results highlight how humans simulate robotic agents internally. The second study, by Jairo Perez-Osorio, examined how the reliability of a humanoid robot’s gaze affects human–robot collaboration, finding that consistent gaze improved attentional alignment, task efficiency, and coordination, while unreliable gaze disrupted performance. The findings highlight the critical role of social cues in supporting adaptive joint action with artificial agents. Two further contributions focus on communication with chatbots. Across four rounds, Anita Körner compared performance in a classic referential communication task between a basic version of a conversational agent (ChatGPT) and a version prompted to use grounding strategies. She found that time per round decreased, and more so for the group that interacted with the agent prompted with grounding strategies, indicating more common ground. Lea Petrasch investigated whether humans apply linguistic perspective taking when communicating with chatbots (LLMs).
In a study adapting Keysar’s (1994) paradigm on the illusory transparency of intention, the results showed an egocentric bias in judgements of the chatbots’ understanding. To round off the symposium, Marcel Binz will discuss foundational unified models of human cognition: models that not only predict, simulate, and explain behavior in a single domain, but instead offer a unified account of the mind. Together, these contributions deepen our understanding of how humans make sense of artificial communicators and how cognition and perception of such agents can best be studied in a digital social world.
Submission 587
Gaze Reliability Modulates Attentional Orienting and Joint Performance in Human-Agent Collaboration
SymposiumTalk-02
Presented by: Eva Wiese
Jairo Perez-Osorio 1, Eva Wiese 1, 2
1 Technical University of Berlin, Germany
2 George Mason University, United States
Effective human-agent interaction relies on the ability to integrate social cues and dynamically allocate attention during joint action. In this study, we investigated how the reliability of a humanoid robot’s gaze affects cognitive processing and cooperative performance in a shared visual search task. Participants collaborated with the NAO robot in a time-constrained, turn-taking paradigm, with the robot’s gaze either consistently aligned with task goals or uninformative. Critically, participants were not informed about this manipulation. Reliable gaze led to significantly faster target selection and more efficient coordination, indicating enhanced attentional alignment between human and robot. Unreliable gaze increased response latencies and disrupted joint performance, revealing the cost of inconsistent social signals. Additionally, post-interaction ratings showed that participants in the reliable condition perceived the robot as more responsive, whereas those in the unreliable condition reported decreased confidence in the robot’s behavior under changing conditions. Crucially, reaction time differences emerged even though participants reported no awareness of the gaze manipulation, demonstrating that behavioral measures reveal adaptation before it surfaces in explicit evaluation. By integrating continuous metrics with self-report scales, this study shows that reaction times capture millisecond-level behavioral adjustments during interaction, while questionnaires identify which qualitative dimensions (such as responsiveness or reliability) drive post-hoc attitudes. These findings offer insights into the cognitive mechanisms supporting adaptive human–robot collaboration and highlight the value of multimodal assessment in the design of socially intelligent systems.