09:00 - 10:30
Parallel sessions 1
Room: HSZ - N2
Chair/s:
Lea Marie Petrasch
As artificially intelligent systems become embedded in daily life, understanding the cognitive foundations of our interactions with them is essential for shaping the future of human-technology relations. This symposium brings together complementary perspectives that examine how humans think about, perceive, and interact with intelligent systems, focusing on social robots and large language models (LLMs). The studies contribute to a deeper understanding of the contexts and cues under which we perceive and act toward AI as social units or actors (Gambino et al., 2020; Nass et al., 1994). The first contribution, by Katharina Kühne, compared the perception of robotic and human agents through motor resonance, finding that both evoke comparable implicit motor responses irrespective of anthropomorphic detail or biomechanical feasibility. These results highlight how humans internally simulate robotic agents. The second study, by Jairo Perez-Osorio, examined how the reliability of a humanoid robot’s gaze affects human–robot collaboration, finding that consistent gaze improved attentional alignment, task efficiency, and coordination, while unreliable gaze disrupted performance. The findings highlight the critical role of social cues in supporting adaptive joint action with artificial agents. Two further contributions focus on communication with chatbots. Over four rounds, Anita Körner compared performance in a classic referential communication task between a basic version of a conversational agent (ChatGPT) and a version prompted to use grounding strategies. She found that time per round decreased, even more so for the group interacting with the agent prompted with grounding strategies, indicating greater common ground. Lea Petrasch investigated whether humans apply linguistic perspective taking when communicating with chatbots (LLMs).
Adapting Keysar’s (1994) paradigm on the illusory transparency of intention, she found an egocentric bias in judgments of the chatbots’ understanding. To round things off, Marcel Binz will discuss foundational unified models of human cognition: models that not only predict, simulate, and explain behavior in a single domain but instead offer a unified account of the mind. Together, these contributions advance our understanding of how humans make sense of artificial communicators and how the cognition and perception of such agents can best be studied in a digital social world.
Submission 183
Do We Mirror Robots? Investigating Human Grip Force in Response to Robotic Hand Movements
SymposiumTalk-01
Presented by: Katharina Kühne
Katharina Kühne 1, Emily Evermann 2, Mandana Leandra Tiurma Sutter 2
1 University of Potsdam, Germany
2 Technical University of Berlin (TU Berlin), Germany
The integration of social robots into everyday life has intensified interest in the neural and perceptual mechanisms underlying Human–Robot Interaction (HRI). Motor resonance, supported by the mirror neuron system (MNS), plays a key role in action perception and understanding, and may vary depending on an agent’s anthropomorphism and the biological plausibility of its movements. However, empirical evidence remains inconsistent. In two within-subject experiments (N = 24; N = 27), we examined implicit motor responses while participants viewed static images of human and robotic hands differing in anthropomorphic design and biomechanical plausibility. Analyses revealed no significant differences in motor resonance strength across conditions, with Bayesian analyses further supporting the null hypotheses. These findings suggest that both human and robotic stimuli evoke comparable motor resonance and perceptual processing, irrespective of anthropomorphic detail or biomechanical feasibility. The results contribute to a deeper understanding of how humans perceive and internally simulate robotic agents in social contexts.