09:00 - 10:30
Parallel sessions 1
Room: HSZ - N2
Chair/s:
Lea Marie Petrasch
As artificially intelligent systems become embedded in daily life, understanding the cognitive foundations of our interactions with them is essential for shaping the future of human-technology relations. This symposium brings together complementary perspectives that examine how humans think about, perceive, and interact with intelligent systems, focusing on social robots and large language models (LLMs). The studies contribute to a deeper understanding of the contexts and cues under which we perceive and act toward AI as social units or actors (Gambino et al., 2020; Nass et al., 1994). The first contribution, by Katharina Kühne, compared the perception of robotic and human agents through motor resonance, finding that both evoke comparable implicit motor responses irrespective of anthropomorphic detail or biomechanical feasibility. These results highlight how humans internally simulate robotic agents. The second study, by Jairo Perez-Osorio, examined how the reliability of a humanoid robot’s gaze affects human–robot collaboration, finding that consistent gaze improved attentional alignment, task efficiency, and coordination, while unreliable gaze disrupted performance. These findings highlight the critical role of social cues in supporting adaptive joint action with artificial agents. Two further contributions focus on communication with chatbots. Over four rounds, Anita Körner compared performance in a classic referential communication task between a basic version of a conversational agent (ChatGPT) and a version prompted to use grounding strategies. She found that time per round decreased, and more so for the group that interacted with the agent prompted with grounding strategies, indicating greater common ground. Lea Petrasch investigated whether humans apply linguistic perspective taking when communicating with chatbots (LLMs). Adapting Keysar’s (1994) paradigm on the illusory transparency of intention, she found an egocentric bias in judgments of the chatbots’ understanding. To round things off, Marcel Binz will discuss foundational unified models of human cognition: models that not only predict, simulate, and explain behavior in a single domain but instead offer a unified take on the mind. Together, these contributions foster understanding of how humans make sense of artificial communicators and of how our cognition and perception of such agents can best be studied in a digital social world.
Submission 260
The Illusory Transparency of Intention: Cognitive Biases in Human-AI Dialogue
SymposiumTalk-04
Presented by: Lea Marie Petrasch
Lea Marie Petrasch, Regina Jucks
University of Münster, Germany
People do not just use computers; they often treat them as social actors (CASA paradigm; Gambino et al., 2020; Nass et al., 1994). Yet effective communication requires assessing an interlocutor’s (lack of) knowledge, known as perspective taking (Nickerson, 1999). Individuals frequently neglect this and assume that others grasp their intended meaning, reflecting a well-documented egocentric bias (Keysar, 1994; Lau et al., 2022). In two studies, we explored whether individuals (mistakenly) expect Large Language Models to infer an intended meaning even when the necessary context is unavailable to the system (illusory transparency of intention). In a within-subjects design, we varied privileged information (negative vs. positive) in six chatbot-student scenarios. Our first study used edited screenshots (visual stimuli), whereas our second study focused on the voice function of chatbots (auditory stimuli). In two online surveys, participants were introduced to a scenario and then judged whether the chatbot would interpret the last statement as sarcastic. In a second round, participants reported their own perception of sarcasm and predicted the chatbot’s response. Our results showed significant differences between the two conditions: participants attributed more perception of sarcasm to the chatbot and expected more sarcasm-congruent responses when negative privileged information was present. Thus, the illusory transparency of intention extends to communication with LLMs: individuals show an egocentric bias in assessing their interlocutor’s understanding even when that interlocutor is not human.