09:00 - 10:30
Parallel sessions 1
Room: HSZ - N2
Chair/s:
Lea Marie Petrasch
As artificially intelligent systems become embedded in daily life, understanding the cognitive foundations of our interactions with them is essential for shaping the future of human–technology relations. This symposium brings together complementary perspectives that examine how humans think about, perceive, and interact with intelligent systems, focusing on social robots and large language models (LLMs). The studies contribute to a deeper understanding of the contexts and cues under which we perceive and act toward AI as social units or actors (Gambino et al., 2020; Nass et al., 1994). The first contribution, by Katharina Kühne, compared the perception of robotic and human agents through motor resonance, finding that both evoke comparable implicit motor responses irrespective of anthropomorphic detail or biomechanical feasibility. These results highlight how humans internally simulate robotic agents. The second study, by Jairo Perez-Osorio, examined how the reliability of a humanoid robot’s gaze affects human–robot collaboration, finding that consistent gaze improved attentional alignment, task efficiency, and coordination, while unreliable gaze disrupted performance. The findings highlight the critical role of social cues in supporting adaptive joint action with artificial agents. Two further contributions focus on communication with chatbots. Anita Körner compared performance in a classic referential communication task, run over four rounds, between a basic version of a conversational agent (ChatGPT) and a version prompted to use grounding strategies. She found that time per round decreased, and more so for the group that interacted with the conversational agent prompted with grounding strategies, indicating more common ground. Lea Petrasch investigated whether humans apply linguistic perspective taking when communicating with chatbots (LLMs).
Adapting Keysar’s (1994) paradigm on the illusory transparency of intention, she found an egocentric bias in judgements of the chatbots’ understanding. To round things off, Marcel Binz will discuss foundational unified models of human cognition: models that do not merely predict, simulate, and explain behavior in a single domain but instead offer a unified account of the mind. Together, these contributions advance our understanding of how humans make sense of artificial communicators and how cognition and perception of such agents can best be studied in a digital social world.
Submission 700
Finding Common Ground with a Chatbot
SymposiumTalk-03
Presented by: Anita Körner
Anita Körner
University of Kassel, Germany
During conversations, people establish and rely on common ground. For example, they converge on modes of turn taking and on referential terms. The establishment of common ground leads to faster and less error-prone communication. The present experiment examines whether grounding mechanisms from human communication generalize to communication with conversational agents (here, a custom version of ChatGPT). Participants were assigned to interact with either a basic version of a conversational agent or a version that was prompted to use grounding strategies. We used a classic referential communication task and assessed performance. In four rounds, participants were asked to determine the correct order of complex shapes by interacting with the conversational agent, which knew the correct order. As is typical in human–human dyads, time per round decreased when participants performed the referential communication task with a conversational agent. Moreover, the decrease was more pronounced for the group interacting with the conversational agent prompted with grounding strategies (vs. the basic conversational agent), indicating more common ground. We conclude that grounding with conversational agents and grounding with humans rely on overlapping mechanisms, so conversational agents could be improved by incorporating human grounding principles.