Submission 700
Finding Common Ground with a Chatbot
SymposiumTalk-03
Presented by: Anita Körner
During conversations, people establish and rely on common ground. For example, they converge on modes of turn-taking and on referential terms. Establishing common ground makes communication faster and less error-prone. The present experiment examines whether grounding mechanisms from human communication generalize to communication with conversational agents (here, a custom version of ChatGPT). Participants were assigned to interact either with a basic version of a conversational agent or with a version that was prompted to use grounding strategies. We used a classic referential communication task and assessed performance: in four rounds, participants had to determine the correct order of complex shapes by interacting with the conversational agent, which knew the correct order. As is typical in human–human dyads, time per round decreased across rounds when participants performed the referential communication task with a conversational agent. Moreover, the decrease was more pronounced for the group interacting with the conversational agent prompted with grounding strategies (vs. the basic conversational agent), indicating more common ground. We conclude that grounding with conversational agents and grounding with humans rely on overlapping mechanisms, so conversational agents could be improved by incorporating human grounding principles.