Submission 260
The Illusory Transparency of Intention: Cognitive Biases in Human-AI Dialogue
SymposiumTalk-04
Presented by: Lea Marie Petrasch
People do not just use computers; they often treat them as social actors (CASA paradigm; Gambino et al., 2020; Nass et al., 1994). Effective communication, however, requires assessing one's interlocutor's (lack of) knowledge, a process known as perspective taking (Nickerson, 1999). Individuals frequently neglect this and assume that others grasp their intended meaning, reflecting a well-documented egocentric bias (Keysar, 1994; Lau et al., 2022). In two studies, we explored whether individuals (mistakenly) expect Large Language Models (LLMs) to infer an intended meaning even when the necessary context is unavailable to the system (illusory transparency of intention). In a 1 × 2 within-subjects design, we varied privileged information (negative vs. positive) across six chatbot–student scenarios. Our first study used edited screenshots (visual stimuli), whereas our second study focused on the voice function of chatbots (auditory stimuli). In two online surveys, participants were introduced to a scenario and then judged whether the chatbot would interpret the last statement as sarcastic. In a second round, participants reported their own perception of sarcasm and predicted the chatbot's response. Our results showed significant differences between the two conditions: participants attributed more perception of sarcasm to the chatbot and predicted more sarcasm-congruent responses when negative privileged information was present. Thus, the illusory transparency of intention extends to communication with LLMs: individuals show an egocentric bias when assessing their interlocutor's understanding even when that interlocutor is not human.