09:00 - 10:30
Parallel sessions 4
Room: HSZ - N2
Chair/s:
Teresa Luther
The increasing integration of artificial intelligence (AI) into everyday life – through local large language models, multimodal assistants, and personalized system designs – raises essential questions about how people perceive, trust, and interact with AI systems over time. This symposium brings together findings from an interdisciplinary longitudinal study investigating the perception and dynamics of human–AI interaction across six measurement points within one year. The project explores how individual characteristics, behavioral responses, and usage contexts jointly shape willingness to delegate to AI – both in writing and decision-making contexts – as well as perceptions of trust, credibility, closeness, and self-efficacy.
The first contribution presents data from the initial measurement wave, investigating predictors of individuals’ willingness to delegate writing tasks to AI. The second contribution investigates how trust and related perceptual facets – credibility, creepiness, and mind perception – shape people’s willingness to delegate decision-making to AI systems over time. The third contribution addresses perceived closeness and behavioral intention, examining how perceptions of AI agents as social actors – rather than mere tools – change over time and how these changes relate to feelings of loneliness, perceived intelligence, and behavioral intentions to use AI. The fourth contribution examines how the perception of AI as a tool or social actor and the interaction modality (text vs. voice) shape credibility perceptions. Finally, the fifth contribution examines whether the perception of AI as a tool vs. a social actor also has consequences for users’ cognitive self-esteem, again over time.
Together, these studies provide a comprehensive perspective on the evolving relationship between humans and AI. By integrating psychological, social, and technological viewpoints, the symposium offers novel insights into how trust, roles, and self-perceptions adapt to increasingly intelligent and omnipresent AI systems – highlighting implications for user-centered and ethically informed AI design. 
Submission 176
Communicative AI as Extended Mind: Relationships Between AI Perception and Cognitive Self-Esteem
SymposiumTalk-05
Presented by: Sonja Utz
Sonja Utz
Leibniz-Institut für Wissensmedien, Germany
University of Tübingen, Germany
Prior research has shown that people mistakenly regard the Internet’s knowledge as their own and report higher cognitive self-esteem after using Google to answer questions than without using Google. Experimental work showed, however, that this effect does not occur when people interact with a chatbot. Building on this, we examine whether this effect is only short-lived or also occurs in the field among regular users of communicative AIs (ComAIs) like ChatGPT or Alexa. More importantly, we examine the impact of people’s perception of ComAI as a tool vs. a social actor and their usage modality (text, voice). Using data from a six-wave longitudinal study among active ComAI users, we tested the following hypotheses:

H1: People who perceive communicative AI as a social actor show lower cognitive self-esteem than people who perceive AI as a tool.

H2: The effect postulated in H1 is stronger the more people interact with the AI via voice.

H3: People who perceive AI as a social actor (vs. a tool) at time t show lower cognitive self-esteem at time t+1.

Neither a significant effect of perception (tool vs. social actor) on cognitive self-esteem nor an interaction with modality emerged. In some waves, a modality effect occurred: people who used ComAI via both voice and text reported the highest cognitive self-esteem.