09:00 - 10:30
Parallel sessions 4
Room: HSZ - N2
Chair/s:
Teresa Luther
The increasing integration of artificial intelligence (AI) into everyday life – through local large language models, multimodal assistants, and personalized system designs – raises essential questions about how people perceive, trust, and interact with AI systems over time. This symposium brings together findings from an interdisciplinary longitudinal study investigating the perception and dynamics of human–AI interaction across six measurement points within one year. The project explores how individual characteristics, behavioral responses, and usage contexts jointly shape willingness to delegate to AI – both in writing and decision-making contexts – as well as perceptions of trust, credibility, closeness, and self-efficacy.
The first contribution presents findings from the initial wave of data collection, investigating predictors of individuals’ willingness to delegate writing tasks to AI. The second contribution investigates how trust and related perceptual facets – credibility, creepiness, and mind perception – shape people’s willingness to delegate decision-making to AI systems over time. The third contribution addresses perceived closeness and behavioral intention, examining how perceptions of AI agents as social actors – rather than mere tools – change over time and how these changes relate to feelings of loneliness, perceived intelligence, and behavioral intentions to use AI. The fourth contribution examines how the perception of AI as a tool or social actor and the interaction modality (text vs. voice) shape credibility perceptions. Finally, the fifth contribution examines whether perceiving AI as a tool vs. social actor also has consequences for users’ cognitive self-esteem, likewise over time.
Together, these studies provide a comprehensive perspective on the evolving relationship between humans and AI. By integrating psychological, social, and technological viewpoints, the symposium offers novel insights into how trust, roles, and self-perceptions adapt to increasingly intelligent and omnipresent AI systems – highlighting implications for user-centered and ethically informed AI design.
Submission 123
Tool or Social Actor? How AI Perception and Preferred Modality Influence Perceived Credibility of AI over Time
SymposiumTalk-04
Presented by: Stefanie Klein
Stefanie Klein 1, Sonja Utz 1, 2
1 Leibniz-Institut für Wissensmedien Tübingen, Germany
2 University of Tübingen, Germany
People increasingly interact with conversational agents based on artificial intelligence (AI) for a variety of tasks. While these systems often produce responses that are natural and engaging, the information they provide is not always correct or unbiased. This makes it essential to understand how users perceive the credibility of such systems. Drawing on data from four waves of a longitudinal survey (N = 617 at T1; N = 485 at T4), we investigate how active users’ perception of AI as a tool or social actor and their preferred interaction modality (text vs. voice) influence credibility perceptions of language-based AI systems, both cross-sectionally and over time. We found that, within each wave, social presence, enjoyment, and perceived intelligence mediated the effect of tool vs. social-actor perception, but not of modality, on perceived credibility. A random-intercept cross-lagged panel model (RI-CLPM) did not support mediation effects at the within-person level over time. Our study contributes to the field of Human-Machine Communication with novel longitudinal insights into the psychological mechanisms that shape users’ credibility judgments of AI systems.