09:00 - 10:30
Parallel sessions 4
Dynamics of Human–AI Interaction: Longitudinal Insights Into Delegation, Trust, Perceived Closeness, Credibility, and Cognitive Self-Esteem
Room: HSZ - N2
Chair/s:
Teresa Luther
The increasing integration of artificial intelligence (AI) into everyday life – through local large language models, multimodal assistants, and personalized system designs – raises essential questions about how people perceive, trust, and interact with AI systems over time. This symposium brings together findings from an interdisciplinary longitudinal study investigating the perception and dynamics of human–AI interaction across six measurement points within one year. The project explores how individual characteristics, behavioral responses, and usage contexts jointly shape willingness to delegate to AI – both in writing and decision-making contexts – as well as perceptions of trust, credibility, closeness, and self-efficacy.
The first contribution presents data from the initial measurement wave, investigating predictors of individuals' willingness to delegate writing tasks to AI. The second contribution examines how trust and related perceptual facets (credibility, creepiness, and mind perception) shape people's willingness to delegate decision-making to AI systems over time. The third contribution addresses perceived closeness and behavioral intention, examining how perceptions of AI agents as social actors, rather than mere tools, change over time and how these changes relate to feelings of loneliness, perceived intelligence, and behavioral intentions to use AI. The fourth contribution examines how the perception of AI as a tool or a social actor and the interaction modality (text vs. voice) shape credibility perceptions. Finally, the fifth contribution examines whether perceiving AI as a tool vs. a social actor also has consequences for users' cognitive self-esteem, again over time.
Together, these studies provide a comprehensive perspective on the evolving relationship between humans and AI. By integrating psychological, social, and technological viewpoints, the symposium offers novel insights into how trust, roles, and self-perceptions adapt to increasingly intelligent and omnipresent AI systems – highlighting implications for user-centered and ethically informed AI design.
SymposiumTalk-01
Teresa Luther, Leibniz-Institut für Wissensmedien Tübingen, Germany
SymposiumTalk-02
Nico Ehrhardt, Leibniz-Institut für Wissensmedien, Germany
SymposiumTalk-03
Büsra Sarigül, Leibniz-Institut für Wissensmedien (IWM), Tübingen, Germany
SymposiumTalk-04
Stefanie Klein, Leibniz-Institut für Wissensmedien Tübingen, Germany
SymposiumTalk-05
Sonja Utz, Leibniz-Institut für Wissensmedien, Germany | University of Tübingen, Germany