09:00 - 10:30
Parallel sessions 4
Room: HSZ - N2
Chair/s:
Teresa Luther
The increasing integration of artificial intelligence (AI) into everyday life – through local large language models, multimodal assistants, and personalized system designs – raises essential questions about how people perceive, trust, and interact with AI systems over time. This symposium brings together findings from an interdisciplinary longitudinal study investigating the perception and dynamics of human–AI interaction across six measurement points within one year. The project explores how individual characteristics, behavioral responses, and usage contexts jointly shape willingness to delegate to AI – both in writing and decision-making contexts – as well as perceptions of trust, credibility, closeness, and self-efficacy.
The first contribution presents data from the initial wave of the study, investigating predictors of individuals’ willingness to delegate writing tasks to AI. The second contribution investigates how trust and related perceptual facets – credibility, creepiness, and mind perception – shape people’s willingness to delegate decision-making to AI systems over time. The third contribution addresses perceived closeness and behavioral intention, examining how perceptions of AI agents as social actors – rather than mere tools – change over time and how these changes relate to feelings of loneliness, perceived intelligence, and behavioral intentions to use AI. The fourth contribution examines how the perception of AI as a tool or social actor and the interaction modality (text vs. voice) shape credibility perceptions. Finally, the fifth contribution examines whether perceiving AI as a tool vs. a social actor also affects users’ cognitive self-esteem over time.
Together, these studies provide a comprehensive perspective on the evolving relationship between humans and AI. By integrating psychological, social, and technological viewpoints, the symposium offers novel insights into how trust, roles, and self-perceptions adapt to increasingly intelligent and omnipresent AI systems – highlighting implications for user-centered and ethically informed AI design. 
Submission 340
From Tools to Companions? How Does Tool-Actor Perception Shape Relational and Behavioral Dynamics over Time?
SymposiumTalk-03
Presented by: Büsra Sarigül
Teresa Luther, Büsra Sarigül
Leibniz-Institut für Wissensmedien (IWM), Tübingen, Germany
Conversational agents are becoming increasingly advanced and humanlike. As their abilities improve, people may begin to perceive them not only as tools but also as companions or social partners (Cheng et al., 2025). Individuals often form and strengthen relationships with AI systems (Skjuve et al., 2023). Yet little is known about how these perceptions change over time. This six-wave longitudinal study examined whether users’ perceptions of AI agents (tool vs. social actor) shift over time and how these shifts relate to interpersonal closeness, loneliness, and behavioral intention to use AI tools (N = 1007). We predicted that perceptions would increasingly favor “social actor” and that this shift would be linked to higher perceived intelligence, greater closeness, and stronger intentions to use AI. Contrary to expectations, people increasingly viewed the agent as a tool rather than a social actor over time, with “social actor” responses decreasing across waves. Loneliness at T1 did not predict role trajectories. Cross-sectionally, those viewing AI as a social actor rated it as more intelligent than those viewing it as a tool, but this association did not strengthen over time. Perceived role did not influence interpersonal closeness across waves. Behavioral intention to use AI started high but declined over time (with a small quadratic uptick), independent of role perceptions. These findings suggest that even as AI systems become more anthropomorphic, users continue to approach them primarily as functional tools rather than social companions.