09:00 - 10:30
Parallel sessions 4
Room: HSZ - N2
Chair/s:
Teresa Luther
The increasing integration of artificial intelligence (AI) into everyday life – through local large language models, multimodal assistants, and personalized system designs – raises essential questions about how people perceive, trust, and interact with AI systems over time. This symposium brings together findings from an interdisciplinary longitudinal study investigating the perception and dynamics of human–AI interaction across six measurement points within one year. The project explores how individual characteristics, behavioral responses, and usage contexts jointly shape willingness to delegate to AI – both in writing and decision-making contexts – as well as perceptions of trust, credibility, closeness, and self-efficacy.
The first contribution presents data from the initial measurement wave, investigating predictors of individuals’ willingness to delegate writing tasks to AI. The second contribution investigates how trust and related perceptual facets—credibility, creepiness, and mind perception—shape people’s willingness to delegate decision-making to AI systems over time. The third contribution addresses perceived closeness and behavioral intention, examining how perceptions of AI agents as social actors—rather than mere tools—change over time and how these changes relate to feelings of loneliness, perceived intelligence, and behavioral intentions to use AI. The fourth contribution examines how the perception of AI as a tool or social actor and the interaction modality (text vs. voice) shape credibility perceptions. Finally, the fifth contribution examines whether perceiving AI as a tool versus a social actor also affects users’ cognitive self-esteem, again over time.
Together, these studies provide a comprehensive perspective on the evolving relationship between humans and AI. By integrating psychological, social, and technological viewpoints, the symposium offers novel insights into how trust, roles, and self-perceptions adapt to increasingly intelligent and omnipresent AI systems – highlighting implications for user-centered and ethically informed AI design. 
Submission 584
From Creepiness to Delegation: Longitudinal Trust Dynamics and Adoption of AI
SymposiumTalk-02
Presented by: Nico Ehrhardt
Nico Ehrhardt
Leibniz-Institut für Wissensmedien, Germany
People increasingly hand important decisions to AI, yet we know little about how day-to-day shifts in perception translate into real delegation and adoption. I present preregistered analyses from a six-wave U.S. panel (N=1,007 at Wave 1; N=434 at Wave 6; ~2-month lags) that followed participants’ “favorite LLM-based AI” from late 2024 to mid-2025. At each wave, we measured perceived usefulness and creepiness of that AI, trust in it, willingness to let AI systems make decisions, and behavioral intention to delegate concrete tasks. Wave-6 outcomes capture adoption (number of AI-completed tasks, intention to increase delegation) and perceived side effects (impact on one’s skills, adaptation, anxiety, ethical concern).

Using person-mean–centered mixed models, we test (a) whether within-person changes in usefulness and creepiness forecast next-wave trust, (b) whether trust shifts predict subsequent delegation attitudes and intentions, and (c) whether earlier delegation intentions translate into later adoption. We probe negativity asymmetry (“do losses hurt more than gains?”), nonlinear thresholds in the trust–delegation link, and indirect chains such as creepiness → trust → delegation and trust → delegation → adoption.
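For readers unfamiliar with person-mean centering, the idea is to split each time-varying predictor into a stable between-person component (a participant’s average across waves) and a within-person component (the wave-specific deviation from that average), so that the mixed model can estimate within-person change effects separately from trait-like differences. A minimal sketch in pandas, using an illustrative toy panel (the variable names and values are hypothetical, not the study’s data):

```python
import pandas as pd

# Toy long-format panel: one row per participant x wave.
# "trust" stands in for any time-varying predictor.
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2],
    "wave":  [1, 2, 3, 1, 2, 3],
    "trust": [3.0, 4.0, 5.0, 2.0, 2.0, 5.0],
})

# Between-person component: each participant's mean across waves.
df["trust_pm"] = df.groupby("id")["trust"].transform("mean")

# Within-person component: wave-specific deviation from the person mean.
# This is the term whose coefficient captures within-person change.
df["trust_wc"] = df["trust"] - df["trust_pm"]

print(df)
```

Both components would then enter the mixed model as separate fixed effects (with a random intercept per participant), which is what allows the analyses above to ask whether *changes* in usefulness or creepiness forecast next-wave trust.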

The talk highlights when and how small changes in people’s everyday experiences with AI snowball into broader adoption, and when they instead trigger withdrawal and concern.