Agency and Adaptation: How Human Engagement Shapes Goal-Directed Learning in AI Collaborations
Mon—HZ_12—Talks2—1504
Presented by: Jakob Kaiser
Humans increasingly interact with AI systems to solve important tasks. However, passively following AI advice may undermine our sense of agency and engagement with the environment. Theories of sense of agency suggest that diminished active involvement could impair our ability to learn effectively from our actions. To better understand the impact of human-AI interaction on learning, this study explores how AI interactions influence the processing of goal-relevant feedback. A total of 141 participants performed an online decision-making task alongside an AI avatar, which involved choosing stimuli that resulted in either monetary wins or losses. We manipulated both self-agency (whether the human or the AI made the choice) and self-relevance (whether the human or the AI received the outcome). Participants' reaction times in confirming trial outcomes served as an indicator of processing ease. We also measured participants' likelihood of repeating or changing their choices after wins or losses during the task. We found that both self-agency and self-relevance led to significantly faster outcome processing. Crucially, self-agency compared to AI-agency resulted in more pronounced win-stay/lose-switch behavior, meaning that participants adapted their behavior more effectively based on performance feedback resulting from their own choices. This agency-dependent change in adaptivity occurred regardless of whether the human or the AI received the monetary outcome. These findings suggest that active decision-making, compared to passively following AI, fosters more efficient adaptation to feedback. Thus, our study demonstrates the importance of ensuring active human engagement in AI-supported task environments to facilitate effective learning.
Keywords: agency, self-determination, human-computer interaction, artificial intelligence, learning, feedback processing, adaptation