08:30 - 10:00
Tue—HZ_10—Talks4—38
Room: HZ_10
Chair/s: Dirk U Wulff
Human vs. LLM-Generated Possibilities Shape Moral Judgments
Tue—HZ_10—Talks4—3805
Presented by: Lara Kirfel
Lara Kirfel 1*, Neele Engelmann 2, Anne-Marie Nussberger 3
1 Center for Humans and Machines, Max Planck Institute for Human Development, 2 Center for Humans and Machines, Max Planck Institute for Human Development, 3 Center for Humans and Machines, Max Planck Institute for Human Development
Moral judgment often depends on evaluating the range of possible actions available to an agent in a given situation. Determining blame requires considering what an agent could have done differently. While people often rely on a limited set of default options when thinking about what’s possible in a given context (Morris et al., 2021; Zhang et al., 2021; Bear, Bensinger, Jara-Ettinger, Knobe, & Cushman, 2020), Large Language Models (LLMs) generate broader and qualitatively distinct sets of alternatives by leveraging extensive datasets (Poulsen & Dedeo, 2023). In this study, we demonstrate that people attribute more blame to an agent when they first consider action options generated by an LLM, compared to when they generate the options themselves. This shift in blame attribution is driven by the qualitative difference in the consideration sets produced by humans versus LLMs. We situate these findings within theories of human modal cognition, highlighting how differences in the representation of possibilities influence moral evaluation. Finally, we discuss the implications of these results for Human-LLM interaction.
Keywords: moral judgment, modal reasoning, reasoning, human-LLM interaction