09:00 - 10:30
Parallel sessions 1
Room: HSZ - N9
Chairs:
Maren Mayer, Tobias Rebholz
When making decisions or providing judgments, individuals often seek and receive advice from others. They may ask a friend whether they should spend their holidays in Japan and how much they should budget for such a stay. Advice taking is typically investigated in a judge-advisor system (JAS): the judge provides an initial judgment, is then presented with the advisor's advice, and finally provides a final judgment. In this symposium, we combine recent advances in advice-taking research, outlining new perspectives for the field.
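The initial/final judgment procedure described above is commonly quantified with the weight of advice, the share of the distance toward the advice that the judge's final judgment moves. A minimal sketch; the judgment values are hypothetical illustrations, not data from any of the studies below:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Weight of advice: (final - initial) / (advice - initial).

    0 means the advice was ignored, 1 means it was fully adopted,
    values in between indicate partial adjustment toward the advice.
    """
    if advice == initial:
        raise ValueError("undefined when advice equals the initial judgment")
    return (final - initial) / (advice - initial)

# Hypothetical example: a judge initially budgets 2000 for a holiday,
# a friend advises 3000, and the judge's final estimate is 2400.
print(weight_of_advice(initial=2000, advice=3000, final=2400))  # 0.4
```

A weight of 0.4 would indicate that the judge moved 40% of the way from their initial judgment toward the advice.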

The first contribution demonstrates that individual differences affect whether and how individuals take advice, and that these influences have largely been overlooked. The second contribution presents a meta-analysis, based on a dual hurdle model, of how advice taking varies across study contexts and designs. The third contribution compares advice taking based on aggregated versus non-aggregated advice from multiple advisors, investigating why judges heed aggregated advice more. The last two contributions focus on advice taking from algorithms. The fourth contribution investigates algorithmic advice, demonstrating that, even without explicit communication, advice can shape competition and collaboration among individuals. Finally, the fifth contribution examines algorithmic and hybrid advice (a combination of human and algorithmic advice), finding algorithm appreciation rather than algorithm aversion.
Submission 311
Algorithm Appreciation Rather than Aversion in Judge-Advisor-Systems
SymposiumTalk-05
Presented by: Samantha Darrah
Samantha Darrah 1, Jennifer Cheung 2, Aidan Feeney 1, Thomas Schultze 1, 3
1 School of Psychology, Queen's University Belfast, United Kingdom
2 Khoury College & College of Science, Northeastern University, United States
3 Institute of Psychology, University of Bamberg, Germany
In this paper, we demonstrate that people discount advice less when they believe it originates from an algorithm rather than from a human. We set out to test the idea that algorithm aversion can be ameliorated by hybrid advice, a combination of human and algorithmic advice. However, in two preregistered and well-powered studies, we failed to replicate algorithm aversion despite creating the exact conditions under which it should occur, namely providing participants with performance feedback showing that the algorithm is good but not perfect. In both studies, we presented participants with identical advice but manipulated the advisor label according to the condition to which they were allocated. Contrary to our expectations, hybrid advice was not weighted more heavily than purely algorithmic advice, and this held across different operationalisations of hybrid advice. Together, our studies suggest that algorithm appreciation dominates in the judge-advisor system (JAS): people discount the same advice less as soon as they believe it to be at least partially generated by an algorithm.