Does algorithm aversion occur in advice taking?
Tue—HZ_8—Talks4—3604
Presented by: Thomas Schultze
The growing abilities of algorithms and their increased accessibility have led researchers to study whether and to what extent decision-makers rely on algorithmic advice. One phenomenon that has garnered substantial interest is algorithm aversion. Algorithm aversion denotes a dysfunctional tendency to prefer human over algorithmic advice even when the latter is more accurate. Importantly, algorithm aversion is contingent on the decision-maker being able to see the algorithm perform prior to the decision task. Observing that an algorithm makes occasional errors is assumed to violate decision-makers' exaggerated expectations about algorithmic accuracy, leading them to lose trust in the algorithm. However, in the most common paradigm used to study the informational social influence of advice (the judge-advisor system), evidence for algorithm aversion is scarce. Most judge-advisor studies either report the opposite phenomenon of algorithm appreciation (identical advice is preferred when it allegedly stems from an algorithm) or find no differences between algorithmic and human advice. In the few judge-advisor studies that do report algorithm aversion, methodological problems preclude firm conclusions about its occurrence. One possible reason for the lack of evidence of algorithm aversion in the judge-advisor system is that few studies provide decision-makers with the information necessary to infer that the algorithm, while good, is not perfect. In a preregistered study creating optimal conditions for algorithm aversion to emerge, we instead observed algorithm appreciation, and adding a human-in-the-loop did not increase reliance on algorithmic advice. These findings raise questions about the generalisability of the algorithm aversion effect.
Keywords: advice taking, algorithm aversion, algorithm appreciation, judgement and decision-making, social influence