AI-augmented decision-making in one-to-one face matching: Alternatives to concurrent advice presentation to reduce overreliance
Mon—HZ_12—Talks2—1503
Presented by: Eesha Kokje
The use of AI-based decision aids has risen steadily in recent years in areas such as border control, police investigations, and identity verification. Previous studies from different fields report users' overreliance on incorrect AI advice as a major obstacle to the effective integration of AI systems, as it leads to underperformance of the human-AI team compared to the AI alone. Although explainability has been proposed as a solution, results so far have been mixed. We aimed to find potential solutions beyond explainability by exploring the impact of delayed advice presentation, which may reduce the anchoring bias caused by seeing advice before having the opportunity to engage cognitively with the decision. We conducted two experiments, in which we tested (a) on-demand advice/explanations and (b) conditional advice, i.e., advice presented only after the user makes an initial decision, and only if that decision is incongruent with the AI recommendation. We found that participants' overall performance did not differ significantly whether advice/explanations were demanded or presented by default. However, participants tended to agree with AI advice more when they demanded it, and tended to disagree with the advice more when they demanded explanations. In Experiment 2, for the first time, the human-AI team outperformed the AI alone in the conditional advice condition, and participants agreed with AI advice less often in this condition. Additionally, participants were more confident in their decisions when they rejected incorrect advice. These findings point towards potential solutions for reducing overreliance on AI and thereby improving the performance of the human-AI team.
Keywords: artificial intelligence, human-AI interaction, face identification, AI advice, overreliance, human-AI teaming