Factors influencing AI-supported decision-making in one-to-one face matching
Mon-H6-Talk 2-1904
Presented by: Eesha Kokje
The demand for AI-based decision aids has risen steeply across a number of fields. Among them, the use of facial identification systems is becoming widespread, particularly in high-stakes areas such as border control. However, the impact of design factors on human-AI teaming remains understudied. We aimed to assess factors associated with the design of the paradigm and the presentation format of predictions from AI systems, and their effect on human performance. In a series of experiments, we examined the impact of (a) implied AI accuracy, (b) mismatch frequency, and (c) explainable AI predictions on performance in a one-to-one face matching task. We found that participants’ performance improves when aided by AI compared to baseline, with the greatest improvement observed when no information about the AI’s accuracy is provided. Further, mismatch frequency does not appear to influence performance. Finally, explainability does not help mitigate the AI’s errors, but it does increase users’ certainty in their own decisions. Additionally, two findings were consistent across all experiments. First, participants often failed to dismiss inaccurate AI predictions, resulting in significantly lower accuracy than in the accurate-predictions condition. Second, at the group level, participants failed to outperform the AI, although examination of individual performance showed that some participants were able to exceed the AI’s performance. These findings contribute towards determining appropriate design formats for human-in-the-loop systems, so that the performance of the human-AI team can be maximized.
Keywords: artificial intelligence, decision making, face matching, human-AI interaction, AI advice