Can visual decisions be manipulated by explainable artificial intelligence?
Mon—Casino_1.811—Poster1—2105
Presented by: Romy Müller
While explainable artificial intelligence (XAI) promises to make AI more transparent, explanations can induce overtrust. This has usually been investigated in tasks where decisions are either right or wrong. However, many real-life decisions depend on subjective criteria. For instance, does a suboptimal food item look like it is still edible? The present study investigated whether visual decision criteria are susceptible to manipulation via XAI. In a simulated quality control task, participants saw images of faulty chocolate bars. They decided whether the chocolate should be retained or discarded, and rated the severity of the fault. In half of the trials, they received XAI support in the form of heatmaps overlaid on the images. This XAI highlighted the areas that had led an AI algorithm to judge the chocolate as faulty. We hypothesised that such XAI would lead participants to discard more chocolate, but that this effect would also depend on the visual features of the fault. The results revealed no overall effects of XAI, but clear image-specific effects. When faults were large and non-salient, participants discarded more chocolate with XAI than without. When faults were smaller but more salient, the opposite was found. Importantly, these XAI-induced biases were only observed when participants relied fully on the XAI without inspecting the original image. Taken together, the results suggest that XAI can indeed manipulate decision criteria, but only under particular conditions and only if people do not cross-check its results.
Keywords: explainable artificial intelligence, overtrust, decision criteria, visual decision making