How does correct and incorrect explainable artificial intelligence affect visual search?
Mon-P14-Poster I-204
Presented by: Romy Müller
Artificial intelligence (AI) often acts like a black box. To achieve more transparency, explainable artificial intelligence (XAI) can reveal which inputs the AI relied on to make its decision. For instance, in visual quality control, XAI can highlight which image areas were abnormal. While XAI can foster trust, some studies suggest that people comply with false explanations. We investigated how the effects of XAI on human performance depend on whether the AI is correct and, if not, what kind of error it makes. Simulating quality control in a chocolate factory, participants had to decide whether moulds with chocolate bars contained faulty products. Before each image, they were informed whether the AI had classified it as faulty or intact. In half of the experiment, this decision was justified by XAI, which presented visual highlights marking the fault area. Besides correct AI decisions, there were occasional misses, false alarms, and misplaced highlights, in which an intact chocolate bar was highlighted while the actual fault was located elsewhere. We measured reaction times, errors, and eye movements to assess participants’ visual search process. Overall, XAI led to faster performance. However, with misplaced highlights, participants committed more errors with XAI than without. In this situation, they often checked the XAI highlight, determined that the highlighted area was intact, but then failed to search the rest of the image. These results suggest that while people do not uncritically trust XAI, they sometimes merely check its specific suggestion without thoroughly considering alternatives.
Keywords: explainable artificial intelligence, fault detection, visual search, eye movements