13:30 - 15:00
Mon-B17-Talk II-
Room: B17
Chairs:
Martin Baumann, Stefan Brandenburg
The increasing integration of technology into people's lives highlights the importance of examining its impact on experience and behavior. Experimental approaches help to identify the psychological processes underlying this impact. This symposium aims to highlight the value of the experimental approach in the applied setting of Engineering Psychology and Human Factors by presenting recent research projects and results from various application fields. The first talk, by Nadine Schlicker and Markus Langer, presents findings of a study that compared justice perceptions of decision recipients between human and automated agents and investigated how these perceptions are affected by explanations. The second talk, by Veronica Hoth, Maria Ivanova, and Stefan Brandenburg, examines the impact of three different design patterns of a cookie banner on participants' ratings of user experience and trust. The third talk, by Markus Gödker, Tim Schrills, and Thomas Franke, presents an electric vehicle driving simulator experiment that investigated drivers' mental representation of energy consumption, its development over time, and its link to eco-driving. The fourth talk, by Luisa Heinrich and Martin Baumann, examines the effects of different interaction strategies on take-over behavior in automated vehicles. The fifth talk, by Elisabeth Wögerbauer, addresses the effect of dissociating viewpoints through the use of camera-monitor systems on time-to-contact estimation; results of a laboratory experiment in which the horizontal position of the camera was varied will be reported. The sixth talk, by Matthias Arend and Verena Nitsch, investigates situation awareness during telemanipulation: in the presented experiment, they study the situation models of human operators controlling a complex robotic system with various end-effectors at a distance.
The Role of XAI in the Comparison of Human and Automated Agents with respect to Justice Perceptions
Mon-B17-Talk II-01
Presented by: Nadine Schlicker
Nadine Schlicker 1, 2, Markus Langer 3
1 Institute for AI in Medicine, Philipps-University of Marburg, 2 University Hospital of Gießen and Marburg, 3 Department of Psychology, Saarland University
Driven by advances in artificial intelligence, decision-making is increasingly automated. Especially in organizations where distribution decisions are automated, such systems might influence employees' justice perceptions regarding the distribution of outcomes, the decision processes, their transparency, and the interpersonal treatment of decision recipients – factors that reflect the four facets of organizational justice: distributive, procedural, informational, and interpersonal justice (Colquitt, 2001). These facets are known to positively affect, among other things, employees' job motivation and satisfaction. We aimed to understand whether introducing automated agents changes justice perceptions in comparison to human decision-making, and what role agents' explanations play with respect to the justice perceptions of decision recipients.
In this talk we present preregistered and published findings of a fully randomized 2 (agent: automated vs. human) × 3 (explanation: equality explanation vs. equity explanation vs. no explanation) between-subjects study. Participants were recruited from the healthcare sector (N = 209) and responded to an online survey. Results showed that perceptions of interpersonal justice were stronger for the human agent. Participants perceived human agents as offering more voice and automated agents as being more consistent in decision-making. When no explanation was given, perceptions of informational justice were impaired only for the human decision agent. In the study's second part, participants took the perspective of a decision-maker and were given the choice to delegate decision-making to an automated system. Exploratory analyses suggest that participants who delegated an unpleasant decision to the system frequently externalized responsibility and showed different response patterns when confronted by a decision recipient asking for a rationale for the decision.
Keywords: automated decision-making, human-computer interaction, HCI, justice, XAI, scheduling