The Role of XAI in the Comparison of Human and Automated Agents with respect to Justice Perceptions
Mon-B17-Talk II-01
Presented by: Nadine Schlicker
Driven by advances in artificial intelligence, automated systems increasingly take over decision-making. Especially in organizations where distributions are automated, such systems might influence employees' justice perceptions regarding the distribution of outcomes, the processes, the transparency, and the interpersonal treatment in decision-making – factors that reflect the four facets of organizational justice: distributive, procedural, informational, and interpersonal justice (Colquitt, 2001), which are known to positively affect, among other things, employees' job motivation and satisfaction. We aimed to understand whether the introduction of automated agents changes justice perceptions in comparison to human decision-making, and what role agents' explanations play with respect to the justice perceptions of decision recipients.
In this talk we present preregistered and published findings of a fully randomized 2 (agent: automated vs. human) × 3 (explanation: equality explanation vs. equity explanation vs. no explanation) between-subjects study. Participants were recruited from the healthcare sector (N = 209) and responded to an online survey. Results showed that perceptions of interpersonal justice were stronger for the human agent. Participants perceived human agents as offering more voice and automated agents as being more consistent in decision-making. When no explanation was given, perceptions of informational justice were impaired only for the human decision agent. In the study's second part, participants took the perspective of a decision-maker and were given the choice to delegate decision-making to an automated system. Exploratory analyses suggest that participants who delegated an unpleasant decision to the system frequently externalized responsibility and showed different response patterns when confronted by a decision recipient who asked for a rationale for the decision.
Keywords: automated decision-making, human-computer interaction, HCI, justice, XAI, scheduling