An agent's reputation and skill affect self-reported trust differently from behavioral trust.
Mon-H6-Talk 1-1106
Presented by: Fritz Becker
Through continuing automation, the systems people work with are becoming increasingly agentic. In these human-agent teams, the system is no longer a tool to be used but a partner with agency of its own. This changing role raises the question of how much trust human users should place in such agentic systems. Since both over- and under-trust carry costs, the trust placed in an agentic partner must be finely calibrated. To calibrate their trust accurately, users must draw on all available information, such as the agent's reputation and its task performance.
In our experiment, participants interacted with an agent in a cooperative puzzle game similar to Tetris. On each trial, players selected one of four pieces and placed it, without time pressure. We manipulated the agent's reputation and skill and measured participants' trust in the agent using both self-report and behavioral measures. Self-reported trust was assessed immediately after participants read a text describing the agent's reputation and again after they had experienced the agent. As a behavioral measure of trust during the interaction, we used the trials in which participants delegated the choice of a piece to the agent.
Results show that trust initially reflects the agent's reputation, but this effect is overridden at the very start of the interaction by the agent's observed skill. During and after the interaction, no effect of reputation remains detectable. The team's cooperative task performance depended on both the agent's and the participant's abilities. Additionally, the participant's own performance moderates the effect of the agent's skill on post-interaction trust.
Keywords: trust, reputation, performance, human-agent interaction, game, multi-method