The impact of embodiment on trust calibration and performance in a virtual team environment
Tue-Main hall - Z1-Poster 2-5510
Presented by: Karla Krüger
System-wide trust (SWT; i.e., all teammates are trusted equally) is a common phenomenon in teams that contain automated aids, such as algorithms or artificial intelligence (AI). If trust is not calibrated according to team members’ individual abilities (i.e., component-specific trust; CST), performance may suffer due to under-trust in capable teammates and/or over-trust in incapable teammates. The aim of our study is to examine whether trust calibration and performance in human-AI teams can be improved through interventions that support individuation, i.e., the perception of AIs as individuals rather than as a homogeneous group. We hypothesize that embodying the AI in a robot form, thereby transforming the human-AI interaction into a human-robot interaction, will improve individuation and consequently enhance trust calibration and team performance.
Participants will collaborate with either three AIs (no physical representation), three humanoid robots, or three humans in a virtual reality setting. Sixty-six participants will be recruited based on a power analysis assuming a medium-to-large effect size. Effects will be assessed through subjective rating scales (e.g., single-item trust/reliability ratings) and objective measures, such as eye tracking (e.g., number of fixations) and team performance data (e.g., number of targets found). The project is unique in that it is the first to investigate the dynamics of SWT versus CST in human-robot collaboration, combining objective measures offering high internal validity with a team setting of high external validity to investigate trust calibration in human-AI/robot teams. Findings will help identify effective ways of influencing trust calibration and contribute to developing and validating objective measures of trust relevant to human-AI/robot teams.
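For illustration, a minimal sketch of an a-priori power analysis that is consistent with the planned sample of 66 participants is given below. It assumes a one-way between-subjects design with three groups (AI, robot, human teammates), alpha = .05, power = .80, and Cohen's f ≈ 0.40; these exact parameters are not reported in the abstract and are assumptions.

```python
# Hypothetical a-priori power analysis (assumed parameters, not reported in the abstract):
# one-way between-subjects ANOVA, 3 groups, alpha = .05, power = .80, Cohen's f ~ 0.40.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(
    effect_size=0.40,  # Cohen's f, assumed "medium-to-large" effect
    alpha=0.05,        # conventional significance level (assumed)
    power=0.80,        # conventional target power (assumed)
    k_groups=3,        # AI vs. humanoid robot vs. human teammates
)
print(f"Required total sample size: {n_total:.1f}")
# Under these assumptions the required total N is in the mid-60s,
# i.e., roughly 22 participants per group, in line with the planned N = 66.
```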
Keywords: artificial intelligence, human-robot team, trust calibration, trust measurement, eye tracking, embodiment, visual search