Bottom-up and top-down mechanisms involved in the perception of robot faces
Mon-H8-Talk 2-2002
Presented by: Eva Wiese
Robot faces often differ from human faces both in their facial features (e.g., lack of a nose or eyebrows) and in the spatial relationships between these features (e.g., disproportionately large eyes). This can influence the degree to which brain areas involved in face processing (e.g., the Fusiform Face Area) process them as face-like in a bottom-up manner. At the same time, believing that a face stimulus depicts a human versus an artificial agent (e.g., a robot or a deepfake) might affect face-typical processing in a top-down manner. Processing robot faces in a less face-like manner than human faces could undermine human-robot interaction because humans may fail to perceive robots as individuals with unique capabilities – a phenomenon known as outgroup homogeneity – potentially leading to miscalibrated trust and errors in allocating task responsibilities. In two experiments, we use the face inversion task to examine whether face processing differs between human and robot stimuli: if robot faces are perceived as less face-like than human faces, the inversion effect (i.e., the difference in recognition performance between faces presented upright and inverted) should be less pronounced for robot than for human faces. Experiment 1 shows that robot faces induce smaller inversion effects than human faces in a bottom-up manner: the more human-like facial features robots possess, the larger the inversion effect. Experiment 2 shows that inversion effects for actual human faces are attenuated in a top-down manner when participants believe that the depicted stimuli are android robots rather than human agents.
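Stated compactly (the notation here is a sketch of ours, not part of the abstract, assuming recognition performance is measured as accuracy $A$), the inversion effect for a face category $c \in \{\text{human}, \text{robot}\}$ is $IE_c = A_c^{\text{upright}} - A_c^{\text{inverted}}$, and the prediction under reduced face-like processing of robots is $IE_{\text{robot}} < IE_{\text{human}}$.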
Keywords: social cognition, face perception, human-robot interaction, artificial agents, top-down modulation