When facial expressions and actions do not fit: Conflict adaptation in social interactions
Mon-H8-Talk 2-2003
Presented by: Leon Kroczek
Face-to-face social interactions are an important part of our everyday lives, yet coordinating with another person can be challenging because it requires efficient processing of social cues. Previous studies have found that observers use an interactive partner’s facial emotional expressions to predict upcoming actions and prepare responses. However, facial expressions may not always be reliable sources of information. It remains unclear whether people can adapt to the pattern by which an interactive partner pairs facial emotional expressions with actions in social interactions. We therefore conducted a virtual reality study (N = 48) in which participants interacted with two virtual agents. For each agent, we manipulated the proportion of congruency between facial emotional expressions and subsequent socially relevant actions (e.g., a happy facial expression followed by a greeting [fist bump: congruent] or a punch [incongruent]). Participants interacted with a mostly congruent agent (MC agent, 75% congruent) and a mostly incongruent agent (MI agent, 75% incongruent) while response times and gaze behavior were measured. We found slower responses to incongruent compared to congruent actions only when interacting with the MC agent, but not with the MI agent. Furthermore, gaze data revealed longer fixations on the face region for behavior that was unexpected, rather than expected, given the agent’s congruency proportion. These data show that people are sensitive to an individual’s pattern of facial expressions and actions and can adapt their responses accordingly. This represents a potential mechanism for dealing with variability in the use of social cues across social interactions.
Keywords: Social Interaction, Emotion, Action, Conflict Adaptation, Congruency, Response Times, Eye Tracking