Emotional and social features shape the representation of dynamic facial expressions
Mon—Casino_1.811—Poster1—2207
Presented by: Hilal Nizamoglu
Dynamic facial expressions play a crucial role in daily communication by revealing emotions, social cues, and mental states. While the dimensions of object and action representations are increasingly well understood, how humans represent dynamic facial expressions remains unclear. Here, we investigated which dimensions shape the human representational space of dynamic facial expressions using emotional, social, and motion-based features. Participants (N=19) performed a multi-arrangement task on 48 facial expression videos (4 actors, 12 expressions: 6 emotional, 6 conversational). In addition, ratings on emotional dimensions (valence, arousal, affectiveness), social dimensions (communicativeness, potency), and motion features (e.g., head and facial part movements) were collected from a larger sample (N=37). To determine which feature dimensions underlie the representational space, we applied Representational Similarity Analysis (RSA) to the behavioral similarity judgments derived from the task. Additionally, we constructed model representational dissimilarity matrices (RDMs) to assess the influence of expression and identity on human perceptual judgments. Results revealed substantial variability in participants' similarity-judgment strategies, with some participants prioritizing expression and others focusing on actor identity. Across participants, valence, potency, and communicativeness emerged as the strongest predictors of representational similarity, whereas arousal, affectiveness, and motion features had limited influence. These findings highlight the multidimensional nature of dynamic facial expression representations, emphasizing the critical role of emotional and social dimensions in shaping this representational space.
Keywords: Facial expression, similarity judgment, dynamic faces
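To make the RSA approach described in the abstract concrete, the following is a minimal Python sketch of how behavioral and model RDMs can be compared. The behavioral RDM and the rating dimension are random placeholders, and the stimulus ordering, the absolute-difference construction of the rating-based model, the categorical expression/actor models, and the Spearman comparison are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 48 videos = 12 expressions x 4 actors,
# ordered expression-major (an assumed ordering, not from the abstract).
n_videos = 48
valence = rng.uniform(-1, 1, n_videos)            # placeholder rating dimension
behavioral_rdm = rng.uniform(0, 1, (n_videos, n_videos))
behavioral_rdm = (behavioral_rdm + behavioral_rdm.T) / 2   # symmetrize
np.fill_diagonal(behavioral_rdm, 0)

# Model RDM from a single rating dimension: pairwise absolute differences.
valence_rdm = np.abs(valence[:, None] - valence[None, :])

# Categorical model RDMs: same expression (or same actor) -> 0, different -> 1.
expressions = np.repeat(np.arange(12), 4)
actors = np.tile(np.arange(4), 12)
expr_rdm = (expressions[:, None] != expressions[None, :]).astype(float)
actor_rdm = (actors[:, None] != actors[None, :]).astype(float)

def rsa_corr(model_rdm, data_rdm):
    """Spearman correlation between the vectorized upper triangles of two RDMs."""
    rho, _ = spearmanr(squareform(model_rdm, checks=False),
                       squareform(data_rdm, checks=False))
    return rho

for name, rdm in [("valence", valence_rdm),
                  ("expression", expr_rdm),
                  ("actor", actor_rdm)]:
    print(f"{name:12s} rho = {rsa_corr(rdm, behavioral_rdm):+.3f}")
```

With random placeholder data the correlations hover near zero; with real multi-arrangement dissimilarities and real ratings, the same comparison would indicate how strongly each candidate dimension predicts the behavioral representational space.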