Modality-dependent age differences in emotion categorization of dynamic, bimodal expressions
Tue—Casino_1.801—Poster2—5101
Presented by: Nikol Tsenkova
Research on emotion perception has shown that while children are faster and more accurate with positive expressions (positivity bias), young adults show negativity bias. However, these findings stem from static unimodal stimuli, overlooking emotions' dynamic and diverse nature.
This emotion categorization experiment aimed to determine whether these biases would appear with dynamic expressions and whether a bimodal condition would be more beneficial than a unimodal one. We created a dataset of avatars expressing anger, sadness, happiness, and happy-surprise in two conditions: visual (facial expression) and visual-verbal (facial and verbal expression). We tested 75 participants across three age groups (children, young adults, and older adults), measuring accuracy and reaction times.
We did not find biases in the young adult group.
Children were more accurate with the positive visual-verbal condition than with the negative visual condition (p=.012), suggesting a positivity bias. They were also faster with positive visual expressions than with negative ones, and with negative visual-verbal expressions than with positive ones.
Older adults were more accurate with negative visual-verbal expressions than with positive ones (p<.001), suggesting a negativity bias. They were also more accurate in the positive visual condition than in the positive visual-verbal condition, and with negative visual-verbal expressions than with negative visual ones. Their reaction times supported their accuracy results.
While children retained the positivity bias, older adults showed a negativity bias along with more nuanced modality-dependent preferences. These findings suggest that additional information is only relevant for specific emotions and specific age groups. Overall, the addition of dynamic presentation and verbal components produced a greater differentiation in the previously reported biases.
Keywords: emotion perception, lifespan, developmental differences, facial expressions