Emotion Recognition – A Battle of Accuracy between Physiological Measurements and Computer Vision
Mon-B22-Talk I-04
Presented by: Sebastian Scholz
Computer vision algorithms are now commercially available that claim to accurately predict expressed emotions from video clips of facial expressions. However, the validity and accuracy of these algorithms are still debated. We investigated whether computer vision algorithms or laboratory-based psychophysiological measurements are more accurate at recognizing happy, neutral, and disgusted expressions in response to visual emotional stimuli. Participants (N = 30) viewed emotional pictures from the IAPS and other databases, gave valence and arousal ratings, and selected the basic emotion that best fit their reaction to the presented stimulus. We recorded EDA, heart rate, facial EMG, and EEG data and videotaped participants' facial expressions in response to the stimuli. To analyze the video recordings, we applied two open-access algorithms, OpenFace and rPPG, which extract facial action units and heart rate data from video files. Supervised machine learning algorithms were then used to predict valence and arousal ratings as well as the self-selected universal emotions. We compared recognition performance between the computer vision algorithms and the psychophysiological data. In line with the limited prior research, we found that the computer vision algorithms accurately recognized happy reactions but could not distinguish between disgusted and neutral reactions. In contrast, our psychophysiological data showed higher accuracy across all emotional reactions and were more reliable in separating the three stimulus categories. This pattern of results demonstrates the limits of applicability of currently available computer vision algorithms in emotion recognition.
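As a minimal sketch of the supervised classification step, one could predict the self-selected emotion category from OpenFace action-unit intensities with scikit-learn. The file names, the row-wise alignment of features and labels, and the choice of a random forest classifier are illustrative assumptions; the abstract does not specify the toolchain used in the study.

```python
# Hedged sketch: classify self-reported emotions (happy / neutral / disgust)
# from facial action-unit (AU) intensities extracted by OpenFace.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# OpenFace's FeatureExtraction writes one CSV per video; AU intensity
# columns are conventionally named AU01_r, AU06_r, ..., AU45_r.
# "openface_output.csv" and "trial_labels.csv" are hypothetical file names.
features = pd.read_csv("openface_output.csv")
au_cols = [c for c in features.columns if c.strip().endswith("_r")]
X = features[au_cols].to_numpy()

# One self-selected emotion label per trial, assumed to be aligned
# row-wise with the extracted feature rows.
y = pd.read_csv("trial_labels.csv")["emotion"].to_numpy()

# Standardize AU intensities, then fit a random forest; report
# 5-fold cross-validated accuracy as the recognition-performance metric.
clf = make_pipeline(
    StandardScaler(),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

The same cross-validated accuracy could be computed on the psychophysiological features (EDA, heart rate, EMG, EEG) to reproduce the kind of between-modality comparison the abstract describes.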
Keywords: Emotion recognition, Psychophysiology, EEG, Automatic facial coding, IAPS