What you say and how you say it: Does sentiment bias stance detection in political texts?
P7-S186-4
Presented by: Alex Hartland
Distinguishing between sentiment and stance has become a key consideration in the analysis of political texts. Where once the two were conflated, it is now clear that they should be measured separately. For example, a text may express negative sentiment while also being strongly in favour of a given policy position. However, it is not yet clear how the two concepts interact across a range of issues, or whether this interaction biases the measurement of either one. To address this question, we turn to a large and varied corpus of texts collected and hand-coded as part of the Horizon Europe-funded ActEU project. The texts are produced by a range of actors, from elected officials and journalists to interest groups and members of the public, across a range of platforms including Twitter/X, Telegram, and traditional news media. They express both sentiment and stances on the issues of migration, gender, and climate change in nine European languages. Using this corpus to train and validate a number of text classifiers, we examine the potential for sentiment to bias stance detection in a variety of contexts and languages, and identify the methods best able to overcome such biases. As the costs of large-scale automated text classification fall and these methods become increasingly feasible for researchers to apply, our paper contributes to improving their validity and to a better understanding of online political communication.
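One simple way to operationalise the bias question described above is to stratify a stance classifier's accuracy by sentiment label and look for a gap between sentiment-congruent and sentiment-incongruent texts. The sketch below illustrates this diagnostic in Python with scikit-learn; the toy data, labels, and model choice are illustrative assumptions and do not reflect the ActEU corpus or the classifiers the paper actually trains.

```python
# Hypothetical sketch: probe for sentiment bias in stance detection
# by comparing stance accuracy across sentiment strata. The toy data
# and logistic-regression model are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy corpus with independent stance and sentiment labels, so the
# two dimensions can be crossed (pro/positive, pro/negative, ...).
train_texts = [
    "This reform is wonderful and long overdue",          # pro, pos
    "What a relief that this awful bill was withdrawn",   # anti, pos
    "Sadly, critics keep attacking this vital reform",    # pro, neg
    "This policy is a disaster and must be scrapped",     # anti, neg
    "A brilliant policy that will help everyone",         # pro, pos
    "Thankfully parliament rejected the proposal",        # anti, pos
]
train_stance = ["pro", "anti", "pro", "anti", "pro", "anti"]

test_texts = [
    "An excellent proposal that deserves support",        # pro, pos
    "I am delighted the measure was finally dropped",     # anti, pos
    "It is shameful how this good policy gets vilified",  # pro, neg
    "A terrible idea that will harm everyone",            # anti, neg
]
test_stance = ["pro", "anti", "pro", "anti"]
test_sentiment = ["pos", "pos", "neg", "neg"]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), train_stance)
pred = clf.predict(vec.transform(test_texts))

# Stratify stance accuracy by sentiment: a large gap between strata
# suggests the classifier is leaning on sentiment cues rather than
# on the expressed stance itself.
for s in ("pos", "neg"):
    idx = [i for i, v in enumerate(test_sentiment) if v == s]
    acc = accuracy_score([test_stance[i] for i in idx],
                         [pred[i] for i in idx])
    print(f"sentiment={s}: stance accuracy = {acc:.2f} (n={len(idx)})")
```

In a real evaluation, the same stratified comparison could be run per issue and per language, which is one way the interaction the abstract describes might be quantified.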
Keywords: sentiment analysis, stance detection, text analysis, social media, LLMs