In the age of social media, the questions of what one is allowed to say and how hate speech should be regulated are ever more contested. We hypothesize that content- and context-specific factors influence citizens’ perceptions of the offensiveness of online content and also shape their preferences for the action that should be taken. This has implications for the legitimacy of hate speech regulation.
We present a pre-registered study analyzing citizens’ preferences for online hate speech regulation. The study is embedded in nationally representative online panels in the US and Germany (about 1,300 respondents, opt-in panels operated by YouGov). We construct vignettes in the form of social media posts that vary along key dimensions of hate speech regulation, such as sender/target characteristics (e.g., gender and ethnicity), message content, and the target’s reaction (e.g., counter-aggression or blocking/reporting). Respondents are asked to judge the posts with regard to their offensiveness and the consequences the sender should face. Furthermore, the vignette task is embedded in a framing experiment that motivates it by (a) looming government regulation protecting potential victims of hate speech, (b) civil rights groups advocating against online censorship, or (c) a neutral frame.
While governments around the world are moving to regulate hate speech, little is known about what is deemed acceptable or unacceptable speech online across different parts of the population and societal contexts. We provide initial evidence that could inform future debates on hate speech regulation.