11:20 - 13:00
P2-S49
Room: 1A.09
Chair/s:
Adam Reiff
Discussant/s:
Andreu Casas
Tackling harmful online comments with visible moderator presence
P2-S49-3
Presented by: Laura Bronner
Laura Bronner, Nicolai Berk, Francisco Tomás-Valiente, Dominik Hangartner
ETH Zürich
Online platforms often respond to harmful content by deleting or downranking it, but these moderation actions are generally invisible to users. As a result, online discussions can feel like spaces where no one enforces conversational norms, which can lead users to post toxic messages or fail to engage with others' arguments. Can the visible engagement of clearly identifiable moderators in comment sections, which we call 'active moderation', improve online discourse? Together with the Swiss online newspaper 20Minuten, we run a randomized field experiment testing the efficacy of such visible moderation. In our design, 20Minuten moderators identify articles they deem at risk of attracting harmful comments and randomize them into two groups: a control group, which receives the usual invisible moderation (harmful comments are deleted before publication), and a treatment group, in which moderators additionally engage visibly in the comments section, responding to both constructive and potentially harmful comments. We find that active moderation reduces both the number and the share of toxic comments submitted, and it does so without reducing engagement. We show that this reduction in toxicity is driven entirely by behavioral change among users, rather than by the selection of less toxic users into actively moderated articles.
Keywords: Content moderation, harmful speech, media