11:20 - 13:00
P2-S49
Room: 1A.09
Chair/s:
Adam Reiff
Discussant/s:
Andreu Casas
Does preventive content moderation improve online discourse?
P2-S49-4
Presented by: Laura Bronner
Laura Bronner, Francisco Tomás-Valiente, Nicolai Berk, Dominik Hangartner
ETH Zürich
Many online platforms use forms of pre-moderation, whereby online contributions are screened, by algorithms, humans, or both, with the aim of identifying harmful comments before they are posted. This system has disadvantages: it can be costly, depending on the cost of the human and/or algorithmic moderators used, and it can inhibit on-platform interaction by delaying or even preventing the posting of comments, including harmless ones. But does it actually reduce harmful comments and improve the quality of online discourse? And what is the trade-off between harmful comments and the costs (financial and otherwise) to the platform? In this study, we partner with the Austrian online newspaper Der Standard to run a randomized field experiment testing the efficacy of algorithmically supported pre-moderation. In our experimental design, the Community Team at Der Standard randomizes articles into two groups: a treatment group, in which comments are passed through the newspaper's in-house moderation algorithm, which automatically publishes or deletes most comments and sends the remainder to human moderators; and a control group, in which comments are not systematically pre-screened. We track how submitted and published comments differ between treated and control articles to measure the impact of pre-moderation. We hypothesize a trade-off between preventing harmful speech and encouraging engagement: we expect that pre-moderation reduces harmful speech, but also delays publication, leading to fewer comments and interactions, and requires more resources from the platform.
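
To make the design concrete, the following is a minimal Python sketch of the core inferential logic, assuming a per-article outcome such as the share of harmful comments. The function names, randomization scheme, and data layout are hypothetical illustrations, not the authors' code.

    # Minimal sketch (not the authors' code): articles are randomly assigned
    # to pre-moderation (treatment) or no pre-screening (control), and the
    # treatment effect on a per-article outcome, e.g. the share of harmful
    # comments, is estimated as a simple difference in means.
    import random
    import statistics

    def assign_articles(article_ids, treat_share=0.5, seed=42):
        """Randomly split article IDs into treatment and control groups."""
        rng = random.Random(seed)
        ids = list(article_ids)
        rng.shuffle(ids)
        cutoff = int(len(ids) * treat_share)
        return {"treatment": ids[:cutoff], "control": ids[cutoff:]}

    def difference_in_means(outcomes, assignment):
        """Estimate the average treatment effect on a per-article outcome.

        outcomes maps article_id -> outcome (e.g. harmful-comment share);
        assignment is the dict returned by assign_articles.
        """
        treated = [outcomes[a] for a in assignment["treatment"]]
        control = [outcomes[a] for a in assignment["control"]]
        return statistics.mean(treated) - statistics.mean(control)

    # Hypothetical usage with placeholder data; a negative estimate would
    # indicate that pre-moderation reduces the harmful-comment share.
    assignment = assign_articles(range(1000))
    outcomes = {a: random.random() * 0.1 for a in range(1000)}
    print(difference_in_means(outcomes, assignment))
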
Keywords: Content moderation, online discourse, media
