16:30 - 18:00
Mon-H8-Talk 3
Mon-Talk 3
Room: H8
Chair/s:
Xenia Schmalz
How to (Better) Norm Word Ratings
Mon-H8-Talk 3-3502
Presented by: Jack Taylor
Jack Taylor
Department of Psychology, Goethe University Frankfurt; School of Psychology and Neuroscience, University of Glasgow
Dimensions like word concreteness and sentence plausibility are inherently difficult to measure empirically. One simple solution is to ask participants to rate the dimension of interest on an ordinal scale. For instance, we may ask, "How emotionally arousing is the word SKYSCRAPER, on a scale from 1 (not at all) to 7 (very)?" Such datasets typically report summary statistics for each item: the mean (M), standard deviation (SD), and number (N) of ratings. In recent years, however, M-SD patterns in these datasets have been interpreted as problematic. In particular, Pollock (2018) has argued that high SDs at scale midpoints are evidence that concepts like concreteness are in fact dichotomous, and that mid-scale averages capture disagreement rather than a meaningful continuum. Is such "midscale disagreement" really a problem? Perhaps not. Simulations suggest that the patterns identified as problematic in raw means and SDs arise because these summary statistics inappropriately treat ordinal ratings as though they were on an interval scale. Using a more appropriate modelling approach (Cumulative-Link Mixed-Effects Models), we can (1) disentangle means and SDs from such statistical artefacts, and (2) explain more variance in relevant correlates. We recommend Cumulative-Link Mixed-Effects Models as a robust, general solution for norming items' ratings.
Keywords: word, ratings, norms, concreteness, orthography, mixed-effects models, random effects
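A minimal sketch of a cumulative-link mixed-effects model of the kind referred to in the abstract may help fix ideas. It assumes a logit link, a K-point response scale, and crossed random intercepts for items and raters; the notation theta_k, u_i, and v_j is introduced here purely for illustration and is not taken from the talk. The probability that rater j assigns item i a rating at or below category k can then be written as

P(R_{ij} \le k) = \mathrm{logit}^{-1}\!\left(\theta_k - (u_i + v_j)\right), \qquad k = 1, \dots, K - 1

Here the ordered thresholds \theta_k map a latent continuum onto the ordinal response categories, u_i locates item i on that continuum (the quantity one would report as the item's norm), and v_j absorbs rater-level response bias. Because item locations are estimated on the latent scale rather than as raw category averages, they are not bounded by the scale endpoints in the way raw Ms and SDs are, which is one way to read the abstract's claim that midscale SD inflation is an artefact of treating ordinal ratings as interval data. This is a sketch of a generic cumulative-link mixed model, not the specific model reported in the talk.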