Different framings of climate change abound, and there is no definitive way to present the "facts" about this global issue. Rather, climate change evokes associations with diverse themes such as geophysical sciences, nature, jobs, taxes and ethics. Studies of public perceptions and attitudes in relation to climate change touch on these issues and more, but rarely permit participants to draw attention to the aspects they themselves find most important.
Recent advances in quantitative text analysis have made open-ended survey questions a more useful tool for overcoming this problem. This paper analyses open-ended responses to two interrelated questions: “What comes to mind when you hear the words ‘climate change’?” and “What do you think will be the most important effect of climate change on [your country]?” We collect data on these questions in four countries: Germany, France, the UK and Norway. We use structural topic modelling of word frequencies from verbatim textual responses to induce topics such as melting ice, future generations, changes to the seasonal cycle and carbon taxes. Models are run both on the individual languages and on statements from all four countries, using machine-assisted translation of the most frequent terms.
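The pipeline described above begins by reducing each verbatim response to word frequencies, the document-term matrix a topic model consumes. A minimal Python sketch of that preprocessing step is shown below; the responses are invented for illustration, and structural topic models themselves are typically fit with dedicated software such as R's stm package rather than code like this.

```python
# Sketch: building a document-term (word-frequency) matrix from
# verbatim open-ended responses. Responses here are hypothetical.
from collections import Counter
import re

responses = [
    "Melting ice and rising sea levels",
    "Carbon taxes will hurt jobs",
    "Glaciers melting faster every year",
]

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

# Per-response term counts and a shared vocabulary across all responses
counts = [Counter(tokenize(r)) for r in responses]
vocab = sorted({w for c in counts for w in c})

# Document-term matrix: one row of term frequencies per response
dtm = [[c[w] for w in vocab] for c in counts]
print(vocab)
print(dtm)
```

A topic model then factorises this matrix into a small number of interpretable word distributions (topics such as "melting ice" or "carbon taxes") and per-response topic proportions.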
Based on earlier work, we hypothesise that what people associate with climate change relates to demographic factors (notably age and gender), political/ideological placement and recent events. Notably, we expect that younger people will emphasise effects on humans, all else equal, whereas older people will emphasise the physical aspects of climate change (glaciers, ice melt, sea-level rise). Given that the four countries display great variation in their electricity production systems, we also hypothesise that respondents in countries with high emissions from the power sector (Germany, the UK) will emphasise electricity emissions and coal more than those in countries with lower emissions from the same sector (France, Norway). We validate the computer-generated categorisations in two ways: first, team members independently assess computer-generated sets of topics, using the criteria of cross-topic completeness and intra-topic coherence; second, we compare our sets of computer-generated topics to categories produced by human coders using a coding scheme with 58 different topics.