Submission 130
LLMs can Correct Economic Falsehoods When Perceived as Neutral
PS2-G05-03
Presented by: Matthew DiGiuseppe
Recent evidence suggests that large language models (LLMs) can effectively persuade people to abandon conspiracy beliefs (Costello et al. 2024). This paper investigates whether LLMs can similarly influence public beliefs about economic relationships that are widely held but largely rejected by orthodox economics. The study also examines the mechanisms behind LLMs' persuasive capabilities, arguing that their perceived non-partisanship and lack of normative agenda enhance their credibility among respondents.
To test both LLMs' persuasiveness and the role of perceived neutrality, I conducted a two-wave survey on the Prolific platform. The first wave collected baseline data on respondents' support for common economic misconceptions, including the government-household budget analogy, trade protectionism, and the zero-sum nature of capitalism. The second wave engaged respondents in discussions with an LLM while experimentally manipulating claims about the model's partisan bias, and then re-measured support for economic misconceptions. A pilot study with 500 US respondents demonstrates that LLMs can successfully persuade citizens to revise their economic misconceptions, consistent with findings on conspiracy beliefs. While priming respondents to perceive the model as having out-partisan bias diminishes its persuasiveness, it does not eliminate it entirely. If these findings hold in a larger survey, the study will provide evidence that LLMs can effectively cut through polarized human discourse, primarily due to their perceived political neutrality.