Comparing Closed- and Open-Ended Responses to Measure Attention
P11-S283-1
Presented by: Jeffrey Ziegler
While many researchers rely on closed-ended manipulation checks to measure respondents’ attention in survey experiments, these measures have notable limitations, including their binary nature, cuing of respondents, and the potential for guessing. Open-ended responses for attention checks, on the other hand, provide a useful alternative that alleviates these concerns. Yet researchers still overwhelmingly implement closed-ended rather than open-ended responses. I compare the two approaches and outline the circumstances under which open-ended or closed-ended attention checks are most effective. Specifically, I leverage multiple survey experiments to examine the impact of experimental stimuli (e.g., text vs. audio), topic, and placement on the efficacy of both closed- and open-ended attention checks. The findings suggest there are few instances, if any, in which closed-ended responses are preferable to open-ended responses. I conclude with practical recommendations for designing surveys that strategically implement open-ended responses to check attention and enhance the quality of survey data.
Keywords: text-as-data; audio data; open-ended responses; respondent attention; experiments