08:30 - 10:00
Tue-H9-Talk 4--44
Tue-Talk 4
Room: H9
Chair/s: Thorsten Pachur
Using LLMs to Automate the Analysis of Verbal Reports
Tue-H9-Talk 4-4405
Presented by: Paul Ungermann
Paul Ungermann 1, 2, Tehilla Ostrovsky 1, Christopher Donkin 1
1 Ludwig-Maximilians-University Munich, 2 Technical University of Munich
In this presentation, we argue that verbal reports, often overlooked due to their perceived subjectivity and inefficiency for large-scale analysis, can be invaluable for understanding decision-making processes. Drawing on Mechera-Ostrovsky’s framework, we demonstrate that such reports can validate the formal components of cognitive models, as well as probe their more implicit assumptions. To make the collection and analysis of verbal reports more efficient, we introduce a new, user-friendly platform, integrated with the jsPsych library, that uses advanced machine-learning methods to capture and automatically analyze verbal reports collected during an experiment.
We demonstrate the capabilities of this platform through a case study on a memory task. In this study, we provide a detailed explanation of our data evaluation process, which comprises three steps: speech recognition, auto-summarization, and text vectorization. We will show how the text-vectorization step converts the text summaries into high-dimensional vectors, which in turn enables numerical methods such as clustering, hypothesis testing, and visualization. Our case study serves as a foundational illustration, presenting a flexible structure that can be conveniently adapted to different pipelines, tasks, and applications. Overall, our approach provides a scalable and accessible alternative for translating qualitative data into quantitative data, opening up new options for how verbal reports can be utilized in cognitive research.
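The vectorization-and-clustering step described above can be sketched in a few lines of Python. This is a minimal illustration, not the platform's implementation: the abstract says the platform uses advanced machine-learning methods, whereas here TF-IDF vectors and k-means (via scikit-learn) stand in as simple, self-contained substitutes, and the example summaries are invented for illustration.

```python
# Hedged sketch: turn (hypothetical) auto-generated summaries of verbal
# reports into high-dimensional vectors, then cluster them by similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented example summaries of participants' memory-task strategies.
summaries = [
    "I repeated the words silently to keep them in memory.",
    "I rehearsed the list over and over in my head.",
    "I made up a story linking the items together.",
    "I imagined a scene that connected all the words.",
]

# Vectorization: each summary becomes a numeric vector over the vocabulary.
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(summaries)  # shape: (n_reports, n_terms)

# Clustering: group reports into candidate strategy types
# (e.g., rehearsal-based vs. imagery-based).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(vectors)

print(vectors.shape)
print(labels)  # one cluster assignment per report
```

Once reports are represented as vectors, the same representation also supports the hypothesis testing and visualization mentioned in the abstract, for example by comparing cluster frequencies across experimental conditions.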
Keywords: natural language processing models, self-report, verbal description, decision-making, cognitive model validity