16:30 - 18:00
Parallel sessions 6
Room: HSZ - N2
Chair/s:
Hanna Woloszyn
Computational modeling has become an increasingly important approach for studying language generation across speaking, writing, and associative processes. Approaches range from distributional semantics and transformer-based architectures to learning-based production models, each providing different ways to formalize and investigate how meaning, structure, and behavior emerge in linguistic systems. This symposium presents five studies at the intersection of cognitive science, computational linguistics, and psycholinguistics that use these methods to better understand language production and related cognitive processes. 
The symposium starts with a study investigating whether LLM-generated corpora can simulate the longitudinal development of children's texts, using various psycholinguistic variables to compare the produced language. The second talk explores whether visual characteristics of pictures, beyond their conceptual or lexical representations, contribute to interference effects in picture–word-interference tasks by integrating modern vision–language embeddings with behavioral data. The third project uses word embeddings to validate centroid analysis, a method for inferring concept representations from participants' open-ended verbal responses such as free associations, word substitutions, and feature generation. The fourth talk proposes a computational model that accounts for semantic interference phenomena in language production by implementing an incremental learning mechanism within an interactive production network. Finally, we present a study that investigates the psychometric capacities of language models (LMs) in the verbal fluency task, an experimental paradigm used to examine human knowledge retrieval, cognitive performance, and creative abilities.
By bringing together different perspectives, the symposium encourages a discussion on what it means to "model" language production and how such modeling can contribute to our understanding of human cognition and language processing. Together, these projects will increase our understanding of language models' potential benefits and limitations.
Submission 705
Components of Creativity: Language Model-Based Predictors for Clustering and Switching in Verbal Fluency
SymposiumTalk-05
Presented by: Özge Alacam
Özge Alacam, Judith Sieker, Simeon Junker, Sina Zarriess
University of Bielefeld, Germany
This work investigates the psychometric capacities of Language Models (LMs) in the verbal fluency task, an experimental paradigm used to examine human knowledge retrieval, cognitive performance, and creative abilities. We focus on switching and clustering patterns and seek evidence to substantiate them as two distinct and separable components of lexical retrieval processes in LMs. Specifically, we prompt different transformer-based LMs with verbal fluency items and ask whether metrics derived from the language models' prediction probabilities or internal attention distributions offer reliable predictors of switching/clustering behaviors in verbal fluency. We find that both token probabilities and, especially, attention-based metrics have strong statistical power in distinguishing between cases of switching and clustering.
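The clustering/switching distinction examined in this talk is commonly operationalized through the semantic relatedness of consecutive responses: runs of related items form a cluster, and a drop in relatedness marks a switch. The sketch below is a minimal, generic illustration of that idea, not the authors' method (which uses LM prediction probabilities and attention); the 2-d "embeddings" and the similarity threshold are invented for demonstration.

```python
import numpy as np

def detect_switches(vectors, threshold=0.5):
    """Flag a switch whenever the cosine similarity between two
    consecutive response embeddings drops below the threshold;
    otherwise the item continues the current cluster."""
    switches = []
    for prev, curr in zip(vectors, vectors[1:]):
        sim = np.dot(prev, curr) / (np.linalg.norm(prev) * np.linalg.norm(curr))
        switches.append(bool(sim < threshold))
    return switches

# Toy 2-d "embeddings" for a fluency sequence in the category "animals":
# two pets, then two farm animals (all values invented for illustration).
responses = ["dog", "cat", "cow", "pig"]
vectors = [np.array([1.0, 0.1]),   # dog
           np.array([0.9, 0.2]),   # cat (similar to dog -> same cluster)
           np.array([0.1, 1.0]),   # cow (dissimilar to cat -> switch)
           np.array([0.2, 0.9])]   # pig (similar to cow -> same cluster)

print(detect_switches(vectors))  # -> [False, True, False]
```

A real analysis would replace the toy vectors with embeddings from a distributional model, or, as in this work, with metrics read off an LM's token probabilities or attention distributions.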