16:30 - 18:00
Parallel sessions 6
16:30 - 18:00
Room: HSZ - N2
Chair/s:
Hanna Woloszyn
Computational modeling has become an increasingly important approach for studying language generation across speaking, writing, and associative processes. Approaches range from distributional semantics and transformer-based architectures to learning-based production models, each providing different ways to formalize and investigate how meaning, structure, and behavior emerge in linguistic systems. This symposium presents five studies at the intersection of cognitive science, computational linguistics, and psycholinguistics that use these methods to better understand language production and related cognitive processes. 
The symposium starts with a study investigating whether LLM-generated corpora can simulate the longitudinal development of children's texts, using various psycholinguistic variables to compare the produced language. The second talk explores whether visual characteristics of pictures, beyond their conceptual or lexical representations, contribute to interference effects in picture–word interference tasks by integrating modern vision–language embeddings with behavioral data. The third project uses word embeddings to validate centroid analysis as a method for inferring concept representations from participants' open-ended verbal responses, such as free associations, word substitutions, and feature generations. The fourth talk proposes a computational model that accounts for semantic interference phenomena in language production by implementing an incremental learning mechanism within an interactive production network. Finally, we present a study that investigates the psychometric capacities of language models in the verbal fluency task, an experimental paradigm used to examine human knowledge retrieval, cognitive performance, and creative abilities.
By bringing together different perspectives, the symposium encourages a discussion on what it means to "model" language production and how such modeling can contribute to our understanding of human cognition and language processing. Together, these projects will increase our understanding of language models' potential benefits and limitations.
Submission 505
Beyond Lemmas: Modeling the Picture in Picture-Word Interference
SymposiumTalk-02
Presented by: Louis Schiekiera
Louis Schiekiera 1, 2, Vincent Gruber 1, Fritz Günther 1
1 Humboldt-University, Berlin, Germany
2 Free University of Berlin, Germany
Accounts of picture–word interference (PWI) typically assume that the picture has already been conceptually encoded and thus model interference exclusively at the conceptual–lexical level. In contrast, we test whether visual properties of the target picture itself contribute to interference effects in naming. We assembled a large multimodal PWI dataset: 189,767 trials from 23 experiments across 13 studies, involving more than 1,078 participants and 1,311 target images (with additional data still being collected). For each target picture and distractor word, we compute vision–language embeddings using OpenCLIP, deriving cosine-similarity measures for (a) image × distractor-word, (b) image × target-word, and (c) target-word × distractor-word pairs. These multimodal similarity estimates are integrated with trial-level behavioral data in mixed-effects models predicting log naming latencies. The project aims to assess whether image–word similarity provides explanatory power beyond traditional semantic relatedness measures and design factors. More broadly, the study offers a framework for incorporating computational visual representations into psycholinguistic models of word production.
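As a minimal sketch of the similarity computation described above (the stand-in random vectors and variable names are illustrative only; in the study the vectors would be image and text embeddings from an OpenCLIP model projected into a shared space):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings; in practice these would be OpenCLIP encodings of
# the target picture, the target word, and the distractor word.
rng = np.random.default_rng(0)
img_target = rng.standard_normal(512)      # embedding of the target picture
txt_target = rng.standard_normal(512)      # embedding of the target word
txt_distractor = rng.standard_normal(512)  # embedding of the distractor word

# The three similarity measures entered as predictors of log naming latency:
sim_img_distractor = cosine_similarity(img_target, txt_distractor)     # (a)
sim_img_target = cosine_similarity(img_target, txt_target)             # (b)
sim_target_distractor = cosine_similarity(txt_target, txt_distractor)  # (c)
```

Each cosine similarity is bounded in [-1, 1]; the trial-level values would then be passed to mixed-effects models alongside design factors such as semantic relatedness.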