16:30 - 18:00
Parallel sessions 6
16:30 - 18:00
Room: HSZ - N2
Chair/s:
Hanna Woloszyn
Computational modeling has become an increasingly important approach for studying language generation across speaking, writing, and associative processes. Approaches range from distributional semantics and transformer-based architectures to learning-based production models, each providing different ways to formalize and investigate how meaning, structure, and behavior emerge in linguistic systems. This symposium presents five studies at the intersection of cognitive science, computational linguistics, and psycholinguistics that use these methods to better understand language production and related cognitive processes. 
The symposium starts with a study investigating whether LLM-generated corpora can simulate the longitudinal development of children's texts, comparing the produced language on a range of psycholinguistic variables. The second talk explores whether visual characteristics of pictures, beyond their conceptual or lexical representations, contribute to interference effects in picture–word interference tasks by integrating modern vision–language embeddings with behavioral data. The third project uses word embeddings to validate centroid analysis as a method for inferring concept representations from participants' open-ended verbal responses, such as free associations, word substitutions, and feature generation. The fourth talk proposes a computational model that accounts for semantic interference phenomena in language production by implementing an incremental learning mechanism within an interactive production network. Finally, we present a study that investigates the psychometric capacities of language models in the verbal fluency task, an experimental paradigm used to examine human knowledge retrieval, cognitive performance, and creative abilities. 
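As an illustration of the centroid approach mentioned above, the following is a minimal, hypothetical sketch: a participant's open-ended responses are averaged in embedding space, and the resulting centroid is compared to candidate concepts by cosine similarity. The toy 3-dimensional vectors and the specific words stand in for real word embeddings and data; they are assumptions for illustration only.

```python
import numpy as np

# Toy embeddings (hypothetical 3-d vectors standing in for real word embeddings)
emb = {
    "dog":    np.array([0.9, 0.1, 0.0]),
    "bark":   np.array([0.8, 0.2, 0.1]),
    "leash":  np.array([0.7, 0.3, 0.0]),
    "banana": np.array([0.0, 0.1, 0.9]),
}

def centroid(words):
    """Average the embeddings of a participant's responses."""
    return np.mean([emb[w] for w in words], axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Free associations produced to an unnamed cue concept:
responses = ["bark", "leash"]
c = centroid(responses)

# The centroid lies closer to "dog" than to an unrelated word, so the
# cue concept can be inferred from the open-ended responses alone.
print(cosine(c, emb["dog"]) > cosine(c, emb["banana"]))  # True
```

In practice the same comparison would be run over a full vocabulary of high-dimensional embeddings rather than a handful of toy vectors.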
By bringing together different perspectives, the symposium encourages a discussion on what it means to "model" language production and how such modeling can contribute to our understanding of human cognition and language processing. Together, these projects will increase our understanding of language models' potential benefits and limitations.
Submission 568
Semantic Interference as Hebbian Learning: An Integrative Framework
SymposiumTalk-04
Presented by: Merel Muylle
Shanhua Hu 1, Merel Muylle 2, Tom Verguts 2, Nazbanou Nozari 1
1 Indiana University, United States
2 Ghent University, Belgium
Incremental learning has been shown to underlie many psycholinguistic effects, from phonotactic and orthotactic constraint learning to structural priming. Recently, it has been proposed that cognitive control, too, is best understood as a learning mechanism, yet the role of cognitive control in resolving semantic interference in language production remains much debated. Here, we propose a model of Hebbian learning that captures different kinds of semantic interference in language production. Specifically, the model successfully captures semantic interference effects across blocked cyclic naming, continuous naming, and picture-word interference (PWI) tasks, as well as congruency sequence effects found in PWI.

While the same Hebbian learning mechanism is applied to capture all of these effects, our simulations demonstrate that the critical learning happens in different parts of the system depending on the kind of interference. When stimulus-driven information is sufficient to arrive at the correct response (i.e., when the target stimulus is more potent than the distractor), learning between stimulus and response is sufficient to capture semantic interference. In contrast, when stimulus-driven information more strongly signals an incorrect response (i.e., when the distractor is more potent than the target stimulus), the critical learning happens between task-demand and task-specific stimulus representations. In addition to accounting for behavioral findings, this framework can account for seemingly discrepant neural data obtained from a variety of semantic interference tasks.
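The core Hebbian mechanism described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the feature coding, learning rate, and response units are assumptions chosen only to show how strengthening stimulus–response connections during one naming trial produces spurious activation of a semantically related competitor on a later trial.

```python
import numpy as np

def hebbian_update(W, pre, post, eta=0.1):
    """Hebbian rule: strengthen connections between co-active units,
    dW = eta * outer(post, pre)."""
    return W + eta * np.outer(post, pre)

# Hypothetical feature coding for two pictures sharing a semantic feature.
# Features: [animal, furry, striped]
cat   = np.array([1.0, 1.0, 0.0])
tiger = np.array([1.0, 0.0, 1.0])

# Response units: [say "cat", say "tiger"]; weights start at zero.
W = np.zeros((2, 3))

# Naming "cat" strengthens weights from ALL of its active features to the
# "cat" response, including the shared "animal" feature.
W = hebbian_update(W, cat, np.array([1.0, 0.0]))

# Presenting "tiger" later partially activates the competing "cat" response
# through the shared feature -- a toy analogue of semantic interference.
act = W @ tiger
print(act)  # the "cat" unit receives spurious activation via "animal"
```

In the full model described in the abstract, the same update rule is applied either between stimulus and response representations or between task-demand and stimulus representations, depending on which pathway drives the critical learning.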