11:00 - 12:30
Parallel sessions 5
Room: HSZ - N2
Chairs:
Cosimo Iaia, Jack E Taylor
The development of powerful computational language models in recent years has seen an increasing application of such models in psychology and psycholinguistics. Both Distributional Semantic Models (e.g., word2vec, GloVe) and Large Language Models (e.g., GPT, BERT) have been applied in two main ways: (1) as models of human language processing, and (2) as tools for generating measures that are relevant to psycholinguistic theories and hypotheses. However, the distinction between these two applications is not always clear, and both are limited by fundamental differences between language processing in models engineered to achieve human-like performance and the processes humans actually use. Nevertheless, language models have demonstrated strong potential for providing insight into language processes. This symposium brings together five talks that address recent developments in the use of language models as tools for psycholinguistics, and the degree to which such models provide comparisons and outputs that are meaningful for progress in the field. The first talk sets a theoretical foundation for the symposium, evaluating caveats of comparing Large Language Models to humans and outlining how meaningful comparisons require rigorous experimental methods. The second talk explores whether humans and language models (both Large Language Models and Distributional Semantic Models) represent abstract meaning in a similar way, while also highlighting differences that emerge between the two systems. The third talk shows how Large Language Models can be used to generate new iconicity ratings for Turkish, providing a new avenue for investigating semantic dimensions in otherwise understudied languages. The fourth talk evaluates how well estimates of word frequency and familiarity derived from Large Language Models can explain children’s reading times. Finally, the last talk applies Distributional Semantic Models to the learning of morphology, showing that natural text provides sufficient information to learn the meanings of prefixes and suffixes. Together, these talks highlight the ongoing potential of language models as tools for psycholinguistics. They also provide an opportunity for important discussion of the caveats of this approach and of the scientific applications language models can support.
Submission 495
The Representational Alignment Between Humans and Language Models Is Implicitly Driven by a Concreteness Effect
SymposiumTalk-02
Presented by: Cosimo Iaia
Cosimo Iaia 1, 2, Bhavin Choksi 3, Emily Wiebers 1, Gemma Roig 3, 4, 5, Christian J. Fiebach 1, 2
1 Department of Psychology, Goethe University Frankfurt, Germany
2 Cooperative Brain Imaging Center, Germany
3 Department of Computer Science, Goethe University Frankfurt, Germany
4 hessian.AI, Germany
5 Center for Brains, Minds, and Machines, MIT, United States
Words in human language can be conceptualized as more abstract (like justice) or more concrete (like table). Cognitive psychology has established that this property of word meaning influences how words are processed. Understanding how concreteness is represented in our mind and brain is therefore a central question in psychology, neuroscience, and computational linguistics. While the advent of powerful language models has allowed for quantitative inquiries into the nature of semantic representations, how these models represent concreteness remains largely underexplored. Here, we leveraged an odd-one-out task to estimate the semantic distances implicitly used by humans for a set of carefully selected abstract and concrete nouns. Using Representational Similarity Analysis, we find that this implicit representational space, based on the odd-one-out responses of 40 participants, and the semantic representations of five frequently used language models (fastText, word2vec, BERT base, BERT large, and GPT-2) are significantly aligned, and that both representational spaces are aligned with an explicit representation of concreteness (based on concreteness ratings provided by the same participants). Most importantly, using model ablation experiments, we demonstrate that human-to-model alignment is substantially driven by concreteness, but not by other word characteristics such as word length or frequency. Overall, human-to-model alignment is more sensitive to semantic than to non-semantic variables. Combined, these results highlight a critical role for concreteness as a mediator of the alignment between human word representations and language models. More generally, this work shows that language models can be useful tools for understanding the nature of semantic representations in the human mind and brain.
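To make the core analysis concrete, the following is a minimal Python sketch of Representational Similarity Analysis as described in the abstract. It assumes a human dissimilarity matrix already estimated from odd-one-out responses and precomputed word embeddings; all variable names and the toy data are illustrative stand-ins, not the authors' actual materials or pipeline.

```python
# Minimal RSA sketch: correlate a model's representational dissimilarity
# matrix (RDM) with a human RDM. Toy data throughout; in the study, the
# embeddings would come from fastText, word2vec, BERT, or GPT-2, and the
# human RDM from odd-one-out choices over abstract and concrete nouns.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, dim = 20, 300

# Stand-in model embeddings (one row per word) and a stand-in human
# dissimilarity matrix built from a random condensed distance vector.
model_embeddings = rng.normal(size=(n_words, dim))
human_rdm = squareform(rng.uniform(size=n_words * (n_words - 1) // 2))

# Model RDM: pairwise cosine distances between word embeddings.
model_rdm = squareform(pdist(model_embeddings, metric="cosine"))

# RSA score: rank correlation between the upper triangles of the two RDMs.
iu = np.triu_indices(n_words, k=1)
rho, p = spearmanr(model_rdm[iu], human_rdm[iu])
print(f"RSA (Spearman rho) = {rho:.3f}, p = {p:.3g}")
```

The ablation experiments mentioned in the abstract would then, presumably, repeat this correlation after removing information of interest (e.g., concreteness-related variance) from the embeddings and compare the resulting alignment to the baseline.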