11:00 - 12:30
Parallel sessions 8
Room: HSZ - N5
Chair/s:
Fritz Günther, Markus Kiefer
Embodied and grounded cognition approaches have remained enduring focal points in cognitive psychology. Although subtle differences between them are sometimes postulated, both approaches converge on the assumption that cognition is essentially based on a reinstatement of processes of perception, action, and introspection. How exactly the symbols central to our higher cognition and communication, such as linguistic forms and abstract mental representations, obtain their meaning from sensorimotor experience remains one of the open challenges in this line of research. Experimental-behavioural studies in this area have produced important insights. At the same time, we are experiencing the theoretical and empirical limits of this approach:
On the one hand, research on embodied cognition often lacks the formalisation, quantification, and precision required to make theoretically substantive advances – a gap to be filled with computational modelling. Here, recent work has brought forward large-scale, data-driven representation models built from different data sources, such as language and vision. These allow us to operationalise exactly to what extent information from different modalities of experience shapes our semantic representations, and to investigate their specific influences on cognitive processes.
On the other hand, theories of embodied cognition ultimately result in claims about processes in specific cognitive systems (shared between higher cognition and sensorimotor or introspective processing), which are hard to evaluate with purely behavioural approaches and instead require neuroscientific methods. These include electrophysiological methods with high temporal precision as well as neuroimaging methods with high spatial resolution; together, these techniques allow us to precisely map the neural processes that underpin higher cognition.
In this symposium, we bring together recent advances integrating computational and neuroscientific approaches to embodiment research: computational models yield precise predictions at the system level, which in turn can be tested with neuroscientific methods. The presentations in this symposium highlight the advantages of this interplay for various fields of embodied cognition, such as language, memory, and semantics.
Submission 556
When Vision and Language Act Together: Multimodal Contributions to False Memory Formation
SymposiumTalk-01
Presented by: Marco Petilli
Marco Petilli 1, Francesca Rodio 2, Daniele Gatti 3, Marco Marelli 1, 4, Luca Rinaldi 3, 5
1 Department of Psychology, University of Milano–Bicocca, Italy
2 Institute for Advanced Studies, Istituto Universitario di Studi Superiori, Italy
3 Department of Brain and Behavioral Sciences, University of Pavia, Italy
4 NeuroMI, Milan Center for Neuroscience, Milano, Italy
5 Cognitive Psychology Unit, Istituto di Ricovero e Cura a Carattere Scientifico Mondino Foundation, Pavia, Italy
The formation of memory representations is a complex phenomenon shaped by experience, yet the contributions of different experiential sources remain unclear. The present study aimed to assess the role of language-based and vision-based experience in the generation of false memories. To this end, we employed computational models trained on distinct data types – a convolutional neural network (CNN) trained on visual data and a distributional semantic model (DSM) trained on language corpora – to independently quantify the similarity between object representations in the visual and linguistic experiential domains. We then examined how these two forms of similarity predict false memories in two parallel variants of a Deese–Roediger–McDermott (DRM) false memory task: a linguistic variant using words and a visual variant using corresponding images. In each task, participants first memorised a list of items and then indicated whether items in a new list were part of the initially memorised one. Consistent with established findings in the literature, higher similarity between unstudied and studied items increased false recognition across both tasks. Critically, both visual and linguistic similarity made unique and significant contributions to this effect in both the image- and word-based tasks, supporting a multimodal and integrated architecture of memory traces regardless of input modality. However, the relative influence of these knowledge sources was modality-dependent: language-based knowledge played a stronger role in the word-based task, while visual prior knowledge dominated in the image-based task. This hybrid, modality-dependent pattern highlights the adaptive nature of memory representations, showing how the human mind dynamically integrates diverse experiential traces depending on task demands.
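The similarity measures described in the abstract can be sketched minimally: assuming each item has one vector representation in a vision-based space (e.g. CNN activations) and one in a language-based space (e.g. DSM word vectors), the similarity between a studied item and an unstudied lure is typically the cosine of the angle between their vectors. The item names and embedding values below are purely illustrative, not the study's data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for a studied item ("dog") and an unstudied
# lure ("wolf") in two spaces: a vision-based space (e.g. CNN activations)
# and a language-based space (e.g. DSM word vectors). Values are made up.
vision = {"dog": np.array([0.9, 0.1, 0.3]), "wolf": np.array([0.8, 0.2, 0.4])}
language = {"dog": np.array([0.2, 0.7, 0.5]), "wolf": np.array([0.3, 0.6, 0.6])}

visual_sim = cosine_similarity(vision["dog"], vision["wolf"])
linguistic_sim = cosine_similarity(language["dog"], language["wolf"])

# In the design the abstract describes, each item pair's visual and
# linguistic similarity would enter a regression predicting false-recognition
# rates, allowing the unique contribution of each source to be estimated.
print(visual_sim, linguistic_sim)
```

In practice the embedding spaces have hundreds of dimensions, but the similarity computation is unchanged; only the modelling step that relates the two similarity scores to behaviour differs by task variant.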