Submission 556
When Vision and Language Act Together: Multimodal Contributions to False Memory Formation
SymposiumTalk-01
Presented by: Marco Petilli
The formation of memory representations is a complex phenomenon shaped by experience, yet the contributions of different experiential sources remain unclear. The present study assessed the roles of language-based and vision-based experience in the generation of false memories. To this end, we employed computational models – a convolutional neural network (CNN) and a distributional semantic model (DSM) – trained on distinct data types, visual data and language corpora, to independently quantify the similarity between object representations in the visual and linguistic experiential domains. We then examined how these two forms of similarity predict false memories in two parallel variants of a Deese–Roediger–McDermott (DRM) false memory task: a linguistic variant using words and a visual variant using corresponding images. In each task, participants first memorised a list of items and then indicated whether items in a new list had been part of the initially memorised one. Consistent with established findings in the literature, higher similarity between unstudied and studied items increased false recognition across both tasks. Critically, both visual and linguistic similarity made unique and significant contributions to this effect in both the image- and word-based tasks, supporting a multimodal and integrated architecture of memory traces regardless of input modality. However, the relative influence of these knowledge sources was modality-dependent: language-based knowledge played a stronger role in the word-based task, while visual prior knowledge dominated in the image-based task. This hybrid, modality-dependent pattern highlights the adaptive nature of memory representations, showing how the human mind dynamically integrates diverse experiential traces depending on task demands.
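As a minimal sketch of the similarity computation described above: each item receives one embedding from a vision model (CNN) and one from a language model (DSM), and pairwise cosine similarity in each space serves as a separate predictor of false recognition. The embeddings below are random stand-ins for illustration only; the variable names and dimensions are assumptions, not details from the study.

```python
import numpy as np

# Hypothetical embeddings for one studied item and one unstudied lure.
# In the study these would come from a CNN (visual space) and a DSM
# (linguistic space); random vectors stand in for them here.
rng = np.random.default_rng(0)
cnn_studied, cnn_lure = rng.normal(size=(2, 512))  # visual embeddings
dsm_studied, dsm_lure = rng.normal(size=(2, 300))  # linguistic embeddings

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (range -1 to 1)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# One similarity score per experiential domain for the same item pair.
visual_sim = cosine_similarity(cnn_studied, cnn_lure)
linguistic_sim = cosine_similarity(dsm_studied, dsm_lure)

# Both scores would then enter a regression predicting false recognition
# of the lure, e.g. false_alarm ~ b0 + b1*visual_sim + b2*linguistic_sim,
# allowing each domain's unique contribution to be estimated.
```

Because the two similarity scores come from independently trained models, their regression weights can be compared across the word-based and image-based task variants, which is how a modality-dependent pattern would surface.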