Submission 667
Exploring Large Semantic Spaces of Neural Word Representations
SymposiumTalk-02
Presented by: Ilaria Appel
How our conceptual representations are grounded in our experience of the world remains unclear. Comparing neural semantic maps with vector space models promises to offer insight, yet large-scale studies are lacking.
We addressed this gap by building large neural semantic spaces for 1,080 abstract and concrete words. We collected EEG data from 40 participants in a Rapid Serial Visual Presentation (RSVP) paradigm and computed all pairwise distances between words (N = 582,660) for each electrode (N = 128) and time point. We compared these neural spaces with both language- and image-based models (Word2Vec and ViSpa, respectively) to examine how visual and linguistic information integrates across time and space.
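To make the comparison pipeline concrete, here is a minimal representational-similarity-style sketch in Python. The array names, demo sizes, distance metrics (cosine for the embeddings, absolute amplitude differences for single-electrode responses), and the Spearman rank correlation are all illustrative assumptions; the abstract does not specify these analysis choices.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Demo dimensions, kept small for a quick run; the study itself used
# 1,080 words and 128 electrodes, giving 1080 * 1079 / 2 = 582,660 word pairs.
n_words, n_electrodes, n_times, n_dims = 100, 8, 20, 300
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_words, n_electrodes, n_times))  # word-level EEG responses
embeddings = rng.standard_normal((n_words, n_dims))          # e.g. Word2Vec or ViSpa vectors

# Model representational dissimilarity vector: one distance per word pair
# (cosine distance is an assumption, not stated in the abstract).
model_rdm = pdist(embeddings, metric="cosine")

# Neural dissimilarities per electrode and time point. At a single electrode
# and time point each word is a scalar, so the pairwise Euclidean distance
# reduces to the absolute amplitude difference. Each neural RDM is then
# rank-correlated with the model RDM.
fit = np.empty((n_electrodes, n_times))
for e in range(n_electrodes):
    for t in range(n_times):
        neural_rdm = pdist(eeg[:, e, t][:, None], metric="euclidean")
        rho, _ = spearmanr(neural_rdm, model_rdm)
        fit[e, t] = rho
```

The resulting electrode-by-time map of model fit is the kind of quantity that the ROI and time-course results below summarize.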
Initial results from five regions of interest (ROIs) along the ventral stream show that image-based models best account for the neural maps. Notably, semantic coding appears to arise already in the occipital pole, early after word presentation (~100 ms), and persists into the left Anterior Temporal Pole/Inferior Frontal ROI at later time points (~350 ms), particularly for abstract words. Overall, these results suggest that image-based semantic models effectively capture the structure of the neural conceptual system in the ventral stream, pointing to a perceptually grounded representation of semantics that emerges at very early processing stages. More generally, these findings fit the framework of embodied and modal cognition, highlighting the central role of perceptual information in shaping our conceptual system.