How is multisensory information encoded and maintained in working memory?
Wed-P13-Poster III-205
Presented by: Ceren Arslan
Multisensory integration can occur automatically, in a bottom-up fashion, or require top-down attention. To date, this interaction between attention and cross-modal integration has been studied primarily at the perceptual level. It remains unclear how multisensory integration and attention interact in working memory and, hence, how multisensory information is represented there. Here, in an audio-visual working memory task, participants are presented with one or two temporally and spatially aligned audio-visual memory items. In separate blocks, they are asked to maintain either only the auditory or only the visual features (single-feature conditions) or both features (conjunction condition). After a short delay interval, participants indicate whether a probe item matches any of the task-relevant memory features or objects. Because the probe is always audio-visual, congruency effects between the two modalities can be assessed, which reveals whether task-irrelevant features still affect performance. Preliminary behavioral analysis shows that increasing memory load reduces accuracy and increases reaction times (RTs) across all memory conditions. Further, RTs are slower in the conjunction condition than in the single-feature conditions. Critically, RTs do not differ between congruent and incongruent probes in the two single-feature conditions, suggesting that task-irrelevant features are successfully filtered out; conversely, in the conjunction condition, there is a clear probe-congruency effect on RTs. Ongoing EEG analysis aims to determine at what stage and under which conditions multisensory integration occurs, and whether top-down attentional modulation is reflected in the (dis-)engagement of sensory regions.
Keywords: multisensory integration, working memory, attention