Contextual Facilitation of Neural Processes Underlying Action Recognition
Wed-Main hall - Z2b-Poster 3-8908
Presented by: Oleg Vrabie
The co-occurrence of objects and actions in scenes of our daily life allows our brains to build and rely on associations for action recognition, especially when information is scarce (Wurm et al. 2017). These associations are thought to be conveyed by top-down projections that facilitate recognition (Bar 2007). Several studies have examined the interplay between actions, objects, and scenes during action recognition (Wurm et al. 2012; Wokke et al. 2016; Baldassano et al. 2017). However, we still have a limited understanding of the neural mechanisms through which context contributes to action recognition. To shed light on this question, we designed an fMRI paradigm in which we manipulate the amount of contextual information by segmenting backgrounds or manipulated objects out of action scene images. To assess the impact of contextual facilitation on action recognition, we will compare decoding accuracies obtained from partially segmented action scenes to those obtained from full action scenes. We expect contextual facilitation to manifest as a super-additive enhancement of action representations in fully presented compared to partially presented action scenes. Additionally, we will test whether the multivoxel fMRI activation patterns evoked by full action scenes resemble weighted averages or weighted sums of the patterns evoked by their isolated constituent components. We expect the representation of the full action scene to be better modelled by a weighted average of the patterns evoked by its constituent components. The results of this experiment are expected to contribute to our understanding of how action meaning is extracted from multiple sources.
Keywords: actions, contextual facilitation, fMRI, MVPA
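To make the planned decoding comparison concrete, here is a minimal sketch of a cross-validated MVPA decoding analysis on synthetic data. The data shapes, the number of action categories, the relative signal strengths across conditions, and the linear-SVM classifier are all illustrative assumptions, not the study's actual pipeline or results.

```python
# Sketch: cross-validated decoding of action category from multivoxel
# patterns, compared across full and partially segmented scene conditions.
# All data here are simulated; nothing reflects real fMRI measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_actions = 120, 500, 4

def simulate_patterns(signal_strength):
    """Simulate trial-wise multivoxel patterns for n_actions categories."""
    labels = np.repeat(np.arange(n_actions), n_trials // n_actions)
    prototypes = rng.normal(size=(n_actions, n_voxels))
    patterns = signal_strength * prototypes[labels] \
        + rng.normal(size=(n_trials, n_voxels))
    return patterns, labels

# Hypothetical assumption for illustration only: full scenes carry more
# category signal than background- or object-segmented scenes.
conditions = {"full": 0.5, "background_only": 0.3, "object_only": 0.3}
for name, strength in conditions.items():
    X, y = simulate_patterns(strength)
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    print(f"{name:16s} decoding accuracy: {acc:.2f}")
```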
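The pattern-composition test can be sketched in the same spirit: fit the full-scene pattern as a weighted combination of the component patterns, once with unconstrained weights (a weighted sum) and once with weights constrained to sum to one (a weighted average), and compare the fits. The closed-form constrained fit and the simulated ground truth below are assumptions for illustration.

```python
# Sketch: does a weighted average or a weighted sum of component patterns
# better approximate the full-scene pattern? Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500
object_pat = rng.normal(size=n_voxels)
background_pat = rng.normal(size=n_voxels)
# Hypothetical ground truth: the full scene is a noisy weighted average.
full_pat = 0.6 * object_pat + 0.4 * background_pat \
    + 0.2 * rng.normal(size=n_voxels)

X = np.column_stack([object_pat, background_pat])

# Weighted-sum model: unconstrained least-squares weights.
w_sum, *_ = np.linalg.lstsq(X, full_pat, rcond=None)

# Weighted-average model: weights constrained to sum to 1,
# reparameterized as (w, 1 - w) and solved in closed form.
d = object_pat - background_pat
w = d @ (full_pat - background_pat) / (d @ d)
w_avg = np.array([w, 1 - w])

for name, wts in [("sum", w_sum), ("average", w_avg)]:
    pred = X @ wts
    r = np.corrcoef(pred, full_pat)[0, 1]
    print(f"weighted {name}: weights={np.round(wts, 2)}, r={r:.2f}")
```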