Representations of hierarchical scene information in the brain
Wed—HZ_12—Talks8—8103
Presented by: Victoria Nicholls
Our knowledge of scenes is thought to have a hierarchical structure: at the lowest level are small, local objects, e.g. a bar of soap; these are associated with so-called “anchors”, typically larger objects such as a sink. Together they form a “phrase”, a meaningful and functionally organized subset of a scene, and multiple phrases combine to form a scene. What has not been established is whether this hierarchical scene knowledge is represented at the neural level, which brain regions are involved, and the temporal dynamics of accessing it. To examine this, we recorded MEG while participants were presented with an isolated object (local or anchor) as a word label, an image, or a target word in the context of a search task, followed by a blank period during which they were instructed to imagine the object. Using representational similarity analysis (RSA) with models representing the different levels of scene knowledge, we analysed each stimulus presentation and blank period to determine whether participants access representations of the objects alone or additionally access phrase- and scene-level representations. During the stimulus period we found peaks for the object, phrase, and scene category models from 100–200 ms post-stimulus onset; during the blank period we found no such peaks. This suggests that even when viewing isolated objects, participants automatically access scene and even phrasal representations. It also implies that these automatic representations of functional groupings of objects within scenes may not be maintained in working memory when the task does not immediately require them.
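The time-resolved RSA described above can be sketched roughly as follows. This is a minimal illustrative version, not the authors' actual pipeline: it assumes MEG data averaged per condition into an array of shape (conditions, sensors, timepoints), builds a neural representational dissimilarity matrix (RDM) at each timepoint, and correlates it with a model RDM (e.g. an object-, phrase-, or scene-level model) using Spearman correlation. All variable and function names are hypothetical.

```python
# Hypothetical sketch of time-resolved RSA; not the authors' code.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_timecourse(meg, model_rdm):
    """Correlate neural RDMs with a model RDM at each timepoint.

    meg       : array (n_conditions, n_sensors, n_times), condition-averaged data
    model_rdm : condensed (upper-triangle) model RDM, length n_conditions*(n_conditions-1)/2
    returns   : array (n_times,) of Spearman correlations
    """
    n_times = meg.shape[2]
    scores = np.empty(n_times)
    for t in range(n_times):
        # Neural RDM: pairwise correlation distance between condition patterns
        neural_rdm = pdist(meg[:, :, t], metric="correlation")
        scores[t] = spearmanr(neural_rdm, model_rdm).correlation
    return scores
```

Peaks in the resulting timecourse for a given model (as reported here at 100–200 ms for object, phrase, and scene models) would indicate when that level of representation is expressed in the MEG signal.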
Keywords: