Anchor-level scene semantic effects on spatial coding in naturalistic (virtual) environments
Wed—HZ_7—Talks7—6801
Presented by: Bianca Baltaretu
Interacting with objects in our environment involves both object perception and object location coding. Locations can be coded using an egocentric (self-centred) and/or allocentric (object-centred) reference frame. For memory-guided actions in more naturalistic scenarios, allocentric coding has been shown to be influenced by factors ranging from low- to high-level (e.g., task relevance). Semantic relationships between local objects (small, moveable) have also been shown to strengthen allocentric coding (i.e., stronger effects for local objects of the same category). The role of the next level of the scene grammar hierarchy, anchor objects (large, stationary, predictive), in allocentric coding remains largely unexplored. Here, we investigated the effect of anchor (1) identity and (2) presence on allocentric coding of local objects within two scene types (kitchen, bathroom). In a virtual environment, three local objects were presented on a shelf in one of three conditions: (1) scene-congruent anchors present, (2) cuboids present, or (3) only the shelf present. After a brief mask and delay, the scene was presented again without the local objects, with one of the anchors/cuboids either shifted (leftward or rightward) or unshifted. Participants then had to grab the local object target with the controller and place it at its remembered location on the empty shelf. Placement behaviour was affected neither by anchor identity changes nor by anchor presence. These findings suggest that any large(r), stable anchor object that is present benefits allocentric coding of local objects in memory-guided placement tasks.
Keywords: Scene semantics; Scene perception; Object perception; Allocentric coding; Memory-guided action; Virtual Reality