Scene semantic effects on allocentric coding in naturalistic (virtual) environments
Wed-H4-Talk 9-9504
Presented by: Bianca Baltaretu
Interacting with objects in our surroundings involves object perception and object location coding, the latter of which can be accomplished egocentrically (i.e., relative to the self) and/or allocentrically (i.e., relative to other objects). Allocentric coding for actions in more naturalistic scenarios can be influenced by multiple factors (e.g., task relevance). Within the hierarchy of scene grammar, the semantic relationship among local objects (small/moveable) can strengthen allocentric coding (i.e., stronger effects for local objects of the same vs. different object categories). It is currently unknown how the next level of the scene grammar hierarchy, anchor objects (large/stationary), modulates this process, given that anchors tend to predict the identity and location of surrounding local objects. Here, we investigated the effect of semantically congruent versus incongruent anchors on allocentric coding of local objects within two scene types (kitchen, bathroom). In a virtual environment, three local objects were presented on a shelf connecting two anchors (semantically congruent or incongruent with the local objects). After a brief mask and delay, the scene was presented again without the local objects, with one of the anchor objects either shifted (leftward or rightward) or unshifted. Participants then grabbed the presented local object target and placed it at its remembered location on the empty shelf. Our findings show systematic placement errors in the direction of the anchor shift, with no influence of semantic congruency. These results suggest that, even when task-irrelevant, anchors play an important role in the allocentric coding of local objects in naturalistic, virtual environments for action.
Keywords: spatial coding, object perception, scene perception, scene semantics, virtual reality