Statistical learning in visual search inside and outside the initial field of view
Wed—HZ_9—Talks9—9605
Presented by: Artyom Zinchenko
While contextual cueing (the facilitation of visual search by learned, repeated spatial configurations) has been extensively studied in 2D displays, little is known about how it operates in 3D virtual reality (VR) settings. To investigate this, we developed a new VR experiment that tested how people find targets both within their initial field of view and in the surrounding environment. Across two experiments, we found strong contextual cueing effects in 3D environments: participants responded faster to repeated configurations, whether targets appeared within or outside their initial view. We also found that people tend to scan 3D environments from left to right, which affected how they adapted when targets were moved to new locations. Importantly, participants remained faster at finding targets in familiar displays even after the targets were relocated, particularly when the new locations matched target positions from other learned displays. Head-tracking measurements confirmed that participants developed more efficient search patterns in familiar contexts. These findings show that contextual cueing extends to realistic 3D environments, demonstrating that repeated exposure helps people develop more efficient visual search strategies.
Keywords: Contextual cueing, VR, initial field of view, relearning