Context-based guidance vs. context suppression in contextual learning: role of un-/certainty in the target-context relations in visual search
Wed—HZ_9—Talks9—9601
Presented by: Siyi Chen
Standard investigations of contextual facilitation employ invariant distractor arrangements that predict a fixed target location. In the real world, however, invariant spatial contexts are not always predictive. This talk presents recent studies on how facilitation is influenced by uncertainty in the prediction of the target location. Using behavioral modeling and eye tracking, we examined how uncertainty in target locations, context-target relations, and display types influences contextual learning. Our findings reveal that different modes of contextual learning, namely contextual guidance and context suppression, depend on the probability of these predictions. Specifically, we demonstrate that only when the context-target relationship is fully predictable does the brain engage in contextual guidance; any disruption of this predictability triggers context suppression instead. We discuss how these modes may be implemented in a neural-network model of visual search, offering insights into the mechanisms underlying long-term contextual learning in visual search.
Keywords: contextual cueing; eye tracking; visual search; context-based guidance; context suppression