Statistical learning of target-distractor co-occurrences in human participants' visual-search scanning patterns
Wed—HZ_9—Talks9—9604
Presented by: Thomas Geyer
The contextual cueing paradigm has generated considerable interest in cognitive neuroscience as a means of understanding how human participants exploit statistical target-distractor co-occurrences in visual search tasks. According to one account, contextual repetitions lead to the acquisition of spatial, or contextual, long-term memories (LTM), which support more efficient guidance, or 'cueing', of attention to the searched-for target location in recurring displays. However, guidance of search by display-specific LTMs has been shown only with aggregate measures of attention, such as reaction times or fixation counts.
In recent investigations, we have therefore analyzed fixation patterns in individual old-context displays. We find high similarity in visual scanning across participants even when these participants viewed differently composed old-context displays (with different target positions and distractor arrangements). Our results suggest an account according to which contextual repetitions lead to the acquisition of a skill, or procedural knowledge, about how to perform visual search more effectively across the entire set of repeated displays. Consequently, distinct types of knowledge, 'learning-that' versus 'learning-how', appear to contribute to search facilitation in repeated search arrays.
Keywords: Contextual Cueing, Selective Attention, Scanpath Theory, Visual Search, Procedural Learning, Statistical Learning