15:00 - 16:30
Submission 652
Is Visual Context an Effective Segmentation Cue in Event File Integration?
Posterwall-09
Presented by: Susanne Mayr
Susanne Mayr, Malte Möller, Anna Scotts
University of Passau, Germany
Stimulus and response features are assumed to be temporarily integrated into so-called event files that are retrieved upon re-encountering any of their features, with impaired performance in the case of partial repetitions (so-called partial repetition costs, PRCs). Auditory context has been shown to influence the integration of auditory stimulus and response features, with stronger PRCs for features that had been surrounded by the same auditory context than for stimulus and response features that had been separated by a different auditory context. The present experiments tested whether visual context takes on a similar segmenting role during event file integration. In a distractor-response-binding task, participants either had to identify shapes presented on a colored background serving as context (Experiment 1, N = 50) or identify sounds while viewing a colored background (Experiment 2, N = 50). The color context either changed between prime stimulus presentation and prime response or stayed the same throughout. PRCs were not influenced by the context manipulation. In Experiment 3 (N = 58), we sought to enhance the segmenting function of the visual context by introducing a dynamically moving background motif, which was expected to emphasize the temporal dimension of the contextual information more strongly. Again, shapes had to be identified in front of the dynamic context, which either separated stimulus and response features by a color change or maintained a constant color. Yet, there was no modulation of PRCs by context. Unlike auditory context, visual context does not appear to play the same segmenting role in event file integration.