14:30 - 16:00
Tue-Poster 2
Room: Main hall - Z3
More is not always better: Temporal neural signatures of object-driven versus scene-driven human scene categorization
Tue-Main hall - Z3-Poster 2-6005
Presented by: Elia Samuel Rothenberg
Elia Samuel Rothenberg, Aylin Kallmayer, Sandro Luca Wiesmann, Melissa Lê-Hoa Vo
Goethe University Frankfurt, Institute of Psychology, Scene Grammar Lab, Theodor-W.-Adorno-Platz 6, D-60629 Frankfurt am Main, Germany
Visual scene categorization is an impressively rapid process. Whether a scene’s gist is mainly conveyed by object-centered or scene-centered information, however, has sparked considerable debate. In this experiment, we used electroencephalography (EEG) to better understand the temporal dynamics of visual scene processing. Drawing on previous work from our research group, we expected that if diagnostic object information is available to the human observer at fixation, scene categorization commences in a local-to-global manner. Stimuli consisted of black-and-white images from four basic-level scene categories (“bedroom”, “kitchen”, “city”, “train station”) and three stimulus conditions (“objects”, “scenes”, “textures”), comprising fifteen exemplars each. With preliminary data from 21 participants, multivariate pattern analysis (MVPA) demonstrated above-chance accuracy in scene category decoding across stimulus conditions. Comparing the proportional decoding peak latencies of the scene-category-specific neural representations of objects, scenes, and textures across time revealed significantly diverging peaks. Objects peaked first at ~150 ms after stimulus onset, followed by scenes at ~250 ms and textures at ~350 ms, indicating that objects convey scene category information more rapidly than scenes, while processing of scene-diagnostic properties contained in textures emerges at an even later stage. Our findings underscore the pivotal role of local information for human scene categorization. While scenes typically entail a higher information density than objects, pertinent diagnostic information may not always be incidentally available at fixation but may require additional sampling to be extracted. We therefore conclude that access to scene category information can be expedited by local, close-to-fixation object diagnosticity.
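The time-resolved decoding described above can be illustrated with a minimal sketch (not the authors' actual pipeline): a classifier is trained and cross-validated on sensor patterns separately at each time point, and the latency of the peak decoding accuracy is compared across conditions. All dimensions, the simulated data, and the choice of classifier are assumptions for illustration only.

```python
# Hypothetical sketch of time-resolved MVPA decoding of scene category.
# Simulated EEG-like data; real pipelines typically use toolboxes such as MNE-Python.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 120, 32, 50      # assumed dimensions
times_ms = np.linspace(0, 490, n_times)         # 0-490 ms after stimulus onset
y = np.repeat(np.arange(4), n_trials // 4)      # four scene categories

# Simulate noise plus a category-specific signal peaking around 150 ms
X = rng.standard_normal((n_trials, n_sensors, n_times))
signal = np.exp(-((times_ms - 150.0) ** 2) / (2 * 40.0 ** 2))  # Gaussian time course
for c in range(4):
    X[y == c, c, :] += 3.0 * signal             # one diagnostic sensor per category

# Decode the category separately at each time point (5-fold cross-validation)
acc = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
peak_latency = times_ms[acc.argmax()]
print(f"peak decoding accuracy {acc.max():.2f} at {peak_latency:.0f} ms")
```

Repeating this per condition (objects, scenes, textures) yields one accuracy time course each, whose peak latencies can then be compared statistically, as in the ~150 / ~250 / ~350 ms pattern reported above.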
Keywords: visual perception, scene categorization, EEG, object information, global scene information, decoding, scene grammar