09:00 - 10:30
Parallel sessions 1
Room: HSZ - N3
Chair/s:
Karin Maria Bausenhart, Markus Huff, Jeffrey M Zacks
Human cognition is shaped by the way humans perceive and segment continuous, dynamic, complex, and multimodal perceptual input into meaningful, discrete episodes. Such events and their boundaries (transitions that separate meaningful units of experience from each other) play a crucial role in structuring memory, guiding attention, and enhancing understanding. Perceiving an event boundary – for example, one triggered by a change in time, location, protagonist, goal, or social interaction – evokes updates in working memory and thereby prompts the formation of new event models or the adaptation of existing ones. This segmentation process may thus enhance comprehension and recall by creating clear divisions between contexts, allowing individuals to better encode, retrieve, and reason about sensory experience. Events and their boundaries also influence predictive processes: within a given event, reliable forecasts can be made based on contextual continuity and abstract event schemata, but predictions become less reliable when crossing event boundaries. Recent models suggest that increased uncertainty and error in predictive processing may themselves drive the updating of event models in working memory, thus reinforcing the link between predictive processing and event segmentation. Overall, events and their boundaries serve as fundamental units of organization in cognitive processing, enabling humans to make sense of and act coherently upon a dynamic and often unpredictable world. In this symposium, we will present novel empirical and theoretical developments from psychology and cognitive science that explore the functions and mechanisms of event cognition. We will focus in particular on how boundaries affect the perception and segmentation (vs. integration) of dynamic input, how event models are formed within and across modalities, and how dynamic input, schema-based prediction, and contextual factors interact to shape event representations and higher-level cognitive processes such as categorization, memory, and problem-solving.
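The prediction-error mechanism described above can be made concrete with a toy simulation. The sketch below is purely illustrative and not drawn from any of the symposium contributions; it assumes a one-dimensional signal, a naive moving-average predictor, and an arbitrary surprise threshold (all hypothetical choices).

```python
import numpy as np

def segment_by_prediction_error(signal, window=5, threshold=3.0):
    """Toy illustration of prediction-error-driven event segmentation:
    predict each sample from a short moving average and flag an event
    boundary whenever the prediction error is unusually large relative
    to the errors observed within the current event."""
    boundaries, errors = [], []
    for t in range(window, len(signal)):
        prediction = np.mean(signal[t - window:t])   # naive forecast from recent context
        error = abs(signal[t] - prediction)          # prediction error at time t
        if len(errors) > window and error > np.mean(errors) + threshold * np.std(errors):
            boundaries.append(t)                     # surprising input -> update the event model
            errors = []                              # start a fresh error history for the new event
        else:
            errors.append(error)
    return boundaries

# A signal with an abrupt change should yield a boundary near t = 100.
rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
print(segment_by_prediction_error(signal))
```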
Submission 702
The Role of Matching Versus Mismatching Granularity in Cross-Modal Event Memory
SymposiumTalk-03
Presented by: Tolgahan Aydın
Tolgahan Aydın 1, Nadia Said 2, Daniel Levin 3, Markus Huff 1, 2
1 Leibniz-Institut für Wissensmedien, Germany
2 University of Tübingen, Germany
3 Vanderbilt University, United States

Theories of multimodal learning, such as Dual Coding Theory, posit that aligning information across sensory channels is a key mechanism for building coherent event models. A direct prediction is that matching the granularity of information (e.g., fine- vs. coarse-grained details) between verbal and visual modalities should facilitate integration and enhance memory. We put this hypothesis to the test in two experiments. Participants viewed text and video presentations of everyday activities in which granularity was systematically matched or mismatched. Recognition memory was assessed with a sensitive task designed to detect benefits of alignment. Contrary to this prediction, we found no reliable benefit of cross-modal granularity matching on memory sensitivity, false alarms, or confidence. Bayesian analyses provided substantial evidence for this null effect. The critical factor was not alignment but timing: memory was significantly better when verbal descriptions preceded the video, scaffolding initial encoding. These findings challenge the assumption that structural alignment is a primary driver of multimodal integration in event memory. Instead, they demonstrate that the initial conditions of encoding (specifically, the order of presentation) can override the putative benefits of cross-modal congruence, forcing a reevaluation of how and when multimodal inputs effectively shape event models.
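As background on the dependent measure reported above, recognition memory sensitivity is conventionally quantified with the signal detection index d', computed from hit and false-alarm rates. The sketch below is a generic illustration of that computation (not the authors' analysis code), assuming raw recognition counts and the common log-linear correction for extreme proportions.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') from raw recognition counts.
    The log-linear correction (add 0.5 per cell, 1 per trial count) keeps
    hit/false-alarm rates of exactly 0 or 1 from producing infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 40 old items (32 hits), 40 new items (8 false alarms) -> d' ≈ 1.63
print(round(d_prime(32, 8, 8, 32), 2))
```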