15:00 - 16:30
Submission 672
The Interplay of Attentional Resources and Multisensory Causal Inference
Posterwall-18
Presented by: Tim Rohe
Tim Rohe, Celine Fleischmann
Institute for Psychology, University of Erlangen-Nürnberg, Germany
To obtain coherent multisensory representations, humans integrate sensory signals if they infer a common cause, but segregate signals from independent causes. Yet, it is unclear how such causal inference (CI) processes depend on attentional resources. Here, we employ a dual-task paradigm in which participants (n=40) selectively localize auditory or visual spatial targets (primary task) while performing a visual multiple object tracking (MOT) task (secondary task) to manipulate attentional load. Audiovisual (AV) targets appear at three horizontal locations, with the reliability of the visual (V) targets manipulated via narrow (2° SD) or broad (7° SD) Gaussian dot clouds. The MOT task varies in load (1 vs. 3 targets), and catch trials ensure task compliance.

Behavioral analyses examine crossmodal bias (CMB) as a function of spatial disparity, visual reliability, task relevance, and attentional load. Computational modeling compares a Bayesian CI model to heuristic alternatives to quantify how load modulates integration. EEG multivariate pattern analyses compute a relative audiovisual weight index on decoded spatial representations across time. We hypothesize that higher load weakens the effects of spatial disparity and visual reliability on CMB, and diminishes segregation of the task-irrelevant signal. CI modeling should show lower causal priors under high load. EEG audiovisual weight indices should reveal early effects of spatial disparity and visual reliability, and later interactions between task relevance and disparity, with stronger effects for low-load conditions.
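The Bayesian CI model referred to above is the standard causal-inference scheme in which observers compute the posterior probability of a common cause from the auditory and visual signals and then combine the fused and segregated location estimates by model averaging. The following Python sketch illustrates that computation; the variable names and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of Bayesian causal inference for AV localization
# (model averaging). All parameter values are hypothetical examples.
import math

def gauss(x, mu, var):
    # Gaussian density with mean mu and variance var
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bci_estimate(x_a, x_v, sd_a=3.0, sd_v=2.0, sd_p=15.0,
                 mu_p=0.0, p_common=0.5):
    """Return (posterior probability of a common cause,
    model-averaged auditory location estimate) for one trial.

    x_a, x_v : internal auditory/visual samples (degrees)
    sd_a, sd_v : sensory noise SDs; sd_p, mu_p : spatial prior
    p_common : prior probability of a common cause (the 'causal prior')
    """
    va, vv, vp = sd_a ** 2, sd_v ** 2, sd_p ** 2
    # Likelihood under C=1 (common cause), source integrated out
    var_sum = va * vv + va * vp + vv * vp
    like_c1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * vp
                               + (x_a - mu_p) ** 2 * vv
                               + (x_v - mu_p) ** 2 * va) / var_sum) \
              / (2 * math.pi * math.sqrt(var_sum))
    # Likelihood under C=2 (independent causes)
    like_c2 = gauss(x_a, mu_p, va + vp) * gauss(x_v, mu_p, vv + vp)
    # Posterior probability of a common cause
    post_c1 = (like_c1 * p_common) / (like_c1 * p_common
                                      + like_c2 * (1 - p_common))
    # Reliability-weighted fused estimate (C=1)
    s_fused = (x_a / va + x_v / vv + mu_p / vp) / (1 / va + 1 / vv + 1 / vp)
    # Segregated auditory estimate (C=2)
    s_seg_a = (x_a / va + mu_p / vp) / (1 / va + 1 / vp)
    # Model averaging: weight the two estimates by the causal posterior
    s_hat_a = post_c1 * s_fused + (1 - post_c1) * s_seg_a
    return post_c1, s_hat_a
```

In this scheme, a lower causal prior (`p_common`) or a larger spatial disparity reduces the posterior probability of a common cause and hence the crossmodal bias, which is how the hypothesized load effect on causal priors would manifest behaviorally.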

This study will combine behavioral indices, computational modeling, and neural dynamics to gain a deeper understanding of how attentional resources modulate multisensory causal inference.