Influence of audio renderings on spatial attention and social presence in virtual environments
Mon-Main hall - Z3-Poster 1-2708
Presented by: Sarah Roßkopf
Using advanced audio rendering methods, spatial auditory impressions can be simulated that are indistinguishable from real sound sources in a room. Such a high degree of realism can be assumed to be highly relevant for virtual social interactions. The aim of this study is to investigate whether these plausible auralizations allow sound source localization comparable to real sound sources, as a further indicator of realism, and whether they improve social presence in virtual reality (VR). To address these questions, we implemented two paradigms (a placement task and an eye-tracking task) to measure sound source localization and spatial attention in VR. In a within-subjects design (N = 49), three advanced audio renderings were compared to loudspeakers and to an anchor (a gaming audio engine) in terms of distance estimation and angular error (horizontal plane) as well as ratings of audio quality and social presence. While loudspeakers resulted in fewer front-back confusions than all other conditions, distance estimates were significantly more accurate for the audio renderings than for loudspeakers, and there was no difference in azimuth estimation once front-back confusions were controlled for. Overall, the loudspeaker and audio rendering conditions were consistently more accurate than the anchor and resulted in higher ratings of social presence. Furthermore, social presence and audio quality were strongly correlated. Our findings demonstrate that advanced audio rendering methods result in close-to-real hearing in virtual environments and can improve social presence, thereby enabling more naturalistic social scenarios.
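The abstract does not specify how front-back confusions were identified and controlled for. A common convention in localization studies is to flag a response as a front-back confusion when mirroring it across the frontal (interaural) plane brings it closer to the target, and to score azimuth error on the mirrored angle so that confusions do not inflate the error measure. The following Python sketch illustrates that convention only; the function names, angle convention, and confusion criterion are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def wrap180(deg):
    """Wrap angles in degrees to the interval (-180, 180]."""
    return (np.asarray(deg, dtype=float) + 180.0) % 360.0 - 180.0

def mirror_front_back(az_deg):
    """Reflect an azimuth across the frontal (interaural) plane.
    Assumed convention: 0 deg = straight ahead, angles in (-180, 180]."""
    return wrap180(180.0 - np.asarray(az_deg, dtype=float))

def resolved_azimuth_error(target_az, response_az):
    """Return (abs_error_deg, is_fb_confusion) per trial.

    A response is flagged as a front-back confusion when its mirror
    image across the frontal plane lies closer to the target than the
    raw response does; the error is then computed on the mirrored
    response (a hypothetical criterion, for illustration only).
    """
    target_az = np.asarray(target_az, dtype=float)
    raw = np.abs(wrap180(np.asarray(response_az, dtype=float) - target_az))
    mirrored = np.abs(wrap180(mirror_front_back(response_az) - target_az))
    is_fb = mirrored < raw
    return np.where(is_fb, mirrored, raw), is_fb

# Example: the 150 deg response to a 30 deg front target is scored as a
# front-back confusion with 0 deg residual azimuth error.
targets = np.array([30.0, 30.0, -60.0])
responses = np.array([35.0, 150.0, -70.0])
errors, confusions = resolved_azimuth_error(targets, responses)
# errors -> [ 5., 0., 10.], confusions -> [False, True, False]
```

Under this kind of scoring, azimuth accuracy and confusion rate become separate measures, which is consistent with the reported pattern: loudspeakers produced fewer confusions, yet azimuth estimation did not differ once confusions were controlled for.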
Keywords: sound source localization, virtual reality, social presence, auditory perception, eye-tracking, realism, spatial attention