Our auditory system enables us to orient in natural acoustic environments and to adapt to changes in those environments. As virtual and augmented realities become part of everyday life, the question arises to what extent our perceptual system can adapt to a reality that is not subject to physical laws. Virtual environments can deviate from the rules of the physical world, whether because of the limited performance of the underlying numerical simulations, because of the limited possibilities for user interaction with the virtual environment, or because the quality of experience of the virtual environment may lie precisely in crossing the physical boundaries of the real world. We propose to use simulated environments that remap the physical cues for sound source distance in order to study behavioral adaptation and changes in functional brain organization as listeners learn to orient themselves in these environments. Research into the neural mechanisms of distance perception and plasticity is directly relevant to a number of applications, including enhancements of auditory prostheses and brain-computer interfaces, and the treatment of central auditory processing disorders and disorders due to cortical damage (such as aphasia), all of which involve insufficiently understood auditory brain plasticity. Research in this area has so far been hindered by the technical difficulties of simulating realistic audiovisual environments in real time and of achieving the necessary stimulation fidelity in an MR scanner. Our principal aims are to understand how people adapt to remapped acoustic distance cues and whether this learning generalizes to untrained distances and audiovisual environments, and to understand how brain activity encodes sound source distance and to determine brain correlates of perceptual learning of remapped distance cues.
We will achieve this through a series of psychoacoustic experiments that investigate the learnability of the remapped cues, and by acquiring brain responses with high-resolution neuroimaging at different time points during this learning process. The cooperation of two groups with strong, partly overlapping and partly complementary expertise in acoustics and neuroscience not only promises to substantially expand the state of knowledge in these areas, but also addresses the expectations of the AUDICTIVE Priority Program by exploring both auditory spatial cognition and the technical leeway for designing virtual and augmented acoustic environments beyond the re-creation of physical worlds.
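To make the idea of remapped distance cues concrete, the following minimal sketch illustrates one way such a remapping could work. It is an assumption for illustration only, not the project's actual method: it manipulates a single cue, the sound level, which in free field falls off as 1/r (about -6 dB per doubling of distance), and replaces the physical exponent with a hypothetical compressed one. A real virtual environment would likely remap several cues jointly (e.g., level and direct-to-reverberant energy ratio).

```python
import math

def level_cue_db(distance_m, ref_distance_m=1.0, exponent=1.0):
    """Level drop (in dB) relative to a reference distance.

    With exponent = 1.0 this follows the physical free-field 1/r law,
    i.e. -6 dB per doubling of distance. A remapped virtual environment
    could substitute a different exponent, decoupling the level cue
    from the physically correct distance mapping.
    """
    return -20.0 * exponent * math.log10(distance_m / ref_distance_m)

# Physical cue: roughly -6 dB per doubling of distance.
physical = [level_cue_db(d) for d in (1, 2, 4, 8)]

# Hypothetical remapped cue: compressed to about -3 dB per doubling,
# so sources sound "closer" than their simulated distance implies.
remapped = [level_cue_db(d, exponent=0.5) for d in (1, 2, 4, 8)]
```

Listeners trained in such an environment would have to relearn the association between cue values and distance, which is the kind of adaptation the proposed experiments are designed to measure.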

People:



Marc Schönwiesner
University of Leipzig

Project Leader


Stefan Weinzierl
Technische Universität Berlin

Project Leader


Fabian Brinkmann
Technische Universität Berlin

PostDoc


Johannes Arend, M.Sc.
Technische Universität Berlin

PhD Student