Duration: 01/10/2025-31/03/2026
Funding Agency: OeAD-GmbH – Agentur für Bildung und Internationalisierung
Collaborator: Dr. Bernhard LABACK; Österreichische Akademie der Wissenschaften
Description
Spatial hearing allows us to create an internal map of the sounds around us. It is important because it helps us determine where sounds come from, understand speech in noisy environments, orient ourselves in space, and enjoy music. For localization in the vertical plane, we rely primarily on monaural spectral-shape cues, while azimuthal sound localization is based on the binaural cues of interaural time and level differences (ITD and ILD). Listeners use both ITD and ILD cues for sound localization, applying frequency-dependent weights when combining the cues to determine the perceived source location. According to the duplex theory of binaural processing, the ITD is the dominant localization cue for frequencies up to approximately 1.5 kHz, while the ILD dominates at higher frequencies. A recent study from the Laback group showed that vision-induced reweighting of ITD and ILD can be achieved through visually based localization training in virtual reality. In this project, we will build a computational model focusing on the reweighting of binaural localization cues as assessed by discrimination and localization tasks. The model will also be used to determine the ITD/ILD trading ratio for algorithms in hearing aids (HAs) and cochlear implants (CIs), and for computerized training tools that can improve spatial hearing in normal-hearing listeners.
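The frequency-dependent cue weighting described above can be illustrated with a minimal sketch: a logistic weight on the ITD cue that transitions around the duplex-theory crossover of roughly 1.5 kHz, with the ILD receiving the complementary weight. The crossover frequency, slope, and the idea of expressing each cue as an implied lateral angle are illustrative assumptions here, not values from the project or the Laback group's study.

```python
import numpy as np

def itd_weight(freq_hz, crossover_hz=1500.0, slope=4.0):
    """Logistic ITD weight: near 1 below the crossover frequency,
    near 0 above it. Crossover and slope are illustrative, not fitted."""
    return 1.0 / (1.0 + (freq_hz / crossover_hz) ** slope)

def predicted_laterality(freq_hz, itd_angle_deg, ild_angle_deg):
    """Combine the lateral-angle estimates implied by each cue as a
    weighted sum; the two weights sum to 1 at every frequency."""
    w = itd_weight(freq_hz)
    return w * itd_angle_deg + (1.0 - w) * ild_angle_deg

# Conflicting cues: ITD points to 30 degrees, ILD to 10 degrees.
low = predicted_laterality(500.0, itd_angle_deg=30.0, ild_angle_deg=10.0)
high = predicted_laterality(4000.0, itd_angle_deg=30.0, ild_angle_deg=10.0)
# At 500 Hz the prediction lies near the ITD estimate;
# at 4 kHz it lies near the ILD estimate.
```

In such a model, cue reweighting (e.g., induced by visual training) would correspond to shifting the weight function, and the ratio of angle changes traded between the two cues at a given frequency gives a trading ratio.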