Auditory localization requires complex neural calculations. That complexity is even greater in reverberant
environments where sound, in addition to traveling directly from the source to
the ears, also reflects off surfaces and reaches the ears from other directions
a few milliseconds later. Yet listeners can
localize sound in reverberant environments and rely on auditory localization in
many real-world circumstances. For example, auditory localization helps listeners understand one person talking in a room where many other people are talking. The current study investigates
subcortical responses to a speech sound under four spatial conditions. In two of those conditions, a 40 ms /da/ syllable is presented from a loudspeaker either directly in front of the participant or 70 degrees to the right (Free-Field conditions). In the other two conditions, the syllable is presented from both the central and the right loudspeakers, with the sound from one location leading the other by 4 ms (Precedence Effect conditions). The Precedence Effect refers to the phenomenon whereby listeners localize a sound presented from multiple locations based primarily on the leading location.
By recording Auditory Brainstem Responses (ABRs) to thousands of repetitions of the syllable in each condition, we identified differences in early auditory processing between Free-Field and Precedence Effect localization. These results contribute to an understanding of the neural mechanisms involved in real-world sound localization and suggest directions for interventions to help listeners who struggle to understand speech in noisy conditions.