Results 1 - 20 of 4,908
1.
J Acoust Soc Am ; 155(5): 2934-2947, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717201

ABSTRACT

Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. Speech reception thresholds were measured in three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, using monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane, or front-back and up-down symmetry in the median plane) contributes positively to intelligibility. Both the target direction and the relative target-masker separation influence the masking release attributable to spatial separation. When the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and can even complement spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models have difficulty accurately predicting intelligibility in the scenarios explored in this study.
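
The spatial and F0 benefits described above are quantified as masking release, i.e., the drop in speech reception threshold (SRT) relative to a reference condition without the separation cue. A minimal sketch with hypothetical SRT values (not taken from the study):

```python
# Masking release = SRT(reference, no separation cue) - SRT(with cue);
# larger positive values mean a larger intelligibility benefit.
# All threshold values below are hypothetical placeholders.
srt_colocated_db = -6.0    # target and maskers co-located, same F0
srt_spatial_db = -10.5     # target spatially separated from maskers
srt_f0_db = -11.0          # target F0 shifted away from masker F0s

spatial_release_db = srt_colocated_db - srt_spatial_db
f0_release_db = srt_colocated_db - srt_f0_db
print(f"spatial masking release: {spatial_release_db:.1f} dB")
print(f"F0-based masking release: {f0_release_db:.1f} dB")
```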


Subject(s)
Cues , Perceptual Masking , Sound Localization , Speech Intelligibility , Speech Perception , Humans , Female , Male , Young Adult , Adult , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Speech Acoustics , Speech Reception Threshold Test , Noise
2.
PLoS One ; 19(5): e0303843, 2024.
Article in English | MEDLINE | ID: mdl-38771860

ABSTRACT

Bayesian models have proven effective in characterizing perception, behavior, and neural encoding across diverse species and systems. The neural implementation of Bayesian inference in the barn owl's sound localization system and behavior has been previously explained by a non-uniform population code model. This model specifies the neural population activity pattern required for a population vector readout to match the optimal Bayesian estimate. While prior analyses focused on trial-averaged comparisons of model predictions with behavior and single-neuron responses, it remains unknown whether this model can accurately approximate Bayesian inference on single trials under varying sensory reliability, a fundamental condition for natural perception and behavior. In this study, we utilized mathematical analysis and simulations to demonstrate that decoding a non-uniform population code via a population vector readout approximates the Bayesian estimate on single trials for varying sensory reliabilities. Our findings provide additional support for the non-uniform population code model as a viable explanation for the barn owl's sound localization pathway and behavior.
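
For readers unfamiliar with the readout discussed here, the following is a minimal, generic sketch of a single-trial population vector decode over direction-tuned neurons. The Gaussian tuning curves, firing rates, and uniform spacing of preferred directions are illustrative assumptions, not the study's non-uniform code.

```python
import numpy as np

rng = np.random.default_rng(0)

preferred_deg = np.linspace(-90, 90, 37)   # preferred directions of the population
stimulus_deg = 20.0                        # true source azimuth
sigma = 25.0                               # tuning width (deg), illustrative

# Gaussian tuning curves give mean firing rates; Poisson spiking on one trial.
mean_rates = 30.0 * np.exp(-0.5 * ((preferred_deg - stimulus_deg) / sigma) ** 2)
trial_counts = rng.poisson(mean_rates)

# Population vector: single-trial responses weight unit vectors pointing at
# each neuron's preferred direction; the resultant angle is the estimate.
angles = np.deg2rad(preferred_deg)
x = np.sum(trial_counts * np.cos(angles))
y = np.sum(trial_counts * np.sin(angles))
estimate_deg = np.rad2deg(np.arctan2(y, x))
print(f"single-trial population vector estimate: {estimate_deg:.1f} deg")
```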


Subject(s)
Bayes Theorem , Sound Localization , Strigiformes , Animals , Strigiformes/physiology , Sound Localization/physiology , Models, Neurological , Neurons/physiology
3.
Curr Biol ; 34(10): 2162-2174.e5, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38718798

ABSTRACT

Humans make use of small differences in the timing of sounds at the two ears, known as interaural time differences (ITDs), to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we provide evidence for a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD, within and beyond auditory cortical regions, and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably similar to that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. These data resolve a long-standing question concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.


Subject(s)
Auditory Cortex , Cues , Sound Localization , Auditory Cortex/physiology , Humans , Male , Sound Localization/physiology , Animals , Female , Adult , Electroencephalography , Macaca mulatta/physiology , Magnetoencephalography , Acoustic Stimulation , Young Adult , Auditory Perception/physiology
4.
Hear Res ; 447: 109025, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733712

ABSTRACT

Cortical acetylcholine (ACh) release has been linked to various cognitive functions, including perceptual learning. We have previously shown that cortical cholinergic innervation is necessary for accurate sound localization in ferrets, as well as for their ability to adapt with training to altered spatial cues. To explore whether these behavioral deficits are associated with changes in the response properties of cortical neurons, we recorded neural activity in the primary auditory cortex (A1) of anesthetized ferrets in which cholinergic inputs had been reduced by making bilateral injections of the immunotoxin ME20.4-SAP in the nucleus basalis (NB) prior to training the animals. The pattern of spontaneous activity of A1 units recorded in the ferrets with cholinergic lesions (NB ACh-) was similar to that in controls, although the proportion of burst-type units was significantly lower. Depletion of ACh also resulted in more synchronous activity in A1. No changes in thresholds, frequency tuning or in the distribution of characteristic frequencies were found in these animals. When tested with normal acoustic inputs, the spatial sensitivity of A1 neurons in the NB ACh- ferrets and the distribution of their preferred interaural level differences also closely resembled those found in control animals, indicating that these properties had not been altered by sound localization training with one ear occluded. Simulating the animals' previous experience with a virtual earplug in one ear reduced the contralateral preference of A1 units in both groups, but caused azimuth sensitivity to change in slightly different ways, which may reflect the modest adaptation observed in the NB ACh- group. These results show that while ACh is required for behavioral adaptation to altered spatial cues, it is not required for maintenance of the spectral and spatial response properties of A1 neurons.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Basal Forebrain , Ferrets , Animals , Auditory Cortex/metabolism , Auditory Cortex/physiopathology , Basal Forebrain/metabolism , Sound Localization , Acetylcholine/metabolism , Male , Cholinergic Neurons/metabolism , Cholinergic Neurons/pathology , Auditory Pathways/physiopathology , Auditory Pathways/metabolism , Female , Immunotoxins/toxicity , Basal Nucleus of Meynert/metabolism , Basal Nucleus of Meynert/physiopathology , Basal Nucleus of Meynert/pathology , Neurons/metabolism , Auditory Threshold , Adaptation, Physiological , Behavior, Animal
5.
PLoS Biol ; 22(4): e3002586, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38683852

ABSTRACT

Having two ears enables us to localize sound sources by exploiting interaural time differences (ITDs) in sound arrival. Principal neurons of the medial superior olive (MSO) are sensitive to ITD, and each MSO neuron responds optimally to a best ITD (bITD). In many cells, especially those tuned to low sound frequencies, these bITDs correspond to ITDs for which the contralateral ear leads, and are often larger than the ecologically relevant range, defined by the ratio of the interaural distance to the speed of sound. Using in vivo recordings in gerbils, we found that shortly after hearing onset the bITDs were even more contralaterally leading than those found in adult gerbils, and travel latencies for contralateral sound-evoked activity clearly exceeded those for ipsilateral sounds. During the following weeks, both these latencies and their interaural difference decreased. A computational model indicated that spike timing-dependent plasticity can underlie this fine-tuning. Our results suggest that MSO neurons start out with a strong predisposition toward contralateral sounds due to their longer neural travel latencies, but that, especially in high-frequency neurons, this predisposition is subsequently mitigated by differential developmental fine-tuning of the travel latencies.
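
As a worked example of the "ecologically relevant range" defined above (interaural distance divided by the speed of sound), with a rough, assumed gerbil head size:

```python
# The maximum naturally occurring ITD is set by how long sound takes to
# travel from one ear to the other. The interaural distance below is an
# approximate, illustrative value for a gerbil, not from the study.
interaural_distance_m = 0.03   # ~3 cm
speed_of_sound_m_s = 343.0

max_itd_s = interaural_distance_m / speed_of_sound_m_s
print(f"maximum ecologically relevant ITD: {max_itd_s * 1e6:.0f} microseconds")
# ~87 microseconds; bITDs larger than this exceed the natural range.
```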


Subject(s)
Acoustic Stimulation , Gerbillinae , Neurons , Superior Olivary Complex , Animals , Neurons/physiology , Superior Olivary Complex/physiology , Sound Localization/physiology , Male , Olivary Nucleus/physiology , Sound , Female
6.
PeerJ ; 12: e17104, 2024.
Article in English | MEDLINE | ID: mdl-38680894

ABSTRACT

Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet, most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal-hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the time difference between the arrival of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective use of temporal fine structure (the very rapid oscillations in sound). Unfortunately, most current CIs do not transmit this fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through the envelope or through interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer than that of NH individuals, and it degrades further at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidally amplitude-modulated (SAM) tones and filtered clicks with changes in either fine structure ITD (ITDFS) or envelope ITD (ITDENV). Multiple EEG responses were analyzed, including the subcortical auditory steady-state responses (ASSRs) and the cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller or absent compared to those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although the peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, although these were smaller than those evoked by SAM tones with lower carrier frequencies. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order being 40 > 160 > 80 > 320 Hz ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
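
To make the ITDFS/ITDENV manipulation concrete, here is a minimal sketch of a SAM tone in which the interaural delay is applied either to the carrier (fine structure) or to the modulator (envelope). All parameter values are illustrative, not the study's stimulus settings.

```python
import numpy as np

fs = 48_000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # 500 ms time base
fc, fm = 500.0, 40.0             # carrier and modulation frequencies (Hz)
itd = 500e-6                     # 500-microsecond interaural delay

def sam(t_carrier, t_envelope):
    # SAM tone with separate time bases for carrier and envelope,
    # so the ITD can be imposed on either component independently.
    envelope = 1 + np.cos(2 * np.pi * fm * t_envelope)
    return envelope * np.sin(2 * np.pi * fc * t_carrier)

left = sam(t, t)                 # reference ear
right_itdfs = sam(t - itd, t)    # delay the fine structure only (ITDFS)
right_itdenv = sam(t, t - itd)   # delay the envelope only (ITDENV)
```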


Subject(s)
Cochlear Implants , Cues , Electroencephalography , Humans , Electroencephalography/methods , Acoustic Stimulation/methods , Sound Localization/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Time Factors
7.
J Neurosci ; 44(21)2024 May 22.
Article in English | MEDLINE | ID: mdl-38664010

ABSTRACT

The natural environment challenges the brain to prioritize the processing of salient stimuli. The barn owl, a sound localization specialist, possesses a circuit called the midbrain stimulus selection network, dedicated to representing the location of the most salient stimulus when multiple stimuli are present concurrently. Previous competition studies using unimodal (visual) and bimodal (visual and auditory) stimuli have shown that relative stimulus strength is encoded in spike response rates. However, open questions remain concerning how auditory-auditory competition affects coding. To this end, we present diverse auditory competitors (concurrent flat noise and amplitude-modulated noise) and record neural responses of awake barn owls of both sexes in successive midbrain space maps, the external nucleus of the inferior colliculus (ICx) and the optic tectum (OT). While both ICx and OT exhibit a topographic map of auditory space, OT also integrates visual input and is part of the global-inhibitory midbrain stimulus selection network. Through comparative investigation of these regions, we show that while increasing the strength of a competitor sound decreases the spike response rates of spatially distant neurons in both regions, relative strength determines the spike train synchrony of nearby units only in the OT. Furthermore, changes in synchrony induced by sound competition in the OT are correlated with gamma-range oscillations of local field potentials associated with input from the midbrain stimulus selection network. The results of this investigation suggest that modulation of spiking synchrony between units by gamma oscillations is an emergent coding scheme representing the relative strength of concurrent stimuli, which may have relevant implications for downstream readout.
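
Spike-train synchrony between nearby units, as discussed above, is commonly quantified with a cross-correlogram of spike times normalized by chance coincidences. A generic sketch on simulated spike trains (bin size, rates, and the shared-input fraction are illustrative assumptions, not the study's analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # 1-ms bins
dur = 10.0                     # seconds of simulated activity
n_bins = int(dur * fs)

# Two binary spike trains sharing a common input, which induces
# above-chance synchrony (the shared fraction is illustrative).
shared = rng.random(n_bins) < 0.01
unit_a = shared | (rng.random(n_bins) < 0.02)
unit_b = shared | (rng.random(n_bins) < 0.02)

# Cross-correlogram over +/-50 ms lags, normalized by the chance level
# expected if the two trains were independent.
lags = np.arange(-50, 51)
chance = unit_a.mean() * unit_b.mean() * n_bins
ccg = np.array([np.sum(unit_a & np.roll(unit_b, lag)) for lag in lags]) / chance
print(f"zero-lag synchrony (1.0 = chance): {ccg[lags == 0][0]:.2f}")
```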


Subject(s)
Acoustic Stimulation , Inferior Colliculi , Sound Localization , Strigiformes , Animals , Strigiformes/physiology , Female , Male , Acoustic Stimulation/methods , Sound Localization/physiology , Inferior Colliculi/physiology , Mesencephalon/physiology , Auditory Perception/physiology , Brain Mapping , Auditory Pathways/physiology , Neurons/physiology , Action Potentials/physiology
8.
Behav Res Methods ; 56(4): 3814-3830, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38684625

ABSTRACT

The ability to detect the absolute location of sensory stimuli can be quantified with either error-based metrics derived from single-trial localization errors or regression-based metrics derived from a linear regression of localization responses on the true stimulus locations. Here we tested the agreement between these two approaches in estimating accuracy and precision in a large sample of 188 subjects who localized auditory stimuli from different azimuthal locations. A subsample of 57 subjects was subsequently exposed to audiovisual stimuli with a consistent spatial disparity before performing the sound localization test again, allowing us to additionally test which of the different metrics best assessed correlations between the amount of crossmodal spatial recalibration and baseline localization performance. First, our findings support a distinction between accuracy and precision. Localization accuracy was mainly reflected in the overall spatial bias and was moderately correlated with precision metrics. However, in our data, the variability of single-trial localization errors (the variable error in error-based metrics) and the amount by which the eccentricity of target locations was overestimated (the slope in regression-based metrics) were highly correlated, suggesting that intercorrelations between individual metrics need to be carefully considered in spatial perception studies. Second, exposure to spatially discrepant audiovisual stimuli resulted in a shift in bias toward the side of the visual stimuli (the ventriloquism aftereffect) but did not affect localization precision. The size of the aftereffect shift in bias was at least partly explainable by nonspecific test-repetition effects, highlighting the need to account for inter-individual baseline differences in studies of spatial learning.
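
The two metric families contrasted above can be made explicit with a short sketch on simulated data (all numbers illustrative): error-based metrics summarize single-trial errors directly, while regression-based metrics come from regressing responses on the true target locations.

```python
import numpy as np

rng = np.random.default_rng(1)
targets = np.tile(np.array([-60, -30, 0, 30, 60]), 20)            # true azimuths (deg)
responses = 5.0 + 1.1 * targets + rng.normal(0, 6, targets.size)  # simulated responses

errors = responses - targets

# Error-based metrics: constant error (bias) and variable error (precision).
constant_error = errors.mean()
variable_error = errors.std(ddof=1)

# Regression-based metrics: slope and intercept of responses on targets.
slope, intercept = np.polyfit(targets, responses, deg=1)

print(f"bias = {constant_error:.1f} deg, variable error = {variable_error:.1f} deg")
print(f"slope = {slope:.2f} (>1 means eccentricity overestimated), intercept = {intercept:.1f} deg")
```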


Subject(s)
Space Perception , Humans , Space Perception/physiology , Female , Male , Adult , Sound Localization , Photic Stimulation , Visual Perception/physiology , Young Adult , Acoustic Stimulation/methods , Auditory Perception/physiology
9.
Article in Chinese | MEDLINE | ID: mdl-38561257

ABSTRACT

Objective: This study investigated the effects of signal-to-noise ratio (SNR), frequency, and bandwidth on horizontal sound localization accuracy in normal-hearing young adults. Methods: From August 2022 to December 2022, a total of 20 normal-hearing young adults, including 7 males and 13 females, with an age range of 20 to 35 years and a mean age of 25.4 years, were selected to participate in horizontal azimuth recognition tests under both quiet and noisy conditions. Six narrowband filtered noise stimuli were used, with center frequencies (CFs) of 250, 2,000, and 4,000 Hz and bandwidths of 1/6 and 1 octave. Continuous broadband white noise was used as the background masker, and the SNR was 0, -3, or -12 dB. The root-mean-square (RMS) error was used to measure sound localization accuracy, with smaller values indicating higher accuracy. The Friedman test was used to compare the effects of SNR and CF on sound localization accuracy, and the Wilcoxon signed-rank test was used to compare the impact of the two bandwidths on sound localization accuracy in noise. Results: In a quiet environment, the RMS error in horizontal azimuth in normal-hearing young adults ranged from 4.3 to 8.1 degrees. Sound localization accuracy decreased with decreasing SNR: at 0 dB SNR (range: 5.3-12.9 degrees), the difference from the quiet condition was not significant (P>0.05); however, at -3 dB (range: 7.3-16.8 degrees) and -12 dB SNR (range: 9.4-41.2 degrees), sound localization accuracy decreased significantly compared to the quiet condition (all P<0.01). Under noisy conditions, sound localization accuracy differed among stimuli with different frequencies and bandwidths: the high-frequency stimuli performed worst, followed by the mid-frequency stimuli, with the low-frequency stimuli performing best (all P<0.01). Localization of 1/6-octave stimuli was more susceptible to noise interference than localization of 1-octave stimuli (all P<0.01). Conclusions: The ability of normal-hearing young adults to localize sound in the horizontal plane in the presence of noise is influenced by SNR, CF, and bandwidth. SNRs of -3 dB or lower lead to decreased accuracy in narrowband sound localization. Signals with higher CFs and narrower bandwidths are more susceptible to noise interference.
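
A minimal sketch of the RMS-error accuracy measure described in the Methods, using hypothetical target/response azimuths:

```python
import numpy as np

# Root-mean-square localization error over a set of trials;
# the target and response values below are hypothetical placeholders.
target_deg = np.array([-45, -15, 0, 15, 45])
response_deg = np.array([-39, -18, 4, 11, 52])

rms_error = np.sqrt(np.mean((response_deg - target_deg) ** 2))
print(f"RMS localization error: {rms_error:.1f} deg")  # smaller = more accurate
```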


Subject(s)
Sound Localization , Speech Perception , Male , Female , Humans , Young Adult , Adult , Noise , Signal-To-Noise Ratio , Hearing
10.
Nat Commun ; 15(1): 3116, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600132

ABSTRACT

Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.


Subject(s)
Auditory Cortex , Sound Localization , Visual Cortex , Visual Perception/physiology , Auditory Cortex/physiology , Neurons/physiology , Visual Cortex/physiology , Photic Stimulation/methods , Acoustic Stimulation/methods
11.
J Acoust Soc Am ; 155(4): 2460-2469, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38578178

ABSTRACT

Head-worn devices (HWDs) interfere with the natural transmission of sound from the source to the ears of the listener, worsening their localization abilities. The localization errors introduced by HWDs have been mostly studied in static scenarios, but these errors are reduced if head movements are allowed. We studied the effect of 12 HWDs on an auditory-cued visual search task, where head movements were not restricted. In this task, a visual target had to be identified in a three-dimensional space with the help of an acoustic stimulus emitted from the same location as the visual target. The results showed an increase in the search time caused by the HWDs. Acoustic measurements of a dummy head wearing the studied HWDs showed evidence of impaired localization cues, which were used to estimate the perceived localization errors using computational auditory models of static localization. These models were able to explain the search-time differences in the perceptual task, showing the influence of quadrant errors in the auditory-aided visual search task. These results indicate that HWDs have an impact on sound-source localization even when head movements are possible, which may compromise the safety and the quality of experience of the wearer.


Subject(s)
Hearing Aids , Sound Localization , Acoustic Stimulation , Head Movements
12.
Otol Neurotol ; 45(5): 482-488, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38530367

ABSTRACT

OBJECTIVE: Severely asymmetrical hearing loss (SAHL) is characterized by moderately severe or severe hearing loss on one side and normal or mildly impaired hearing on the other. The Active tri-CROS combines the Contralateral Routing of Signal system (CROS, or BiCROS when the better ear also receives amplification) with stimulation of the worse ear by an in-the-canal hearing aid. This study aims to evaluate the benefit of the Active tri-CROS for SAHL patients. STUDY DESIGN: This retrospective study was conducted from September 2019 to December 2020. SETTING: Ambulatory, tertiary care. PATIENTS: Patients were included retrospectively if they had received the Active tri-CROS system after having used a CROS or BiCROS system for SAHL for at least 3 years. MAIN OUTCOME MEASURES: Audiometric gain, signal-to-noise ratio, spatial localization, and the Abbreviated Profile of Hearing Aid Benefit and Tinnitus Handicap Inventory questionnaires were assessed before fitting and after one month with the system. RESULTS: Twenty patients (mean age, 62 yr) with a mean of 74.3 ± 8.7 dB HL in the worse ear were included. The mean tonal hearing gain in the worse ear was 20 ± 6 dB. The signal-to-noise ratio threshold improved significantly from 1.43 ± 3.9 to 0.16 ± 3.4 dB ( p = 0.0001). Spatial localization was not significantly improved. The mean Tinnitus Handicap Inventory score of the eight patients suffering from tinnitus fell from 45.5 ± 18.5 to 31 ± 25.2 ( p = 0.016). CONCLUSIONS: The Active tri-CROS system is a promising new therapeutic option for SAHL.


Subject(s)
Hearing Aids , Humans , Middle Aged , Retrospective Studies , Male , Female , Aged , Adult , Hearing Loss, Unilateral/rehabilitation , Hearing Loss, Unilateral/physiopathology , Sound Localization/physiology , Tinnitus/therapy , Tinnitus/physiopathology
13.
Trends Hear ; 28: 23312165241235463, 2024.
Article in English | MEDLINE | ID: mdl-38425297

ABSTRACT

Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.


Subject(s)
Auditory Perceptual Disorders , Sound Localization , Virtual Reality , Adult , Humans , Hearing Tests
14.
Otol Neurotol ; 45(4): 392-397, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38478407

ABSTRACT

OBJECTIVE: To assess cochlear implant (CI) sound processor usage over time in children with single-sided deafness (SSD) and identify factors influencing device use. STUDY DESIGN: Retrospective, chart review study. SETTING: Pediatric tertiary referral center. PATIENTS: Children with SSD who received CI between 2014 and 2020. OUTCOME MEASURE: Primary outcome was average daily CI sound processor usage over follow-up. RESULTS: Fifteen children with SSD who underwent CI surgery were categorized based on age of diagnosis and surgery timing. Over an average of 4.3-year follow-up, patients averaged 4.6 hours/day of CI usage. Declining usage trends were noted over time, with the first 2 years postactivation showing higher rates. No significant usage differences emerged based on age, surgery timing, or hearing loss etiology. CONCLUSIONS: Long-term usage decline necessitates further research into barriers and enablers for continued CI use in pediatric SSD cases.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Hearing Loss, Unilateral , Sound Localization , Speech Perception , Humans , Child , Cochlear Implants/adverse effects , Retrospective Studies , Hearing Loss, Unilateral/surgery , Hearing Loss, Unilateral/rehabilitation , Sound Localization/physiology , Deafness/surgery , Deafness/rehabilitation , Speech Perception/physiology , Treatment Outcome
15.
Trends Hear ; 28: 23312165241229880, 2024.
Article in English | MEDLINE | ID: mdl-38545645

ABSTRACT

Bilateral cochlear implants (BiCIs) provide several benefits, including improvements in speech understanding in noise and in sound source localization. However, the benefit that bilateral implants provide varies considerably across recipients. Here we consider one of the reasons for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research on interaural asymmetry. We first consider bottom-up processing, where binaural cues are encoded through excitation-inhibition interactions between signals from the left and right ears, varying with the location of the sound in space, as represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by cascading series of cortical circuits: an internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Humans , Speech Perception/physiology , Hearing , Sound Localization/physiology
16.
Trends Hear ; 28: 23312165241229572, 2024.
Article in English | MEDLINE | ID: mdl-38347733

ABSTRACT

Subjective reports indicate that hearing aids can disrupt sound externalization and/or reduce the perceived distance of sounds. Here we conducted an experiment to explore this phenomenon and to quantify how frequently it occurs for different hearing-aid styles. Of particular interest were the effects of microphone position (behind the ear vs. in the ear) and dome type (closed vs. open). Participants were young adults with normal hearing or with bilateral hearing loss, who were fitted with hearing aids that allowed variations in the microphone position and the dome type. They were seated in a large sound-treated booth and presented with monosyllabic words from loudspeakers at a distance of 1.5 m. Their task was to rate the perceived externalization of each word using a rating scale that ranged from 10 (at the loudspeaker in front) to 0 (in the head) to -10 (behind the listener). On average, compared to unaided listening, hearing aids tended to reduce perceived distance and lead to more in-the-head responses. This was especially true for closed domes in combination with behind-the-ear microphones. The behavioral data along with acoustical recordings made in the ear canals of a manikin suggest that increased low-frequency ear-canal levels (with closed domes) and ambiguous spatial cues (with behind-the-ear microphones) may both contribute to breakdowns of externalization.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Sound Localization , Speech Perception , Young Adult , Humans , Speech , Hearing Loss, Bilateral , Noise , Speech Perception/physiology
17.
Trends Hear ; 28: 23312165241230947, 2024.
Article in English | MEDLINE | ID: mdl-38361245

ABSTRACT

Sound localization is an important ability in everyday life. This study investigates the influence of vision and presentation mode on auditory spatial bisection performance. Subjects were asked to identify the smaller perceived distance between three consecutive stimuli that were either presented via loudspeakers (free field) or via headphones after convolution with generic head-related impulse responses (binaural reproduction). Thirteen azimuthal sound incidence angles on a circular arc segment of ±24° at a radius of 3 m were included in three regions of space (front, rear, and laterally left). Twenty normally sighted (measured both sighted and blindfolded) and eight blind persons participated. Results showed no significant differences with respect to visual condition, but strong effects of sound direction and presentation mode. Psychometric functions were steepest in frontal space and indicated median spatial bisection thresholds of 11°-14°. Thresholds increased significantly in rear (11°-17°) and laterally left (20°-28°) space in free field. Individual pinna and torso cues, as available only in free field presentation, improved the performance of all participants compared to binaural reproduction. Especially in rear space, auditory spatial bisection thresholds were three to four times higher (i.e., poorer) using binaural reproduction than in free field. The results underline the importance of individual auditory spatial cues for spatial bisection, irrespective of access to vision, which indicates that vision may not be strictly necessary to calibrate allocentric spatial hearing.
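
As a generic illustration of how bisection thresholds like those above are read off a psychometric function, here is a cumulative-Gaussian fit on simulated response proportions; the data points, fitting choices, and threshold definition are assumptions for illustration, not the study's exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion of trials judged "closer to the right-hand stimulus" as a
# function of the middle stimulus offset (all values simulated).
offsets_deg = np.array([-20, -12, -6, 0, 6, 12, 20], dtype=float)
p_resp = np.array([0.05, 0.15, 0.33, 0.52, 0.70, 0.88, 0.96])

def cum_gauss(x, mu, sigma):
    # Cumulative Gaussian: mu is the bias, sigma the discrimination scale.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, offsets_deg, p_resp, p0=(0.0, 10.0))
print(f"bias mu = {mu:.1f} deg, bisection threshold sigma = {sigma:.1f} deg")
# A steeper function (smaller sigma) means finer spatial discrimination.
```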


Subject(s)
Sound Localization , Visually Impaired Persons , Humans , Space Perception/physiology , Blindness/diagnosis , Sound Localization/physiology , Acoustics
18.
PLoS One ; 19(2): e0293811, 2024.
Article in English | MEDLINE | ID: mdl-38394286

ABSTRACT

A hearing aid and a contralateral routing of signal (CROS) device are both options for unilateral cochlear implant listeners with limited hearing in the unimplanted ear; however, it is uncertain which device provides greater benefit beyond unilateral listening alone. Eighteen unilateral cochlear implant listeners participated in this prospective, within-participants, repeated-measures study. Participants were tested in the cochlear implant alone, cochlear implant + hearing aid, and cochlear implant + CROS device configurations, with a one-month take-home period between each in-person visit. Audiograms, speech perception in noise, and lateralization were evaluated, and subjective feedback was obtained via questionnaires. Marked improvements in speech perception in noise and in lateralization accuracy on the non-implanted side were observed with the addition of a contralateral hearing aid. There were no significant differences in speech recognition between listening configurations. However, the chronic device use questionnaires and the final device selection showed a clear preference for the hearing aid in the spatial awareness and communication domains. Individuals with limited hearing in their unimplanted ears demonstrate significant improvement with the addition of a contralateral device. Subjective questionnaires somewhat contrast with clinic-based outcome measures, highlighting the delicate decision-making process involved in clinically advising one device or another to maximize communication benefits.


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Aids , Sound Localization , Speech Perception , Humans , Prospective Studies , Hearing
19.
Eur J Neurosci ; 59(9): 2373-2390, 2024 May.
Article in English | MEDLINE | ID: mdl-38303554

ABSTRACT

Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss. Yet, the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate and imprecise saccades towards auditory targets. Surprisingly, increased weighting of visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
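
The "statistical models of optimal cue integration" referenced above typically predict a reliability-weighted average of the unimodal estimates, with weights set by each cue's inverse variance. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Maximum-likelihood cue integration: each cue is weighted by its
# relative reliability (inverse variance). All numbers are illustrative.
aud_estimate, aud_sigma = 12.0, 8.0   # auditory azimuth estimate (deg) and noise
vis_estimate, vis_sigma = 2.0, 2.0    # visual azimuth estimate (deg) and noise

w_aud = (1 / aud_sigma**2) / (1 / aud_sigma**2 + 1 / vis_sigma**2)
w_vis = 1 - w_aud

av_estimate = w_aud * aud_estimate + w_vis * vis_estimate
av_sigma = np.sqrt(1 / (1 / aud_sigma**2 + 1 / vis_sigma**2))
print(f"integrated estimate: {av_estimate:.1f} deg, sigma: {av_sigma:.2f} deg")
# With AHL the auditory variance grows, so the visual weight increases --
# consistent with the increased weighting of vision reported above.
```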


Subject(s)
Sound Localization , Visual Perception , Humans , Female , Adult , Male , Visual Perception/physiology , Sound Localization/physiology , Young Adult , Saccades/physiology , Auditory Perception/physiology , Hearing Loss/physiopathology , Photic Stimulation/methods , Acoustic Stimulation/methods , Space Perception/physiology
20.
Trends Hear ; 28: 23312165231217910, 2024.
Article in English | MEDLINE | ID: mdl-38297817

ABSTRACT

The present study aimed to define use of head and eye movements during sound localization in children and adults to: (1) assess effects of stationary versus moving sound and (2) define effects of binaural cues degraded through acute monaural ear plugging. Thirty-three youth (MAge = 12.9 years) and seventeen adults (MAge = 24.6 years) with typical hearing were recruited and asked to localize white noise anywhere within a horizontal arc from -60° (left) to +60° (right) azimuth in two conditions (typical binaural and right ear plugged). In each trial, sound was presented at an initial stationary position (L1) and then while moving at ∼4°/s until reaching a second position (L2). Sound moved in five conditions (±40°, ±20°, or 0°). Participants adjusted a laser pointer to indicate L1 and L2 positions. Unrestricted head and eye movements were collected with gyroscopic sensors on the head and eye-tracking glasses, respectively. Results confirmed that accurate sound localization of both stationary and moving sound is disrupted by acute monaural ear plugging. Eye movements preceded head movements for sound localization in normal binaural listening and head movements were larger than eye movements during monaural plugging. Head movements favored the unplugged left ear when stationary sounds were presented in the right hemifield and during sound motion in both hemifields regardless of the movement direction. Disrupted binaural cues have greater effects on localization of moving than stationary sound. Head movements reveal preferential use of the better-hearing ear and relatively stable eye positions likely reflect normal vestibular-ocular reflexes.


Subject(s)
Sound Localization , Adult , Child , Adolescent , Humans , Eye Movements , Hearing , Hearing Tests , Head Movements