Results 1 - 20 of 63
1.
Hear Res ; 443: 108963, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38308936

ABSTRACT

Exposure to brief, intense sound can produce profound changes in the auditory system, from the internal structure of inner hair cells to reduced synaptic connections between the auditory nerves and the inner hair cells. Moreover, noisy environments can also lead to alterations in the auditory nerve or to processing changes in the auditory midbrain, all without affecting hearing thresholds. This so-called hidden hearing loss (HHL) has been shown in tinnitus patients and has been posited to account for hearing difficulties in noisy environments. However, much of the neuronal research thus far has investigated how HHL affects the response characteristics of individual fibres in the auditory nerve, as opposed to higher stations in the auditory pathway. Models of human hearing show that the auditory nerve encodes sound stochastically. Therefore, a sufficient reduction in nerve fibres could lower the sampling of the acoustic scene below the minimum rate necessary to fully encode it, thus reducing the efficacy of sound encoding. Here, we examine how HHL affects the responses to frequency and intensity of neurons in the inferior colliculus of rats, and the duration and firing rate of those responses. Finally, we examine how shorter stimuli are encoded less effectively by the auditory midbrain than longer stimuli, and how this could lead to a clinical test for HHL.
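The stochastic-sampling argument above can be illustrated with a toy model (an illustration only, not the study's model): if each auditory nerve fiber is treated as an independent Poisson spike generator, the signal-to-noise ratio of the pooled population response grows roughly with the square root of the number of fibers, so losing fibers degrades the sampled representation of the acoustic scene.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_snr(n_fibers, rate_hz=100.0, dur_s=0.05, n_trials=2000):
    # Pooled spike count of n_fibers independent Poisson fibers over
    # many trials; SNR = mean / standard deviation of the summed count.
    counts = rng.poisson(rate_hz * dur_s, size=(n_trials, n_fibers)).sum(axis=1)
    return counts.mean() / counts.std()

# For a Poisson sum, SNR = sqrt(n_fibers * rate * duration), so halving
# the fiber count lowers the SNR by a factor of about sqrt(2).
snr_10 = population_snr(10)
snr_100 = population_snr(100)
```

All fiber counts, rates and durations here are hypothetical round numbers chosen only to make the square-root scaling visible.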


Subjects
Hearing Loss, Noise-Induced; Inferior Colliculi; Humans; Rats; Animals; Inferior Colliculi/physiology; Noise/adverse effects; Auditory Threshold/physiology; Evoked Potentials, Auditory, Brain Stem/physiology; Cochlea
2.
Trends Hear ; 28: 23312165241227818, 2024.
Article in English | MEDLINE | ID: mdl-38291713

ABSTRACT

The past decade has seen a wealth of research dedicated to determining which and how morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.


Subjects
Hearing Loss; Speech Perception; Animals; Humans; Auditory Threshold/physiology; Noise/adverse effects; Acoustic Stimulation; Auditory Perception/physiology
3.
Hear Res ; 441: 108917, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38061268

ABSTRACT

Previous studies have shown that in challenging listening situations, people find it hard to divide their attention equally between two simultaneous talkers and tend to favor one talker over the other. The aim here was to investigate whether talker onset/offset, sex and location determine the favored talker. Fifteen people with normal hearing were asked to recognize as many words as possible from two sentences uttered by two talkers located at −45° and +45° azimuth, respectively. The sentences were from the same corpus, were time-centered and had equal sound level. In Conditions 1 and 2, the talkers had different sexes (male at +45°), sentence duration was not controlled for, and sentences were presented at 65 and 35 dB SPL, respectively. Listeners favored the male over the female talker, even more so at 35 dB SPL (62% vs 43% word recognition, respectively) than at 65 dB SPL (74% vs 64%, respectively). The greater asymmetry in intelligibility at the lower level supports the notion that divided listening is harder and more 'asymmetric' in challenging acoustic scenarios. Listeners continued to favor the male talker when the experiment was repeated with sentences of equal average duration for the two talkers (Condition 3). This suggests that the earlier onset or later offset of male sentences (52 ms on average) was not the reason for the asymmetric intelligibility in Conditions 1 or 2. When the location of the talkers was switched (Condition 4) or the two talkers were the same woman (Condition 5), listeners continued to favor the talker to their right, albeit non-significantly. Altogether, results confirm that in hard divided listening situations, listeners tend to favor the talker to their right. This preference is not affected by talker onset/offset delays of less than 52 ms on average. Instead, the preference seems to be modulated by the voice characteristics of the talkers.


Subjects
Speech Perception; Voice; Humans; Male; Female; Speech Intelligibility; Language; Acoustics
4.
Trends Hear ; 27: 23312165231213191, 2023.
Article in English | MEDLINE | ID: mdl-37956654

ABSTRACT

Older people often show auditory temporal processing deficits and speech-in-noise intelligibility difficulties even when their audiogram is clinically normal. The causes of such problems remain unclear. Some studies have suggested that for people with normal audiograms, age-related hearing impairments may be due to a cognitive decline, while others have suggested that they may be caused by cochlear synaptopathy. Here, we explore an alternative hypothesis, namely that age-related hearing deficits are associated with decreased inhibition. For human adults (N = 30) selected to cover a reasonably wide age range (25-59 years), with normal audiograms and normal cognitive function, we measured speech reception thresholds in noise (SRTNs) for disyllabic words, gap detection thresholds (GDTs), and frequency modulation detection thresholds (FMDTs). We also measured the rate of growth (slope) of auditory brainstem response wave-I amplitude with increasing level as an indirect indicator of cochlear synaptopathy, and the interference inhibition score in the Stroop color and word test (SCWT) as a proxy for inhibition. As expected, performance in the auditory tasks worsened (SRTNs, GDTs, and FMDTs increased), and wave-I slope and SCWT inhibition scores decreased with ageing. Importantly, SRTNs, GDTs, and FMDTs were not related to wave-I slope but worsened with decreasing SCWT inhibition. Furthermore, after partialling out the effect of SCWT inhibition, age was no longer related to SRTNs or GDTs and became less strongly related to FMDTs. Altogether, results suggest that for people with normal audiograms, age-related deficits in auditory temporal processing and speech-in-noise intelligibility are mediated by decreased inhibition rather than cochlear synaptopathy.
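The partialling-out analysis described above can be sketched with synthetic data (hypothetical numbers, not the study's data): a partial correlation computed from regression residuals shows how an age-threshold association can vanish once an inhibition score is accounted for.

```python
import numpy as np

def partial_corr(x, y, z):
    # Correlation between x and y after removing the linear
    # contribution of z from both variables (regression residuals).
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
age = rng.uniform(25, 59, 5000)
inhibition = -age + rng.normal(0, 1, 5000)        # inhibition declines with age
threshold = -inhibition + rng.normal(0, 1, 5000)  # thresholds track inhibition only

r_plain = np.corrcoef(age, threshold)[0, 1]           # strong age effect...
r_partial = partial_corr(age, threshold, inhibition)  # ...gone after partialling out
```

In this toy construction the thresholds depend on age only through inhibition, which is the pattern of mediation the abstract reports.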


Subjects
Presbycusis; Speech Perception; Adult; Humans; Aged; Middle Aged; Auditory Threshold/physiology; Cochlea; Hearing; Auditory Perception/physiology; Presbycusis/diagnosis; Evoked Potentials, Auditory, Brain Stem/physiology; Speech Perception/physiology
5.
Hear Res ; 432: 108743, 2023 05.
Article in English | MEDLINE | ID: mdl-37003080

ABSTRACT

We have recently proposed a binaural sound pre-processing method to attenuate sounds contralateral to each ear and shown that it can improve speech intelligibility for normal-hearing (NH) people in simulated "cocktail party" listening situations (Lopez-Poveda et al., 2022, Hear Res 418:108469). The aim here was to evaluate whether this benefit remains for hearing-impaired listeners when the method is combined with two independently functioning hearing aids, one per ear. Twelve volunteers participated in the experiments; five of them had bilateral sensorineural hearing loss and seven were NH listeners with simulated bilateral conductive hearing loss. Speech reception thresholds (SRTs) for sentences in competition with a source of steady, speech-shaped noise were measured in unilateral and bilateral listening, and for (target, masker) azimuthal angles of (0°, 0°), (270°, 45°), and (270°, 90°). Stimuli were processed through a pair of software-based multichannel, fast-acting, wide-dynamic-range compressors, with and without binaural pre-processing. For spatially collocated target and masker sources at 0° azimuth, the pre-processing did not affect SRTs. For spatially separated target and masker sources, the pre-processing improved SRTs when listening bilaterally (improvements up to 10.7 dB) or unilaterally with the acoustically better ear (improvements up to 13.9 dB), while it worsened SRTs when listening unilaterally with the acoustically worse ear (decrements of up to 17.0 dB). Results show that binaural pre-processing for contralateral sound attenuation can improve speech-in-noise intelligibility in laboratory tests for bilateral hearing-aid users as well.


Subjects
Cochlear Implants; Hearing Aids; Speech Perception; Humans; Speech Intelligibility; Noise/adverse effects; Hearing
6.
Hear Res ; 432: 108744, 2023 05.
Article in English | MEDLINE | ID: mdl-37004271

ABSTRACT

Computational models are useful tools to investigate scientific questions that would be complicated to address using an experimental approach. In the context of cochlear implants (CIs), being able to simulate the neural activity evoked by these devices could help in understanding their limitations in providing natural hearing. Here, we present a computational modelling framework to quantify the transmission of information from sound to spikes in the auditory nerve of a CI user. The framework includes a model to simulate the electrical current waveform sensed by each auditory nerve fiber (electrode-neuron interface), followed by a model to simulate the timing at which a nerve fiber spikes in response to a current waveform (auditory nerve fiber model). Information theory is then applied to determine the amount of information transmitted from a suitable reference signal (e.g., the acoustic stimulus) to a simulated population of auditory nerve fibers. As a use-case example, the framework is applied to simulate published data on modulation detection by CI users obtained using direct stimulation via a single electrode. Current spread as well as the number of fibers were varied independently to illustrate the framework's capabilities. Simulations reasonably matched experimental data and suggested that the encoded modulation information is proportional to the total neural response. They also suggested that amplitude modulation is well encoded in the auditory nerve for modulation rates up to 1000 Hz and that the variability in modulation sensitivity across CI users arises partly because different CI users use different references for detecting modulation.
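The information-theoretic step can be sketched in miniature (a generic plug-in estimator on toy Poisson spike counts, not the authors' framework): the mutual information between a discrete reference signal and simulated spike counts quantifies how much stimulus information the simulated response carries.

```python
import numpy as np

def mutual_information_bits(stim, resp):
    # Plug-in estimate of I(stim; resp) in bits from paired discrete samples.
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((s_vals.size, r_vals.size))
    np.add.at(joint, (s_idx, r_idx), 1.0)
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    pr = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

rng = np.random.default_rng(3)
stim = rng.integers(0, 2, 20000)                     # two stimulus classes
strong = rng.poisson(np.where(stim == 1, 8.0, 1.0))  # well-separated spike counts
weak = rng.poisson(np.where(stim == 1, 2.0, 1.0))    # poorly separated counts

mi_strong = mutual_information_bits(stim, strong)
mi_weak = mutual_information_bits(stim, weak)
```

The toy firing rates (1, 2 and 8 spikes per trial) are assumptions for illustration; the point is only that a more discriminable neural response transmits more bits about the reference signal.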


Subjects
Cochlear Implantation; Cochlear Implants; Acoustic Stimulation; Cochlear Nerve/physiology; Computer Simulation; Electric Stimulation; Evoked Potentials, Auditory/physiology
7.
Hear Res ; 426: 108621, 2022 12.
Article in English | MEDLINE | ID: mdl-36182814

ABSTRACT

We report a theoretical study aimed at investigating the impact of cochlear synapse loss (synaptopathy) on the encoding of the envelope (ENV) and temporal fine structure (TFS) of sounds by the population of auditory nerve fibers. A computational model was used to simulate auditory-nerve spike trains evoked by sinusoidally amplitude-modulated (AM) tones at 10 Hz with various carrier frequencies and levels. The model included 16 cochlear channels with characteristic frequencies (CFs) from 250 Hz to 8 kHz. Each channel was innervated by 3, 4 and 10 fibers with low (LSR), medium (MSR), and high spontaneous rates (HSR), respectively. For each channel, spike trains were collapsed into three separate 'population' post-stimulus time histograms (PSTHs), one per fiber type. Information theory was applied to reconstruct the stimulus waveform, ENV, and TFS from one or more PSTHs in a mathematically optimal way. The quality of the reconstruction was regarded as an estimate of the information present in the PSTHs used. Various synaptopathy scenarios were simulated by removing fibers of specific types and/or cochlear regions before stimulus reconstruction. We found that the TFS was predominantly encoded by HSR fibers at all stimulus carrier frequencies and levels. The encoding of the ENV was more complex. At lower levels, the ENV was predominantly encoded by HSR fibers with CFs near the stimulus carrier frequency. At higher levels, the ENV was encoded as well or better by HSR fibers with CFs different from the AM carrier frequency than by LSR fibers with CFs at the carrier frequency. Altogether, findings suggest that a healthy population of HSR fibers (i.e., including fibers with CFs around and remote from the AM carrier frequency) might be sufficient to encode the ENV and TFS over a wide range of stimulus levels. Findings are discussed regarding their relevance for diagnosing synaptopathy using non-invasive ENV- and TFS-based measures.
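The ENV/TFS decomposition referred to above is commonly defined through the analytic signal. The sketch below applies that standard Hilbert-transform definition to a synthetic AM tone; it is illustrative only and is not the paper's reconstruction method.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal (standard Hilbert-transform construction):
    # zero out negative frequencies, double the positive ones.
    n = x.size
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

fs = 16000
t = np.arange(int(0.5 * fs)) / fs
env_true = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)  # 10-Hz AM envelope
x = env_true * np.cos(2 * np.pi * 1000 * t)        # 1-kHz carrier

a = analytic_signal(x)
env = np.abs(a)              # ENV: the slow amplitude modulation
tfs = np.cos(np.angle(a))    # TFS: unit-amplitude fine structure
```

The 10-Hz modulation rate matches the AM tones in the abstract; the 1-kHz carrier and sampling rate are arbitrary choices for the demonstration.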


Subjects
Humans; Cochlear Nerve/physiology; Cochlea/physiology; Sound; Acoustic Stimulation
8.
J Acoust Soc Am ; 151(3): 1741, 2022 03.
Article in English | MEDLINE | ID: mdl-35364964

ABSTRACT

Many aspects of hearing function are negatively affected by background noise. Listeners, however, have some ability to adapt to background noise. For instance, the detection of pure tones and the recognition of isolated words embedded in noise can improve gradually as tones and words are delayed a few hundred milliseconds in the noise. While some evidence suggests that adaptation to noise could be mediated by the medial olivocochlear reflex, adaptation can occur for people who do not have a functional reflex. Since adaptation can facilitate hearing in noise, and hearing in noise is often harder for hearing-impaired than for normal-hearing listeners, it is conceivable that adaptation is impaired with hearing loss. It remains unclear, however, if and to what extent this is the case, or whether impaired adaptation contributes to the greater difficulties experienced by hearing-impaired listeners understanding speech in noise. Here, we review adaptation to noise, the mechanisms potentially contributing to this adaptation, and factors that might reduce the ability to adapt to background noise, including cochlear hearing loss, cochlear synaptopathy, aging, and noise exposure. The review highlights few knowns and many unknowns about adaptation to noise, and thus paves the way for further research on this topic.


Subjects
Hearing Loss, Sensorineural; Speech Perception; Adaptation, Physiological; Hearing; Humans; Noise/adverse effects
9.
Hear Res ; 418: 108469, 2022 05.
Article in English | MEDLINE | ID: mdl-35263696

ABSTRACT

Understanding speech presented in competition with other sounds can be challenging. Here, we reason that in free-field settings, this task can be facilitated by attenuating the sound field contralateral to each ear and propose to achieve this by linear subtraction of the weighted contralateral stimulus. We mathematically justify setting the weight equal to the ratio of ipsilateral to contralateral head-related transfer functions (HRTFs) averaged over an appropriate azimuth range. The algorithm is implemented in the frequency domain and evaluated technically and experimentally for normal-hearing listeners in simulated free-field conditions. Results show that (1) it can substantially improve the signal-to-noise ratio (up to 30 dB) and the short-term objective intelligibility in the ear ipsilateral to the target source, particularly for maskers with speech-like spectra; (2) it can improve speech reception thresholds (SRTs) for sentences in competition with speech-shaped noise by up to 8.5 dB in bilateral listening and 10.0 dB in unilateral listening; (3) for sentences in competition with speech maskers and in bilateral listening, it can improve SRTs by 2 to 5 dB, depending on the number and location of the masker sources; (4) it hardly affects virtual sound-source lateralization; and (5) the improvements, and the algorithm's directivity pattern depend on the azimuth range used to calculate the weights. Contralateral HRTF-weighted subtraction may prove valuable for users of binaural hearing devices.
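The weighted-subtraction step described above can be sketched as follows. This is a simplified sketch with scalar weights; the actual algorithm derives per-frequency-bin weights from the ratio of ipsilateral to contralateral HRTFs averaged over an azimuth range.

```python
import numpy as np

def contralateral_subtraction(x_left, x_right, w_left=1.0, w_right=1.0):
    # Frequency-domain subtraction of the weighted contralateral signal:
    # Y_L(f) = X_L(f) - W_L(f) X_R(f), and symmetrically for the right ear.
    # Here the weights are scalars for simplicity; in general they are
    # per-bin values derived from HRTF ratios.
    n = x_left.size
    XL, XR = np.fft.rfft(x_left), np.fft.rfft(x_right)
    y_left = np.fft.irfft(XL - w_left * XR, n=n)
    y_right = np.fft.irfft(XR - w_right * XL, n=n)
    return y_left, y_right
```

With unit weights, a diotic masker (identical at both ears) cancels completely, while the ear ipsilateral to a monaural source keeps that source intact, which is the intuition behind the attenuation of the contralateral sound field.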


Subjects
Cochlear Implantation; Cochlear Implants; Speech Perception; Cochlear Implantation/methods; Noise/adverse effects; Speech
10.
Hear Res ; 416: 108444, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35078133

ABSTRACT

Verbal communication in social environments often requires dividing attention between two or more simultaneous talkers. The ability to do this, however, may be diminished when the listener has limited access to acoustic cues or those cues are degraded, as is the case for hearing-impaired listeners or users of cochlear implants or hearing aids. The aim of the present study was to investigate the ability of normal-hearing (NH) listeners to divide their attention and recognize speech from two simultaneous talkers in simulated free-field listening conditions, with and without reduced acoustic cues. Participants (N = 11 or 12 depending on the experiment) were asked to recognize and repeat as many words as possible from two simultaneous, time-centered sentences uttered by a male and a female talker. In Experiment 1, the female and male talkers were located at −15° and +15°, −45° and +45°, or −90° and +90° azimuth, respectively. Speech was natural or processed through a noise vocoder and was presented at a comfortable loudness level (∼65 dB SPL). In Experiment 2, the female and male talkers were located at −45° and +45° azimuth, respectively. Speech was natural but was presented at a lower level (35 dB SPL) to reduce audibility. In Experiment 3, speech was vocoded and presented at a comfortable loudness level (∼65 dB SPL), but the location of the talkers was switched relative to Experiment 1 (i.e., the male and female talkers were at −45° and +45°, respectively) to reveal possible interactions of talker sex and location. Listeners recognized overall more natural words at a comfortable loudness level (76%) than vocoded words at a similar level (39%) or natural words at a lower level (43%). This indicates that recognition was more difficult for the two latter stimuli.
On the other hand, listeners recognized roughly the same proportion of words (76%) from the two talkers when speech was natural and comfortable in loudness, but a greater proportion of words from the male than from the female talker when speech was vocoded (50% vs 27%, respectively) or was natural but lower in level (55% vs 32%, respectively). This asymmetry occurred, and was similar, for the three spatial configurations. These results suggest that divided listening becomes asymmetric when speech cues are reduced. They also suggest that listeners preferentially recognized the male talker, located on the right side of the head. Switching the talkers' locations produced similar recognition for the two talkers for vocoded speech, suggesting an interaction between the talkers' location and their speech characteristics. For natural speech at a comfortable loudness level, listeners can divide their attention almost equally between two simultaneous talkers. When speech cues are limited (as is the case for vocoded speech or for speech at a low sensation level), by contrast, the ability to divide attention equally between talkers is diminished and listeners favor one of the talkers based on their location, sex, and/or speech characteristics. Findings are discussed in the context of limited cognitive capacity affecting divided listening in difficult listening situations.


Subjects
Cochlear Implants; Speech Perception; Acoustic Stimulation; Acoustics; Cues; Female; Humans; Male; Speech
11.
Hear Res ; 409: 108320, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34348202

ABSTRACT

Cochlear implant (CI) users find it hard and effortful to understand speech in noise with current devices. Binaural CI sound processing inspired by the contralateral medial olivocochlear (MOC) reflex (an approach termed the 'MOC strategy') can improve speech-in-noise recognition for CI users. All reported evaluations of this strategy, however, disregarded automatic gain control (AGC) and fine-structure (FS) processing, two standard features in some current CI devices. To better assess the potential of implementing the MOC strategy in contemporary CIs, here, we compare intelligibility with and without MOC processing in combination with linked AGC and FS processing. Speech reception thresholds (SRTs) were compared for an FS and a MOC-FS strategy for sentences in steady and fluctuating noises, for various speech levels, in bilateral and unilateral listening modes, and for multiple spatial configurations of the speech and noise sources. Word recall scores and verbal response times in a word recognition test (two proxies for listening effort) were also compared for the two strategies in quiet and in steady noise at 5 dB signal-to-noise ratio (SNR) and the individual SRT. In steady noise, mean SRTs were always equal or better with the MOC-FS than with the standard FS strategy, both in bilateral (the mean and largest improvement across spatial configurations and speech levels were 0.8 and 2.2 dB, respectively) and unilateral listening (mean and largest improvement of 1.7 and 2.1 dB, respectively). In fluctuating noise and in bilateral listening, SRTs were equal for the two strategies. Word recall scores and verbal response times were not significantly affected by the test SNR or the processing strategy. Results show that MOC processing can be combined with linked AGC and FS processing. Compared to using FS processing alone, combined MOC-FS processing can improve speech intelligibility in noise without affecting word recall scores or verbal response times.


Subjects
Cochlear Implants; Speech Perception; Listening Effort; Reflex; Speech Intelligibility
12.
iScience ; 24(6): 102658, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34151241

ABSTRACT

Central gain compensation for reduced auditory nerve output has been hypothesized as a mechanism for tinnitus with a normal audiogram. Here, we investigate whether gain compensation occurs with aging. For 94 people (aged 12-68 years, 64 women, 7 with tinnitus) with normal or close-to-normal audiograms, the amplitude of wave I of the auditory brainstem response decreased with increasing age but was not correlated with wave V amplitude after accounting for age-related subclinical hearing loss and cochlear damage, a result indicative of age-related gain compensation. The correlations between age and the wave I/III or III/V amplitude ratios suggested that compensation occurs at the wave III generator site. For each one of the seven participants with non-pulsatile tinnitus, the amplitude of wave I, the amplitude of wave V, and the wave I/V amplitude ratio were well within the confidence limits of the non-tinnitus participants. We conclude that increased central gain occurs with aging and is not specific to tinnitus.

13.
Hear Res ; 405: 108246, 2021 06.
Article in English | MEDLINE | ID: mdl-33872834

ABSTRACT

For speech in competition with a noise source in the free field, normal-hearing (NH) listeners recognize speech better when listening binaurally than when listening monaurally with the ear that has the better acoustic signal-to-noise ratio (SNR). This benefit from listening binaurally is known as binaural unmasking and indicates that the brain combines information from the two ears to improve intelligibility. Here, we address three questions pertaining to binaural unmasking for NH listeners. First, we investigate whether binaural unmasking results from combining the speech and/or the noise from the two ears. In a simulated acoustic free field with speech and noise sources at 0° and 270° azimuth, respectively, we found comparable unmasking regardless of whether the speech was present or absent in the ear with the worse SNR. This indicates that binaural unmasking probably involves combining only the noise at the two ears. Second, we investigate whether having binaurally coherent location cues for the noise signal is sufficient for binaural unmasking to occur. We found no unmasking when location cues were coherent but the noise signals were generated incoherently, or when the noise was processed unilaterally through a hearing aid with linear, minimal amplification. This indicates that binaural unmasking requires interaurally coherent noise signals, source location cues, and processing. Third, we investigate whether the hypothesized antimasking benefits of the medial olivocochlear reflex (MOCR) contribute to binaural unmasking. We found comparable unmasking regardless of whether speech tokens (words) were sufficiently delayed from the noise onset to fully activate the MOCR or not. Moreover, unmasking was absent when the noise was binaurally incoherent, whereas the physiological antimasking effects of the MOCR are similar for coherent and incoherent noises. This indicates that the MOCR is unlikely to be involved in binaural unmasking.


Subjects
Hearing Aids; Noise; Speech Perception; Auditory Perception; Noise/adverse effects; Reflex
14.
Front Neurosci ; 15: 640127, 2021.
Article in English | MEDLINE | ID: mdl-33664649

ABSTRACT

The roles of the medial olivocochlear reflex (MOCR) in human hearing have been widely investigated but remain controversial. We reason that this may be because the effects of MOCR activation on cochlear mechanical responses can be assessed only indirectly in healthy humans, and the different methods used to assess those effects possibly yield different and/or unreliable estimates. One aim of this study was to investigate the correlation between three methods often employed to assess the strength of MOCR activation by contralateral acoustic stimulation (CAS). We measured tone detection thresholds (N = 28), click-evoked otoacoustic emission (CEOAE) input/output (I/O) curves (N = 18), and distortion-product otoacoustic emission (DPOAE) I/O curves (N = 18) for various test frequencies in the presence and the absence of CAS (broadband noise of 60 dB SPL). As expected, CAS worsened tone detection thresholds, suppressed CEOAEs and DPOAEs, and horizontally shifted CEOAE and DPOAE I/O curves to higher levels. However, the CAS effect on tone detection thresholds was not correlated with the horizontal shift of CEOAE or DPOAE I/O curves, and the CAS-induced CEOAE suppression was not correlated with DPOAE suppression. Only the horizontal shifts of CEOAE and DPOAE I/O functions were correlated with each other at 1.5, 2, and 3 kHz. A second aim was to investigate which of the methods is more reliable. The test-retest variability of the CAS effect was high overall but smallest for tone detection thresholds and CEOAEs, suggesting that their use should be prioritized over the use of DPOAEs. Many factors not related to the MOCR, including the limited parametric space studied, the low resolution of the I/O curves, and the reduced number of observations due to data exclusion, likely contributed to the weak correlations and the large test-retest variability noted.
These findings can help us understand the inconsistencies among past studies and improve our understanding of the functional significance of the MOCR.

15.
Ear Hear ; 41(6): 1492-1510, 2020.
Article in English | MEDLINE | ID: mdl-33136626

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users continue to struggle understanding speech in noisy environments with current clinical devices. We have previously shown that this outcome can be improved by using binaural sound processors inspired by the medial olivocochlear (MOC) reflex, which involve dynamic (contralaterally controlled) rather than fixed compressive acoustic-to-electric maps. The present study aimed at investigating the potential additional benefits of using more realistic implementations of MOC processing. DESIGN: Eight users of bilateral CIs and two users of unilateral CIs participated in the study. Speech reception thresholds (SRTs) for sentences in competition with steady state noise were measured in unilateral and bilateral listening modes. Stimuli were processed through two independently functioning sound processors (one per ear) with fixed compression, the current clinical standard (STD); the originally proposed MOC strategy with fast contralateral control of compression (MOC1); a MOC strategy with slower control of compression (MOC2); and a slower MOC strategy with comparatively greater contralateral inhibition in the lower-frequency than in the higher-frequency channels (MOC3). Performance with the four strategies was compared for multiple simulated spatial configurations of the speech and noise sources. Based on a previously published technical evaluation of these strategies, we hypothesized that SRTs would be overall better (lower) with the MOC3 strategy than with any of the other tested strategies. In addition, we hypothesized that the MOC3 strategy would be advantageous over the STD strategy in listening conditions and spatial configurations where the MOC1 strategy was not. RESULTS: In unilateral listening and when the implant ear had the worse acoustic signal-to-noise ratio, the mean SRT was 4 dB worse for the MOC1 than for the STD strategy (as expected), but it became equal or better for the MOC2 or MOC3 strategies than for the STD strategy. 
In bilateral listening, mean SRTs were 1.6 dB better for the MOC3 strategy than for the STD strategy across all spatial configurations tested, including a condition with speech and noise sources colocated at front where the MOC1 strategy was slightly disadvantageous relative to the STD strategy. All strategies produced significantly better SRTs for spatially separated than for colocated speech and noise sources. A statistically significant binaural advantage (i.e., better mean SRTs across spatial configurations and participants in bilateral than in unilateral listening) was found for the MOC2 and MOC3 strategies but not for the STD or MOC1 strategies. CONCLUSIONS: Overall, performance was best with the MOC3 strategy, which maintained the benefits of the originally proposed MOC1 strategy over the STD strategy for spatially separated speech and noise sources and extended those benefits to additional spatial configurations. In addition, the MOC3 strategy provided a significant binaural advantage, which did not occur with the STD or the original MOC1 strategies.


Subjects
Cochlear Implantation; Cochlear Implants; Speech Perception; Humans; Reflex; Speech
16.
J Neurosci ; 40(34): 6613-6623, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32680938

ABSTRACT

Human hearing adapts to background noise, as evidenced by the fact that listeners recognize more isolated words when words are presented later rather than earlier in noise. This adaptation likely occurs because the leading noise shifts ("adapts") the dynamic range of auditory neurons, which can improve the neural encoding of speech spectral and temporal cues. Because neural dynamic range adaptation depends on stimulus-level statistics, here we investigated the importance of "statistical" adaptation for improving speech recognition in noisy backgrounds. We compared the recognition of noise-masked words in the presence and in the absence of adapting noise precursors whose level was either constant or changed every 50 ms according to different statistical distributions. Adaptation was measured for 28 listeners (9 men) and was quantified as the recognition improvement in the precursor relative to the no-precursor condition. Adaptation was largest for constant-level precursors and did not occur for highly fluctuating precursors, even when the two types of precursors had the same mean level and both activated the medial olivocochlear reflex. Instantaneous amplitude compression of the highly fluctuating precursor produced as much adaptation as the constant-level precursor did without compression. Together, the results suggest that noise adaptation in speech recognition is probably mediated by neural dynamic range adaptation to the most frequent sound level. Further, they suggest that auditory peripheral compression per se, rather than the medial olivocochlear reflex, could facilitate noise adaptation by reducing the level fluctuations in the noise. SIGNIFICANCE STATEMENT: Recognizing speech in noise is challenging but can be facilitated by noise adaptation. The neural mechanisms underlying this adaptation remain unclear.
Here, we report some benefits of adaptation for word-in-noise recognition and show that (1) adaptation occurs for stationary but not for highly fluctuating precursors with equal mean level; (2) both stationary and highly fluctuating noises activate the medial olivocochlear reflex; and (3) adaptation occurs even for highly fluctuating precursors when the stimuli are passed through a fast amplitude compressor. These findings suggest that noise adaptation reflects neural dynamic range adaptation to the most frequent noise level and that auditory peripheral compression, rather than the medial olivocochlear reflex, could facilitate noise adaptation.
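The key manipulation in this abstract, instantaneous (memoryless) amplitude compression reducing the level fluctuations of a noise, can be sketched numerically. This is a minimal illustration, not the study's processing chain: the power-law exponent, block structure, and level statistics below are illustrative assumptions.

```python
import numpy as np

def instantaneous_compression(signal, exponent=0.3):
    """Memoryless power-law compressor applied sample by sample.

    An exponent < 1 attenuates large amplitudes more than small ones,
    which shrinks the level fluctuations of the waveform. The exponent
    value is illustrative, not taken from the study.
    """
    return np.sign(signal) * np.abs(signal) ** exponent

rng = np.random.default_rng(0)
# A "highly fluctuating" noise: white noise whose gain jumps every block,
# loosely mimicking precursors whose level changed every 50 ms.
blocks = [rng.standard_normal(400) * g for g in (0.1, 1.0, 0.2, 0.8)]
noise = np.concatenate(blocks)

def level_range_db(x, block_len=400):
    """Spread (in dB) between the loudest and quietest block."""
    rms = [np.sqrt(np.mean(x[i:i + block_len] ** 2))
           for i in range(0, len(x), block_len)]
    return 20 * np.log10(max(rms) / min(rms))

before = level_range_db(noise)
after = level_range_db(instantaneous_compression(noise))
print(f"level range: {before:.1f} dB -> {after:.1f} dB")
```

With a 10:1 gain spread, the uncompressed blocks span about 20 dB; raising amplitudes to the 0.3 power reduces that spread to roughly a third, consistent with the idea that compression makes a fluctuating precursor statistically closer to a constant-level one.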


Subjects
Physiological Adaptation, Noise, Speech Perception/physiology, Adult, Auditory Threshold/physiology, Female, Humans, Male, Neurons/physiology, Signal-to-Noise Ratio, Young Adult
17.
Hear Res ; 379: 103-116, 2019 08.
Article in English | MEDLINE | ID: mdl-31150955

ABSTRACT

Many users of bilateral cochlear implants (BiCIs) localize sound sources less accurately than do people with normal hearing. This may be partly due to using two independently functioning CIs with fixed compression, which distorts and/or reduces interaural level differences (ILDs). Here, we investigate the potential benefits of using binaurally coupled, dynamic compression inspired by the medial olivocochlear reflex, an approach termed "the MOC strategy" (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). Twelve BiCI users were asked to localize wideband (125-6000 Hz) noise tokens in a virtual horizontal plane. Stimuli were processed through a standard (STD) sound processing strategy (i.e., involving two independently functioning sound processors with fixed compression) and three different implementations of the MOC strategy: one with fast (MOC1) and two with slower contralateral control of compression (MOC2 and MOC3). The MOC1 and MOC2 strategies had effectively greater inhibition in the higher than in the lower frequency channels, while the MOC3 strategy had slightly greater inhibition in the lower than in the higher frequency channels. Localization was most accurate with the MOC1 strategy, presumably because it provided the largest and least ambiguous ILDs. The angle error improved slightly from 25.3° with the STD strategy to 22.7° with the MOC1 strategy. The improvement in localization ability over the STD strategy disappeared when the contralateral control of compression was made slower, presumably because stimuli were too short (200 ms) for the slower contralateral inhibition to enhance ILDs. Results suggest that some MOC implementations hold promise for improving not only speech-in-noise intelligibility, as shown elsewhere, but also sound source lateralization.
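The core idea of contralaterally coupled compression, each ear's gain is inhibited by the level at the opposite ear so that ILDs grow rather than shrink, can be sketched with a toy gain rule. The linear inhibition law and all parameter values below are illustrative assumptions, not the published MOC-strategy equations.

```python
import numpy as np

def moc_linked_gains(left_env, right_env, max_inhibition_db=6.0):
    """Toy sketch of contralaterally controlled compression.

    Each channel's gain is reduced ("inhibited") in proportion to the
    envelope level at the *opposite* ear, as in MOC-inspired processing.
    The proportional rule and 6-dB ceiling are illustrative assumptions.
    """
    # Normalize so inhibition scales with relative level across ears.
    norm = max(left_env.max(), right_env.max()) or 1.0
    inhib_left = max_inhibition_db * (right_env / norm)   # right ear inhibits left
    inhib_right = max_inhibition_db * (left_env / norm)   # left ear inhibits right
    gain_left = 10 ** (-inhib_left / 20)
    gain_right = 10 ** (-inhib_right / 20)
    return gain_left, gain_right

# A source on the left: stronger envelope at the left ear.
t = np.linspace(0, 0.2, 200)
left = np.full_like(t, 1.0)
right = np.full_like(t, 0.5)
gl, gr = moc_linked_gains(left, right)

# The near ear inhibits the far ear more than vice versa, so the
# processed interaural level difference grows.
ild_before = 20 * np.log10(left.mean() / right.mean())
ild_after = 20 * np.log10((left * gl).mean() / (right * gr).mean())
print(f"ILD: {ild_before:.1f} dB -> {ild_after:.1f} dB")
```

Because the louder (left) ear drives more inhibition of the quieter (right) channel than the reverse, the 6-dB input ILD is expanded, which is the mechanism the abstract credits for the MOC1 strategy's better localization.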


Subjects
Cochlear Implants, Sound Localization/physiology, Acoustic Stimulation, Adolescent, Adult, Aged, Aged 80 and over, Basilar Membrane/physiopathology, Cochlear Implants/statistics & numerical data, Data Compression, Electronic Data Processing, Female, Bilateral Hearing Loss/physiopathology, Bilateral Hearing Loss/rehabilitation, Humans, Male, Middle Aged, Organ of Corti/physiopathology, Acoustic Reflex/physiology, Superior Olivary Complex/physiopathology
18.
Cognition ; 192: 103992, 2019 11.
Article in English | MEDLINE | ID: mdl-31254890

ABSTRACT

In difficult listening situations, such as in noisy environments, one would expect speech intelligibility to improve over time thanks to noise adaptation and/or to speech predictability facilitating the recognition of upcoming words. We tested this possibility by presenting normal-hearing human listeners (N = 100; 70 women) with sentences and measuring word recognition as a function of word position in a sentence. Sentences were presented in quiet and in competition with various masker sounds at individualized levels where listeners had 50% probability of recognizing a full sentence. Contrary to expectations, recognition was best for the first word and gradually deteriorated with increasing word position along the sentence. The worsening in recognition was unlikely due to differences in word audibility or word type and was uncorrelated with age or working memory capacity. Using a probabilistic model of word recognition, we show that the worsening effect probably occurs because misunderstandings generate inaccurate predictions that outweigh the benefits from accurate predictions. Analyses also revealed that predictions overruled the potential benefits from noise adaptation. We conclude that although speech predictability can facilitate sentence recognition, it can also result in declines in word recognition as the sentence unfolds because of inaccuracies in prediction.
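The abstract's explanation, that predictions from earlier words help when those words were heard correctly and hurt when they were not, can be reproduced with a toy Monte Carlo model. All probabilities below are illustrative assumptions, not the fitted parameters of the study's probabilistic model.

```python
import random

def sentence_recognition(n_words=6, p_bottom_up=0.6, context_boost=0.15,
                         context_penalty=0.3, trials=20000, seed=1):
    """Toy simulation of word-by-word sentence recognition.

    Each word is recognized from the acoustics alone with probability
    p_bottom_up; context from the previous word adds a boost if that
    word was heard correctly and a (larger) penalty if it was not.
    """
    rng = random.Random(seed)
    correct_counts = [0] * n_words
    for _ in range(trials):
        prev_correct = True  # the first word has no prior context
        for i in range(n_words):
            p = p_bottom_up
            if i > 0:
                p += context_boost if prev_correct else -context_penalty
            hit = rng.random() < p
            correct_counts[i] += hit
            prev_correct = hit
    return [c / trials for c in correct_counts]

rates = sentence_recognition()
print([round(r, 3) for r in rates])
```

Because the penalty from a misheard word outweighs the boost from a correctly heard one, average recognition declines with word position even though the bottom-up probability is constant, matching the worsening effect the abstract reports.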


Subjects
Recognition (Psychology), Speech Intelligibility, Speech Perception, Psychological Adaptation, Adolescent, Adult, Child, Female, Humans, Male, Middle Aged, Noise, Perceptual Masking, Young Adult
19.
Hear Res ; 377: 133-141, 2019 06.
Article in English | MEDLINE | ID: mdl-30933705

ABSTRACT

The detection of amplitude modulation (AM) in quiet or in noise improves when the AM carrier is preceded by noise, an effect that has been attributed to the medial olivocochlear reflex (MOCR). We investigate whether this improvement can occur without the MOCR by measuring AM sensitivity for cochlear implant (CI) users, whose MOCR effects are circumvented as a result of the electrical stimulation provided by the CI. AM detection thresholds were measured monaurally for short (50 ms) AM probes presented at the onset (early condition) or delayed by 300 ms (late condition) from the onset of a broadband noise. The noise was presented ipsilaterally, contralaterally and bilaterally to the test ear. Stimuli were processed through an experimental, time-invariant sound processing strategy. On average, thresholds were 4 dB better in the late than in the early condition, and the size of the improvement was similar for the three noise lateralities. The pattern and magnitude of the improvement were broadly consistent with those for normal-hearing listeners [Marrufo-Pérez et al., 2018, J Assoc Res Otolaryngol 19:147-161]. Because the electrical stimulation provided by CIs is independent from the middle-ear muscle reflex (MEMR) or the MOCR, this shows that mechanisms other than the MEMR or the MOCR can facilitate AM detection in noisy backgrounds.
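The probe stimulus in such experiments is typically a sinusoidally amplitude-modulated (SAM) tone, with detection thresholds expressed as 20·log10(m) of the modulation index m. A minimal generator is sketched below; the carrier and modulator frequencies and the sampling rate are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def sam_tone(fc=1000.0, fm=100.0, depth_db=-10.0, dur=0.05, fs=16000):
    """Sinusoidally amplitude-modulated (SAM) tone.

    depth_db is the modulation depth expressed as 20*log10(m), the
    conventional unit for AM detection thresholds; dur=0.05 matches the
    50-ms probes described in the abstract.
    """
    m = 10 ** (depth_db / 20)            # modulation index, 0..1
    t = np.arange(int(dur * fs)) / fs
    envelope = 1 + m * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

probe = sam_tone()
print(len(probe), round(float(np.abs(probe).max()), 3))
```

An adaptive threshold procedure would then vary `depth_db` downward until the listener can no longer distinguish this probe from an unmodulated tone, separately for the early (probe at noise onset) and late (300-ms delay) conditions.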


Subjects
Auditory Perception, Cochlear Implantation/instrumentation, Cochlear Implants, Noise/adverse effects, Persons With Hearing Impairments/rehabilitation, Acoustic Stimulation, Psychological Adaptation, Adolescent, Adult, Aged, Auditory Threshold, Child, Cochlea/innervation, Electric Stimulation, Female, Hearing, Humans, Male, Middle Aged, Perceptual Masking, Persons With Hearing Impairments/psychology, Reflex, Superior Olivary Complex/physiopathology, Time Factors
20.
Hear Res ; 377: 88-103, 2019 06.
Article in English | MEDLINE | ID: mdl-30921644

ABSTRACT

Animal studies demonstrate that noise exposure can permanently damage the synapses between inner hair cells and auditory nerve fibers, even when outer hair cells are intact and there is no clinically relevant permanent threshold shift. Synaptopathy disrupts the afferent connection between the cochlea and the central auditory system and is predicted to impair speech understanding in noisy environments and potentially result in tinnitus and/or hyperacusis. While cochlear synaptopathy has been demonstrated in numerous experimental animal models, synaptopathy can only be confirmed through post-mortem temporal bone analysis, making it difficult to study in living humans. A variety of non-invasive measures have been used to determine whether noise-induced synaptopathy occurs in humans, but the results are conflicting. The overall objective of this article is to synthesize the existing data on the functional impact of noise-induced synaptopathy in the human auditory system. The first section of the article summarizes the studies that provide evidence for and against noise-induced synaptopathy in humans. The second section offers potential explanations for the differing results between studies. The final section outlines suggested methodologies for diagnosing synaptopathy in humans with the aim of improving consistency across studies.


Subjects
Auditory Perception, Cochlea/pathology, Cochlea/physiopathology, Cochlear Diseases/etiology, Hearing, Noise/adverse effects, Cochlear Diseases/pathology, Cochlear Diseases/physiopathology, Electrical Synapses/pathology, Humans, Risk Factors