Results 1 - 20 of 22
1.
Hear Res ; 441: 108917, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38061268

ABSTRACT

Previous studies have shown that in challenging listening situations, people find it hard to divide their attention equally between two simultaneous talkers and tend to favor one talker over the other. The aim here was to investigate whether talker onset/offset, sex and location determine the favored talker. Fifteen people with normal hearing were asked to recognize as many words as possible from two sentences uttered by two talkers located at −45° and +45° azimuth, respectively. The sentences were from the same corpus, were time-centered and had equal sound level. In Conditions 1 and 2, the talkers had different sexes (male at +45°), sentence duration was not controlled for, and sentences were presented at 65 and 35 dB SPL, respectively. Listeners favored the male over the female talker, even more so at 35 dB SPL (62% vs 43% word recognition, respectively) than at 65 dB SPL (74% vs 64%, respectively). The greater asymmetry in intelligibility at the lower level supports the notion that divided listening is harder and more 'asymmetric' in challenging acoustic scenarios. Listeners continued to favor the male talker when the experiment was repeated with sentences of equal average duration for the two talkers (Condition 3). This suggests that the earlier onset or later offset of male sentences (52 ms on average) was not the reason for the asymmetric intelligibility in Conditions 1 or 2. When the location of the talkers was switched (Condition 4) or the two talkers were the same woman (Condition 5), listeners continued to favor the talker to their right, albeit non-significantly. Altogether, results confirm that in hard divided-listening situations, listeners tend to favor the talker to their right. This preference is not affected by talker onset/offset delays shorter than 52 ms on average. Instead, the preference seems to be modulated by the voice characteristics of the talkers.


Subject(s)
Speech Perception; Voice; Humans; Male; Female; Speech Intelligibility; Language; Acoustics
2.
Hear Res ; 432: 108743, 2023 05.
Article in English | MEDLINE | ID: mdl-37003080

ABSTRACT

We have recently proposed a binaural sound pre-processing method to attenuate sounds contralateral to each ear and shown that it can improve speech intelligibility for normal-hearing (NH) people in simulated "cocktail party" listening situations (Lopez-Poveda et al., 2022, Hear Res 418:108469). The aim here was to evaluate if this benefit remains for hearing-impaired listeners when the method is combined with two independently functioning hearing aids, one per ear. Twelve volunteers participated in the experiments; five of them had bilateral sensorineural hearing loss and seven were NH listeners with simulated bilateral conductive hearing loss. Speech reception thresholds (SRTs) for sentences in competition with a source of steady, speech-shaped noise were measured in unilateral and bilateral listening, and for (target, masker) azimuthal angles of (0°, 0°), (270°, 45°), and (270°, 90°). Stimuli were processed through a pair of software-based multichannel, fast-acting, wide dynamic range compressors, with and without binaural pre-processing. For spatially collocated target and masker sources at 0° azimuth, the pre-processing did not affect SRTs. For spatially separated target and masker sources, the pre-processing improved SRTs when listening bilaterally (improvements up to 10.7 dB) or unilaterally with the acoustically better ear (improvements up to 13.9 dB), while it worsened SRTs when listening unilaterally with the acoustically worse ear (decrements of up to 17.0 dB). Results show that binaural pre-processing for contralateral sound attenuation can improve speech-in-noise intelligibility in laboratory tests also for bilateral hearing-aid users.


Subject(s)
Cochlear Implants; Hearing Aids; Speech Perception; Humans; Speech Intelligibility; Noise/adverse effects; Hearing
3.
Hear Res ; 418: 108469, 2022 05.
Article in English | MEDLINE | ID: mdl-35263696

ABSTRACT

Understanding speech presented in competition with other sounds can be challenging. Here, we reason that in free-field settings, this task can be facilitated by attenuating the sound field contralateral to each ear and propose to achieve this by linear subtraction of the weighted contralateral stimulus. We mathematically justify setting the weight equal to the ratio of ipsilateral to contralateral head-related transfer functions (HRTFs) averaged over an appropriate azimuth range. The algorithm is implemented in the frequency domain and evaluated technically and experimentally for normal-hearing listeners in simulated free-field conditions. Results show that (1) it can substantially improve the signal-to-noise ratio (up to 30 dB) and the short-term objective intelligibility in the ear ipsilateral to the target source, particularly for maskers with speech-like spectra; (2) it can improve speech reception thresholds (SRTs) for sentences in competition with speech-shaped noise by up to 8.5 dB in bilateral listening and 10.0 dB in unilateral listening; (3) for sentences in competition with speech maskers and in bilateral listening, it can improve SRTs by 2 to 5 dB, depending on the number and location of the masker sources; (4) it hardly affects virtual sound-source lateralization; and (5) the improvements, and the algorithm's directivity pattern depend on the azimuth range used to calculate the weights. Contralateral HRTF-weighted subtraction may prove valuable for users of binaural hearing devices.
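The subtraction step described above can be sketched compactly. The weight arrays below are hypothetical precomputed per-frequency values standing in for the azimuth-averaged ipsilateral-to-contralateral HRTF ratios mentioned in the abstract, so this is a minimal sketch under those assumptions rather than the published implementation:

```python
import numpy as np

def contralateral_subtraction(left, right, w_left, w_right):
    """Attenuate the sound field contralateral to each ear by linearly
    subtracting the weighted contralateral stimulus in the frequency domain.

    left, right     : time-domain ear signals (equal length)
    w_left, w_right : per-frequency weights (hypothetical precomputed arrays,
                      e.g. ipsi/contra HRTF ratios averaged over an azimuth
                      range), one value per rFFT bin
    """
    n = len(left)
    L = np.fft.rfft(left)
    R = np.fft.rfft(right)
    # Each output ear = ipsilateral spectrum minus weighted contralateral one
    out_left = np.fft.irfft(L - w_left * R, n)
    out_right = np.fft.irfft(R - w_right * L, n)
    return out_left, out_right
```

With zero weights the signals pass through unchanged; with unit weights, a source that reaches both ears identically is cancelled, which is the head-shadow-enhancing behavior the algorithm exploits.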


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Cochlear Implantation/methods; Noise/adverse effects; Speech
4.
Hear Res ; 416: 108444, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35078133

ABSTRACT

Verbal communication in social environments often requires dividing attention between two or more simultaneous talkers. The ability to do this, however, may be diminished when the listener has limited access to acoustic cues or those cues are degraded, as is the case for hearing-impaired listeners or users of cochlear implants or hearing aids. The aim of the present study was to investigate the ability of normal-hearing (NH) listeners to divide their attention and recognize speech from two simultaneous talkers in simulated free-field listening conditions, with and without reduced acoustic cues. Participants (N = 11 or 12 depending on the experiment) were asked to recognize and repeat as many words as possible from two simultaneous, time-centered sentences uttered by a male and a female talker. In Experiment 1, the female and male talkers were located at −15° and +15°, −45° and +45°, or −90° and +90° azimuth, respectively. Speech was natural or processed through a noise vocoder and was presented at a comfortable loudness level (∼65 dB SPL). In Experiment 2, the female and male talkers were located at −45° and +45° azimuth, respectively. Speech was natural but was presented at a lower level (35 dB SPL) to reduce audibility. In Experiment 3, speech was vocoded and presented at a comfortable loudness level (∼65 dB SPL), but the location of the talkers was switched relative to Experiment 1 (i.e., the male and female talkers were at −45° and +45°, respectively) to reveal possible interactions of talker sex and location. Listeners recognized overall more natural words at a comfortable loudness level (76%) than vocoded words at a similar level (39%) or natural words at a lower level (43%). This indicates that recognition was more difficult for the latter two stimuli.
On the other hand, listeners recognized roughly the same proportion of words (76%) from the two talkers when speech was natural and comfortable in loudness, but a greater proportion of words from the male than from the female talker when speech was vocoded (50% vs 27%, respectively) or was natural but lower in level (55% vs 32%, respectively). This asymmetry occurred, to a similar degree, for the three spatial configurations. These results suggest that divided listening becomes asymmetric when speech cues are reduced. They also suggest that listeners preferentially recognized the male talker, located on the right side of the head. Switching the talkers' locations produced similar recognition for the two talkers for vocoded speech, suggesting an interaction between the talkers' location and their speech characteristics. For natural speech at a comfortable loudness level, listeners can divide their attention almost equally between two simultaneous talkers. When speech cues are limited (as is the case for vocoded speech or for speech at low sensation level), by contrast, the ability to divide attention equally between talkers is diminished and listeners favor one of the talkers based on their location, sex, and/or speech characteristics. Findings are discussed in the context of limited cognitive capacity affecting divided listening in difficult listening situations.


Subject(s)
Cochlear Implants; Speech Perception; Acoustic Stimulation; Acoustics; Cues; Female; Humans; Male; Speech
5.
Hear Res ; 409: 108320, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34348202

ABSTRACT

Cochlear implant (CI) users find it hard and effortful to understand speech in noise with current devices. Binaural CI sound processing inspired by the contralateral medial olivocochlear (MOC) reflex (an approach termed the 'MOC strategy') can improve speech-in-noise recognition for CI users. All reported evaluations of this strategy, however, disregarded automatic gain control (AGC) and fine-structure (FS) processing, two standard features in some current CI devices. To better assess the potential of implementing the MOC strategy in contemporary CIs, here, we compare intelligibility with and without MOC processing in combination with linked AGC and FS processing. Speech reception thresholds (SRTs) were compared for an FS and a MOC-FS strategy for sentences in steady and fluctuating noises, for various speech levels, in bilateral and unilateral listening modes, and for multiple spatial configurations of the speech and noise sources. Word recall scores and verbal response times in a word recognition test (two proxies for listening effort) were also compared for the two strategies in quiet and in steady noise at 5 dB signal-to-noise ratio (SNR) and the individual SRT. In steady noise, mean SRTs were always equal or better with the MOC-FS than with the standard FS strategy, both in bilateral (the mean and largest improvement across spatial configurations and speech levels were 0.8 and 2.2 dB, respectively) and unilateral listening (mean and largest improvement of 1.7 and 2.1 dB, respectively). In fluctuating noise and in bilateral listening, SRTs were equal for the two strategies. Word recall scores and verbal response times were not significantly affected by the test SNR or the processing strategy. Results show that MOC processing can be combined with linked AGC and FS processing. Compared to using FS processing alone, combined MOC-FS processing can improve speech intelligibility in noise without affecting word recall scores or verbal response times.


Subject(s)
Cochlear Implants; Speech Perception; Listening Effort; Reflex; Speech Intelligibility
6.
Hear Res ; 405: 108246, 2021 06.
Article in English | MEDLINE | ID: mdl-33872834

ABSTRACT

For speech in competition with a noise source in the free field, normal-hearing (NH) listeners recognize speech better when listening binaurally than when listening monaurally with the ear that has the better acoustic signal-to-noise ratio (SNR). This benefit from listening binaurally is known as binaural unmasking and indicates that the brain combines information from the two ears to improve intelligibility. Here, we address three questions pertaining to binaural unmasking for NH listeners. First, we investigate whether binaural unmasking results from combining the speech and/or the noise from the two ears. In a simulated acoustic free field with speech and noise sources at 0° and 270° azimuth, respectively, we found comparable unmasking regardless of whether the speech was present or absent in the ear with the worse SNR. This indicates that binaural unmasking probably involves combining only the noise at the two ears. Second, we investigate whether having binaurally coherent location cues for the noise signal is sufficient for binaural unmasking to occur. We found no unmasking when location cues were coherent but the noise signals themselves were interaurally incoherent, or when the noise was processed unilaterally through a hearing aid with linear, minimal amplification. This indicates that binaural unmasking requires interaural coherence of the noise signals, the source location cues, and the processing. Third, we investigate whether the hypothesized antimasking benefits of the medial olivocochlear reflex (MOCR) contribute to binaural unmasking. We found comparable unmasking regardless of whether speech tokens (words) were sufficiently delayed from the noise onset to fully activate the MOCR or not. Moreover, unmasking was absent when the noise was binaurally incoherent, whereas the physiological antimasking effects of the MOCR are similar for coherent and incoherent noises. This indicates that the MOCR is unlikely to be involved in binaural unmasking.


Subject(s)
Hearing Aids; Noise; Speech Perception; Auditory Perception; Noise/adverse effects; Reflex
7.
Ear Hear ; 41(6): 1492-1510, 2020.
Article in English | MEDLINE | ID: mdl-33136626

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users continue to struggle understanding speech in noisy environments with current clinical devices. We have previously shown that this outcome can be improved by using binaural sound processors inspired by the medial olivocochlear (MOC) reflex, which involve dynamic (contralaterally controlled) rather than fixed compressive acoustic-to-electric maps. The present study aimed at investigating the potential additional benefits of using more realistic implementations of MOC processing. DESIGN: Eight users of bilateral CIs and two users of unilateral CIs participated in the study. Speech reception thresholds (SRTs) for sentences in competition with steady state noise were measured in unilateral and bilateral listening modes. Stimuli were processed through two independently functioning sound processors (one per ear) with fixed compression, the current clinical standard (STD); the originally proposed MOC strategy with fast contralateral control of compression (MOC1); a MOC strategy with slower control of compression (MOC2); and a slower MOC strategy with comparatively greater contralateral inhibition in the lower-frequency than in the higher-frequency channels (MOC3). Performance with the four strategies was compared for multiple simulated spatial configurations of the speech and noise sources. Based on a previously published technical evaluation of these strategies, we hypothesized that SRTs would be overall better (lower) with the MOC3 strategy than with any of the other tested strategies. In addition, we hypothesized that the MOC3 strategy would be advantageous over the STD strategy in listening conditions and spatial configurations where the MOC1 strategy was not. RESULTS: In unilateral listening and when the implant ear had the worse acoustic signal-to-noise ratio, the mean SRT was 4 dB worse for the MOC1 than for the STD strategy (as expected), but it became equal or better for the MOC2 or MOC3 strategies than for the STD strategy. 
In bilateral listening, mean SRTs were 1.6 dB better for the MOC3 strategy than for the STD strategy across all spatial configurations tested, including a condition with speech and noise sources colocated at front where the MOC1 strategy was slightly disadvantageous relative to the STD strategy. All strategies produced significantly better SRTs for spatially separated than for colocated speech and noise sources. A statistically significant binaural advantage (i.e., better mean SRTs across spatial configurations and participants in bilateral than in unilateral listening) was found for the MOC2 and MOC3 strategies but not for the STD or MOC1 strategies. CONCLUSIONS: Overall, performance was best with the MOC3 strategy, which maintained the benefits of the originally proposed MOC1 strategy over the STD strategy for spatially separated speech and noise sources and extended those benefits to additional spatial configurations. In addition, the MOC3 strategy provided a significant binaural advantage, which did not occur with the STD or the original MOC1 strategies.


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Humans; Reflex; Speech
8.
J Neurosci ; 40(34): 6613-6623, 2020 08 19.
Article in English | MEDLINE | ID: mdl-32680938

ABSTRACT

Human hearing adapts to background noise, as evidenced by the fact that listeners recognize more isolated words when the words are presented later rather than earlier in noise. This adaptation likely occurs because the leading noise shifts ("adapts") the dynamic range of auditory neurons, which can improve the neural encoding of speech spectral and temporal cues. Because neural dynamic range adaptation depends on stimulus-level statistics, here we investigated the importance of "statistical" adaptation for improving speech recognition in noisy backgrounds. We compared the recognition of noise-masked words in the presence and in the absence of adapting noise precursors whose level was either constant or changed every 50 ms according to different statistical distributions. Adaptation was measured for 28 listeners (9 men) and was quantified as the recognition improvement in the precursor relative to the no-precursor condition. Adaptation was largest for constant-level precursors and did not occur for highly fluctuating precursors, even when the two types of precursors had the same mean level and both activated the medial olivocochlear reflex. Instantaneous amplitude compression of the highly fluctuating precursor produced as much adaptation as the constant-level precursor did without compression. Together, the results suggest that noise adaptation in speech recognition is probably mediated by neural dynamic range adaptation to the most frequent sound level. Further, they suggest that auditory peripheral compression per se, rather than the medial olivocochlear reflex, could facilitate noise adaptation by reducing the level fluctuations in the noise.

SIGNIFICANCE STATEMENT Recognizing speech in noise is challenging but can be facilitated by noise adaptation. The neural mechanisms underlying this adaptation remain unclear.
Here, we report some benefits of adaptation for word-in-noise recognition and show that (1) adaptation occurs for stationary but not for highly fluctuating precursors with equal mean level; (2) both stationary and highly fluctuating noises activate the medial olivocochlear reflex; and (3) adaptation occurs even for highly fluctuating precursors when the stimuli are passed through a fast amplitude compressor. These findings suggest that noise adaptation reflects neural dynamic range adaptation to the most frequent noise level and that auditory peripheral compression, rather than the medial olivocochlear reflex, could facilitate noise adaptation.
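The role of instantaneous amplitude compression in reducing level fluctuations, the mechanism proposed above, can be illustrated with a toy sketch; the compression exponent is an assumed illustrative value, not a parameter from the study:

```python
import numpy as np

def compress_inst(x, exponent=0.3):
    """Memoryless (instantaneous) amplitude compression: each sample's
    magnitude is raised to a power < 1, preserving its sign. The exponent
    is an illustrative assumption, not a value from the study."""
    return np.sign(x) * np.abs(x) ** exponent

def level_range_db(x):
    """Range of nonzero sample levels, in dB."""
    a = np.abs(x[x != 0])
    return 20 * np.log10(a.max() / a.min())
```

For a waveform whose samples span 40 dB, compression with exponent 0.3 shrinks the span to 12 dB (level ranges scale linearly with the exponent in the log domain), moving a fluctuating precursor closer to the constant-level case that produced the most adaptation.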


Subject(s)
Adaptation, Physiological; Noise; Speech Perception/physiology; Adult; Auditory Threshold/physiology; Female; Humans; Male; Neurons/physiology; Signal-To-Noise Ratio; Young Adult
9.
Hear Res ; 379: 103-116, 2019 08.
Article in English | MEDLINE | ID: mdl-31150955

ABSTRACT

Many users of bilateral cochlear implants (BiCIs) localize sound sources less accurately than do people with normal hearing. This may be partly due to using two independently functioning CIs with fixed compression, which distorts and/or reduces interaural level differences (ILDs). Here, we investigate the potential benefits of using binaurally coupled, dynamic compression inspired by the medial olivocochlear reflex; an approach termed "the MOC strategy" (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). Twelve BiCI users were asked to localize wideband (125-6000 Hz) noise tokens in a virtual horizontal plane. Stimuli were processed through a standard (STD) sound processing strategy (i.e., involving two independently functioning sound processors with fixed compression) and three different implementations of the MOC strategy: one with fast (MOC1) and two with slower contralateral control of compression (MOC2 and MOC3). The MOC1 and MOC2 strategies had effectively greater inhibition in the higher than in the lower frequency channels, while the MOC3 strategy had slightly greater inhibition in the lower than in the higher frequency channels. Localization was most accurate with the MOC1 strategy, presumably because it provided the largest and least ambiguous ILDs. The angle error improved slightly from 25.3° with the STD strategy to 22.7° with the MOC1 strategy. The improvement in localization ability over the STD strategy disappeared when the contralateral control of compression was made slower, presumably because the stimuli were too short (200 ms) for the slower contralateral inhibition to enhance ILDs. Results suggest that some MOC implementations hold promise for improving not only speech-in-noise intelligibility, as shown elsewhere, but also sound source lateralization.


Subject(s)
Cochlear Implants; Sound Localization/physiology; Acoustic Stimulation; Adolescent; Adult; Aged; Aged, 80 and over; Basilar Membrane/physiopathology; Cochlear Implants/statistics & numerical data; Data Compression; Electronic Data Processing; Female; Hearing Loss, Bilateral/physiopathology; Hearing Loss, Bilateral/rehabilitation; Humans; Male; Middle Aged; Organ of Corti/physiopathology; Reflex, Acoustic/physiology; Superior Olivary Complex/physiopathology
10.
Cognition ; 192: 103992, 2019 11.
Article in English | MEDLINE | ID: mdl-31254890

ABSTRACT

In difficult listening situations, such as in noisy environments, one would expect speech intelligibility to improve over time thanks to noise adaptation and/or to speech predictability facilitating the recognition of upcoming words. We tested this possibility by presenting normal-hearing human listeners (N = 100; 70 women) with sentences and measuring word recognition as a function of word position in a sentence. Sentences were presented in quiet and in competition with various masker sounds at individualized levels where listeners had 50% probability of recognizing a full sentence. Contrary to expectations, recognition was best for the first word and gradually deteriorated with increasing word position along the sentence. The worsening in recognition was unlikely due to differences in word audibility or word type and was uncorrelated with age or working memory capacity. Using a probabilistic model of word recognition, we show that the worsening effect probably occurs because misunderstandings generate inaccurate predictions that outweigh the benefits from accurate predictions. Analyses also revealed that predictions overruled the potential benefits from noise adaptation. We conclude that although speech predictability can facilitate sentence recognition, it can also result in declines in word recognition as the sentence unfolds because of inaccuracies in prediction.
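The prediction-driven decline can be reproduced with a toy recognition model (purely illustrative; not the probabilistic model used in the study): each word's recognition probability is scaled up after a correctly recognized word (an accurate prediction) and scaled down after a misunderstanding (an inaccurate prediction). With the assumed parameters below, the penalty from misunderstandings outweighs the boost from accurate predictions, so recognition declines with word position:

```python
import numpy as np

def recognition_by_position(n_words=6, p_base=0.8, boost=1.1, penalty=0.6,
                            n_trials=20000, seed=0):
    """Toy sequential word-recognition model (illustrative assumption, not
    the published model). The recognition probability of word i depends on
    whether word i-1 was recognized. Returns per-position recognition rates."""
    rng = np.random.default_rng(seed)
    rates = np.zeros(n_words)
    for _ in range(n_trials):
        prev_ok = True  # the first word carries no prior misunderstanding
        for i in range(n_words):
            p = min(1.0, p_base * (boost if prev_ok else penalty))
            prev_ok = rng.random() < p
            rates[i] += prev_ok
    return rates / n_trials
```

With these assumed parameters the first word is recognized most often and later positions settle at a lower steady-state rate, qualitatively matching the reported worsening with word position.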


Subject(s)
Recognition, Psychology; Speech Intelligibility; Speech Perception; Adaptation, Psychological; Adolescent; Adult; Child; Female; Humans; Male; Middle Aged; Noise; Perceptual Masking; Young Adult
11.
Hear Res ; 377: 133-141, 2019 06.
Article in English | MEDLINE | ID: mdl-30933705

ABSTRACT

The detection of amplitude modulation (AM) in quiet or in noise improves when the AM carrier is preceded by noise, an effect that has been attributed to the medial olivocochlear reflex (MOCR). We investigate whether this improvement can occur without the MOCR by measuring AM sensitivity for cochlear implant (CI) users, whose MOCR effects are circumvented as a result of the electrical stimulation provided by the CI. AM detection thresholds were measured monaurally for short (50 ms) AM probes presented at the onset (early condition) or delayed by 300 ms (late condition) from the onset of a broadband noise. The noise was presented ipsilaterally, contralaterally and bilaterally to the test ear. Stimuli were processed through an experimental, time-invariant sound processing strategy. On average, thresholds were 4 dB better in the late than in the early condition and the size of the improvement was similar for the three noise lateralities. The pattern and magnitude of the improvement was broadly consistent with that for normal hearing listeners [Marrufo-Pérez et al., 2018, J Assoc Res Otolaryngol 19:147-161]. Because the electrical stimulation provided by CIs is independent from the middle-ear muscle reflex (MEMR) or the MOCR, this shows that mechanisms other than the MEMR or the MOCR can facilitate AM detection in noisy backgrounds.


Subject(s)
Auditory Perception; Cochlear Implantation/instrumentation; Cochlear Implants; Noise/adverse effects; Persons With Hearing Impairments/rehabilitation; Acoustic Stimulation; Adaptation, Psychological; Adolescent; Adult; Aged; Auditory Threshold; Child; Cochlea/innervation; Electric Stimulation; Female; Hearing; Humans; Male; Middle Aged; Perceptual Masking; Persons With Hearing Impairments/psychology; Reflex; Superior Olivary Complex/physiopathology; Time Factors
12.
J Acoust Soc Am ; 143(4): 2217, 2018 04.
Article in English | MEDLINE | ID: mdl-29716283

ABSTRACT

It has been recently shown that cochlear implant users could enjoy better speech reception in noise and enhanced spatial unmasking with binaural audio processing inspired by the inhibitory effects of the contralateral medial olivocochlear (MOC) reflex on compression [Lopez-Poveda, Eustaquio-Martin, Stohl, Wolford, Schatzer, and Wilson (2016). Ear Hear. 37, e138-e148]. The perceptual evidence supporting those benefits, however, is limited to a few target-interferer spatial configurations and to a particular implementation of contralateral MOC inhibition. Here, the short-term objective intelligibility index is used to (1) objectively demonstrate potential benefits over many more spatial configurations, and (2) investigate whether the predicted benefits may be enhanced by using more realistic MOC implementations. Results corroborate the advantages and drawbacks of MOC processing indicated by the previously published perceptual tests. The results also suggest that the benefits may be enhanced and the drawbacks overcome by using longer time constants for the activation and deactivation of inhibition and, to a lesser extent, by using comparatively greater inhibition in the lower than in the higher frequency channels. Compared to using two functionally independent processors, the better MOC processor improved the signal-to-noise ratio in the two ears by 1 to 6 dB through enhanced head-shadow effects, and was advantageous for all tested target-interferer spatial configurations.


Subject(s)
Auditory Pathways/physiology; Cochlear Implants/standards; Deafness/rehabilitation; Reflex; Sound; Speech Perception/physiology; Cochlear Nerve/physiology; Humans; Olivary Nucleus/physiology; Perceptual Masking; Signal-To-Noise Ratio
13.
J Neurosci ; 38(17): 4138-4145, 2018 04 25.
Article in English | MEDLINE | ID: mdl-29593051

ABSTRACT

Sensory systems constantly adapt their responses to the current environment. In hearing, adaptation may facilitate communication in noisy settings, a benefit frequently (but controversially) attributed to the medial olivocochlear reflex (MOCR) enhancing the neural representation of speech. Here, we show that human listeners (N = 14; five male) recognize more words presented monaurally in ipsilateral, contralateral, and bilateral noise when they are given some time to adapt to the noise. This finding challenges models and theories that claim that speech intelligibility in noise is invariant over time. In addition, we show that this adaptation to the noise occurs also for words processed to maintain the slow-amplitude modulations in speech (the envelope) while disregarding the faster fluctuations (the temporal fine structure). This demonstrates that noise adaptation reflects an enhancement of amplitude modulation speech cues and is unaffected by temporal fine structure cues. Last, we show that cochlear implant users (N = 7; four male) show normal monaural adaptation to ipsilateral noise. Because the electrical stimulation delivered by cochlear implants is independent from the MOCR, this demonstrates that noise adaptation does not require the MOCR. We argue that noise adaptation probably reflects adaptation of the dynamic range of auditory neurons to the noise level statistics.

SIGNIFICANCE STATEMENT People find it easier to understand speech in noisy environments when they are given some time to adapt to the noise. This benefit is frequently but controversially attributed to the medial olivocochlear efferent reflex enhancing the representation of speech cues in the auditory nerve. Here, we show that the adaptation to noise reflects an enhancement of the slow fluctuations in amplitude over time that are present in speech. In addition, we show that adaptation to noise for cochlear implant users is not statistically different from that for listeners with normal hearing.
Because the electrical stimulation delivered by cochlear implants is independent from the medial olivocochlear efferent reflex, this demonstrates that adaptation to noise does not require this reflex.


Subject(s)
Adaptation, Physiological; Cochlear Nucleus/physiology; Olivary Nucleus/physiology; Reflex; Speech Perception; Adult; Cochlear Implants; Cochlear Nucleus/cytology; Female; Humans; Male; Neurons, Efferent/physiology; Noise; Olivary Nucleus/cytology
14.
J Assoc Res Otolaryngol ; 19(2): 147-161, 2018 04.
Article in English | MEDLINE | ID: mdl-29508100

ABSTRACT

The amplitude modulations (AMs) in speech signals are useful cues for speech recognition. Several adaptation mechanisms may make the detection of AM in noisy backgrounds easier when the AM carrier is presented later rather than earlier in the noise. The aim of the present study was to characterize temporal adaptation to noise in AM detection. AM detection thresholds were measured for monaural (50 ms, 1.5 kHz) pure-tone carriers presented at the onset ('early' condition) and 300 ms after the onset ('late' condition) of ipsilateral, contralateral, and bilateral (diotic) broadband noise, as well as in quiet. Thresholds were 2-4 dB better in the late than in the early condition for the three noise lateralities. The temporal effect held for carriers at equal sensation levels, confirming that it was not due to overshoot on carrier audibility. The temporal effect was larger for broadband than for low-band contralateral noises. Many aspects in the results were consistent with the noise activating the medial olivocochlear reflex (MOCR) and enhancing AM depth in the peripheral auditory response. Other aspects, however, indicate that central masking and adaptation unrelated to the MOCR also affect both carrier-tone and AM detection and are involved in the temporal effects.


Subject(s)
Auditory Threshold; Hearing; Noise; Humans
15.
Hear Res ; 348: 134-137, 2017 05.
Article in English | MEDLINE | ID: mdl-28188882

ABSTRACT

We have recently proposed a binaural cochlear implant (CI) sound processing strategy inspired by the contralateral medial olivocochlear reflex (the MOC strategy) and shown that it improves intelligibility in steady-state noise (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). The aim here was to evaluate possible speech-reception benefits of the MOC strategy for speech maskers, a more natural type of interferer. Speech reception thresholds (SRTs) were measured in six bilateral and two single-sided deaf CI users with the MOC strategy and with a standard (STD) strategy. SRTs were measured in unilateral and bilateral listening conditions, and for target and masker stimuli located at azimuthal angles of (0°, 0°), (-15°, +15°), and (-90°, +90°). Mean SRTs were 2-5 dB better with the MOC than with the STD strategy for spatially separated target and masker sources. For bilateral CI users, the MOC strategy (1) facilitated the intelligibility of speech in competition with spatially separated speech maskers in both unilateral and bilateral listening conditions; and (2) led to an overall improvement in spatial release from masking in the two listening conditions. Insofar as speech is a more natural type of interferer than steady-state noise, the present results suggest that the MOC strategy holds potential for promising outcomes for CI users.


Subject(s)
Cochlear Implantation/methods , Cochlear Implants , Hearing , Speech Intelligibility , Adult , Aged, 80 and over , Auditory Threshold , Child , Female , Humans , Male , Middle Aged , Noise , Perceptual Masking , Sound , Sound Localization , Speech , Speech Perception , Speech Reception Threshold Test , Treatment Outcome
16.
Adv Exp Med Biol ; 894: 105-114, 2016.
Article in English | MEDLINE | ID: mdl-27080651

ABSTRACT

Our two ears do not function as fixed and independent sound receptors; their functioning is coupled and dynamically adjusted via the contralateral medial olivocochlear efferent reflex (MOCR). The MOCR possibly facilitates speech recognition in noisy environments. Such a role, however, is yet to be demonstrated because selective deactivation of the reflex during natural acoustic listening has so far not been possible in human subjects. Here, we propose that this and other roles of the MOCR may be elucidated using the unique stimulus controls provided by cochlear implants (CIs). Pairs of sound processors were constructed to mimic or not mimic the effects of the contralateral MOCR with CIs. In the non-mimicking condition (STD strategy), the two processors in a pair functioned independently of each other. When configured to mimic the effects of the MOCR (MOC strategy), however, the two processors communicated with each other, and the amount of compression in a given frequency channel of each processor in the pair decreased with increases in the output energy from the contralateral processor. The analysis of output signals from the STD and MOC strategies suggests that in natural binaural listening, the MOCR possibly causes a small reduction of audibility but enhances frequency-specific interaural level differences and the segregation of spatially non-overlapping sound sources. The proposed MOC strategy could improve the performance of CI and hearing-aid users.


Subject(s)
Cochlea/physiology , Cochlear Implants , Hearing/physiology , Reflex, Acoustic/physiology , Humans
17.
Ear Hear ; 37(3): e138-48, 2016.
Article in English | MEDLINE | ID: mdl-26862711

ABSTRACT

OBJECTIVES: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help listeners understand speech in noisy environments and are not available to users of current cochlear implants (CIs). The aims of the present study were to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. DESIGN: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy, or STD), the two processors in a pair functioned like standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other, and the amount of back-end compression in a given frequency channel of each processor decreased dynamically (lowering output levels) as the output energy from the corresponding frequency channel in the contralateral processor increased, and vice versa. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech at the left ear with noise in front of the listener, and speech at the left ear with noise at the right ear.
In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. RESULTS: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing spatial separation between the speech and noise sources regardless of the strategy, but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing speech-noise spatial separation only with the MOC strategy. CONCLUSIONS: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, in both unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
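The contralateral control of back-end compression described in the DESIGN section can be sketched roughly as follows. The logarithmic compression function is a common CI back-end form; the linear mapping from contralateral channel energy to the compression parameter and the values c_min/c_max are illustrative assumptions, not the published implementation:

```python
import numpy as np

def log_compress(x, c):
    """CI-style logarithmic back-end compression of envelopes in [0, 1].
    A larger c gives more compression (more gain for weak envelopes)."""
    return np.log(1.0 + c * np.asarray(x, dtype=float)) / np.log(1.0 + c)

def moc_c(contra_energy, c_min=10.0, c_max=1000.0):
    """More contralateral channel energy -> smaller c -> less compression
    and lower output. The linear mapping and c_min/c_max are assumptions."""
    e = np.clip(np.asarray(contra_energy, dtype=float), 0.0, 1.0)
    return c_max - e * (c_max - c_min)

def moc_pair(env_left, env_right):
    """Compress each processor's per-channel envelopes with a parameter
    controlled by the contralateral channel energies (approximated here
    by the contralateral input envelopes rather than processor outputs)."""
    return (log_compress(env_left, moc_c(env_right)),
            log_compress(env_right, moc_c(env_left)))
```

With a fixed c in both processors the sketch reduces to the STD strategy.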


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness/rehabilitation , Reflex , Speech Perception , Female , Humans , Male , Software
18.
J Assoc Res Otolaryngol ; 14(5): 673-86, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23690279

ABSTRACT

In signal processing terms, the operation of the mammalian cochlea in the inner ear may be likened to a bank of filters. Based on otoacoustic emission evidence, it has recently been claimed that cochlear tuning is sharper for humans than for other mammals. The claim was corroborated with a behavioral method that involves the masking of pure tones with forward notched noises (NN). Using this method, it has further been claimed that human cochlear tuning is sharper than suggested by earlier behavioral studies. These claims are controversial. Here, we contribute to the controversy by theoretically assessing the accuracy of the NN method at inferring the bandwidth (BW) of nonlinear cochlear filters. Behavioral forward masking was mimicked using a computer model of the squared basilar membrane response followed by a temporal integrator. Isoresponse and isolevel versions of the forward masking NN method were applied to infer the already known BW of the cochlear filter used in the model. We show that isolevel methods were overall more accurate than isoresponse methods. We also show that BWs for NNs and sinusoids are equal only for isolevel methods and only when the levels of the two stimuli are appropriately scaled. Lastly, we show that the inferred BW depends on the method version (the isolevel BW was twice as broad as the isoresponse BW at 40 dB SPL) and on the stimulus level (isoresponse and isolevel BWs decreased and increased, respectively, with increasing level over the range where cochlear responses went from linear to compressive). We suggest that the latter may help explain the reported differences in cochlear tuning across behavioral studies and species. We further suggest that, given the well-established nonlinear nature of cochlear responses, even greater care must be exercised when using a single BW value to describe and compare cochlear tuning.
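The model front end named in this abstract (a squared compressive basilar-membrane response followed by a temporal integrator) can be sketched as below. The broken-stick nonlinearity and all parameter values are illustrative stand-ins for the actual cochlear filter model:

```python
import numpy as np

def broken_stick(x, a=1e4, b=1.0, c=0.25):
    """Toy compressive nonlinearity: linear (gain a) for weak inputs,
    compressive (exponent c) for stronger ones. Values are illustrative."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.minimum(a * np.abs(x), b * np.abs(x) ** c)

def temporal_integrator(sq, fs, tau=0.005):
    """Leaky (exponential-window) integration of the squared response."""
    alpha = np.exp(-1.0 / (fs * tau))
    out = np.empty_like(sq)
    acc = 0.0
    for i, v in enumerate(sq):
        acc = alpha * acc + (1.0 - alpha) * v
        out[i] = acc
    return out

def internal_response(x, fs):
    """Squared compressive response followed by a temporal integrator:
    the masking front end described in the abstract, much simplified."""
    return temporal_integrator(broken_stick(x) ** 2, fs)
```

In such models the masked threshold is the signal level at which the integrator output for masker-plus-signal exceeds that for the masker alone by some criterion.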


Subject(s)
Cochlea/physiology , Computer Simulation , Models, Biological , Otoacoustic Emissions, Spontaneous/physiology , Pitch Perception/physiology , Animals , Audiometry, Pure-Tone/methods , Chinchilla , Humans , Mammals , Noise , Nonlinear Dynamics , Perceptual Masking/physiology
19.
Adv Exp Med Biol ; 787: 47-54, 2013.
Article in English | MEDLINE | ID: mdl-23716208

ABSTRACT

In binaural listening, the two cochleae do not act as independent sound receptors; their functioning is linked via the contralateral medial olivocochlear reflex (MOCR), which can be activated by contralateral sounds. The present study aimed to characterize the effect of a contralateral white noise (CWN) on psychophysical tuning curves (PTCs). PTCs were measured in forward masking for probe frequencies of 500 Hz and 4 kHz, with and without CWN. The sound pressure level of the probe was fixed across conditions. PTCs for different response criteria were measured by using various masker-probe time gaps. The CWN had no significant effects on PTCs at 4 kHz. At 500 Hz, by contrast, PTCs measured with CWN appeared broader, particularly for short gaps, and they showed a decrease in masker level. This decrease was greater for longer masker-probe time gaps. A computer model of forward masking with efferent control of cochlear gain was used to explain the data. The model accounted for the data under the assumption that the sole effect of the CWN was to reduce the cochlear gain by ∼6.5 dB at 500 Hz for low and moderate levels. It also suggested that the pattern of data at 500 Hz results from the combination of a broad bandwidth of compression and off-frequency listening. Results are discussed in relation to other physiological and psychoacoustical studies on the effects of MOCR activation on cochlear function.
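The model's central assumption, that the CWN merely reduces cochlear gain by ∼6.5 dB at 500 Hz for low and moderate levels, can be illustrated with a toy dB-scale input/output function. In this sketch the reduction is applied uniformly, whereas in the full model it vanishes at high levels; the knee, gain, and compression slope are assumptions:

```python
import numpy as np

def cochlear_io_db(x_db, gain_db=40.0, knee_db=30.0, slope=0.25):
    """Toy input/output function on a dB scale: linear below the knee,
    compressive (slope 0.25) above it. All values are illustrative."""
    x_db = np.asarray(x_db, dtype=float)
    linear = x_db + gain_db
    compressed = knee_db + gain_db + slope * (x_db - knee_db)
    return np.where(x_db < knee_db, linear, compressed)

def with_contralateral_noise(x_db, mocr_db=6.5, **kw):
    """Model the CWN as a reduction of cochlear gain by ~6.5 dB, the
    value inferred for 500 Hz in the abstract. Applied uniformly here;
    in the full model the reduction is confined to low/moderate levels."""
    gain = kw.pop("gain_db", 40.0) - mocr_db
    return cochlear_io_db(x_db, gain_db=gain, **kw)
```

A gain reduction of this kind lowers the response to a fixed probe, which is how the model links the CWN to changes in the measured masker levels.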


Subject(s)
Auditory Perception/physiology , Cochlea/physiology , Computer Simulation , Models, Biological , Psychoacoustics , Acoustic Stimulation/methods , Behavior , Efferent Pathways/physiology , Functional Laterality/physiology , Humans , Perceptual Masking/physiology
20.
J Assoc Res Otolaryngol ; 14(3): 341-57, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23423559

ABSTRACT

Medial olivocochlear efferent neurons can control cochlear frequency selectivity and may be activated in a reflexive manner by contralateral sounds. The present study investigated the influence of the contralateral medial olivocochlear reflex (MOCR) on human psychoacoustical tuning curves (PTCs), a behavioral correlate of cochlear tuning curves. PTCs were measured using forward masking in the presence and in the absence of a contralateral white noise assumed to elicit the MOCR. To assess MOCR effects on apical and basal cochlear regions over a wide range of sound levels, PTCs were measured for probe frequencies of 500 Hz and 4 kHz and for near- and suprathreshold conditions. Results show that the contralateral noise affected the PTCs predominantly at 500 Hz. At near-threshold levels, its effect was obvious only for frequencies in the tails of the PTCs; at suprathreshold levels, its effects were obvious at all frequencies. It was verified that the effects were not due to the contralateral noise activating the middle-ear muscle reflex or changing the postmechanical rate of recovery from forward masking. A phenomenological computer model of forward masking with efferent control was used to explain the data. The model supports the hypothesis that the behavioral results were due to the contralateral noise reducing apical cochlear gain in a frequency- and level-dependent manner consistent with physiological evidence. Altogether, this suggests that the contralateral MOCR may change apical cochlear responses in natural, binaural listening situations.


Subject(s)
Cochlea/physiology , Models, Biological , Perceptual Masking , Psychoacoustics , Adult , Auditory Threshold , Computer Simulation , Female , Healthy Volunteers , Humans , Male , Neurons, Efferent/physiology , Reflex, Acoustic