Results 1 - 6 of 6
1.
Hear Res ; 441: 108917, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38061268

ABSTRACT

Previous studies have shown that in challenging listening situations, people find it hard to divide their attention equally between two simultaneous talkers and tend to favor one talker over the other. The aim here was to investigate whether talker onset/offset, sex, and location determine the favored talker. Fifteen people with normal hearing were asked to recognize as many words as possible from two sentences uttered by two talkers located at −45° and +45° azimuth, respectively. The sentences were from the same corpus, were time-centered, and had equal sound level. In Conditions 1 and 2, the talkers had different sexes (male at +45°), sentence duration was not controlled for, and sentences were presented at 65 and 35 dB SPL, respectively. Listeners favored the male over the female talker, even more so at 35 dB SPL (62% vs 43% word recognition, respectively) than at 65 dB SPL (74% vs 64%, respectively). The greater asymmetry in intelligibility at the lower level supports the idea that divided listening is harder and more 'asymmetric' in challenging acoustic scenarios. Listeners continued to favor the male talker when the experiment was repeated with sentences of equal average duration for the two talkers (Condition 3). This suggests that the earlier onset or later offset of the male sentences (52 ms on average) was not the reason for the asymmetric intelligibility in Conditions 1 and 2. When the location of the talkers was switched (Condition 4) or the two talkers were the same woman (Condition 5), listeners continued to favor the talker to their right, albeit non-significantly. Altogether, the results confirm that in hard divided-listening situations, listeners tend to favor the talker to their right. This preference is not affected by talker onset/offset delays of less than 52 ms on average. Instead, it seems to be modulated by the voice characteristics of the talkers.


Subject(s)
Speech Perception; Voice; Humans; Male; Female; Speech Intelligibility; Language; Acoustics
2.
Hear Res ; 416: 108444, 2022 03 15.
Article in English | MEDLINE | ID: mdl-35078133

ABSTRACT

Verbal communication in social environments often requires dividing attention between two or more simultaneous talkers. The ability to do this, however, may be diminished when the listener has limited access to acoustic cues or those cues are degraded, as is the case for hearing-impaired listeners or users of cochlear implants or hearing aids. The aim of the present study was to investigate the ability of normal-hearing (NH) listeners to divide their attention and recognize speech from two simultaneous talkers in simulated free-field listening conditions, with and without reduced acoustic cues. Participants (N = 11 or 12, depending on the experiment) were asked to recognize and repeat as many words as possible from two simultaneous, time-centered sentences uttered by a male and a female talker. In Experiment 1, the female and male talkers were located at −15° and +15°, −45° and +45°, or −90° and +90° azimuth, respectively. Speech was natural or processed through a noise vocoder and was presented at a comfortable loudness level (∼65 dB SPL). In Experiment 2, the female and male talkers were located at −45° and +45° azimuth, respectively. Speech was natural but was presented at a lower level (35 dB SPL) to reduce audibility. In Experiment 3, speech was vocoded and presented at a comfortable loudness level (∼65 dB SPL), but the location of the talkers was switched relative to Experiment 1 (i.e., the male and female talkers were at −45° and +45°, respectively) to reveal possible interactions of talker sex and location. Listeners recognized more natural words at a comfortable loudness level (76%) than vocoded words at a similar level (39%) or natural words at a lower level (43%), indicating that recognition was more difficult for the latter two stimuli. On the other hand, listeners recognized roughly the same proportion of words (76%) from the two talkers when speech was natural and comfortable in loudness, but a greater proportion of words from the male than from the female talker when speech was vocoded (50% vs 27%, respectively) or natural but lower in level (55% vs 32%, respectively). This asymmetry occurred, and was similar in magnitude, for all three spatial configurations. These results suggest that divided listening becomes asymmetric when speech cues are reduced. They also suggest that listeners preferentially recognized the male talker, who was located on the right side of the head. Switching the talkers' locations produced similar recognition for the two talkers for vocoded speech, suggesting an interaction between the talkers' location and their speech characteristics. For natural speech at a comfortable loudness level, listeners can divide their attention almost equally between two simultaneous talkers. When speech cues are limited (as is the case for vocoded speech or for speech at a low sensation level), by contrast, the ability to divide attention equally between talkers is diminished and listeners favor one of the talkers based on their location, sex, and/or speech characteristics. Findings are discussed in the context of a limited cognitive capacity affecting divided listening in difficult listening situations.
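The noise vocoder mentioned in this abstract is a standard way of simulating CI processing: the signal is split into frequency bands, each band's temporal envelope is extracted, and the envelopes modulate band-limited noise carriers, discarding temporal fine structure. A minimal sketch follows; the channel count, band edges, and FFT-based filtering are illustrative assumptions, not the study's actual processing chain:

```python
import numpy as np

def _analytic(x):
    """FFT-based analytic signal (same idea as scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def noise_vocode(signal, fs, n_channels=8, fmin=100.0, fmax=6000.0, seed=0):
    """Minimal noise vocoder: log-spaced FFT bands, Hilbert envelopes,
    envelopes modulate band-limited noise carriers."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    sig_spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, sig_spec, 0), n)
        carrier = np.fft.irfft(np.where(mask, noise_spec, 0), n)
        env = np.abs(_analytic(band))  # slowly varying band envelope
        out += env * carrier           # envelope replaces fine structure
    return out
```

The summed output retains envelope cues but degraded spectral and fine-structure detail, which is consistent with vocoded-word recognition (39%) falling well below natural-speech recognition (76%) in Experiment 1.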


Subject(s)
Cochlear Implants; Speech Perception; Acoustic Stimulation; Acoustics; Cues; Female; Humans; Male; Speech
3.
Hear Res ; 409: 108320, 2021 09 15.
Article in English | MEDLINE | ID: mdl-34348202

ABSTRACT

Cochlear implant (CI) users find it hard and effortful to understand speech in noise with current devices. Binaural CI sound processing inspired by the contralateral medial olivocochlear (MOC) reflex (an approach termed the 'MOC strategy') can improve speech-in-noise recognition for CI users. All reported evaluations of this strategy, however, disregarded automatic gain control (AGC) and fine-structure (FS) processing, two standard features in some current CI devices. To better assess the potential of implementing the MOC strategy in contemporary CIs, here we compare intelligibility with and without MOC processing in combination with linked AGC and FS processing. Speech reception thresholds (SRTs) were compared for an FS and a MOC-FS strategy for sentences in steady and fluctuating noises, for various speech levels, in bilateral and unilateral listening modes, and for multiple spatial configurations of the speech and noise sources. Word recall scores and verbal response times in a word recognition test (two proxies for listening effort) were also compared for the two strategies in quiet and in steady noise at a 5 dB signal-to-noise ratio (SNR) and at the individual SRT. In steady noise, mean SRTs with the MOC-FS strategy were always equal to or better than those with the standard FS strategy, both in bilateral listening (the mean and largest improvements across spatial configurations and speech levels were 0.8 and 2.2 dB, respectively) and in unilateral listening (mean and largest improvements of 1.7 and 2.1 dB, respectively). In fluctuating noise and in bilateral listening, SRTs were equal for the two strategies. Word recall scores and verbal response times were not significantly affected by the test SNR or the processing strategy. These results show that MOC processing can be combined with linked AGC and FS processing. Compared to FS processing alone, combined MOC-FS processing can improve speech intelligibility in noise without affecting word recall scores or verbal response times.


Subject(s)
Cochlear Implants; Speech Perception; Listening Effort; Reflex; Speech Intelligibility
4.
Ear Hear ; 41(6): 1492-1510, 2020.
Article in English | MEDLINE | ID: mdl-33136626

ABSTRACT

OBJECTIVES: Cochlear implant (CI) users continue to struggle to understand speech in noisy environments with current clinical devices. We have previously shown that this outcome can be improved by using binaural sound processors inspired by the medial olivocochlear (MOC) reflex, which involve dynamic (contralaterally controlled) rather than fixed compressive acoustic-to-electric maps. The present study aimed to investigate the potential additional benefits of using more realistic implementations of MOC processing. DESIGN: Eight users of bilateral CIs and two users of unilateral CIs participated in the study. Speech reception thresholds (SRTs) for sentences in competition with steady-state noise were measured in unilateral and bilateral listening modes. Stimuli were processed through four strategies: two independently functioning sound processors (one per ear) with fixed compression, the current clinical standard (STD); the originally proposed MOC strategy with fast contralateral control of compression (MOC1); a MOC strategy with slower control of compression (MOC2); and a slower MOC strategy with comparatively greater contralateral inhibition in the lower-frequency than in the higher-frequency channels (MOC3). Performance with the four strategies was compared for multiple simulated spatial configurations of the speech and noise sources. Based on a previously published technical evaluation of these strategies, we hypothesized that SRTs would be overall better (lower) with the MOC3 strategy than with any of the other tested strategies. In addition, we hypothesized that the MOC3 strategy would be advantageous over the STD strategy in listening conditions and spatial configurations where the MOC1 strategy was not. RESULTS: In unilateral listening and when the implant ear had the worse acoustic signal-to-noise ratio, the mean SRT was 4 dB worse for the MOC1 than for the STD strategy (as expected), but it became equal or better for the MOC2 and MOC3 strategies than for the STD strategy. In bilateral listening, mean SRTs were 1.6 dB better for the MOC3 strategy than for the STD strategy across all spatial configurations tested, including a condition with the speech and noise sources colocated at front, where the MOC1 strategy was slightly disadvantageous relative to the STD strategy. All strategies produced significantly better SRTs for spatially separated than for colocated speech and noise sources. A statistically significant binaural advantage (i.e., better mean SRTs across spatial configurations and participants in bilateral than in unilateral listening) was found for the MOC2 and MOC3 strategies, but not for the STD or MOC1 strategies. CONCLUSIONS: Overall, performance was best with the MOC3 strategy, which maintained the benefits of the originally proposed MOC1 strategy over the STD strategy for spatially separated speech and noise sources and extended those benefits to additional spatial configurations. In addition, the MOC3 strategy provided a significant binaural advantage, which did not occur with the STD or the original MOC1 strategies.
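The core contrast between the STD and MOC strategies — a fixed compressive acoustic-to-electric map versus one dynamically linearized by the contralateral ear — can be sketched in a few lines. The logarithmic map and the linear inhibition rule below are illustrative assumptions, not the published implementation, which uses channel-specific, time-smoothed contralateral control:

```python
import numpy as np

def compress(env, c):
    # CI-style compressive map of a channel envelope (env in [0, 1]);
    # larger c means stronger compression (more gain for low-level inputs)
    return np.log(1 + c * env) / np.log(1 + c)

def moc_frame(env_left, env_right, c_max=1000.0, c_min=50.0):
    """One-frame sketch of contralaterally controlled compression:
    each ear's compression parameter is inhibited (the map is made
    more linear) by the contralateral channel envelope.
    c_max and c_min are hypothetical parameter values."""
    c_left = c_max - (c_max - c_min) * env_right   # per-channel arrays
    c_right = c_max - (c_max - c_min) * env_left
    return compress(env_left, c_left), compress(env_right, c_right)
```

With a silent contralateral ear the map stays fully compressive, as in the STD strategy; a loud contralateral channel linearizes the map, reducing the gain applied to low-level inputs in that channel. The MOC1/MOC2/MOC3 variants differ in how fast, and with what frequency weighting, this contralateral control operates.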


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Humans; Reflex; Speech
5.
Hear Res ; 379: 103-116, 2019 08.
Article in English | MEDLINE | ID: mdl-31150955

ABSTRACT

Many users of bilateral cochlear implants (BiCIs) localize sound sources less accurately than do people with normal hearing. This may be partly because they use two independently functioning CIs with fixed compression, which distorts and/or reduces interaural level differences (ILDs). Here, we investigate the potential benefits of using binaurally coupled, dynamic compression inspired by the medial olivocochlear reflex, an approach termed "the MOC strategy" (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). Twelve BiCI users were asked to localize wideband (125-6000 Hz) noise tokens in a virtual horizontal plane. Stimuli were processed through a standard (STD) sound processing strategy (i.e., two independently functioning sound processors with fixed compression) and three different implementations of the MOC strategy: one with fast (MOC1) and two with slower contralateral control of compression (MOC2 and MOC3). The MOC1 and MOC2 strategies had effectively greater inhibition in the higher- than in the lower-frequency channels, while the MOC3 strategy had slightly greater inhibition in the lower- than in the higher-frequency channels. Localization was most accurate with the MOC1 strategy, presumably because it provided the largest and least ambiguous ILDs. The mean angle error improved slightly, from 25.3° with the STD strategy to 22.7° with the MOC1 strategy. The improvement in localization over the STD strategy disappeared when the contralateral control of compression was made slower, presumably because the stimuli were too short (200 ms) for the slower contralateral inhibition to enhance ILDs. The results suggest that some MOC implementations hold promise for improving not only speech-in-noise intelligibility, as shown elsewhere, but also sound source lateralization.


Subject(s)
Cochlear Implants; Sound Localization/physiology; Acoustic Stimulation; Adolescent; Adult; Aged; Aged, 80 and over; Basilar Membrane/physiopathology; Cochlear Implants/statistics & numerical data; Data Compression; Electronic Data Processing; Female; Hearing Loss, Bilateral/physiopathology; Hearing Loss, Bilateral/rehabilitation; Humans; Male; Middle Aged; Organ of Corti/physiopathology; Reflex, Acoustic/physiology; Superior Olivary Complex/physiopathology
6.
Hear Res ; 377: 133-141, 2019 06.
Article in English | MEDLINE | ID: mdl-30933705

ABSTRACT

The detection of amplitude modulation (AM) in quiet or in noise improves when the AM carrier is preceded by noise, an effect that has been attributed to the medial olivocochlear reflex (MOCR). We investigate whether this improvement can occur without the MOCR by measuring AM sensitivity for cochlear implant (CI) users, whose MOCR effects are circumvented by the electrical stimulation provided by the CI. AM detection thresholds were measured monaurally for short (50 ms) AM probes presented at the onset of a broadband noise (early condition) or delayed by 300 ms from the noise onset (late condition). The noise was presented ipsilaterally, contralaterally, and bilaterally to the test ear. Stimuli were processed through an experimental, time-invariant sound processing strategy. On average, thresholds were 4 dB better in the late than in the early condition, and the size of the improvement was similar for the three noise lateralities. The pattern and magnitude of the improvement were broadly consistent with those for normal-hearing listeners [Marrufo-Pérez et al., 2018, J Assoc Res Otolaryngol 19:147-161]. Because the electrical stimulation provided by CIs is independent of the middle-ear muscle reflex (MEMR) and the MOCR, this shows that mechanisms other than the MEMR or the MOCR can facilitate AM detection in noisy backgrounds.
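The early/late stimulus design described in this abstract can be sketched as follows; the sampling rate, carrier and modulation frequencies, and modulation depth are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def am_probe_in_noise(fs=16000, fm=100.0, fc=1500.0, m=0.5,
                      probe_dur=0.05, noise_dur=0.5, delay=0.0, seed=0):
    """Sketch of the early/late AM-detection stimulus: a 50-ms sinusoidally
    amplitude-modulated tone placed `delay` seconds into broadband noise."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(noise_dur * fs))
    t = np.arange(int(probe_dur * fs)) / fs
    # sinusoidal AM: (1 + m*sin(2*pi*fm*t)) modulating a tonal carrier
    probe = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
    start = int(delay * fs)
    stim = noise.copy()
    stim[start:start + len(probe)] += probe  # embed probe in the noise
    return stim
```

Setting `delay=0.0` gives the early condition and `delay=0.3` the late condition; an adaptive procedure would then vary the modulation depth `m` to estimate the detection threshold.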


Subject(s)
Auditory Perception; Cochlear Implantation/instrumentation; Cochlear Implants; Noise/adverse effects; Persons With Hearing Impairments/rehabilitation; Acoustic Stimulation; Adaptation, Psychological; Adolescent; Adult; Aged; Auditory Threshold; Child; Cochlea/innervation; Electric Stimulation; Female; Hearing; Humans; Male; Middle Aged; Perceptual Masking; Persons With Hearing Impairments/psychology; Reflex; Superior Olivary Complex/physiopathology; Time Factors