Results 1 - 20 of 129
1.
JASA Express Lett ; 4(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38426890

ABSTRACT

English-speaking bimodal and bilateral cochlear implant (CI) users can segregate competing speech using talker sex cues but not spatial cues. While tonal language experience allows listeners with normal hearing to make greater use of talker sex cues, tonal language benefits remain unclear for CI users. The present study assessed the ability of Mandarin-speaking bilateral and bimodal CI users to recognize target sentences amidst speech maskers that varied, relative to the target, in spatial cues and/or talker sex cues. In contrast to English-speaking CI users, Mandarin-speaking CI users exhibited greater utilization of spatial cues, particularly in bimodal listening.


Subject(s)
Cochlear Implants, Speech Perception, Humans, Speech, Cues, Language, Caffeine, Niacinamide
2.
PLoS One ; 18(11): e0287728, 2023.
Article in English | MEDLINE | ID: mdl-37917727

ABSTRACT

Differences in spectro-temporal degradation may explain some of the variability in cochlear implant users' speech outcomes. The present study employed vocoder simulations with listeners with typical hearing to evaluate how differences in the degree of channel interaction across ears affect spatial speech recognition. Speech recognition thresholds and spatial release from masking were measured in 16 normal-hearing subjects listening to simulated bilateral cochlear implants. Sixteen-channel sine-vocoded speech simulated limited, broad, or mixed channel interaction across ears, in dichotic and diotic target-masker conditions. Thresholds were highest with broad channel interaction in both ears, improved when channel interaction was reduced in one ear, and improved further when it was reduced in both ears. Masking release was apparent across conditions. Results from this simulation study with listeners with typical hearing show that channel interaction may impact speech recognition more than masking release, and may have implications for the effects of channel interaction on cochlear implant users' speech recognition outcomes.


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Speech, Perceptual Masking, Cochlear Implantation/methods
3.
Heliyon ; 9(8): e18922, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37583764

ABSTRACT

Elderly adults often experience difficulties in speech understanding, possibly due to age-related deficits in frequency perception. It is unclear whether age-related deficits in frequency perception differ between the apical and basal regions of the cochlea. It is also unclear how aging might differently affect frequency discrimination and the detection of a change in frequency within a stimulus. In the present study, pure-tone frequency thresholds were measured in 19 older (61-74 years) and 20 younger (22-28 years) typically hearing adults. Participants were asked to discriminate between reference and probe frequencies or to detect changes in frequency within a probe stimulus. Broadband spectro-temporal pattern perception was also measured using the spectro-temporal modulated ripple test (SMRT). Frequency thresholds were significantly poorer in the basal than in the apical region of the cochlea; the deficit in the basal region was 2 times larger for the older than for the younger group. Frequency thresholds were significantly poorer in the older group, especially in the basal region, where frequency detection thresholds were 3.9 times poorer for the older than for the younger group. SMRT thresholds were 1.5 times better for the younger than for the older group. Significant age effects were observed for SMRT thresholds and, only in the basal region, for frequency thresholds. SMRT thresholds were significantly correlated with frequency thresholds only in the older group. The poorer frequency and spectro-temporal pattern perception may contribute to age-related deficits in speech perception, even when audiometric thresholds are nearly normal.

4.
JASA Express Lett ; 3(7)2023 07 01.
Article in English | MEDLINE | ID: mdl-37404165

ABSTRACT

Speech recognition thresholds were measured as a function of the relative level between two speech maskers that differed in perceptual similarity to the target. Results showed that recognition thresholds were driven by the relative level between the target and the perceptually similar masker when the perceptually similar masker was softer, and by the relative level between the target and both maskers when the perceptually similar masker was louder. This suggests that the effectiveness of a two-talker masker is determined primarily by the masker stream that is most perceptually similar to the target, but also by the relative levels of the two maskers.


Subject(s)
Speech Perception, Speech, Noise, Perceptual Masking, Recognition, Psychology
5.
J Acoust Soc Am ; 153(5): 2745, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37133816

ABSTRACT

Hearing loss in the extended high-frequency (EHF) range (>8 kHz) is widespread among young normal-hearing adults and could have perceptual consequences such as difficulty understanding speech in noise. However, it is unclear how EHF hearing loss might affect basic psychoacoustic processes. The hypothesis that EHF hearing loss is associated with poorer auditory resolution in the standard frequencies was tested. Temporal resolution was characterized by amplitude modulation detection thresholds (AMDTs), and spectral resolution was characterized by frequency change detection thresholds (FCDTs). AMDTs and FCDTs were measured in adults with or without EHF loss but with normal clinical audiograms. AMDTs were measured with 0.5- and 4-kHz carrier frequencies; similarly, FCDTs were measured for 0.5- and 4-kHz base frequencies. AMDTs were significantly higher with the 4 kHz than the 0.5 kHz carrier, but there was no significant effect of EHF loss. There was no significant effect of EHF loss on FCDTs at 0.5 kHz; however, FCDTs were significantly higher at 4 kHz for listeners with than without EHF loss. This suggests that some aspects of auditory resolution in the standard audiometric frequency range may be compromised in listeners with EHF hearing loss despite having a normal audiogram.
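The amplitude modulation detection task described above can be illustrated with a minimal stimulus-generation sketch in Python (standard library only). The sample rate, duration, and 8-Hz modulation rate are illustrative assumptions, not parameters taken from the study:

```python
import math

def am_tone(fc, fm, m, dur=0.5, fs=16000):
    """Sinusoidally amplitude-modulated tone:
    y(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    n = int(dur * fs)
    return [(1.0 + m * math.sin(2 * math.pi * fm * i / fs))
            * math.sin(2 * math.pi * fc * i / fs) for i in range(n)]

# 4-kHz carrier (one of the two carriers used in the study) with an
# illustrative 8-Hz modulator at 25% depth:
y = am_tone(4000.0, 8.0, 0.25)
peak = max(abs(v) for v in y)  # approaches 1 + m = 1.25
```

In an AMDT procedure, the modulation depth m would then be varied adaptively to find the smallest detectable depth.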


Subject(s)
Hearing Loss, Speech Perception, Adult, Humans, Auditory Threshold, Hearing, Hearing Tests, Audiometry
6.
PLoS One ; 18(4): e0285154, 2023.
Article in English | MEDLINE | ID: mdl-37115775

ABSTRACT

For French cochlear implant (CI) recipients, in-person clinical auditory rehabilitation is typically provided during the first few years post-implantation. However, this is often inconvenient, requires substantial time, and can be problematic when appointments are unavailable. In response, we developed computer-based home training software ("French AngelSound™") for French CI recipients. We recently conducted a pilot study to evaluate the newly developed French AngelSound™ in 15 CI recipients (5 unilateral, 5 bilateral, 5 bimodal). Outcome measures included phoneme recognition in quiet and sentence recognition in noise. Unilateral CI users were tested with the CI alone. Bilateral CI users were tested with each CI ear alone to determine the poorer ear to be trained, as well as with both ears (binaural performance). Bimodal CI users were tested with the CI ear alone and with the contralateral hearing aid (binaural performance). Participants trained at home over a one-month period (10 hours total). Phonemic contrast training was used; the level of difficulty ranged from phoneme discrimination in quiet to phoneme identification in multi-talker babble. Unilateral and bimodal CI users trained with the CI alone; bilateral CI users trained with the poorer ear alone. Outcomes were measured before training (pre-training), immediately after training was completed (post-training), and one month after training stopped (follow-up). For all participants, post-training CI-only vowel and consonant recognition scores significantly improved after phoneme training with the CI ear alone. For bilateral and bimodal CI users, binaural vowel and consonant recognition scores also significantly improved after training with a single CI ear. Follow-up measures showed that training benefits were largely retained. These preliminary data suggest that the phonemic contrast training in French AngelSound™ may significantly benefit French CI recipients and may complement clinical auditory rehabilitation, especially when in-person visits are not possible.


Subject(s)
Cochlear Implantation, Cochlear Implants, Hearing Aids, Speech Perception, Humans, Pilot Projects, Speech Perception/physiology, Computers
7.
Sci Rep ; 13(1): 4960, 2023 03 27.
Article in English | MEDLINE | ID: mdl-36973380

ABSTRACT

Bimodal cochlear implant (CI) listeners have difficulty utilizing spatial cues to segregate competing speech, possibly due to tonotopic mismatch between the acoustic input frequency and the electrode place of stimulation. The present study investigated the effects of tonotopic mismatch in the context of residual acoustic hearing in the non-CI ear or residual hearing in both ears. Speech recognition thresholds (SRTs) were measured with two co-located or spatially separated speech maskers in normal-hearing adults listening to acoustic simulations of CIs; low-frequency acoustic information was available in the non-CI ear (bimodal listening) or in both ears. Bimodal SRTs were significantly better with tonotopically matched than with mismatched electric hearing for both co-located and spatially separated speech maskers. When there was no tonotopic mismatch, residual acoustic hearing in both ears provided a significant benefit when maskers were spatially separated, but not when co-located. The simulation data suggest that hearing preservation in the implanted ear may significantly benefit bimodal CI listeners' utilization of spatial cues to segregate competing speech, especially when the residual acoustic hearing is comparable across the two ears. Also, the benefits of bilateral residual acoustic hearing may be best ascertained with spatially separated maskers.


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Speech Perception/physiology, Hearing/physiology, Hearing Tests
8.
Ear Hear ; 44(1): 77-91, 2023.
Article in English | MEDLINE | ID: mdl-35733275

ABSTRACT

OBJECTIVES: Talker sex and spatial cues can facilitate segregation of competing speech. However, the spectrotemporal degradation associated with cochlear implants (CIs) can limit the benefit of talker sex and spatial cues. Acoustic hearing in the nonimplanted ear can improve access to talker sex cues in CI users. However, it is unclear whether the CI can improve segregation of competing speech when maskers are symmetrically placed around the target (i.e., when spatial cues are available), compared with acoustic hearing alone. The aim of this study was to investigate whether a CI can improve segregation of competing speech by individuals with unilateral hearing loss. DESIGN: Speech recognition thresholds (SRTs) for competing speech were measured in 16 normal-hearing (NH) adults and 16 unilaterally deaf CI users. All participants were native speakers of Mandarin Chinese. CI users were divided into two groups according to thresholds in the nonimplanted ear: (1) single-sided deaf (SSD): pure-tone thresholds <25 dB HL at all audiometric frequencies; and (2) asymmetric hearing loss (AHL): one or more thresholds >25 dB HL. SRTs were measured for target sentences produced by a male talker in the presence of two masker talkers (different male or female talkers). The target sentence was always presented via loudspeaker directly in front of the listener (0°), and the maskers were either colocated with the target (0°) or spatially separated from the target at ±90°. Three segregation cue conditions were tested to measure masking release (MR) relative to the baseline condition: (1) Talker sex, (2) Spatial, and (3) Talker sex + Spatial. For CI users, SRTs were measured with the CI on or off. RESULTS: Binaural MR was significantly better for the NH group than for the AHL or SSD groups (P < 0.001 in all cases). For the NH group, mean MR was largest with the Talker sex + Spatial cues (18.8 dB) and smallest with the Talker sex cues (10.7 dB). In contrast, mean MR for the SSD group was largest with the Talker sex + Spatial cues (14.7 dB) and smallest with the Spatial cues (4.8 dB). For the AHL group, mean MR was largest with the Talker sex + Spatial cues (7.8 dB) and smallest with the Talker sex (4.8 dB) and Spatial cues (4.8 dB). MR was significantly better with the CI on than off for both the AHL (P = 0.014) and SSD (P < 0.001) groups. Across all unilaterally deaf CI users, monaural (acoustic ear alone) and binaural MR were significantly correlated with unaided pure-tone average thresholds in the nonimplanted ear for the Talker sex and Talker sex + Spatial conditions (P < 0.001 in both cases), but not for the Spatial condition. CONCLUSION: Although the CI benefitted unilaterally deaf listeners' segregation of competing speech, MR was much poorer than that observed in NH listeners. In contrast to previous findings with steady noise maskers, the CI benefit for segregation of competing speech from a talker of a different sex was greater in the SSD group than in the AHL group.


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Male, Humans, Female, Cues, Speech
9.
PLoS One ; 17(7): e0270759, 2022.
Article in English | MEDLINE | ID: mdl-35788202

ABSTRACT

In the clinical fitting of cochlear implants (CIs), the lowest acoustic input frequency is typically much lower than the characteristic frequency associated with the most apical electrode position, due to the limited electrode insertion depth. For bilateral CI users, electrode positions may differ across ears, yet the same acoustic-to-electrode frequency allocation table (FAT) is typically assigned to both ears. As such, bilateral CI users may experience both intra-aural frequency mismatch within each ear and inter-aural mismatch across ears. This inter-aural mismatch may limit the ability of bilateral CI users to take advantage of spatial cues when attempting to segregate competing speech. Adjusting the FAT to tonotopically match the electrode position in each ear (i.e., raising the lowest acoustic input frequency) is theorized to reduce this inter-aural mismatch. Unfortunately, this approach may also discard acoustic information below the new lowest input frequency. The present study explored the trade-off between reduced inter-aural frequency mismatch and low-frequency information loss for segregation of competing speech. Normal-hearing participants were tested while listening to acoustic simulations of bilateral CIs. Speech reception thresholds (SRTs) were measured for target sentences produced by a male talker in the presence of two different male talkers. Masker speech was either co-located with or spatially separated from the target speech. The bilateral CI simulations were produced by 16-channel sinewave vocoders; the simulated insertion depth was fixed in one ear and varied in the other, resulting in an inter-aural mismatch of 0, 2, or 6 mm in terms of cochlear place. Two FAT conditions were compared: (1) clinical (200-8000 Hz in both ears) or (2) matched to the simulated insertion depth in each ear. Results showed that SRTs were significantly lower with the matched than with the clinical FAT, regardless of the insertion depth or the spatial configuration of the masker speech. The largest improvement in SRTs with the matched FAT was observed when the inter-aural mismatch was largest (6 mm). These results suggest that minimizing inter-aural mismatch with tonotopically matched FATs may benefit bilateral CI users' ability to segregate competing speech, despite the substantial low-frequency information loss in ears with shallow insertion depths.
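The cochlear-place mismatches above (0, 2, or 6 mm) can be related to frequency using the Greenwood (1990) place-frequency function for the human cochlea, a standard mapping that the abstract does not name explicitly; the 200-Hz example below is illustrative:

```python
import math

# Greenwood (1990) parameters for the human cochlea (x in mm from apex)
A, a, k = 165.4, 0.06, 0.88

def place_to_freq(x_mm):
    """Characteristic frequency (Hz) at a cochlear place x_mm from the apex."""
    return A * (10 ** (a * x_mm) - k)

def freq_to_place(f_hz):
    """Inverse map: cochlear place (mm from apex) for frequency f_hz."""
    return math.log10(f_hz / A + k) / a

# Effect of a 6-mm basalward place shift on a 200-Hz component
# (6 mm is the largest inter-aural mismatch simulated in the study):
x_200 = freq_to_place(200.0)
f_shifted = place_to_freq(x_200 + 6.0)  # roughly 650 Hz
```

The same mapping underlies the idea of a "matched" FAT: analysis bands are chosen so that each channel's input frequency equals the characteristic frequency at its simulated electrode place.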


Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Cochlear Implantation/methods, Cues, Humans, Male, Speech
10.
Front Neurosci ; 16: 908989, 2022.
Article in English | MEDLINE | ID: mdl-35733932

ABSTRACT

The acoustic change complex (ACC) is a cortical auditory evoked potential elicited by a change within a continuous sound stimulus. This study aimed to explore: (1) whether a change in horizontal sound location can elicit the ACC; (2) the relationship between the size of the location change and the amplitude or latency of the ACC; and (3) the relationship between the behavioral measure of localization, the minimum audible angle (MAA), and the ACC. A total of 36 normal-hearing adults participated in this study. A 180° horizontal arc-shaped bracket with a 1.2 m radius was set up in a sound field, with participants seated at the center. MAA was measured in a two-alternative forced-choice setting. Electroencephalographic (EEG) recordings of the ACC were conducted with the location changed across four pairs of positions: ±45°, ±15°, ±5°, and ±2°. The test stimulus was a 125-6,000 Hz broadband noise of 1 s at 60 ± 2 dB SPL with a 2 s interval. The N1'-P2' amplitudes, N1' latencies, and P2' latencies of the ACC at the four positions were evaluated. The influence of electrode sites and the direction of sound position change on the ACC waveform was analyzed with analysis of variance. Results suggested that: (1) The ACC can be elicited successfully by changing the horizontal sound location; the elicitation rate increased with the size of the location change. (2) N1'-P2' amplitude increased and N1' and P2' latencies decreased as the change in sound location increased. The effects of test angle on N1'-P2' amplitude [F(1.91,238.1) = 97.172, p < 0.001], N1' latency [F(1.78,221.90) = 96.96, p < 0.001], and P2' latency [F(1.87,233.11) = 79.97, p < 0.001] were statistically significant. (3) The direction of sound location change had no significant effect on any of the ACC peak amplitudes or latencies. (4) The sound location discrimination threshold estimated with the ACC (97.0% elicitation rate at ±5°) was higher than the MAA threshold (2.08 ± 0.5°). These results show that although ACC thresholds are higher than behavioral thresholds on the MAA task, the ACC can be used as an objective method to evaluate sound localization ability. This article discusses the implications of this research for clinical practice and the evaluation of localization skills, especially in children.

11.
J Acoust Soc Am ; 150(1): 339, 2021 07.
Article in English | MEDLINE | ID: mdl-34340485

ABSTRACT

Children with normal hearing (CNH) have greater difficulty segregating competing speech than do adults with normal hearing (ANH). Children with cochlear implants (CCI) have greater difficulty segregating competing speech than do CNH. In the present study, speech reception thresholds (SRTs) in competing speech were measured in Mandarin-speaking ANH, CNH, and CCI groups. Target sentences were produced by a male Mandarin-speaking talker. Maskers were time-forward or time-reversed sentences produced by a native Mandarin-speaking male (different from the target), a native Mandarin-speaking female, or a non-native English-speaking male. SRTs were lowest (best) for the ANH group, followed by the CNH and CCI groups. Masking release (MR) was comparable between the ANH and CNH groups, but much poorer in the CCI group. The temporal properties differed between the native and non-native maskers and between forward and reversed speech. The temporal properties of the maskers were significantly associated with the SRTs for the CCI and CNH groups but not for the ANH group. Whereas the temporal properties of the maskers were significantly associated with the MR for all three groups, the association was stronger for the CCI and CNH groups than for the ANH group.


Subject(s)
Cochlear Implants, Speech Perception, Adult, Child, Female, Hearing, Humans, Male, Perceptual Masking, Speech
12.
JASA Express Lett ; 1(1): 014401, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33521793

ABSTRACT

Cochlear implant (CI) users have greater difficulty perceiving talker sex and spatial cues than do normal-hearing (NH) listeners. The present study measured recognition of target sentences in the presence of two co-located or spatially separated speech maskers in NH, bilateral CI, and bimodal CI listeners; the masker sex was either the same as or different from that of the target. NH listeners demonstrated a large masking release with masker sex and/or spatial cues. For CI listeners, significant masking release was observed with masker sex cues but not with spatial cues, at least for the spatially symmetrically placed maskers and listening task used in this study.

13.
JASA Express Lett ; 1(1): 015203, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33589889

ABSTRACT

In competing speech, recognition of target speech may be limited by the number and characteristics of the maskers, which produce energetic, envelope, and/or informational masking. In this study, speech recognition thresholds (SRTs) were measured with one, two, or four maskers. The target and masker sexes were the same or different, and SRTs were measured with time-forward or time-reversed maskers. SRTs were significantly affected by target-masker sex differences with time-forward maskers, but not with time-reversed maskers. The multi-masker penalty was much greater with time-reversed than with time-forward maskers when there were more than two talkers.

14.
Hear Res ; 400: 108110, 2021 02.
Article in English | MEDLINE | ID: mdl-33220506

ABSTRACT

The sounds we hear in daily life contain changes in acoustic features (e.g., frequency, intensity, and duration, or "what" information) and/or changes in location ("where" information). The purpose of this study was to examine the cortical auditory evoked potential (CAEP) to a change within a stimulus, the acoustic change complex (ACC), for changes in frequency (F) and location (L) in normal-hearing listeners. Fifteen right-handed young normal-hearing listeners participated in the electroencephalographic (EEG) recordings. The acoustic stimuli were 1-s pure tones (base frequency 250 Hz) with a perceivable change either in location (L, 180°), frequency (F, 5% or 50%), or both location and frequency (L+F) in the middle of the tone. Additionally, a 1-s 250 Hz tone without any change served as a reference. Participants were asked to listen passively to the stimuli and not to move their heads during testing. Compared to the reference tone, which elicited only the onset CAEP, the tones containing changes (L, F, or L+F) elicited both the onset CAEP and the ACC. Waveform analysis of ACCs from the vertex electrode (Cz) showed that larger sound changes evoked larger peak amplitudes [e.g., (L+50%F)-change > L-change; (L+50%F)-change > 5%F-change] and shorter peak latencies [e.g., (L+5%F)-change < 5%F-change; 50%F-change < 5%F-change; (L+50%F)-change < 5%F-change]. The current density patterns for the ACC N1' peak displayed some differences between the L-change and the F-change, supporting different cortical processing for the "where" and "what" information of the sound. Regardless of the nature of the sound change, larger changes evoked stronger activation than smaller changes [e.g., L-change > 5%F-change; (L+5%F)-change > 5%F-change; 50%F-change > 5%F-change] in frontal lobe regions including the cingulate gyrus, medial frontal gyrus (MFG), and superior frontal gyrus (SFG), as well as the limbic lobe cingulate gyrus and the parietal lobe postcentral gyrus. The results suggest that sound-change detection involves a memory-based acoustic comparison (the neural encoding of the sound change vs. the neural encoding of the pre-change stimulus stored in memory) and an involuntary attention switch.


Subject(s)
Auditory Cortex, Hearing, Acoustic Stimulation, Auditory Perception, Evoked Potentials, Auditory, Humans
15.
Sci Rep ; 10(1): 19851, 2020 11 16.
Article in English | MEDLINE | ID: mdl-33199782

ABSTRACT

Many tinnitus patients report difficulties understanding speech in noise or competing talkers, despite having "normal" hearing in terms of audiometric thresholds. The interference caused by tinnitus is more likely central in origin. Release from informational masking (more central in origin) produced by competing speech may further illuminate central interference due to tinnitus. In the present study, masked speech understanding was measured in normal hearing listeners with or without tinnitus. Speech recognition thresholds were measured for target speech in the presence of multi-talker babble or competing speech. For competing speech, speech recognition thresholds were measured for different cue conditions (i.e., with and without target-masker sex differences and/or with and without spatial cues). The present data suggest that tinnitus negatively affected masked speech recognition even in individuals with no measurable hearing loss. Tinnitus severity appeared to especially limit listeners' ability to segregate competing speech using talker sex differences. The data suggest that increased informational masking via lexical interference may tax tinnitus patients' central auditory processing resources.


Subject(s)
Speech Reception Threshold Test/methods , Tinnitus/physiopathology , Adolescent , Adult , Female , Humans , Male , Pattern Recognition, Physiological , Perceptual Masking , Speech Perception , Young Adult
16.
PLoS One ; 15(10): e0240752, 2020.
Article in English | MEDLINE | ID: mdl-33057396

ABSTRACT

In bimodal listening, cochlear implant (CI) users combine electric hearing (EH) in one ear and acoustic hearing (AH) in the other ear. In electric-acoustic stimulation (EAS), CI users combine EH and AH in the same ear. In quiet, integration of EH and AH has been shown to be better with EAS, but with greater sensitivity to tonotopic mismatch in EH. The goal of the present study was to evaluate how external noise might affect integration of AH and EH within or across ears. Recognition of monosyllabic words was measured for normal-hearing subjects listening to simulations of unimodal (AH or EH alone), EAS, and bimodal listening in quiet and in speech-shaped steady noise (10 or 0 dB signal-to-noise ratio). The input/output frequency range for AH was 0.1-0.6 kHz. EH was simulated using an 8-channel noise vocoder. The output frequency range was 1.2-8.0 kHz to simulate a shallow insertion depth. The input frequency range was either matched (1.2-8.0 kHz) or mismatched (0.6-8.0 kHz) to the output frequency range; the mismatched input range maximized the amount of speech information, while the matched input resulted in some speech information loss. In quiet, tonotopic mismatch affected EAS and bimodal performance differently. In noise, EAS and bimodal performance were similarly affected by tonotopic mismatch. The data suggest that tonotopic mismatch may differently affect integration of EH and AH in quiet and in noise.
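The analysis/output band allocation described above can be sketched as follows; logarithmic channel spacing is an assumption for illustration, since the abstract does not specify the vocoder's filter spacing:

```python
def band_edges(f_lo, f_hi, n_channels):
    """n_channels + 1 logarithmically spaced corner frequencies (Hz)."""
    ratio = (f_hi / f_lo) ** (1.0 / n_channels)
    return [f_lo * ratio ** i for i in range(n_channels + 1)]

# Output (carrier) bands fixed at 1.2-8.0 kHz to simulate a shallow
# insertion depth; analysis bands either matched or mismatched:
out_edges     = band_edges(1200.0, 8000.0, 8)
matched_in    = band_edges(1200.0, 8000.0, 8)
mismatched_in = band_edges(600.0, 8000.0, 8)
```

With the mismatched allocation, each analysis band's content is re-synthesized in a higher output band, which is the tonotopic mismatch whose cost in quiet and in noise the study compares.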


Subject(s)
Acoustics, Cochlear Implants, Ear/physiology, Hearing/physiology, Noise, Acoustic Stimulation, Adult, Electric Stimulation, Female, Humans, Male, Signal-To-Noise Ratio, Statistics as Topic, Vocabulary, Young Adult
17.
J Speech Lang Hear Res ; 63(8): 2811-2824, 2020 08 10.
Article in English | MEDLINE | ID: mdl-32777196

ABSTRACT

Purpose: For colocated targets and maskers, binaural listening typically offers a small but significant advantage over monaural listening. This study investigated how monaural asymmetry and target-masker similarity may limit binaural advantage in adults and children. Method: Ten Mandarin-speaking Chinese adults (aged 22-27 years) and 12 children (aged 7-14 years) with normal hearing participated in the study. Monaural and binaural speech recognition thresholds (SRTs) were adaptively measured for colocated competing speech. The target-masker sex was the same or different. Performance was measured using headphones for three listening conditions: left ear, right ear, and both ears. Binaural advantage was calculated relative to the poorer or better ear. Results: Mean SRTs were significantly lower for adults than for children. When the target-masker sex was the same, SRTs were significantly lower with the better ear than with the poorer ear or both ears (p < .05). When the target-masker sex was different, SRTs were significantly lower with the better ear or both ears than with the poorer ear (p < .05). Children and adults benefitted similarly from target-masker sex differences. Substantial monaural asymmetry was observed, but the effects of asymmetry on binaural advantage were similar between adults and children. Monaural asymmetry was significantly correlated with binaural advantage relative to the poorer ear (p = .004), but not to the better ear (p = .056). Conclusions: Binaural listening may offer little advantage (or even a disadvantage) over monaural listening with the better ear, especially when competing talkers have similar vocal characteristics. Monaural asymmetry appears to limit binaural advantage in listeners with normal hearing, similar to observations in listeners with hearing impairment. While language development may limit perception of competing speech, it does not appear to limit the effects of monaural asymmetry or target-masker sex on binaural advantage.
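Adaptive SRT measurement of the kind used above is typically run as a staircase on signal-to-noise ratio (SNR). A minimal 1-down/1-up sketch is shown below; the rule, step size, and reversal-averaging estimator are illustrative assumptions, and the study's actual tracking procedure may differ:

```python
def adaptive_track(responses, start_snr=10.0, step=2.0):
    """1-down/1-up staircase on SNR (converges near 50% correct):
    a correct response lowers the SNR, an error raises it.
    `responses` is a sequence of booleans (True = correct)."""
    snr, levels, reversals = start_snr, [], []
    going_down = None
    for correct in responses:
        levels.append(snr)
        if going_down is not None and correct != going_down:
            reversals.append(snr)  # track direction just changed
        going_down = correct
        snr += -step if correct else step
    # Threshold estimate: mean SNR at the reversal points
    threshold = sum(reversals) / len(reversals) if reversals else None
    return threshold, levels

# A scripted response sequence, for illustration:
thr, levels = adaptive_track([True, True, False, True, False,
                              True, False, False, True])
```

Real procedures usually discard the first few reversals and may switch to a smaller step size partway through the track.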


Subject(s)
Hearing Loss, Speech Perception, Adult, Auditory Perception, Child, Female, Hearing, Hearing Tests, Humans, Male
18.
J Speech Lang Hear Res ; 63(8): 2801-2810, 2020 08 10.
Article in English | MEDLINE | ID: mdl-32692939

ABSTRACT

Purpose: The aim of this study was to compare release from masking (RM) between Mandarin-speaking and English-speaking listeners with normal hearing for competing speech when target-masker sex cues, spatial cues, or both were available. Method: Speech recognition thresholds (SRTs) for competing speech were measured in 21 Mandarin-speaking and 15 English-speaking adults with normal hearing using a modified coordinate response measure task. SRTs were measured for target sentences produced by a male talker in the presence of two masker talkers (different male talkers or female talkers). The target sentence was always presented directly in front of the listener, and the maskers were either colocated with the target or spatially separated from the target (+90°, -90°). Stimuli were presented via headphones and were virtually spatialized using head-related transfer functions. Three masker conditions were used to measure RM relative to the baseline condition: (a) talker sex cues, (b) spatial cues, or (c) combined talker sex and spatial cues. Results: The results showed large amounts of RM with talker sex and/or spatial cues. There was no significant difference in SRTs between Chinese and English listeners for the baseline condition, in which no talker sex or spatial cues were available. Furthermore, there was no significant difference in RM between Chinese and English listeners when spatial cues were available. However, RM was significantly larger for Chinese listeners when talker sex cues or combined talker sex and spatial cues were available. Conclusion: Listeners who speak a tonal language such as Mandarin Chinese may be able to take greater advantage of talker sex cues than listeners who do not speak a tonal language.


Subject(s)
Speech Perception, Speech, Adult, Female, Humans, Language, Male, Perceptual Masking, Sex Characteristics
20.
Ear Hear ; 41(5): 1362-1371, 2020.
Article in English | MEDLINE | ID: mdl-32132377

ABSTRACT

OBJECTIVES: Due to interaural frequency mismatch, bilateral cochlear-implant (CI) users may be less able than normal-hearing (NH) listeners to take advantage of the binaural cues used for spatial hearing, such as interaural time differences and interaural level differences. As such, bilateral CI users have difficulty segregating competing speech even when the target and competing talkers are spatially separated. The goal of this study was to evaluate the effects of spectral resolution, tonotopic mismatch (the mismatch between the acoustic center frequency assigned to a CI electrode in an implanted ear and the expected spiral ganglion characteristic frequency), and interaural mismatch (differences in the degree of tonotopic mismatch between the ears) on speech understanding and spatial release from masking (SRM) in the presence of competing talkers in NH subjects listening to bilateral vocoder simulations. DESIGN: During testing, both target and masker speech were presented in five-word sentences that had the same syntax but were not necessarily meaningful. The sentences were composed of five categories in fixed order (Name, Verb, Number, Color, and Clothes), each of which had 10 items, such that multiple sentences could be generated by randomly selecting a word from each category. Speech reception thresholds (SRTs) for the target sentence presented in competing speech maskers were measured. The target speech was delivered to both ears, and the two speech maskers were delivered to (1) both ears (diotic masker) or (2) different ears (dichotic masker: one delivered to the left ear and the other to the right ear). Stimuli included the unprocessed speech and four 16-channel sine-vocoder simulations with different interaural mismatch (0, 1, and 2 mm). SRM was calculated as the difference between the diotic and dichotic listening conditions. RESULTS: With unprocessed speech, SRTs were 0.3 and -18.0 dB for the diotic and dichotic maskers, respectively. For the spectrally degraded speech with mild tonotopic mismatch and no interaural mismatch, SRTs were 5.6 and -2.0 dB for the diotic and dichotic maskers, respectively. When the tonotopic mismatch increased in both ears, SRTs worsened to 8.9 and 2.4 dB for the diotic and dichotic maskers, respectively. When the two ears had different tonotopic mismatch (i.e., interaural mismatch), the drop in SRTs was much larger for the dichotic than for the diotic masker. The largest SRM was observed with unprocessed speech (18.3 dB). With the CI simulations, SRM was significantly reduced to 7.6 dB even with mild tonotopic mismatch and no interaural mismatch; SRM was further reduced with increasing interaural mismatch. CONCLUSIONS: The results demonstrate that spectral resolution, tonotopic mismatch, and interaural mismatch have differential effects on speech understanding and SRM in simulations of bilateral CIs. Minimizing interaural mismatch may be critical to optimize binaural benefits and improve CI performance for competing speech, a typical listening environment. SRM (the difference in SRTs between diotic and dichotic maskers) may be a useful clinical tool to assess interaural frequency mismatch in bilateral CI users and to evaluate the benefits of optimization methods that minimize interaural mismatch.
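The SRM computation defined above (the difference between diotic and dichotic SRTs) can be sketched directly from the values reported in the abstract:

```python
def spatial_release(srt_diotic_db, srt_dichotic_db):
    """Spatial release from masking (dB): improvement in speech
    reception threshold for dichotic vs. diotic maskers;
    positive values indicate a benefit of separation."""
    return srt_diotic_db - srt_dichotic_db

# SRTs reported in the abstract:
srm_unprocessed = spatial_release(0.3, -18.0)  # largest SRM, 18.3 dB
srm_vocoded     = spatial_release(5.6, -2.0)   # mild mismatch, 7.6 dB
```

A lower (more negative) dichotic SRT yields a larger SRM, which is why interaural mismatch, by hurting the dichotic condition most, shrinks SRM.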


Subject(s)
Cochlear Implantation, Cochlear Implants, Sound Localization, Speech Perception, Humans, Perceptual Masking, Speech