Results 1 - 20 of 29
1.
Am J Audiol ; 29(3S): 577-590, 2020 Sep 18.
Article in English | MEDLINE | ID: mdl-32946250

ABSTRACT

Purpose Cochlear implant (CI) performance varies considerably across individuals and across domains of auditory function, but clinical testing is typically restricted to speech intelligibility. The goals of this study were (a) to develop a basic auditory skills evaluation battery of tests for comprehensive assessment of ecologically relevant aspects of auditory perception and (b) to compare CI listeners' performance on the battery when tested in the laboratory by an audiologist or independently at home. Method The battery included 17 tests to evaluate (a) basic spectrotemporal processing, (b) processing of music and environmental sounds, and (c) speech perception in both quiet and background noise. The battery was administered online to three groups of adult listeners: two groups of postlingual CI listeners and a group of older normal-hearing (ONH) listeners of similar age. The ONH group and one CI group were tested in a laboratory by an audiologist, whereas the other CI group self-tested independently at home following online instructions. Results Results indicated a wide range in the performance of CI but not ONH listeners. Significant differences were not found between the two CI groups on any test, whereas on all but two tests, CI listeners' performance was lower than that of the ONH participants. Principal component analysis revealed that four components accounted for 82% of the variance in measured results, with component loading indicating that the test battery successfully captures differences across dimensions of auditory perception. Conclusions These results provide initial support for the use of the basic auditory skills evaluation battery for comprehensive online assessment of auditory skills in adult CI listeners.
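
As a rough illustration of the kind of dimensionality analysis the abstract reports, the Python sketch below runs a principal component analysis on a hypothetical listener-by-test score matrix. The data, group size, and preprocessing are assumptions for the example, not the study's actual pipeline.

```python
# Illustrative sketch only: PCA on a hypothetical listener-by-test score matrix,
# not the study's data or analysis code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scores = rng.random((30, 17))                # 30 hypothetical listeners x 17 tests (0-1 proportion correct)

z = StandardScaler().fit_transform(scores)   # standardize each test before PCA
pca = PCA(n_components=4).fit(z)

print("variance explained by 4 components:",
      round(pca.explained_variance_ratio_.sum(), 2))

# Component loadings (tests x components) indicate which tests group together.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(loadings.shape)                        # (17, 4)
```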


Subject(s)
Auditory Perception , Cochlear Implantation , Deafness/rehabilitation , Hearing Tests/methods , Internet , Speech Perception , Telemedicine/methods , Adult , Aged , Cochlear Implants , Deafness/physiopathology , Female , Humans , Male , Middle Aged , Music , Noise
2.
J Speech Lang Hear Res ; 61(10): 2578-2588, 2018 Oct 26.
Article in English | MEDLINE | ID: mdl-30458532

ABSTRACT

Purpose: Visual recognition of interrupted text may predict speech intelligibility under adverse listening conditions. This study investigated the nature of the linguistic information and perceptual processes underlying this relationship. Method: To directly compare the perceptual organization of interrupted speech and text, we examined the recognition of spoken and printed sentences interrupted at different rates in 14 adults with normal hearing. The interruption method approximated deletion and retention of rate-specific linguistic information (0.5-64 Hz) in speech by substituting either white space or silent intervals for text or speech in the original sentences. Results: A similar U-shaped pattern of cross-rate variation in performance was observed in both modalities, with minima at 2 Hz. However, at the highest and lowest interruption rates, recognition accuracy was greater for text than speech, whereas the reverse was observed at middle rates. An analysis of word duration and the frequency of word sampling across interruption rates suggested that the location of the function minima was influenced by perceptual reconstruction of whole words. Overall, the findings indicate a high degree of similarity in the perceptual organization of interrupted speech and text. Conclusion: The observed rate-specific variation in the perception of speech and text may potentially affect the degree to which recognition accuracy in one modality is predictive of the other.
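
For readers unfamiliar with the interruption paradigm, the minimal Python sketch below gates a waveform with silence at a chosen interruption rate. The 50% duty cycle, sampling rate, and placeholder signal are assumptions for illustration, not the study's stimuli.

```python
# Illustrative sketch only: periodically replace segments of a waveform with
# silence at a given interruption rate and duty cycle (assumed parameters).
import numpy as np

def interrupt_with_silence(signal, fs, rate_hz, duty=0.5):
    """Zero out the 'off' portion of each interruption cycle."""
    t = np.arange(len(signal)) / fs
    phase = (t * rate_hz) % 1.0              # position within each cycle, 0..1
    mask = (phase < duty).astype(signal.dtype)
    return signal * mask

fs = 16000
speech = np.random.randn(fs * 2)             # stand-in for a 2-s sentence waveform
gated = interrupt_with_silence(speech, fs, rate_hz=2.0)
```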


Subject(s)
Pattern Recognition, Visual/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Humans , Perceptual Masking/physiology , Photic Stimulation , Reading , Speech Intelligibility/physiology , Young Adult
3.
J Neurosci ; 37(43): 10323-10333, 2017 Oct 25.
Article in English | MEDLINE | ID: mdl-28951450

ABSTRACT

Although working memory (WM) is considered an emergent property of the speech perception and production systems, the role of WM in sensorimotor integration during speech processing is largely unknown. We conducted two event-related potential experiments with female and male young adults to investigate the contribution of WM to the neurobehavioural processing of altered auditory feedback during vocal production. A delayed match-to-sample task that required participants to indicate whether the pitch feedback perturbations they heard during vocalizations in test and sample sequences matched, elicited significantly larger vocal compensations, larger N1 responses in the left middle and superior temporal gyrus, and smaller P2 responses in the left middle and superior temporal gyrus, inferior parietal lobule, somatosensory cortex, right inferior frontal gyrus, and insula compared with a control task that did not require memory retention of the sequence of pitch perturbations. On the other hand, participants who underwent extensive auditory WM training produced suppressed vocal compensations that were correlated with improved auditory WM capacity, and enhanced P2 responses in the left middle frontal gyrus, inferior parietal lobule, right inferior frontal gyrus, and insula that were predicted by pretraining auditory WM capacity. These findings indicate that WM can enhance the perception of voice auditory feedback errors while inhibiting compensatory vocal behavior to prevent voice control from being excessively influenced by auditory feedback. This study provides the first evidence that auditory-motor integration for voice control can be modulated by top-down influences arising from WM, rather than modulated exclusively by bottom-up and automatic processes. SIGNIFICANCE STATEMENT One outstanding question that remains unsolved in speech motor control is how the mismatch between predicted and actual voice auditory feedback is detected and corrected. The present study provides two lines of converging evidence, for the first time, that working memory can not only enhance the perception of vocal feedback errors but also exert inhibitory control over vocal motor behavior. These findings represent a major advance in our understanding of the top-down modulatory mechanisms that support the detection and correction of prediction-feedback mismatches during sensorimotor control of speech production driven by working memory. Rather than being an exclusively bottom-up and automatic process, auditory-motor integration for voice control can be modulated by top-down influences arising from working memory.


Subject(s)
Auditory Perception/physiology , Brain/physiology , Memory, Short-Term/physiology , Psychomotor Performance/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Photic Stimulation/methods , Random Allocation , Speech Perception/physiology , Young Adult
4.
J Acoust Soc Am ; 142(1): EL155, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28764419

ABSTRACT

Past studies of speech-on-speech masking in young adults (YA) indicate that the intelligibility of masked speech can improve if the target and masker speech are in different languages. Current work investigated whether a linguistic masking release is obtained in older adults (OA) with age-typical hearing abilities. Participants listened to English sentences in the presence of two-talker Spanish or English maskers. A similar masking release with Spanish-language maskers was obtained for OA and YA listeners, despite greater accuracy by YA listeners. In speech-on-speech masking, older listeners can thus improve speech intelligibility by utilizing nonenergetic linguistic differences between the target and masker speech.
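
The target-masker mixing in such experiments reduces to level scaling. The sketch below mixes a target with a two-talker masker at a specified SNR based on RMS levels; the waveforms and SNR value are placeholders, not the study's materials.

```python
# Illustrative sketch only: mix a target sentence with a masker at a chosen
# SNR (dB), based on RMS levels. Waveforms here are placeholders.
import numpy as np

def mix_at_snr(target, masker, snr_db):
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    masker = masker[:len(target)]
    # Scale the masker so that 20*log10(rms(target) / rms(scaled masker)) = snr_db.
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    return target + gain * masker

fs = 16000
target = np.random.randn(fs)                 # placeholder target speech
masker = np.random.randn(fs)                 # placeholder two-talker babble
mixture = mix_at_snr(target, masker, snr_db=-4.0)
```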

5.
PLoS One ; 11(11): e0167030, 2016.
Article in English | MEDLINE | ID: mdl-27893791

ABSTRACT

OBJECTIVE: Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. METHOD: The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). RESULTS: FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. CONCLUSIONS: Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills.


Subject(s)
Auditory Perception/physiology , Environment , Hearing Loss/physiopathology , Hearing/physiology , Semantics , Sound Localization , Speech Perception , Acoustic Stimulation , Adolescent , Adult , Aged , Case-Control Studies , Cognition/physiology , Female , Hearing Loss/rehabilitation , Hearing Tests/methods , Humans , Male , Middle Aged , Young Adult
6.
J Acoust Soc Am ; 139(1): 455-65, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26827039

ABSTRACT

Temporal constraints on the perception of interrupted speech were investigated by comparing the intelligibility of speech that was periodically gated (PG) and subsequently either temporally compressed (PGTC) by concatenating remaining speech fragments or temporally expanded (PGTE) by doubling the silent intervals between speech fragments. Experiment 1 examined the effects of PGTC and PGTE at different gating rates (0.5-16 Hz) on the intelligibility of words and sentences for young normal-hearing adults. In Experiment 2, older normal-hearing (ONH) and older hearing-impaired (OHI) adults were tested with sentences only. The results of Experiment 1 indicated that sentences were more intelligible than words. In both experiments, PGTC sentences were less intelligible than either PG or PGTE sentences. Compared with PG sentences, the intelligibility of PGTE sentences was significantly reduced by the same amount for ONH and OHI groups. Temporal alterations tended to produce a U-shaped rate-intelligibility function with a dip at 2-4 Hz, indicating that temporal alterations interacted with the duration of speech fragments. The present findings demonstrate that both aging and hearing loss negatively affect the overall intelligibility of interrupted and temporally altered speech. However, a mild-to-moderate hearing loss did not exacerbate the negative effects of temporal alterations associated with aging.
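
A rough sketch of how PG, PGTC, and PGTE versions of a signal could be constructed is given below. The gating rate, duty cycle, and placeholder waveform are assumed values; the study's exact fragment-handling details are not reproduced.

```python
# Illustrative sketch only: build periodically gated (PG) speech, a temporally
# compressed version (PGTC) by concatenating the retained fragments, and an
# expanded version (PGTE) by doubling the silent intervals. Parameters assumed.
import numpy as np

def gate_fragments(signal, fs, rate_hz, duty=0.5):
    cycle = int(round(fs / rate_hz))
    on = int(round(cycle * duty))
    frags, silences = [], []
    for start in range(0, len(signal), cycle):
        frags.append(signal[start:start + on])
        silences.append(np.zeros(max(0, min(cycle, len(signal) - start) - on)))
    return frags, silences

def pg_pgtc_pgte(signal, fs, rate_hz):
    frags, sils = gate_fragments(signal, fs, rate_hz)
    pg = np.concatenate([np.concatenate((f, s)) for f, s in zip(frags, sils)])
    pgtc = np.concatenate(frags)             # fragments abutted, silences removed
    pgte = np.concatenate([np.concatenate((f, s, s)) for f, s in zip(frags, sils)])
    return pg, pgtc, pgte

fs = 16000
speech = np.random.randn(fs * 2)             # placeholder sentence waveform
pg, pgtc, pgte = pg_pgtc_pgte(speech, fs, rate_hz=2.0)
```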


Subject(s)
Hearing Loss/physiopathology , Speech Intelligibility/physiology , Acoustic Stimulation , Adult , Age Factors , Aged , Aged, 80 and over , Analysis of Variance , Auditory Threshold/physiology , Female , Humans , Male , Middle Aged , Sound Spectrography , Speech Perception/physiology , Young Adult
7.
PLoS One ; 10(8): e0134330, 2015.
Article in English | MEDLINE | ID: mdl-26237423

ABSTRACT

OBJECTIVE: The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. METHODS: Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer's Disease Center, participants were a community-dwelling cohort of older adults (range 63-98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. RESULTS: Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariates race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. CONCLUSIONS: For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability to cognitive status using auditory metrics that avoided the confounding effect of speech materials.
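
As an illustration of the type of covariate-adjusted regression described, the sketch below fits an ordinary least-squares model on synthetic stand-in data. Variable codings, sample size, and effect sizes are invented for the example and do not reflect the study's results.

```python
# Illustrative sketch only: OLS relating a global cognition score to an auditory
# measure plus demographic covariates, using synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 124
covariates = np.column_stack([
    rng.integers(0, 2, n),                   # race (coded 0/1)
    rng.uniform(63, 98, n),                  # age in years
    rng.integers(0, 2, n),                   # gender (coded 0/1)
    rng.uniform(8, 20, n),                   # years of education
])
spectral_thresh = rng.normal(0, 1, n)        # spectral-pattern discrimination score
cognition = 0.3 * spectral_thresh + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([covariates, spectral_thresh]))
model = sm.OLS(cognition, X).fit()
print(model.params, model.pvalues)
```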


Subject(s)
Aging/psychology , Cognition/physiology , Hearing/physiology , Acoustic Stimulation , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Auditory Threshold/physiology , Female , Humans , Male , Middle Aged , Neuropsychological Tests
8.
J Am Acad Audiol ; 26(6): 572-81, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26134724

ABSTRACT

BACKGROUND: Past work has shown that low-rate frequency modulation (FM) may help preserve signal coherence, aid segmentation at word and syllable boundaries, and benefit speech intelligibility in the presence of a masker. PURPOSE: This study evaluated whether difficulties in speech perception by cochlear implant (CI) users relate to a deficit in the ability to discriminate among stochastic low-rate patterns of FM. RESEARCH DESIGN: This is a correlational study assessing the association between the ability to discriminate stochastic patterns of low-rate FM and the intelligibility of speech in noise. STUDY SAMPLE: Thirteen postlingually deafened adult CI users participated in this study. DATA COLLECTION AND ANALYSIS: Using modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, thresholds were measured in terms of frequency excursion both in quiet and with a speech-babble masker present, stimulus duration, and signal-to-noise ratio in the presence of a speech-babble masker. Speech perception ability was assessed in the presence of the same speech-babble masker. Relationships were evaluated with Pearson product-moment correlation analysis with correction for family-wise error, and commonality analysis to determine the unique and common contributions across psychoacoustic variables to the association with speech ability. RESULTS: Significant correlations were obtained between masked speech intelligibility and three metrics of FM discrimination involving either signal-to-noise ratio or stimulus duration, with shared variance among the three measures accounting for much of the effect. Compared to past results from young normal-hearing adults and older adults with either normal hearing or a mild-to-moderate hearing loss, mean FM discrimination thresholds obtained from CI users were higher in all conditions. CONCLUSIONS: The ability to process the pattern of frequency excursions of stochastic FM may, in part, have a common basis with speech perception in noise. Discrimination of differences in the temporally distributed place coding of the stimulus could serve as this common basis for CI users.
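
The stochastic low-rate FM stimulus class described here can be approximated by lowpass-filtering white noise at 5 Hz and using it as a frequency modulator on a 1-kHz carrier, as in the sketch below. The excursion depth, duration, and filter order are assumptions, not the study's values.

```python
# Illustrative sketch only: a 1-kHz tone frequency-modulated by a stochastic
# modulator derived from 5-Hz lowpass noise (assumed excursion and duration).
import numpy as np
from scipy.signal import butter, filtfilt

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs

# Lowpass-filter white noise at 5 Hz to obtain a slow, random modulator.
b, a = butter(4, 5.0 / (fs / 2))
mod = filtfilt(b, a, np.random.randn(len(t)))
mod /= np.max(np.abs(mod))                   # normalize to +/-1

f_carrier, delta_f = 1000.0, 100.0           # 100-Hz peak excursion (assumed)
inst_freq = f_carrier + delta_f * mod
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
stimulus = np.sin(phase)
```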


Subject(s)
Auditory Threshold/physiology , Cochlear Implants , Hearing Loss/physiopathology , Speech Perception/physiology , Adult , Aged , Cochlear Implantation , Female , Hearing Loss/diagnosis , Hearing Loss/therapy , Humans , Male , Middle Aged , Noise , Signal-To-Noise Ratio , Young Adult
9.
J Acoust Soc Am ; 137(2): 745-56, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25698009

ABSTRACT

How age and hearing loss affect the perception of interrupted speech may vary based on both the physical properties of preserved or obliterated speech fragments and individual listener characteristics. To investigate perceptual processes and interruption parameters influencing intelligibility across interruption rates, participants of different age and hearing status heard sentences interrupted by silence at either a single primary rate (0.5-8 Hz; 25%, 50%, 75% duty cycle) or at an additional concurrent secondary rate (24 Hz; 50% duty cycle). Although age and hearing loss significantly affected intelligibility, the ability to integrate sub-phonemic speech fragments produced by the fast secondary rate was similar in all listener groups. Age and hearing loss interacted with rate with smallest group differences observed at the lowest and highest interruption rates of 0.5 and 24 Hz. Furthermore, intelligibility of dual-rate gated sentences was higher than single-rate gated sentences with the same proportion of retained speech. Correlations of intelligibility of interrupted speech to pure-tone thresholds, age, or measures of working memory and auditory spectro-temporal pattern discrimination were generally low-to-moderate and mostly nonsignificant. These findings demonstrate rate-dependent effects of age and hearing loss on the perception of interrupted speech, suggesting complex interactions of perceptual processes across different time scales.
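
A compact sketch of dual-rate gating, in which a faster secondary gate interrupts only the speech left intact by the slower primary gate, is shown below; the rates and duty cycles are example values drawn from the ranges mentioned.

```python
# Illustrative sketch only: combine a slow primary gate with a faster secondary
# gate; the secondary interruption applies only to speech the primary gate kept.
import numpy as np

def square_gate(n, fs, rate_hz, duty):
    t = np.arange(n) / fs
    return ((t * rate_hz) % 1.0) < duty

fs = 16000
speech = np.random.randn(fs * 2)                     # placeholder sentence waveform
primary = square_gate(len(speech), fs, rate_hz=2.0, duty=0.5)
secondary = square_gate(len(speech), fs, rate_hz=24.0, duty=0.5)
dual_gated = speech * (primary & secondary)          # ~25% of the original retained
```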


Subject(s)
Aging/psychology , Hearing Loss, Sensorineural/psychology , Persons With Hearing Impairments/psychology , Presbycusis/psychology , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Cues , Female , Hearing Loss, Sensorineural/diagnosis , Humans , Male , Memory, Short-Term , Middle Aged , Pattern Recognition, Physiological , Presbycusis/diagnosis , Psychoacoustics , Sound Spectrography , Time Factors , Young Adult
10.
J Speech Lang Hear Res ; 58(2): 509-19, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25633579

ABSTRACT

PURPOSE: The study investigated the effect of a short computer-based environmental sound training regimen on the perception of environmental sounds and speech in experienced cochlear implant (CI) patients. METHOD: Fourteen CI patients with the average of 5 years of CI experience participated. The protocol consisted of 2 pretests, 1 week apart, followed by 4 environmental sound training sessions conducted on separate days in 1 week, and concluded with 2 posttest sessions, separated by another week without training. Each testing session included an environmental sound test, which consisted of 40 familiar everyday sounds, each represented by 4 different tokens, as well as the Consonant Nucleus Consonant (CNC) word test, and Revised Speech Perception in Noise (SPIN-R) sentence test. RESULTS: Environmental sounds scores were lower than for either of the speech tests. Following training, there was a significant average improvement of 15.8 points in environmental sound perception, which persisted 1 week later after training was discontinued. No significant improvements were observed for either speech test. CONCLUSIONS: The findings demonstrate that environmental sound perception, which remains problematic even for experienced CI patients, can be improved with a home-based computer training regimen. Such computer-based training may thus provide an effective low-cost approach to rehabilitation for CI users, and potentially, other hearing impaired populations.


Subject(s)
Acoustic Stimulation/methods , Auditory Perception , Cochlear Implants/psychology , Correction of Hearing Impairment/methods , Hearing Loss/rehabilitation , Aged , Aged, 80 and over , Environment , Female , Hearing Loss/psychology , Humans , Male , Middle Aged , Noise , Sound
11.
J Assoc Res Otolaryngol ; 15(5): 839-48, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24899379

ABSTRACT

Noise reduction (NR) systems are commonplace in modern digital hearing aids. Though not improving speech intelligibility, NR helps the hearing-aid user in terms of lowering noise annoyance, reducing cognitive load and improving ease of listening. Previous psychophysical work has shown that NR does in fact improve the ability of normal-hearing (NH) listeners to discriminate the slow amplitude-modulation (AM) cues representative of those found in speech. The goal of this study was to assess whether this improvement of AM discrimination with NR can also be observed for hearing-impaired (HI) listeners. AM discrimination was measured at two audio frequencies of 500 Hz and 2 kHz in a background noise with a signal-to-noise ratio of 12 dB. Discrimination was measured for ten HI and ten NH listeners with and without NR processing. The HI listeners had a moderate sensorineural hearing loss of about 50 dB HL at 2 kHz and normal hearing (≤ 20 dB HL) at 500 Hz. The results showed that most of the HI listeners tended to benefit from NR at 500 Hz but not at 2 kHz. However, statistical analyses showed that HI listeners did not benefit significantly from NR at any frequency region. In comparison, the NH listeners showed a significant benefit from NR at both frequencies. For each condition, the fidelity of AM transmission was quantified by a computational model of early auditory processing. The parameters of the model were adjusted separately for the two groups (NH and HI) of listeners. The AM discrimination performance of the HI group (with and without NR) was best captured by a model simulating the loss of the fast-acting amplitude compression applied by the normal cochlea. This suggests that the lack of benefit from NR for HI listeners results from loudness recruitment.


Subject(s)
Auditory Perception , Noise , Acoustic Stimulation , Adult , Aged , Algorithms , Humans , Middle Aged , Persons With Hearing Impairments , Speech Discrimination Tests
12.
Audiol Res ; 4(1), 2014.
Article in English | MEDLINE | ID: mdl-25568764

ABSTRACT

Past work has shown a relationship between the ability to discriminate spectral patterns and measures of speech intelligibility. The purpose of this study was to investigate the ability of both children and young adults to discriminate static and dynamic spectral patterns, comparing performance between the two groups and evaluating within-group results in terms of relationship to speech-in-noise perception. Data were collected from normal-hearing children (age range: 5.4-12.8 yrs) and young adults (mean age: 22.8 yrs) on two spectral discrimination tasks and speech-in-noise perception. The first discrimination task, involving static spectral profiles, measured the ability to detect a change in the phase of a low-density sinusoidal spectral ripple of wideband noise. Using dynamic spectral patterns, the second task determined the signal-to-noise ratio needed to discriminate the temporal pattern of frequency fluctuation imposed by stochastic low-rate frequency modulation (FM). Children performed significantly poorer than young adults on both discrimination tasks. For children, a significant correlation between speech-in-noise perception and spectral-pattern discrimination was obtained only with the dynamic patterns of the FM condition, with partial correlation suggesting that factors related to the children's age mediated the relationship.
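
The static spectral-ripple stimulus can be sketched by imposing a sinusoidal (in dB) envelope along the log-frequency spectrum of wideband noise, with the ripple phase as the discrimination cue. The bandwidth, ripple depth, and density below are assumed values, not the study's parameters.

```python
# Illustrative sketch only: wideband noise with a sinusoidal spectral ripple on a
# logarithmic frequency axis (ripple density in cycles/octave, adjustable phase).
import numpy as np

def ripple_noise(fs=44100, dur=0.5, density=1.5, phase=0.0, depth_db=20.0,
                 f_lo=100.0, f_hi=8000.0):
    n = int(fs * dur)
    spec = np.fft.rfft(np.random.randn(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.zeros_like(freqs)
    octaves[band] = np.log2(freqs[band] / f_lo)
    # Sinusoidal ripple (in dB) along the log-frequency axis.
    ripple_db = 0.5 * depth_db * np.sin(2 * np.pi * density * octaves + phase)
    gain = np.where(band, 10 ** (ripple_db / 20), 0.0)
    return np.fft.irfft(spec * gain, n)

standard = ripple_noise(phase=0.0)
target = ripple_noise(phase=np.pi)           # phase-shifted ripple to be detected
```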

13.
Atten Percept Psychophys ; 75(1): 121-31, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23007205

ABSTRACT

In three experiments, we examined the ability of listeners to discriminate the duration of temporal gaps (silent intervals) and the influence of other temporal stimulus properties on their performance. In the first experiment, gap-duration discrimination thresholds were measured either in continuous noise or with noise markers with durations of 3 and 300 ms. Thresholds measured with 300-ms markers differed from those measured in continuous noise or with 3-ms markers. In the second experiment, stimuli consisting of a gap between two discrete markers were generated such that the gap duration, the onset-to-onset duration between markers, and the duration of the first marker were pseudorandomized across trials. Listeners' responses generally were consistent with the cue that was identified as the target cue from among the three cues in each block of trials, but the data suggested that the onset-to-onset cue was particularly salient in all conditions. Using a modified method-of-adjustment procedure in the third experiment, subjects were instructed to discriminate between the durations of gaps in discrete markers of different durations in two intervals, where the gap duration in one interval was adapted to measure the point of subjective equality. Without feedback, listeners tended to equate the onset-to-onset times of the markers rather than the gap durations. Overall, the results indicated that listeners' judgments of silent gaps between two discrete markers are strongly influenced by the onset-to-onset time, or rhythm, of the markers.


Subject(s)
Auditory Perception/physiology , Auditory Threshold/physiology , Cues , Discrimination, Psychological/physiology , Adult , Audiometry, Pure-Tone , Computer Simulation , Female , Humans , Male , Models, Psychological , Models, Statistical , Noise , Psychoacoustics , Regression Analysis , Time Factors
14.
J Assoc Res Otolaryngol ; 14(1): 149-57, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23180229

ABSTRACT

The goal of noise reduction (NR) algorithms in digital hearing aid devices is to reduce background noise whilst preserving as much of the original signal as possible. These algorithms may increase the signal-to-noise ratio (SNR) in an ideal case, but they generally fail to improve speech intelligibility. However, due to the complex nature of speech, it is difficult to disentangle the numerous low- and high-level effects of NR that may underlie the lack of speech perception benefits. The goal of this study was to better understand why NR algorithms do not improve speech intelligibility by investigating the effects of NR on the ability to discriminate two basic acoustic features, namely amplitude modulation (AM) and frequency modulation (FM) cues, known to be crucial for speech identification in quiet and in noise. Here, discrimination of complex, non-linguistic AM and FM patterns was measured for normal hearing listeners using a same/different task. The stimuli were generated by modulating 1-kHz pure tones by either a two-component AM or FM modulator with patterns changed by manipulating component phases. Modulation rates were centered on 3 Hz. Discrimination of AM and FM patterns was measured in quiet and in the presence of a white noise that had been passed through a gammatone filter centered on 1 kHz. The noise was presented at SNRs ranging from -6 to +12 dB. Stimuli were left as such or processed via an NR algorithm based on the spectral subtraction method. NR was found to yield small but systematic improvements in discrimination for the AM conditions at favorable SNRs but had little effect, if any, on FM discrimination. A computational model of early auditory processing was developed to quantify the fidelity of AM and FM transmission. The model captured the improvement in discrimination performance for AM stimuli at high SNRs with NR. However, the model also predicted a relatively small detrimental effect of NR for FM stimuli in contrast with the average psychophysical data. Overall, these results suggest that the lack of benefits of NR on speech intelligibility is partly caused by the limited effect of NR on the transmission of narrowband speech modulation cues.
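
The abstract names the spectral-subtraction family of NR algorithms; a generic, textbook-style magnitude spectral subtraction (not the specific algorithm used in the study) might look like the following sketch, with the noise-segment duration and spectral floor as assumed parameters.

```python
# Illustrative sketch only: basic magnitude spectral subtraction, estimating the
# noise spectrum from an initial noise-only segment of the input.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.25, floor=0.05):
    f, t, Z = stft(noisy, fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    # Average the first few frames (assumed noise-only) as the noise estimate.
    n_noise_frames = max(1, int(noise_seconds * fs / 256))
    noise_mag = mag[:, :n_noise_frames].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs, nperseg=512)
    return enhanced

fs = 16000
noisy = np.random.randn(fs)                  # placeholder noisy signal
enhanced = spectral_subtraction(noisy, fs)
```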


Subject(s)
Algorithms , Auditory Perception/physiology , Noise , Speech Intelligibility/physiology , Acoustic Stimulation , Adult , Auditory Threshold/physiology , Humans , Models, Biological , Speech Perception/physiology
15.
Proc Meet Acoust ; 19(1), 2013 Jun.
Article in English | MEDLINE | ID: mdl-26500713

ABSTRACT

Both behavioral and physiological studies have demonstrated enhanced processing of speech in challenging listening environments attributable to musical training. The relationship, however, of this benefit to auditory abilities as assessed by psychoacoustic measures remains unclear. Using tasks previously shown to relate to speech-in-noise perception, the present study evaluated discrimination ability for static and dynamic spectral patterns by 49 listeners grouped as either musicians or nonmusicians. The two static conditions measured the ability to detect a change in the phase of a logarithmic sinusoidal spectral ripple of wideband noise with ripple densities of 1.5 and 3.0 cycles per octave chosen to emphasize either timbre or pitch distinctions, respectively. The dynamic conditions assessed temporal-pattern discrimination of 1-kHz pure tones frequency modulated by different lowpass noise samples with thresholds estimated in terms of either stimulus duration or signal-to-noise ratio. Musicians performed significantly better than nonmusicians on all four tasks. Discriminant analysis showed that group membership was correctly predicted for 88% of the listeners with the structure coefficient of each measure greater than 0.51. Results suggest that enhanced processing of static and dynamic spectral patterns defined by low-rate modulation may contribute to the relationship between musical training and speech-in-noise perception. [Supported by NIH.].
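
As an illustration of the group-classification analysis reported, the sketch below fits a linear discriminant model on synthetic discrimination scores; the group means, sample sizes, and separation are invented for the example.

```python
# Illustrative sketch only: LDA predicting group membership (musician vs.
# nonmusician) from four discrimination measures, using synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
scores = np.vstack([rng.normal(0.0, 1.0, (25, 4)),   # 25 hypothetical nonmusicians
                    rng.normal(0.8, 1.0, (24, 4))])  # 24 hypothetical musicians
group = np.array([0] * 25 + [1] * 24)

lda = LinearDiscriminantAnalysis().fit(scores, group)
accuracy = lda.score(scores, group)                  # proportion correctly classified
print(f"classification accuracy: {accuracy:.2f}")
```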

16.
J Am Acad Audiol ; 24(10): 969-79, 2013.
Article in English | MEDLINE | ID: mdl-24384082

ABSTRACT

PURPOSE: The goals of this study were (1) to investigate the reliability of a clinical music perception test, Appreciation of Music in Cochlear Implantees (AMICI), and (2) examine associations between the perception of music and speech. AMICI was developed as a clinical instrument for assessing music perception in persons with cochlear implants (CIs). The test consists of four subtests: (1) music versus environmental noise discrimination, (2) musical instrument identification (closed-set), (3) musical style identification (closed-set), and (4) identification of musical pieces (open-set). To be clinically useful, it is crucial for AMICI to demonstrate high test-retest reliability, so that CI users can be assessed and retested after changes in maps or programming strategies. RESEARCH DESIGN: Thirteen CI subjects were tested with AMICI for the initial visit and retested again 10-14 days later. Two speech perception tests (consonant-nucleus-consonant [CNC] and Bamford-Kowal-Bench Speech-in-Noise [BKB-SIN]) were also administered. DATA ANALYSIS: Test-retest reliability and equivalence of the test's three forms were analyzed using paired t-tests and correlation coefficients, respectively. Correlation analysis was also conducted between results from the music and speech perception tests. RESULTS: Results showed no significant difference between test and retest (p > 0.05) with adequate power (0.9) as well as high correlations between the three forms (Forms A and B, r = 0.91; Forms A and C, r = 0.91; Forms B and C, r = 0.95). Correlation analysis showed high correlation between AMICI and BKB-SIN (r = -0.71), and moderate correlation between AMICI and CNC (r = 0.4). CONCLUSIONS: The study showed AMICI is highly reliable for assessing musical perception in CI users.
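
The core of the reliability analysis reduces to a paired t-test plus correlations; a minimal sketch with made-up test-retest scores (not the AMICI data) is shown below.

```python
# Illustrative sketch only: test-retest comparison with a paired t-test and
# Pearson correlation, using fabricated example scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
test = rng.normal(75, 10, size=13)               # 13 hypothetical CI users, visit 1
retest = test + rng.normal(0, 3, size=13)        # visit 2, 10-14 days later

t_stat, p_val = stats.ttest_rel(test, retest)    # no significant mean shift expected
r, _ = stats.pearsonr(test, retest)              # agreement across visits
print(f"paired t: p = {p_val:.3f}, Pearson r = {r:.2f}")
```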


Subject(s)
Audiometry/standards , Auditory Perception/physiology , Cochlear Implantation/adverse effects , Cochlear Implants , Hearing Loss, Sensorineural/physiopathology , Music/psychology , Acoustic Stimulation/methods , Adult , Aged , Aged, 80 and over , Audiometry/methods , Audiometry/statistics & numerical data , Female , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Middle Aged , Noise , Predictive Value of Tests , Recognition, Psychology/physiology , Reproducibility of Results , Surveys and Questionnaires
17.
Trends Amplif ; 16(2): 83-101, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22891070

ABSTRACT

Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients.
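
A four-channel noise vocoder of the kind mentioned can be sketched as band-splitting the input, extracting each band's amplitude envelope, and using it to modulate band-limited noise; the band edges, filter orders, and envelope cutoff below are assumptions rather than the study's processing parameters.

```python
# Illustrative sketch only: simple four-channel noise vocoder (band-split,
# envelope extraction, envelope-modulated noise carriers).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocoder(signal, fs, edges=(100, 500, 1200, 2500, 6000), env_cut=50.0):
    out = np.zeros_like(signal, dtype=float)
    be, ae = butter(2, env_cut / (fs / 2))           # envelope smoothing filter
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        env = filtfilt(be, ae, np.abs(hilbert(band)))  # slow amplitude envelope
        carrier = filtfilt(b, a, np.random.randn(len(signal)))
        out += np.clip(env, 0, None) * carrier
    return out

fs = 16000
speech = np.random.randn(fs)                         # placeholder speech waveform
vocoded = noise_vocoder(speech, fs)
```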


Subject(s)
Acoustic Stimulation/methods , Cognition , Environment , Noise/adverse effects , Perceptual Masking , Speech Perception , Adaptation, Psychological , Adult , Audiometry, Speech , Cochlear Implants , Executive Function , Female , Humans , Male , Neuropsychological Tests , Recognition, Psychology , Signal Detection, Psychological , Speech Acoustics , Speech Intelligibility , Time Factors , Young Adult
18.
Ear Hear ; 33(6): 709-20, 2012.
Article in English | MEDLINE | ID: mdl-22790319

ABSTRACT

OBJECTIVE: The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. The purpose of the present study was to use a clinically oriented approach to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM and determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. DESIGN: Data were collected from 18 normal-hearing young adults, and 18 participants who were at least 60 years old, nine of whom had normal hearing and the remaining nine who had a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz low-pass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF) both in quiet and with a speech-babble masker present, stimulus duration, and signal-to-noise ratio (SNR(FM)) in the presence of a speech-babble masker. Speech-perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. RESULTS: Results showed a significant effect of age but not of hearing loss among the older listeners, for FM discrimination conditions with masking present (ΔF and SNR(FM)). The effect of age was not significant for the FM measures based on stimulus duration. ΔF and SNR(FM) were also the two conditions for which performance was significantly correlated with listener age when controlling for effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNR(FM) condition were significantly correlated with QuickSIN performance. CONCLUSIONS: Results indicate that aging is associated with reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM. Furthermore, the relationship between QuickSIN performance and the SNR(FM) thresholds suggests that the difficulty experienced by older listeners with speech-in-noise processing may, in part, relate to diminished ability to process slower fine-structure modulation at low sensation levels. Results thus suggest that clinical consideration of stochastic FM discrimination measures may offer a fuller picture of auditory-processing abilities.


Subject(s)
Pitch Discrimination , Presbycusis/diagnosis , Presbycusis/psychology , Sound Spectrography , Speech Acoustics , Speech Perception , Speech Reception Threshold Test , Adult , Age Factors , Aged , Auditory Threshold , Female , Humans , Male , Middle Aged , Perceptual Masking , Reference Values , Speech Discrimination Tests , Stochastic Processes
19.
J Acoust Soc Am ; 130(4): 2076-87, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21973362

ABSTRACT

Perception of interrupted speech and the influence of speech materials and memory load were investigated using one or two concurrent square-wave gating functions. Sentences (Experiment 1) and random one-, three-, and five-word sequences (Experiment 2) were interrupted using either a primary gating rate alone (0.5-24 Hz) or a combined primary and faster secondary rate. The secondary rate interrupted only speech left intact after primary gating, reducing the original speech to 25%. In both experiments, intelligibility increased with primary rate, but varied with memory load and speech material (highest for sentences, lowest for five-word sequences). With dual-rate gating of sentences, intelligibility with fast secondary rates was superior to that with single rates and a 25% duty cycle, approaching that of single rates with a 50% duty cycle for some low and high rates. For dual-rate gating of words, the positive effect of fast secondary gating was smaller than for sentences, and the advantage of sentences over word-sequences was not obtained in many dual-rate conditions. These findings suggest that integration of interrupted speech fragments after gating depends on the duration of the gated speech interval and that sufficiently robust acoustic-phonetic word cues are needed to access higher-level contextual sentence information.


Subject(s)
Phonetics , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Analysis of Variance , Audiometry, Speech , Auditory Threshold , Cues , Female , Humans , Male , Memory , Sound Spectrography , Time Factors , Young Adult
20.
J Acoust Soc Am ; 130(2): EL108-14, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21877768

ABSTRACT

Temporal constraints on the perception of variable-size speech fragments produced by interruption rates between 0.5 and 16 Hz were investigated by contrasting the intelligibility of gated sentences with and without silent intervals. Concatenation of consecutive speech fragments produced a significant decrease in intelligibility at 2 and 4 Hz, while having little effect at lower and higher rates. Consistent with previous studies, these findings (1) indicate that syllable-sized intervals associated with intermediate-rate interruptions are more susceptible to temporal distortions than the longer word-size or shorter phoneme-size intervals and (2) suggest qualitative differences in underlying perceptual processes at different rates.


Subject(s)
Cues , Phonetics , Signal Detection, Psychological , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Analysis of Variance , Audiometry, Speech , Auditory Threshold , Female , Humans , Male , Sound Spectrography , Time Factors , Young Adult