Results 1 - 20 of 24
1.
J Speech Lang Hear Res ; 59(1): 110-21, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26540194

ABSTRACT

PURPOSE: This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. METHOD: Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. RESULTS: Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. CONCLUSIONS: The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.
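Masking release here is the difference between the speech reception threshold (SRT) measured in unmodulated noise and the SRT measured in modulated noise. A minimal sketch of that calculation is shown below; the function name and the example SRT values are illustrative assumptions, not the study's analysis code or data.

```python
def masking_release(srt_unmodulated_db, srt_modulated_db):
    """Masking release (dB): how much lower the SRT is in modulated noise
    than in unmodulated noise. Positive values mean the listener benefited
    from dips in the masker."""
    return srt_unmodulated_db - srt_modulated_db

# Hypothetical SRT values (dB SNR), not data from the study:
print(masking_release(srt_unmodulated_db=-2.0, srt_modulated_db=-3.0))  # 1.0 dB
```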


Subject(s)
Hearing Loss , Speech Perception , Acoustic Stimulation/methods , Adolescent , Adult , Aged , Aging/psychology , Auditory Threshold , Child , Female , Hearing Loss/psychology , Hearing Tests , Humans , Language Tests , Male , Middle Aged , Noise/adverse effects , Pattern Recognition, Physiological , Sex Characteristics , Speech Acoustics , Young Adult
2.
J Am Acad Audiol ; 25(10): 983-98, 2014.
Article in English | MEDLINE | ID: mdl-25514451

ABSTRACT

BACKGROUND: Preference for speech and music processed with nonlinear frequency compression (NFC) and two controls (restricted bandwidth [RBW] and extended bandwidth [EBW] hearing aid processing) was examined in adults and children with hearing loss. PURPOSE: The purpose of this study was to determine if stimulus type (music, sentences), age (children, adults), and degree of hearing loss influence listener preference for NFC, RBW, and EBW. RESEARCH DESIGN: Design was a within-participant, quasi-experimental study. Using a round-robin procedure, participants listened to amplified stimuli that were (1) frequency lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the RBW of conventional hearing aid processing, or (3) low-pass filtered at 11 kHz to simulate EBW amplification. The examiner and participants were blinded to the type of processing. Using a two-alternative forced-choice task, participants selected the preferred music or sentence passage. STUDY SAMPLE: Participants included 16 children (ages 8-16 yr) and 16 adults (ages 19-65 yr) with mild to severe sensorineural hearing loss. INTERVENTION: All participants listened to speech and music processed using a hearing aid simulator fit to the Desired Sensation Level algorithm v5.0a. RESULTS: Children and adults did not differ in their preferences. For speech, participants preferred EBW to both NFC and RBW. Participants also preferred NFC to RBW. Preference was not related to the degree of hearing loss. For music, listeners did not show a preference. However, participants with greater hearing loss preferred NFC to RBW more than participants with less hearing loss. Conversely, participants with greater hearing loss were less likely to prefer EBW to RBW. CONCLUSIONS: Both age groups preferred access to high-frequency sounds, as demonstrated by their preference for either the EBW or NFC conditions over the RBW condition. Preference for EBW can be limited for those with greater degrees of hearing loss, but participants with greater hearing loss may be more likely to prefer NFC. Further investigation using participants with more severe hearing loss may be warranted.
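The RBW and EBW control conditions were created by low-pass filtering the amplified stimuli at 5 kHz and 11 kHz, respectively. The sketch below shows that kind of filtering with SciPy; the Butterworth design, filter order, zero-phase filtering, and sampling rate are illustrative assumptions, not the study's actual signal chain.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(signal, cutoff_hz, fs_hz, order=8):
    """Zero-phase Butterworth low-pass filter (illustrative settings)."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs_hz, output="sos")
    return sosfiltfilt(sos, signal)

fs = 44100                      # assumed sampling rate
x = np.random.randn(fs)         # stand-in for an amplified speech or music stimulus
rbw = lowpass(x, 5000, fs)      # restricted bandwidth (conventional processing)
ebw = lowpass(x, 11000, fs)     # extended bandwidth
```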


Subject(s)
Acoustic Stimulation/methods , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Adolescent , Adult , Aged , Audiology/instrumentation , Child , Female , Humans , Male , Matched-Pair Analysis , Middle Aged , Music , Young Adult
3.
Ear Hear ; 35(5): 519-32, 2014.
Article in English | MEDLINE | ID: mdl-24699702

ABSTRACT

OBJECTIVES: The authors have demonstrated that the limited bandwidth associated with conventional hearing aid amplification prevents useful high-frequency speech information from being transmitted. The purpose of this study was to examine the efficacy of two popular frequency-lowering algorithms and one novel algorithm (spectral envelope decimation) in adults with mild to moderate sensorineural hearing loss and in normal-hearing controls. DESIGN: Participants listened monaurally through headphones to recordings of nine fricatives and affricates spoken by three women in a vowel-consonant context. Stimuli were mixed with speech-shaped noise at 10 dB SNR and recorded through a Widex Inteo IN-9 and a Phonak Naída UP V behind-the-ear (BTE) hearing aid. Frequency transposition (FT) is used in the Inteo, and nonlinear frequency compression (NFC) is used in the Naída. Both devices were programmed to lower frequencies above 4 kHz, but neither device could lower frequencies above 6 to 7 kHz. Each device was tested under four conditions: frequency lowering deactivated (FT-off and NFC-off), frequency lowering activated (FT and NFC), wideband (WB), and a fourth condition unique to each hearing aid. The WB condition was constructed by mixing recordings from the first condition with high-pass filtered versions of the source stimuli. For the Inteo, the fourth condition consisted of recordings made with the same settings as the first, but with the noise-reduction feature activated (FT-off). For the Naída, the fourth condition was the same as the first condition except that source stimuli were preprocessed by a novel frequency compression algorithm, spectral envelope decimation (SED), designed in MATLAB, which allowed for a more complete lowering of the 4 to 10 kHz input band. A follow-up experiment with NFC used Phonak's Naída SP V BTE, which could also lower a greater range of input frequencies. RESULTS: For normal-hearing and hearing-impaired listeners, performance with FT was significantly worse than in the other conditions. Consistent with previous findings, performance for the hearing-impaired listeners in the WB condition was significantly better than in the FT-off condition. In addition, performance in both the SED and WB conditions was significantly better than in the NFC-off condition and the NFC condition with 6 kHz input bandwidth. There were no significant differences between SED and WB, indicating that improvements in fricative identification obtained by increasing bandwidth can also be obtained using this form of frequency compression. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for the phonemes /s/ and /z/. In the follow-up experiment, performance in the NFC condition with 10 kHz input bandwidth was significantly better than with NFC-off, replicating the results obtained with SED. Furthermore, listeners who performed poorly with NFC-off tended to show the most improvement with NFC. CONCLUSIONS: Improvements in the identification of stimuli chosen to be sensitive to the effects of frequency lowering have been demonstrated using two forms of frequency compression (NFC and SED) in individuals with mild to moderate high-frequency sensorineural hearing loss. However, negative results caution against using FT for this population. Results also indicate that the advantage of an extended bandwidth as reported here and elsewhere applies to the input bandwidth for frequency compression (NFC/SED) when the start frequency is ≥4 kHz.
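Frequency-compression schemes of this general type remap input frequencies above a start frequency into a narrower output range so that energy from roughly 4 to 10 kHz lands in a region where the listener has usable hearing. The sketch below shows one common way such a mapping is expressed, compressing frequencies above the start frequency in the log domain; the formula, start frequency, and compression ratio are illustrative assumptions only and do not represent the SED algorithm or either manufacturer's implementation.

```python
import numpy as np

def compress_frequency(f_in_hz, start_hz=4000.0, ratio=2.0):
    """Map input frequencies above start_hz to lower output frequencies by
    compressing them in the log-frequency domain (illustrative only)."""
    f = np.asarray(f_in_hz, dtype=float)
    out = f.copy()
    above = f > start_hz
    out[above] = start_hz * (f[above] / start_hz) ** (1.0 / ratio)
    return out

# With these assumed settings, a 10 kHz component is moved to about 6.3 kHz,
# while components below the start frequency are left unchanged:
print(compress_frequency([3000.0, 6000.0, 10000.0]))
```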


Subject(s)
Algorithms , Hearing Aids , Hearing Loss, Sensorineural/physiopathology , Speech Perception/physiology , Adolescent , Adult , Aged , Auditory Perception/physiology , Case-Control Studies , Equipment Design , Female , Hearing Loss, Sensorineural/rehabilitation , Humans , Male , Middle Aged , Sound Spectrography , Young Adult
4.
Ear Hear ; 35(4): 440-7, 2014.
Article in English | MEDLINE | ID: mdl-24535558

ABSTRACT

OBJECTIVE: The primary goal of nonlinear frequency compression (NFC) and other frequency-lowering strategies is to increase the audibility of high-frequency sounds that are not otherwise audible with conventional hearing aid (HA) processing due to the degree of hearing loss, limited HA bandwidth, or a combination of both factors. The aim of the present study was to compare estimates of speech audibility processed by NFC with improvements in speech recognition for a group of children and adults with high-frequency hearing loss. DESIGN: Monosyllabic word recognition was measured in noise for 24 adults and 12 children with mild to severe sensorineural hearing loss. Stimuli were amplified based on each listener's audiogram with conventional processing (CP) with amplitude compression or with NFC and presented under headphones using a software-based HA simulator. A modification of the speech intelligibility index (SII) was used to estimate audibility of information in frequency-lowered bands. The mean improvement in SII was compared with the mean improvement in speech recognition. RESULTS: All but 2 listeners experienced improvements in speech recognition with NFC compared with CP, consistent with the small increase in audibility that was estimated using the modification of the SII. Children and adults had similar improvements in speech recognition with NFC. CONCLUSION: Word recognition with NFC was higher than CP for children and adults with mild to severe hearing loss. The average improvement in speech recognition with NFC (7%) was consistent with the modified SII, which indicated that listeners experienced an increase in audibility with NFC compared with CP. Further studies are necessary to determine whether changes in audibility with NFC are related to speech recognition with NFC for listeners with greater degrees of hearing loss, with a greater variety of compression settings, and using auditory training.


Subject(s)
Hearing Aids , Hearing Loss, High-Frequency/rehabilitation , Hearing Loss, Sensorineural/rehabilitation , Speech Intelligibility , Speech Perception , Adolescent , Adult , Aged , Child , Female , Humans , Male , Middle Aged , Treatment Outcome , Young Adult
5.
Ear Hear ; 34(5): 585-91, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23446226

ABSTRACT

OBJECTIVES: Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children who are developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. DESIGN: Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal to noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and a 5.6 kHz low-pass cutoff. RESULTS: Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full bandwidth and 5.6 kHz low-pass filter conditions, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. CONCLUSIONS: Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected. The negative effects of reduced audibility may occur even under conditions where speech repetition is not impacted. Limited stimulus audibility may result in greater cognitive effort for verbal rehearsal in working memory and may limit the availability of cognitive resources to allocate to working memory and other processes.


Subject(s)
Hearing/physiology , Phonetics , Speech Discrimination Tests , Speech Perception/physiology , Verbal Learning/physiology , Acoustic Stimulation/methods , Child , Female , Humans , Male , Memory, Short-Term/physiology , Mental Recall/physiology , Noise , Reaction Time/physiology , Reference Values
6.
Ear Hear ; 34(2): e24-7, 2013.
Article in English | MEDLINE | ID: mdl-23104144

ABSTRACT

OBJECTIVE: Nonlinear frequency compression attempts to restore high-frequency audibility by lowering high-frequency input signals. Methods of determining the optimal parameters that maximize speech understanding have not been evaluated. The effect of maximizing the audible bandwidth on speech recognition for a group of listeners with normal hearing is described. DESIGN: Nonword recognition was measured with 20 normal-hearing adults. Three audiograms with different high-frequency thresholds were used to create conditions with varying high-frequency audibility. Bandwidth was manipulated using three conditions for each audiogram: conventional processing, the manufacturer's default compression parameters, and compression parameters that optimized bandwidth. RESULTS: Nonlinear frequency compression optimized to provide the widest audible bandwidth improved nonword recognition compared with both conventional processing and the default parameters. CONCLUSIONS: These results showed that using the widest audible bandwidth maximized speech identification when using nonlinear frequency compression. Future studies should apply these methods to listeners with hearing loss to demonstrate efficacy in clinical populations.


Subject(s)
Acoustic Stimulation/methods , Speech Perception/physiology , Adult , Auditory Threshold , Humans , Young Adult
7.
Ear Hear ; 33(6): 731-44, 2012.
Article in English | MEDLINE | ID: mdl-22732772

ABSTRACT

OBJECTIVES: The purpose of this study was to determine how combinations of reverberation and noise, typical of environments in many elementary school classrooms, affect normal-hearing school-aged children's speech recognition in stationary and amplitude-modulated noise, and to compare their performance with that of normal-hearing young adults. In addition, the magnitude of release from masking in the modulated noise relative to that in stationary noise was compared across age groups in nonreverberant and reverberant listening conditions. Last, for all noise and reverberation combinations the degree of change in predicted performance at 70% correct was obtained for all age groups using a best-fit cubic polynomial. DESIGN: Bamford-Kowal-Bench sentences and noise were convolved with binaural room impulse responses representing nonreverberant and reverberant environments to create test materials representative of both audiology clinics and school classroom environments. Speech recognition of 48 school-aged children and 12 adults was measured in speech-shaped and amplitude-modulated speech-shaped noise, in the following three virtual listening environments: nonreverberant, reverberant at 2 m, and reverberant at 6 m. RESULTS: Speech recognition decreased in the reverberant conditions and with decreasing age. Release from masking in modulated noise relative to stationary noise decreased with age and was reduced by reverberation. In the nonreverberant condition, participants showed similar amounts of masking release across ages. The slopes of performance-intensity functions increased with age, with the exception of the nonreverberant modulated masker condition. The slopes were steeper in the stationary masker conditions, where they also decreased with reverberation and distance. In the presence of a modulated masker, the slopes did not differ between the two reverberant conditions. CONCLUSIONS: The results of this study reveal systematic developmental changes in speech recognition in noisy and reverberant environments for elementary-school-aged children. The overall pattern suggests that younger children require better acoustic conditions to achieve sentence recognition equivalent to their older peers and adults. In addition, this is the first study to report a reduction of masking release in children as a result of reverberation. Results support the importance of minimizing noise and reverberation in classrooms, and highlight the need to incorporate noise and reverberation into audiological speech-recognition testing to improve predictions of performance in the real world.
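The virtual listening environments were created by convolving the sentences and noise with binaural room impulse responses (BRIRs). The sketch below illustrates that convolution for one ear plus scaling of the masker to a target SNR; the toy impulse response, the RMS-based SNR definition, and the signal lengths are illustrative assumptions, not the study's stimulus-generation code.

```python
import numpy as np
from scipy.signal import fftconvolve

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def make_condition(speech, noise, brir, snr_db):
    """Convolve speech and noise with one ear's BRIR, then scale the noise
    so the speech-to-noise ratio equals snr_db (illustrative)."""
    s = fftconvolve(speech, brir, mode="full")
    n = fftconvolve(noise, brir, mode="full")[: len(s)]
    n *= rms(s) / (rms(n) * 10 ** (snr_db / 20.0))
    return s + n

# Stand-in signals; real stimuli would be BKB sentences and recorded BRIRs.
fs = 44100
speech = np.random.randn(2 * fs)
noise = np.random.randn(2 * fs)
brir = np.zeros(fs // 2)
brir[0], brir[4410] = 1.0, 0.5   # toy direct sound plus one reflection
mixed = make_condition(speech, noise, brir, snr_db=0.0)
```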


Subject(s)
Noise/adverse effects , Perceptual Masking , Speech Perception , Speech Reception Threshold Test , Adolescent , Adult , Age Factors , Child , Female , Humans , Male , Noise/prevention & control , Reference Values , Social Environment , Sound Spectrography , Young Adult
8.
J Acoust Soc Am ; 130(6): 4070-81, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22225061

ABSTRACT

This study investigated the relationship between audibility and predictions of speech recognition for children and adults with normal hearing. The Speech Intelligibility Index (SII) is used to quantify the audibility of speech signals and can be applied to transfer functions to predict speech recognition scores. Although the SII is used clinically with children, relatively few studies have evaluated SII predictions of children's speech recognition directly. Children have required more audibility than adults to reach maximum levels of speech understanding in previous studies. Furthermore, children may require greater bandwidth than adults for optimal speech understanding, which could influence frequency-importance functions used to calculate the SII. Speech recognition was measured for 116 children and 19 adults with normal hearing. Stimulus bandwidth and background noise level were varied systematically in order to evaluate speech recognition as predicted by the SII and derive frequency-importance functions for children and adults. Results suggested that children required greater audibility to reach the same level of speech understanding as adults. However, differences in performance between adults and children did not vary across frequency bands.
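The SII combines, band by band, how audible the speech is with how important that band is for intelligibility. The sketch below shows that importance-weighted sum using an ANSI-style band-audibility term clipped between 0 and 1; the band levels, importance weights, and omission of level-distortion and spread-of-masking terms are illustrative simplifications, not the study's implementation or its derived frequency-importance functions.

```python
import numpy as np

def sii_simplified(speech_levels_db, noise_levels_db, importance):
    """Simplified SII: importance-weighted band audibility, where each band's
    audibility is (speech - noise + 15)/30 clipped to [0, 1]. Level-distortion
    and spread-of-masking terms are omitted."""
    speech = np.asarray(speech_levels_db, dtype=float)
    noise = np.asarray(noise_levels_db, dtype=float)
    weights = np.asarray(importance, dtype=float)
    audibility = np.clip((speech - noise + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(weights * audibility))

# Hypothetical 4-band example (importance weights sum to 1):
print(sii_simplified([50, 45, 40, 30], [40, 40, 40, 40], [0.2, 0.3, 0.3, 0.2]))
```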


Subject(s)
Phonetics , Speech Intelligibility/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Audiometry, Speech , Child , Child, Preschool , Humans , Middle Aged , Perceptual Masking/physiology , Sound Spectrography
9.
J Acoust Soc Am ; 127(5): 3177-88, 2010 May.
Article in English | MEDLINE | ID: mdl-21117766

ABSTRACT

In contrast to the availability of consonant confusion studies with adults, to date, no investigators have compared children's consonant confusion patterns in noise to those of adults in a single study. To examine whether children's error patterns are similar to those of adults, three groups of children (24 each in 4-5, 6-7, and 8-9 yrs. old) and 24 adult native speakers of American English (AE) performed a recognition task for 15 AE consonants in /ɑ/-consonant-/ɑ/ nonsense syllables presented in a background of speech-shaped noise. Three signal-to-noise ratios (SNR: 0, +5, and +10 dB) were used. Although the performance improved as a function of age, the overall consonant recognition accuracy as a function of SNR improved at a similar rate for all groups. Detailed analyses using phonetic features (manner, place, and voicing) revealed that stop consonants were the most problematic for all groups. In addition, for the younger children, front consonants presented in the 0 dB SNR condition were more error prone than others. These results suggested that children's use of phonetic cues do not develop at the same rate for all phonetic features.


Subject(s)
Language , Noise/adverse effects , Perceptual Masking , Phonetics , Recognition, Psychology , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Audiometry, Speech , Child , Child, Preschool , Cues , Humans
10.
Ear Hear ; 31(5): 625-35, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20548239

ABSTRACT

OBJECTIVE: Studies of language development in children with mild-moderate hearing loss are relatively rare. Longitudinal studies of children with late-identified hearing loss are relevant for determining how a period of unaided mild-moderate hearing loss impacts development. In recent years, newborn hearing screening programs have effectively reduced the ages of identification for most children with permanent hearing loss. However, some children continue to be identified late, and research is needed to guide management decisions. Furthermore, studies of this group may help to discern whether language normalizes after intervention and/or whether certain aspects of language might be vulnerable to persistent delays. The current study examines the impact of late identification and reduced audibility on speech and language outcomes via a longitudinal study of four children with mild-moderate sensorineural hearing loss. DESIGN: Longitudinal outcomes of four children with late-identified mild-moderate sensorineural hearing loss were studied using standardized measures and language sampling procedures from at or near the point of identification (28 to 41 mos) through 84 mos of age. The children with hearing loss were compared with 10 age-matched children with normal hearing on a majority of the measures through 60 mos of age. Spontaneous language samples were collected from mother-child interaction sessions recorded at consistent intervals in a laboratory-based play setting. Transcripts were analyzed using computer-based procedures (Systematic Analysis of Language Transcripts) and the Index of Productive Syntax. Possible influences of audibility were explored by examining the onset and productive use of a set of verb tense markers and by monitoring the children's accuracy in the use of morphological endings. Phonological samples at baseline were transcribed and analyzed using Computerized Profiling. RESULTS: At entry to the study, the four children with hearing loss demonstrated language delays with pronounced delays in phonological development. Three of the four children demonstrated rapid progress with development and interventions and performed within the average range on standardized speech and language measures compared with age-matched children by 60 mos of age. However, persistent differences from children with normal hearing were observed in the areas of morphosyntax, speech intelligibility in conversation, and production of fricatives. Children with mild-moderate hearing loss demonstrated later than typical emergence of certain verb tense markers, which may be related to reduced or inconsistent audibility. CONCLUSIONS: The results of this study suggest that early communication delays will resolve for children with late-identified, mild-moderate hearing loss, given appropriate amplification and intervention services. A positive result is that three of four children demonstrated normalization of broad language behaviors by 60 mos of age, despite significant delays at baseline. However, these children are at risk for persistent delays in phonology at the conversational level and for accuracy in use of morphological markers. The ways in which reduced auditory experiences and audibility may contribute to these delays are explored along with implications for evaluation of outcomes.


Subject(s)
Articulation Disorders/diagnosis , Articulation Disorders/etiology , Hearing Loss, Sensorineural/complications , Hearing Loss, Sensorineural/diagnosis , Severity of Illness Index , Child , Child, Preschool , Humans , Infant , Language Development Disorders/diagnosis , Language Development Disorders/etiology , Longitudinal Studies , Phonetics , Semantics , Speech Intelligibility
11.
Ear Hear ; 31(6): 761-8, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20562623

ABSTRACT

OBJECTIVES: Speech perception difficulties experienced by children in adverse listening environments have been well documented. It has been suggested that phonological awareness may be related to children's ability to understand speech in noise. The goal of this study was to provide data that will allow a clearer characterization of this potential relation in typically developing children. Doing so may result in a better understanding of how children learn to listen in noise as well as provide information to identify children who are at risk for difficulties listening in noise. DESIGN: Thirty-six children (5 to 7 yrs) with normal hearing participated in the study. Three phonological awareness tasks (syllable counting, initial consonant same, and phoneme deletion), representing a range of skills, were administered. For perception in noise tasks, nonsense syllables, monosyllabic words, and meaningful sentences with three key words were presented (50 dB SPL) at three signal to noise ratios (0, +5, and +10 dB). RESULTS: Among the speech in noise tasks, there was a significant effect of signal to noise ratio, with children performing less well at 0-dB signal to noise ratio for all stimuli. A significant age effect occurred only for word recognition, with 7-yr-olds scoring significantly higher than 5-yr-olds. For all three phonological awareness tasks, an age effect existed, with 7-yr-olds again performing significantly better than 5-yr-olds. However, when examining the relation between speech recognition in noise and phonological awareness skills, no single variable accounted for a significant part of the variance in performance on nonsense syllables, words, or sentences. There was, however, an association between vocabulary knowledge and speech perception in noise. CONCLUSIONS: Although phonological awareness skills are strongly related to reading and some children with reading difficulties also demonstrate poor speech perception in noise, results of this study question a relation between phonological awareness skills and speech perception in moderate levels of noise for typically developing children with normal hearing from 5 to 7 yrs of age. Further research in this area is needed to examine possible relations among the many factors that affect both speech perception in noise and the development of phonological awareness.


Subject(s)
Child Development/physiology , Hearing/physiology , Language Development , Noise , Phonetics , Speech Perception/physiology , Acoustic Stimulation/methods , Child , Child, Preschool , Humans , Reading
12.
Ear Hear ; 31(3): 345-55, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20081536

ABSTRACT

OBJECTIVE: Although numerous studies have investigated the effects of single-microphone digital noise-reduction algorithms for adults with hearing loss, similar studies have not been conducted with young hearing-impaired children. The goal of this study was to examine the effects of a commonly used digital noise-reduction scheme (spectral subtraction) in children with mild to moderately severe sensorineural hearing losses. It was hypothesized that the process of spectral subtraction may alter or degrade speech signals in some way. Such degradation may have little influence on the perception of speech by hearing-impaired adults who are likely to use contextual information under such circumstances. For young children who are still developing various language skills, however, signal degradation may have a more detrimental effect on the perception of speech. DESIGN: Sixteen children (eight 5- to 7-yr-olds and eight 8- to 10-yr-olds) with mild to moderately severe hearing loss participated in this study. All participants wore binaural behind-the-ear hearing aids in which noise-reduction processing was performed independently in 16 bands with center frequencies spaced 500 Hz apart up to 7500 Hz. Test stimuli were nonsense syllables, words, and sentences in a background of noise. For all stimuli, data were obtained in noise reduction (NR) on and off conditions. RESULTS: In general, performance improved as a function of speech to noise ratio for all three speech materials. The main effect for stimulus type was significant, and post hoc comparisons of stimulus type indicated that speech recognition was higher for sentences than for both nonsense syllables and words, but no significant differences were observed between nonsense syllables and words. The main effect for NR and the two-way interaction between NR and stimulus type were not significant. Significant age group effects were observed, but the two-way interaction between NR and age group was not significant. CONCLUSIONS: Consistent with previous findings from studies with adults, results suggest that the form of NR used in this study does not have a negative effect on the overall perception of nonsense syllables, words, or sentences across the age range (5 to 10 yrs) and speech to noise ratios (0, +5, and +10 dB) tested.
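Spectral subtraction estimates the noise magnitude spectrum (for example, during speech pauses) and subtracts it from the noisy-speech spectrum frame by frame. The sketch below shows the basic idea with an STFT; the frame size, spectral floor, and the assumption of a noise-only lead-in segment are illustrative choices and not the hearing aids' 16-band implementation.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, fs, noise_seconds=0.5, floor=0.05):
    """Basic single-channel spectral subtraction (illustrative). Assumes the
    first noise_seconds of the signal contain noise only."""
    _, _, X = stft(noisy, fs=fs, nperseg=512)           # 50% overlap by default
    n_noise_frames = max(1, int(noise_seconds * fs / 256))
    noise_mag = np.mean(np.abs(X[:, :n_noise_frames]), axis=1, keepdims=True)
    mag = np.abs(X)
    cleaned_mag = np.maximum(mag - noise_mag, floor * mag)  # spectral floor
    X_clean = cleaned_mag * np.exp(1j * np.angle(X))
    _, y = istft(X_clean, fs=fs, nperseg=512)
    return y

fs = 16000
noisy = np.random.randn(2 * fs)   # stand-in for speech in noise
enhanced = spectral_subtraction(noisy, fs)
```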


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/therapy , Language Development , Noise/prevention & control , Speech Perception/physiology , Acoustic Stimulation , Algorithms , Auditory Threshold , Child , Child, Preschool , Humans , Phonetics , Speech Reception Threshold Test
13.
Ear Hear ; 31(1): 95-101, 2010 Feb.
Article in English | MEDLINE | ID: mdl-19773658

ABSTRACT

OBJECTIVES: The Computer-Aided Speech Perception Assessment (CASPA) is a clinical measure of speech recognition that uses 10-item, isophonemic word lists to derive performance intensity (PI) functions for adult listeners. Because CASPA was developed for adults, the ability to obtain PI functions in children has not been evaluated directly. This study sought to evaluate PI functions for adults and four age groups of children with normal hearing to compare speech recognition as a function of age using CASPA. Comparisons between age groups for scoring by words and phonemes correct were made to determine the relative benefits of available scoring methods in CASPA. DESIGN: Speech recognition using CASPA was completed with 12 adults and four age groups of children (5- to 6-, 7- to 8-, 9- to 10-, and 11- to 12-yr olds), each with 12 participants. Results were scored by the percentage of words, phonemes, consonants, and vowels correct. All participants had normal hearing and age-appropriate speech production skills. RESULTS: Differences in speech recognition were significant across all age groups when responses were scored by the percentage of words correct. However, only differences between adults and the two youngest groups of children were significant when results were scored by the number of phonemes correct. Speech recognition scores decreased as a function of signal to noise ratio for both children and adults. However, the magnitude of degradation at poorer signal to noise ratios did not vary between adults and children, suggesting that mean differences could not be explained by interference from noise. CONCLUSIONS: Obtaining PI functions in noise using CASPA is feasible with children as young as 5 yrs. Statistically significant differences in speech recognition were observed between adults and the two youngest age groups of children when scored by the percentage of words correct. When results were scored by the percentage of phonemes correct, however, the only significant difference was between the youngest group of children and the adults. These results suggest that phoneme scoring may help to minimize differences between recognition scores of adults and children because children may be more likely to provide responses that are phonemic approximations when words are outside their lexicon.
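Scoring by phonemes rather than whole words gives partial credit when a child's response differs from the target by a single sound. The sketch below shows the two scoring rules for one word, with responses represented as phoneme sequences; the position-by-position comparison is an illustrative simplification of how an examiner would score responses.

```python
def score_item(target_phonemes, response_phonemes):
    """Return (word_correct, phonemes_correct) for one word.
    Position-by-position comparison; illustrative simplification."""
    phonemes_correct = sum(
        t == r for t, r in zip(target_phonemes, response_phonemes)
    )
    word_correct = int(
        phonemes_correct == len(target_phonemes)
        and len(response_phonemes) == len(target_phonemes)
    )
    return word_correct, phonemes_correct

# "fish" /f ɪ ʃ/ heard as "fit" /f ɪ t/: 0 words correct, 2 of 3 phonemes correct.
print(score_item(["f", "ɪ", "ʃ"], ["f", "ɪ", "t"]))
```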


Subject(s)
Diagnosis, Computer-Assisted/methods , Speech Reception Threshold Test/methods , Adult , Age Factors , Child , Child, Preschool , Female , Humans , Male , Perceptual Masking , Phonetics , Reference Values , Software
14.
J Acoust Soc Am ; 126(6): 3114-24, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20000925

ABSTRACT

Sound pressure level in-situ measurements are sensitive to standing-wave pressure minima and have the potential to result in over-amplification with risk to residual hearing in hearing-aid fittings. Forward pressure level (FPL) quantifies the pressure traveling toward the tympanic membrane and may be a potential solution as it is insensitive to ear-canal pressure minima. Derivation of FPL is dependent on a Thevenin-equivalent source calibration technique yielding source pressure and impedance. This technique is found to accurately decompose cavity pressure into incident and reflected components in both a hard-walled test cavity and in the human ear canal through the derivation of a second sound-level measure termed integrated pressure level (IPL). IPL is quantified by the sum of incident and reflected pressure amplitudes. FPL and IPL were both investigated as measures of sound-level entering the middle ear. FPL may be a better measure of middle-ear input because IPL is more dependent on middle-ear reflectance and ear-canal conductance. The use of FPL in hearing-aid applications is expected to provide an accurate means of quantifying high-frequency amplification.
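Once the ear-canal pressure has been decomposed into incident (forward) and reflected components, the two level measures follow directly: FPL is the level of the forward component alone, and IPL is the level of the sum of the incident and reflected pressure amplitudes, as described above. The sketch below shows those two calculations given forward and reverse pressure values at one frequency; the Thevenin-equivalent decomposition itself is assumed to have been done already, the example pressures are hypothetical, and the 20 µPa reference is the standard convention.

```python
import numpy as np

P_REF = 20e-6  # 20 micropascals, standard reference pressure

def fpl_db(p_forward):
    """Forward pressure level: level of the pressure component traveling
    toward the tympanic membrane."""
    return 20.0 * np.log10(np.abs(p_forward) / P_REF)

def ipl_db(p_forward, p_reverse):
    """Integrated pressure level: level of the sum of the incident and
    reflected pressure amplitudes."""
    return 20.0 * np.log10((np.abs(p_forward) + np.abs(p_reverse)) / P_REF)

# Hypothetical complex pressures (Pa) at a single frequency:
pf, pr = 0.02 + 0.01j, 0.008 - 0.012j
print(fpl_db(pf), ipl_db(pf, pr))
```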


Subject(s)
Acoustics , Calibration , Ear, Middle , Electronics/methods , Pressure , Sound , Acoustic Stimulation , Adolescent , Algorithms , Audiometry, Pure-Tone , Auditory Threshold , Child , Ear Canal/physiology , Ear, Middle/physiology , Electronics/instrumentation , Hearing/physiology , Humans , Young Adult
15.
J Acoust Soc Am ; 126(1): 15-24, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19603858

ABSTRACT

Probe-microphone measurements are a reliable method of verifying hearing-aid sound pressure level (SPL) in the ear canal for frequencies between 0.25 and 4 kHz. However, standing waves in the ear canal reduce the accuracy of these measurements above 4 kHz. Recent data suggest that speech information at frequencies up to 10 kHz may enhance speech perception, particularly for children. Incident and reflected components of a stimulus in the ear canal can be separated, allowing the use of forward (incident) pressure as a measure of stimulus level. Two experiments were conducted to determine if hearing-aid output in forward pressure provides valid estimates of in-situ sound level in the ear canal. In experiment 1, SPL measurements were obtained at the tympanic membrane and the medial end of an earmold in ten adults. While within-subject test-retest reliability was acceptable, measures near the tympanic membrane reduced the influence of standing waves for two of the ten participants. In experiment 2, forward pressure measurements were found to be unaffected by standing waves in the ear canal for frequencies up to 10 kHz. Implications for clinical assessment of amplification are discussed.


Subject(s)
Acoustics , Ear Canal , Electronics , Hearing Aids , Pressure , Adolescent , Adult , Analysis of Variance , Child , Female , Humans , Male , Middle Aged , Reproducibility of Results , Tympanic Membrane
16.
J Speech Lang Hear Res ; 51(5): 1369-80, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18664693

ABSTRACT

PURPOSE: Recent studies from the authors' laboratory have suggested that reduced audibility in the high frequencies (because of the bandwidth of hearing instruments) may play a role in the delays in phonological development often exhibited by children with hearing impairment. The goal of the current study was to extend previous findings on the effect of bandwidth on fricatives/affricates to more complex stimuli. METHOD: Nine fricatives/affricates embedded in 2-syllable nonsense words were filtered at 5 and 10 kHz and presented to normal-hearing 6- to 7-year-olds who repeated words exactly as heard. Responses were recorded for subsequent phonetic and acoustic analyses. RESULTS: Significant effects of talker gender and bandwidth were found, with better performance for the male talker and the wider bandwidth condition. In contrast to previous studies, relatively small (5%) mean bandwidth effects were observed for /s/ and /z/ spoken by the female talker. Acoustic analyses of stimuli used in the previous and the current studies failed to explain this discrepancy. CONCLUSIONS: It appears likely that a combination of factors (i.e., dynamic cues, prior phonotactic knowledge, and perhaps other unidentified cues to fricative identity) may have facilitated the perception of these complex nonsense words in the current study.


Subject(s)
Hearing Aids , Hearing Loss/complications , Hearing Loss/therapy , Language Development Disorders/etiology , Phonetics , Child , Female , Humans , Language Tests , Male , Pitch Perception , Psychoacoustics , Speech Perception
17.
J Speech Lang Hear Res ; 51(4): 1042-54, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18658070

ABSTRACT

PURPOSE: This study investigated an account of limited short-term memory capacity for children's speech perception in noise using a dual-task paradigm. METHOD: Sixty-four normal-hearing children (7-14 years of age) participated in this study. Dual tasks were repeating monosyllabic words presented in noise at 8 dB signal-to-noise ratio and rehearsing sets of 3 or 5 digits for subsequent serial recall. Half of the children were told to allocate their primary attention to word repetition and the other half to remembering digits. Dual-task performance was compared to single-task performance. Limitations in short-term memory demands required for the primary task were measured by dual-task decrements in nonprimary tasks. RESULTS: Results revealed that (a) regardless of task priority, no dual-task decrements were found for word recognition, but significant dual-task decrements were found for digit recall; (b) most children did not show the ability to allocate attention preferentially to primary tasks; and (c) younger children (7- to 10-year-olds) demonstrated improved word recognition in the dual-task conditions relative to their single-task performance. CONCLUSIONS: Seven- to 8-year-old children showed the greatest improvement in word recognition at the expense of the greatest decrement in digit recall during dual tasks. Several possibilities for improved word recognition in the dual-task conditions are discussed.
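A dual-task decrement is simply how much performance on a task drops when it must be shared with a second task, relative to performing it alone. A minimal sketch of that comparison is below; the percentage scores are hypothetical, not data from the study.

```python
def dual_task_decrement(single_task_score, dual_task_score):
    """Decrement (percentage points) in a task performed together with
    another task, relative to performing it alone."""
    return single_task_score - dual_task_score

# Hypothetical scores (% correct): word recognition holds steady while
# digit recall drops under the dual-task load.
print(dual_task_decrement(single_task_score=80.0, dual_task_score=79.0))  # words
print(dual_task_decrement(single_task_score=90.0, dual_task_score=72.0))  # digits
```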


Subject(s)
Attention , Recognition, Psychology , Speech Perception , Vocabulary , Child , Female , Humans , Male
18.
Ear Hear ; 28(4): 483-94, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17609611

ABSTRACT

OBJECTIVE: Previous studies from our laboratory have shown that a restricted stimulus bandwidth can have a negative effect upon the perception of the phonemes /s/ and /z/, which serve multiple linguistic functions in the English language. These findings may have important implications for the development of speech and language in young children with hearing loss because the bandwidth of current hearing aids generally is restricted to 6 to 7 kHz. The primary goal of the current study was to expand our previous work to examine the effects of stimulus bandwidth on a wide range of speech materials, to include a variety of auditory-related tasks, and to include the effects of background noise. DESIGN: Thirty-two children with normal hearing and 24 children with sensorineural hearing loss (7 to 14 yr) participated in this study. To assess the effects of stimulus bandwidth, four different auditory tasks were used: 1) nonsense syllable perception, 2) word recognition, 3) novel-word learning, and 4) listening effort. Auditory stimuli recorded by a female talker were low-pass filtered at 5 and 10 kHz and presented in noise. RESULTS: For the children with normal hearing, significant bandwidth effects were observed for the perception of nonsense syllables and for words but not for novel-word learning or listening effort. In the 10-kHz bandwidth condition, children with hearing loss showed significant improvements for monosyllabic words but not for nonsense syllables, novel-word learning, or listening effort. Further examination, however, revealed marked improvements for the perception of specific phonemes. For example, bandwidth effects for the perception of /s/ and /z/ were not only significant but much greater than that seen in the group with normal hearing. CONCLUSIONS: The current results are consistent with previous studies that have shown that a restricted stimulus bandwidth can negatively affect the perception of /s/ and /z/ spoken by female talkers. Given the importance of these phonemes in the English language and the tendency of early caregivers to be female, an inability to perceive these sounds correctly may have a negative impact on both phonological and morphological development.


Subject(s)
Hearing Disorders/diagnosis , Hearing Loss, Sensorineural/diagnosis , Speech Perception , Adolescent , Child , Child Language , Cochlea/physiopathology , Female , Hearing Disorders/epidemiology , Hearing Loss, Sensorineural/epidemiology , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Memory, Short-Term , Phonetics , Severity of Illness Index , Speech Discrimination Tests
19.
J Am Acad Audiol ; 18(4): 292-303, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17580725

ABSTRACT

Larson et al. (2000) reported the findings of a multicenter NIDCD/VA clinical trial that compared hearing aid performance for three output limiting circuits in 360 adults with symmetrical sensorineural hearing loss. The current study was undertaken to examine long-term hearing aid benefit in this same group of participants following five to six years of hearing aid use. The speech-recognition portion of the follow-up study enrolled 108 participants from the original study, 85% of whom were current hearing aid users and 15% of whom had not worn hearing aids during the past month (nonusers). Recognition performance in sound field on the NU-6 (quiet at 62 dB SPL) and the CST (quiet at 74 dB SPL and with -3 and 3 dB signal-to-babble ratios [S/B] at 62 and 74 dB SPL) was measured unaided and aided whenever possible. Speech-recognition abilities had decreased significantly since the original study. Speech-recognition decrements were observed regardless of the speech materials (NU-6 and CST), test condition (quiet and noise), S/B (-3 and 3 dB), or stimulus level (62 and 74 dB SPL). Despite decreases in speech recognition, hearing aid benefit remained largely unchanged since the original study; aided performance exceeded unaided performance regardless of presentation level or noise condition. As in the original study, the relations among stimulus level, S/B, and speech-recognition performance were complex.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/rehabilitation , Speech Perception/physiology , Aged , Audiometry, Pure-Tone , Audiometry, Speech , Follow-Up Studies , Humans , Patient Compliance , Time Factors
20.
Arch Otolaryngol Head Neck Surg ; 130(5): 556-62, 2004 May.
Article in English | MEDLINE | ID: mdl-15148176

ABSTRACT

OBJECTIVES: To review recent research studies concerning the importance of high-frequency amplification for speech perception in adults and children with hearing loss and to provide preliminary data on the phonological development of normal-hearing and hearing-impaired infants. DESIGN AND SETTING: With the exception of preliminary data from a longitudinal study of phonological development, all of the reviewed studies were taken from the archival literature. To determine the course of phonological development in the first 4 years of life, the following 3 groups of children were recruited: 20 normal-hearing children, 12 hearing-impaired children identified and aided up to 12 months of age (early-ID group), and 4 hearing-impaired children identified after 12 months of age (late-ID group). Children were videotaped in 30-minute sessions at 6- to 8-week intervals from 4 to 36 months of age (or shortly after identification of hearing loss) and at 2- and 6-month intervals thereafter. Broad transcription of child vocalizations, babble, and words was conducted using the International Phonetic Alphabet. A phoneme was judged acquired if it was produced 3 times in a 30-minute session. SUBJECTS: Preliminary data are presented from the 20 normal-hearing children, 3 children from the early-ID group, and 2 children from the late-ID group. RESULTS: Compared with the normal-hearing group, the 3 children from the early-ID group showed marked delays in the acquisition of all phonemes. The delay was shortest for vowels and longest for fricatives. Delays for the 2 children from the late-ID group were substantially longer. CONCLUSIONS: The reviewed studies and preliminary results from our longitudinal study suggest that (1) hearing-aid studies with adult subjects should not be used to predict speech and language performance in infants and young children; (2) the bandwidth of current behind-the-ear hearing aids is inadequate to accurately represent the high-frequency sounds of speech, particularly for female speakers; and (3) preliminary data on phonological development in infants with hearing loss suggest that the greatest delays occur for fricatives, consistent with predictions based on hearing-aid bandwidth.


Subject(s)
Language Development , Persons With Hearing Impairments , Speech Perception , Acoustics , Adult , Age of Onset , Child, Preschool , Disabled Children , Female , Hearing Aids , Humans , Infant , Language Development Disorders/etiology , Male , Radio Waves