Results 1 - 20 of 25
1.
J Am Acad Audiol ; 25(10): 983-98, 2014.
Article in English | MEDLINE | ID: mdl-25514451

ABSTRACT

BACKGROUND: Preference for speech and music processed with nonlinear frequency compression (NFC) and two controls (restricted bandwidth [RBW] and extended bandwidth [EBW] hearing aid processing) was examined in adults and children with hearing loss. PURPOSE: The purpose of this study was to determine if stimulus type (music, sentences), age (children, adults), and degree of hearing loss influence listener preference for NFC, RBW, and EBW. RESEARCH DESIGN: Design was a within-participant, quasi-experimental study. Using a round-robin procedure, participants listened to amplified stimuli that were (1) frequency lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the RBW of conventional hearing aid processing, or (3) low-pass filtered at 11 kHz to simulate EBW amplification. The examiner and participants were blinded to the type of processing. Using a two-alternative forced-choice task, participants selected the preferred music or sentence passage. STUDY SAMPLE: Participants included 16 children (ages 8-16 yr) and 16 adults (ages 19-65 yr) with mild to severe sensorineural hearing loss. INTERVENTION: All participants listened to speech and music processed using a hearing aid simulator fit to the Desired Sensation Level algorithm v5.0a. RESULTS: Children and adults did not differ in their preferences. For speech, participants preferred EBW to both NFC and RBW. Participants also preferred NFC to RBW. Preference was not related to the degree of hearing loss. For music, listeners did not show a preference. However, participants with greater hearing loss preferred NFC to RBW more than participants with less hearing loss. Conversely, participants with greater hearing loss were less likely to prefer EBW to RBW. CONCLUSIONS: Both age groups preferred access to high-frequency sounds, as demonstrated by their preference for either the EBW or NFC conditions over the RBW condition. 
Preference for EBW can be limited for those with greater degrees of hearing loss, but participants with greater hearing loss may be more likely to prefer NFC. Further investigation using participants with more severe hearing loss may be warranted.


Subject(s)
Acoustic Stimulation/methods , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Adolescent , Adult , Aged , Audiology/instrumentation , Child , Female , Humans , Male , Matched-Pair Analysis , Middle Aged , Music , Young Adult
2.
Ear Hear ; 35(4): 440-7, 2014.
Article in English | MEDLINE | ID: mdl-24535558

ABSTRACT

OBJECTIVE: The primary goal of nonlinear frequency compression (NFC) and other frequency-lowering strategies is to increase the audibility of high-frequency sounds that are not otherwise audible with conventional hearing aid (HA) processing due to the degree of hearing loss, limited HA bandwidth, or a combination of both factors. The aim of the present study was to compare estimates of speech audibility processed by NFC with improvements in speech recognition for a group of children and adults with high-frequency hearing loss. DESIGN: Monosyllabic word recognition was measured in noise for 24 adults and 12 children with mild to severe sensorineural hearing loss. Stimuli were amplified based on each listener's audiogram with conventional processing (CP) with amplitude compression or with NFC and presented under headphones using a software-based HA simulator. A modification of the speech intelligibility index (SII) was used to estimate audibility of information in frequency-lowered bands. The mean improvement in SII was compared with the mean improvement in speech recognition. RESULTS: All but 2 listeners experienced improvements in speech recognition with NFC compared with CP, consistent with the small increase in audibility that was estimated using the modification of the SII. Children and adults had similar improvements in speech recognition with NFC. CONCLUSION: Word recognition with NFC was higher than CP for children and adults with mild to severe hearing loss. The average improvement in speech recognition with NFC (7%) was consistent with the modified SII, which indicated that listeners experienced an increase in audibility with NFC compared with CP. Further studies are necessary to determine whether changes in audibility with NFC are related to speech recognition with NFC for listeners with greater degrees of hearing loss, with a greater variety of compression settings, and using auditory training.


Subject(s)
Hearing Aids , Hearing Loss, High-Frequency/rehabilitation , Hearing Loss, Sensorineural/rehabilitation , Speech Intelligibility , Speech Perception , Adolescent , Adult , Aged , Child , Female , Humans , Male , Middle Aged , Treatment Outcome , Young Adult
3.
Ear Hear ; 35(2): 183-94, 2014.
Article in English | MEDLINE | ID: mdl-24473240

ABSTRACT

OBJECTIVES: The goal of this study was to evaluate how digital noise reduction (DNR) impacts listening effort and judgment of sound clarity in children with normal hearing. It was hypothesized that when two DNR algorithms differing in signal-to-noise ratio (SNR) output are compared, the algorithm that provides the greatest improvement in overall output SNR will reduce listening effort and receive a better clarity rating from child listeners. A secondary goal was to evaluate the relation between the inversion method measurements and listening effort with DNR processing. DESIGN: Twenty-four children with normal hearing (ages 7 to 12 years) participated in a speech recognition task in which consonant-vowel-consonant nonwords were presented in broadband background noise. Test stimuli were recorded through two hearing aids with DNR off and DNR on at 0 dB and +5 dB input SNR. Stimuli were presented to listeners and verbal response time (VRT) and phoneme recognition scores were measured. The underlying assumption was that an increase in VRT reflects an increase in listening effort. Children rated the sound clarity for each condition. The two commercially available HAs were chosen based on: (1) an inversion technique, which was used to quantify the magnitude of change in SNR with the activation of DNR, and (2) a measure of magnitude-squared coherence, which was used to ensure that DNR in both devices preserved the spectrum. RESULTS: One device provided a greater improvement in overall output SNR than the other. Both DNR algorithms resulted in minimal spectral distortion as measured using coherence. For both devices, VRT decreased for the DNR-on condition, suggesting that listening effort decreased with DNR in both devices. Clarity ratings were also better in the DNR-on condition for both devices. The device showing the greatest improvement in output SNR with DNR engaged improved phoneme recognition scores. 
The magnitude of this improvement in phoneme recognition was not accurately predicted by measurements of output SNR, and the ability of measured output SNR to predict other outcomes varied. CONCLUSIONS: Overall, results suggest that DNR effectively reduces listening effort and improves subjective clarity ratings in children with normal hearing, but that these improvements are not necessarily related to the output SNR improvements or preserved speech spectra provided by the DNR.


Subject(s)
Algorithms , Hearing Aids , Noise/prevention & control , Signal-To-Noise Ratio , Speech Perception/physiology , Auditory Perception/physiology , Child , Female , Healthy Volunteers , Humans , Male
4.
Ear Hear ; 34(2): e24-7, 2013.
Article in English | MEDLINE | ID: mdl-23104144

ABSTRACT

OBJECTIVE: Nonlinear frequency compression attempts to restore high-frequency audibility by lowering high-frequency input signals. Methods of determining the optimal parameters that maximize speech understanding have not been evaluated. The effect of maximizing the audible bandwidth on speech recognition for a group of listeners with normal hearing is described. DESIGN: Nonword recognition was measured with 20 normal-hearing adults. Three audiograms with different high-frequency thresholds were used to create conditions with varying high-frequency audibility. Bandwidth was manipulated using three conditions for each audiogram: conventional processing, the manufacturer's default compression parameters, and compression parameters that optimized bandwidth. RESULTS: Nonlinear frequency compression optimized to provide the widest audible bandwidth improved nonword recognition compared with both conventional processing and the default parameters. CONCLUSIONS: These results showed that using the widest audible bandwidth maximized speech identification when using nonlinear frequency compression. Future studies should apply these methods to listeners with hearing loss to demonstrate efficacy in clinical populations.


Subject(s)
Acoustic Stimulation/methods , Speech Perception/physiology , Adult , Auditory Threshold , Humans , Young Adult
5.
J Acoust Soc Am ; 127(5): 3177-88, 2010 May.
Article in English | MEDLINE | ID: mdl-21117766

ABSTRACT

In contrast to the availability of consonant confusion studies with adults, to date, no investigators have compared children's consonant confusion patterns in noise to those of adults in a single study. To examine whether children's error patterns are similar to those of adults, three groups of children (24 each, aged 4-5, 6-7, and 8-9 yr) and 24 adult native speakers of American English (AE) performed a recognition task for 15 AE consonants in /ɑ/-consonant-/ɑ/ nonsense syllables presented in a background of speech-shaped noise. Three signal-to-noise ratios (SNR: 0, +5, and +10 dB) were used. Although performance improved as a function of age, overall consonant recognition accuracy as a function of SNR improved at a similar rate for all groups. Detailed analyses using phonetic features (manner, place, and voicing) revealed that stop consonants were the most problematic for all groups. In addition, for the younger children, front consonants presented in the 0 dB SNR condition were more error prone than others. These results suggested that children's use of phonetic cues does not develop at the same rate for all phonetic features.


Subject(s)
Language , Noise/adverse effects , Perceptual Masking , Phonetics , Recognition, Psychology , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Audiometry, Speech , Child , Child, Preschool , Cues , Humans
6.
Ear Hear ; 31(5): 625-35, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20548239

ABSTRACT

OBJECTIVE: Studies of language development in children with mild-moderate hearing loss are relatively rare. Longitudinal studies of children with late-identified hearing loss are relevant for determining how a period of unaided mild-moderate hearing loss impacts development. In recent years, newborn hearing screening programs have effectively reduced the ages of identification for most children with permanent hearing loss. However, some children continue to be identified late, and research is needed to guide management decisions. Furthermore, studies of this group may help to discern whether language normalizes after intervention and/or whether certain aspects of language might be vulnerable to persistent delays. The current study examines the impact of late identification and reduced audibility on speech and language outcomes via a longitudinal study of four children with mild-moderate sensorineural hearing loss. DESIGN: Longitudinal outcomes of four children with late-identified mild-moderate sensorineural hearing loss were studied using standardized measures and language sampling procedures from at or near the point of identification (28 to 41 mos) through 84 mos of age. The children with hearing loss were compared with 10 age-matched children with normal hearing on a majority of the measures through 60 mos of age. Spontaneous language samples were collected from mother-child interaction sessions recorded at consistent intervals in a laboratory-based play setting. Transcripts were analyzed using computer-based procedures (Systematic Analysis of Language Transcripts) and the Index of Productive Syntax. Possible influences of audibility were explored by examining the onset and productive use of a set of verb tense markers and by monitoring the children's accuracy in the use of morphological endings. Phonological samples at baseline were transcribed and analyzed using Computerized Profiling. 
RESULTS: At entry to the study, the four children with hearing loss demonstrated language delays with pronounced delays in phonological development. Three of the four children demonstrated rapid progress with development and interventions and performed within the average range on standardized speech and language measures compared with age-matched children by 60 mos of age. However, persistent differences from children with normal hearing were observed in the areas of morphosyntax, speech intelligibility in conversation, and production of fricatives. Children with mild-moderate hearing loss demonstrated later than typical emergence of certain verb tense markers, which may be related to reduced or inconsistent audibility. CONCLUSIONS: The results of this study suggest that early communication delays will resolve for children with late-identified, mild-moderate hearing loss, given appropriate amplification and intervention services. A positive result is that three of four children demonstrated normalization of broad language behaviors by 60 mos of age, despite significant delays at baseline. However, these children are at risk for persistent delays in phonology at the conversational level and for accuracy in use of morphological markers. The ways in which reduced auditory experiences and audibility may contribute to these delays are explored along with implications for evaluation of outcomes.


Subject(s)
Articulation Disorders/diagnosis , Articulation Disorders/etiology , Hearing Loss, Sensorineural/complications , Hearing Loss, Sensorineural/diagnosis , Severity of Illness Index , Child , Child, Preschool , Humans , Infant , Language Development Disorders/diagnosis , Language Development Disorders/etiology , Longitudinal Studies , Phonetics , Semantics , Speech Intelligibility
7.
Ear Hear ; 31(6): 761-8, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20562623

ABSTRACT

OBJECTIVES: Speech perception difficulties experienced by children in adverse listening environments have been well documented. It has been suggested that phonological awareness may be related to children's ability to understand speech in noise. The goal of this study was to provide data that will allow a clearer characterization of this potential relation in typically developing children. Doing so may result in a better understanding of how children learn to listen in noise, as well as providing information to identify children who are at risk for difficulties listening in noise. DESIGN: Thirty-six children (5 to 7 yrs) with normal hearing participated in the study. Three phonological awareness tasks (syllable counting, initial consonant same, and phoneme deletion), representing a range of skills, were administered. For perception-in-noise tasks, nonsense syllables, monosyllabic words, and meaningful sentences with three key words were presented (50 dB SPL) at three signal-to-noise ratios (0, +5, and +10 dB). RESULTS: Among the speech-in-noise tasks, there was a significant effect of signal-to-noise ratio, with children performing less well at 0-dB signal-to-noise ratio for all stimuli. A significant age effect occurred only for word recognition, with 7-yr-olds scoring significantly higher than 5-yr-olds. For all three phonological awareness tasks, an age effect existed, with 7-yr-olds again performing significantly better than 5-yr-olds. However, when examining the relation between speech recognition in noise and phonological awareness skills, no single variable accounted for a significant part of the variance in performance on nonsense syllables, words, or sentences. However, there was an association between vocabulary knowledge and speech perception in noise. 
CONCLUSIONS: Although phonological awareness skills are strongly related to reading and some children with reading difficulties also demonstrate poor speech perception in noise, results of this study question a relation between phonological awareness skills and speech perception in moderate levels of noise for typically developing children with normal hearing from 5 to 7 yrs of age. Further research in this area is needed to examine possible relations among the many factors that affect both speech perception in noise and the development of phonological awareness.


Subject(s)
Child Development/physiology , Hearing/physiology , Language Development , Noise , Phonetics , Speech Perception/physiology , Acoustic Stimulation/methods , Child , Child, Preschool , Humans , Reading
8.
Ear Hear ; 31(3): 345-55, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20081536

ABSTRACT

OBJECTIVE: Although numerous studies have investigated the effects of single-microphone digital noise-reduction algorithms for adults with hearing loss, similar studies have not been conducted with young hearing-impaired children. The goal of this study was to examine the effects of a commonly used digital noise-reduction scheme (spectral subtraction) in children with mild to moderately severe sensorineural hearing losses. It was hypothesized that the process of spectral subtraction may alter or degrade speech signals in some way. Such degradation may have little influence on the perception of speech by hearing-impaired adults, who are likely to use contextual information under such circumstances. For young children who are still developing various language skills, however, signal degradation may have a more detrimental effect on the perception of speech. DESIGN: Sixteen children (eight 5- to 7-yr-olds and eight 8- to 10-yr-olds) with mild to moderately severe hearing loss participated in this study. All participants wore binaural behind-the-ear hearing aids in which noise-reduction processing was performed independently in 16 bands with center frequencies spaced 500 Hz apart up to 7500 Hz. Test stimuli were nonsense syllables, words, and sentences in a background of noise. For all stimuli, data were obtained in noise reduction (NR) on and off conditions. RESULTS: In general, performance improved as a function of speech-to-noise ratio for all three speech materials. The main effect for stimulus type was significant, and post hoc comparisons indicated that speech recognition was higher for sentences than for both nonsense syllables and words, but no significant differences were observed between nonsense syllables and words. The main effect for NR and the two-way interaction between NR and stimulus type were not significant. Significant age group effects were observed, but the two-way interaction between NR and age group was not significant. 
CONCLUSIONS: Consistent with previous findings from studies with adults, results suggest that the form of NR used in this study does not have a negative effect on the overall perception of nonsense syllables, words, or sentences across the age range (5 to 10 yrs) and speech to noise ratios (0, +5, and +10 dB) tested.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/therapy , Language Development , Noise/prevention & control , Speech Perception/physiology , Acoustic Stimulation , Algorithms , Auditory Threshold , Child , Child, Preschool , Humans , Phonetics , Speech Reception Threshold Test
9.
Ear Hear ; 31(1): 95-101, 2010 Feb.
Article in English | MEDLINE | ID: mdl-19773658

ABSTRACT

OBJECTIVES: The Computer-Aided Speech Perception Assessment (CASPA) is a clinical measure of speech recognition that uses 10-item, isophonemic word lists to derive performance intensity (PI) functions for adult listeners. Because CASPA was developed for adults, the ability to obtain PI functions in children has not been evaluated directly. This study sought to evaluate PI functions for adults and four age groups of children with normal hearing to compare speech recognition as a function of age using CASPA. Comparisons between age groups for scoring by words and phonemes correct were made to determine the relative benefits of available scoring methods in CASPA. DESIGN: Speech recognition using CASPA was completed with 12 adults and four age groups of children (5- to 6-, 7- to 8-, 9- to 10-, and 11- to 12-yr olds), each with 12 participants. Results were scored by the percentage of words, phonemes, consonants, and vowels correct. All participants had normal hearing and age-appropriate speech production skills. RESULTS: Differences in speech recognition were significant across all age groups when responses were scored by the percentage of words correct. However, only differences between adults and the two youngest groups of children were significant when results were scored by the number of phonemes correct. Speech recognition scores decreased as a function of signal to noise ratio for both children and adults. However, the magnitude of degradation at poorer signal to noise ratios did not vary between adults and children, suggesting that mean differences could not be explained by interference from noise. CONCLUSIONS: Obtaining PI functions in noise using CASPA is feasible with children as young as 5 yrs. Statistically significant differences in speech recognition were observed between adults and the two youngest age groups of children when scored by the percentage of words correct. 
When results were scored by the percentage of phonemes correct, however, the only significant difference was between the youngest group of children and the adults. These results suggest that phoneme scoring may help to minimize differences between recognition scores of adults and children because children may be more likely to provide responses that are phonemic approximations when words are outside their lexicon.


Subject(s)
Diagnosis, Computer-Assisted/methods , Speech Reception Threshold Test/methods , Adult , Age Factors , Child , Child, Preschool , Female , Humans , Male , Perceptual Masking , Phonetics , Reference Values , Software
10.
Am J Audiol ; 18(1): 14-23, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19029531

ABSTRACT

PURPOSE: To examine the consistency of hearing aid use by infants. A goal was to identify maternal, child, and situational factors that affected consistency of device use. METHOD: Maternal interviews were conducted using a nonvalidated structured interview (Amplification in Daily Life Questionnaire) that included 5-point Likert scale items and open-ended questions. Participants were mothers of 7 infants with mild to moderately severe hearing loss who were enrolled in a longitudinal study. Data were collected at 4 intervals (10.5-12, 16.5, 22.5, and 28.5 months old). RESULTS: Consistency of amplification use was variable at early ages but improved with age. By age 28.5 months, toddlers used amplification regularly in most settings. Selected daily situations (e.g., in car or outdoors) were more challenging for maintaining device use than contexts where the child was closely monitored. Only 2 families established early, consistent full-time use across all contexts examined. Qualitative results were used to identify familial, developmental, and situational variables that influenced the consistency of infant/toddler device use. CONCLUSION: Families may benefit from audiologic counseling that acknowledges the multifaceted challenges that arise. Audiologists can work in partnership with families to promote consistent device use across a variety of daily situations.


Subject(s)
Hearing Aids/statistics & numerical data , Hearing Loss/rehabilitation , Patient Compliance/statistics & numerical data , Adaptation, Psychological , Age Factors , Child, Preschool , Female , Hearing Loss/psychology , Humans , Infant , Language Development , Longitudinal Studies , Male , Motivation , Patient Compliance/psychology , Prospective Studies , Social Environment , Surveys and Questionnaires , Utilization Review/statistics & numerical data
11.
J Speech Lang Hear Res ; 51(5): 1369-80, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18664693

ABSTRACT

PURPOSE: Recent studies from the authors' laboratory have suggested that reduced audibility in the high frequencies (because of the bandwidth of hearing instruments) may play a role in the delays in phonological development often exhibited by children with hearing impairment. The goal of the current study was to extend previous findings on the effect of bandwidth on fricatives/affricates to more complex stimuli. METHOD: Nine fricatives/affricates embedded in 2-syllable nonsense words were filtered at 5 and 10 kHz and presented to normal-hearing 6- to 7-year-olds who repeated words exactly as heard. Responses were recorded for subsequent phonetic and acoustic analyses. RESULTS: Significant effects of talker gender and bandwidth were found, with better performance for the male talker and the wider bandwidth condition. In contrast to previous studies, relatively small (5%) mean bandwidth effects were observed for /s/ and /z/ spoken by the female talker. Acoustic analyses of stimuli used in the previous and the current studies failed to explain this discrepancy. CONCLUSIONS: It appears likely that a combination of factors (i.e., dynamic cues, prior phonotactic knowledge, and perhaps other unidentified cues to fricative identity) may have facilitated the perception of these complex nonsense words in the current study.


Subject(s)
Hearing Aids , Hearing Loss/complications , Hearing Loss/therapy , Language Development Disorders/etiology , Phonetics , Child , Female , Humans , Language Tests , Male , Pitch Perception , Psychoacoustics , Speech Perception
12.
J Speech Lang Hear Res ; 51(4): 1042-54, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18658070

ABSTRACT

PURPOSE: This study investigated an account of limited short-term memory capacity for children's speech perception in noise using a dual-task paradigm. METHOD: Sixty-four normal-hearing children (7-14 years of age) participated in this study. Dual tasks were repeating monosyllabic words presented in noise at 8 dB signal-to-noise ratio and rehearsing sets of 3 or 5 digits for subsequent serial recall. Half of the children were told to allocate their primary attention to word repetition and the other half to remembering digits. Dual-task performance was compared to single-task performance. Limitations in short-term memory demands required for the primary task were measured by dual-task decrements in nonprimary tasks. RESULTS: Results revealed that (a) regardless of task priority, no dual-task decrements were found for word recognition, but significant dual-task decrements were found for digit recall; (b) most children did not show the ability to allocate attention preferentially to primary tasks; and (c) younger children (7- to 10-year-olds) demonstrated improved word recognition in the dual-task conditions relative to their single-task performance. CONCLUSIONS: Seven- to 8-year-old children showed the greatest improvement in word recognition at the expense of the greatest decrement in digit recall during dual tasks. Several possibilities for improved word recognition in the dual-task conditions are discussed.


Subject(s)
Attention , Recognition, Psychology , Speech Perception , Vocabulary , Child , Female , Humans , Male
13.
Ear Hear ; 28(5): 628-42, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17804977

ABSTRACT

OBJECTIVE: By 24 mo of age, most typically developing infants with normal hearing successfully transition to the production of words that can be understood about 50% of the time. This study compares early phonological development in children with and without hearing loss to gain a clearer understanding of the effects of hearing loss in early-identified children. A secondary goal was to identify measures of early phonetic development that are predictors of later speech production outcomes. DESIGN: The vocalizations and early words of 21 infants with normal hearing and 12 early-identified infants with hearing loss were followed longitudinally over a period of 14 mo (from 10 to 24 mo of age). Thirty-minute mother-child interaction samples were video recorded at 6- to 8-wk intervals in a laboratory playroom setting. Vocalizations produced at 16 and 24 mo were categorized according to communicative intent and recognizable words versus other types. Groups were compared on the structural complexity of words produced at 24 mo of age. Parent report measures of vocabulary development were collected from 10 to 30 mo of age, and Goldman-Fristoe Test of Articulation scores at 36 mo were used in regression analyses. RESULTS: Both groups increased the purposeful use of voice between 16 and 24 mo of age. However, at 24 mo of age, the toddlers with hearing loss produced significantly fewer words that could be recognized by their mothers. Their samples were dominated by unintelligible communicative attempts at this age. In contrast, the samples from normal hearing children were dominated by words and phrases. At 24 mo of age, toddlers with normal hearing were more advanced than those with hearing loss on seven measures of the structural complexity of words. The children with normal hearing attempted more complex words and productions were more accurate than those of children with hearing loss. 
At 10 to 16 mo of age, the groups did not differ significantly on parent-report measures of receptive vocabulary. However, the hearing loss group was much slower to develop expressive vocabulary and demonstrated larger individual differences than the normal hearing group. Six children identified as atypical differed from all other children in vowel accuracy and complexity of word attempts. However, both atypical infants and typical infants with hearing loss were significantly less accurate than normal hearing infants in consonant and word production. Early measures of syllable production predicted unique variance in later speech production and vocabulary outcomes. CONCLUSIONS: The transition from babble to words in infants with hearing loss appears to be delayed but parallel to that of infants with normal hearing. These delays appear to exert significant influences on expressive vocabulary development. Parents may appreciate knowing that some children with hearing loss may develop early vocabulary at a slower rate than children with normal hearing. Clinicians should monitor landmarks from babble onset through transitions to words. Indicators of atypical development were delayed and/or limited use of syllables with consonants, vowel errors, and limited production of recognizable words.


Subject(s)
Child Language , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Sensorineural/diagnosis , Language Development Disorders/diagnosis , Phonetics , Speech Production Measurement , Verbal Learning , Child, Preschool , Communication , Comprehension , Female , Humans , Individuality , Infant , Longitudinal Studies , Male , Reference Values , Risk Assessment , Speech Intelligibility , Vocabulary
14.
Ear Hear ; 28(5): 605-27, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17804976

ABSTRACT

OBJECTIVE: Infants with hearing loss are known to be slower to develop spoken vocabulary than peers with normal hearing. Previous research demonstrates that they differ from normal-hearing children in several aspects of prelinguistic vocal development. Less is known about the vocalizations of early-identified infants with access to current hearing technologies. This longitudinal study documents changes in prelinguistic vocalizations in early-identified infants with varying degrees of hearing loss, compared with a group of infants with normal hearing. It was hypothesized that infants with hearing loss would demonstrate phonetic delays and that selected aspects of phonetic learning may be differentially affected by restricted auditory access. DESIGN: The vocalizations and early verbalizations of 21 infants with normal hearing and 12 early-identified infants with hearing loss were compared over a period of 14 mo (from 10 to 24 mo of age). Thirty-minute mother-child interaction sessions were video recorded at 6- to 8-wk intervals in a laboratory playroom setting. Syllable complexity changes and consonantal development were quantified from vocalizations and early verbalizations. Early behaviors were related to speech production measures at 36 mo of age. Participants with hearing loss were recruited from local audiology clinics and early intervention programs. Participants with normal hearing were recruited through day care centers and pediatrician offices. RESULTS: Relative to age-matched, normal-hearing peers, children with hearing loss were delayed in the onset of consistent canonical babble. However, certain children with moderately severe losses babbled on time, and infants with cochlear implants babbled within 2 to 6 mo of implantation. The infants with hearing loss had smaller consonantal inventories and were slower to increase syllable shape complexity than age-matched normal-hearing peers. 
The overall pattern of results suggested that consonant development in infants with hearing loss was delayed but not qualitatively different from children with normal hearing. Delays appeared to be less pronounced than suggested by previous research. However, fricative/affricate development progressed slowly in infants with hearing loss and divergence from the patterns of normal-hearing children was observed. Six children (2 with normal hearing; 4 with hearing loss) were identified as atypical, based on their rates of development. At 24 mo of age, these children persisted in producing a high proportion (0.59) of vocalizations lacking consonants, which was negatively correlated with Goldman-Fristoe scores at 36 mo (r = -0.60). CONCLUSIONS: Results suggest that early-identified children are delayed in consonant and syllable structure development, which may influence early word learning rates. Fricative/affricate development appears to be challenging for some infants with hearing loss. This may be related to the effects of sensorineural hearing loss on high-frequency information, restricted bandwidth provided by amplification, and reduced audibility in contexts of noise and reverberation. Delayed fricative use may have implications for morphological development. Atypically slow rates of change in syllable development may indicate that a child is at risk for delayed speech development.
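The atypical-development finding above rests on an ordinary Pearson correlation between the proportion of consonant-free vocalizations at 24 mo and Goldman-Fristoe scores at 36 mo. A minimal sketch of that statistic, using entirely hypothetical values (the study's raw data are not reproduced here):

```python
import numpy as np

# Hypothetical (made-up) values for six children: proportion of 24-mo
# vocalizations lacking consonants, and Goldman-Fristoe scores at 36 mo.
prop_no_consonant = np.array([0.65, 0.59, 0.48, 0.55, 0.62, 0.40])
goldman_fristoe = np.array([38.0, 45.0, 60.0, 52.0, 41.0, 66.0])

# Pearson product-moment correlation; a value near -0.6 would indicate
# that more consonant-free vocalizing goes with lower articulation scores.
r = np.corrcoef(prop_no_consonant, goldman_fristoe)[0, 1]
```

With real data the sign and magnitude of `r` carry the clinical interpretation; the values above merely illustrate the computation.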


Subject(s)
Child Language , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Sensorineural/diagnosis , Language Development Disorders/diagnosis , Phonetics , Age Factors , Articulation Disorders/diagnosis , Articulation Disorders/physiopathology , Articulation Disorders/rehabilitation , Audiometry , Auditory Threshold/physiology , Brain Stem/physiopathology , Cochlear Implants , Evoked Potentials, Auditory, Brain Stem/physiology , Female , Follow-Up Studies , Hearing Aids , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Sensorineural/physiopathology , Hearing Loss, Sensorineural/rehabilitation , Humans , Infant , Infant, Newborn , Language Development Disorders/physiopathology , Language Development Disorders/rehabilitation , Longitudinal Studies , Male , Neonatal Screening , Otoacoustic Emissions, Spontaneous/physiology , Phonation/physiology , Reference Values , Verbal Behavior/physiology
15.
Ear Hear ; 28(4): 483-94, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17609611

ABSTRACT

OBJECTIVE: Previous studies from our laboratory have shown that a restricted stimulus bandwidth can have a negative effect upon the perception of the phonemes /s/ and /z/, which serve multiple linguistic functions in the English language. These findings may have important implications for the development of speech and language in young children with hearing loss because the bandwidth of current hearing aids generally is restricted to 6 to 7 kHz. The primary goal of the current study was to expand our previous work to examine the effects of stimulus bandwidth on a wide range of speech materials, to include a variety of auditory-related tasks, and to include the effects of background noise. DESIGN: Thirty-two children with normal hearing and 24 children with sensorineural hearing loss (7 to 14 yr) participated in this study. To assess the effects of stimulus bandwidth, four different auditory tasks were used: 1) nonsense syllable perception, 2) word recognition, 3) novel-word learning, and 4) listening effort. Auditory stimuli recorded by a female talker were low-pass filtered at 5 and 10 kHz and presented in noise. RESULTS: For the children with normal hearing, significant bandwidth effects were observed for the perception of nonsense syllables and for words but not for novel-word learning or listening effort. In the 10-kHz bandwidth condition, children with hearing loss showed significant improvements for monosyllabic words but not for nonsense syllables, novel-word learning, or listening effort. Further examination, however, revealed marked improvements for the perception of specific phonemes. For example, bandwidth effects for the perception of /s/ and /z/ were not only significant but much greater than those seen in the group with normal hearing. CONCLUSIONS: The current results are consistent with previous studies that have shown that a restricted stimulus bandwidth can negatively affect the perception of /s/ and /z/ spoken by female talkers. 
Given the importance of these phonemes in the English language and the tendency of early caregivers to be female, an inability to perceive these sounds correctly may have a negative impact on both phonological and morphological development.
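The two bandwidth conditions described above are straightforward to reproduce in software. A minimal sketch, assuming a synthetic stimulus and zero-phase Butterworth filtering (the study's actual stimuli and filter design are not specified here):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(signal, fs, cutoff_hz, order=8):
    """Zero-phase low-pass filter: one way to create a bandwidth condition."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 44100  # assumed sampling rate, Hz
t = np.arange(fs) / fs  # 1 s of signal
rng = np.random.default_rng(0)
# Synthetic stimulus: a 1 kHz tone plus broadband noise standing in for
# the high-frequency frication energy of /s/ and /z/.
stimulus = np.sin(2 * np.pi * 1000 * t) + 0.1 * rng.standard_normal(fs)

narrow = lowpass(stimulus, fs, 5000)   # 5 kHz condition
wide = lowpass(stimulus, fs, 10000)    # 10 kHz condition
```

The 5 kHz condition removes the 5 to 10 kHz band that carries much of the frication energy of female-talker /s/ and /z/, which is the contrast the study exploits.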


Subject(s)
Hearing Disorders/diagnosis , Hearing Loss, Sensorineural/diagnosis , Speech Perception , Adolescent , Child , Child Language , Cochlea/physiopathology , Female , Hearing Disorders/epidemiology , Hearing Loss, Sensorineural/epidemiology , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Memory, Short-Term , Phonetics , Severity of Illness Index , Speech Discrimination Tests
16.
Ear Hear ; 25(3): 302-7, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15179120

ABSTRACT

OBJECTIVE: To determine the maximum stimulus levels at which a measured auditory steady-state response (ASSR) can be assumed to be a reliable measure of auditory thresholds. DESIGN: ASSR thresholds were measured at octave frequencies from 500 to 4000 Hz in 10 subjects with profound hearing loss. These subjects provided no behavioral responses to sound at the limits of pure-tone audiometers and at the limits of the stimulus levels produced by the ASSR device. Subjects were divided into two groups of five, with repeated measures obtained within the same session in one group and repeated measures obtained in a separate session on a different day in the other group. RESULTS: ASSR thresholds were observed in all 10 subjects at each of four frequencies and in both trials. On average, these ASSR thresholds were observed at 100 dB HL (SD = 5 dB). Because these responses were at least 18 to 22 dB below the limits of the equipment where all subjects had no behavioral responses, it is reasonable to conclude that the ASSRs were not generated by the auditory system. CONCLUSIONS: An artifact or distortion may be present in the recording of ASSRs at high levels. These data bring into question the view that there is a wider dynamic range for ASSR measurements compared with auditory brain stem response measurements, at least with current implementation.


Subject(s)
Auditory Threshold/physiology , Cochlear Implants , Evoked Potentials, Auditory/physiology , Hearing Loss/physiopathology , Adult , Aged , Hearing Loss/therapy , Humans , Middle Aged , Reproducibility of Results
17.
Arch Otolaryngol Head Neck Surg ; 130(5): 556-62, 2004 May.
Article in English | MEDLINE | ID: mdl-15148176

ABSTRACT

OBJECTIVES: To review recent research studies concerning the importance of high-frequency amplification for speech perception in adults and children with hearing loss and to provide preliminary data on the phonological development of normal-hearing and hearing-impaired infants. DESIGN AND SETTING: With the exception of preliminary data from a longitudinal study of phonological development, all of the reviewed studies were taken from the archival literature. To determine the course of phonological development in the first 4 years of life, the following 3 groups of children were recruited: 20 normal-hearing children, 12 hearing-impaired children identified and aided up to 12 months of age (early-ID group), and 4 hearing-impaired children identified after 12 months of age (late-ID group). Children were videotaped in 30-minute sessions at 6- to 8-week intervals from 4 to 36 months of age (or shortly after identification of hearing loss) and at 2- and 6-month intervals thereafter. Broad transcription of child vocalizations, babble, and words was conducted using the International Phonetic Alphabet. A phoneme was judged acquired if it was produced 3 times in a 30-minute session. SUBJECTS: Preliminary data are presented from the 20 normal-hearing children, 3 children from the early-ID group, and 2 children from the late-ID group. RESULTS: Compared with the normal-hearing group, the 3 children from the early-ID group showed marked delays in the acquisition of all phonemes. The delay was shortest for vowels and longest for fricatives. Delays for the 2 children from the late-ID group were substantially longer. 
CONCLUSIONS: The reviewed studies and preliminary results from our longitudinal study suggest that (1) hearing-aid studies with adult subjects should not be used to predict speech and language performance in infants and young children; (2) the bandwidth of current behind-the-ear hearing aids is inadequate to accurately represent the high-frequency sounds of speech, particularly for female speakers; and (3) preliminary data on phonological development in infants with hearing loss suggest that the greatest delays occur for fricatives, consistent with predictions based on hearing-aid bandwidth.


Subject(s)
Language Development , Persons With Hearing Impairments , Speech Perception , Acoustics , Adult , Age of Onset , Child, Preschool , Disabled Children , Female , Hearing Aids , Humans , Infant , Language Development Disorders/etiology , Male , Radio Waves
18.
Ear Hear ; 25(1): 47-56, 2004 Feb.
Article in English | MEDLINE | ID: mdl-14770017

ABSTRACT

OBJECTIVE: The goal of this study was to assess performance on a novel-word learning task by normal-hearing and hearing-impaired children for words varying in form (noun versus verb), stimulus level (50 versus 60 dB SPL), and number of repetitions (4 versus 6). It was hypothesized that novel-word learning would be significantly poorer in the subjects with hearing loss, would increase with both level and repetition, and would be better for nouns than verbs. DESIGN: Twenty normal-hearing and 11 hearing-impaired children (6 to 9 yr old) participated in this study. Each child viewed a 4-minute animated slide show containing 8 novel words. The effects of hearing status, word form, repetition, and stimulus level were examined systematically. The influence of audibility, word recognition, chronological age, and lexical development also were evaluated. After hearing the story twice, children were asked to identify each word from a set of four pictures. RESULTS: Overall performance was 60% for the normal-hearing children and 41% for the children with hearing loss. Significant predictors of performance were PPVT raw scores, hearing status, stimulus level, and repetitions. The variables age, audibility, word recognition scores, and word form were not significant predictors. CONCLUSIONS: Results suggest that a child's ability to learn new words can be predicted from vocabulary size, stimulus level, number of exposures, and hearing status. Further, the sensitivity to presentation level observed in this novel-word learning task suggests that this type of paradigm may be an effective tool for studying various forms of hearing aid signal processing algorithms.


Subject(s)
Hearing Loss, Sensorineural/physiopathology , Learning , Persons With Hearing Impairments , Vocabulary , Acoustic Stimulation , Age Factors , Auditory Threshold , Child , Female , Hearing Aids , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/therapy , Hearing Loss, Sensorineural/therapy , Humans , Male , Task Performance and Analysis
20.
J Speech Lang Hear Res ; 46(3): 649-57, 2003 Jun.
Article in English | MEDLINE | ID: mdl-14696992

ABSTRACT

This study examined the long- and short-term spectral characteristics of speech simultaneously recorded at the ear and at a reference microphone position (30 cm at 0 degrees azimuth). Twenty adults and 26 children (2-4 years of age) with normal hearing were asked to produce 9 short sentences in a quiet environment. Long-term average speech spectra (LTASS) were calculated for the concatenated sentences, and short-term spectra were calculated for selected phonemes within the sentences (/m/, /n/, /s/, /ʃ/, /f/, /a/, /u/, and /i/). Relative to the reference microphone position, the LTASS at the ear showed higher amplitudes for frequencies below 1 kHz and lower amplitudes for frequencies above 2 kHz for both groups. At both microphone positions, the short-term spectra of the children's phonemes revealed reduced amplitudes for /s/ and /ʃ/ and for vowel energy above 2 kHz relative to the adults' phonemes. The results of this study suggest that, for listeners with hearing loss (a) the talker's own voice through a hearing instrument would contain lower overall energy at frequencies above 2 kHz relative to speech originating in front of the talker, (b) a child's own speech would contain even lower energy above 2 kHz because of adult-child differences in overall amplitude, and (c) frequency regions important to normal speech development (e.g., high-frequency energy in the phonemes /s/ and /ʃ/) may not be amplified sufficiently by many hearing instruments.
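The long-term average speech spectrum described above is, in essence, a power spectral density averaged over an entire recording. A minimal sketch using Welch's method on synthetic noise (the sampling rate, segment length, and stand-in signal are assumptions, not the study's analysis parameters):

```python
import numpy as np
from scipy.signal import welch

def ltass_db(signal, fs, nperseg=1024):
    """Long-term average spectrum: power averaged across overlapping
    segments of the whole recording, returned in dB per frequency bin."""
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
    return freqs, 10 * np.log10(psd + 1e-12)

fs = 22050  # assumed sampling rate, Hz
rng = np.random.default_rng(0)
# Stand-in for a concatenated speech recording: 5 s of broadband noise.
speech = rng.standard_normal(fs * 5)
freqs, spectrum = ltass_db(speech, fs)

# Comparing spectra from two microphone positions (ear vs. reference)
# would amount to subtracting two such dB curves bin by bin.
```

Averaging over the concatenated sentences is what makes the measure "long-term": segment-to-segment phonetic detail is smoothed out, leaving the talker's overall spectral shape.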


Subject(s)
Hearing Aids , Hearing Loss/rehabilitation , Speech Acoustics , Adult , Child, Preschool , Female , Humans , Male , Middle Aged , Sound Spectrography , Speech Production Measurement