Results 1 - 19 of 19
1.
Semin Hear ; 44(Suppl 1): S36-S48, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36970648

ABSTRACT

Numerous studies have shown that children with mild bilateral (MBHL) or unilateral hearing loss (UHL) experience speech perception difficulties in poor acoustics. Much of the research in this area has been conducted via laboratory studies using speech-recognition tasks with a single talker and presentation via earphones and/or from a loudspeaker located directly in front of the listener. Real-world speech understanding is more complex, however, and these children may need to exert greater effort than their peers with normal hearing to understand speech, potentially impacting progress in a number of developmental areas. This article discusses issues and research relative to speech understanding in complex environments for children with MBHL or UHL and implications for real-world listening and understanding.

2.
Lang Speech Hear Serv Sch ; 51(1): 55-67, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31913801

ABSTRACT

Purpose: Because of uncertainty about the level of hearing where hearing aids should be provided to children, the goal of the current study was to develop audibility-based hearing aid candidacy criteria based on the relationship between unaided hearing and language outcomes in a group of children with hearing loss who did not wear hearing aids. Method: Unaided hearing and language outcomes were examined for 52 children with mild-to-severe hearing losses. A group of 52 children with typical hearing matched for age, nonverbal intelligence, and socioeconomic status was included as a comparison group representing the range of optimal language outcomes. Two audibility-based criteria were considered: (a) the level of unaided hearing where unaided children with hearing loss fell below the median for children with typical hearing and (b) the level of unaided hearing where the slope of language outcomes changed significantly based on an iterative, piecewise regression modeling approach. Results: The level of unaided audibility for children with hearing loss that was associated with differences in language development from children with typical hearing or based on the modeling approach varied across outcomes and criteria but converged at an unaided speech intelligibility index of 80. Conclusions: Children with hearing loss who have unaided speech intelligibility index values less than 80 may be at risk for delays in language development without hearing aids. The unaided speech intelligibility index potentially could be used as a clinical criterion for hearing aid fitting candidacy for children with hearing loss.
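The "iterative, piecewise regression" used to locate the change in slope can be sketched as a broken-stick search over candidate breakpoints. Everything below (function name, synthetic SII/outcome data, candidate grid) is a hypothetical illustration, not the study's actual analysis:

```python
import numpy as np

def piecewise_breakpoint(x, y, candidates):
    """Fit a two-segment (broken-stick) linear model at each candidate
    breakpoint and return the one with the lowest residual sum of
    squares. Here x stands in for unaided SII (0-100) and y for a
    language outcome score."""
    best_bp, best_rss = None, np.inf
    for bp in candidates:
        # Basis: intercept, slope below bp, extra slope above bp (hinge)
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0)])
        coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = rss[0] if len(rss) else np.sum((y - X @ coef) ** 2)
        if rss < best_rss:
            best_bp, best_rss = bp, rss
    return best_bp

# Synthetic example: outcomes flat above SII 80, declining below it
rng = np.random.default_rng(0)
sii = rng.uniform(40, 100, 200)
score = np.where(sii < 80, 100 - 0.8 * (80 - sii), 100) + rng.normal(0, 2, 200)
print(piecewise_breakpoint(sii, score, candidates=range(50, 96)))
```

With data shaped like the abstract's conclusion, the recovered breakpoint lands near an SII of 80.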


Subject(s)
Hearing Aids , Hearing Loss, Bilateral/rehabilitation , Hearing Tests/standards , Language Development , Speech Intelligibility , Speech Perception , Acoustics , Audiometry , Child , Child, Preschool , Deafness , Female , Humans , Intelligence , Language , Male , Treatment Outcome
3.
Lang Speech Hear Serv Sch ; 51(1): 98-102, 2020 01 08.
Article in English | MEDLINE | ID: mdl-31913804

ABSTRACT

Purpose: This epilogue discusses messages that we can take forward from the articles in the forum. A common theme throughout the forum is the ongoing need for research. The forum begins with evidence of potential progressive hearing loss in infants with mild bilateral hearing loss, who may be missed by current newborn hearing screening protocols, and supports the need for consensus regarding early identification in this population. Consensus regarding management similarly is a continuing need. Three studies add to the growing body of evidence that children with mild bilateral or unilateral hearing loss are at risk for difficulties in speech understanding in adverse environments, as well as delays in language and cognition, and that difficulties may persist beyond early childhood. Ambivalence regarding whether and when children with mild bilateral or unilateral hearing loss should be fitted with personal amplification also impacts management decisions. Two articles address current evidence and support the need for further research into factors influencing decisions regarding amplification in these populations. A third article examines new criteria to determine hearing aid candidacy in children with mild hearing loss. The final contribution in this forum discusses listening-related fatigue in children with unilateral hearing loss. The absence of research specific to this population is evidence of the need for further investigation. Ongoing research that addresses difficulties experienced by children with mild bilateral and unilateral hearing loss and potential management options can help guide us toward interventions that are specific to the needs of these children.


Subject(s)
Audiology/methods , Hearing Aids , Hearing Loss, Bilateral/epidemiology , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Unilateral/epidemiology , Hearing Loss, Unilateral/rehabilitation , Speech , Child , Child, Preschool , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Unilateral/diagnosis , Humans , Infant , Severity of Illness Index
4.
Ear Hear ; 39(4): 783-794, 2018.
Article in English | MEDLINE | ID: mdl-29252979

ABSTRACT

OBJECTIVES: Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. DESIGN: Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. RESULTS: Behavioral task performance was higher for children with NH than for either group of children with hearing loss. 
There were no differences in performance between children with UHL and children with MBHL. Eye-tracker analysis revealed that children with NH looked more at the screens overall than did children with MBHL or UHL, though individual differences were greater in the groups with hearing loss. Listeners in all groups spent a small proportion of time looking at relevant screens as talkers spoke. Although looking was distributed across all screens, there was a bias toward the right side of the display. There was no relationship between overall looking behavior and performance on the task. CONCLUSIONS: The present study examined the processing of audiovisual speech in the context of a naturalistic task. Results demonstrated that children distributed their looking to a variety of sources during the task, but that children with NH were more likely to look at screens than were those with MBHL/UHL. However, all groups looked at the relevant talkers as they were speaking only a small proportion of the time. Despite variability in looking behavior, listeners were able to follow the audiovisual instructions and children with NH demonstrated better performance than children with MBHL/UHL. These results suggest that performance on some challenging multi-talker audiovisual tasks is not dependent on visual fixation to relevant talkers for children with NH or with MBHL/UHL.


Subject(s)
Fixation, Ocular , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Unilateral/physiopathology , Speech Perception , Visual Perception , Case-Control Studies , Child , Child Behavior , Female , Humans , Male , Severity of Illness Index , Task Performance and Analysis
5.
Ear Hear ; 36(1): 136-44, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25170780

ABSTRACT

OBJECTIVES: While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. DESIGN: Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic headtracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each presented audio-only by a single talker either from the loudspeaker at 0 degree azimuth or randomly from the five loudspeakers. RESULTS: Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL. 
CONCLUSIONS: The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.


Subject(s)
Child Behavior , Hearing Loss/physiopathology , Noise , Schools , Speech Perception/physiology , Acoustics , Audiometry, Pure-Tone , Auditory Threshold , Case-Control Studies , Child , Humans , Severity of Illness Index , Sound Localization/physiology
6.
Am J Audiol ; 23(3): 326-36, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25036922

ABSTRACT

PURPOSE: This study examined children's ability to follow audio-visual instructions presented in noise and reverberation. METHOD: Children (8-12 years of age) with normal hearing followed instructions in noise or noise plus reverberation. Performance was compared for a single talker (ST), multiple talkers speaking one at a time (MT), and multiple talkers with competing comments from other talkers (MTC). Working memory was assessed using measures of digit span. RESULTS: Performance was better for children in noise than for those in noise plus reverberation. In noise, performance for ST was better than for either MT or MTC, and performance for MT was better than for MTC. In noise plus reverberation, performance for ST and MT was better than for MTC, but there were no differences between ST and MT. Digit span did not account for significant variance in the task. CONCLUSIONS: Overall, children performed better in noise than in noise plus reverberation. However, differing patterns across conditions for the 2 environments suggested that the addition of reverberation may have affected performance in a way that was not apparent in noise alone. Continued research is needed to examine the differing effects of noise and reverberation on children's speech understanding.


Subject(s)
Comprehension , Noise/adverse effects , Speech Perception , Acoustics , Child , Female , Humans , Male , Memory, Short-Term
7.
J Educ Audiol ; 20: 24-33, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-26478719

ABSTRACT

Audiovisual cues can improve speech perception in adverse acoustical environments when compared to auditory cues alone. In classrooms, where acoustics often are less than ideal, the availability of visual cues has the potential to benefit children during learning activities. The current study evaluated the effects of looking behavior on speech understanding of children (8-11 years) and adults during comprehension and sentence repetition tasks in a simulated classroom environment. For the comprehension task, results revealed an effect of looking behavior (looking required versus looking not required) for older children and adults only. Within the looking-behavior conditions, age effects also were evident. There was no effect of looking behavior for the sentence-repetition task (looking versus no looking) but an age effect also was found. The current findings suggest that looking behavior may impact speech understanding differently depending on the task and the age of the listener. In classrooms, these potential differences should be taken into account when designing learning tasks.

8.
Ear Hear ; 33(6): 731-44, 2012.
Article in English | MEDLINE | ID: mdl-22732772

ABSTRACT

OBJECTIVES: The purpose of this study was to determine how combinations of reverberation and noise, typical of environments in many elementary school classrooms, affect normal-hearing school-aged children's speech recognition in stationary and amplitude-modulated noise, and to compare their performance with that of normal-hearing young adults. In addition, the magnitude of release from masking in the modulated noise relative to that in stationary noise was compared across age groups in nonreverberant and reverberant listening conditions. Last, for all noise and reverberation combinations the degree of change in predicted performance at 70% correct was obtained for all age groups using a best-fit cubic polynomial. DESIGN: Bamford-Kowal-Bench sentences and noise were convolved with binaural room impulse responses representing nonreverberant and reverberant environments to create test materials representative of both audiology clinics and school classroom environments. Speech recognition of 48 school-aged children and 12 adults was measured in speech-shaped and amplitude-modulated speech-shaped noise, in the following three virtual listening environments: nonreverberant, reverberant at 2 m, and reverberant at 6 m. RESULTS: Speech recognition decreased in the reverberant conditions and with decreasing age. Release from masking in modulated noise relative to stationary noise decreased with age and was reduced by reverberation. In the nonreverberant condition, participants showed similar amounts of masking release across ages. The slopes of performance-intensity functions increased with age, with the exception of the nonreverberant modulated masker condition. The slopes were steeper in the stationary masker conditions, where they also decreased with reverberation and distance. In the presence of a modulated masker, the slopes did not differ between the two reverberant conditions. 
CONCLUSIONS: The results of this study reveal systematic developmental changes in speech recognition in noisy and reverberant environments for elementary-school-aged children. The overall pattern suggests that younger children require better acoustic conditions to achieve sentence recognition equivalent to their older peers and adults. In addition, this is the first study to report a reduction of masking release in children as a result of reverberation. Results support the importance of minimizing noise and reverberation in classrooms, and highlight the need to incorporate noise and reverberation into audiological speech-recognition testing to improve predictions of performance in the real world.
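The "best-fit cubic polynomial" used to estimate the point of 70% correct on each performance-intensity function can be sketched as below. The SNR and percent-correct values are invented for illustration; the study's actual data and fitting details may differ:

```python
import numpy as np

# Hypothetical performance-intensity data: percent-correct sentence
# recognition at several SNRs for one noise/reverberation condition.
snr = np.array([-6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
pct = np.array([18.0, 35.0, 55.0, 72.0, 86.0, 94.0])

# Fit a cubic polynomial, then solve for the SNR giving 70% correct
# (the criterion level used to compare conditions and age groups).
coefs = np.polyfit(snr, pct, deg=3)
roots = np.roots(coefs - np.array([0.0, 0.0, 0.0, 70.0]))
# Keep the real root inside the measured SNR range
snr_70 = [r.real for r in roots
          if abs(r.imag) < 1e-9 and snr.min() <= r.real <= snr.max()][0]
print(round(snr_70, 1))
```

Repeating this per condition gives the "degree of change in predicted performance at 70% correct" the abstract refers to, as a difference in criterion SNRs.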


Subject(s)
Noise/adverse effects , Perceptual Masking , Speech Perception , Speech Reception Threshold Test , Adolescent , Adult , Age Factors , Child , Female , Humans , Male , Noise/prevention & control , Reference Values , Social Environment , Sound Spectrography , Young Adult
9.
J Acoust Soc Am ; 131(1): 232-46, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22280587

ABSTRACT

The potential effects of acoustical environment on speech understanding are especially important as children enter school, where students' ability to hear and understand complex verbal information is critical to learning. However, this ability is compromised because of widely varied and unfavorable classroom acoustics. The extent to which unfavorable classroom acoustics affect children's performance on longer learning tasks is largely unknown, as most research has focused on testing children using words, syllables, or sentences as stimuli. In the current study, a simulated classroom environment was used to measure comprehension performance on two classroom learning activities: a discussion and a lecture. Comprehension performance was measured for groups of elementary-aged students in one of four environments with varied reverberation times and background noise levels. The reverberation time was either 0.6 or 1.5 s, and the signal-to-noise ratio was either +10 or +7 dB. Performance was compared to that of adult subjects, as well as to sentence recognition in the same conditions. Significant differences were seen in comprehension scores as a function of age and condition; both increasing background noise and increasing reverberation degraded performance on the comprehension tasks, whereas differences in the sentence recognition measures were minimal.
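Test conditions like the +10 and +7 dB SNRs here are typically constructed by scaling the noise relative to the speech before mixing. A minimal sketch, with random synthetic signals standing in for real speech and classroom-noise recordings:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals
    `snr_db`, then return the mixture."""
    ps = np.mean(speech ** 2)   # speech power
    pn = np.mean(noise ** 2)    # noise power before scaling
    gain = np.sqrt(ps / (pn * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(1)
speech = rng.normal(0, 1.0, 48000)   # stand-in for a speech signal
noise = rng.normal(0, 0.5, 48000)    # stand-in for classroom noise
mixed = mix_at_snr(speech, noise, snr_db=10.0)

# Verify the realized SNR of the mixture
realized = 10 * np.log10(np.mean(speech**2) / np.mean((mixed - speech)**2))
print(round(realized, 1))
```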


Subject(s)
Acoustics , Comprehension/physiology , Recognition, Psychology/physiology , Speech Perception/physiology , Adolescent , Adult , Auditory Threshold/physiology , Child , Computer Simulation , Humans , Learning , Middle Aged , Models, Theoretical , Noise , Perceptual Masking/physiology , Schools , Speech Intelligibility/physiology , Young Adult
10.
J Acoust Soc Am ; 127(5): 3177-88, 2010 May.
Article in English | MEDLINE | ID: mdl-21117766

ABSTRACT

In contrast to the availability of consonant confusion studies with adults, to date, no investigators have compared children's consonant confusion patterns in noise to those of adults in a single study. To examine whether children's error patterns are similar to those of adults, three groups of children (24 each at 4-5, 6-7, and 8-9 years old) and 24 adult native speakers of American English (AE) performed a recognition task for 15 AE consonants in /ɑ/-consonant-/ɑ/ nonsense syllables presented in a background of speech-shaped noise. Three signal-to-noise ratios (SNR: 0, +5, and +10 dB) were used. Although performance improved as a function of age, overall consonant recognition accuracy as a function of SNR improved at a similar rate for all groups. Detailed analyses using phonetic features (manner, place, and voicing) revealed that stop consonants were the most problematic for all groups. In addition, for the younger children, front consonants presented in the 0 dB SNR condition were more error prone than others. These results suggested that children's use of phonetic cues does not develop at the same rate for all phonetic features.
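Feature-level analyses of this kind score a confusion matrix both for exact consonant identity and for whether a feature (here, voicing) was preserved even when the consonant was misidentified. The consonant set and counts below are invented toy data, not the study's results:

```python
import numpy as np

# Toy confusion counts (rows = presented, columns = responded) for
# four consonants; the study used 15 consonants at three SNRs.
cons = ["p", "b", "s", "z"]
voicing = {"p": 0, "b": 1, "s": 0, "z": 1}   # 0 = voiceless, 1 = voiced
counts = np.array([
    [14, 4, 1, 1],   # /p/ presented
    [3, 15, 1, 1],   # /b/ presented
    [1, 1, 13, 5],   # /s/ presented
    [1, 1, 4, 14],   # /z/ presented
])

total = counts.sum()
# Overall percent correct: diagonal responses over all responses
pct_correct = np.trace(counts) / total * 100
# Voicing preservation: responses sharing the presented consonant's
# voicing, even when the consonant itself was wrong
voicing_ok = sum(counts[i, j]
                 for i in range(4) for j in range(4)
                 if voicing[cons[i]] == voicing[cons[j]])
print(round(pct_correct, 1), round(voicing_ok / total * 100, 1))
```

A gap between the two percentages indicates that listeners often kept the feature while confusing the segment, which is the kind of pattern the manner/place/voicing analyses summarize.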


Subject(s)
Language , Noise/adverse effects , Perceptual Masking , Phonetics , Recognition, Psychology , Speech Acoustics , Speech Intelligibility , Speech Perception , Acoustic Stimulation , Adult , Age Factors , Audiometry, Speech , Child , Child, Preschool , Cues , Humans
11.
Ear Hear ; 30(6): 635-52, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19701089

ABSTRACT

OBJECTIVE: Universal newborn hearing screening (UNHS) test outcomes can be influenced by conditions affecting the sound conduction pathway, including ear canal and/or middle ear function. The purpose of this study was to evaluate the test performance of wideband (WB) acoustic transfer functions and 1-kHz tympanometry in terms of their ability to predict the status of the sound conduction pathway for ears that passed or were referred in a UNHS program. DESIGN: A distortion-product otoacoustic emission (DPOAE) test was used to determine the UNHS status of 455 infant ears (375 passed and 80 referred). WB and 1-kHz tests were performed immediately after the infant's first DPOAE test (day 1). Of the 80 infants referred on day 1, 67 infants were evaluated again after a second UNHS DPOAE test the next day (day 2). WB data were acquired under ambient and tympanometric (pressurized) ear canal conditions. Clinical decision theory analysis was used to assess the test performance of WB and 1-kHz tests in terms of their ability to classify ears that passed or were referred, using DPOAE UNHS test outcomes as the "gold standard." Specifically, performance was assessed using previously published measurement criteria and a maximum-likelihood procedure for 1-kHz tympanometry and WB measurements, respectively. RESULTS: For measurements from day 1, the highest area under the receiver operating characteristic curve was 0.87 for an ambient WB test predictor. The highest area under the receiver operating characteristic curve among several variables derived from 1-kHz tympanometry was 0.75. In general, ears that passed the DPOAE UNHS test had higher energy absorbance compared with those that were referred, indicating that infants who passed the DPOAE UNHS had a more acoustically efficient conductive pathway. 
CONCLUSIONS: Results showed that (1) WB tests had better performance in classifying UNHS DPOAE outcomes than 1-kHz tympanometry; (2) WB tests provide data to suggest that many UNHS referrals are a consequence of transient conditions affecting the sound conduction pathway; (3) WB data reveal changes in sound conduction during the first 2 days of life; and (4) because WB measurements used in the present study are objective and quick, it may be feasible to consider implementing such measurements in conjunction with UNHS programs.
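The area-under-the-ROC-curve figures reported here (0.87 and 0.75) can be computed directly from the rank-sum (Mann-Whitney U) identity: the AUC equals the probability that a randomly chosen "pass" ear has a higher predictor value than a randomly chosen "refer" ear. A minimal sketch with invented absorbance values, not the study's data:

```python
import numpy as np

def roc_auc(scores_pass, scores_refer):
    """AUC via the rank-sum identity: fraction of (pass, refer) pairs
    in which the 'pass' ear has the higher predictor value (e.g.,
    energy absorbance). Tied pairs count one half."""
    sp = np.asarray(scores_pass, float)
    sr = np.asarray(scores_refer, float)
    greater = (sp[:, None] > sr[None, :]).sum()
    ties = (sp[:, None] == sr[None, :]).sum()
    return (greater + 0.5 * ties) / (len(sp) * len(sr))

# Hypothetical absorbance: 'pass' ears tend to absorb more energy
print(roc_auc([0.7, 0.8, 0.9, 0.6], [0.4, 0.5, 0.7]))  # → 0.875
```

An AUC of 0.5 means the predictor cannot separate the groups; 1.0 means perfect separation.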


Subject(s)
Acoustic Impedance Tests/methods , Hearing Loss, Conductive/congenital , Hearing Loss, Sensorineural/congenital , Neonatal Screening , Otoacoustic Emissions, Spontaneous/physiology , Signal Processing, Computer-Assisted/instrumentation , Area Under Curve , Hearing Loss, Conductive/diagnosis , Hearing Loss, Sensorineural/diagnosis , Humans , Infant, Newborn , Predictive Value of Tests , Reference Values , Referral and Consultation , Software
12.
J Speech Lang Hear Res ; 51(5): 1369-80, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18664693

ABSTRACT

PURPOSE: Recent studies from the authors' laboratory have suggested that reduced audibility in the high frequencies (because of the bandwidth of hearing instruments) may play a role in the delays in phonological development often exhibited by children with hearing impairment. The goal of the current study was to extend previous findings on the effect of bandwidth on fricatives/affricates to more complex stimuli. METHOD: Nine fricatives/affricates embedded in 2-syllable nonsense words were filtered at 5 and 10 kHz and presented to normal-hearing 6- to 7-year-olds who repeated words exactly as heard. Responses were recorded for subsequent phonetic and acoustic analyses. RESULTS: Significant effects of talker gender and bandwidth were found, with better performance for the male talker and the wider bandwidth condition. In contrast to previous studies, relatively small (5%) mean bandwidth effects were observed for /s/ and /z/ spoken by the female talker. Acoustic analyses of stimuli used in the previous and the current studies failed to explain this discrepancy. CONCLUSIONS: It appears likely that a combination of factors (i.e., dynamic cues, prior phonotactic knowledge, and perhaps other unidentified cues to fricative identity) may have facilitated the perception of these complex nonsense words in the current study.


Subject(s)
Hearing Aids , Hearing Loss/complications , Hearing Loss/therapy , Language Development Disorders/etiology , Phonetics , Child , Female , Humans , Language Tests , Male , Pitch Perception , Psychoacoustics , Speech Perception
13.
Percept Psychophys ; 69(7): 1140-51, 2007 Oct.
Article in English | MEDLINE | ID: mdl-18038952

ABSTRACT

Although researchers are currently studying auditory object formation in adults, little is known about the development of this phenomenon in children. Amplitude modulation has been suggested as one of the characteristics of the speech signal that allows auditory grouping. In this experiment, we evaluated children (4 to 13 years of age) and adults to examine whether children's ability to use amplitude modulation (AM) in perception of time-varying sinusoidal (TVS) sentences is different from that of adults, and whether there are developmental changes. We evaluated performance on recognition of TVS sentences (unmodulated, amplitude-comodulated at 25, 50, 100, and 200 Hz, and amplitude-modulated using conflicting frequencies). Overall, the youngest children performed more poorly than did older children and adults. However, difference scores, defined as the percentage of phonemes correct in a given modulation condition minus the percentage correct for the unmodulated condition, showed no significant effects of age. Unlike the findings of previous studies (Carrell & Opie, 1992), these results support the ability of modulation with conflicting frequencies to improve intelligibility. The present study provides evidence that children and adults receive the same benefits (or decrements) from amplitude modulation.


Subject(s)
Speech Intelligibility , Speech Perception , Adolescent , Adult , Child , Child, Preschool , Female , Humans , Male , Pilot Projects , Sound Spectrography
14.
Ear Hear ; 28(4): 483-94, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17609611

ABSTRACT

OBJECTIVE: Previous studies from our laboratory have shown that a restricted stimulus bandwidth can have a negative effect upon the perception of the phonemes /s/ and /z/, which serve multiple linguistic functions in the English language. These findings may have important implications for the development of speech and language in young children with hearing loss because the bandwidth of current hearing aids generally is restricted to 6 to 7 kHz. The primary goal of the current study was to expand our previous work to examine the effects of stimulus bandwidth on a wide range of speech materials, to include a variety of auditory-related tasks, and to include the effects of background noise. DESIGN: Thirty-two children with normal hearing and 24 children with sensorineural hearing loss (7 to 14 yr) participated in this study. To assess the effects of stimulus bandwidth, four different auditory tasks were used: 1) nonsense syllable perception, 2) word recognition, 3) novel-word learning, and 4) listening effort. Auditory stimuli recorded by a female talker were low-pass filtered at 5 and 10 kHz and presented in noise. RESULTS: For the children with normal hearing, significant bandwidth effects were observed for the perception of nonsense syllables and for words but not for novel-word learning or listening effort. In the 10-kHz bandwidth condition, children with hearing loss showed significant improvements for monosyllabic words but not for nonsense syllables, novel-word learning, or listening effort. Further examination, however, revealed marked improvements for the perception of specific phonemes. For example, bandwidth effects for the perception of /s/ and /z/ were not only significant but much greater than that seen in the group with normal hearing. CONCLUSIONS: The current results are consistent with previous studies that have shown that a restricted stimulus bandwidth can negatively affect the perception of /s/ and /z/ spoken by female talkers. 
Given the importance of these phonemes in the English language and the tendency of early caregivers to be female, an inability to perceive these sounds correctly may have a negative impact on both phonological and morphological development.
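The bandwidth manipulation described (low-pass filtering stimuli at 5 vs 10 kHz) can be illustrated with a simple brick-wall FFT filter. This is only a sketch with synthetic tones; the study's stimuli were presumably filtered with proper analog or FIR filters:

```python
import numpy as np

def lowpass_fft(signal, fs, cutoff_hz):
    """Zero out spectral components above `cutoff_hz` (a brick-wall
    low-pass), illustrating how a 5 kHz bandwidth removes the
    high-frequency energy of fricatives such as /s/ and /z/."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(signal))

fs = 32000
t = np.arange(fs) / fs
# A 4 kHz tone (kept) plus an 8 kHz tone (removed by the 5 kHz filter)
x = np.sin(2 * np.pi * 4000 * t) + np.sin(2 * np.pi * 8000 * t)
y = lowpass_fft(x, fs, cutoff_hz=5000)

spec = np.abs(np.fft.rfft(y))
print(spec[4000] > 1000, spec[8000] < 1e-6)
```

Because female talkers' /s/ and /z/ energy peaks near or above 5 kHz, a cutoff at that frequency largely removes the cue, which is consistent with the perception effects the abstract reports.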


Subject(s)
Hearing Disorders/diagnosis , Hearing Loss, Sensorineural/diagnosis , Speech Perception , Adolescent , Child , Child Language , Cochlea/physiopathology , Female , Hearing Disorders/epidemiology , Hearing Loss, Sensorineural/epidemiology , Hearing Loss, Sensorineural/physiopathology , Humans , Male , Memory, Short-Term , Phonetics , Severity of Illness Index , Speech Discrimination Tests
15.
Arch Otolaryngol Head Neck Surg ; 130(5): 556-62, 2004 May.
Article in English | MEDLINE | ID: mdl-15148176

ABSTRACT

OBJECTIVES: To review recent research studies concerning the importance of high-frequency amplification for speech perception in adults and children with hearing loss and to provide preliminary data on the phonological development of normal-hearing and hearing-impaired infants. DESIGN AND SETTING: With the exception of preliminary data from a longitudinal study of phonological development, all of the reviewed studies were taken from the archival literature. To determine the course of phonological development in the first 4 years of life, the following 3 groups of children were recruited: 20 normal-hearing children, 12 hearing-impaired children identified and aided up to 12 months of age (early-ID group), and 4 hearing-impaired children identified after 12 months of age (late-ID group). Children were videotaped in 30-minute sessions at 6- to 8-week intervals from 4 to 36 months of age (or shortly after identification of hearing loss) and at 2- and 6-month intervals thereafter. Broad transcription of child vocalizations, babble, and words was conducted using the International Phonetic Alphabet. A phoneme was judged acquired if it was produced 3 times in a 30-minute session. SUBJECTS: Preliminary data are presented from the 20 normal-hearing children, 3 children from the early-ID group, and 2 children from the late-ID group. RESULTS: Compared with the normal-hearing group, the 3 children from the early-ID group showed marked delays in the acquisition of all phonemes. The delay was shortest for vowels and longest for fricatives. Delays for the 2 children from the late-ID group were substantially longer. 
CONCLUSIONS: The reviewed studies and preliminary results from our longitudinal study suggest that (1) hearing-aid studies with adult subjects should not be used to predict speech and language performance in infants and young children; (2) the bandwidth of current behind-the-ear hearing aids is inadequate to accurately represent the high-frequency sounds of speech, particularly for female speakers; and (3) preliminary data on phonological development in infants with hearing loss suggest that the greatest delays occur for fricatives, consistent with predictions based on hearing-aid bandwidth.


Subject(s)
Language Development , Persons With Hearing Impairments , Speech Perception , Acoustics , Adult , Age of Onset , Child, Preschool , Disabled Children , Female , Hearing Aids , Humans , Infant , Language Development Disorders/etiology , Male , Radio Waves
16.
Ear Hear ; 25(1): 47-56, 2004 Feb.
Article in English | MEDLINE | ID: mdl-14770017

ABSTRACT

OBJECTIVE: The goal of this study was to assess performance on a novel-word learning task by normal-hearing and hearing-impaired children for words varying in form (noun versus verb), stimulus level (50 versus 60 dB SPL), and number of repetitions (4 versus 6). It was hypothesized that novel-word learning would be significantly poorer in the subjects with hearing loss, would increase with both level and repetition, and would be better for nouns than verbs. DESIGN: Twenty normal-hearing and 11 hearing-impaired children (6 to 9 yr old) participated in this study. Each child viewed a 4-minute animated slide show containing 8 novel words. The effects of hearing status, word form, repetition, and stimulus level were examined systematically. The influence of audibility, word recognition, chronological age, and lexical development also were evaluated. After hearing the story twice, children were asked to identify each word from a set of four pictures. RESULTS: Overall performance was 60% for the normal-hearing children and 41% for the children with hearing loss. Significant predictors of performance were Peabody Picture Vocabulary Test (PPVT) raw scores, hearing status, stimulus level, and repetitions. The variables age, audibility, word recognition scores, and word form were not significant predictors. CONCLUSIONS: Results suggest that a child's ability to learn new words can be predicted from vocabulary size, stimulus level, number of exposures, and hearing status. Further, the sensitivity to presentation level observed in this novel-word learning task suggests that this type of paradigm may be an effective tool for studying various forms of hearing aid signal processing algorithms.


Subject(s)
Hearing Loss, Sensorineural/physiopathology , Learning , Persons With Hearing Impairments , Vocabulary , Acoustic Stimulation , Age Factors , Auditory Threshold , Child , Female , Hearing Aids , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/therapy , Hearing Loss, Sensorineural/therapy , Humans , Male , Task Performance and Analysis
17.
J Speech Lang Hear Res ; 46(3): 649-57, 2003 Jun.
Article in English | MEDLINE | ID: mdl-14696992

ABSTRACT

This study examined the long- and short-term spectral characteristics of speech simultaneously recorded at the ear and at a reference microphone position (30 cm at 0 degrees azimuth). Twenty adults and 26 children (2-4 years of age) with normal hearing were asked to produce 9 short sentences in a quiet environment. Long-term average speech spectra (LTASS) were calculated for the concatenated sentences, and short-term spectra were calculated for selected phonemes within the sentences (/m/, /n/, /s/, /ʃ/, /f/, /a/, /u/, and /i/). Relative to the reference microphone position, the LTASS at the ear showed higher amplitudes for frequencies below 1 kHz and lower amplitudes for frequencies above 2 kHz for both groups. At both microphone positions, the short-term spectra of the children's phonemes revealed reduced amplitudes for /s/ and /ʃ/ and for vowel energy above 2 kHz relative to the adults' phonemes. The results of this study suggest that, for listeners with hearing loss (a) the talker's own voice through a hearing instrument would contain lower overall energy at frequencies above 2 kHz relative to speech originating in front of the talker, (b) a child's own speech would contain even lower energy above 2 kHz because of adult-child differences in overall amplitude, and (c) frequency regions important to normal speech development (e.g., high-frequency energy in the phonemes /s/ and /ʃ/) may not be amplified sufficiently by many hearing instruments.
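An LTASS of the kind described above is, in essence, the average of many short-time power spectra taken over the concatenated speech. The sketch below is a generic illustration of that averaging step only, assuming Hann-windowed overlapping frames; it is not the authors' exact measurement procedure, and the function name and parameters are hypothetical.

```python
import numpy as np

def ltass_db(signal, fs, frame_len=1024, hop=512):
    """Sketch of a long-term average speech spectrum (LTASS):
    average the power spectra of overlapping Hann-windowed frames
    and express the result in dB."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    psd = np.zeros(frame_len // 2 + 1)
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * window
        psd += np.abs(np.fft.rfft(frame)) ** 2
    psd /= n_frames
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    return freqs, 10 * np.log10(psd + 1e-12)

# Illustrative use with a synthetic signal (not real speech):
fs = 16000
t = np.arange(fs * 2) / fs
sig = np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.randn(len(t))
freqs, spec = ltass_db(sig, fs)
peak = freqs[np.argmax(spec)]  # strongest band lies near 500 Hz
```

Comparing two such spectra (ear versus reference microphone) band by band would yield the kind of level differences below 1 kHz and above 2 kHz that the study reports.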


Subject(s)
Hearing Aids , Hearing Loss/rehabilitation , Speech Acoustics , Adult , Child, Preschool , Female , Humans , Male , Middle Aged , Sound Spectrography , Speech Production Measurement
18.
Ear Hear ; 23(4): 316-24, 2002 Aug.
Article in English | MEDLINE | ID: mdl-12195174

ABSTRACT

OBJECTIVE: The overall goal of this study was to determine the accuracy with which hearing-impaired children can detect the inflectional morphemes /s/ and /z/ when listening to speech through hearing aids. DESIGN: In the first part of the study a perceptual test was developed with equal numbers of singular and plural nouns spoken by both a male and female talker. Thirty-six normal-hearing children (3 to 5 yr) were tested to determine the age at which children could perform this test without difficulty. In the second part of the study, 40 children with bilateral sensorineural hearing losses (5 to 13 yr) were tested while wearing personal hearing aids. Stimuli were presented in the sound field at 65 dB SPL. RESULTS: For the normal-hearing children, mean performance increased and inter-subject variability decreased through age 5 yr 3 mo when performance reached ≥90% for all children. No significant talker or form (plural versus singular) effects were noted for this group. For the hearing-impaired children, performance varied considerably across all ages. For these subjects, significant effects of talker and form were observed. Specifically, plural test items spoken by the female talker showed the highest error rate. CONCLUSIONS: In general, mid-frequency audibility (2 to 4 kHz) appeared to be most important for perception of the fricative noise for the male talker while a somewhat wider frequency range (2 to 8 kHz) was important for the female talker.


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Speech Perception , Audiometry, Pure-Tone , Auditory Threshold , Child , Child, Preschool , Female , Hearing Loss, Bilateral/diagnosis , Hearing Loss, Bilateral/rehabilitation , Hearing Loss, Sensorineural/diagnosis , Humans , Male , Phonetics , Pilot Projects
19.
J Speech Lang Hear Res ; 45(6): 1276-84, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12546493

ABSTRACT

To accommodate growing vocabularies, young children are thought to modify their perceptual weights as they gain experience with speech and language. The purpose of the present study was to determine whether the perceptual weights of children and adults with hearing loss differ from those of their normal-hearing counterparts. Adults and children with normal hearing and with hearing loss served as participants. Fricative and vowel segments within consonant-vowel-consonant stimuli were presented at randomly selected levels under two conditions: unaltered and with the formant transition removed. Overall performance for each group was calculated as a function of segment level. Perceptual weights were also calculated for each group using point-biserial correlation coefficients that relate the level of each segment to performance. Results revealed child-adult differences in overall performance and also revealed an effect of hearing loss. Despite these performance differences, the pattern of perceptual weights was similar across all four groups for most conditions.
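The point-biserial correlation used above relates a continuous predictor (segment presentation level) to a binary outcome (correct/incorrect response); it is mathematically equivalent to the Pearson correlation with the outcome coded 0/1. The sketch below illustrates the computation on hypothetical trial data; the variable names and the simulated trials are illustrative, not the study's data.

```python
import numpy as np

def point_biserial(levels, correct):
    """Point-biserial correlation between segment level (dB) and a
    binary response (1 = correct, 0 = incorrect), computed from the
    group means, the population SD, and the proportion correct."""
    levels = np.asarray(levels, dtype=float)
    correct = np.asarray(correct, dtype=float)
    m1 = levels[correct == 1].mean()   # mean level on correct trials
    m0 = levels[correct == 0].mean()   # mean level on incorrect trials
    s = levels.std()                   # population standard deviation
    p = correct.mean()                 # proportion correct
    return (m1 - m0) / s * np.sqrt(p * (1 - p))

# Hypothetical trials: a segment whose level strongly drives accuracy
# should receive a large positive weight.
rng = np.random.default_rng(0)
levels = rng.uniform(40, 70, 200)                       # randomized levels
correct = (levels + rng.normal(0, 5, 200) > 55).astype(int)
w_fricative = point_biserial(levels, correct)           # positive weight
```

Computing one such coefficient per segment (fricative, vowel) and comparing their relative magnitudes gives the perceptual-weight pattern that the study compares across listener groups.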


Subject(s)
Hearing Loss, Sensorineural/diagnosis , Speech Perception/physiology , Verbal Behavior , Adult , Child , Child, Preschool , Female , Humans , Male , Phonetics , Severity of Illness Index , Sound Spectrography , Time Factors