Results 1 - 12 of 12
1.
Ear Hear ; 39(4): 783-794, 2018.
Article in English | MEDLINE | ID: mdl-29252979

ABSTRACT

OBJECTIVES: Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. DESIGN: Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. RESULTS: Behavioral task performance was higher for children with NH than for either group of children with hearing loss. 
There were no differences in performance between children with UHL and children with MBHL. Eye-tracker analysis revealed that children with NH looked more at the screens overall than did children with MBHL or UHL, though individual differences were greater in the groups with hearing loss. Listeners in all groups spent a small proportion of time looking at relevant screens as talkers spoke. Although looking was distributed across all screens, there was a bias toward the right side of the display. There was no relationship between overall looking behavior and performance on the task. CONCLUSIONS: The present study examined the processing of audiovisual speech in the context of a naturalistic task. Results demonstrated that children distributed their looking to a variety of sources during the task, but that children with NH were more likely to look at screens than were those with MBHL/UHL. However, all groups looked at the relevant talkers as they were speaking only a small proportion of the time. Despite variability in looking behavior, listeners were able to follow the audiovisual instructions and children with NH demonstrated better performance than children with MBHL/UHL. These results suggest that performance on some challenging multi-talker audiovisual tasks is not dependent on visual fixation to relevant talkers for children with NH or with MBHL/UHL.


Subject(s)
Fixation, Ocular , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Unilateral/physiopathology , Speech Perception , Visual Perception , Case-Control Studies , Child , Child Behavior , Female , Humans , Male , Severity of Illness Index , Task Performance and Analysis
2.
Ear Hear ; 36(1): 136-44, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25170780

ABSTRACT

OBJECTIVES: While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. DESIGN: Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic headtracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each presented audio-only by a single talker either from the loudspeaker at 0 degree azimuth or randomly from the five loudspeakers. RESULTS: Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL. 
CONCLUSIONS: The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.


Subject(s)
Child Behavior , Hearing Loss/physiopathology , Noise , Schools , Speech Perception/physiology , Acoustics , Audiometry, Pure-Tone , Auditory Threshold , Case-Control Studies , Child , Humans , Severity of Illness Index , Sound Localization/physiology
3.
J Acoust Soc Am ; 136(2): 728-35, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25096107

ABSTRACT

Subjects with normal hearing (NH) and with sensorineural hearing loss (SNHL) judged the overall loudness of six-tone complexes composed of octave frequencies from 0.25 to 8 kHz. The level of each tone was selected from a normal distribution with a standard deviation of 5 dB, and subjects judged which of two complexes was louder. Overall level varied across conditions. In the "loudness" task, there was no difference in mean level across the two stimuli. In the "sample discrimination" task, the two complexes differed by an average of 5 dB. For both tasks, perceptual weights were derived by correlating the differences in level between matched-frequency tones in the complexes with the loudness decision on each trial. Weights obtained in the two tasks showed similar shifts from low- to high-frequency components with increasing overall level. Simulation of these experiments using a model of loudness perception [Moore and Glasberg (2004), Hear Res. 188, 70-88] yielded predicted weights for these stimuli that were highly correlated with predicted specific loudness, but not with the observed weights.
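The trial-by-trial correlational analysis described in this abstract is a standard way to derive perceptual weights. A minimal sketch of the idea in Python (the function, the simulated observer, and all parameter values are illustrative assumptions, not details taken from the study):

```python
import numpy as np

def perceptual_weights(level_diffs, decisions):
    """Correlate the per-frequency level differences between the two
    complexes with the binary loudness decision on each trial; the
    resulting correlations serve as perceptual weights."""
    n_freqs = level_diffs.shape[1]
    return np.array([np.corrcoef(level_diffs[:, f], decisions)[0, 1]
                     for f in range(n_freqs)])

# Simulated observer who weights the lowest component most heavily.
rng = np.random.default_rng(0)
n_trials, n_freqs = 2000, 6
# Difference between two independent draws from N(0, 5 dB) per component.
diffs = rng.normal(0.0, 5.0 * np.sqrt(2.0), size=(n_trials, n_freqs))
internal_weights = np.array([3.0, 1.0, 1.0, 1.0, 1.0, 1.0])
decisions = (diffs @ internal_weights > 0).astype(float)

weights = perceptual_weights(diffs, decisions)
```

With enough trials, the derived weight for the heavily weighted component dominates, mirroring how observed weights reveal which frequencies drive the loudness judgment.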


Subject(s)
Hearing Loss, Sensorineural/psychology , Judgment , Loudness Perception , Persons With Hearing Impairments/psychology , Acoustic Stimulation , Adult , Audiometry , Auditory Threshold , Case-Control Studies , Discrimination, Psychological , Humans , Young Adult
4.
Am J Audiol ; 23(3): 326-36, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25036922

ABSTRACT

PURPOSE: This study examined children's ability to follow audio-visual instructions presented in noise and reverberation. METHOD: Children (8-12 years of age) with normal hearing followed instructions in noise or noise plus reverberation. Performance was compared for a single talker (ST), multiple talkers speaking one at a time (MT), and multiple talkers with competing comments from other talkers (MTC). Working memory was assessed using measures of digit span. RESULTS: Performance was better for children in noise than for those in noise plus reverberation. In noise, performance for ST was better than for either MT or MTC, and performance for MT was better than for MTC. In noise plus reverberation, performance for ST and MT was better than for MTC, but there were no differences between ST and MT. Digit span did not account for significant variance in the task. CONCLUSIONS: Overall, children performed better in noise than in noise plus reverberation. However, differing patterns across conditions for the 2 environments suggested that the addition of reverberation may have affected performance in a way that was not apparent in noise alone. Continued research is needed to examine the differing effects of noise and reverberation on children's speech understanding.


Subject(s)
Comprehension , Noise/adverse effects , Speech Perception , Acoustics , Child , Female , Humans , Male , Memory, Short-Term
5.
Ear Hear ; 33(6): 731-44, 2012.
Article in English | MEDLINE | ID: mdl-22732772

ABSTRACT

OBJECTIVES: The purpose of this study was to determine how combinations of reverberation and noise, typical of environments in many elementary school classrooms, affect normal-hearing school-aged children's speech recognition in stationary and amplitude-modulated noise, and to compare their performance with that of normal-hearing young adults. In addition, the magnitude of release from masking in the modulated noise relative to that in stationary noise was compared across age groups in nonreverberant and reverberant listening conditions. Last, for all noise and reverberation combinations the degree of change in predicted performance at 70% correct was obtained for all age groups using a best-fit cubic polynomial. DESIGN: Bamford-Kowal-Bench sentences and noise were convolved with binaural room impulse responses representing nonreverberant and reverberant environments to create test materials representative of both audiology clinics and school classroom environments. Speech recognition of 48 school-aged children and 12 adults was measured in speech-shaped and amplitude-modulated speech-shaped noise, in the following three virtual listening environments: nonreverberant, reverberant at 2 m, and reverberant at 6 m. RESULTS: Speech recognition decreased in the reverberant conditions and with decreasing age. Release from masking in modulated noise relative to stationary noise decreased with age and was reduced by reverberation. In the nonreverberant condition, participants showed similar amounts of masking release across ages. The slopes of performance-intensity functions increased with age, with the exception of the nonreverberant modulated masker condition. The slopes were steeper in the stationary masker conditions, where they also decreased with reverberation and distance. In the presence of a modulated masker, the slopes did not differ between the two reverberant conditions. 
CONCLUSIONS: The results of this study reveal systematic developmental changes in speech recognition in noisy and reverberant environments for elementary-school-aged children. The overall pattern suggests that younger children require better acoustic conditions to achieve sentence recognition equivalent to their older peers and adults. In addition, this is the first study to report a reduction of masking release in children as a result of reverberation. Results support the importance of minimizing noise and reverberation in classrooms, and highlight the need to incorporate noise and reverberation into audiological speech-recognition testing to improve predictions of performance in the real world.
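The best-fit cubic analysis mentioned in the abstract above (finding the point of 70% correct on a performance-intensity function) can be sketched as follows; the helper name and the synthetic data are illustrative assumptions, not the study's actual materials:

```python
import numpy as np

def snr_at_target(snrs_db, percent_correct, target=70.0):
    """Fit a cubic polynomial to performance-intensity data and return
    the SNR (within the measured range) at `target` percent correct."""
    coeffs = np.polyfit(snrs_db, percent_correct, 3)
    coeffs = coeffs.copy()
    coeffs[-1] -= target                  # solve p(snr) = target
    roots = np.roots(coeffs)
    real = roots[np.isclose(roots.imag, 0.0)].real
    in_range = real[(real >= min(snrs_db)) & (real <= max(snrs_db))]
    return float(in_range[0])

# Illustrative performance-intensity data following a known cubic.
snrs = np.arange(-4.0, 5.0)               # -4 ... +4 dB SNR
pc = 0.1 * snrs**3 + 5.0 * snrs + 50.0    # percent correct
snr70 = snr_at_target(snrs, pc)           # about 3.3 dB
```

Comparing this 70%-correct point across noise and reverberation conditions quantifies the "degree of change in predicted performance" the abstract describes.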


Subject(s)
Noise/adverse effects , Perceptual Masking , Speech Perception , Speech Reception Threshold Test , Adolescent , Adult , Age Factors , Child , Female , Humans , Male , Noise/prevention & control , Reference Values , Social Environment , Sound Spectrography , Young Adult
6.
J Speech Lang Hear Res ; 55(5): 1373-86, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22411283

ABSTRACT

PURPOSE: In this study, the authors evaluated the effect of remote system and acoustic environment on speech perception via telehealth with cochlear implant recipients. METHOD: Speech perception was measured in quiet and in noise. Systems evaluated were Polycom Visual Concert (PVC) and a hybrid presentation system (HPS). Each system was evaluated in a sound-treated booth and in a quiet office. RESULTS: For speech in quiet, there was a significant effect of environment, with better performance in the sound-treated booth than in the office; there was no effect of system (PVC or HPS). Speech in noise revealed a significant interaction between environment and system. Subjects' performance was poorer for PVC in the office, whereas performance in the sound-treated booth was not significantly different for the two systems. Results from the current study were compared to results for the same group of subjects from an earlier study; these results suggested that poorer performance at remote sites in the previous study was primarily due to environment, not system. CONCLUSIONS: Speech perception was best when evaluated in a sound-treated booth. HPS was superior for speech in noise in a reverberant environment. Future research should focus on modifications to non-sound-treated environments for telehealth service delivery in rural areas.


Subject(s)
Cochlear Implantation/rehabilitation , Speech Acoustics , Speech Perception , Speech Reception Threshold Test/methods , Telemedicine/methods , Adolescent , Adult , Aged , Aged, 80 and over , Child , Environment , Female , Humans , Male , Middle Aged , Noise , Rural Health Services , Speech Reception Threshold Test/instrumentation , Telemedicine/instrumentation , Videoconferencing/instrumentation
7.
J Speech Lang Hear Res ; 55(4): 1112-27, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22232388

ABSTRACT

PURPOSE: The goal of this study was to compare clinical and research-based cochlear implant (CI) measures using telehealth versus traditional methods. METHOD: This prospective study used an ABA design (A = laboratory, B = remote site). All measures were made twice per visit for the purpose of assessing within-session variability. Twenty-nine adult and pediatric CI recipients participated. Measures included electrode impedance, electrically evoked compound action potential thresholds, psychophysical thresholds using an adaptive procedure, map thresholds and upper comfort levels, and speech perception. Subjects completed a questionnaire at the end of the study. RESULTS: Results for all electrode-specific measures revealed no statistically significant differences between traditional and remote conditions. Speech perception was significantly poorer in the remote condition, which was likely due to the lack of a sound booth. In general, subjects indicated that they would take advantage of telehealth options at least some of the time, if such options were available. CONCLUSIONS: Results from this study demonstrate that telehealth is a viable option for research and clinical measures. Additional studies are needed to investigate ways to improve speech perception at remote locations that lack sound booths and to validate the use of telehealth for pediatric services (e.g., play audiometry), sound-field threshold testing, and troubleshooting equipment.


Subject(s)
Audiology/methods , Audiology/standards , Cochlear Implantation/rehabilitation , Telemedicine/methods , Telemedicine/standards , Adolescent , Adult , Aged , Aged, 80 and over , Audiology/organization & administration , Auditory Threshold , Child , Evoked Potentials, Auditory , Female , Humans , Male , Middle Aged , Nebraska , Program Evaluation , Prospective Studies , Psychoacoustics , Speech Perception , Surveys and Questionnaires/standards , Telemedicine/organization & administration , Young Adult
8.
J Acoust Soc Am ; 131(1): 205-17, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22280585

ABSTRACT

Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audiovisual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene.


Subject(s)
Auditory Perception/physiology , Cues , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Analysis of Variance , Computer Simulation , Humans , Photic Stimulation/methods , Sound Localization/physiology , Space Perception/physiology , Young Adult
9.
J Acoust Soc Am ; 131(1): 232-46, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22280587

ABSTRACT

The potential effects of acoustical environment on speech understanding are especially important as children enter school, where students' ability to hear and understand complex verbal information is critical to learning. However, this ability is compromised because of widely varied and unfavorable classroom acoustics. The extent to which unfavorable classroom acoustics affect children's performance on longer learning tasks is largely unknown, as most research has focused on testing children using words, syllables, or sentences as stimuli. In the current study, a simulated classroom environment was used to measure comprehension performance in two classroom learning activities: a discussion and a lecture. Comprehension performance was measured for groups of elementary-aged students in one of four environments with varied reverberation times and background noise levels. The reverberation time was either 0.6 or 1.5 s, and the signal-to-noise ratio was either +10 or +7 dB. Performance was compared to that of adult subjects as well as to sentence recognition in the same conditions. Significant differences were seen in comprehension scores as a function of age and condition; both increasing background noise and reverberation degraded performance in comprehension tasks, compared to minimal differences in measures of sentence recognition.


Subject(s)
Acoustics , Comprehension/physiology , Recognition, Psychology/physiology , Speech Perception/physiology , Adolescent , Adult , Auditory Threshold/physiology , Child , Computer Simulation , Humans , Learning , Middle Aged , Models, Theoretical , Noise , Perceptual Masking/physiology , Schools , Speech Intelligibility/physiology , Young Adult
10.
J Acoust Soc Am ; 130(1): EL32-7, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21786865

ABSTRACT

Temporal integration of loudness of 1 kHz tones with 5 and 200 ms durations was assessed in four subjects using two loudness measurement procedures: categorical loudness scaling (CLS) and loudness matching. CLS provides a reliable and efficient procedure for collecting data on the temporal integration of loudness, and previously reported nonmonotonic behavior observed at mid sound pressure levels was replicated with this procedure. Stimuli that are assigned to the same category are effectively matched in loudness, allowing the measurement of temporal integration with CLS without curve-fitting, interpolation, or assumptions concerning the form of the loudness growth function.


Subject(s)
Loudness Perception , Acoustic Stimulation , Adult , Audiometry , Auditory Threshold , Female , Humans , Male , Psychoacoustics , Time Factors
11.
J Acoust Soc Am ; 129(4): 2095-103, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21476665

ABSTRACT

The detection of a brief increment in the intensity of a longer duration pedestal is commonly used as a measure of intensity resolution. Increment detection is known to improve with increasing duration of the increment and also with increasing duration of the pedestal, but the relative effects of these two parameters have not been explored in the same study. In several past studies of the effects of increment duration, pedestal duration was increased as increment duration increased. In the present study, increment and pedestal duration were independently manipulated. Increment-detection thresholds were determined for four subjects with normal hearing using a 500- or 4000-Hz pedestal presented at 60 dB sound pressure level (SPL). Increment durations were 10, 20, 40, 80, 160, and 320 ms. Pedestal durations were 20, 40, 80, 160, and 320 ms. Each increment duration was combined with all pedestals of equal or greater duration. Multiple-regression analyses indicate that increment detection under these conditions is determined primarily by pedestal duration. Follow-up experiments ruled out effects of off-frequency listening or overshoot. The results suggest that effects of increment duration have been confounded by effects of pedestal duration in studies that co-varied increment and pedestal duration. Implications for models of temporal integration are discussed.
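Detection thresholds like those above are typically estimated with an adaptive staircase. The abstract does not specify the procedure used, but a common choice for this kind of measurement is the 2-down 1-up rule (Levitt, 1971), sketched here with illustrative names and parameter values:

```python
def two_down_one_up(respond, start_db, step_db=2.0, n_reversals=8):
    """2-down 1-up staircase: lower the level after two consecutive
    correct responses, raise it after any error; converges on the
    70.7%-correct point. Returns the mean of the last six reversals."""
    level = start_db
    consecutive_correct = 0
    direction = None
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):                 # one trial at this level
            consecutive_correct += 1
            if consecutive_correct == 2:
                consecutive_correct = 0
                if direction == "up":      # down-turn after going up
                    reversals.append(level)
                direction = "down"
                level -= step_db
        else:
            consecutive_correct = 0
            if direction == "down":        # up-turn after going down
                reversals.append(level)
            direction = "up"
            level += step_db
    return sum(reversals[-6:]) / 6.0

# Deterministic toy listener: correct whenever the increment is
# at or above a 50 dB "true" threshold.
estimate = two_down_one_up(lambda level: level >= 50.0, start_db=60.0)
```

With this deterministic listener the track steps down to the 50 dB boundary and then oscillates around it, so the reversal average lands between the two bracketing levels.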


Subject(s)
Acoustic Stimulation/methods , Auditory Threshold/physiology , Hearing/physiology , Models, Neurological , Psychoacoustics , Adaptation, Physiological/physiology , Adult , Female , Humans , Male , Pitch Perception/physiology , Young Adult
12.
J Acoust Soc Am ; 128(4): 1952-64, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968367

ABSTRACT

Although there have been numerous studies investigating subjective spatial impression in rooms, only a few of those studies have addressed the influence of visual cues on the judgment of auditory measures. In the psychophysical study presented here, video footage of five solo music/speech performers was shown for four different listening positions within a general-purpose space. The videos were presented in addition to the acoustic signals, which were auralized using binaural room impulse responses (BRIR) that were recorded in the same general-purpose space. The participants were asked to adjust the direct-to-reverberant energy ratio (D/R ratio) of the BRIR according to their expectation considering the visual cues. They were also directed to rate the apparent source width (ASW) and listener envelopment (LEV) for each condition. Visual cues generated by changing the sound-source position in the general-purpose space, as well as the makeup of the sound stimuli, affected the judgment of spatial impression. Participants also scaled the direct-to-reverberant energy ratio with greater direct sound energy than was measured in the acoustical environment.
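The D/R ratio adjusted by the participants above is commonly computed by splitting the impulse response a few milliseconds after the direct-sound peak. A sketch under that assumption (the split time, function name, and toy impulse response are illustrative, not the study's method):

```python
import numpy as np

def direct_to_reverberant_db(ir, fs, split_ms=2.5):
    """D/R ratio in dB: energy up to `split_ms` after the direct-sound
    peak vs. all later (reverberant) energy."""
    peak = int(np.argmax(np.abs(ir)))
    split = peak + int(round(split_ms * 1e-3 * fs))
    direct_energy = float(np.sum(ir[:split] ** 2))
    reverberant_energy = float(np.sum(ir[split:] ** 2))
    return 10.0 * np.log10(direct_energy / reverberant_energy)

# Toy impulse response: a unit direct spike plus a decaying noise tail.
fs = 48000
rng = np.random.default_rng(1)
tail = rng.normal(0.0, 0.05, fs // 2) * np.exp(-np.arange(fs // 2) / 4000.0)
ir = np.concatenate(([1.0], tail))

drr = direct_to_reverberant_db(ir, fs)
# Boosting the direct sound raises the D/R ratio, which is the
# adjustment participants made when scaling toward more direct energy.
ir_boosted = ir.copy()
ir_boosted[0] *= 2.0
drr_boosted = direct_to_reverberant_db(ir_boosted, fs)
```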


Subject(s)
Acoustics , Auditory Pathways/physiology , Auditory Perception , Cues , Facility Design and Construction , Sound Localization , Space Perception , Visual Perception , Acoustic Stimulation , Adult , Female , Humans , Male , Middle Aged , Photic Stimulation , Psychoacoustics , Time Factors , Vibration , Video Recording , Young Adult