1.
J Acoust Soc Am ; 150(5): 3263, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34852617

ABSTRACT

Understanding speech in noisy environments, such as classrooms, is a challenge for children. When the target and masker are spatially separated, compared to when they are co-located, children show improved intelligibility of the target speech. This improvement is known as spatial release from masking (SRM). In most reverberant environments, the binaural cues associated with spatial separation are distorted; the extent to which this distortion affects children's SRM is unknown. Two virtual acoustic environments with reverberation times between 0.4 s and 1.1 s were compared. SRM was measured using a spatial separation with symmetrically displaced maskers to maximize access to binaural cues. The role of informational masking in modulating SRM was investigated through voice similarity between the target and masker. Results showed that, contrary to previous developmental findings on free-field SRM, children's SRM in reverberation has not yet reached maturity in the 7-12 years age range. Reducing reverberation improved SRM in adults but not in children. Our findings suggest that, even though school-age children have access to binaural cues that are distorted in reverberation, they demonstrate immature use of such cues for speech-in-noise perception, even in mild reverberation.
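SRM as defined above is simply the difference between the speech reception threshold (SRT) in the co-located condition and the SRT in the separated condition. A minimal sketch, with illustrative numbers (not values from the study):

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """Spatial release from masking (SRM) in dB: the drop in speech
    reception threshold (SRT) when the masker is moved away from the
    target. Lower SRTs mean better intelligibility, so a positive SRM
    reflects a spatial benefit."""
    return srt_colocated_db - srt_separated_db

# Illustrative SRT values only (not data from the study):
srm = spatial_release_from_masking(srt_colocated_db=-2.0, srt_separated_db=-8.0)
print(srm)  # 6.0 dB of spatial release
```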


Subject(s)
Perceptual Masking , Speech Perception , Acoustics , Adult , Child , Humans , Noise/adverse effects , Schools , Speech Intelligibility
2.
Children (Basel) ; 7(11)2020 Nov 07.
Article in English | MEDLINE | ID: mdl-33171753

ABSTRACT

The integration of virtual acoustic environments (VAEs) with functional near-infrared spectroscopy (fNIRS) offers novel avenues to investigate behavioral and neural processes of speech-in-noise (SIN) comprehension in complex auditory scenes. Particularly in children with hearing aids (HAs), the combined application might offer new insights into the neural mechanisms of SIN perception in simulated real-life acoustic scenarios. Here, we present the first pilot data from six children with normal hearing (NH) and three children with bilateral HAs to explore the applicability of this novel approach. Children with NH received a speech recognition benefit from low room reverberation and from spatial separation of target and distractors, particularly when the pitch of the target and the distractors was similar. On the neural level, the left inferior frontal gyrus appeared to support SIN comprehension during effortful listening. Children with HAs showed decreased SIN perception across conditions. The VAE-fNIRS approach is critically compared to traditional SIN assessments. Although the current study shows that feasibility still needs to be improved, the combined application potentially offers a promising tool to investigate novel research questions in simulated real-life listening. Future, modified VAE-fNIRS applications are warranted to replicate the current findings and to validate the approach in research and clinical settings.

4.
J Speech Lang Hear Res ; 62(10): 3741-3751, 2019 Oct 25.
Article in English | MEDLINE | ID: mdl-31619115

ABSTRACT

Purpose: Working memory capacity and language ability modulate speech reception; however, the respective roles of peripheral and cognitive processing are unclear. The contribution of individual differences in these abilities to the use of spatial cues when separating speech from informational and energetic masking backgrounds in children has not yet been determined. This study therefore explored whether speech reception in children is modulated by environmental factors, such as the type of background noise and the spatial configuration of target and noise sources, and by individual differences in listeners' cognitive and linguistic abilities.
Method: Speech reception thresholds were assessed in 39 children aged 5-7 years in simulated school listening environments. Thresholds for target sentences spoken by an adult male, consisting of number and color combinations, were measured using an adaptive procedure, with speech-shaped white noise and single-talker backgrounds that were either collocated (target and background at 0°) or spatially separated (target at 0°, background noise at 90° to the right). Spatial release from masking was assessed alongside memory span and expressive language.
Results and Conclusion: Significant main effects showed that speech reception thresholds were highest for informational maskers and collocated conditions. Significant interactions indicated that individual differences in memory span and language ability were related to spatial release from masking advantages. Specifically, individual differences in memory span and language were related to the use of spatial cues in separated conditions. Language differences were related to auditory stream segregation in collocated conditions, which lack helpful spatial cues, pointing to the use of language processes to compensate for the missing spatial information.
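The "adaptive procedure" used to estimate SRTs is typically a staircase that raises the signal-to-noise ratio after an incorrect response and lowers it after a correct one; a 1-down/1-up rule converges on the SNR yielding roughly 50% intelligibility. A minimal sketch of such a track (the `trial_correct` callback and all parameter values are hypothetical, not the procedure specified in the study):

```python
def one_down_one_up_srt(trial_correct, start_snr_db=0.0, step_db=2.0, n_trials=30):
    """Simple 1-down/1-up adaptive staircase converging on ~50% intelligibility.
    `trial_correct(snr_db)` runs one trial and returns True if the listener
    repeated the sentence correctly (a hypothetical interface)."""
    snr = start_snr_db
    reversals = []
    last_dir = None
    for _ in range(n_trials):
        # Correct response -> harder (lower SNR); incorrect -> easier (higher SNR)
        direction = -1 if trial_correct(snr) else +1
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)  # the track changed direction here
        last_dir = direction
        snr += direction * step_db
    # Estimate the SRT as the mean SNR at the last few reversals
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail) if tail else snr
```

With a deterministic listener who is correct whenever the SNR is at or above some fixed level, the track oscillates around that level and the reversal average lands between the last correct and incorrect SNRs.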


Subject(s)
Individuality , Memory, Short-Term/physiology , Perceptual Masking/physiology , Spatial Processing/physiology , Speech Perception/physiology , Acoustic Stimulation , Auditory Threshold , Child , Child Language , Child, Preschool , Cues , Female , Humans , Linguistics , Male , Noise , South Africa , Speech Reception Threshold Test
5.
Trends Hear ; 22: 2331216518800871, 2018.
Article in English | MEDLINE | ID: mdl-30322347

ABSTRACT

Theory and implementation of acoustic virtual reality have matured and become a powerful tool for the simulation of entirely controllable virtual acoustic environments. Such virtual acoustic environments are relevant for various types of auditory experiments on subjects with normal hearing, facilitating flexible virtual scene generation and manipulation. When it comes to expanding the investigation group to subjects with hearing loss, choosing a reproduction system that properly integrates hearing aids into the virtual acoustic scene is crucial. Current loudspeaker-based spatial audio reproduction systems rely on different techniques to synthesize a surrounding sound field, providing various possibilities for adaptation and extension to allow applications in the field of hearing aid-related research. As one such option, the concept and implementation of an extended binaural real-time auralization system are presented here. This system is capable of generating complex virtual acoustic environments, including room acoustic simulations, which are reproduced via a combination of loudspeakers and research hearing aids. An objective evaluation covers the investigation of different system components, a simulation benchmark analysis for assessing the processing performance, and end-to-end latency measurements.


Subject(s)
Hearing Aids/standards , Hearing Loss/rehabilitation , Sound Localization/physiology , Speech Perception/physiology , Virtual Reality , Acoustic Stimulation , Computer Simulation , Female , Hearing Aids/trends , Hearing Loss/diagnosis , Humans , Male , Prosthesis Design , Research , Sensitivity and Specificity
6.
Scand J Psychol ; 59(6): 567-577, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30137681

ABSTRACT

This study considers whether bilingual children listening in a second language face higher processing and cognitive demands when noise is present. Forty-four Swedish sequential bilingual 15-year-olds were given memory span and vocabulary assessments in their first and second languages (Swedish and English). First and second language speech reception thresholds (SRTs) at 50% intelligibility for numbers and colors presented in noise were obtained using an adaptive procedure. The target sentences were presented in simulated, virtual classroom acoustics, masked by either 16-talker multi-talker babble noise (MTBN) or speech-shaped noise (SSN), positioned either directly in front of the listener (collocated with the target speech) or spatially separated from the target speech by 90° to either side. Main effects of the Spatial and Noise factors indicated that intelligibility was 3.8 dB lower in collocated conditions and 2.9 dB lower in MTBN conditions. SRTs were unexpectedly higher by 0.9 dB in second language conditions. Memory span significantly predicted 17% of the variance in the second language SRTs and 9% of the variance in first language SRTs, suggesting that the SRT task places higher cognitive demands on listeners when the target is in their second language than when it is in their first.
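The "percent of variance predicted" figures above are the R² of a simple linear regression of SRT on memory span. A minimal sketch of that statistic from scratch (data below are made up for illustration, not values from the study):

```python
def r_squared(x, y):
    """Proportion of variance in y explained by a simple linear fit on x:
    R^2 = cov(x, y)^2 / (var(x) * var(y)), i.e. the squared Pearson r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)          # sum of squares of x
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # cross-products
    syy = sum((yi - my) ** 2 for yi in y)          # sum of squares of y
    return (sxy ** 2) / (sxx * syy)

# Perfectly linear toy data -> R^2 of 1.0; real span/SRT data would be noisy.
print(r_squared([1, 2, 3], [2, 4, 6]))  # 1.0
```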


Subject(s)
Cognition/physiology , Language , Memory/physiology , Multilingualism , Perceptual Masking/physiology , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Auditory Threshold/physiology , Female , Humans , Male , Memory, Short-Term/physiology , Neuropsychological Tests , Noise , Vocabulary
7.
J Exp Psychol Appl ; 24(2): 222-235, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29878842

ABSTRACT

Telephone conversation is ubiquitous within the office setting. Overhearing a telephone conversation-whereby only one of the two speakers is heard-is subjectively more annoying and objectively more distracting than overhearing a full conversation. The present study sought to determine whether this "halfalogue" effect is attributable to unexpected offsets and onsets within the background speech (acoustic unexpectedness) or to the tendency to predict the unheard part of the conversation (semantic [un]predictability), and whether these effects can be shielded against through top-down cognitive control. In Experiment 1, participants performed an office-related task in quiet or in the presence of halfalogue or dialogue background speech. Irrelevant speech was either meaningful or meaningless. The halfalogue effect was only present in the meaningful speech condition. Experiment 2 addressed whether higher task engagement could shield against the halfalogue effect by manipulating the font of the to-be-read material. Although the halfalogue effect was found with an easy-to-read font (fluent text), the use of a difficult-to-read font (disfluent text) eliminated the effect. The halfalogue effect is thus attributable to the semantic (un)predictability, not the acoustic unexpectedness, of background telephone conversation and can be prevented by simple means such as increasing the level of engagement required by the focal task.


Subject(s)
Attention/physiology , Communication , Speech Perception/physiology , Telephone , Adolescent , Female , Humans , Male