Results 1 - 3 of 3

1.
J Acoust Soc Am ; 131(6): 4732-42, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22712946

ABSTRACT

Sound localization with hearing aids has traditionally been investigated in artificial laboratory settings that are not representative of the environments in which hearing aids are actually used. With individual Head-Related Transfer Functions (HRTFs) and room simulations, realistic environments can be reproduced and the performance of hearing aid algorithms can be evaluated. In this study, four different environments with background noise were implemented in which listeners had to localize different sound sources. The HRTFs were measured both inside the ear canals of the test subjects and at the microphones of Behind-The-Ear (BTE) hearing aids. In the first experiment, the virtual-acoustics system was evaluated by comparing perceptual sound localization results for the four scenes between a real room and its simulation. In the second experiment, sound localization was examined with three BTE algorithms: an omnidirectional microphone, a monaural cardioid-shaped beamformer, and a monaural noise canceler. The results showed that the system for generating virtual environments is a reliable tool for evaluating sound localization with hearing aids. With BTE hearing aids, localization performance decreased and the number of front-back confusions was at chance level. The beamformer, owing to its directivity characteristics, allowed listeners to resolve the front-back ambiguity.
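
For illustration, the sketch below shows the generic delay-and-subtract design behind a cardioid-shaped beamformer of the kind tested here. The abstract does not specify the study's actual BTE algorithm, so the sample rate, microphone spacing, and function name are illustrative assumptions.

```python
# Minimal sketch of a first-order differential (cardioid) beamformer.
# Assumed: two omnidirectional BTE microphones with spacing MIC_SPACING_M,
# both signals sampled at FS and already time-aligned; not the study's
# actual algorithm.
import numpy as np

FS = 16_000            # sample rate, Hz (assumption)
MIC_SPACING_M = 0.012  # front-to-rear microphone spacing, m (assumption)
C = 343.0              # speed of sound, m/s

def cardioid_beamformer(front: np.ndarray, rear: np.ndarray) -> np.ndarray:
    """Delay-and-subtract cardioid: delay the rear microphone by the
    acoustic travel time across the array, then subtract. A wave arriving
    from the rear reaches both terms in phase and cancels, forming the
    backward-pointing null."""
    delay_s = MIC_SPACING_M / C               # inter-mic travel time, s
    n = len(rear)
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    # Fractional-sample delay applied as a linear phase shift
    rear_delayed = np.fft.irfft(
        np.fft.rfft(rear) * np.exp(-2j * np.pi * freqs * delay_s), n)
    return front - rear_delayed
```

The rear-pointing null of this polar pattern is what suppresses sound arriving from behind, which is consistent with the abstract's finding that the beamformer's directivity let listeners resolve front-back ambiguity; practical designs also equalize the roughly 6 dB/octave high-pass response the subtraction introduces.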


Subject(s)
Acoustics, Hearing Aids, Sound Localization/physiology, Adult, Algorithms, Computer Simulation, Female, Humans, Male, Noise/adverse effects, Sound Spectrography
2.
Hear Res ; 239(1-2): 1-11, 2008 May.
Article in English | MEDLINE | ID: mdl-18295993

ABSTRACT

This study evaluated the maximum attainable performance of speech enhancement strategies based on coherent modulation filtering. An optimal adaptive coherent modulation filtering algorithm was designed to enhance known signals from a target talker in two-talker babble noise. The algorithm was evaluated in a closed-set speech-recognition-in-noise task, with the speech reception threshold (SRT) measured using a one-down, one-up adaptive procedure. Five hearing-impaired subjects and five cochlear implant users were tested in three processing conditions: (1) original sounds; (2) sounds processed by a fixed coherent modulation filter; and (3) sounds processed by the optimal coherent modulation filter. Six normal-hearing subjects were tested with a 6-channel cochlear implant simulation of sounds processed in the same three conditions. Significant improvements in SRTs were observed when the signal was processed with the optimal coherent modulation filtering algorithm, whereas the fixed modulation filter yielded no benefit. These results suggest that coherent modulation filtering may be a promising method for front-end processing in hearing aids and cochlear implants. An approach such as hidden Markov models could be used to generalize the optimal algorithm to unknown utterances and to extend it to open-set speech.
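
As a rough illustration of the technique, the sketch below implements a simplified coherent modulation filter: each subband's carrier is estimated as the instantaneous phase of its analytic signal, the resulting modulator is low-pass filtered, and the subbands are remodulated and summed. This is not the paper's optimal adaptive algorithm; the band count, modulation cutoff, and rectangular filterbank are illustrative assumptions.

```python
# Simplified coherent modulation filtering sketch (assumptions noted above);
# the study's optimal adaptive filter design is not reproduced here.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

FS = 16_000            # sample rate, Hz (assumption)
N_BANDS = 16           # number of subbands (assumption)
MOD_CUTOFF_HZ = 16.0   # modulation low-pass cutoff, Hz (assumption)

def coherent_mod_filter(x: np.ndarray) -> np.ndarray:
    """Split x into subbands, coherently demodulate each against a carrier
    estimated from its analytic signal, low-pass filter the modulator,
    then remodulate and sum the subbands."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / FS)
    edges = np.linspace(0.0, FS / 2, N_BANDS + 1)
    b_lp, a_lp = butter(2, MOD_CUTOFF_HZ / (FS / 2))
    y = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xk = np.where((freqs >= lo) & (freqs < hi), X, 0.0)  # crude subband
        sk = np.fft.irfft(Xk, len(x))
        ak = hilbert(sk)                      # analytic subband signal
        carrier = np.exp(1j * np.angle(ak))   # coherent carrier estimate
        modulator = (ak * np.conj(carrier)).real   # complex envelope -> |ak|
        mod_f = filtfilt(b_lp, a_lp, modulator)    # the modulation filter
        y += (mod_f * carrier).real           # remodulate and recombine
    return y
```

Replacing the fixed low-pass filter with one adapted to the known target signal would move this sketch toward the adaptive setting the study evaluates.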


Subject(s)
Cochlear Implants, Hearing Aids, Hearing Loss/therapy, Noise, Acoustics, Adult, Aged, Aged, 80 and over, Algorithms, Equipment Design, Hearing Loss/rehabilitation, Humans, Markov Chains, Middle Aged, Models, Statistical, Speech Perception/physiology
3.
J Acoust Soc Am ; 117(4 Pt 1): 2238-46, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15898664

ABSTRACT

Previous studies have shown that infant-directed speech ('motherese') exhibits exaggerated acoustic properties that may facilitate the acquisition of phonetic categories by infant learners. It has been suggested that training automatic speech recognition systems on infant-directed data might likewise enhance the automatic learning and discrimination of phonetic categories. This study investigates the properties of infant-directed versus adult-directed speech from the perspective of the statistical pattern recognition paradigm underlying automatic speech recognition. Isolated-word speech recognizers were trained on adult-directed and infant-directed data sets and tested on both matched and mismatched data. Recognizers trained on infant-directed speech did not always perform better; however, their relative loss in performance on mismatched data was significantly less severe than that of recognizers trained on adult-directed speech and tested on infant-directed data. An analysis of the statistical distributions of a subset of phonetic classes in both data sets showed that this pattern is caused by larger class overlaps in infant-directed speech. This finding has implications both for automatic speech recognition and for theories of infant speech perception.
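
One generic way to quantify the class overlap referred to here is the Bhattacharyya distance between Gaussian models fit to each phonetic class. The abstract does not state the paper's exact analysis method, so the sketch below is an assumed illustration; the feature arrays in the usage comment are hypothetical.

```python
# Class-overlap measurement via the Bhattacharyya distance between two
# Gaussians; one standard approach, not necessarily the paper's analysis.
import numpy as np

def bhattacharyya_gaussian(x_a: np.ndarray, x_b: np.ndarray) -> float:
    """Bhattacharyya distance between Gaussians fit to two classes of
    feature vectors (rows); a smaller distance means greater overlap."""
    mu_a, mu_b = x_a.mean(axis=0), x_b.mean(axis=0)
    S_a = np.cov(x_a, rowvar=False)
    S_b = np.cov(x_b, rowvar=False)
    S = 0.5 * (S_a + S_b)                     # averaged covariance
    diff = mu_b - mu_a
    term_mean = 0.125 * diff @ np.linalg.solve(S, diff)
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_a = np.linalg.slogdet(S_a)
    _, logdet_b = np.linalg.slogdet(S_b)
    term_cov = 0.5 * (logdet_S - 0.5 * (logdet_a + logdet_b))
    return float(term_mean + term_cov)

# Hypothetical usage: given per-class feature arrays of shape [n_frames, n_dims]
# for two vowel classes in adult-directed (ads_*) and infant-directed (ids_*)
# data, d_ids < d_ads would indicate larger class overlap in infant-directed
# speech:
#   d_ads = bhattacharyya_gaussian(ads_i, ads_e)
#   d_ids = bhattacharyya_gaussian(ids_i, ids_e)
```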


Subject(s)
Language Development, Mother-Child Relations, Phonetics, Speech Production Measurement, Speech Recognition Software, Verbal Behavior, Adult, Child, Discriminant Analysis, Female, Humans, Infant, Linear Models, Models, Statistical