1.
Front Psychol ; 11: 577510, 2020.
Article in English | MEDLINE | ID: mdl-33117244

ABSTRACT

It has been suggested that early cry parameters are connected to later cognitive abilities. The present study is the first to investigate whether the acoustic features of infant cry are associated with cognitive development as early as the first year, as measured by oculomotor orienting and attention disengagement. Cry sounds for acoustic analysis of fundamental frequency (F0) were recorded in two neonatal cohorts, at the age of 0-8 days (Tampere, Finland) or at 6 weeks (Cape Town, South Africa). Eye tracking was used to measure oculomotor orienting to peripheral visual stimuli and attention disengagement from central stimuli at 8 months (Tampere) or 6 months (Cape Town) of age. A marginal positive correlation between F0 and visual attention disengagement was observed in the Tampere cohort but not in the Cape Town cohort: infants from the Tampere cohort with a higher neonatal F0 were marginally slower to shift their gaze away from the central stimulus to the peripheral stimulus. No associations between F0 and oculomotor orienting were observed in either cohort. We discuss possible factors underlying the current pattern of results, which suggests a lack of replicable associations between neonatal cry and visual attention, and we suggest directions for future research on the potential of early cry analysis for predicting later cognitive development.
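The study above centers on the fundamental frequency (F0) of cry recordings. The paper's own analysis pipeline is not described here; as a minimal illustrative sketch, F0 of a voiced frame can be estimated by picking the autocorrelation peak within a plausible lag range (the 300-700 Hz search band and the synthetic 450 Hz test tone below are assumptions, not values from the study):

```python
import numpy as np

def estimate_f0(frame, sr, fmin=300.0, fmax=700.0):
    """Estimate F0 by locating the autocorrelation peak in the lag
    range corresponding to [fmin, fmax] Hz."""
    frame = frame - frame.mean()
    # One-sided autocorrelation (lags 0 .. len(frame)-1)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)   # shortest lag to consider
    lag_max = int(sr / fmin)   # longest lag to consider
    lag = lag_min + np.argmax(ac[lag_min:lag_max + 1])
    return sr / lag

# Synthetic 450 Hz tone as a stand-in for a 50 ms cry frame (hypothetical data)
sr = 16000
t = np.arange(int(0.05 * sr)) / sr
frame = np.sin(2 * np.pi * 450.0 * t)
f0 = estimate_f0(frame, sr)  # close to 450 Hz
```

Real cry analysis would add voicing detection and frame-wise averaging; this sketch shows only the core lag-to-frequency step.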

2.
Trends Hear ; 23: 2331216519848288, 2019.
Article in English | MEDLINE | ID: mdl-31104580

ABSTRACT

People with hearing impairment find competing-voices scenarios challenging, both in switching attention from one talker to the other and in maintaining attention. The Danish competing voices test (CVT) presented here assesses these dual-attention skills. The CVT provides sentences spoken by three male and three female talkers, played in sentence pairs. The listener's task is to repeat the target sentence from the pair, cued either before or after playback. One potential way of assisting segregation of two talkers is to exploit spatial unmasking by presenting one talker per ear, after applying time-frequency masks to separate the mixture. Using the CVT, this study evaluated four spatial conditions in 14 moderately-to-severely hearing-impaired listeners to establish benchmark results for this type of algorithm in hearing-impaired listeners. The four spatial conditions were: summed (diotic), separate, the ideal ratio mask, and the ideal binary mask. The results show that the test is sensitive to the change in spatial condition. The temporal position of the cue has a large impact: cueing the target talker before playback focuses attention on the target, whereas cueing after playback requires equal attention to both talkers, which is more difficult. Furthermore, both applied ideal masks yield test scores very close to the ideal separate spatial condition, suggesting that this technique is useful for future separation algorithms using estimated rather than ideal masks.


Subject(s)
Audiology, Auditory Perception, Hearing Loss, Speech Perception, Adult, Audiology/methods, Auditory Threshold, Cues, Female, Humans, Language, Male, Perceptual Masking
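The ideal ratio mask (IRM) and ideal binary mask (IBM) evaluated in the entry above are standard time-frequency masks computed from the known source spectrograms. As a minimal sketch (the toy magnitude grids below are illustrative values, not data from the study):

```python
import numpy as np

def ideal_masks(mag_a, mag_b, eps=1e-12):
    """Ideal ratio mask (IRM, soft) and ideal binary mask (IBM, hard)
    for talker A, given magnitude spectrograms of both talkers."""
    pa, pb = mag_a ** 2, mag_b ** 2
    irm = pa / (pa + pb + eps)      # energy ratio in [0, 1]
    ibm = (pa > pb).astype(float)   # 1 where talker A dominates the bin
    return irm, ibm

# Toy 2x3 time-frequency magnitude grids (illustrative values only)
a = np.array([[3.0, 1.0, 0.5],
              [2.0, 0.1, 4.0]])
b = np.array([[1.0, 2.0, 0.5],
              [0.5, 3.0, 1.0]])
irm, ibm = ideal_masks(a, b)
# Applying each talker's mask to the mixture spectrogram and routing the
# masked streams one per ear gives the dichotic presentation the study uses.
```

These masks are "ideal" because they require the clean sources; the entry's closing point is that algorithms estimating such masks from the mixture alone could inherit much of this benefit.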
3.
J Acoust Soc Am ; 144(1): 172, 2018 07.
Article in English | MEDLINE | ID: mdl-30075667

ABSTRACT

Hearing aid users are challenged in noisy listening situations, and especially in speech-on-speech situations with two or more competing voices. The task of attending to and segregating two competing voices is particularly hard for them, unlike for normal-hearing listeners, as shown in a small sub-experiment. In the main experiment, the competing-voices benefit of a deep neural network (DNN)-based stream segregation enhancement algorithm was tested on hearing-impaired listeners. A mixture of two voices was separated using a DNN, presented to the two ears as individual streams, and scored for word recognition. Compared to the unseparated mixture, separation yielded a 13-percentage-point benefit while attending to both voices. If only one output was selected, as in a traditional target-masker scenario, a larger benefit of 37 percentage points was found. The results agreed well with objective metrics and show that, for hearing-impaired listeners, DNNs have large potential for improving stream segregation and speech intelligibility in difficult scenarios with two equally important targets, without any prior selection of a primary target stream. An even higher benefit can be obtained if the user can select the preferred target via remote control.


Subject(s)
Algorithms, Auditory Perception/physiology, Hearing Loss/rehabilitation, Speech Intelligibility/physiology, Speech Perception/physiology, Aged, Aged, 80 and over, Auditory Threshold/physiology, Female, Hearing Tests, Humans, Male, Middle Aged, Perceptual Masking/physiology, Voice/physiology
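The abstract above does not specify the DNN architecture used for mask estimation, so the following is only a minimal untrained sketch of the general idea: a small network maps per-frame log-magnitude features to a soft mask in [0, 1], which can then split the mixture into two streams for dichotic presentation. All layer sizes and weights here are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def dnn_mask(log_mag_frame, w1, b1, w2, b2):
    """One-hidden-layer mask estimator: log-magnitude spectrum in,
    per-frequency-bin soft mask in [0, 1] out (sigmoid output)."""
    h = np.tanh(log_mag_frame @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

# Placeholder dimensions and random (untrained) weights, for shape only
n_bins, n_hidden = 257, 128
w1 = rng.normal(0, 0.1, (n_bins, n_hidden)); b1 = np.zeros(n_hidden)
w2 = rng.normal(0, 0.1, (n_hidden, n_bins)); b2 = np.zeros(n_bins)

frame = rng.normal(size=n_bins)  # stand-in log-magnitude spectrum
mask = dnn_mask(frame, w1, b1, w2, b2)
# mask * mixture_spectrum yields one stream and (1 - mask) * mixture_spectrum
# the other; routing one stream per ear gives the dichotic presentation.
```

A trained system would learn the weights against ideal-mask targets (e.g., the IRM); this sketch only shows the input-to-mask mapping and how the two complementary streams are formed.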