Results 1 - 4 of 4
1.
J Neurosci; 43(45): 7668-7677, 2023 Nov 8.
Article in English | MEDLINE | ID: mdl-37734948

ABSTRACT

Hearing is an active process, and recent studies show that even the ear is affected by cognitive states or motor actions. One example is movements of the eardrum induced by saccadic eye movements, known as "eye movement-related eardrum oscillations" (EMREOs). While these are systematically shaped by the direction and size of saccades, the consequences of saccadic eye movements and their resulting EMREOs for hearing remain unclear. Here we studied their implications for the detection of near-threshold clicks in human participants. Across three experiments, sound detection was affected neither by the time of presentation relative to saccade onset, nor by saccade amplitude or direction. While the EMREOs were shaped by the direction and amplitude of the saccadic movement, inducing covert shifts in spatial attention did not affect the EMREO, suggesting that this signature of active sensing is restricted to overt changes in visual focus. Importantly, in our experiments, fluctuations in EMREO amplitude were not related to detection performance, at least when monaural cues are sufficient. Hence, while eye movements may shape the transduction of acoustic information, the behavioral implications remain to be understood.

SIGNIFICANCE STATEMENT: Previous studies suggest that oculomotor behavior may influence how we perceive spatially localized sounds. Recent work has introduced a new perspective on this question by showing that eye movements can directly modulate the eardrum. Yet, it remains unclear whether this signature of active hearing accounts for behavioral effects. Here we show that overt but not covert changes in visual attention modulate the eardrum, but that these modulations do not interfere with the detection of sounds. Our results provide a starting point towards a deeper understanding of the interplay between oculomotor behavior and the active ear. (A saccade-aligned averaging sketch follows this entry.)


Subject(s)
Eye Movements, Saccades, Humans, Tympanic Membrane, Hearing, Sound
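A minimal sketch of the saccade-aligned averaging that underlies EMREO measurements may make the approach concrete. It is an illustration only, not the authors' analysis pipeline; the function name, sampling rate, and epoch window are assumptions (Python).

import numpy as np

def average_emreo(ear_mic, saccade_onsets, fs=10_000, pre=0.05, post=0.15):
    """Epoch an in-ear microphone trace around saccade onsets and average.

    ear_mic        : 1-D array, continuous in-ear microphone recording
    saccade_onsets : saccade onset times in seconds
    fs             : sampling rate of the microphone signal (Hz)
    pre, post      : epoch window around each onset (seconds)
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in saccade_onsets:
        i = int(round(t * fs))
        if i - n_pre < 0 or i + n_post > len(ear_mic):
            continue  # skip saccades too close to the recording edges
        seg = ear_mic[i - n_pre:i + n_post]
        epochs.append(seg - seg[:n_pre].mean())  # baseline-correct to the pre-saccade mean
    return np.mean(epochs, axis=0)  # saccade-locked average ~ EMREO waveform

Averaging separately for leftward and rightward saccades, or for different saccade amplitudes, would expose the direction and amplitude dependence described in the abstract.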
2.
eNeuro; 9(3), 2022.
Article in English | MEDLINE | ID: mdl-35728955

ABSTRACT

Speech is an intrinsically multisensory signal, and seeing the speaker's lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension. Previous work debated whether lip signals are mainly processed along the auditory pathways or whether the visual system directly implements speech-related processes. To probe this, we systematically characterized dynamic representations of multiple acoustic and visual speech-derived features in source-localized MEG recordings that were obtained while participants listened to speech or viewed silent speech. Using a mutual-information framework, we provide a comprehensive assessment of how well temporal and occipital cortices reflect the physically presented signals and unique aspects of acoustic features that were physically absent but may be critical for comprehension. Our results demonstrate that both cortices feature a functionally specific form of multisensory restoration: during lip reading, they reflect unheard acoustic features, independent of co-existing representations of the visible lip movements. This restoration emphasizes the unheard pitch signature in occipital cortex and the speech envelope in temporal cortex and is predictive of lip-reading performance. These findings suggest that when seeing the speaker's lips, the brain engages both visual and auditory pathways to support comprehension by exploiting multisensory correspondences between lip movements and spectro-temporal acoustic cues. (A simplified mutual-information sketch follows this entry.)


Subject(s)
Lipreading, Speech Perception, Acoustic Stimulation, Acoustics, Humans, Speech
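As an illustration of the mutual-information framework mentioned above, the following is a simplified Gaussian-copula-style MI estimate between a source-localized brain signal and a speech feature. It is a sketch under assumptions, not the authors' implementation, and it ignores refinements such as lagged embeddings or conditioning on co-existing features (Python).

import numpy as np
from scipy.stats import norm, rankdata

def copula_normalise(x):
    """Rank-transform a 1-D signal and map the ranks to standard-normal quantiles."""
    r = rankdata(x) / (len(x) + 1.0)
    return norm.ppf(r)

def gaussian_mi(x, y):
    """Mutual information (in bits) between two signals after copula normalisation.

    For jointly Gaussian variables, MI = -0.5 * log2(1 - r**2),
    where r is the Pearson correlation.
    """
    gx, gy = copula_normalise(x), copula_normalise(y)
    r = np.corrcoef(gx, gy)[0, 1]
    return -0.5 * np.log2(1.0 - r ** 2)

# Usage sketch: MI between an occipital source time course and the unheard
# pitch contour during silent lip reading (both assumed to be 1-D arrays).
# mi_bits = gaussian_mi(occipital_source, pitch_contour)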
3.
Brain Sci; 12(5), 2022 Apr 29.
Article in English | MEDLINE | ID: mdl-35624970

ABSTRACT

INTRODUCTION: Due to the broadened indication range for cochlear implants and the demographic shift towards an aging society, more and more people are receiving cochlear implants. Implantation requires close-knit audiological and speech-therapy aftercare. Hearing rehabilitation currently demands considerable personnel effort and is time consuming. Hearing and speech therapy rehabilitation can be supported by digital hearing training programs; however, the apps currently on the market are only personalized and structured to a limited degree. Increasing digitalization makes it possible, especially in times of pandemics, to decouple hearing therapy from everyday clinical practice. MATERIAL AND METHODS: For this purpose, an app is being developed that provides hearing therapy tailored to the patient, taking into account the individual factors that influence hearing outcome. Using intelligent algorithms, the app determines the selection of exercises, the level of difficulty, and the speed at which the difficulty is increased. RESULTS: The app works autonomously, without requiring a connection to local speech therapists. In addition, the app can analyze the difficulties patients encounter within the exercises and draw conclusions about the need for technical adjustments. CONCLUSIONS: The newly developed app presented here offers a way to support, replace, expand, and improve classic outpatient hearing and speech therapy after CI implantation. The way the application works allows it to reach more people and to provide a time- and cost-saving alternative to traditional therapy.
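The adaptive behaviour described above (exercise selection, difficulty level, speed of increase) can be illustrated with a minimal difficulty-update rule. This is a hypothetical sketch, not the algorithm used in the app; the 2-up/1-down rule and its parameters are assumptions (Python).

def update_difficulty(level, recent_results, step_up=1, step_down=1,
                      min_level=1, max_level=10):
    """Simple 2-up / 1-down rule applied to the most recent exercise items.

    level          : current difficulty level (int)
    recent_results : list of booleans, True = item answered correctly
    """
    if len(recent_results) >= 2 and all(recent_results[-2:]):
        level += step_up        # two correct in a row -> make it harder
    elif recent_results and not recent_results[-1]:
        level -= step_down      # last item wrong -> make it easier
    return max(min_level, min(max_level, level))

# Usage sketch:
# level = update_difficulty(3, [True, True])   # -> 4
# level = update_difficulty(4, [True, False])  # -> 3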

4.
Neuroimage; 233: 117958, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33744458

ABSTRACT

The representation of speech in the brain is often examined by measuring the alignment of rhythmic brain activity to the speech envelope. To conveniently quantify this alignment (termed "speech tracking"), many studies consider the broadband speech envelope, which combines acoustic fluctuations across the spectral range. Using EEG recordings, we show that relying on this broadband envelope can provide a distorted picture of speech encoding. We systematically investigated the encoding of spectrally limited speech-derived envelopes presented via individual and multiple noise carriers in the human brain. Tracking in the 1 to 6 Hz EEG bands differentially reflected low- (0.2-0.83 kHz) and high-frequency (2.66-8 kHz) speech-derived envelopes. This was independent of the specific carrier frequency but sensitive to attentional manipulations, and may reflect the context-dependent emphasis of information from distinct spectral ranges of the speech envelope in low-frequency brain activity. As low- and high-frequency speech envelopes relate to distinct phonemic features, our results suggest that functionally distinct processes contribute to speech tracking in the same EEG bands and are easily confounded when considering the broadband speech envelope. (A band-limited envelope sketch follows this entry.)


Subject(s)
Acoustic Stimulation/methods, Brain Mapping/methods, Brain/physiology, Delta Rhythm/physiology, Speech Perception/physiology, Theta Rhythm/physiology, Adult, Brain/diagnostic imaging, Electroencephalography/methods, Female, Humans, Male, Speech/physiology, Young Adult
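To illustrate the distinction between the broadband envelope and spectrally limited envelopes, the following sketch extracts band-limited Hilbert envelopes and correlates them with an EEG channel. The filter settings and the plain correlation measure are assumptions made for illustration; the study's analyses were more elaborate (Python).

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelope(audio, fs, lo, hi):
    """Hilbert envelope of the audio restricted to one spectral band (lo-hi Hz)."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, audio)))

def tracking_correlation(eeg, envelope, fs_eeg, fs_audio):
    """Correlate an EEG channel with a speech envelope resampled to the EEG rate."""
    n_out = int(len(envelope) * fs_eeg / fs_audio)
    idx = np.linspace(0, len(envelope) - 1, n_out)
    env_ds = np.interp(idx, np.arange(len(envelope)), envelope)
    n = min(len(eeg), len(env_ds))
    return np.corrcoef(eeg[:n], env_ds[:n])[0, 1]

# Band limits taken from the abstract:
# env_low  = band_envelope(audio, fs_audio, 200, 830)    # 0.2-0.83 kHz
# env_high = band_envelope(audio, fs_audio, 2660, 8000)  # 2.66-8 kHz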