Results 1 - 20 of 10,402
1.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38700440

ABSTRACT

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals: such a limit could drive correlated individual differences in auditory change detection and visual selective attention, as well as within-subject competition between the two, with task-based modulation of visual attention causing a within-participant decrease in auditory change detection sensitivity.


Subject(s)
Attention , Auditory Perception , Electroencephalography , Visual Perception , Humans , Attention/physiology , Male , Female , Young Adult , Adult , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Evoked Potentials/physiology , Brain/physiology , Adolescent
2.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715408

ABSTRACT

Speech comprehension in noise depends on complex interactions between peripheral sensory and central cognitive systems. Despite having normal peripheral hearing, older adults show difficulties in speech comprehension. It remains unclear whether the brain's neural responses could serve as indicators of aging. The current study examined whether individual brain activation during speech perception in different listening environments could predict age. We applied functional near-infrared spectroscopy to 93 normal-hearing human adults (20 to 70 years old) during a sentence listening task, which contained a quiet condition and four noisy conditions with different signal-to-noise ratios (SNR = 10, 5, 0, -5 dB). A data-driven approach, region-based brain-age predictive modeling, was adopted. We observed a significant behavioral decline with age under the four noisy conditions, but not under the quiet condition. Brain activations in the SNR = 10 dB listening condition successfully predicted an individual's age. Moreover, we found that the bilateral visual sensory cortex, left dorsal speech pathway, left cerebellum, right temporal-parietal junction area, right homolog of Wernicke's area, and right middle temporal gyrus contributed most to prediction performance. These results demonstrate that activation of regions involved in sensory-motor mapping of sound, especially in noisy conditions, can be a more sensitive measure for age prediction than external behavioral measures.
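As an illustration of the brain-age predictive modeling step, here is a minimal sketch of cross-validated age prediction from regional activation features, assuming scikit-learn and SciPy; the feature matrix, region count, and Ridge model are hypothetical stand-ins, not the study's actual pipeline.

```python
# Hedged sketch of region-based brain-age prediction, assuming scikit-learn
# and SciPy. X, y, and the region count are synthetic placeholders, and
# Ridge regression stands in for the paper's unspecified predictive model.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_regions = 93, 20
X = rng.normal(size=(n_subjects, n_regions))  # regional activation features
y = rng.uniform(20, 70, size=n_subjects)      # chronological age (years)

y_pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=10)
r, p = pearsonr(y, y_pred)  # agreement between predicted and true age
print(f"predicted-vs-true age: r = {r:.2f}, p = {p:.3f}")
```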


Subject(s)
Aging , Brain , Comprehension , Noise , Spectroscopy, Near-Infrared , Speech Perception , Humans , Adult , Speech Perception/physiology , Male , Female , Spectroscopy, Near-Infrared/methods , Middle Aged , Young Adult , Aged , Comprehension/physiology , Brain/physiology , Brain/diagnostic imaging , Aging/physiology , Brain Mapping/methods , Acoustic Stimulation/methods
3.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717467

ABSTRACT

A long-standing quest in audition concerns understanding relations between behavioral measures and neural representations of changes in sound intensity. Here, we examined relations between aspects of intensity perception and central neural responses within the inferior colliculus of unanesthetized rabbits (by averaging the population's spike count/level functions). We found parallels between the population's neural output and: (1) how loudness grows with intensity; (2) how loudness grows with duration; (3) how discrimination of intensity improves with increasing sound level; (4) findings that intensity discrimination does not depend on duration; and (5) findings that duration discrimination is a constant fraction of base duration.


Subject(s)
Inferior Colliculi , Loudness Perception , Animals , Rabbits , Loudness Perception/physiology , Inferior Colliculi/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Auditory Perception/physiology , Neurons/physiology
4.
PLoS One ; 19(5): e0303565, 2024.
Article in English | MEDLINE | ID: mdl-38781127

ABSTRACT

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of different tones (streams). A 3-class BCI using three tone sequences, perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user's right ear. Subjects were requested to attend to one of the three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants at a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the target of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross-validation test, a classification accuracy over 80% was achieved for five subjects and over 75% for nine subjects. For subjects whose accuracy was lower than 75%, either the P300 was also elicited by nonattended streams or the amplitude of the P300 was small. It was concluded that BCI systems based on auditory stream segregation can be extended to three classes, and that these classes can be detected via a single ear without the aid of any visual modality.
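A minimal sketch of Riemannian-geometry EEG classification follows, assuming the pyriemann package; the epochs, labels, and filter count are illustrative stand-ins for the paper's 64-channel P300 data, not its exact pipeline.

```python
# Minimal sketch of Riemannian-geometry EEG classification, assuming the
# pyriemann package; the epochs and labels below are random stand-ins for
# the paper's 64-channel P300 data, and the parameters are illustrative.
import numpy as np
from pyriemann.estimation import XdawnCovariances
from pyriemann.classification import MDM
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
epochs = rng.normal(size=(200, 64, 300))  # trials x channels x samples
labels = rng.integers(0, 2, size=200)     # target vs. non-target stimulus

# Spatially filter epochs, map them to covariance matrices, then classify
# by minimum distance to the Riemannian mean of each class (MDM).
clf = make_pipeline(XdawnCovariances(nfilter=4), MDM())
print("10-fold accuracy:", cross_val_score(clf, epochs, labels, cv=10).mean())
```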


Subject(s)
Acoustic Stimulation , Attention , Brain-Computer Interfaces , Electroencephalography , Humans , Male , Female , Electroencephalography/methods , Adult , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Event-Related Potentials, P300/physiology , Electrooculography/methods
5.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38702187

ABSTRACT

Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that the MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that the MMN is composed of multiple subcomponents, each responding to a different level of temporal regularity. To probe the hypothesized subcomponents of the MMN, we record human electroencephalography during an auditory local-global oddball paradigm in which the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictability at two hierarchical levels. We find that the size of the MMN is correlated with both probabilities, and that its spatiotemporal structure can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, one peaking early in the central-frontal area and the other late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of the MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
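For context, a sketch of the basic MMN quantification (the deviant-minus-standard difference wave) is shown below on synthetic epochs; the paper's two-subcomponent decomposition would operate on waveforms like this, and the window and sampling rate here are assumptions.

```python
# Sketch of the basic MMN quantification (deviant-minus-standard difference
# wave) on synthetic epochs; the analysis window and sampling rate are
# assumptions, not the paper's exact parameters.
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                    # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)            # epoch time axis (s)
standard = rng.normal(size=(300, t.size))   # standard-tone epochs
deviant = rng.normal(size=(80, t.size))     # deviant-tone epochs

mmn = deviant.mean(axis=0) - standard.mean(axis=0)  # difference wave
win = (t >= 0.1) & (t <= 0.25)                      # typical MMN window
peak = mmn[win].argmin()
print(f"MMN peak: {mmn[win][peak]:.3f} a.u. at {t[win][peak] * 1000:.0f} ms")
```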


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Humans , Male , Female , Electroencephalography/methods , Young Adult , Adult , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/physiology , Brain Mapping , Adolescent
6.
Codas ; 36(2): e20230048, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-38695432

ABSTRACT

PURPOSE: To correlate behavioral assessment results of central auditory processing with responses to a self-perception questionnaire after acoustically controlled auditory training. METHODS: The study assessed 10 individuals with a mean age of 44.5 years who had suffered mild traumatic brain injury. They underwent behavioral assessment of central auditory processing and answered the Formal Auditory Training self-perception questionnaire after the therapeutic intervention. The questionnaire's items address auditory perception, understanding of orders, requests to repeat statements, occurrence of misunderstandings, attention span, auditory performance in noisy environments, telephone communication, and self-esteem; patients were asked to indicate how frequently the listed behaviors occurred. RESULTS: Figure-ground, sequential memory for sounds, and temporal processing correlated with improvement in following instructions, fewer requests to repeat statements, increased attention span, and improved communication and understanding on the phone and when watching TV. CONCLUSION: Auditory closure, figure-ground, and temporal processing had improved in the assessment after the acoustically controlled auditory training, and there were fewer auditory behavior complaints.




Subject(s)
Auditory Perception , Self Concept , Humans , Adult , Male , Female , Auditory Perception/physiology , Surveys and Questionnaires , Middle Aged , Brain Concussion/psychology , Brain Concussion/rehabilitation , Acoustic Stimulation/methods , Young Adult
7.
Trends Hear ; 28: 23312165241246596, 2024.
Article in English | MEDLINE | ID: mdl-38738341

ABSTRACT

The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
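To make the TRF step concrete, here is a hedged sketch of a linear TRF estimated by ridge regression on a time-lagged, rectified stimulus envelope; the sampling rate, lag range, and regularization are illustrative assumptions, and the auditory-nerve-model variant the paper favors is not shown.

```python
# Hedged sketch of a subcortical TRF estimated by ridge regression on a
# time-lagged, rectified stimulus envelope; rates, lags, and regularization
# are illustrative assumptions, not the study's exact pipeline.
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                                      # high rate for brainstem TRFs
envelope = np.abs(rng.normal(size=fs * 30))    # rectified speech envelope
eeg = rng.normal(size=fs * 30)                 # one EEG channel

lags = np.arange(0, int(0.02 * fs))            # 0-20 ms lags span wave V
X = np.stack([np.roll(envelope, lag) for lag in lags], axis=1)
X[:lags.max()] = 0                             # drop wrapped-around samples

lam = 1e2                                      # ridge regularization strength
trf = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)
print(f"TRF peak lag: {1000 * lags[np.abs(trf).argmax()] / fs:.1f} ms")
```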


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory, Brain Stem , Speech Perception , Humans , Evoked Potentials, Auditory, Brain Stem/physiology , Male , Female , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Young Adult , Auditory Threshold/physiology , Time Factors , Cochlear Nerve/physiology , Healthy Volunteers
8.
PLoS Biol ; 22(5): e3002631, 2024 May.
Article in English | MEDLINE | ID: mdl-38805517

ABSTRACT

Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tend to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle is consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM are judged as music over speech, and this feature is more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
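A minimal sketch of extracting a sound's peak AM frequency follows, assuming SciPy: take the Hilbert envelope, then find the dominant frequency of the envelope spectrum. The toy carrier, 5 Hz modulation, and 0.5-32 Hz search band are assumptions for illustration.

```python
# Sketch of extracting a sound's peak AM frequency: Hilbert envelope, then
# the dominant frequency of the envelope spectrum. The toy stimulus and the
# 0.5-32 Hz search band are assumptions, not the paper's parameters.
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
sound = rng.normal(size=t.size) * (1 + np.sin(2 * np.pi * 5 * t))

envelope = np.abs(hilbert(sound))                   # temporal envelope
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
band = (freqs >= 0.5) & (freqs <= 32)               # plausible AM range
print(f"peak AM frequency: {freqs[band][spectrum[band].argmax()]:.1f} Hz")
```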


Subject(s)
Acoustic Stimulation , Auditory Perception , Music , Speech Perception , Humans , Male , Female , Adult , Auditory Perception/physiology , Acoustic Stimulation/methods , Speech Perception/physiology , Young Adult , Speech/physiology , Adolescent
9.
Article in English | MEDLINE | ID: mdl-38801679

ABSTRACT

Compared to traditional continuous performance tasks, virtual reality-based continuous performance tests (VR-CPT) offer higher ecological validity. While previous studies have primarily focused on behavioral outcomes in VR-CPT and incorporated various distractors to enhance ecological realism, little attention has been paid to the effects of distractors on EEG. Therefore, our study aimed to investigate the influence of distractors on EEG during VR-CPT. We studied visual and auditory distractors separately, recruiting 68 subjects (M = 20.82, SD = 1.72) and asking each to complete four tasks, categorized into four groups according to the presence or absence of visual and auditory distractors. We conducted paired t-tests on the mean relative power of the five electrodes in the ROI across different frequency bands. Significant differences were found in theta waves between Group 3 (M = 2.49, SD = 2.02) and Group 4 (M = 2.68, SD = 2.39) (p < 0.05); in alpha waves between Group 3 (M = 2.08, SD = 3.73) and Group 4 (M = 3.03, SD = 4.60) (p < 0.001); and in beta waves between Group 1 (M = -4.44, SD = 2.29) and Group 2 (M = -5.03, SD = 2.48) (p < 0.001), as well as between Group 3 (M = -4.48, SD = 2.03) and Group 4 (M = -4.67, SD = 2.23) (p < 0.05). The incorporation of distractors in VR-CPT modulates EEG signals across different frequency bands, with visual distractors attenuating theta band activity, auditory distractors enhancing alpha band activity, and both types of distractors reducing beta oscillations following target stimuli. This insight holds significant promise for the rehabilitation of children and adolescents with attention deficits.
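The reported statistics reduce to paired t-tests on mean relative band power; a sketch assuming SciPy is below, with per-subject values drawn synthetically to match the abstract's group means and SDs.

```python
# Sketch of the reported statistics: a paired t-test on mean relative band
# power between two conditions, assuming SciPy; the per-subject values are
# synthetic draws matched to the abstract's group means and SDs.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 68
theta_g3 = rng.normal(2.49, 2.02, n)  # relative theta power, Group 3
theta_g4 = rng.normal(2.68, 2.39, n)  # relative theta power, Group 4

t_stat, p = ttest_rel(theta_g3, theta_g4)  # within-subject comparison
print(f"theta, Group 3 vs. Group 4: t = {t_stat:.2f}, p = {p:.3f}")
```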


Subject(s)
Attention , Electroencephalography , Virtual Reality , Humans , Male , Female , Electroencephalography/methods , Young Adult , Attention/physiology , Adult , Visual Perception/physiology , Theta Rhythm/physiology , Acoustic Stimulation/methods , Alpha Rhythm/physiology , Photic Stimulation , Auditory Perception/physiology , Psychomotor Performance/physiology
10.
J Neural Eng ; 21(3)2024 May 30.
Article in English | MEDLINE | ID: mdl-38776893

ABSTRACT

Objective: Decoding auditory attention from brain signals is essential for the development of neuro-steered hearing aids. This study aims to overcome the challenges of extracting discriminative feature representations from electroencephalography (EEG) signals for auditory attention detection (AAD) tasks, particularly focusing on the intrinsic relationships between different EEG channels. Approach: We propose a novel attention-guided graph structure learning network, AGSLnet, which leverages potential relationships between EEG channels to improve AAD performance. Specifically, AGSLnet is designed to dynamically capture latent relationships between channels and construct a graph structure of EEG signals. Main result: We evaluated AGSLnet on two publicly available AAD datasets and demonstrated its superiority and robustness over state-of-the-art models. Visualization of the graph structure trained by AGSLnet supports previous neuroscience findings, enhancing our understanding of the underlying neural mechanisms. Significance: This study presents a novel approach for examining brain functional connections, improving AAD performance in low-latency settings, and supporting the development of neuro-steered hearing aids.
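A highly simplified sketch of the underlying idea, attention-derived channel-to-channel weights forming a soft graph, follows, assuming PyTorch; AGSLnet's actual architecture is not given in the abstract, so all layer choices here are hypothetical.

```python
# Highly simplified sketch of an attention-derived channel graph, assuming
# PyTorch. AGSLnet's actual architecture is not public in this abstract;
# this only illustrates learning pairwise channel weights from an EEG segment.
import torch

n_channels, n_samples, d = 64, 128, 32
eeg = torch.randn(1, n_channels, n_samples)    # one EEG segment

proj = torch.nn.Linear(n_samples, d)           # per-channel embedding
emb = proj(eeg)                                # (1, channels, d)
scores = emb @ emb.transpose(1, 2) / d ** 0.5  # pairwise attention scores
adjacency = torch.softmax(scores, dim=-1)      # soft "graph structure"
print(adjacency.shape)                         # torch.Size([1, 64, 64])
```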


Subject(s)
Attention , Electroencephalography , Humans , Electroencephalography/methods , Attention/physiology , Auditory Perception/physiology , Neural Networks, Computer , Acoustic Stimulation/methods , Male , Adult , Female , Brain/physiology
11.
Brain Behav ; 14(5): e3520, 2024 May.
Article in English | MEDLINE | ID: mdl-38715412

ABSTRACT

OBJECTIVE: In previous animal studies, sound enhancement reduced tinnitus perception in cases associated with hearing loss. The aim of this study was to investigate the efficacy of sound enrichment therapy in tinnitus treatment by developing a protocol that includes criteria for the psychoacoustic characteristics of tinnitus, in order to determine whether the etiology is related to hearing loss. METHODS: A total of 96 patients with chronic tinnitus were included in the study: 52 patients in the study group and 44 in the placebo group, assigned in consideration of residual inhibition (RI) outcomes and tinnitus pitches. Both groups received sound enrichment treatment with different spectral contents. Tinnitus handicap inventory (THI), visual analog scale (VAS), minimum masking level (MML), and tinnitus loudness level (TLL) results were compared before treatment and at 1, 3, and 6 months after treatment. RESULTS: There was a statistically significant difference between the groups in THI, VAS, MML, and TLL scores at every posttreatment time point from the first month onward (p < .01). For the study group, there was a statistically significant decrease in THI, VAS, MML, and TLL scores in the first month (p < .01). This decrease remained statistically significant in the third month after treatment for THI (p < .05) and at all months for VAS-1 (tinnitus severity) (p < .05) and VAS-2 (tinnitus discomfort) (p < .05). CONCLUSION: In clinical practice, after excluding other factors related to tinnitus etiology, sound enrichment treatment can be effective within a relatively short period of 1 month in tinnitus cases where RI is positive and the tinnitus pitch is matched with a hearing loss between 45 and 55 dB HL.


Subject(s)
Hearing Loss , Tinnitus , Tinnitus/therapy , Humans , Male , Female , Middle Aged , Adult , Hearing Loss/rehabilitation , Hearing Loss/therapy , Treatment Outcome , Aged , Acoustic Stimulation/methods , Sound , Psychoacoustics
12.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38702194

ABSTRACT

Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas the P300 is associated with cognitive processes such as updating of working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we explore, with an unsupervised and rigorous source estimation approach, the underlying cortical generators of MMN and P300 in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. Existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our EEG experiments reveal that this temporal facilitation extends to even the later components of prediction error processing. Such knowledge can be of value to clinical research for characterizing the key stages of lifespan aging, schizophrenia, and depression.
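As a toy illustration of individual-subject latency and amplitude estimation, a sketch assuming SciPy is shown below; the paper's unsupervised method is more involved, and the waveform here stands in for one subject's average ERP.

```python
# Sketch of data-driven, individual-subject peak estimation on an ERP
# waveform, assuming SciPy; the paper's unsupervised method is more
# involved, and the toy waveform below stands in for a subject's average.
import numpy as np
from scipy.signal import find_peaks

fs = 250
t = np.arange(-0.1, 0.8, 1 / fs)
erp = np.exp(-((t - 0.3) ** 2) / 0.002)   # toy P300-like component

peaks, _ = find_peaks(erp, height=0.5)    # candidate component peaks
for i in peaks:
    print(f"component at {t[i] * 1000:.0f} ms, amplitude {erp[i]:.2f}")
```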


Subject(s)
Electroencephalography , Event-Related Potentials, P300 , Humans , Male , Female , Adult , Electroencephalography/methods , Young Adult , Event-Related Potentials, P300/physiology , Auditory Perception/physiology , Cerebral Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology
13.
J Vis ; 24(5): 16, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38819806

ABSTRACT

Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether there are overarching mechanisms driving multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously for both modalities without provoking a dual-task situation, we query auditory perception by direct report, while measuring visual perception indirectly via eye movements. A support-vector-machine (SVM)-based classifier allows us to decode visual perception from the eye-tracking data on a moment-by-moment basis. For each timepoint, we compare visual percept (SVM output) and auditory percept (report) and quantify the co-occurrence of integrated (one-object) or segregated (two-objects) interpretations in the two modalities. Our results show an above-chance coupling of auditory and visual perceptual interpretations. By titrating stimulus parameters toward an approximately symmetric distribution of integrated and segregated percepts for each modality and individual, we minimize the amount of coupling expected by chance. Because of the nature of our task, we can rule out that the coupling stems from postperceptual levels (i.e., decision or response interference). Our results thus indicate moment-by-moment perceptual coupling in the resolution of visual and auditory multistability, lending support to theories that postulate joint mechanisms for multistable perception across the senses.
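A minimal sketch of the SVM decoding step follows, assuming scikit-learn; the eye-movement feature extraction (e.g., slow-phase velocities) is stubbed with random data, so names and dimensions are hypothetical.

```python
# Sketch of SVM-based decoding of the visual percept from eye-movement
# features, assuming scikit-learn; the feature extraction (e.g., slow-phase
# velocity measures) is stubbed with random placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))       # eye-tracking features per timepoint
y = rng.integers(0, 2, size=500)    # integrated (0) vs. segregated (1)

clf = SVC(kernel="rbf")
print("decoding accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```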


Subject(s)
Auditory Perception , Photic Stimulation , Visual Perception , Humans , Auditory Perception/physiology , Visual Perception/physiology , Adult , Male , Female , Photic Stimulation/methods , Young Adult , Eye Movements/physiology , Acoustic Stimulation/methods
14.
J Neural Eng ; 21(2)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38579741

ABSTRACT

Objective: The auditory steady-state response (ASSR) allows estimation of hearing thresholds. The ASSR can be estimated from electroencephalography (EEG) recordings from electrodes positioned on both the scalp and within the ear (ear-EEG). Ear-EEG can potentially be integrated into hearing aids, which would enable automatic fitting of the hearing device in daily life. The conventional stimuli for ASSR-based hearing assessment, such as pure tones and chirps, are monotonous and tiresome, making them inconvenient for repeated use in everyday situations. In this study we investigate the use of natural speech sounds for ASSR estimation. Approach: EEG was recorded from 22 normal-hearing subjects from both scalp and ear electrodes. Subjects were stimulated monaurally with 180 min of speech stimulus modified by applying a 40 Hz amplitude modulation (AM) to an octave frequency sub-band centered at 1 kHz. Each 50 ms sub-interval in the AM sub-band was scaled to match one of 10 pre-defined levels (0-45 dB sensation level, 5 dB steps). The apparent latency of the ASSR was estimated as the lag of maximum average cross-correlation between the envelope of the AM sub-band and the recorded EEG, and was used to align the EEG signal with the audio signal. The EEG was then split into sub-epochs of 50 ms length and sorted according to stimulation level, and the ASSR was estimated for each level for both scalp- and ear-EEG. Main results: Significant ASSRs with amplitudes increasing as a function of presentation level were recorded from both scalp and ear electrode configurations. Significance: Utilizing natural sounds in ASSR estimation offers the potential for electrophysiological hearing assessments that are more comfortable and less fatiguing than existing ASSR methods. Combined with ear-EEG, this approach may allow convenient hearing threshold estimation in everyday life, utilizing ambient sounds. Additionally, it may facilitate both initial fitting and subsequent adjustments of hearing aids outside of clinical settings.
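The apparent-latency step reduces to finding the lag of maximum cross-correlation between the AM-band envelope and the EEG; a sketch assuming SciPy is below, where the "EEG" is a noisy copy of the envelope delayed by 35 samples.

```python
# Sketch of the apparent-latency step: the lag of maximum cross-correlation
# between the AM sub-band envelope and the EEG, assuming SciPy; here the
# "EEG" is a noisy copy of the envelope delayed by 35 samples (35 ms).
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 1000
rng = np.random.default_rng(0)
envelope = rng.normal(size=fs * 10)
eeg = np.roll(envelope, 35) + rng.normal(size=fs * 10)

xc = correlate(eeg, envelope, mode="full")
lags = correlation_lags(eeg.size, envelope.size, mode="full")
causal = lags >= 0                       # keep physiologically valid lags
latency = lags[causal][xc[causal].argmax()] / fs
print(f"apparent latency: {latency * 1000:.0f} ms")
```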


Subject(s)
Hearing , Sound , Humans , Acoustic Stimulation/methods , Auditory Threshold/physiology , Electroencephalography/methods
15.
Nat Commun ; 15(1): 3116, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600132

ABSTRACT

Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.
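The location-decoding claim can be illustrated with a simple multinomial classifier over axonal responses, assuming scikit-learn; the response matrix and 8-speaker labels below are random stand-ins for the two-photon calcium data.

```python
# Sketch of decoding speaker location from AC axonal responses recorded in
# V1, assuming scikit-learn; responses and labels are random stand-ins for
# the two-photon calcium imaging data and a hypothetical 8-speaker array.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
responses = rng.normal(size=(400, 150))   # trials x axonal boutons
location = rng.integers(0, 8, size=400)   # speaker index per trial

decoder = LogisticRegression(max_iter=1000)
print("decoding accuracy:",
      cross_val_score(decoder, responses, location, cv=5).mean())
```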


Subject(s)
Auditory Cortex , Sound Localization , Visual Cortex , Visual Perception/physiology , Auditory Cortex/physiology , Neurons/physiology , Visual Cortex/physiology , Photic Stimulation/methods , Acoustic Stimulation/methods
16.
PeerJ ; 12: e17104, 2024.
Article in English | MEDLINE | ID: mdl-38680894

ABSTRACT

Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal hearing (NH) listeners benefit from is the interaural time difference (ITD), i.e., the difference between the arrival times of a sound at the two ears. ITD sensitivity is thought to rely heavily on effective utilization of temporal fine structure (the very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through the envelope or through interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer than that of NH individuals, and it degrades further at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD (ITDFS) or envelope ITD (ITDENV). Multiple EEG responses were analyzed, including subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller or absent compared to those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, but these were smaller than those evoked by lower carrier frequency SAM tones. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order being 40 > 160 > 80 > 320 Hz ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
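A common way to quantify an ASSR is the spectral magnitude of the averaged response at the modulation frequency; a minimal sketch on synthetic epochs follows, where a 40-Hz sinusoid stands in for the steady-state component.

```python
# Sketch of quantifying a 40-Hz ASSR as the spectral magnitude of the
# averaged response at the modulation frequency; epochs are synthetic and
# the embedded 40-Hz sinusoid stands in for the steady-state component.
import numpy as np

fs, dur = 1000, 1.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)
trials = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(size=(200, t.size))

avg = trials.mean(axis=0)                    # average over epochs
spectrum = np.abs(np.fft.rfft(avg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"ASSR magnitude at 40 Hz: {spectrum[freqs == 40][0]:.3f} a.u.")
```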


Subject(s)
Cochlear Implants , Cues , Electroencephalography , Humans , Electroencephalography/methods , Acoustic Stimulation/methods , Sound Localization/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Time Factors
17.
J Neurosci ; 44(19)2024 May 08.
Article in English | MEDLINE | ID: mdl-38561224

ABSTRACT

Coordinated neuronal activity has been identified to play an important role in information processing and transmission in the brain. However, current research predominantly focuses on understanding the properties and functions of neuronal coordination in hippocampal and cortical areas, leaving subcortical regions relatively unexplored. In this study, we use single-unit recordings in female Sprague Dawley rats to investigate the properties and functions of groups of neurons exhibiting coordinated activity in the auditory thalamus-the medial geniculate body (MGB). We reliably identify coordinated neuronal ensembles (cNEs), which are groups of neurons that fire synchronously, in the MGB. cNEs are shown not to be the result of false-positive detections or by-products of slow-state oscillations in anesthetized animals. We demonstrate that cNEs in the MGB have enhanced information-encoding properties over individual neurons. Their neuronal composition is stable between spontaneous and evoked activity, suggesting limited stimulus-induced ensemble dynamics. These MGB cNE properties are similar to what is observed in cNEs in the primary auditory cortex (A1), suggesting that ensembles serve as a ubiquitous mechanism for organizing local networks and play a fundamental role in sensory processing within the brain.
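One common cNE-detection approach (which may differ from this paper's exact method) applies PCA to z-scored spike-count matrices and keeps components whose eigenvalues exceed the Marchenko-Pastur bound; a sketch on synthetic counts with one injected synchronous group is below.

```python
# Sketch of one common cNE-detection approach: PCA on z-scored spike-count
# matrices with a Marchenko-Pastur significance bound. The paper's exact
# method may differ; a synchronous group is injected into synthetic counts.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 30, 5000
counts = rng.poisson(2.0, size=(n_neurons, n_bins)).astype(float)
counts[:5] += rng.poisson(1.0, size=n_bins)   # shared drive -> one ensemble

z = (counts - counts.mean(1, keepdims=True)) / counts.std(1, keepdims=True)
eigvals = np.linalg.eigvalsh(np.corrcoef(z))
mp_max = (1 + np.sqrt(n_neurons / n_bins)) ** 2   # Marchenko-Pastur bound
print("detected ensembles:", int((eigvals > mp_max).sum()))
```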


Subject(s)
Acoustic Stimulation , Geniculate Bodies , Neurons , Rats, Sprague-Dawley , Animals , Female , Rats , Neurons/physiology , Geniculate Bodies/physiology , Acoustic Stimulation/methods , Auditory Pathways/physiology , Action Potentials/physiology , Auditory Cortex/physiology , Auditory Cortex/cytology , Thalamus/physiology , Thalamus/cytology , Evoked Potentials, Auditory/physiology
18.
Brain Res ; 1834: 148901, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38561085

ABSTRACT

Cognitive deficits are prevalent in Parkinson's disease (PD), ranging from mild deficits in perception and executive function to severe dementia. Multisensory integration (MSI), the ability to pool information from different sensory modalities to form a combined, coherent perception of the environment, is known to be impaired in PD. This study investigated the disruption of audiovisual MSI in PD patients by evaluating temporal discrimination between auditory and visual stimuli presented at different stimulus onset asynchronies (SOAs). The experiment was conducted with fifteen PD patients and fifteen age-matched healthy controls, who were asked to report whether the audiovisual stimulus pairs were temporally simultaneous. The temporal binding window (TBW), the interval within which sensory modalities are perceived as synchronous, was adopted as the comparison index between PD patients and healthy individuals. Our results showed that PD patients had a significantly wider TBW than healthy controls, indicating abnormal audiovisual temporal discrimination. Furthermore, compared to healthy controls, PD patients had more difficulty discriminating temporal asynchrony for visual-first stimuli, whereas no significant difference was observed for auditory-first stimuli. PD patients also had shorter reaction times than healthy controls regardless of stimulus order. Together, our findings point to abnormal audiovisual temporal discrimination, a major component of MSI irregularity, in PD patients. These results have important implications for future MSI experiments and models that aim to uncover the underlying mechanisms of MSI in patients with PD.
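A TBW is typically estimated by fitting a Gaussian to the proportion of "simultaneous" reports across SOAs and reading off its width; a sketch assuming SciPy follows, with a toy SOA grid and a full-width-at-half-maximum criterion as assumptions.

```python
# Sketch of estimating a temporal binding window: fit a Gaussian to the
# proportion of "simultaneous" reports across SOAs and report its width.
# The SOA grid, toy data, and FWHM criterion are assumptions.
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # ms
p_sim = np.array([0.10, 0.20, 0.50, 0.80, 0.95, 0.85, 0.50, 0.20, 0.10])

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gauss, soa, p_sim, p0=[1.0, 0.0, 150.0])
print(f"TBW (full width at half maximum): {2.355 * abs(sigma):.0f} ms")
```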


Subject(s)
Acoustic Stimulation , Auditory Perception , Parkinson Disease , Photic Stimulation , Visual Perception , Humans , Parkinson Disease/physiopathology , Parkinson Disease/psychology , Male , Female , Aged , Auditory Perception/physiology , Middle Aged , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Discrimination, Psychological/physiology , Reaction Time/physiology , Time Perception/physiology
19.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679480

ABSTRACT

Existing neuroimaging studies on neural correlates of musical familiarity often employ a familiar vs. unfamiliar contrast analysis. This singular analytical approach reveals associations between explicit musical memory and musical familiarity. However, is the neural activity associated with musical familiarity solely related to explicit musical memory, or could it also be related to implicit musical memory? To address this, we presented 130 song excerpts of varying familiarity to 21 participants. While acquiring their brain activity using functional magnetic resonance imaging (fMRI), we asked the participants to rate the familiarity of each song on a five-point scale. To comprehensively analyze the neural correlates of musical familiarity, we examined it from four perspectives: the intensity of local neural activity, patterns of local neural activity, global neural activity patterns, and functional connectivity. The results from these four approaches were consistent and revealed that musical familiarity is related to the activity of both explicit and implicit musical memory networks. Our findings suggest that: (1) musical familiarity is also associated with implicit musical memory, and (2) there is a cooperative and competitive interaction between the two types of musical memory in the perception of music.


Subject(s)
Brain Mapping , Brain , Magnetic Resonance Imaging , Music , Recognition, Psychology , Humans , Music/psychology , Recognition, Psychology/physiology , Male , Female , Young Adult , Adult , Brain/physiology , Brain/diagnostic imaging , Brain Mapping/methods , Auditory Perception/physiology , Acoustic Stimulation/methods
20.
Neurobiol Dis ; 195: 106490, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38561111

ABSTRACT

The auditory oddball is a mainstay in research on attention, novelty, and sensory prediction. How this task engages subcortical structures like the subthalamic nucleus and substantia nigra pars reticulata is unclear. We administered an auditory oddball task while recording single-unit activity (35 units) and local field potentials (57 recordings) from the subthalamic nucleus and substantia nigra pars reticulata of 30 patients with Parkinson's disease undergoing deep brain stimulation surgery. We found tone-modulated and oddball-modulated units in both regions. Population activity differentiated oddball from standard trials from 200 ms to 1000 ms after the tone in both regions. In the substantia nigra, beta band activity in the local field potential was decreased following oddball tones. The oddball-related activity we observe may underlie attention, sensory prediction, or surprise-induced motor suppression.
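A minimal sketch of the beta-band analysis follows, assuming SciPy: band-pass the LFP, take the Hilbert envelope as instantaneous power, and compare post- versus pre-tone power on oddball trials; the 13-30 Hz band and 1 s windows are assumptions, not the paper's exact parameters.

```python
# Sketch of the beta-band analysis: band-pass the LFP, take the Hilbert
# envelope as instantaneous power, and compare post- vs. pre-tone power on
# oddball trials; the 13-30 Hz band and 1 s windows are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000
rng = np.random.default_rng(0)
lfp = rng.normal(size=fs * 60)                    # one recording (synthetic)

b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
beta_power = np.abs(hilbert(filtfilt(b, a, lfp))) ** 2

onsets = rng.integers(fs, fs * 59, size=40)       # oddball tone samples
post = np.array([beta_power[i:i + fs].mean() for i in onsets])
pre = np.array([beta_power[i - fs:i].mean() for i in onsets])
print(f"post/pre beta power ratio: {post.mean() / pre.mean():.2f}")
```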


Subject(s)
Acoustic Stimulation , Deep Brain Stimulation , Parkinson Disease , Pars Reticulata , Subthalamic Nucleus , Humans , Subthalamic Nucleus/physiology , Male , Middle Aged , Female , Parkinson Disease/physiopathology , Parkinson Disease/therapy , Aged , Pars Reticulata/physiology , Deep Brain Stimulation/methods , Acoustic Stimulation/methods , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Substantia Nigra/physiology , Adult