Results 1 - 20 of 16,699
1.
Brain Behav ; 14(6): e3571, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38841736

ABSTRACT

OBJECTIVE: This study aims to control all hearing thresholds, including extended high frequencies (EHFs), present stimuli of varying difficulty levels, and measure electroencephalography (EEG) and pupillometry responses to determine whether listening difficulty in tinnitus patients is effort- or fatigue-related. METHODS: Twenty-one chronic tinnitus patients and 26 matched healthy controls, all with normal pure-tone averages and symmetrical hearing thresholds, were included. Subjects were evaluated with 0.125-20 kHz pure-tone audiometry, the Montreal Cognitive Assessment (MoCA), the Tinnitus Handicap Inventory (THI), EEG, and pupillometry. RESULTS: Pupil dilatation and EEG alpha power during the "encoding" phase of the presented sentence were lower in tinnitus patients across all listening conditions (p < .05). In addition, there was no statistically significant relationship between the EEG or pupillometry components and THI or MoCA in any listening condition (p > .05). CONCLUSION: EEG and pupillometry results under various listening conditions indicate potential listening effort in tinnitus patients even when all frequencies, including EHFs, are controlled. We also suggest that pupillometry be interpreted with caution in autonomic nervous system-related conditions such as tinnitus.


Subject(s)
Electroencephalography , Pupil , Tinnitus , Humans , Tinnitus/physiopathology , Tinnitus/diagnosis , Male , Female , Electroencephalography/methods , Adult , Middle Aged , Pupil/physiology , Audiometry, Pure-Tone , Auditory Perception/physiology , Auditory Threshold/physiology
2.
Nat Commun ; 15(1): 4835, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844457

ABSTRACT

Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or whether acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, with vocalizations reliably classified solely from their spectro-temporal features across all 21 societies. Listeners unfamiliar with the cultures classify these vocalizations using spectro-temporal cues similar to those used by the machine-learning algorithm. Finally, spectro-temporal features are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation, a key feature of auditory neuronal tuning, accounts for a fundamental difference between these categories.
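
As a hedged illustration (not the authors' pipeline; `recordings`, `labels`, and the sampling rate are hypothetical placeholders), spectro-temporal modulation features of the kind described above can be approximated by a 2-D Fourier transform of a log-magnitude spectrogram and passed to an off-the-shelf classifier:

```python
# Sketch: spectro-temporal modulation power spectrum (MPS) features + classifier.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def mps_features(x, fs, nperseg=512, noverlap=384, keep=32):
    """2-D FFT of the log spectrogram -> central patch of modulation power."""
    _, _, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    logS = np.log(S + 1e-10)
    mps = np.abs(np.fft.fftshift(np.fft.fft2(logS - logS.mean())))
    c0, c1 = mps.shape[0] // 2, mps.shape[1] // 2
    return mps[c0 - keep // 2:c0 + keep // 2, c1 - keep // 2:c1 + keep // 2].ravel()

# recordings: list of 1-D waveforms; labels: 0 = speech, 1 = song (placeholders)
X = np.array([mps_features(x, fs=16000) for x in recordings])
y = np.array(labels)
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```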


Subject(s)
Machine Learning , Speech , Humans , Speech/physiology , Male , Female , Adult , Acoustics , Cross-Cultural Comparison , Auditory Perception/physiology , Sound Spectrography , Singing/physiology , Music , Middle Aged , Young Adult
3.
J Acoust Soc Am ; 155(6): 3639-3653, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38836771

ABSTRACT

The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. In this article, we extend least-squares deconvolution to the case of a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of the multi-response deconvolution increases significantly with the number of responses to be deconvolved, which restricts its applicability in practical situations. To alleviate this restriction, we propose performing the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of the multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least-squares estimate of the responses with a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.
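
The central least-squares step generalizes naturally to several response categories. Below is a minimal Python sketch of the idea (illustrative only; it is not the MATLAB/Octave supplementary code and omits the reduced-representation-space step): one design-matrix block is built per stimulus category from its onset samples, the blocks are stacked, and the overlapping responses are recovered jointly by least squares.

```python
# Sketch: multi-response least-squares deconvolution of overlapping evoked potentials.
import numpy as np

def design_block(n_samples, onsets, resp_len):
    """Column k marks which EEG samples receive sample k of this category's response."""
    X = np.zeros((n_samples, resp_len))
    for t0 in onsets:
        stop = min(resp_len, n_samples - t0)
        X[t0:t0 + stop, :stop] += np.eye(stop)
    return X

def deconvolve(eeg, onsets_by_category, resp_len):
    """eeg: 1-D recording; onsets_by_category: {name: array of onset samples}."""
    X = np.hstack([design_block(len(eeg), o, resp_len)
                   for o in onsets_by_category.values()])
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return {name: beta[i * resp_len:(i + 1) * resp_len]
            for i, name in enumerate(onsets_by_category)}
```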


Subject(s)
Acoustic Stimulation , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Humans , Acoustic Stimulation/methods , Male , Adult , Electroencephalography/methods , Female , Least-Squares Analysis , Young Adult , Signal Processing, Computer-Assisted , Reaction Time , Auditory Perception/physiology
4.
J Acoust Soc Am ; 155(6): 3742-3759, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38856312

ABSTRACT

Amplitude modulation (AM) of a masker reduces its masking of a simultaneously presented unmodulated pure-tone target, which likely involves dip listening. This study tested the idea that dip-listening efficiency may depend on stimulus context, i.e., the match in AM peakedness (AMP) between the masker and a precursor or postcursor stimulus, assuming a form of temporal pattern analysis. Masked thresholds were measured in normal-hearing listeners using Schroeder-phase harmonic complexes as maskers and precursors or postcursors. Experiment 1 showed threshold elevation (i.e., interference) when a flat cursor preceded or followed a peaked masker, suggesting proactive and retroactive temporal pattern analysis. Threshold decline (facilitation) was observed when the masker AMP was matched to the precursor, irrespective of stimulus AMP, suggesting only proactive processing. Subsequent experiments showed that both interference and facilitation (1) remained robust when a temporal gap was inserted between masker and cursor, (2) disappeared when an F0 difference was introduced between masker and precursor, and (3) decreased when the presentation level was reduced. These results suggest an important role of envelope regularity in dip listening, especially when masker and cursor are F0-matched and, therefore, form one perceptual stream. The reported effects seem to represent a time-domain variant of comodulation masking release.


Subject(s)
Acoustic Stimulation , Auditory Threshold , Perceptual Masking , Humans , Young Adult , Adult , Time Factors , Female , Male , Audiometry, Pure-Tone , Auditory Perception/physiology
5.
Ann N Y Acad Sci ; 1536(1): 167-176, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38829709

ABSTRACT

Time discrimination, a critical aspect of auditory perception, is influenced by numerous factors. Previous research has suggested that musical experience can restructure the brain, thereby enhancing time discrimination. However, this phenomenon remains underexplored. In this study, we seek to elucidate the enhancing effect of musical experience on time discrimination, utilizing both behavioral and electroencephalogram methodologies. Additionally, we aim to explore, through brain connectivity analysis, the role of increased connectivity in brain regions associated with auditory perception as a potential contributory factor to time discrimination induced by musical experience. The results show that the music-experienced group demonstrated higher behavioral accuracy, shorter reaction time, and shorter P3 and mismatch response latencies as compared to the control group. Furthermore, the music-experienced group had higher connectivity in the left temporal lobe. In summary, our research underscores the positive impact of musical experience on time discrimination and suggests that enhanced connectivity in brain regions linked to auditory perception may be responsible for this enhancement.


Subject(s)
Auditory Perception , Electroencephalography , Music , Humans , Music/psychology , Male , Auditory Perception/physiology , Female , Adult , Young Adult , Time Perception/physiology , Reaction Time/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Evoked Potentials, Auditory/physiology , Brain/physiology
6.
Proc Natl Acad Sci U S A ; 121(24): e2311570121, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38830095

ABSTRACT

Even a transient period of hearing loss during the developmental critical period can induce long-lasting deficits in temporal and spectral perception. These perceptual deficits correlate with speech perception in humans. In gerbils, these hearing loss-induced perceptual deficits are correlated with a reduction of both ionotropic GABAA and metabotropic GABAB receptor-mediated synaptic inhibition in auditory cortex, but most research on critical period plasticity has focused on GABAA receptors. Therefore, we developed viral vectors to express proteins that would upregulate gerbil postsynaptic inhibitory receptor subunits (GABAA, Gabra1; GABAB, Gabbr1b) in pyramidal neurons, and an enzyme that mediates GABA synthesis (GAD65) presynaptically in parvalbumin-expressing interneurons. A transient period of developmental hearing loss during the auditory critical period significantly impaired perceptual performance on two auditory tasks: amplitude modulation depth detection and spectral modulation depth detection. We then tested the capacity of each vector to restore perceptual performance on these auditory tasks. While both GABA receptor vectors increased the amplitude of cortical inhibitory postsynaptic potentials, only viral expression of postsynaptic GABAB receptors improved perceptual thresholds to control levels. Similarly, presynaptic GAD65 expression improved perceptual performance on spectral modulation detection. These findings suggest that recovering performance on auditory perceptual tasks depends on GABAB receptor-dependent transmission at the auditory cortex parvalbumin to pyramidal synapse and point to potential therapeutic targets for developmental sensory disorders.


Subject(s)
Auditory Cortex , Gerbillinae , Hearing Loss , Animals , Auditory Cortex/metabolism , Auditory Cortex/physiopathology , Hearing Loss/genetics , Hearing Loss/physiopathology , Receptors, GABA-B/metabolism , Receptors, GABA-B/genetics , Glutamate Decarboxylase/metabolism , Glutamate Decarboxylase/genetics , Receptors, GABA-A/metabolism , Receptors, GABA-A/genetics , Parvalbumins/metabolism , Parvalbumins/genetics , Auditory Perception/physiology , Pyramidal Cells/metabolism , Pyramidal Cells/physiology , Genetic Vectors/genetics
7.
eNeuro ; 11(6)2024 Jun.
Article in English | MEDLINE | ID: mdl-38834300

ABSTRACT

Following repetitive visual stimulation, post hoc phase analysis finds that visually evoked response magnitudes vary with the cortical alpha oscillation phase that temporally coincides with the sensory stimulus. This approach has not successfully revealed an alpha phase dependence for auditory evoked or induced responses. Here, we test the feasibility of tracking alpha with scalp electroencephalogram (EEG) recordings and play sounds phase-locked to individualized alpha phases in real time using a novel end-point corrected Hilbert transform (ecHT) algorithm implemented on a research device. Based on prior work, we hypothesize that sound-evoked and induced responses vary with the alpha phase at sound onset and the alpha phase that coincides with the early sound-evoked event-related potential (ERP) measured with EEG. Thus, we use each subject's individualized alpha frequency (IAF) and individual auditory ERP latency to define target trough and peak alpha phases that allow an early component of the auditory ERP to align with the estimated poststimulus peak and trough phases, respectively. With this closed-loop and individualized approach, we find opposing alpha phase-dependent effects on the auditory ERP and alpha oscillations that follow stimulus onset. Trough- and peak-phase-locked sounds result in distinct evoked and induced post-stimulus alpha level and frequency modulations. Though additional studies are needed to localize the sources underlying these phase-dependent effects, these results suggest a general principle of alpha phase dependence in sensory processing that includes the auditory system. Moreover, this study demonstrates the feasibility of using individualized neurophysiological indices to deliver automated, closed-loop, phase-locked auditory stimulation.
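
For orientation, a rough offline sketch of the kind of phase estimate involved is given below. It uses a plain band-pass filter plus Hilbert transform rather than the end-point corrected Hilbert transform (ecHT) named in the abstract, which specifically corrects the edge distortion that the plain transform suffers at the end of a short buffer; all parameter values are illustrative.

```python
# Sketch: estimate instantaneous alpha phase at the most recent sample of an EEG buffer.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_phase_at_end(buffer, fs, iaf, half_bw=2.0):
    """Band-pass around the individualized alpha frequency (IAF), then Hilbert phase."""
    b, a = butter(4, [iaf - half_bw, iaf + half_bw], btype="bandpass", fs=fs)
    analytic = hilbert(filtfilt(b, a, buffer))
    return np.angle(analytic)[-1]      # radians; 0 ~ peak, +/- pi ~ trough

# e.g., trigger the sound when the estimate approaches the individually defined target phase
```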


Subject(s)
Acoustic Stimulation , Alpha Rhythm , Electroencephalography , Evoked Potentials, Auditory , Humans , Acoustic Stimulation/methods , Evoked Potentials, Auditory/physiology , Male , Female , Electroencephalography/methods , Alpha Rhythm/physiology , Adult , Young Adult , Brain/physiology , Auditory Perception/physiology , Algorithms , Feasibility Studies
8.
J Neurodev Disord ; 16(1): 28, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831410

ABSTRACT

BACKGROUND: In the search for objective tools to quantify neural function in Rett Syndrome (RTT), which are crucial in the evaluation of therapeutic efficacy in clinical trials, recordings of sensory-perceptual functioning using event-related potential (ERP) approaches have emerged as potentially powerful tools. Considerable work points to highly anomalous auditory evoked potentials (AEPs) in RTT. However, an assumption of the typical signal-averaging method used to derive these measures is "stationarity" of the underlying responses - i.e., neural responses to each input are highly stereotyped. An alternate possibility is that responses to repeated stimuli are highly variable in RTT. If so, this will significantly impact the validity of assumptions about underlying neural dysfunction, and likely lead to overestimation of underlying neuropathology. To assess this possibility, analyses at the single-trial level assessing signal-to-noise ratios (SNR), inter-trial variability (ITV), and inter-trial phase coherence (ITPC) are necessary. METHODS: AEPs were recorded in response to simple 100 Hz tones from 18 RTT participants and 27 age-matched controls (ages 6-22 years). We applied standard AEP averaging, as well as measures of neuronal reliability at the single-trial level (i.e., SNR, ITV, ITPC). To separate signal-carrying components from non-neural noise sources, we also applied a denoising source separation (DSS) algorithm and then repeated the reliability measures. RESULTS: Substantially increased ITV, lower SNRs, and reduced ITPC were observed in auditory responses of RTT participants, supporting a "neural unreliability" account. Application of the DSS technique made it clear that non-neural noise sources contribute to overestimation of the extent of processing deficits in RTT. Post-DSS, ITV measures were substantially reduced, so much so that pre-DSS ITV differences between the RTT and typically developing (TD) populations were no longer detected. In the case of SNR and ITPC, DSS substantially improved these estimates in the RTT population, but robust differences between RTT and TD were still fully evident. CONCLUSIONS: To accurately represent the degree of neural dysfunction in RTT using the ERP technique, a consideration of response reliability at the single-trial level is highly advised. Non-neural sources of noise lead to overestimation of the degree of pathological processing in RTT, and denoising source separation techniques during signal processing substantially ameliorate this issue.
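
The three single-trial reliability measures named above have standard definitions; the following is a minimal sketch under the assumption of a single-channel epochs array (trials x time points), not the authors' analysis code:

```python
# Sketch: single-trial reliability measures for auditory evoked potentials.
import numpy as np
from scipy.signal import hilbert

def snr_db(epochs):
    """Power of the trial-averaged (evoked) response over power of the residuals, in dB."""
    evoked = epochs.mean(axis=0)
    residual = epochs - evoked
    return 10 * np.log10(np.mean(evoked ** 2) / np.mean(residual ** 2))

def inter_trial_variability(epochs):
    """Across-trial standard deviation, averaged over time points."""
    return epochs.std(axis=0).mean()

def inter_trial_phase_coherence(epochs):
    """Length of the mean unit phase vector across trials (1 = perfectly phase-locked)."""
    phase = np.angle(hilbert(epochs, axis=1))
    return np.abs(np.exp(1j * phase).mean(axis=0))   # one value per time point
```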


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Rett Syndrome , Humans , Rett Syndrome/physiopathology , Rett Syndrome/complications , Adolescent , Female , Evoked Potentials, Auditory/physiology , Child , Young Adult , Auditory Perception/physiology , Reproducibility of Results , Acoustic Stimulation , Male , Signal-To-Noise Ratio , Adult
9.
Proc Natl Acad Sci U S A ; 121(25): e2405588121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38861607

ABSTRACT

Many animals can extract useful information from the vocalizations of other species. Neuroimaging studies have evidenced areas sensitive to conspecific vocalizations in the cerebral cortex of primates, but how these areas process heterospecific vocalizations remains unclear. Using fMRI-guided electrophysiology, we recorded the spiking activity of individual neurons in the anterior temporal voice patches of two macaques while they listened to complex sounds including vocalizations from several species. In addition to cells selective for conspecific macaque vocalizations, we identified an unsuspected subpopulation of neurons with strong selectivity for human voice, not merely explained by spectral or temporal structure of the sounds. The auditory representational geometry implemented by these neurons was strongly related to that measured in the human voice areas with neuroimaging and only weakly to low-level acoustical structure. These findings provide new insights into the neural mechanisms involved in auditory expertise and the evolution of communication systems in primates.


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Neurons , Vocalization, Animal , Voice , Animals , Humans , Neurons/physiology , Voice/physiology , Magnetic Resonance Imaging/methods , Vocalization, Animal/physiology , Auditory Perception/physiology , Male , Macaca mulatta , Brain/physiology , Acoustic Stimulation , Brain Mapping/methods
10.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38879756

ABSTRACT

Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and become informative about which correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space and their behavior was assessed using a detection/localization task. Animals showed enhanced performance to stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience was provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience was provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two (conflicting) multisensory principles can be implemented by cross-modal experience on opposite sides of space even within the same animal.


Subject(s)
Acoustic Stimulation , Auditory Perception , Brain , Photic Stimulation , Visual Perception , Animals , Cats , Auditory Perception/physiology , Visual Perception/physiology , Photic Stimulation/methods , Brain/physiology , Brain/growth & development , Male , Female , Behavior, Animal/physiology
11.
Med Sci Monit ; 30: e944090, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38859565

ABSTRACT

BACKGROUND The dichotic digit test (DDT) is one of the tests for the behavioral assessment of central auditory processing. Dichotic listening tests are sensitive ways of assessing cortical structures, the corpus callosum, and binaural integration mechanisms, showing strong correlations with learning difficulties. The DDT is available in a number of languages, each appropriate for the subject's native language; however, no Italian version has previously existed. The goal of this study was to develop an Italian version of the one-pair dichotic digit test (DDT-IT) and analyze results in 39 normal-hearing Italian children aged 11 to 13 years. We used 2 conditions of presentation, free recall and directed attention (left or right ear), and looked at possible effects of sex and ear side. MATERIAL AND METHODS This study involved 3 steps: creation of the stimuli, checking their quality with Italian speakers, and assessment of the DDT-IT in our subject pool. The study involved 39 children (26 girls and 13 boys), aged 11-13 years. All participants underwent basic audiological assessment, auditory brainstem response testing, and then the DDT-IT. RESULTS Results under free recall and directed attention conditions were similar for right and left ears, and there were no sex or age effects. CONCLUSIONS The Italian version of the DDT (DDT-IT) has been developed and its performance on 39 normal-hearing Italian children was assessed. We found there were no age or sex effects for either the free recall condition or the directed attention condition.


Subject(s)
Dichotic Listening Tests , Humans , Female , Male , Child , Adolescent , Dichotic Listening Tests/methods , Italy , Language , Hearing/physiology , Auditory Perception/physiology , Attention/physiology
12.
J Vis ; 24(5): 16, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38819806

ABSTRACT

Multistable perception occurs in all sensory modalities, and there is ongoing theoretical debate about whether there are overarching mechanisms driving multistability across modalities. Here we study whether multistable percepts are coupled across vision and audition on a moment-by-moment basis. To assess perception simultaneously for both modalities without provoking a dual-task situation, we query auditory perception by direct report, while measuring visual perception indirectly via eye movements. A support-vector-machine (SVM)-based classifier allows us to decode visual perception from the eye-tracking data on a moment-by-moment basis. For each timepoint, we compare visual percept (SVM output) and auditory percept (report) and quantify the co-occurrence of integrated (one-object) or segregated (two-objects) interpretations in the two modalities. Our results show an above-chance coupling of auditory and visual perceptual interpretations. By titrating stimulus parameters toward an approximately symmetric distribution of integrated and segregated percepts for each modality and individual, we minimize the amount of coupling expected by chance. Because of the nature of our task, we can rule out that the coupling stems from postperceptual levels (i.e., decision or response interference). Our results thus indicate moment-by-moment perceptual coupling in the resolution of visual and auditory multistability, lending support to theories that postulate joint mechanisms for multistable perception across the senses.
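
As a hedged sketch of the decoding step (the feature set, windowing, and validation scheme here are placeholders, not the study's exact pipeline), an SVM can be trained to map windowed eye-tracking features onto the perceptual interpretation and then queried for every time point:

```python
# Sketch: SVM decoding of the visual percept (integrated vs. segregated) from eye movements.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict

# X: (n_windows, n_features) eye-movement features per time window (placeholder)
# y: (n_windows,) percept labels from calibration/report trials (placeholder)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
decoded_visual = cross_val_predict(clf, X, y, cv=5)

# The decoded visual percept can then be compared, window by window, with the
# simultaneously reported auditory percept to quantify cross-modal coupling.
```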


Subject(s)
Auditory Perception , Photic Stimulation , Visual Perception , Humans , Auditory Perception/physiology , Visual Perception/physiology , Adult , Male , Female , Photic Stimulation/methods , Young Adult , Eye Movements/physiology , Acoustic Stimulation/methods
13.
Cognition ; 249: 105830, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38810426

ABSTRACT

Prior studies have extensively examined modality-general representation of affect across various sensory modalities, particularly focusing on auditory and visual stimuli. However, little research has explored the modality-general representation of affect between gustatory and other sensory modalities. This study aimed to investigate whether the affective responses induced by tastes and musical pieces could be predicted within and across modalities. For each modality, eight stimuli were chosen based on four basic taste conditions (sweet, bitter, sour, and salty). Participants rated their responses to each stimulus using both taste and emotion scales. Multivariate analyses, including multidimensional scaling and classification analysis, were performed. The findings revealed that auditory and gustatory stimuli in the sweet category were associated with positive valence, whereas those from the other taste categories were linked to negative valence. Additionally, auditory and gustatory stimuli in the sour taste category were linked to high arousal, whereas stimuli in the bitter taste category were associated with low arousal. This study revealed the potential mapping of gustatory and auditory stimuli onto core affect space in everyday experiences. Moreover, it demonstrated that emotions evoked by taste and music could be predicted across modalities, supporting a modality-general representation of affect.
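
The cross-modal prediction logic amounts to training a classifier on affect ratings from one modality and testing it on the other; a minimal sketch with placeholder arrays (not the study's analysis code):

```python
# Sketch: cross-modal classification of taste category from affect ratings.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X_taste, X_music: (n_trials, n_rating_dims) valence/arousal/emotion ratings (placeholders)
# y_taste, y_music: taste-category labels (sweet, bitter, sour, salty)
clf = LinearDiscriminantAnalysis().fit(X_taste, y_taste)   # train on gustatory trials
cross_modal_accuracy = clf.score(X_music, y_music)         # test on musical trials
```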


Subject(s)
Music , Taste Perception , Humans , Female , Male , Young Adult , Adult , Taste Perception/physiology , Affect/physiology , Auditory Perception/physiology , Taste/physiology , Acoustic Stimulation , Emotions/physiology , Arousal/physiology , Adolescent
14.
Behav Res Methods ; 56(4): 3757-3778, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38702502

ABSTRACT

Music is omnipresent among human cultures and moves us both physically and emotionally. The perception of emotions in music is influenced by both psychophysical and cultural factors. Chinese traditional instrumental music differs significantly from Western music in cultural origin and music elements. However, previous studies on music emotion perception are based almost exclusively on Western music. Therefore, the construction of a dataset of Chinese traditional instrumental music is important for exploring the perception of music emotions in the context of Chinese culture. The present dataset included 273 10-second naturalistic music excerpts. We provided rating data for each excerpt on ten variables: familiarity, dimensional emotions (valence and arousal), and discrete emotions (anger, gentleness, happiness, peacefulness, sadness, solemnness, and transcendence). The excerpts were rated by a total of 168 participants on a seven-point Likert scale for the ten variables. Three labels for the excerpts were obtained: familiarity, discrete emotion, and cluster. Our dataset demonstrates good reliability, and we believe it could contribute to cross-cultural studies on emotional responses to music.


Subject(s)
Emotions , Music , Humans , Music/psychology , Emotions/physiology , Female , Male , Adult , China , Young Adult , Auditory Perception/physiology , Reproducibility of Results , Recognition, Psychology/physiology , Arousal/physiology , East Asian People
15.
Curr Biol ; 34(11): 2509-2516.e3, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38744283

ABSTRACT

Acoustic cues are crucial to communication, navigation, and foraging in many animals, which hence face the problem of detecting and discriminating these cues in fluctuating noise levels from natural or anthropogenic sources. Such auditory dynamics are perhaps most extreme for echolocating bats that navigate and hunt prey on the wing in darkness by listening for weak echo returns from their powerful calls in complex, self-generated umwelts [1, 2]. Due to high absorption of ultrasound in air and fast flight speeds, bats operate with short prey detection ranges and dynamic sensory volumes [3], leading us to hypothesize that bats employ superfast vocal-motor adjustments to rapidly changing sensory scenes. To test this hypothesis, we investigated the onset and offset times and magnitude of the Lombard response in free-flying echolocating greater mouse-eared bats exposed to onsets of intense constant or duty-cycled masking noise during a landing task. We found that the bats invoked a bandwidth-dependent Lombard response of 0.1-0.2 dB per dB increase in noise, with very short delay and relapse times of 20 ms in response to onsets and termination of duty-cycled noise. In concert with the absence of call time-locking to noise-free periods, these results show that free-flying bats exhibit a superfast, but hard-wired, vocal-motor response to increased noise levels. We posit that this reflex is mediated by simple closed-loop audio-motor feedback circuits that operate independently of wingbeat and respiration cycles to allow for rapid adjustments to the highly dynamic auditory scenes encountered by these small predators.


Subject(s)
Chiroptera , Echolocation , Flight, Animal , Animals , Chiroptera/physiology , Echolocation/physiology , Flight, Animal/physiology , Noise , Auditory Perception/physiology , Male , Female , Vocalization, Animal/physiology
16.
Multisens Res ; 37(3): 261-273, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38724023

ABSTRACT

Two types of disruptive effects of irrelevant sound on visual tasks have been reported: the changing-state effect and the deviation effect. The idea that the deviation effect, which arises from attentional capture, is independent of task requirements, whereas the changing-state effect is specific to tasks that require serial processing, has been examined by comparing tasks that do or do not require serial-order processing. While many previous studies used the missing-item task as the nonserial task, it is unclear whether other cognitive tasks lead to similar results regarding the different task specificity of both effects. Kattner et al. (Memory and Cognition, 2023) used the mental-arithmetic task as the nonserial task, and failed to demonstrate the deviation effect. However, there were several procedural factors that could account for the lack of deviation effect, such as differences in design and procedures (e.g., conducted online, intermixed conditions). In the present study, we aimed to investigate whether the deviation effect could be observed in both the serial-recall and mental-arithmetic tasks when these procedural factors were modified. We found strong evidence of the deviation effect in both the serial-recall and the mental-arithmetic tasks when stimulus presentation and experimental design were aligned with previous studies that demonstrated the deviation effect (e.g., conducted in-person, blockwise presentation of sound, etc.). The results support the idea that the deviation effect is not task-specific.


Subject(s)
Acoustic Stimulation , Attention , Auditory Perception , Humans , Female , Attention/physiology , Male , Young Adult , Auditory Perception/physiology , Adult , Mental Recall/physiology , Sound , Reaction Time/physiology , Visual Perception/physiology
17.
Nat Commun ; 15(1): 4313, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773109

ABSTRACT

Our brain is constantly extracting, predicting, and recognising key spatiotemporal features of the physical world in order to survive. While neural processing of visuospatial patterns has been extensively studied, the hierarchical brain mechanisms underlying conscious recognition of auditory sequences and the associated prediction errors remain elusive. Using magnetoencephalography (MEG), we describe the brain functioning of 83 participants during recognition of previously memorised musical sequences and systematic variations. The results show feedforward connections originating from auditory cortices, and extending to the hippocampus, anterior cingulate gyrus, and medial cingulate gyrus. Simultaneously, we observe backward connections operating in the opposite direction. Throughout the sequences, the hippocampus and cingulate gyrus maintain the same hierarchical level, except for the final tone, where the cingulate gyrus assumes the top position within the hierarchy. The evoked responses of memorised sequences and variations engage the same hierarchical brain network but systematically differ in terms of temporal dynamics, strength, and polarity. Furthermore, induced-response analysis shows that alpha and beta power is stronger for the variations, while gamma power is enhanced for the memorised sequences. This study expands on the predictive coding theory by providing quantitative evidence of hierarchical brain mechanisms during conscious memory and predictive processing of auditory sequences.


Subject(s)
Auditory Cortex , Auditory Perception , Magnetoencephalography , Humans , Male , Female , Adult , Auditory Perception/physiology , Young Adult , Auditory Cortex/physiology , Brain/physiology , Acoustic Stimulation , Brain Mapping , Music , Gyrus Cinguli/physiology , Memory/physiology , Hippocampus/physiology , Recognition, Psychology/physiology
18.
Psychiatry Res Neuroimaging ; 341: 111824, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38754348

ABSTRACT

Auditory verbal hallucinations (AVHs) involve perceptions, often voices, in the absence of external stimuli, and rank among the most common symptoms of schizophrenia. Metrical stress evaluation requires determining the stronger syllable in a word and therefore engages auditory imagery, making it of interest for investigating hallucinations in schizophrenia. The current functional magnetic resonance imaging study provides an updated whole-brain network analysis of a previously published study on metrical stress, which showed reduced directed connections between Broca's and Wernicke's regions of interest (ROIs) for hallucinations. Three functional brain networks were extracted, with the language network (LN) showing an earlier and shallower blood-oxygen-level dependent (BOLD) response for hallucinating patients in the auditory imagery condition only (the reduced activation for hallucinations observed in the original ROI-based results was not specific to the imagery condition). This suggests that hypoactivation of the LN during internal auditory imagery may contribute to the propensity to hallucinate. This accords with cognitive accounts holding that an impaired balance between internal and external linguistic processes (underactivity in networks involved in internal auditory imagery and overactivity in networks involved in speech perception) underlies hallucinations, advancing our understanding of their biological underpinnings.


Subject(s)
Hallucinations , Magnetic Resonance Imaging , Schizophrenia , Humans , Hallucinations/physiopathology , Hallucinations/diagnostic imaging , Hallucinations/psychology , Hallucinations/etiology , Schizophrenia/physiopathology , Schizophrenia/diagnostic imaging , Schizophrenia/complications , Adult , Male , Female , Imagination/physiology , Language , Brain Mapping/methods , Nerve Net/diagnostic imaging , Nerve Net/physiopathology , Brain/physiopathology , Brain/diagnostic imaging , Auditory Perception/physiology
19.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38702187

ABSTRACT

Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that MMN is composed of multiple subcomponents, each responding to different levels of temporal regularities. To probe the hypothesized subcomponents in MMN, we record human electroencephalography during an auditory local-global oddball paradigm where the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictabilities at two hierarchical levels. We find that the size of MMN is correlated with both probabilities and the spatiotemporal structure of MMN can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, with one peaking early in the central-frontal area and the other late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors that are tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
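
For reference, the MMN itself is conventionally quantified as a deviant-minus-standard difference wave; a minimal sketch with placeholder epoch arrays (the paper's subcomponent decomposition and predictive-coding model are not reproduced here):

```python
# Sketch: mismatch negativity (MMN) as a deviant-minus-standard difference wave.
import numpy as np

def mmn_amplitude(deviant_epochs, standard_epochs, times, window=(0.10, 0.25)):
    """Epochs: (n_trials, n_times) at a fronto-central channel; times in seconds."""
    diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff_wave[mask].mean()   # mean amplitude in the MMN latency window
```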


Subject(s)
Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Humans , Male , Female , Electroencephalography/methods , Young Adult , Adult , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain/physiology , Brain Mapping , Adolescent
20.
J Neural Eng ; 21(3)2024 May 22.
Article in English | MEDLINE | ID: mdl-38729132

ABSTRACT

Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials where other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI listening to competing talkers amidst background noise. Main results. Using 1 s classification windows, DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area under the curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, under the inter-trial strategy. Under the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. Our DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications. Conclusion. Our DCNN models successfully addressed three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance. Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also promoting further exploration of alternative DL architectures and their potential constraints.
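
A hedged sketch of a compact convolutional model over 1 s EEG windows is shown below (layer sizes, channel count, and sampling rate are illustrative assumptions, not the paper's DCNN architecture):

```python
# Sketch: compact CNN classifying 1 s EEG windows (n_channels x n_samples).
import torch
import torch.nn as nn

class EEGWindowCNN(nn.Module):
    def __init__(self, n_channels=64, n_samples=128, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)),   # temporal filtering
            nn.BatchNorm2d(8), nn.ELU(),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),           # spatial filtering
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)), nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(16 * (n_samples // 4), n_classes)

    def forward(self, x):                   # x: (batch, 1, n_channels, n_samples)
        return self.classifier(self.features(x).flatten(start_dim=1))

model = EEGWindowCNN()
logits = model(torch.randn(4, 1, 64, 128))  # four 1 s windows at an assumed 128 Hz
```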


Subject(s)
Attention , Auditory Perception , Deep Learning , Electroencephalography , Hearing Loss , Humans , Attention/physiology , Female , Electroencephalography/methods , Male , Middle Aged , Hearing Loss/physiopathology , Hearing Loss/rehabilitation , Hearing Loss/diagnosis , Aged , Auditory Perception/physiology , Noise , Adult , Hearing Aids , Speech Perception/physiology , Neural Networks, Computer