Results 1 - 20 of 16,671
1.
J Acoust Soc Am ; 155(6): 3639-3653, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38836771

ABSTRACT

The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. In this article, we extend least squares deconvolution to the case of a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of multi-response deconvolution increases significantly with the number of responses to be deconvolved, which restricts its applicability in practice. To alleviate this restriction, we propose performing the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least squares estimate of the responses at a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.
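The article's supplementary code is in MATLAB/Octave; as an illustration of the underlying idea only, here is a minimal Python sketch of single-response least squares deconvolution (the multi-response model described above would stack one block of columns per stimulus category). The signal, onsets, and dimensions below are invented for the demonstration.

```python
import numpy as np

def ls_deconvolve(eeg, onsets, resp_len):
    """Least-squares estimate of one evoked response from overlapping sweeps.

    eeg: 1-D recording; onsets: stimulus sample indices; resp_len: length of
    the response to recover, in samples. Builds the convolution matrix M
    (one shifted identity block per stimulus) and solves min ||eeg - M r||^2.
    """
    n = len(eeg)
    M = np.zeros((n, resp_len))
    for t0 in onsets:
        last = min(resp_len, n - t0)
        M[t0:t0 + last, :last] += np.eye(last)
    r, *_ = np.linalg.lstsq(M, eeg, rcond=None)
    return r

# Synthetic check: jittered ISIs (20-40 samples) shorter than the
# 50-sample response, so individual sweeps overlap and plain averaging
# would smear adjacent responses together.
rng = np.random.default_rng(0)
true_r = np.hanning(50)
onsets = np.cumsum(rng.integers(20, 41, size=40))
onsets = onsets[onsets < 900]
eeg = np.zeros(1000)
for t0 in onsets:
    eeg[t0:t0 + 50] += true_r
eeg += 0.01 * rng.standard_normal(1000)
est = ls_deconvolve(eeg, onsets, 50)
```

Note that the jittered inter-stimulus intervals are what make the least-squares problem well-conditioned; with strictly periodic stimulation the overlapping columns of M become nearly collinear.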


Subject(s)
Acoustic Stimulation , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Humans , Acoustic Stimulation/methods , Male , Adult , Electroencephalography/methods , Female , Least-Squares Analysis , Young Adult , Signal Processing, Computer-Assisted , Reaction Time , Auditory Perception/physiology
2.
Brain Behav ; 14(6): e3571, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38841736

ABSTRACT

OBJECTIVE: This study aims to control for all hearing thresholds, including extended high frequencies (EHFs), present stimuli of varying difficulty levels, and measure electroencephalography (EEG) and pupillometry responses to determine whether listening difficulty in tinnitus patients is effort- or fatigue-related. METHODS: Twenty-one chronic tinnitus patients and 26 matched healthy controls with normal pure-tone averages and symmetrical hearing thresholds were included. Subjects were evaluated with 0.125-20 kHz pure-tone audiometry, the Montreal Cognitive Assessment (MoCA), the Tinnitus Handicap Inventory (THI), EEG, and pupillometry. RESULTS: Pupil dilatation and EEG alpha power during the "encoding" phase of the presented sentence were lower in tinnitus patients in all listening conditions (p < .05). There was also no statistically significant relationship between the EEG and pupillometry components and THI or MoCA scores in any listening condition (p > .05). CONCLUSION: EEG and pupillometry results under various listening conditions indicate potential listening effort in tinnitus patients even when all frequencies, including EHFs, are controlled. We also suggest that pupillometry should be interpreted with caution in autonomic nervous system-related conditions such as tinnitus.


Subject(s)
Electroencephalography , Pupil , Tinnitus , Humans , Tinnitus/physiopathology , Tinnitus/diagnosis , Male , Female , Electroencephalography/methods , Adult , Middle Aged , Pupil/physiology , Audiometry, Pure-Tone , Auditory Perception/physiology , Auditory Threshold/physiology
3.
Nat Commun ; 15(1): 4835, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844457

ABSTRACT

Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, vocalizations being reliably classified solely from their spectro-temporal features across all 21 societies. Listeners unfamiliar with the cultures classify these vocalizations using similar spectro-temporal cues as the machine learning algorithm. Finally, spectro-temporal features are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation-a key feature of auditory neuronal tuning-accounts for a fundamental difference between these categories.
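The spectro-temporal modulation features described above can be approximated by taking a 2-D FFT of a log spectrogram. The following Python sketch is a simplified stand-in for the authors' analysis, not their pipeline; the window sizes and the collapsing over the spectral-modulation axis are assumptions, verified here on a synthetic amplitude-modulated tone.

```python
import numpy as np
from scipy.signal import spectrogram

def temporal_modulation_profile(x, fs, nperseg=256, noverlap=192):
    """Temporal-modulation energy from the 2-D FFT of a log spectrogram.

    Returns (rates_hz, energy): how strongly the log spectrogram fluctuates
    at each temporal-modulation rate, summed over the spectral-modulation
    axis. A simplified stand-in for full spectro-temporal modulation
    analysis.
    """
    f, t, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    logS = np.log(S + 1e-12)
    logS -= logS.mean()                      # remove the global DC offset
    mps = np.abs(np.fft.fft2(logS)) ** 2     # 2-D modulation power
    frame_rate = 1.0 / (t[1] - t[0])         # spectrogram frames per second
    rates = np.fft.fftfreq(mps.shape[1], d=1.0 / frame_rate)
    return rates, mps.sum(axis=0)

# A 1-kHz tone amplitude-modulated at 4 Hz should peak near a 4-Hz rate.
fs = 16000
tt = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1000 * tt) * (1.0 + 0.9 * np.sin(2 * np.pi * 4 * tt))
rates, energy = temporal_modulation_profile(x, fs)
mask = rates > 1.0                           # ignore DC and negative rates
peak_rate = rates[mask][np.argmax(energy[mask])]
```

Speech tends to concentrate energy at faster temporal modulations than song, which is the kind of separation the classifier in the abstract exploits.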


Subject(s)
Machine Learning , Speech , Humans , Speech/physiology , Male , Female , Adult , Acoustics , Cross-Cultural Comparison , Auditory Perception/physiology , Sound Spectrography , Singing/physiology , Music , Middle Aged , Young Adult
4.
Proc Natl Acad Sci U S A ; 121(24): e2311570121, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38830095

ABSTRACT

Even a transient period of hearing loss during the developmental critical period can induce long-lasting deficits in temporal and spectral perception. These perceptual deficits correlate with speech perception in humans. In gerbils, these hearing loss-induced perceptual deficits are correlated with a reduction of both ionotropic GABAA and metabotropic GABAB receptor-mediated synaptic inhibition in auditory cortex, but most research on critical period plasticity has focused on GABAA receptors. Therefore, we developed viral vectors to express proteins that would upregulate gerbil postsynaptic inhibitory receptor subunits (GABAA, Gabra1; GABAB, Gabbr1b) in pyramidal neurons, and an enzyme that mediates GABA synthesis (GAD65) presynaptically in parvalbumin-expressing interneurons. A transient period of developmental hearing loss during the auditory critical period significantly impaired perceptual performance on two auditory tasks: amplitude modulation depth detection and spectral modulation depth detection. We then tested the capacity of each vector to restore perceptual performance on these auditory tasks. While both GABA receptor vectors increased the amplitude of cortical inhibitory postsynaptic potentials, only viral expression of postsynaptic GABAB receptors improved perceptual thresholds to control levels. Similarly, presynaptic GAD65 expression improved perceptual performance on spectral modulation detection. These findings suggest that recovering performance on auditory perceptual tasks depends on GABAB receptor-dependent transmission at the auditory cortex parvalbumin to pyramidal synapse and point to potential therapeutic targets for developmental sensory disorders.


Subject(s)
Auditory Cortex , Gerbillinae , Hearing Loss , Animals , Auditory Cortex/metabolism , Auditory Cortex/physiopathology , Hearing Loss/genetics , Hearing Loss/physiopathology , Receptors, GABA-B/metabolism , Receptors, GABA-B/genetics , Glutamate Decarboxylase/metabolism , Glutamate Decarboxylase/genetics , Receptors, GABA-A/metabolism , Receptors, GABA-A/genetics , Parvalbumins/metabolism , Parvalbumins/genetics , Auditory Perception/physiology , Pyramidal Cells/metabolism , Pyramidal Cells/physiology , Genetic Vectors/genetics
5.
J Neurodev Disord ; 16(1): 28, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38831410

ABSTRACT

BACKGROUND: In the search for objective tools to quantify neural function in Rett Syndrome (RTT), which are crucial in the evaluation of therapeutic efficacy in clinical trials, recordings of sensory-perceptual functioning using event-related potential (ERP) approaches have emerged as potentially powerful tools. Considerable work points to highly anomalous auditory evoked potentials (AEPs) in RTT. However, an assumption of the typical signal-averaging method used to derive these measures is "stationarity" of the underlying responses, i.e. that neural responses to each input are highly stereotyped. An alternate possibility is that responses to repeated stimuli are highly variable in RTT. If so, this would significantly impact the validity of assumptions about the underlying neural dysfunction and likely lead to overestimation of the underlying neuropathology. To assess this possibility, analyses at the single-trial level assessing signal-to-noise ratios (SNR), inter-trial variability (ITV), and inter-trial phase coherence (ITPC) are necessary. METHODS: AEPs were recorded to simple 100 Hz tones from 18 RTT and 27 age-matched typically developing (TD) controls (ages: 6-22 years). We applied standard AEP averaging, as well as measures of neuronal reliability at the single-trial level (i.e. SNR, ITV, ITPC). To separate signal-carrying components from non-neural noise sources, we also applied a denoising source separation (DSS) algorithm and then repeated the reliability measures. RESULTS: Substantially increased ITV, lower SNRs, and reduced ITPC were observed in the auditory responses of RTT participants, supporting a "neural unreliability" account. Application of the DSS technique made it clear that non-neural noise sources contribute to overestimation of the extent of processing deficits in RTT. Post-DSS, ITV measures were substantially reduced, so much so that pre-DSS ITV differences between the RTT and TD populations were no longer detected. In the case of SNR and ITPC, DSS substantially improved these estimates in the RTT population, but robust differences between RTT and TD remained fully evident. CONCLUSIONS: To accurately represent the degree of neural dysfunction in RTT using the ERP technique, consideration of response reliability at the single-trial level is highly advised. Non-neural sources of noise lead to overestimation of the degree of pathological processing in RTT, and denoising source separation during signal processing substantially ameliorates this issue.


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Rett Syndrome , Humans , Rett Syndrome/physiopathology , Rett Syndrome/complications , Adolescent , Female , Evoked Potentials, Auditory/physiology , Child , Young Adult , Auditory Perception/physiology , Reproducibility of Results , Acoustic Stimulation , Male , Signal-To-Noise Ratio , Adult
6.
J Neurodev Disord ; 16(1): 24, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720271

ABSTRACT

BACKGROUND: Autism spectrum disorder (ASD) is currently diagnosed in approximately 1 in 44 children in the United States, based on a wide array of symptoms, including sensory dysfunction and abnormal language development. Boys are diagnosed ~3.8 times more frequently than girls. Auditory temporal processing is crucial for speech recognition and language development. Abnormal development of temporal processing may account for ASD language impairments. Sex differences in the development of temporal processing may underlie the differences in language outcomes in male and female children with ASD. Understanding the mechanisms of potential sex differences in temporal processing requires a preclinical model. However, no studies have addressed sex differences in temporal processing across development in any animal model of ASD. METHODS: To fill this major gap, we compared the development of auditory temporal processing in male and female wildtype (WT) and Fmr1 knock-out (KO) mice, a model of Fragile X Syndrome (FXS), a leading genetic cause of ASD-associated behaviors. Using epidural screw electrodes, we recorded auditory event-related potentials (ERPs) and assessed auditory temporal processing with a gap-in-noise auditory steady-state response (ASSR) paradigm at young (postnatal day (p)21 and p30) and adult (p60) ages from both auditory and frontal cortices of awake, freely moving mice. RESULTS: ERP amplitudes were enhanced in both sexes of Fmr1 KO mice across development compared to WT counterparts, with greater enhancement in adult female than adult male KO mice. Gap-ASSR deficits were seen in the frontal, but not auditory, cortex in early development (p21) in female KO mice. Unlike male KO mice, female KO mice showed WT-like temporal processing at p30. There were no temporal processing deficits in adult mice of either sex.
CONCLUSIONS: These results show a sex difference in the developmental trajectories of temporal processing and hypersensitive responses in Fmr1 KO mice. Male KO mice show slower maturation of temporal processing than females. Female KO mice show stronger hypersensitive responses than males later in development. The differences in maturation rates of temporal processing and hypersensitive responses during various critical periods of development may lead to sex differences in language function, arousal and anxiety in FXS.


Subject(s)
Disease Models, Animal , Evoked Potentials, Auditory , Fragile X Mental Retardation Protein , Fragile X Syndrome , Mice, Knockout , Sex Characteristics , Animals , Fragile X Syndrome/physiopathology , Female , Male , Mice , Evoked Potentials, Auditory/physiology , Fragile X Mental Retardation Protein/genetics , Auditory Perception/physiology , Autism Spectrum Disorder/physiopathology , Auditory Cortex/physiopathology , Mice, Inbred C57BL
7.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38727569

ABSTRACT

Bimodal stimulation, a cochlear implant (CI) in one ear and a hearing aid (HA) in the other, provides highly asymmetrical inputs. To understand how asymmetry affects perception and memory, forward and backward digit spans were measured in nine bimodal listeners. Spans were unchanged from monotic to diotic presentation; there was an average two-digit decrease for dichotic presentation with some extreme cases of decreases to zero spans. Interaurally asymmetrical decreases were not predicted based on the device or better-functioning ear. Therefore, bimodal listeners can demonstrate a strong ear dominance, diminishing memory recall dichotically even when perception was intact monaurally.


Subject(s)
Cochlear Implants , Humans , Middle Aged , Aged , Male , Female , Dichotic Listening Tests , Adult , Auditory Perception/physiology , Hearing Aids
8.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38700440

ABSTRACT

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals: such a limit could drive correlated individual differences in auditory change detection and visual selective attention across people, as well as within-subject competition between the two, whereby task-based increases in visual attention demand reduce auditory change detection sensitivity.


Subject(s)
Attention , Auditory Perception , Electroencephalography , Visual Perception , Humans , Attention/physiology , Male , Female , Young Adult , Adult , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Evoked Potentials/physiology , Brain/physiology , Adolescent
9.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38702194

ABSTRACT

Elicited upon violation of regularity in stimulus presentation, mismatch negativity (MMN) reflects the brain's ability to perform automatic comparisons between consecutive stimuli and provides an electrophysiological index of sensory error detection, whereas P300 is associated with cognitive processes such as updating of working memory. To date, there has been extensive research on the roles of MMN and P300 individually, because of their potential to be used as clinical markers of consciousness and attention, respectively. Here, we explore, with an unsupervised and rigorous source estimation approach, the underlying cortical generators of MMN and P300 in the context of prediction error propagation along the hierarchies of brain information processing in healthy human participants. Existing methods of characterizing the two ERPs involve only approximate estimations of their amplitudes and latencies based on specific sensors of interest. Our objective is twofold: first, we introduce a novel data-driven unsupervised approach to compute latencies and amplitudes of ERP components accurately on an individual-subject basis and reconfirm earlier findings. Second, we demonstrate that in multisensory environments, MMN generators seem to reflect a significant overlap of "modality-specific" and "modality-independent" information processing, while P300 generators mark a shift toward completely "modality-independent" processing. Advancing the earlier understanding that multisensory contexts speed up early sensory processing, our EEG experiments reveal that this temporal facilitation extends even to the later components of prediction error processing. Such knowledge can be of value to clinical research characterizing the key developmental stages of lifespan aging, schizophrenia, and depression.


Subject(s)
Electroencephalography , Event-Related Potentials, P300 , Humans , Male , Female , Adult , Electroencephalography/methods , Young Adult , Event-Related Potentials, P300/physiology , Auditory Perception/physiology , Cerebral Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology
10.
Brain Behav ; 14(5): e3517, 2024 May.
Article in English | MEDLINE | ID: mdl-38702896

ABSTRACT

INTRODUCTION: Attention and working memory are key cognitive functions that allow us to select and maintain information in mind for a short time, and they are essential for daily life and, in particular, for learning and academic performance. It has been shown that musical training can improve working memory performance, but it is still unclear if and how the neural mechanisms of working memory, and particularly attention, are implicated in this process. In this work, we aimed to identify the oscillatory signature of bimodal attention and working memory that contributes to improved working memory in musically trained children. MATERIALS AND METHODS: We recruited children with and without musical training and asked them to complete a bimodal (auditory/visual) attention and working memory task while their brain activity was measured using electroencephalography. Behavioral, time-frequency, and source reconstruction analyses were performed. RESULTS: Overall, musically trained children performed better on the task than children without musical training. When comparing the groups, we found modulations in the alpha band before and at the beginning of stimulus onset in frontal and parietal regions; these correlated with correct responses to the attended modality. Moreover, during the final phase of stimulus presentation, we found modulations in the theta and alpha bands in the left frontal and right parietal regions that correlated with correct responses independent of attention condition. CONCLUSIONS: These results suggest that musically trained children have improved neuronal mechanisms for both attention allocation and memory encoding. Our results can be important for developing interventions for people with attention and working memory difficulties.


Subject(s)
Alpha Rhythm , Attention , Memory, Short-Term , Music , Theta Rhythm , Humans , Memory, Short-Term/physiology , Attention/physiology , Male , Female , Child , Theta Rhythm/physiology , Alpha Rhythm/physiology , Auditory Perception/physiology , Electroencephalography , Visual Perception/physiology , Brain/physiology
11.
Multisens Res ; 37(2): 89-124, 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38714311

ABSTRACT

Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including attentional tasks. However, there is little evidence that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes, a fundamental characteristic of natural, everyday environments. The present study addressed whether video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of N = 130. Overall, the data replicated previous findings of search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that generalizing the advantage of AVG experience to realistic, crossmodal situations should be done with caution and with gender-related differences in mind.


Subject(s)
Attention , Video Games , Visual Perception , Humans , Male , Female , Visual Perception/physiology , Young Adult , Adult , Attention/physiology , Auditory Perception/physiology , Photic Stimulation , Adolescent , Reaction Time/physiology , Cues , Acoustic Stimulation
12.
Multisens Res ; 37(2): 143-162, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38714315

ABSTRACT

A vital heuristic used when judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), simultaneity judgement (SJ), and simple reaction-time (RT) task, responding to audio-visual trials that were preceded by other audio-visual trials with either a bright or dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ, but not the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.


Subject(s)
Acoustic Stimulation , Auditory Perception , Photic Stimulation , Reaction Time , Visual Perception , Humans , Visual Perception/physiology , Auditory Perception/physiology , Male , Female , Reaction Time/physiology , Adult , Young Adult , Judgment/physiology
13.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38717467

ABSTRACT

A long-standing quest in audition concerns understanding relations between behavioral measures and neural representations of changes in sound intensity. Here, we examined relations between aspects of intensity perception and central neural responses within the inferior colliculus of unanesthetized rabbits (by averaging the population's spike count/level functions). We found parallels between the population's neural output and: (1) how loudness grows with intensity; (2) how loudness grows with duration; (3) how discrimination of intensity improves with increasing sound level; (4) findings that intensity discrimination does not depend on duration; and (5) findings that duration discrimination is a constant fraction of base duration.
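The loudness-growth relation in (1) is commonly summarized by Stevens' classic rule of thumb for a 1-kHz tone: 40 dB SPL corresponds to 1 sone, and loudness roughly doubles for every 10-dB increase. The tiny helper below illustrates that behavioral relation only; it is not the neural-population model the authors derive from inferior colliculus recordings.

```python
def sones(level_db):
    """Loudness in sones for a 1-kHz tone, per Stevens' rule of thumb:
    1 sone at 40 dB SPL, doubling every 10 dB (a power-law growth of
    loudness with intensity)."""
    return 2.0 ** ((level_db - 40.0) / 10.0)
```

For example, a 70-dB tone is about 8 sones, i.e. eight times as loud as the 40-dB reference, even though its intensity is a thousand times greater.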


Subject(s)
Inferior Colliculi , Loudness Perception , Animals , Rabbits , Loudness Perception/physiology , Inferior Colliculi/physiology , Acoustic Stimulation/methods , Discrimination, Psychological/physiology , Auditory Perception/physiology , Neurons/physiology
14.
Article in English | MEDLINE | ID: mdl-38801679

ABSTRACT

Compared to traditional continuous performance tasks, virtual reality-based continuous performance tests (VR-CPT) offer higher ecological validity. While previous studies have primarily focused on behavioral outcomes in VR-CPT and incorporated various distractors to enhance ecological realism, little attention has been paid to the effects of distractors on EEG. Therefore, our study aimed to investigate the influence of distractors on EEG during VR-CPT. We studied visual distractors and auditory distractors separately, recruiting 68 subjects (M = 20.82, SD = 1.72) and asking each to complete four tasks. These tasks were categorized into four groups according to the presence or absence of visual and auditory distractors. We conducted paired t-tests on the mean relative power of the five electrodes in the ROI region across different frequency bands. Significant differences were found in theta waves between Group 3 (M = 2.49, SD = 2.02) and Group 4 (M = 2.68, SD = 2.39) (p < 0.05); in alpha waves between Group 3 (M = 2.08, SD = 3.73) and Group 4 (M = 3.03, SD = 4.60) (p < 0.001); and in beta waves between Group 1 (M = -4.44, SD = 2.29) and Group 2 (M = -5.03, SD = 2.48) (p < 0.001), as well as between Group 3 (M = -4.48, SD = 2.03) and Group 4 (M = -4.67, SD = 2.23) (p < 0.05). The incorporation of distractors in VR-CPT modulates EEG signals across different frequency bands, with visual distractors attenuating theta band activity, auditory distractors enhancing alpha band activity, and both types of distractors reducing beta oscillations following target stimuli. This insight holds significant promise for the rehabilitation of children and adolescents with attention deficits.
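The "mean relative power" measure analyzed above can be sketched generically as band power divided by broadband power from a Welch PSD. The band edges below are conventional choices, not necessarily the paper's exact definitions, and the normalization range is an assumption.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def relative_band_power(x, fs):
    """Relative EEG band power from a Welch PSD: power in each band
    divided by total 1-45 Hz power, so values are comparable across
    recordings with different overall amplitudes."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
    broadband = pxx[(f >= 1.0) & (f <= 45.0)].sum()
    return {name: pxx[(f >= lo) & (f < hi)].sum() / broadband
            for name, (lo, hi) in BANDS.items()}
```

A signal dominated by a 10-Hz rhythm should yield an alpha share far larger than theta or beta, which is the sanity check used below.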


Subject(s)
Attention , Electroencephalography , Virtual Reality , Humans , Male , Female , Electroencephalography/methods , Young Adult , Attention/physiology , Adult , Visual Perception/physiology , Theta Rhythm/physiology , Acoustic Stimulation/methods , Alpha Rhythm/physiology , Photic Stimulation , Auditory Perception/physiology , Psychomotor Performance/physiology
15.
Sci Rep ; 14(1): 11036, 2024 05 14.
Article in English | MEDLINE | ID: mdl-38744906

ABSTRACT

The perception of a continuous phantom in a sensory domain in the absence of an external stimulus is explained as a maladaptive compensation of aberrant predictive coding, a proposed unified theory of brain functioning. If this were true, these changes would occur not only in the domain of the phantom percept but in other sensory domains as well. We confirm this hypothesis by using tinnitus (continuous phantom sound) as a model and probe the predictive coding mechanism using the established local-global oddball paradigm in both the auditory and visual domains. We observe that tinnitus patients are sensitive to changes in predictive coding not only in the auditory but also in the visual domain. We report changes in well-established components of event-related EEG such as the mismatch negativity. Furthermore, deviations in stimulus characteristics were correlated with the subjective tinnitus distress. These results provide an empirical confirmation that aberrant perceptions are a symptom of a higher-order systemic disorder transcending the domain of the percept.


Subject(s)
Auditory Perception , Electroencephalography , Tinnitus , Humans , Tinnitus/physiopathology , Tinnitus/psychology , Male , Female , Auditory Perception/physiology , Adult , Middle Aged , Acoustic Stimulation , Visual Perception/physiology
16.
PLoS One ; 19(5): e0303309, 2024.
Article in English | MEDLINE | ID: mdl-38748741

ABSTRACT

Catchiness and groove are common phenomena when listening to popular music. Catchiness may be a contributing factor to the experience of groove, but quantitative evidence for such a relationship is missing. To examine whether and how catchiness influences a key component of groove, the pleasurable urge to move to music (PLUMM), we conducted a listening experiment with 450 participants and 240 short popular music clips of drum patterns, bass lines, or keys/guitar parts. We found four main results: (1) catchiness as measured in a recognition task was only weakly associated with participants' perceived catchiness of the music; we showed that perceived catchiness is multi-dimensional, subjective, and strongly associated with pleasure. (2) We found a sizeable positive relationship between PLUMM and perceived catchiness. (3) However, the relationship is complex: further analysis showed that pleasure suppresses the effect of perceived catchiness on the urge to move. (4) We compared common factors that promote perceived catchiness and PLUMM and found that listener-related variables contributed similarly, while the effects of musical content diverged. Overall, our data suggest that music perceived as catchy is likely to foster groove experiences.


Subject(s)
Auditory Perception , Music , Pleasure , Humans , Music/psychology , Female , Male , Adult , Auditory Perception/physiology , Young Adult , Pleasure/physiology , Adolescent , Acoustic Stimulation
17.
Sci Rep ; 14(1): 11164, 2024 05 15.
Article in English | MEDLINE | ID: mdl-38750185

ABSTRACT

Electrophysiological studies have investigated predictive processing in music by examining event-related potentials (ERPs) elicited by the violation of musical expectations. While several studies have reported that the predictability of stimuli can modulate the amplitude of ERPs, it is unclear how specific the representation of the expected note is. The present study addressed this issue by recording omitted stimulus potentials (OSPs), thereby avoiding contamination of top-down predictive processing by bottom-up sensory processing. Decoding of the omitted content was attempted using a support vector machine, a type of machine-learning classifier. ERP responses to the omission of four target notes (E, F, A, and C) at the same position in familiar and unfamiliar melodies were recorded from 25 participants. The results showed that the omission N1 was larger in the familiar melody condition than in the unfamiliar melody condition. The decoding accuracy of the four omitted notes was significantly higher in the familiar melody condition than in the unfamiliar melody condition. These results suggest that OSPs contain discriminable predictive information, and that the higher the predictability, the more specific the representation of the expected note.
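The decoding step described above, cross-validated SVM classification of single-trial ERP patterns, can be sketched as follows. The feature layout (flattened channel-by-time amplitudes), the linear kernel, and the validation scheme are assumptions, and the data here are synthetic.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def decode_notes(X, y, folds=5):
    """Cross-validated SVM decoding of single-trial patterns.

    X: (n_trials, n_features), e.g. flattened channel x time amplitudes
    around the omission; y: note labels. Returns mean accuracy over
    stratified folds (chance = 1 / n_classes).
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    return cross_val_score(clf, X, y, cv=folds).mean()

# Synthetic 4-"note" demo: each class shifts one dedicated feature's mean,
# mimicking a class-specific ERP topography on top of noise.
rng = np.random.default_rng(1)
y = np.repeat(np.arange(4), 40)
X = rng.standard_normal((160, 20))
X[np.arange(160), y] += 2.5
acc = decode_notes(X, y)
```

Above-chance accuracy (chance here is 0.25) is the criterion that would indicate the OSPs carry note-specific predictive information.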


Subject(s)
Acoustic Stimulation , Electroencephalography , Music , Humans , Female , Male , Young Adult , Adult , Auditory Perception/physiology , Support Vector Machine , Evoked Potentials, Auditory/physiology , Evoked Potentials/physiology
18.
Nat Commun ; 15(1): 4071, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38778078

ABSTRACT

Adaptive behavior requires integrating prior knowledge of action outcomes with sensory evidence when making decisions, while maintaining that prior knowledge for future actions. As outcome-based and sensory-based decisions are often tested separately, it is unclear how these processes are integrated in the brain. In a tone frequency discrimination task with two sound durations and asymmetric reward blocks, we found that neurons in the medial prefrontal cortex of male mice represented the additive combination of prior reward expectations and choices. Sensory inputs were selectively decoded from the auditory cortex irrespective of reward priors, and choices from the secondary motor cortex, suggesting that localized computations of task variables occur within single trials. In contrast, all the recorded regions represented prior values that needed to be maintained across trials. We propose localized and global computations of task variables at different time scales in the cerebral cortex.


Subject(s)
Auditory Cortex , Choice Behavior , Reward , Animals , Male , Choice Behavior/physiology , Mice , Auditory Cortex/physiology , Neurons/physiology , Prefrontal Cortex/physiology , Acoustic Stimulation , Mice, Inbred C57BL , Cerebral Cortex/physiology , Motor Cortex/physiology , Auditory Perception/physiology
19.
PLoS One ; 19(5): e0303565, 2024.
Article in English | MEDLINE | ID: mdl-38781127

ABSTRACT

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as separate sequences of tones (streams). A 3-class BCI using three tone sequences, perceived as three different tone streams, was investigated and evaluated. Each musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each subject's right ear. Subjects were requested to attend to one of the three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded at a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of each subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli in the attended stream. In a 10-fold cross-validation test, classification accuracy over 80% was achieved for five subjects and over 75% for nine subjects. For subjects whose accuracy was lower than 75%, either P300 activity was also elicited by non-attended streams or the P300 amplitude was small. We conclude that BCI systems based on auditory stream segregation can be extended to three classes, and that these classes can be detected via a single ear without the aid of any visual modality.


Subject(s)
Acoustic Stimulation , Attention , Brain-Computer Interfaces , Electroencephalography , Humans , Male , Female , Electroencephalography/methods , Adult , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Young Adult , Event-Related Potentials, P300/physiology , Electrooculography/methods
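The abstract names Riemannian-geometry-based classification of EEG epochs without detail. A common instance is minimum distance to mean (MDM) over trial covariance matrices; the sketch below uses a log-Euclidean metric (the affine-invariant metric is also widely used) on synthetic two-class covariances, with all dimensions and the class-separating power difference invented for illustration:

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def fit_mdm(covs, labels):
    """Per-class mean covariance in the log-Euclidean tangent space."""
    return {c: np.mean([logm_spd(C) for C, l in zip(covs, labels) if l == c],
                       axis=0)
            for c in np.unique(labels)}

def predict_mdm(means, covs):
    """Assign each trial covariance to the class with the nearest log-mean."""
    return np.array([min(means, key=lambda c: np.linalg.norm(logm_spd(C) - means[c]))
                     for C in covs])

# Synthetic "EEG" trials: class 1 carries extra power in one channel,
# loosely mimicking an attention-dependent P300 response.
rng = np.random.default_rng(1)
n_ch, n_samp = 8, 256

def make_cov(gain):
    X = rng.normal(size=(n_ch, n_samp))
    X[0] *= gain  # class-dependent power in one channel
    return X @ X.T / n_samp

covs = [make_cov(1.0) for _ in range(40)] + [make_cov(3.0) for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)

means = fit_mdm(covs[::2], labels[::2])                  # train on even trials
acc = np.mean(predict_mdm(means, covs[1::2]) == labels[1::2])  # test on odd
print(f"held-out accuracy: {acc:.2f}")
```

Real pipelines (e.g. the pyriemann library) typically band-pass filter and epoch the EEG first, then apply exactly this covariance-then-classify scheme.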
20.
Curr Biol ; 34(9): R346-R348, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38714161

ABSTRACT

Animals, including humans, often react to sounds by involuntarily moving their face and body. A new study shows that facial movements provide a simple and reliable readout of a mouse's hearing ability that is more sensitive than traditional measurements.


Subject(s)
Face , Animals , Mice , Face/physiology , Auditory Perception/physiology , Hearing/physiology , Sound , Movement/physiology , Humans