1.
bioRxiv ; 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38979139

ABSTRACT

In rodents, anxiety is characterized by heightened vigilance during low-threat and uncertain situations. Though activity in the frontal cortex and limbic system is fundamental to supporting this internal state, the underlying network architecture that integrates activity across brain regions to encode anxiety across animals and paradigms remains unclear. Here, we utilize parallel electrical recordings in freely behaving mice, translational paradigms known to induce anxiety, and machine learning to discover a multi-region network that encodes the anxious brain state. The network is composed of circuits widely implicated in anxiety behavior, it generalizes across many behavioral contexts that induce anxiety, and it fails to encode multiple behavioral contexts that do not. Strikingly, the activity of this network is also principally altered in two mouse models of depression. Thus, we establish a network-level process whereby the brain encodes anxiety in health and disease.
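The abstract does not name the machine-learning model; as a hedged illustration only, the sketch below trains a cross-validated linear classifier on synthetic multi-region spectral-power features to show the general shape of such a brain-state decoder. The regions, feature layout, and model choice are assumptions, not the paper's pipeline.

```python
# Hedged sketch: decode an "anxious" vs. "neutral" brain state from multi-region
# spectral-power features. Synthetic data; the feature set, regions, and model
# are illustrative assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_windows, n_regions, n_bands = 400, 8, 5     # e.g. LFP power in 5 bands from 8 regions
X = rng.normal(size=(n_windows, n_regions * n_bands))
y = rng.integers(0, 2, size=n_windows)         # 1 = anxiogenic context, 0 = neutral
X[y == 1, :10] += 0.8                          # inject a weak multi-region signal

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)      # cross-validated decoding accuracy
print(f"mean decoding accuracy: {scores.mean():.2f}")
```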

2.
Nat Neurosci ; 27(1): 9, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38082088

Subject(s)
Brain , Head
3.
Nat Neurosci ; 26(12): 2049, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38040974
4.
Nat Neurosci ; 26(11): 1837, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37919611

Subject(s)
Speech Perception , Speech
5.
Nat Neurosci ; 26(6): 918-922, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37202554
7.
Nat Neurosci ; 25(9): 1121, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36042307
8.
Nat Neurosci ; 24(11): 1503, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34711967
9.
Nat Neurosci ; 24(10): 1342, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34588702
10.
J Neurosci ; 38(44): 9468-9470, 2018 Oct 31.
Article in English | MEDLINE | ID: mdl-30381438

ABSTRACT

Skillful storytelling helps listeners understand the essence of complex concepts and ideas in meaningful and often personal ways. For this reason, storytelling is being embraced by scientists who want not only to connect more authentically with their audiences, but also to understand how the brain processes this powerful form of communication. Here we present part of a conversation between a group of scientists actively engaged with the practice and/or the science of storytelling. We highlight the brain networks involved in the telling and hearing of stories and show how storytelling is being used well beyond the realm of public communication to add a deeper dimension to communication with our students and colleagues and to help make our profession more inclusive.


Subject(s)
Brain/physiology , Cognition/physiology , Communication , Narration , Humans
11.
Cortex ; 77: 1-12, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26889603

ABSTRACT

Sensory cortices can be activated without any external stimuli. Yet, it is still unclear how this perceptual reactivation occurs and which neural structures mediate this reconstruction process. In this study, we employed fMRI with mental imagery paradigms to investigate the neural networks involved in perceptual reactivation. Subjects performed two speech imagery tasks: articulation imagery (AI) and hearing imagery (HI). We found that AI induced greater activity in frontal-parietal sensorimotor systems, including sensorimotor cortex, subcentral (BA 43), middle frontal cortex (BA 46) and parietal operculum (PO), whereas HI showed stronger activation in regions that have been implicated in memory retrieval: middle frontal (BA 8), inferior parietal cortex and intraparietal sulcus. Moreover, the posterior superior temporal sulcus (pSTS) and anterior superior temporal gyrus (aSTG) were activated more in AI than in HI, suggesting that covert motor processes induced stronger perceptual reactivation in the auditory cortices. These results suggest that motor-to-perceptual transformation and memory retrieval act as two complementary mechanisms to internally reconstruct corresponding perceptual outcomes. These two mechanisms can serve as a neurocomputational foundation for predicting perceptual changes, either via a previously learned relationship between actions and their perceptual consequences or via stored perceptual experiences of stimulus and episodic or contextual regularity.


Subject(s)
Auditory Cortex/physiology , Brain Mapping , Speech Perception/physiology , Speech/physiology , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , Memory/physiology , Young Adult
12.
Sci Rep ; 5: 11475, 2015 Jun 19.
Article in English | MEDLINE | ID: mdl-26088739

ABSTRACT

Voice or speaker recognition is critical in a wide variety of social contexts. In this study, we investigated the contributions of acoustic, phonological, lexical, and semantic information toward voice recognition. Native English-speaking participants were trained to recognize five speakers in five conditions: non-speech, Mandarin, German, pseudo-English, and English. We showed that voice recognition significantly improved as more information became available, from purely acoustic features in non-speech to additional phonological information varying in familiarity. Moreover, we found that recognition performance transferred between training and testing in the phonologically familiar conditions (German, pseudo-English, and English), but not in the unfamiliar (Mandarin) or non-speech conditions. These results provide evidence that bottom-up acoustic analysis and top-down influence from phonological processing collaboratively govern voice recognition.


Subject(s)
Auditory Perception , Linguistics , Voice , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Language , Male , Reproducibility of Results , Speech , Young Adult
13.
Nat Neurosci ; 18(6): 903-11, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25984889

ABSTRACT

Speech contains temporal structure that the brain must analyze to enable linguistic processing. To investigate the neural basis of this analysis, we used sound quilts, stimuli constructed by shuffling segments of a natural sound, approximately preserving its properties on short timescales while disrupting them on longer scales. We generated quilts from foreign speech to eliminate language cues and manipulated the extent of natural acoustic structure by varying the segment length. Using functional magnetic resonance imaging, we identified bilateral regions of the superior temporal sulcus (STS) whose responses varied with segment length. This effect was absent in primary auditory cortex and did not occur for quilts made from other natural sounds or acoustically matched synthetic sounds, suggesting tuning to speech-specific spectrotemporal structure. When examined parametrically, the STS response increased with segment length up to ∼500 ms. Our results identify a locus of speech analysis in human auditory cortex that is distinct from lexical, semantic or syntactic processes.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Speech Perception/physiology , Adolescent , Adult , Algorithms , Auditory Cortex/anatomy & histology , Auditory Perception/physiology , Brain Mapping , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Noise , Oxygen/blood , Psychomotor Performance/physiology , Young Adult
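As a minimal illustration of the quilting idea, the sketch below shuffles fixed-length segments of a waveform so that short-timescale structure is preserved within segments while longer-timescale structure is disrupted. The published algorithm additionally orders segments to minimize acoustic discontinuities at segment boundaries, which this sketch omits, and the synthetic signal is an assumption for demonstration only.

```python
# Minimal sketch of segment shuffling for a sound "quilt": structure within each
# segment is preserved, structure across segments is disrupted. The published
# quilting algorithm also reorders segments to minimize boundary artifacts,
# which is omitted here.
import numpy as np

def make_quilt(signal: np.ndarray, sr: int, seg_ms: float, seed: int = 0) -> np.ndarray:
    seg_len = int(sr * seg_ms / 1000)              # segment length in samples
    n_segs = len(signal) // seg_len
    segs = signal[:n_segs * seg_len].reshape(n_segs, seg_len)
    order = np.random.default_rng(seed).permutation(n_segs)
    return segs[order].ravel()

# Usage on a synthetic 2 s amplitude-modulated tone at 16 kHz (stand-in for speech)
sr = 16000
t = np.arange(2 * sr) / sr
speech_like = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
quilt_30ms = make_quilt(speech_like, sr, seg_ms=30)    # short segments: strong disruption
quilt_480ms = make_quilt(speech_like, sr, seg_ms=480)  # long segments: closer to the original
```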
16.
PLoS One ; 8(9): e75410, 2013.
Article in English | MEDLINE | ID: mdl-24066179

ABSTRACT

We tested non-musicians and musicians in an auditory psychophysical experiment to assess the effects of timbre manipulation on pitch-interval discrimination. Both groups were asked to indicate the larger of two presented intervals, composed of four sequentially presented pitches; the second or fourth stimulus within a trial was either a sinusoidal (or "pure"), flute, piano, or synthetic voice tone, while the remaining three stimuli were all pure tones. The interval-discrimination tasks were administered parametrically to assess performance across varying pitch distances between intervals ("interval-differences"). Irrespective of timbre, musicians displayed a steady improvement across interval-differences, while non-musicians only demonstrated enhanced interval discrimination at an interval-difference of 100 cents (one semitone in Western music). Surprisingly, the best discrimination performance across both groups was observed with pure-tone intervals, followed by intervals containing a piano tone. More specifically, we observed that: 1) timbre changes within a trial affect interval discrimination; and 2) the broad spectral characteristics of an instrumental timbre may influence perceived pitch or interval magnitude and make interval discrimination more difficult.


Subject(s)
Music , Pitch Discrimination/physiology , Adult , Female , Humans , Male , Pitch Perception/physiology , Young Adult
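Interval sizes and interval-differences in this paradigm are expressed in cents, where 100 cents equals one equal-tempered semitone. A short helper makes the unit concrete; the frequencies below are arbitrary examples, not the study's stimuli.

```python
# Interval size in cents between two frequencies: 1200 * log2(f2 / f1).
# 100 cents = one equal-tempered semitone. Frequencies are arbitrary examples.
import math

def interval_cents(f1: float, f2: float) -> float:
    return 1200.0 * math.log2(f2 / f1)

interval_a = interval_cents(440.0, 466.16)            # ~100 cents (A4 to A#4)
interval_b = interval_cents(440.0, 493.88)            # ~200 cents (A4 to B4)
interval_difference = abs(interval_b - interval_a)    # ~100 cents, one semitone apart
print(round(interval_a), round(interval_b), round(interval_difference))
```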
17.
Front Hum Neurosci ; 7: 237, 2013.
Article in English | MEDLINE | ID: mdl-23761746

ABSTRACT

Singing provides a unique opportunity to examine music performance: the musical instrument is contained wholly within the body, eliminating the need to create artificial instruments or tasks in neuroimaging experiments. Here, more than two decades of voice and singing research will be reviewed to give an overview of the sensory-motor control of the singing voice, starting from the vocal tract and leading up to the brain regions involved in singing. Additionally, to demonstrate how sensory feedback is integrated with vocal motor control, recent functional magnetic resonance imaging (fMRI) research on somatosensory and auditory feedback processing during singing will be presented. The relationship between the brain and singing behavior will also be explored by examining: (1) neuroplasticity as a function of various lengths and types of training, (2) vocal amusia due to a compromised singing network, and (3) singing performance in individuals with congenital amusia. Finally, the auditory-motor control network for singing will be considered alongside dual-stream models of auditory processing in music and speech to refine both these theoretical models and the singing network itself.

18.
Front Psychol ; 3: 544, 2012.
Article in English | MEDLINE | ID: mdl-23227019

ABSTRACT

We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured the covariation of the blood oxygenation signal with increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the trained stimulus feature. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation of the response of auditory cortex to the trained stimulus feature. Reduction in the blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities.
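The abstract does not specify the staircase rule; the sketch below assumes a common 2-down/1-up procedure (converging near 70.7% correct) with a simulated observer, to illustrate how the trained interval-difference could be adapted trial by trial. The rule, step size, and observer model are assumptions.

```python
# Hedged sketch of a 2-down/1-up adaptive staircase on pitch-interval difference
# (in cents). The rule and step size are assumptions; the abstract only states
# that a staircase procedure with feedback was used.
import random

def simulated_observer(delta_cents: float, threshold: float = 25.0) -> bool:
    """Probability of a correct response grows with the interval-difference."""
    p_correct = 0.5 + 0.5 * min(delta_cents / (2 * threshold), 1.0)
    return random.random() < p_correct

delta = 100.0            # starting interval-difference in cents
step = 5.0               # cents per adjustment
consecutive_correct = 0
reversals = []
last_direction = None

for trial in range(200):
    if simulated_observer(delta):
        consecutive_correct += 1
        if consecutive_correct < 2:
            continue                          # need two correct in a row to step down
        consecutive_correct = 0
        direction = "down"                    # task gets harder
        delta = max(delta - step, 1.0)
    else:
        consecutive_correct = 0
        direction = "up"                      # one error: task gets easier
        delta += step
    if last_direction and direction != last_direction:
        reversals.append(delta)               # track reversal points
    last_direction = direction

last = reversals[-6:] or [delta]
print(f"estimated threshold: {sum(last) / len(last):.1f} cents")
```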

19.
PLoS One ; 5(6): e11181, 2010 Jun 17.
Article in English | MEDLINE | ID: mdl-20567521

ABSTRACT

BACKGROUND: Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. METHODOLOGY/PRINCIPAL FINDINGS: We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. CONCLUSIONS/SIGNIFICANCE: Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production.


Subject(s)
Music , Speech , Brain/physiology , Hearing , Humans , Magnetic Resonance Imaging
20.
J Acoust Soc Am ; 127(1): 504-12, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20058995

ABSTRACT

Vocal pitch matching is a foundational skill for singing and is an interesting place to study the relationship between pitch perception and production. To better understand this relationship, we assessed pitch-matching abilities in congenital amusics, who have documented disabilities in pitch perception, and in matched controls under normal, masked, and guided feedback conditions. Their vocal productions were analyzed for fundamental frequency and showed that amusics were significantly less accurate at pitch matching than the controls. However, five of the six amusics showed a significant correlation between their produced pitches and the target pitch. Feedback condition had no effect on pitch-matching accuracy. These results show impaired vocal pitch-matching abilities in amusics but also show a relationship between perceived and produced pitches.


Subject(s)
Auditory Perceptual Disorders , Music , Pitch Perception , Psychomotor Performance , Voice , Acoustic Stimulation , Acoustics , Aged , Analysis of Variance , Feedback, Psychological , Female , Humans , Male , Middle Aged , Models, Psychological , Regression Analysis , Sex Characteristics , Task Performance and Analysis
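As a minimal sketch of how produced pitch can be compared with a target, the code below estimates fundamental frequency by autocorrelation and expresses the deviation in cents. The estimator and the synthetic "sung" tone are illustrative assumptions; the abstract does not detail the acoustic analysis beyond fundamental-frequency extraction.

```python
# Minimal sketch: estimate F0 of a produced tone by autocorrelation and report
# the pitch-matching error in cents relative to the target. The estimator and
# the synthetic tone are illustrative, not the study's analysis pipeline.
import numpy as np

def estimate_f0(x: np.ndarray, sr: int, fmin: float = 80.0, fmax: float = 500.0) -> float:
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, non-negative lags
    lo, hi = int(sr / fmax), int(sr / fmin)             # lag range for plausible F0 values
    lag = lo + int(np.argmax(ac[lo:hi]))                # strongest periodicity in range
    return sr / lag

sr = 44100
t = np.arange(int(0.1 * sr)) / sr
target_hz = 220.0
produced = np.sin(2 * np.pi * 233.0 * t)                # simulated production, ~1 semitone sharp
error_cents = 1200 * np.log2(estimate_f0(produced, sr) / target_hz)
print(f"pitch-matching error: {error_cents:+.0f} cents")
```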