Results 1 - 20 of 24
1.
Brain Sci ; 12(3)2022 Mar 20.
Article in English | MEDLINE | ID: mdl-35326366

ABSTRACT

Music's deeply interpersonal nature suggests that music-derived neuroplasticity relates to interpersonal temporal dynamics, or synchrony. Interpersonal neural synchrony (INS) has been found to correlate with increased behavioral synchrony during social interactions and may represent mechanisms that support them. As social interactions often do not have clearly delineated boundaries, and many start and stop intermittently, we hypothesize that a neural signature of INS may be detectable following an interaction. The present study aimed to investigate this hypothesis using a pre-post paradigm, measuring interbrain phase coherence before and after a cooperative dyadic musical interaction. Ten dyads underwent synchronous electroencephalographic (EEG) recording during silent, non-interactive periods before and after a musical interaction in the form of a cooperative tapping game. Significant increases in delta band INS were found in the post-interaction condition and were positively correlated with the duration of the preceding interaction. These findings suggest a mechanism by which social interaction may be efficiently continued after interruption and hold the potential for measuring neuroplastic adaptation in longitudinal studies. These findings also support the idea that INS during social interaction represents active mechanisms for maintaining synchrony rather than mere parallel processing of stimuli and motor activity.
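The interbrain phase coherence measure used in this study is typically computed as a phase-locking value (PLV) over band-limited phase differences between the two participants' signals. The following is a minimal sketch of that kind of computation, not the study's actual pipeline: it assumes a delta band of 1-4 Hz, Hilbert-derived instantaneous phase, and synthetic single-channel signals.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def delta_plv(sig_a, sig_b, fs):
    """Phase-locking value between two signals in the delta band (1-4 Hz)."""
    b, a = butter(4, [1.0, 4.0], btype="bandpass", fs=fs)
    phase_a = np.angle(hilbert(filtfilt(b, a, sig_a)))
    phase_b = np.angle(hilbert(filtfilt(b, a, sig_b)))
    # PLV: magnitude of the mean phase-difference vector (1 = perfect locking)
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

# Identical delta-band signals yield a PLV of 1
fs = 250
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 2.5 * t)
print(delta_plv(x, x, fs))
```

A PLV of 1 indicates perfectly locked phases across time; values near 0 indicate no consistent phase relationship. In practice this would be computed per channel pair and epoch and compared against surrogate (e.g., shuffled-pair) data.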

2.
Brain Sci ; 11(12)2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34942891

ABSTRACT

The perception of harmonic complexes provides important information for musical and vocal communication. Numerous studies have shown that musical training and expertise are associated with better processing of harmonic complexes; however, it is unclear whether the perceptual improvement associated with musical training is universal to different pitch models. The current study addresses this issue by measuring discrimination thresholds of musicians (n = 20) and non-musicians (n = 18) to diotic (same sound to both ears) and dichotic (different sounds to each ear) sounds of four stimulus types: (1) pure sinusoidal tones, PT; (2) four-harmonic complex tones, CT; (3) iterated rippled noise, IRN; and (4) interaurally correlated broadband noise, called the "Huggins" or "dichotic" pitch, DP. Frequency difference limens (DLFs) for each stimulus type were obtained via a three-alternative forced-choice adaptive task requiring selection of the interval with the highest pitch, yielding the smallest perceptible fundamental frequency (F0) distance (in Hz) between two sounds. Music skill was measured by an online test of musical pitch, melody and timing maintained by the International Laboratory for Brain Music and Sound Research. Musicianship, length of music experience and self-evaluation of musical skill were assessed by questionnaire. Results showed musicians had smaller DLFs in all four conditions, with the largest group difference in the dichotic condition. DLF thresholds were related to both subjective and objective musical ability. In addition, subjective self-report of musical ability was shown to be a significant variable in group classification. Taken together, the results suggest that music-related plasticity benefits multiple mechanisms of pitch encoding and that self-evaluation of musicality can be reliably associated with objective measures of perception.

3.
Brain Sci ; 11(11)2021 Nov 21.
Article in English | MEDLINE | ID: mdl-34827544

ABSTRACT

Previous evidence has shown that early auditory processing impacts later linguistic development, and targeted training implemented at early ages can enhance auditory processing skills, with better expected language development outcomes. This study focuses on typically developing infants and aims to test the feasibility and preliminary efficacy of music training based on active synchronization with complex musical rhythms on the linguistic outcomes and electrophysiological functioning underlying auditory processing. Fifteen infants participated in the training (RTr+) and were compared with a group of infants not attending any structured activities during the same time frame (RTr-, N = 14). At pre- and post-training, expressive and receptive language skills were assessed using standardized tests, and auditory processing skills were characterized through an electrophysiological non-speech multi-feature paradigm. Results reveal that RTr+ infants showed significantly greater improvement in both expressive and receptive pre-language skills. Moreover, at post-training, they presented an electrophysiological pattern characterized by shorter latency of two peaks (N2* and P2), reflecting a neural change detection process; these shifts in latency go beyond those seen due to maturation alone. These results provide preliminary evidence of the efficacy of our training in improving early linguistic competences, and in modifying the neural underpinnings of auditory processing in infants.

4.
Hear Res ; 398: 108101, 2020 12.
Article in English | MEDLINE | ID: mdl-33142106

ABSTRACT

Successful mapping of meaningful labels to sound input requires accurate representation of that sound's acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach which can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEP) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing set, and (2) a "leave-one-out" approach, in which subject data are classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that the accuracy of leave-one-subject-out cross-validation approaches that of the more conventional cross-validation classifications, while allowing a subject's results to be analyzed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward translation of classification-based machine learning approaches into clinical practice.
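The "leave-one-out" scheme described here, in which each subject is classified by a model trained only on other subjects' data, can be sketched with scikit-learn's LeaveOneGroupOut splitter. Everything below is an illustrative stand-in (synthetic features and a logistic-regression classifier), not the study's actual FFR data or classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in: 13 "subjects" x 6 trials, 40-sample "FFR" features,
# two stimulus labels (0 = speech-like, 1 = music-like).
rng = np.random.default_rng(1)
n_subjects, trials, n_feat = 13, 6, 40
X = rng.standard_normal((n_subjects * trials, n_feat))
y = np.tile([0, 1, 0, 1, 0, 1], n_subjects)
X[y == 1] += 0.8  # make the two classes separable
groups = np.repeat(np.arange(n_subjects), trials)

# Leave-one-subject-out: each fold tests on one held-out subject,
# training only on the remaining subjects' data.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(len(scores))  # one accuracy value per held-out subject
```

Each of the 13 folds holds out one subject entirely, so the resulting accuracy reflects generalization to unseen individuals, which is the property needed for comparison against normative data from a separate population.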


Subject(s)
Music , Speech Perception , Acoustic Stimulation , Electroencephalography , Evoked Potentials, Auditory , Hearing Loss , Humans , Speech
5.
J Perinatol ; 40(2): 203-211, 2020 02.
Article in English | MEDLINE | ID: mdl-31263204

ABSTRACT

OBJECTIVE: To evaluate the feasibility of auditory monitoring of neurophysiological status using frequency-following response (FFR) in neonates with progressive moderate hyperbilirubinemia, measured by transcutaneous bilirubin (TcB) levels. STUDY DESIGN: ABR and FFR measures were compared and correlated with TcB levels across three groups. Group I was a healthy cohort (n = 13). Group II (n = 28) consisted of neonates with progressive, moderate hyperbilirubinemia and Group III consisted of the same neonates, post physician-ordered phototherapy. RESULT: FFR amplitudes in Group I controls (TcB = 83.1 ± 32.5 µmol/L; 4.9 ± 1.9 mg/dL) were greater than Group II (TcB = 209.3 ± 48.0 µmol/L; 12.1 ± 2.8 mg/dL). After TcB was lowered by phototherapy, FFR amplitudes in Group III were similar to controls. Lower TcB levels correlated with larger FFR amplitudes (r = -0.291, p = 0.015), but not with ABR wave amplitude or latencies. CONCLUSION: The FFR is a promising measure of the dynamic neurophysiological status in neonates, and may be useful in tracking neurotoxicity in infants with hyperbilirubinemia.


Subject(s)
Acoustic Stimulation , Brain Stem/physiology , Evoked Potentials, Auditory, Brain Stem , Hyperbilirubinemia, Neonatal/physiopathology , Neonatal Screening/methods , Bilirubin/blood , Cohort Studies , Electroencephalography , Humans , Hyperbilirubinemia, Neonatal/blood , Hyperbilirubinemia, Neonatal/therapy , Infant, Newborn , Phototherapy , Speech
6.
Neuroimage Clin ; 22: 101778, 2019.
Article in English | MEDLINE | ID: mdl-30901712

ABSTRACT

The ability to rapidly discriminate successive auditory stimuli within tens-of-milliseconds is crucial for speech and language development, particularly in the first year of life. This skill, called Rapid Auditory Processing (RAP), is altered in infants at familial risk for language and learning impairment (LLI) and is a robust predictor of later language outcomes. In the present study, we investigate the neural substrates of RAP, i.e., the underlying neural oscillatory patterns, in a group of Italian 6-month-old infants at risk for LLI (FH+, n = 24), compared to control infants with no known family history of LLI (FH-, n = 32). Brain responses to rapid changes in fundamental frequency and duration were recorded via high-density electroencephalogram during a non-speech double oddball paradigm. Sources of event-related potential generators were localized to right and left auditory regions in both FH+ and FH- groups. Time-frequency analyses showed variations in both theta (θ) and gamma (γ) ranges across groups. Our results showed that overall RAP stimuli elicited a more left-lateralized pattern of oscillations in FH- infants, whereas FH+ infants demonstrated a more right-lateralized pattern, in both the theta and gamma frequency bands. Interestingly, FH+ infants showed reduced early left gamma power (starting at 50 ms after stimulus onset) during deviant discrimination. Perturbed oscillatory dynamics may well constitute a candidate neural mechanism to explain group differences in RAP. Additional group differences in source location suggest that anatomical variations may underlie differences in oscillatory activity. Regarding the predictive value of early oscillatory measures, we found that the amplitude of the source response and the magnitude of oscillatory power and phase synchrony were predictive of expressive vocabulary at 20 months of age.
These results further our understanding of the interplay among neural mechanisms that support typical and atypical rapid auditory processing in infancy.


Subject(s)
Auditory Cortex/physiopathology , Auditory Perception/physiology , Electroencephalography Phase Synchronization/physiology , Evoked Potentials, Auditory/physiology , Functional Laterality/physiology , Gamma Rhythm/physiology , Language Development Disorders/physiopathology , Language Development , Learning Disabilities/physiopathology , Theta Rhythm/physiology , Electroencephalography , Genetic Predisposition to Disease , Humans , Infant , Language Development Disorders/genetics , Learning Disabilities/genetics , Vocabulary
7.
Front Aging Neurosci ; 10: 357, 2018.
Article in English | MEDLINE | ID: mdl-30467474

ABSTRACT

Background: The speech-evoked frequency following response (FFR) has been shown to be useful in assessing complex auditory processing abilities across different age groups. While many aspects of the FFR have been studied extensively, the effect of timing, as measured by inter-stimulus interval (ISI), especially in the older adult population, has yet to be thoroughly investigated. Objective: The purpose of this study was to examine the effects of different ISIs on the speech-evoked FFR in older and younger adults who speak a tonal language, and to investigate whether the older adults' FFRs were more susceptible to the change in ISI. Materials and Methods: Twenty-two normal hearing participants were recruited in our study, including 11 young adult participants and 11 elderly participants. An Intelligent Hearing Systems Smart EP evoked potential system was used to record the FFR in four ISI conditions (40, 80, 120 and 160 ms). A recorded natural speech token with a falling tone /yi/ was used as the stimulus. Two indices, stimulus-to-response correlation coefficient and pitch strength, were used to quantify the FFR responses. Two-way analysis of variance (ANOVA) was used to analyze the differences between age groups and ISI conditions. Results: There was no significant difference in stimulus-to-response correlation coefficient or pitch strength among the different ISI conditions in either age group. Older adults appeared to have weaker FFRs for all ISI conditions when compared to their younger adult counterparts. Conclusion: Shorter ISIs did not result in worse FFRs from older or younger adults. For speech-evoked FFR using a recorded natural speech token 250 ms in length, an ISI as short as 40 ms appeared to be sufficient and effective to record the FFR in elderly adults.
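The two indices named here are commonly defined as follows: pitch strength as the peak of the response's normalized autocorrelation within the candidate pitch-period range, and the stimulus-to-response correlation as the maximum normalized cross-correlation between stimulus and response over a lag search. A sketch under those assumptions (synthetic signals and illustrative parameters, not the study's recordings or exact settings):

```python
import numpy as np

def pitch_strength(x, fs, f0_range=(80, 400)):
    """Peak of the normalized autocorrelation within the candidate
    pitch-period range: ~1 = strongly periodic, ~0 = noise-like."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]  # normalize so the zero-lag value is 1
    lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])
    return float(ac[lo:hi + 1].max())

def stim_response_corr(stim, resp):
    """Maximum normalized cross-correlation between stimulus and
    response, searched over all lags."""
    s = (stim - stim.mean()) / stim.std()
    r = (resp - resp.mean()) / resp.std()
    xc = np.correlate(r, s, mode="full") / s.size
    return float(xc.max())

fs = 8000
t = np.arange(0, 0.25, 1 / fs)
tone = np.sin(2 * np.pi * 120 * t)            # periodic, f0 = 120 Hz
noise = np.random.default_rng(2).standard_normal(t.size)
print(pitch_strength(tone, fs) > pitch_strength(noise, fs))
```

In FFR work the lag search for the stimulus-to-response correlation is usually restricted to a plausible neural-delay window (e.g., several milliseconds) rather than all lags as in this simplified version.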

8.
Clin Neurophysiol ; 129(12): 2623-2634, 2018 12.
Article in English | MEDLINE | ID: mdl-30241978

ABSTRACT

OBJECTIVE: Background noise makes hearing speech difficult for people of all ages. This difficulty can be exacerbated by co-occurring developmental deficits that often emerge in childhood. Sentence-type speech-in-noise (SIN) tests are available clinically but cannot be administered to very young individuals. Our objective was to examine the use of an electrophysiological test of SIN, suitable for infants, to track developmental trajectories. METHODS: Speech-evoked brainstem potentials were recorded from 30 typically-developing infants in quiet and +10 dB SNR background noise. Infants were divided into two age groups (7-12 and 18-24 months) and examined across development. Spectral power of the frequency following response (FFR) was computed using a fast Fourier Transform. Cross-correlations between quiet and noise responses were computed to measure encoding resistance to noise. RESULTS: Older infants had more robust FFR encoding in noise and had higher quiet-noise correlations than their younger counterparts. No group differences were observed in the quiet condition. CONCLUSIONS: By two years of age, infants show less vulnerability to the disruptive effects of background noise, compared to infants under 12 months. SIGNIFICANCE: Speech-in-noise electrophysiology can be easily recorded across infancy and provides unique insights into developmental differences that tests conducted in quiet may miss.
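The spectral measures described here, FFT power of the FFR and quiet-to-noise response correlations, can be sketched as follows. The f0, harmonic count, and bandwidth values are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def ffr_band_power(resp, fs, f0=100, n_harmonics=4, half_bw=5.0):
    """Summed FFT power at the response's f0 and its harmonics,
    each integrated over a narrow band (+/- half_bw Hz)."""
    spec = np.abs(np.fft.rfft(resp - resp.mean())) ** 2
    freqs = np.fft.rfftfreq(resp.size, 1 / fs)
    power = 0.0
    for h in range(1, n_harmonics + 1):
        band = (freqs >= h * f0 - half_bw) & (freqs <= h * f0 + half_bw)
        power += spec[band].sum()
    return power

def quiet_noise_corr(quiet, noisy):
    """Pearson correlation between quiet- and noise-condition responses;
    higher values indicate encoding that resists background noise."""
    return float(np.corrcoef(quiet, noisy)[0, 1])

fs = 2000
t = np.arange(0, 0.5, 1 / fs)
quiet = np.sin(2 * np.pi * 100 * t)           # idealized quiet-condition FFR
noisy = quiet + 0.5 * np.random.default_rng(3).standard_normal(t.size)
print(round(quiet_noise_corr(quiet, noisy), 2))
```

Here a high quiet-to-noise correlation plays the role of the paper's "encoding resistance to noise": the closer the noise-condition response tracks the quiet-condition response, the less the background noise has disrupted encoding.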


Subject(s)
Brain Stem/physiology , Child Development , Evoked Potentials, Auditory, Brain Stem , Noise , Speech Perception , Brain Stem/growth & development , Female , Humans , Infant , Male
9.
Dev Cogn Neurosci ; 26: 9-19, 2017 08.
Article in English | MEDLINE | ID: mdl-28436834

ABSTRACT

Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4 to 7 months of age, confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4-6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33-37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.


Subject(s)
Brain/physiology , Evoked Potentials, Auditory/physiology , Neuronal Plasticity/physiology , Auditory Cortex/physiology , Brain Mapping/methods , Electroencephalography/methods , Female , Humans , Infant , Male
10.
J Neurosci ; 37(4): 830-838, 2017 01 25.
Article in English | MEDLINE | ID: mdl-28123019

ABSTRACT

The frequency-following response (FFR) is a measure of the brain's periodic sound encoding. It is of increasing importance for studying the human auditory nervous system due to numerous associations with auditory cognition and dysfunction. Although the FFR is widely interpreted as originating from brainstem nuclei, a recent study using MEG suggested that there is also a right-lateralized contribution from the auditory cortex at the fundamental frequency (Coffey et al., 2016b). Our objectives in the present work were to validate and better localize this result using a completely different neuroimaging modality and to document the relationships between the FFR, the onset response, and cortical activity. Using a combination of EEG, fMRI, and diffusion-weighted imaging, we show that activity in the right auditory cortex is related to individual differences in FFR-fundamental frequency (f0) strength, a finding that was replicated with two independent stimulus sets, with and without acoustic energy at the fundamental frequency. We demonstrate a dissociation between this FFR-f0-sensitive response in the right and an area in left auditory cortex that is sensitive to individual differences in the timing of initial response to sound onset. Relationships to timing and their lateralization are supported by parallels in the microstructure of the underlying white matter, implicating a mechanism involving neural conduction efficiency. These data confirm that the FFR has a cortical contribution and suggest ways in which auditory neuroscience may be advanced by connecting early sound representation to measures of higher-level sound processing and cognitive function. SIGNIFICANCE STATEMENT: The frequency-following response (FFR) is an EEG signal that is used to explore how the auditory system encodes temporal regularities in sound and is related to differences in auditory function between individuals. 
It is known that brainstem nuclei contribute to the FFR, but recent findings of an additional cortical source are more controversial. Here, we use fMRI to validate and extend the prediction from MEG data of a right auditory cortex contribution to the FFR. We also demonstrate a dissociation between FFR-related cortical activity from that related to the latency of the response to sound onset, which is found in left auditory cortex. The findings provide a clearer picture of cortical processes for analysis of sound features.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Electroencephalography/methods , Magnetic Resonance Imaging/methods , Music , Adult , Evoked Potentials, Auditory, Brain Stem/physiology , Female , Humans , Male , Random Allocation , Young Adult
11.
Front Aging Neurosci ; 8: 286, 2016.
Article in English | MEDLINE | ID: mdl-27965572

ABSTRACT

Background: Perceptual and electrophysiological studies have found reduced speech discrimination in quiet and noisy environments, delayed neural timing, decreased neural synchrony, and decreased temporal processing ability in elderly adults, even those with normal hearing. However, recent studies have also demonstrated that language experience and auditory training enhance the temporal dynamics of sound encoding in the auditory brainstem response (ABR). The purpose of this study was to explore pitch processing ability at the brainstem level in an aging population with a tonal language background. Method: Mandarin-speaking younger (n = 12) and older (n = 12) adults were recruited for this study. All participants had normal audiometric test results and normal suprathreshold click-evoked ABR. To record frequency following responses (FFRs) elicited by Mandarin lexical tones, two Mandarin Chinese syllables with different fundamental frequency pitch contours (Flat Tone and Falling Tone) were presented at 70 dB SPL. Fundamental frequencies (f0) of both the stimulus and the responses were extracted and compared for individual brainstem responses. Two indices were used to examine different aspects of pitch processing ability at the brainstem level: Pitch Strength and Pitch Correlation. Results: Lexical tone elicited FFRs were overall weaker in the older adult group compared to their younger adult counterparts. As measured by Pitch Strength and Pitch Correlation, statistically significant group differences were only found when the tone with a falling f0 (Falling Tone) was used as the stimulus. Conclusion: Results of this study demonstrated that in a tonal language speaking population, pitch processing at the brainstem level of older adults is not as strong and robust as that of their younger counterparts. These findings are consistent with previous reports on brainstem responses of older adults whose native language is English. On the other hand, lexical tone elicited FFRs have been shown to correlate with the length of language exposure. The degraded responses observed here may therefore have been tempered by experience: the Mandarin-speaking older adults' long-term exposure to lexical tones may have partially counteracted the negative impact of aging, helping to maintain, or at least slow the degradation of, their temporal processing capacity at the brainstem level.

12.
Front Psychol ; 6: 1663, 2015.
Article in English | MEDLINE | ID: mdl-26579044

ABSTRACT

The brain's fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electroencephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts.

13.
J Neurosci ; 35(42): 14341-52, 2015 Oct 21.
Article in English | MEDLINE | ID: mdl-26490871

ABSTRACT

The functional significance of the α rhythm is widely debated. It has been proposed that α reflects sensory inhibition and/or a temporal sampling or "parsing" mechanism. There is also continuing disagreement over the more fundamental questions of which cortical layers generate α rhythms and whether the generation of α is equivalent across sensory systems. To address these latter questions, we analyzed laminar profiles of local field potentials (LFPs) and concomitant multiunit activity (MUA) from macaque V1, S1, and A1 during both spontaneous activity and sensory stimulation. Current source density (CSD) analysis of laminar LFP profiles revealed α current generators in the supragranular, granular, and infragranular layers. MUA phase-locked to local current source/sink configurations confirmed that α rhythms index local neuronal excitability fluctuations. CSD-defined α generators were strongest in the supragranular layers, whereas LFP α power was greatest in the infragranular layers, consistent with some of the previous reports. The discrepancy between LFP and CSD findings appears to be attributable to contamination of the infragranular LFP signal by activity that is volume-conducted from the stronger supragranular α generators. The presence of α generators across cortical depth in V1, S1, and A1 suggests the involvement of α in feedforward as well as feedback processes and is consistent with the view that α rhythms, perhaps in addition to a role in sensory inhibition, may parse sensory input streams in a way that facilitates communication across cortical areas. SIGNIFICANCE STATEMENT: The α rhythm is thought to reflect sensory inhibition and/or a temporal parsing mechanism. Here, we address two outstanding issues: (1) whether α is a general mechanism across sensory systems and (2) which cortical layers generate α oscillations. 
Using intracranial recordings from macaque V1, S1, and A1, we show α band activity with a similar spectral and laminar profile in each of these sensory areas. Furthermore, α generators were present in each of the cortical layers, with a strong source in superficial layers. We argue that previous findings, locating α generators exclusively in the deeper layers, were biased by their reliance on less locally specific local field potential measurements. The laminar distribution of α band activity appears more complex than generally assumed.


Subject(s)
Brain Mapping , Evoked Potentials/physiology , Neocortex/anatomy & histology , Neocortex/physiology , Nerve Net/physiology , Periodicity , Analysis of Variance , Animals , Female , Macaca , Male , Physical Stimulation , Spectrum Analysis
14.
J Vis Exp ; (101): e52420, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26167670

ABSTRACT

Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to home in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3 - 8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillations analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs.
In this article, we describe our method for infant EEG net application, recording, dynamic brain response analysis, and representative results.
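Single-trial oscillatory power of the kind reported here is commonly estimated by convolving each trial with complex Morlet wavelets and averaging power across the band of interest. A simplified sketch (synthetic data; the frequency list and cycle count are illustrative, not the protocol's exact settings):

```python
import numpy as np

def band_power_timecourse(trial, fs, freqs=(3, 4, 5, 6, 7, 8), n_cycles=5):
    """Single-trial oscillatory power averaged over a frequency band,
    estimated by complex Morlet wavelet convolution."""
    power = np.zeros(trial.size)
    for f in freqs:
        dur = n_cycles / f                      # half-length of the wavelet (s)
        wt = np.arange(-dur, dur, 1 / fs)
        std_t = n_cycles / (2 * np.pi * f)      # Gaussian envelope width (s)
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * std_t**2))
        wavelet /= np.abs(wavelet).sum()        # unit-gain normalization
        power += np.abs(np.convolve(trial, wavelet, mode="same")) ** 2
    return power / len(freqs)

fs = 250
t = np.arange(0, 4, 1 / fs)
# Simulated trial: a theta burst (5 Hz) confined to the second half
trial = np.where(t >= 2, np.sin(2 * np.pi * 5 * t), 0.0)
theta = band_power_timecourse(trial, fs)
print(theta[(t >= 2.4) & (t <= 3.6)].mean() > theta[t <= 1.4].mean())
```

The resulting power timecourse localizes the burst in time, which is the property that lets single-trial analyses detect theta bursts that would be smeared or cancelled in a conventional trial average.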


Subject(s)
Brain/physiology , Evoked Potentials, Auditory/physiology , Auditory Cortex/physiology , Brain Mapping/methods , Electroencephalography/methods , Humans , Infant
15.
Hear Res ; 308: 50-9, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24103509

ABSTRACT

Studies over several decades have identified many of the neuronal substrates of music perception by pursuing pitch and rhythm perception separately. Here, we address the question of how these mechanisms interact, starting with the observation that the peripheral pathways of the so-called "Core" and "Matrix" thalamocortical system provide the anatomical bases for tone and rhythm channels. We then examine the hypothesis that these specialized inputs integrate acoustic content within rhythm context in auditory cortex using classical types of "driving" and "modulatory" mechanisms. This hypothesis provides a framework for deriving testable predictions about the early stages of music processing. Furthermore, because thalamocortical circuits are shared by speech and music processing, such a model provides concrete implications for how music experience contributes to the development of robust speech encoding mechanisms.


Subject(s)
Cerebral Cortex/physiology , Music , Thalamus/physiology , Acoustic Stimulation , Animals , Auditory Cortex/physiology , Basal Ganglia/metabolism , Brain/physiology , Cats , Humans , Macaca , Models, Neurological , Neurons , Oscillometry , Pitch Perception , Rats , Somatosensory Cortex/physiology , Speech , Time Factors
16.
J Neurosci ; 33(48): 18746-54, 2013 Nov 27.
Article in English | MEDLINE | ID: mdl-24285881

ABSTRACT

Young infants discriminate phonetically relevant speech contrasts in a universal manner, that is, similarly across languages. This ability fades by 12 months of age as the brain builds language-specific phonemic maps and increasingly responds preferentially to the infant's native language. However, the neural mechanisms that underlie the development of infant preference for native over non-native phonemes remain unclear. Since gamma-band power is known to signal infants' preference for native language rhythm, we hypothesized that it might also indicate preference for native phonemes. Using high-density electroencephalogram/event-related potential (EEG/ERP) recordings and source-localization techniques to identify and locate the ERP generators, we examined changes in brain oscillations while 6-month-old human infants from monolingual English settings listened to English and Spanish syllable contrasts. Neural dynamics were investigated via single-trial analysis of the temporal-spectral composition of brain responses at source level. Increases in 4-6 Hz (theta) power and in phase synchronization at 2-4 Hz (delta/theta) were found to characterize infants' evoked responses to discrimination of native/non-native syllable contrasts mostly in the left auditory source. However, selective enhancement of induced gamma oscillations in the area of anterior cingulate cortex was seen only during native contrast discrimination. These results suggest that gamma oscillations support syllable discrimination in the earliest stages of language acquisition, particularly during the period in which infants begin to develop preferential processing for linguistically relevant phonemic features in their environment. Our results also suggest that by 6 months of age, infants already treat native phonemic contrasts differently from non-native, implying that perceptual specialization and establishment of enduring phonemic memory representations have been initiated.


Subject(s)
Electroencephalography , Language Development , Language , Speech Perception/physiology , Analysis of Variance , Brain/physiology , Brain Mapping , Data Interpretation, Statistical , Electroencephalography Phase Synchronization , England , Evoked Potentials, Auditory/physiology , Female , Humans , Image Processing, Computer-Assisted , Infant , Infant, Newborn , Magnetic Resonance Imaging , Male , Phonetics , Theta Rhythm/physiology
17.
Neuropsychologia ; 51(13): 2812-24, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24055540

ABSTRACT

Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations, plays a key role in this process. We hypothesized that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age.
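Time-frequency analysis of the kind reported here (theta-band enhancement in source activity) is typically done by convolving the signal with complex Morlet wavelets and taking squared magnitude. The function below is a minimal, plain-Python sketch of that standard approach, not the authors' implementation; parameter names and the 5-cycle default are illustrative:

```python
import cmath
import math

def morlet_power(signal, srate, freq, n_cycles=5):
    """Time-resolved power at `freq`: convolve the signal with a complex
    Morlet wavelet (Gaussian-windowed complex exponential), then square
    the magnitude of the result at each sample."""
    sigma_t = n_cycles / (2 * math.pi * freq)   # Gaussian width in seconds
    half = int(3 * sigma_t * srate)             # support out to +/- 3 SD
    wavelet = [cmath.exp(2j * math.pi * freq * (k / srate))
               * math.exp(-0.5 * (k / srate) ** 2 / sigma_t ** 2)
               for k in range(-half, half + 1)]
    power = []
    for i in range(len(signal)):
        acc = 0j
        for k, w in enumerate(wavelet):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += signal[j] * w
        power.append(abs(acc) ** 2)
    return power

# A 6 Hz "theta" signal yields more power at 6 Hz than at a mismatched 12 Hz.
srate = 100.0
sig = [math.sin(2 * math.pi * 6.0 * i / srate) for i in range(200)]
mean6 = sum(morlet_power(sig, srate, 6.0)) / 200
mean12 = sum(morlet_power(sig, srate, 12.0)) / 200
print(mean6 > mean12)  # True
```

Real toolchains compute this with FFT-based convolution over a grid of frequencies; the direct convolution above trades speed for transparency.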


Subject(s)
Brain Mapping , Brain/physiology , Electroencephalography , Evoked Potentials/physiology , Acoustic Stimulation , Analysis of Variance , Female , Functional Laterality/physiology , Humans , Infant , Magnetic Resonance Imaging , Male , Oscillometry , Spectrum Analysis , Time Factors
18.
Neuron ; 77(4): 750-61, 2013 Feb 20.
Article in English | MEDLINE | ID: mdl-23439126

ABSTRACT

Although we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, although the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli.
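The core claim, that entrained subthreshold oscillations act as rhythmic excitability fluctuations that amplify input arriving at the preferred phase, can be caricatured with a simple gain model. The following toy sketch is purely illustrative (nothing here comes from the study's methods); it just shows how a cosine-shaped excitability cycle boosts events that land on the attended rhythm and suppresses those that land in counter-phase:

```python
import math

def entrained_gain(t, rhythm_hz, preferred_phase):
    """Excitability gain from an entrained oscillation: maximal (2.0) when
    the stimulus arrives at the preferred phase, minimal (~0) in counter-phase."""
    phase = 2 * math.pi * rhythm_hz * t
    return 1.0 + math.cos(phase - preferred_phase)

# Attended stream at 1.5 Hz; entrainment aligns high excitability with its beats.
rhythm = 1.5
on_beat  = [entrained_gain(k / rhythm, rhythm, 0.0) for k in range(4)]
off_beat = [entrained_gain(k / rhythm + 1 / (2 * rhythm), rhythm, 0.0) for k in range(4)]
print(round(on_beat[0], 6), round(off_beat[0], 6))  # 2.0 0.0
```

The paper's counter-phase entrainment across differently tuned A1 regions corresponds, in this caricature, to giving attended and unattended frequency channels preferred phases pi radians apart, so the same moment in time amplifies one channel while suppressing the other.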


Subject(s)
Attention/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Neurons/physiology , Periodicity , Acoustic Stimulation/methods , Animals , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Female , Macaca , Male
19.
Hear Res ; 258(1-2): 72-9, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19595755

ABSTRACT

Most auditory events in nature are accompanied by non-auditory signals, such as a view of the speaker's face during face-to-face communication or the vibration of a string during a musical performance. While it is known that accompanying visual and somatosensory signals can benefit auditory perception, often by making the sound seem louder, the specific neural bases for sensory amplification are still debated. In this review, we address what we regard as confusion on two topics that are crucial to our understanding of multisensory integration mechanisms in auditory cortex: (1) Anatomical Underpinnings (e.g., what circuits underlie multisensory convergence), and (2) Temporal Dynamics (e.g., what time windows of integration are physiologically feasible). The combined evidence on multisensory structure and function in auditory cortex advances the emerging view of the relationship between perception and low level multisensory integration. In fact, it seems that the question is no longer whether low level, putatively unisensory cortex is accessible to multisensory influences, but how.


Subject(s)
Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Neurons/physiology , Acoustic Stimulation , Auditory Pathways/physiology , Auditory Perception/physiology , Brain Mapping , Humans , Models, Neurological , Neurons/metabolism , Oscillometry/methods , Perception , Somatosensory Cortex/physiology , Time Factors , Visual Pathways , Visual Perception/physiology
20.
Ear Hear ; 30(5): 505-14, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19546807

ABSTRACT

OBJECTIVE: To examine the impact of hearing loss (HL) on audiovisual (AV) processing in the aging population. We hypothesized that age-related HL would have a pervasive effect on sensory processing, extending beyond the auditory domain. Specifically, we predicted that decreased auditory input to the neural system, in the form of HL over time, would have deleterious effects on multisensory mechanisms. DESIGN: This study compared AV processing between older adults with normal hearing (N = 12) and older adults with mild to moderate sensorineural HL (N = 12). To do this, we recorded cortical evoked potentials that were elicited by watching and listening to recordings of a speaker saying the syllable "bi." Stimuli were presented in three conditions: when hearing the syllable "bi" (auditory), when viewing a person say "bi" (visual), and when seeing and hearing the syllables simultaneously (AV). Presentation level of the auditory stimulus was set to +30 dB SL for each listener to equalize auditory input across groups. RESULTS: In the AV condition, the normal-hearing group showed a clear and consistent decrease in P1 and N1 latencies as well as a reduction in P1 amplitude compared with the sum of the unimodal components (auditory + visual). These integration effects were absent or less consistent in HL participants. CONCLUSIONS: Despite controlling for auditory sensation level, visual influence on auditory processing was significantly less pronounced in HL individuals compared with controls, indicating diminished AV integration in this population. These results demonstrate that HL has a deleterious effect on how older adults combine what they see and hear. Although auditory amplification vastly improves the communication abilities of most hearing-impaired individuals, the associated atrophy of multisensory mechanisms may contribute to a patient's difficulty in everyday settings. Our findings and related studies emphasize the potential value of multimodal tasks and stimuli in the assessment and rehabilitation of hearing impairments.
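The comparison at the heart of this design, the AV response versus the sum of the unimodal components (auditory + visual), is the standard additive-model test for nonlinear multisensory interaction. A minimal sketch of that comparison on toy ERP waveforms (all values here are made up for illustration, not data from the study):

```python
def av_integration_index(av, a, v):
    """Point-by-point difference between the AV response and the additive
    A + V model. Zero means purely additive; nonzero values indicate
    multisensory interaction (negative = sub-additive AV response)."""
    return [av_i - (a_i + v_i) for av_i, a_i, v_i in zip(av, a, v)]

a  = [0.0, 1.0, 2.0, 1.0, 0.0]   # toy auditory-only ERP (arbitrary units)
v  = [0.0, 0.5, 0.5, 0.5, 0.0]   # toy visual-only ERP
av = [0.0, 1.2, 2.0, 1.2, 0.0]   # toy audiovisual ERP (sub-additive)

idx = [round(x, 2) for x in av_integration_index(av, a, v)]
print(idx)  # [0.0, -0.3, -0.5, -0.3, 0.0]
```

In the study, the analogous contrast was made on P1/N1 amplitudes and latencies; here the sub-additive (negative) index mimics the amplitude reduction the normal-hearing group showed relative to the unimodal sum.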


Subject(s)
Electroencephalography , Lipreading , Presbycusis/physiopathology , Aged , Auditory Threshold/physiology , Cerebral Cortex/physiopathology , Evoked Potentials/physiology , Female , Humans , Male , Phonetics , Reaction Time/physiology , Reference Values , Speech Reception Threshold Test