Results 1 - 20 of 37
1.
Brain Sci ; 14(1)2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38275515

ABSTRACT

Tinnitus is a prevalent hearing-related deficit manifested as a phantom sound (internally generated by the brain) that is heard as a high-frequency tone by the majority of afflicted persons. Chronic tinnitus is debilitating, leading to distress, sleep deprivation, anxiety, and even suicidal thoughts. It has been theorized that, in most afflicted persons, tinnitus can be attributed to the loss of high-frequency input from the cochlea to the auditory cortex, known as deafferentation. Deafferentation due to hearing loss develops with aging and progressively causes the tonotopic regions that coded for the lost high frequencies to synchronize, producing a phantom high-frequency sound sensation. Approaches to tinnitus remediation that have demonstrated promise include inhibitory drugs, tinnitus-specific frequency notching to increase lateral inhibition to the deafferented neurons, and multisensory approaches (auditory-motor and audiovisual) that couple multisensory stimulation to the deafferented neural populations. The goal of this review is to put forward a theoretical framework for a multisensory approach to remedy tinnitus. Our framework posits that, because vision exerts a modulatory (inhibitory and excitatory) influence on the auditory pathway, prolonged engagement in audiovisual activity, especially during daily discourse, as opposed to auditory-only activity/discourse, can progressively reorganize the deafferented neural populations, reducing the synchrony of the deafferented neurons and, over time, the severity of tinnitus.

2.
PLoS One ; 18(9): e0291600, 2023.
Article in English | MEDLINE | ID: mdl-37713394

ABSTRACT

BACKGROUND: The cochlear implant (CI) has proven to be a successful treatment for patients with severe-to-profound sensorineural hearing loss; however, outcomes vary. We sought to evaluate particular mutations discovered in previously established sensory- and neural-partition genes and compare post-operative CI outcomes. MATERIALS AND METHODS: Using a prospective cohort study design, blood samples collected from adult patients with non-syndromic hearing loss undergoing CI were tested for 54 genes of interest with high-throughput sequencing. Patients were categorized as having a pathogenic variant in the sensory partition, a pathogenic variant in the neural partition, pathogenic variants in both partitions, or no variant identified. Speech perception performance was assessed pre-operatively and 12 months post-operatively. Performance measures were compared across genetic mutation and variant status using a Wilcoxon rank-sum test, with P < 0.05 considered statistically significant. RESULTS: Thirty-six cochlear implant patients underwent genetic testing and speech understanding measurements. Of the 54 genes interrogated, three patients (8.3%) demonstrated a pathogenic mutation in the neural partition (within the TMPRSS3 gene) and one patient (2.8%) demonstrated a pathogenic mutation in the sensory partition (within the POU4F3 gene). In addition, 3 patients (8.3%) had an isolated neural-partition variant of unknown significance (VUS), 5 patients (13.9%) had an isolated sensory-partition VUS, 1 patient (2.8%) had a variant in both partitions, and 23 patients (63.9%) had no mutation or variant identified. There was no statistically significant difference in speech perception scores between patients with sensory- or neural-partition pathogenic mutations or VUS. Performance varied among patients with TMPRSS3 gene mutations.
CONCLUSION: The impact of genetic mutations on post-operative outcomes in CI patients was heterogeneous. Further research and dissemination of identified mutations and subsequent CI performance are warranted to elucidate which mutations within target genes provide the best non-invasive prognostic capability.
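
The group comparison described above (a Wilcoxon rank-sum test of speech scores between variant groups, P < 0.05) can be sketched as follows. This is a minimal illustration, not the authors' analysis code, and the score values are hypothetical, not study data.

```python
# Hedged sketch: Wilcoxon rank-sum comparison of post-operative speech
# perception scores between patients with and without a pathogenic variant.
# All numeric values below are hypothetical illustrations.
from scipy.stats import ranksums

scores_variant = [42.0, 55.0, 61.0, 48.0]           # pathogenic variant found
scores_no_variant = [50.0, 58.0, 45.0, 62.0, 53.0]  # no variant identified

stat, p = ranksums(scores_variant, scores_no_variant)
print(f"rank-sum statistic = {stat:.3f}, p = {p:.3f}")
if p < 0.05:
    print("significant difference between groups")
else:
    print("no significant difference between groups")
```

With heavily overlapping groups like these, the test does not reject the null, mirroring the study's finding of no significant score difference by variant status.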


Subject(s)
Cochlear Implantation , Cochlear Implants , Humans , Adult , Prospective Studies , Mutation , Genetic Testing , Membrane Proteins , Neoplasm Proteins , Serine Endopeptidases/genetics
3.
Sci Rep ; 13(1): 15849, 2023 09 22.
Article in English | MEDLINE | ID: mdl-37740012

ABSTRACT

Language comprehension is a complex process involving an extensive brain network. Brain regions responsible for prosodic processing have been studied in adults; however, much less is known about the neural bases of prosodic processing in children. Using magnetoencephalography (MEG), we mapped regions supporting speech envelope tracking (a marker of prosodic processing) in 80 typically developing children, ages 4-18 years, completing a stories listening paradigm. Neuromagnetic signals coherent with the speech envelope were localized using dynamic imaging of coherent sources (DICS). Across the group, we observed coherence in bilateral perisylvian cortex. We observed age-related increases in coherence to the speech envelope in the right superior temporal gyrus (r = 0.31, df = 78, p = 0.0047) and primary auditory cortex (r = 0.27, df = 78, p = 0.016); age-related decreases in coherence to the speech envelope were observed in the left superior temporal gyrus (r = -0.25, df = 78, p = 0.026). This pattern may indicate a refinement of the networks responsible for prosodic processing during development, where language areas in the right hemisphere become increasingly specialized for prosodic processing. Altogether, these results reveal a distinct neurodevelopmental trajectory for the processing of prosodic cues, highlighting the presence of supportive language functions in the right hemisphere. Findings from this dataset of typically developing children may serve as a potential reference timeline for assessing children with neurodevelopmental hearing and speech disorders.
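
The core metric here, coherence between neural activity and the speech envelope, can be illustrated at the sensor level. The study localized coherence with a DICS beamformer; the sketch below only demonstrates the coherence computation itself, on fully synthetic signals (a simulated neural trace that tracks a simulated slow envelope).

```python
# Hedged sketch of "speech envelope tracking": magnitude-squared coherence
# between a synthetic neural time series and a synthetic speech envelope.
import numpy as np
from scipy.signal import butter, filtfilt, coherence

rng = np.random.default_rng(0)
fs = 200.0
t = np.arange(0, 60, 1 / fs)                            # 60 s of data

b, a = butter(4, 8 / (fs / 2), btype="low")
envelope = filtfilt(b, a, rng.standard_normal(t.size))  # slow (<8 Hz) envelope
neural = envelope + 0.5 * rng.standard_normal(t.size)   # "neural" signal tracks it

f, coh = coherence(envelope, neural, fs=fs, nperseg=int(4 * fs))
coh_low = coh[f < 8].mean()    # coherence in the envelope band
coh_high = coh[f > 40].mean()  # coherence outside it
print(f"coherence <8 Hz: {coh_low:.2f}; >40 Hz: {coh_high:.2f}")
```

Because the simulated neural trace contains the envelope plus noise, coherence is high in the slow envelope band and near the estimation floor elsewhere, which is the signature the study's coherence maps are based on.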


Subject(s)
Brain , Cerebral Cortex , Adult , Humans , Child , Cues , Hearing , Language
4.
Front Hum Neurosci ; 16: 1043499, 2022.
Article in English | MEDLINE | ID: mdl-36419642

ABSTRACT

There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences between clinical and "real-world" listening environments and stimuli. Speech in the real world is often accompanied by visual cues and background environmental noise and generally occurs in a conversational context, all factors that could affect listening demand. Thus, our objectives were to determine whether brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density electroencephalography (EEG) while CI users listened to/watched a naturalistic stimulus (the television show "The Office"). We used continuous EEG to quantify "speech neural tracking" via temporal response functions (TRFs) to the show's soundtrack, along with 8-12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise at three signal-to-noise ratios (SNRs: +5, +10, and +15 dB) was presented to vary the difficulty of following the television show, mimicking a natural noisy environment. The task also included an audio-only (no video) condition. After each condition, participants subjectively rated listening demand and how much of the words and conversations they felt they understood. Fifteen CI users reported progressively higher listening demand and understanding of fewer words and conversations with increasing background noise. Listening demand and conversation understanding in the audio-only condition were comparable to those in the highest noise condition (+5 dB). Increasing background noise affected speech neural tracking at the group level, in addition to eliciting strong individual differences. Mixed-effects modeling showed that listening demand and conversation understanding were correlated with early cortical speech tracking, such that high demand and low conversation understanding occurred with lower-amplitude TRFs.
In the high-noise condition, greater listening demand was associated with lower parietal alpha power. No significant correlations were observed between TRF/alpha measures and clinical speech perception scores. These results are similar to previous findings showing little relationship between clinical speech perception and quality of life in CI users. However, physiological responses to complex natural speech may provide an objective measure of aspects of quality-of-life measures such as self-perceived listening demand.
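
A TRF of the kind quantified above is, in essence, a regularized regression of the EEG onto time-lagged copies of the speech envelope. The sketch below illustrates this on synthetic data; the lag range, regularization strength, and signal parameters are illustrative choices, not the study's settings.

```python
# Hedged sketch of a temporal response function (TRF): ridge regression of a
# simulated EEG channel onto time-lagged copies of a speech envelope.
import numpy as np

rng = np.random.default_rng(1)
fs = 100                           # Hz
n = 60 * fs                        # 60 s of data
envelope = rng.standard_normal(n)  # stand-in for a speech envelope

# Ground-truth TRF: a response peaking ~100 ms after the stimulus
lags = np.arange(0, int(0.3 * fs))  # 0-300 ms lags
true_trf = np.exp(-((lags / fs - 0.1) ** 2) / (2 * 0.03 ** 2))
eeg = np.convolve(envelope, true_trf)[:n] + 2.0 * rng.standard_normal(n)

# Design matrix of lagged envelopes: X[t, j] = envelope[t - lag_j]
X = np.zeros((n, lags.size))
for j, lag in enumerate(lags):
    X[lag:, j] = envelope[: n - lag]

# Ridge solution: w = (X'X + lambda*I)^-1 X'y
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ eeg)

peak_ms = 1000 * lags[np.argmax(w)] / fs
print(f"estimated TRF peak latency: {peak_ms:.0f} ms")
```

The recovered weight vector `w` is the TRF; its amplitude at early lags is the kind of "early cortical speech tracking" measure the mixed-effects models above relate to listening demand.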

5.
Sci Rep ; 12(1): 17749, 2022 10 22.
Article in English | MEDLINE | ID: mdl-36273017

ABSTRACT

Deaf individuals who use a cochlear implant (CI) have remarkably different outcomes for auditory speech communication ability. One factor assumed to affect CI outcomes is visual crossmodal plasticity in auditory cortex, where deprived auditory regions begin to support non-auditory functions such as vision. Some previous research has viewed crossmodal plasticity as harmful to speech outcomes for CI users if it interferes with sound processing, while other work has demonstrated that plasticity related to visual language may be beneficial for speech recovery. To clarify this issue, we used electroencephalography (EEG) to measure brain responses to a partial face speaking a silent single-syllable word (visual language) in 15 CI users and 13 age-matched typical-hearing controls. We used source analysis of EEG activity to measure crossmodal visual responses in auditory cortex and then compared them to CI users' speech-in-noise listening ability. CI users' brain response to the onset of the video stimulus (the face) was larger than that of controls in left auditory cortex, consistent with crossmodal activation after deafness. CI users also produced a mixture of alpha (8-12 Hz) synchronization and desynchronization in auditory cortex during lip movement, whereas controls showed only desynchronization. CI users with higher speech scores had stronger crossmodal responses in auditory cortex to the onset of the video, but those with lower speech scores showed increases in alpha power in auditory areas during lip movement. Therefore, evidence of crossmodal reorganization in CI users does not necessarily predict poor speech outcomes, and differences in crossmodal activation during lip reading may instead relate to strategies that CI users adopt in audiovisual speech communication.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Humans , Speech , Deafness/surgery , Speech Perception/physiology
6.
Ear Hear ; 43(6): 1904-1916, 2022.
Article in English | MEDLINE | ID: mdl-35544449

ABSTRACT

OBJECTIVE: Evidence suggests that hearing loss increases the risk of cognitive impairment. However, the relationship between hearing loss and cognition varies considerably across studies, which may be partially explained by demographic and health factors that are not systematically accounted for in statistical models. DESIGN: Middle-aged to older adult participants (N = 149) completed a web-based assessment that included speech-in-noise (SiN) and self-report measures of hearing, as well as auditory and visual cognitive interference (Stroop) tasks. Correlations between hearing and cognitive interference measures were computed with and without controlling for age, sex, education, depression, anxiety, and self-rated health. RESULTS: The risk of having objective SiN difficulties differed between males and females. All demographic and health variables, except education, influenced the likelihood of reporting hearing difficulties. Small but significant relationships between objective and reported hearing difficulties and the measures of cognitive interference were observed when analyses controlled for demographic and health factors. Furthermore, when analyses were stratified by sex, different relationships between hearing and cognitive interference measures were found. Self-reported difficulty with spatial hearing and objective SiN performance were better predictors of inhibitory control in females, whereas self-reported difficulty with speech was a better predictor of inhibitory control in males. This suggests that inhibitory control is associated with different listening abilities in males and females. CONCLUSIONS: The results highlight the importance of controlling for participant characteristics when assessing the relationship between hearing and cognitive interference; the same may hold for other cognitive functions, but this requires further investigation.
Furthermore, this study is the first to show that the relationship between hearing and cognitive interference can be captured using web-based tasks that are simple to implement and administer at home without assistance, paving the way for future online screening tests assessing the effects of hearing loss on cognition.


Subject(s)
Deafness , Hearing Loss , Speech Perception , Middle Aged , Male , Female , Humans , Aged , Noise , Hearing , Auditory Perception , Cognition
7.
Mol Ther ; 30(2): 519-533, 2022 02 02.
Article in English | MEDLINE | ID: mdl-34298130

ABSTRACT

Moderate noise exposure may cause acute loss of cochlear synapses without affecting the cochlear hair cells or the hearing threshold; the damage thus remains "hidden" from standard clinical tests. This cochlear synaptopathy is one of the main pathologies of noise-induced hearing loss (NIHL). There is no effective treatment for NIHL, mainly because of the lack of a proper drug-delivery technique. We hypothesized that local magnetic delivery of gene therapy into the inner ear could be beneficial for NIHL. In this study, we used superparamagnetic iron oxide nanoparticles (SPIONs) and a recombinant adeno-associated virus (AAV) vector (AAV2(quad Y-F)) to deliver brain-derived neurotrophic factor (BDNF) gene therapy into the rat inner ear via minimally invasive magnetic targeting. We found that magnetic targeting effectively accumulates and distributes the SPION-tagged AAV2(quad Y-F)-BDNF vector in the inner ear. We also found that AAV2(quad Y-F) efficiently transfects cochlear hair cells and enhances BDNF gene expression. Enhanced BDNF gene expression substantially reverses noise-induced BDNF gene downregulation, auditory brainstem response (ABR) wave I amplitude reduction, and synapse loss. These results suggest that magnetic targeting of AAV2(quad Y-F)-mediated BDNF gene therapy could reverse cochlear synaptopathy after NIHL.


Subject(s)
Brain-Derived Neurotrophic Factor , Dependovirus , Animals , Brain-Derived Neurotrophic Factor/genetics , Brain-Derived Neurotrophic Factor/metabolism , Cochlea/metabolism , Dependovirus/genetics , Evoked Potentials, Auditory, Brain Stem , Genetic Therapy/methods , Hearing , Magnetic Phenomena , Rats
8.
PLoS One ; 16(7): e0254162, 2021.
Article in English | MEDLINE | ID: mdl-34242290

ABSTRACT

Listening to speech in noise is effortful for individuals with hearing loss, even if they have received a hearing prosthesis such as a hearing aid or cochlear implant (CI). At present, little is known about the neural functions that support listening effort. One form of neural activity that has been suggested to reflect listening effort is the power of 8-12 Hz (alpha) oscillations measured by electroencephalography (EEG). Alpha power in two cortical regions has been associated with effortful listening: the left inferior frontal gyrus (IFG) and parietal cortex. However, these relationships have not been examined in the same listeners. Further, few studies have investigated neural correlates of effort in individuals with cochlear implants. Here we tested 16 CI users in a novel effort-focused speech-in-noise listening paradigm and confirmed a relationship between alpha power and self-reported effort ratings in parietal regions, but not in left IFG. The parietal relationship was not linear but quadratic: alpha power was comparatively lower when effort ratings were at the top and bottom of the effort scale and higher when effort ratings were in the middle of the scale. Results are discussed in terms of the cognitive systems engaged in difficult listening situations and the implications for clinical translation.


Subject(s)
Cochlear Implants , Speech , Adult , Auditory Perception , Humans , Male , Middle Aged , Noise
9.
Eur J Neurosci ; 54(3): 5016-5037, 2021 08.
Article in English | MEDLINE | ID: mdl-34146363

ABSTRACT

A common concern for individuals with severe-to-profound hearing loss fitted with cochlear implants (CIs) is difficulty following conversations in noisy environments. Recent work has suggested that these difficulties are related to individual differences in brain function, including verbal working memory and the degree of cross-modal reorganization of auditory areas for visual processing. However, the neural basis for these relationships is not fully understood. Here, we investigated neural correlates of visual verbal working memory and sensory plasticity in 14 CI users and age-matched normal-hearing (NH) controls. While we recorded the high-density electroencephalogram (EEG), participants completed a modified Sternberg visual working memory task in which sets of letters and numbers were presented visually and then recalled at a later time. Results suggested that CI users had behavioural working memory performance comparable with that of NH controls. However, CI users had more pronounced neural activity during visual stimulus encoding, including stronger visual-evoked activity in auditory and visual cortices, larger modulations of neural oscillations and increased frontotemporal connectivity. In contrast, during memory retention of the characters, CI users had descriptively weaker neural oscillations and significantly lower frontotemporal connectivity. We interpret the differences in neural correlates of visual stimulus processing in CI users through the lens of cross-modal and intramodal plasticity.


Subject(s)
Auditory Cortex , Cochlear Implantation , Cochlear Implants , Deafness , Hearing , Humans , Memory, Short-Term
10.
Sci Rep ; 10(1): 6141, 2020 04 09.
Article in English | MEDLINE | ID: mdl-32273536

ABSTRACT

Hearing impairment disrupts processes of selective attention that help listeners attend to one sound source over competing sounds in the environment. Hearing prostheses (hearing aids and cochlear implants, CIs) do not fully remedy these issues. In normal hearing, mechanisms of selective attention arise through the facilitation and suppression of neural activity that represents sound sources. However, it is unclear how hearing impairment affects these neural processes, which is key to understanding why listening difficulty remains. Here, severely impaired listeners treated with a CI, and age-matched normal-hearing controls, attended to one of two identical but spatially separated talkers while multichannel EEG was recorded. Whereas neural representations of attended and ignored speech were differentiated at early (~150 ms) cortical processing stages in controls, differentiation of talker representations only occurred later (~250 ms) in CI users. CI users, but not controls, also showed evidence of spatial suppression of the ignored talker through lateralized alpha (7-14 Hz) oscillations. However, CI users' perceptual performance was predicted only by early-stage talker differentiation. We conclude that multi-talker listening difficulty remains for impaired listeners due to deficits in early-stage separation of cortical speech representations, despite neural evidence that they use spatial information to guide selective attention.


Subject(s)
Cerebral Cortex/physiopathology , Hearing Loss/physiopathology , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Aged , Attention/physiology , Case-Control Studies , Cerebral Cortex/physiology , Cochlear Implants , Electroencephalography , Hearing Loss/psychology , Hearing Loss/therapy , Humans , Male , Middle Aged , Young Adult
11.
Front Neurosci ; 14: 124, 2020.
Article in English | MEDLINE | ID: mdl-32132897

ABSTRACT

OBJECTIVES: The ability to understand speech is highly variable in people with cochlear implants (CIs), and to date there are no objective measures that identify the root of this discrepancy. However, behavioral measures of temporal processing such as the temporal modulation transfer function (TMTF) have previously been found to be related to vowel and consonant identification in CI users. The acoustic change complex (ACC) is a cortical auditory-evoked potential that can be elicited by a "change" in an ongoing stimulus. In this study, the ACC elicited by an amplitude modulation (AM) change was related to measures of speech perception as well as to the AM detection threshold in CI users. METHODS: Ten CI users (mean age: 50 years) participated in this study. All subjects completed behavioral tests that included both speech measures and amplitude modulation detection to obtain a TMTF. CI users were categorized as "good" (n = 6) or "poor" (n = 4) performers based on their speech-in-noise score (<50%). Sixty-four-channel electroencephalographic recordings were conducted while CI users passively listened to AM-change sounds presented in a free-field setting. The AM-change stimulus was white noise with four different AM rates (4, 40, 100, and 300 Hz). RESULTS: Behavioral results show that AM detection thresholds in CI users were higher than those of the normal-hearing (NH) group at all AM rates. The electrophysiological data suggest that N1 responses were significantly decreased in amplitude and increased in latency in CI users compared with NH controls. In addition, N1 latencies for the poor CI performers were delayed compared with the good CI performers. The N1 latency for 40 Hz AM was correlated with various speech perception measures. CONCLUSION: Our data suggest that the ACC to AM change provides an objective index of speech perception abilities that can explain some of the variation in speech perception observed among CI users.

12.
Sci Rep ; 9(1): 11278, 2019 08 02.
Article in English | MEDLINE | ID: mdl-31375712

ABSTRACT

Listening in a noisy environment is challenging for individuals with normal hearing and can be a significant burden for those with hearing impairment. The extent to which this burden is alleviated by a hearing device is a major, unresolved issue for rehabilitation. Here, we found that adult users of cochlear implants (CIs) self-reported listening effort during a speech-in-noise task that was positively related to alpha oscillatory activity in the left inferior frontal cortex (canonical Broca's area) and inversely related to speech-envelope coherence in the 2-5 Hz range originating in the superior temporal plane encompassing auditory cortex. Left frontal cortex coherence in the 2-5 Hz range also predicted speech-in-noise identification. These data demonstrate that neural oscillations predict both speech perception ability in noise and listening effort.


Subject(s)
Auditory Cortex/physiology , Broca Area/physiology , Frontal Lobe/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Aged , Auditory Perception/physiology , Brain Mapping , Cochlear Implantation/methods , Female , Hearing Loss/diagnostic imaging , Hearing Loss/physiopathology , Hearing Tests , Humans , Male , Middle Aged , Noise/adverse effects
13.
Front Hum Neurosci ; 11: 88, 2017.
Article in English | MEDLINE | ID: mdl-28286478

ABSTRACT

Understanding speech in noise (SiN) is a complex task involving sensory encoding and cognitive resources, including working memory and attention. Previous work has shown that brain oscillations, particularly alpha rhythms (8-12 Hz), play important roles in sensory processes involving working memory and attention. However, no previous study has examined brain oscillations during performance of a continuous speech perception test. The aim of this study was to measure cortical alpha during attentive listening in a commonly used SiN task (digits-in-noise, DiN) to better understand the neural processes associated with "top-down" cognitive processing in adverse listening environments. We recruited 14 normal-hearing (NH) young adults. The DiN speech reception threshold (SRT) was measured in an initial behavioral experiment. EEG activity was then collected: (i) while performing the DiN near the SRT; and (ii) while attending to a silent, close-captioned video during presentation of identical digit stimuli that the participant was instructed to ignore. Three main results were obtained: (1) during attentive ("active") listening to the DiN, a number of distinct neural oscillations were observed (mainly alpha, with some beta, 15-30 Hz), whereas no oscillations were observed during attention to the video ("passive" listening); (2) overall, alpha event-related synchronization (ERS) of central/parietal sources was observed during active listening when data were grand-averaged across all participants, while in some participants a smaller-magnitude alpha event-related desynchronization (ERD), originating in temporal regions, was observed; and (3) when individual EEG trials were sorted according to correct and incorrect digit identification, the temporal alpha ERD was consistently greater on correctly identified trials. No such consistency was observed for the central/parietal alpha ERS. These data demonstrate that changes in alpha activity are specific to listening conditions.
To our knowledge, this is the first report of almost no brain-oscillatory changes during a passive task compared with an active task in any sensory modality. Temporal alpha ERD was related to correct digit identification.
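
The ERS/ERD measures above are percent changes in band power relative to a baseline: positive values indicate synchronization (ERS), negative values desynchronization (ERD). A minimal sketch on synthetic data, with illustrative filter and epoch choices that are not the study's parameters:

```python
# Hedged sketch of event-related (de)synchronization: alpha-band power during
# a task interval expressed as percent change from a pre-stimulus baseline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(2)
fs = 250.0
t = np.arange(-1.0, 2.0, 1 / fs)          # 1 s baseline, 2 s task

# Synthetic EEG: 10 Hz alpha whose amplitude doubles during the task (ERS)
alpha_amp = np.where(t < 0, 1.0, 2.0)
eeg = alpha_amp * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass 8-12 Hz, then instantaneous power via the Hilbert envelope
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
alpha_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

baseline = alpha_power[t < 0].mean()
task = alpha_power[t >= 0].mean()
erd_ers = 100 * (task - baseline) / baseline   # % change: >0 ERS, <0 ERD
print(f"alpha power change: {erd_ers:+.0f}% (positive = ERS)")
```

Sorting such per-trial values by behavioral outcome (correct vs. incorrect digits) is the comparison that revealed the temporal ERD effect above.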

14.
Ear Hear ; 37(5): e322-35, 2016.
Article in English | MEDLINE | ID: mdl-27556365

ABSTRACT

OBJECTIVE: To record envelope following responses (EFRs) to monaural amplitude-modulated broadband noise carriers in which amplitude modulation (AM) depth was slowly changed over time, and to compare these objective electrophysiological measures with subjective behavioral thresholds in young normal-hearing and older subjects. PARTICIPANTS: Three groups of subjects were tested: a young normal-hearing group (YNH; 18 to 28 years; pure-tone average = 5 dB HL), a first older group ("O1"; 41 to 62 years; pure-tone average = 19 dB HL), and a second older group ("O2"; 67 to 82 years; pure-tone average = 35 dB HL). Electrophysiology: In condition 1, the AM depth (41 Hz) of a white-noise carrier was continuously varied from 2% to 100% (5%/s), and EFRs were analyzed as a function of AM depth. In condition 2, auditory steady-state responses were recorded at fixed AM depths (100%, 75%, 50%, and 25%) at a rate of 41 Hz. Psychophysics: A three-alternative forced-choice (3AFC) procedure was used to track the AM depth needed to detect AM at 41 Hz (AM detection). The minimum AM depth capable of eliciting a statistically detectable EFR was defined as the physiological AM detection threshold. RESULTS: Across all ages, the fixed-depth auditory steady-state response and the swept-AM EFR yielded similar response amplitudes. Statistically significant correlations (r = 0.48) were observed between behavioral and physiological AM detection thresholds. Older subjects had slightly higher (not significant) behavioral AM detection thresholds than younger subjects. AM detection thresholds did not correlate with age. All groups showed a sigmoidal EFR amplitude versus AM depth function, but the shape of the function differed across groups. The O2 group reached EFR amplitude plateau levels at lower modulation depths than the normal-hearing group and had a narrower neural dynamic range.
In the young normal-hearing group, EFR phase did not vary with AM depth, whereas in the older group, EFR phase showed a consistent decrease with increasing AM depth. The degree of phase change (phase slope) was significantly correlated with the pure-tone threshold at 4 kHz. CONCLUSIONS: EFRs can be recorded using either the swept-modulation-depth or the discrete-AM-depth technique. Sweep recordings may provide additional valuable information at suprathreshold intensities, including the plateau level, slope, and dynamic range. Older subjects had a reduced neural dynamic range compared with younger subjects, suggesting that aging affects the ability of the auditory system to encode subtle differences in AM depth. The phase-slope differences are likely related to differences in low- and high-frequency contributions to the EFR. The behavioral-physiological AM depth threshold relationship was significant but likely too weak to be clinically useful in the present subjects, who did not have apparent temporal processing deficits.
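
The plateau, slope, and dynamic-range measures above come from fitting a sigmoid to the EFR amplitude versus AM depth function. A minimal sketch of such a fit, using hypothetical depth/amplitude values rather than study data:

```python
# Hedged sketch: fitting a sigmoid to EFR amplitude as a function of AM depth.
# The data points and starting values below are hypothetical illustrations.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(depth, top, midpoint, slope):
    """Saturating EFR amplitude (nV) as a function of AM depth (%)."""
    return top / (1 + np.exp(-(depth - midpoint) / slope))

depth = np.array([2, 10, 25, 50, 75, 100], dtype=float)  # AM depth (%)
amp = np.array([5, 12, 45, 80, 92, 95], dtype=float)     # EFR amplitude (nV)

params, _ = curve_fit(sigmoid, depth, amp, p0=[100, 30, 10])
top, midpoint, slope = params
print(f"plateau ~{top:.0f} nV, half-maximum at ~{midpoint:.0f}% depth")
```

The fitted `top` is the plateau level and the depth range over which the curve rises is the neural dynamic range, which the study found to be narrower in the O2 group.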


Subject(s)
Aging/physiology , Evoked Potentials, Auditory, Brain Stem/physiology , Hearing/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Audiometry, Pure-Tone , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Middle Aged , Young Adult
15.
Brain Connect ; 6(1): 76-83, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26456242

ABSTRACT

Using noninvasive neuroimaging, researchers have shown that young children have bilateral and diffuse language networks, which become increasingly left-lateralized and focal with development. Connectivity within the distributed pediatric language network has been minimally studied, and conventional neuroimaging approaches do not distinguish task-related signal changes from those that are task essential. In this study, we propose a novel multimodal method to map core language sites from patterns of information flux. We retrospectively analyze neuroimaging data collected in two groups of children, ages 5-18 years, performing verb generation in functional magnetic resonance imaging (fMRI) (n = 343) and magnetoencephalography (MEG) (n = 21). The fMRI data were conventionally analyzed and the group activation map parcellated to define node locations. Neuronal activity at each node was estimated from MEG data using a linearly constrained minimum variance beamformer, and effective connectivity within canonical frequency bands was computed using the phase slope index metric. We observed significant (p ≤ 0.05) effective connections in all subjects. The number of suprathreshold connections was significantly and linearly correlated with participants' age (r = 0.50, n = 21, p ≤ 0.05), suggesting that core language sites emerge as part of the normal developmental trajectory. Across frequencies, we observed significant effective connectivity among proximal left frontal nodes. Within the low frequency bands, information flux was rostrally directed within a focal, left frontal region, approximating Broca's area. At higher frequencies, we observed increased connectivity involving bilateral perisylvian nodes. Frequency-specific differences in patterns of information flux were resolved through fast (i.e., MEG) neuroimaging.
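
The phase slope index (PSI) used above infers directionality from how the phase of the complex coherency changes across frequencies: a consistent phase slope means one signal systematically leads the other. A minimal sketch on synthetic signals, where one channel is a delayed, noisy copy of the other; note that sign conventions differ between implementations, and the sign flip below is an assumption matched to scipy's conj(X)·Y cross-spectrum so that a positive value means x leads y.

```python
# Hedged sketch of the phase slope index (PSI) on synthetic data:
# y is a delayed copy of x plus noise, so information flows x -> y.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(3)
fs = 200.0
n = int(120 * fs)                           # 120 s of data
x = rng.standard_normal(n)
delay = 5                                   # y lags x by 25 ms
y = np.roll(x, delay) + 1.0 * rng.standard_normal(n)

nper = int(2 * fs)
f, pxy = csd(x, y, fs=fs, nperseg=nper)
_, pxx = csd(x, x, fs=fs, nperseg=nper)
_, pyy = csd(y, y, fs=fs, nperseg=nper)
cohy = pxy / np.sqrt(pxx.real * pyy.real)   # complex coherency

band = (f >= 8) & (f <= 30)                 # illustrative band of interest
c = cohy[band]
# Sum of phase increments across neighboring frequency bins, sign-flipped so
# that a positive PSI indicates flux from x to y (x leads).
psi = -np.imag(np.conj(c[:-1]) * c[1:]).sum()
print(f"PSI (x -> y): {psi:+.3f}")
```

Thresholding such PSI values (e.g., against surrogate data) is how "significant effective connections" between nodes would be declared in an analysis of this kind.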


Subject(s)
Brain Mapping , Frontal Lobe/growth & development , Frontal Lobe/physiology , Language , Magnetic Resonance Imaging , Neural Pathways/physiology , Adolescent , Brain Mapping/methods , Child , Female , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography/methods , Male
16.
Clin Neurophysiol ; 127(2): 1603-1617, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26616545

ABSTRACT

OBJECTIVE: Voice onset time (VOT) is a critical temporal cue for perception of speech in cochlear implant (CI) users. We assessed the cortical auditory evoked potentials (CAEPs) to consonant vowels (CVs) with varying VOTs and related these potentials to various speech perception measures. METHODS: CAEPs were recorded from 64 scalp electrodes during passive listening in CI and normal-hearing (NH) groups. Speech stimuli were synthesized CVs from a 6-step VOT /ba/-/pa/ continuum ranging from 0 to 50 ms VOT in 10-ms steps. Behavioral measures included the 50% boundary point for categorical perception ("ba" to "pa") from an active condition task. RESULTS: Behavioral measures: CI users with poor speech perception performance had prolonged 50% VOT boundary points compared to NH subjects. The 50% boundary point was also significantly correlated to the ability to discriminate consonants in quiet and noise masking. Electrophysiology: The most striking difference between the NH and CI subjects was that the P2 response was significantly reduced in amplitude in the CI group compared to NH. N1 amplitude did not differ between NH and CI groups. P2 latency increased with increases in VOT for both NH and CI groups. P2 was delayed more in CI users with poor speech perception compared to NH subjects. N1 amplitude was significantly related to consonant perception in noise while P2 latency was significantly related to vowel perception in noise. When dipole source modelling in auditory cortex was used to characterize N1/P2, more significant relationships were observed with speech perception measures compared to the same N1/P2 activity when measured at the scalp. N1 dipole amplitude measures were significantly correlated with consonants in noise discrimination. Like N1, the P2 dipole amplitude was correlated with consonant discrimination, but additional significant relationships were observed such as sentence and word identification. 
CONCLUSIONS: P2 responses to a VOT continuum stimulus were different between NH subjects and CI users. P2 responses show more significant relationships with speech perception than N1 responses. SIGNIFICANCE: The current findings indicate that N1/P2 measures during a passive listening task relate to speech perception outcomes after cochlear implantation.
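The 50% VOT boundary point used in the behavioral measures above is conventionally estimated by fitting a psychometric (logistic) function to the proportion of "pa" responses across the continuum. A minimal sketch of that fit, using hypothetical response proportions on the study's 6-step 0-50 ms continuum (the data values and starting guesses are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Proportion of 'pa' responses as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical listener data on the 6-step /ba/-/pa/ continuum (0-50 ms)
vots = np.array([0, 10, 20, 30, 40, 50], dtype=float)
prop_pa = np.array([0.02, 0.05, 0.30, 0.80, 0.95, 0.98])

# Fit the psychometric function; 'boundary' is the 50% crossover point
(boundary, slope), _ = curve_fit(logistic, vots, prop_pa, p0=[25.0, 0.3])
print(f"50% VOT boundary: {boundary:.1f} ms")
```

A prolonged boundary for a poorly performing CI user would appear here simply as a larger fitted `boundary` value than for NH listeners.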


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Cochlear Implants , Evoked Potentials, Auditory/physiology , Speech Perception/physiology , Voice/physiology , Adult , Aged , Cochlear Implantation , Female , Humans , Male , Middle Aged , Young Adult
17.
Front Neurosci ; 9: 38, 2015.
Article in English | MEDLINE | ID: mdl-25717291

ABSTRACT

OBJECTIVE: Sound modulation is a critical temporal cue for the perception of speech and environmental sounds. To examine auditory cortical responses to sound modulation, we developed an acoustic change stimulus involving amplitude modulation (AM) of ongoing noise. The AM transitions in this stimulus evoked an acoustic change complex (ACC) that was examined parametrically in terms of rate and depth of modulation and hemispheric symmetry. METHODS: Auditory cortical potentials were recorded from 64 scalp electrodes during passive listening in two conditions: (1) ACC transitions from white noise to 4, 40, or 300 Hz AM at depths of 100, 50, or 25%, each lasting 1 s, and (2) 1-s AM noise bursts at the same modulation rates. Behavioral measures included AM detection from an attend-ACC condition and AM depth thresholds (i.e., a temporal modulation transfer function, TMTF). RESULTS: The N1 response of the ACC was large to 4 and 40 Hz AM and small to 300 Hz AM. In contrast, the opposite pattern was observed with bursts of AM, which showed larger responses with increases in AM rate. Brain source modeling showed significant hemispheric asymmetry such that 4 and 40 Hz ACC responses were dominated by the right and left hemispheres, respectively. CONCLUSION: N1 responses to the ACC resembled a low-pass filter shape similar to a behavioral TMTF. In the ACC paradigm, the only stimulus parameter that changes is AM, and therefore the N1 response provides an index of this AM change. In contrast, an AM burst stimulus contains both AM and level changes and is likely dominated by the rise time of the stimulus. The hemispheric differences are consistent with the asymmetric sampling in time hypothesis, which suggests that the two hemispheres preferentially sample acoustic input across different time windows. SIGNIFICANCE: The ACC provides a novel approach to studying temporal processing at the level of the cortex and provides further evidence of hemispheric specialization for fast and slow stimuli.
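The key property of the ACC stimulus described above is that ongoing noise transitions into AM noise with no accompanying level change, so the evoked response indexes the AM onset alone. A sketch of one way to construct such a stimulus (sample rate, duration, and the RMS-matching choice are illustrative assumptions, not taken from the study):

```python
import numpy as np

def am_noise(rate_hz, depth, dur_s=1.0, fs=16000, seed=0):
    """White noise with sinusoidal amplitude modulation at rate_hz.

    depth=1.0 is 100% modulation; depth=0.0 yields unmodulated noise.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    noise = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * rate_hz * t)
    # Scale so long-term RMS is the same at every depth; without this,
    # the AM transition would also be a level change.
    return envelope * noise / np.sqrt(1.0 + depth ** 2 / 2.0)

fs = 16000
steady = am_noise(4, 0.0, fs=fs)   # 1 s of plain white noise
am_seg = am_noise(4, 1.0, fs=fs)   # 1 s of 4 Hz, 100%-depth AM noise
stimulus = np.concatenate([steady, am_seg])  # noise -> AM transition (the ACC)
```

The same function with `depth` set to 0.5 or 0.25 gives the 50% and 25% conditions, and changing `rate_hz` gives the 40 and 300 Hz conditions.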

18.
Neuroimage ; 87: 356-62, 2014 Feb 15.
Article in English | MEDLINE | ID: mdl-24188814

ABSTRACT

There have been a number of studies suggesting that oscillatory alpha activity (~10 Hz) plays a pivotal role in attention by gating information flow to relevant sensory regions. The vast majority of these studies have examined shifts of attention in the spatial domain and only in a single modality (often visual or sensorimotor). In the current magnetoencephalography (MEG) study, we investigated the role of alpha activity in the suppression of a distracting modality stream. We used a cross-modal attention task in which visual cues indicated whether participants had to judge a visual orientation or discriminate the auditory pitch of an upcoming target. The visual and auditory targets were presented either simultaneously or alone, allowing us to behaviorally gauge the "cost" of having a distractor present in each modality. We found that preparation for visual discrimination (relative to pitch discrimination) resulted in a decrease of alpha power (9-11 Hz) in early visual cortex, with a concomitant increase in alpha/beta power (14-16 Hz) in the supramarginal gyrus, a region suggested to play a vital role in short-term storage of pitch information (Gaab et al., 2003). On a trial-by-trial basis, alpha power over the visual areas was significantly correlated with increased visual discrimination times, whereas alpha power over the precuneus and right superior temporal gyrus was correlated with increased auditory discrimination times. However, these correlations were significant only when the targets were paired with distractors. Our work adds to growing evidence that top-down (i.e., attentional) modulation of alpha activity is a mechanism by which stimulus processing can be gated within the cortex. Here, we find that this phenomenon is not restricted to the domain of spatial attention and generalizes to sensory modalities other than vision.
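The trial-by-trial analysis described above, relating single-trial alpha power to discrimination times, can be sketched with a standard band-pass plus Hilbert-envelope pipeline. The sketch below uses synthetic data; the band edges, sampling rate, and the toy reaction-time model are illustrative assumptions, not the study's actual analysis code:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(trials, fs, band=(9.0, 11.0)):
    """Single-trial alpha power: band-pass filter, then mean squared
    Hilbert envelope per trial (trials: n_trials x n_samples)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))
    return (envelope ** 2).mean(axis=-1)

# Hypothetical data: 100 trials x 1 s at 250 Hz, plus toy reaction times
fs = 250
rng = np.random.default_rng(1)
trials = rng.standard_normal((100, fs))
power = alpha_power(trials, fs)
# Toy RTs constructed to covary with alpha power, purely for illustration
rts = 0.5 + 0.01 * (power - power.mean())
r = np.corrcoef(power, rts)[0, 1]
```

In a real analysis, `power` would be computed over sensors or source locations of interest and correlated with measured discrimination times, separately for distractor-present and distractor-absent trials.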


Subject(s)
Attention/physiology , Auditory Perception/physiology , Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Alpha Rhythm , Cues , Female , Humans , Magnetoencephalography , Male , Photic Stimulation , Reaction Time/physiology , Young Adult
19.
Brain ; 136(Pt 5): 1626-38, 2013 May.
Article in English | MEDLINE | ID: mdl-23503620

ABSTRACT

Measurement of abnormal auditory adaptation is a standard clinical tool for diagnosing auditory nerve disorders due to acoustic neuromas. In the present study we investigated auditory adaptation in auditory neuropathy owing to disordered function of inner hair cell ribbon synapses (temperature-sensitive auditory neuropathy) or of auditory nerve fibres. Subjects were tested when afebrile for (i) psychophysical loudness adaptation to comfortably loud sustained tones; and (ii) physiological adaptation of auditory brainstem responses to clicks as a function of their position in brief 20-click stimulus trains (#1, 2, 3 … 20). Results were compared with those of normal-hearing listeners and subjects with other forms of hearing impairment. Subjects with ribbon synapse disorder had abnormally increased loudness adaptation to both low-frequency (250 Hz) and high-frequency (8000 Hz) tones. Subjects with auditory nerve disorders had normal loudness adaptation to low-frequency tones; all but one had abnormal adaptation to high-frequency tones. Adaptation was both more rapid and of greater magnitude in ribbon synapse than in auditory nerve disorders. Auditory brainstem response measures of adaptation in ribbon synapse disorder showed Wave V to the first click in the train to be abnormal in both latency and amplitude, and these abnormalities increased in magnitude, or Wave V was absent, for subsequent clicks. In contrast, auditory brainstem responses in four of the five subjects with neural disorders were absent to every click in the train. The fifth subject had normal latency and abnormally reduced amplitude of Wave V to the first click and abnormal or absent responses to subsequent clicks. Thus, dysfunction of both synaptic transmission and auditory neural function can be associated with abnormal loudness adaptation, and the magnitude of the adaptation is significantly greater with ribbon synapse than with neural disorders.


Subject(s)
Acoustic Stimulation/methods , Adaptation, Physiological/physiology , Cochlear Nerve/pathology , Hair Cells, Auditory, Inner/physiology , Hyperacusis/physiopathology , Adolescent , Adult , Aged , Auditory Perception/physiology , Child , Cochlear Nerve/physiology , Female , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Humans , Hyperacusis/diagnosis , Loudness Perception/physiology , Male , Middle Aged , Young Adult
20.
Clin Neurophysiol ; 124(6): 1204-15, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23276491

ABSTRACT

OBJECTIVE: To compare brain potentials to consonant vowels (CVs) as a function of both voice onset time (VOT) and consonant position: initial (CV) versus second (VCV). METHODS: Auditory cortical potentials (N100, P200, N200, and a late slow negativity (SN)) were recorded from scalp electrodes in twelve normal-hearing subjects to consonant vowels in initial position (CVs: /du/ and /tu/), in second position (VCVs: /udu/ and /utu/), and to vowels alone (V: /u/) and paired (VVs: /uu/) separated in time to simulate consonant voice onset times (VOTs). RESULTS: CVs evoked "acoustic onset" N100s of similar latency but larger amplitude to /du/ than /tu/. CVs preceded by a vowel (VCVs) evoked "acoustic change" N100s with longer latencies to /utu/ than /udu/; their absolute latency difference was less than the corresponding VOT difference. The SN following N100 to VCVs was larger to /utu/ than /udu/. Paired vowels (/uu/) separated by intervals corresponding to consonant VOTs evoked N100s with latency differences equal to the simulated VOT differences and SNs of similar amplitude. Noise masking resulted in VCV N100 latency differences that were now equal to the consonant VOT differences. Brain activations by CVs, VCVs, and VVs were maximal in the right temporal lobe. CONCLUSION: Auditory cortical activities to CVs are sensitive to: (1) the position of the CV in the utterance; (2) the VOTs of consonants; and (3) noise masking. SIGNIFICANCE: VOTs of stop consonants affect auditory cortical activities differently as a function of the position of the consonant in the utterance.


Subject(s)
Acoustic Stimulation , Auditory Cortex/physiology , Hearing/physiology , Algorithms , Cues , Electroencephalography , Evoked Potentials, Auditory/physiology , Functional Laterality/physiology , Magnetic Resonance Imaging , Noise , Perceptual Masking , Temporal Lobe/physiology , Voice