1.
J Neurophysiol ; 130(2): 291-302, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37377190

ABSTRACT

Traditionally, pitch variation in a sound stream has been integral to music identity. We attempt to expand music's definition by demonstrating that the neural code for musicality is independent of pitch encoding. That is, pitchless sound streams can still induce music-like perception and a neurophysiological hierarchy similar to that of pitched melodies. Previous work reported that neural processing of sounds with no-pitch, fixed-pitch, and irregular-pitch (melodic) patterns exhibits a right-lateralized hierarchical shift, with pitchless sounds favorably processed in Heschl's gyrus (HG), ascending laterally to nonprimary auditory areas for fixed-pitch and even more laterally for melodic patterns. The objective of this EEG study was to assess whether sound encoding maintains a similar hierarchical profile when musical perception is driven by timbre irregularities in the absence of pitch changes. Individuals listened to repetitions of three musical and three nonmusical sound streams. The nonmusical streams consisted of seven 200-ms segments of white, pink, or brown noise, separated by silent gaps. Musical streams were created similarly, but with all three noise types combined in a unique order within each stream to induce timbre variations and music-like perception. Subjects classified the sound streams as musical or nonmusical. Musical processing exhibited right-dominant α power enhancement, followed by a lateralized increase in θ phase-locking and spectral power. The θ phase-locking was stronger in musicians than in nonmusicians. The lateralization of activity suggests higher-level auditory processing. Our findings validate the existence of a hierarchical shift, traditionally observed with pitched-melodic perception, underscoring that musicality can be achieved with timbre irregularities alone. NEW & NOTEWORTHY: Streams of pitchless noise segments varying in timbre were classified as music-like, and the EEG they induced exhibited a right-lateralized processing hierarchy similar to that of pitched melodic processing. This study provides evidence that the neural code of musicality is independent of pitch encoding. The results have implications for understanding music processing in individuals with degraded pitch perception, such as cochlear-implant listeners, as well as the role of nonpitched sounds in the induction of music-like perceptual states.


Subjects
Cochlear Implants, Music, Humans, Pitch Perception/physiology, Auditory Perception/physiology, Sound, Acoustic Stimulation
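
For readers who want to see how the θ phase-locking reported in this study is typically quantified, the sketch below computes inter-trial phase coherence (ITC) with NumPy/SciPy: band-pass filter the epoched EEG in the theta band, take the instantaneous phase from the Hilbert transform, and average unit phase vectors across trials. This is a generic illustration, not the authors' pipeline; the epoch layout, channel count, and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_itc(epochs, fs=500.0, band=(4.0, 8.0)):
    """Inter-trial phase coherence in the theta band.

    epochs : array, shape (n_trials, n_channels, n_samples) -- assumed layout.
    Returns ITC with shape (n_channels, n_samples), values in [0, 1].
    """
    # 4th-order Butterworth band-pass for the chosen band
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    # Instantaneous phase from the analytic signal
    phase = np.angle(hilbert(filtered, axis=-1))
    # ITC: length of the mean unit phase vector across trials
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Hypothetical data: 60 trials, 64 channels, 1-s epochs at 500 Hz
rng = np.random.default_rng(0)
fake_epochs = rng.standard_normal((60, 64, 500))
itc = theta_itc(fake_epochs)
print(itc.shape)  # (64, 500)
```

An ITC near 1 at a given channel and time point means the trials are phase-aligned there; values near 0 indicate random phase across trials.
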
2.
Brain Sci ; 13(3)2023 Mar 19.
Article in English | MEDLINE | ID: mdl-36979322

ABSTRACT

Recent studies have questioned past conclusions regarding the mechanisms of the McGurk illusion, especially how McGurk susceptibility might inform our understanding of audiovisual (AV) integration. We previously proposed that the McGurk illusion is likely attributable to a default mechanism, whereby either the visual system, the auditory system, or both default to specific phonemes, namely those implicated in the McGurk illusion. We hypothesized that the default mechanism occurs because visual stimuli with an indiscernible place of articulation (like those traditionally used in the McGurk illusion) lead to an ambiguous perceptual environment and thus a failure in AV integration. In the current study, we tested the default hypothesis as it pertains to the auditory system. Participants performed two tasks. One task was a typical McGurk illusion task, in which individuals listened to auditory /ba/ paired with visual /ga/ and judged what they heard. The second task was an auditory-only task, in which individuals transcribed trisyllabic words with a phoneme replaced by silence. We found that individuals' transcriptions of missing phonemes often defaulted to '/d/t/th/', the same phonemes often experienced during the McGurk illusion. Importantly, individuals' default rate was positively correlated with their McGurk rate. We conclude that the McGurk illusion arises when people fail to integrate visual percepts with auditory percepts, due to visual ambiguity, thus leading the auditory system to default to phonemes often implicated in the McGurk illusion.
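
The central test in this study is a correlation between each participant's auditory "default rate" and their McGurk rate. Below is a minimal sketch of how such proportions could be tabulated and compared; the response labels, target set, and `proportion` helper are invented for illustration, not taken from the authors' materials.

```python
import numpy as np
from scipy.stats import pearsonr

def proportion(responses, targets):
    """Share of trials whose transcription falls in the target set."""
    return np.mean([r in targets for r in responses])

# Hypothetical trial-level transcriptions for one participant (invented labels)
auditory_only = ["da", "ta", "ba", "tha", "da", "ka"]   # missing-phoneme fill-ins
mcgurk_trials = ["da", "ba", "da", "tha", "da", "da"]   # reports for auditory /ba/ + visual /ga/

default_rate = proportion(auditory_only, {"da", "ta", "tha"})
mcgurk_rate = proportion(mcgurk_trials, {"da", "ta", "tha"})
print(default_rate, mcgurk_rate)

# Across participants, the key statistic is a simple correlation of the two rates:
# r, p = pearsonr(default_rates, mcgurk_rates)
```
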

3.
J Speech Lang Hear Res ; 65(9): 3502-3517, 2022 09 12.
Article in English | MEDLINE | ID: mdl-36037517

ABSTRACT

PURPOSE: This research examined the expression of cortical auditory evoked potentials in a cohort of children who received cochlear implants (CIs) for treatment of congenital deafness (n = 28) and typically hearing controls (n = 28). METHOD: We make use of a novel electroencephalography paradigm that permits the assessment of auditory responses to ambiently presented speech and evaluates the contributions of concurrent visual stimulation on this activity. RESULTS: Our findings show group differences in the expression of auditory sensory and perceptual event-related potential components occurring in 80- to 200-ms and 200- to 300-ms time windows, with reductions in amplitude and a greater latency difference for CI-using children. Relative to typically hearing children, current source density analysis showed muted responses to concurrent visual stimulation in CI-using children, suggesting less cortical specialization and/or reduced responsiveness to auditory information that limits the detection of the interaction between sensory systems. CONCLUSION: These findings indicate that even in the face of early interventions, CI-using children may exhibit disruptions in the development of auditory and multisensory processing.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Speech Perception, Acoustic Stimulation, Child, Deafness/surgery, Evoked Potentials, Auditory/physiology, Humans, Speech, Speech Perception/physiology
4.
iScience ; 25(7): 104671, 2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35845168

ABSTRACT

The influence of audition on visual perception has mainly been assessed using non-speech stimuli. Herein, we introduce the Audiovisual Time-Flow Illusion in spoken language, underscoring the role of audition in multisensory processing. When brief pauses were inserted into or brief portions were removed from an acoustic speech stream, individuals perceived the corresponding visual speech as "pausing" or "skipping", respectively, even though the visual stimulus was intact. When the stimulus manipulation was reversed (brief pauses inserted into, or brief portions removed from, the visual speech stream), individuals failed to perceive the illusion in the corresponding intact auditory stream. Our findings demonstrate that in the context of spoken language, people continually realign the pace of their visual perception based on that of the auditory input. In short, the auditory modality sets the pace of the visual modality during audiovisual speech processing.
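
The stimulus manipulation described here (inserting brief pauses into, or cutting brief portions out of, the acoustic stream while the video remains intact) can be sketched directly on the waveform. This is only an illustration of the idea; the gap position, gap duration, sampling rate, and the synthetic waveform are all assumptions, not the authors' stimuli.

```python
import numpy as np

def insert_pause(signal, fs, at_s, dur_s):
    """Insert dur_s seconds of silence into `signal` at time at_s."""
    idx = int(at_s * fs)
    gap = np.zeros(int(dur_s * fs), dtype=signal.dtype)
    return np.concatenate([signal[:idx], gap, signal[idx:]])

def remove_portion(signal, fs, at_s, dur_s):
    """Cut dur_s seconds out of `signal` starting at time at_s."""
    start = int(at_s * fs)
    stop = start + int(dur_s * fs)
    return np.concatenate([signal[:start], signal[stop:]])

# Hypothetical 3-s "speech" waveform at 44.1 kHz (noise stands in for real audio)
fs = 44100
speech = np.random.default_rng(2).standard_normal(3 * fs).astype(np.float32)

paused = insert_pause(speech, fs, at_s=1.2, dur_s=0.08)     # audio now lags the intact video
skipped = remove_portion(speech, fs, at_s=1.2, dur_s=0.08)  # audio now leads the intact video
print(len(speech), len(paused), len(skipped))
```
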

5.
Front Hum Neurosci ; 15: 616049, 2021.
Article in English | MEDLINE | ID: mdl-33867954

ABSTRACT

The McGurk illusion occurs when listeners hear an illusory percept (i.e., "da"), resulting from mismatched pairings of audiovisual (AV) speech stimuli (i.e., auditory /ba/ paired with visual /ga/). Hearing a third percept, distinct from both the auditory and visual input, has been used as evidence of AV fusion. We examined whether the McGurk illusion is instead driven by visual dominance, whereby the third percept, e.g., "da," represents a default percept for visemes with an ambiguous place of articulation (POA), like /ga/. Participants watched videos of a talker uttering various consonant vowels (CVs) with (AV) and without (V-only) audios of /ba/. Individuals transcribed the CV they saw (V-only) or heard (AV). In the V-only condition, individuals predominantly saw "da"/"ta" when viewing CVs with indiscernible POAs. Likewise, in the AV condition, upon perceiving an illusion, they predominantly heard "da"/"ta" for CVs with indiscernible POAs. The illusion was stronger in individuals who exhibited weak /ba/ auditory encoding (examined using a control auditory-only task). In Experiment 2, we attempted to replicate these findings using stimuli recorded from a different talker. The V-only results were not replicated, but again individuals predominantly heard "da"/"ta"/"tha" as an illusory percept for various AV combinations, and the illusion was stronger in individuals who exhibited weak /ba/ auditory encoding. These results demonstrate that when visual CVs with indiscernible POAs are paired with a weakly encoded auditory /ba/, listeners default to hearing "da"/"ta"/"tha", thus tempering the AV fusion account and favoring a default mechanism triggered when both AV stimuli are ambiguous.

6.
Brain Sci ; 11(1)2021 Jan 09.
Article in English | MEDLINE | ID: mdl-33435472

ABSTRACT

A debate over the past decade has focused on the so-called bilingual advantage: the idea that bilingual and multilingual individuals have enhanced domain-general executive functions, relative to monolinguals, due to competition-induced monitoring of both processing and representation from the task-irrelevant language(s). In this commentary, we consider a recent study by Pot, Keijzer, and de Bot (2018), which focused on the relationship between individual differences in language usage and performance on an executive function task among multilingual older adults. We discuss their approach and findings in light of a more general movement towards embracing complexity in this domain of research, including individuals' sociocultural context and position in the lifespan. The field increasingly considers interactions between bilingualism/multilingualism and cognition, employing measures of language use well beyond the early dichotomous perspectives on language background. Moreover, new measures of bilingualism and analytical approaches are helping researchers interrogate the complexities of specific processing issues. Indeed, our review of the bilingualism/multilingualism literature confirms the increased appreciation researchers have for the range of factors (beyond whether someone speaks one, two, or more languages) that impact specific cognitive processes. Here, we highlight some of the most salient of these, and incorporate suggestions for a way forward that likewise encompasses neural perspectives on the topic.

7.
eNeuro ; 7(6)2020.
Article in English | MEDLINE | ID: mdl-33139321

ABSTRACT

There is growing interest in characterizing the neural mechanisms underlying the interactions between attention and memory. Current theories posit that reflective attention to memory representations generally involves a fronto-parietal attentional control network. The present study aimed to test this idea by manipulating how a particular short-term memory (STM) representation is accessed, that is, based on its input sensory modality or semantic category, during functional magnetic resonance imaging (fMRI). Human participants performed a novel variant of the retro-cue paradigm, in which they were presented with both auditory and visual non-verbal stimuli followed by Modality, Semantic, or Uninformative retro-cues. Modality and, to a lesser extent, Semantic retro-cues facilitated response time relative to Uninformative retro-cues. The univariate and multivariate pattern analyses (MVPAs) of fMRI time-series revealed three key findings. First, the posterior parietal cortex (PPC), including portions of the intraparietal sulcus (IPS) and ventral angular gyrus (AG), had activation patterns that spatially overlapped for both modality-based and semantic-based reflective attention. Second, considering both the univariate and multivariate analyses, Semantic retro-cues were associated with a left-lateralized fronto-parietal network. Finally, the experimental design enabled us to examine how dividing attention cross-modally within STM modulates the brain regions involved in reflective attention. This analysis revealed that univariate activation within bilateral portions of the PPC increased when participants simultaneously attended both auditory and visual memory representations. Therefore, prefrontal and parietal regions are flexibly recruited during reflective attention, depending on the representational feature used to selectively access STM representations.


Subjects
Cues (Psychology), Memory, Short-Term, Brain Mapping, Humans, Magnetic Resonance Imaging, Parietal Lobe/diagnostic imaging, Reaction Time, Semantics
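
The multivariate pattern analyses (MVPAs) mentioned in this study generally amount to cross-validated classification of condition labels from voxel patterns. Here is a generic scikit-learn sketch under assumed data shapes and labels; it is not the authors' actual analysis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Hypothetical data: one activation pattern per trial from a parietal ROI
rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 400
X = rng.standard_normal((n_trials, n_voxels))   # per-trial beta estimates (invented)
y = rng.integers(0, 2, n_trials)                # 0 = Modality retro-cue, 1 = Semantic retro-cue

# Standardize features, then fit a linear SVM with 5-fold stratified cross-validation
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```

Accuracy reliably above chance across folds is the usual evidence that the region's activation pattern carries information about the cued condition.
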
8.
J Neurophysiol ; 122(4): 1312-1329, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31268796

ABSTRACT

Objective assessment of the sensory pathways is crucial for understanding their development across the life span and how they may be affected by neurodevelopmental disorders (e.g., autism spectrum) and neurological pathologies (e.g., stroke, multiple sclerosis, etc.). Quick and passive measurements, for example, using electroencephalography (EEG), are especially important when working with infants and young children and with patient populations having communication deficits (e.g., aphasia). However, many EEG paradigms are limited to measuring activity from one sensory domain at a time, may be time consuming, and target only a subset of possible responses from that particular sensory domain (e.g., only auditory brainstem responses or only auditory P1-N1-P2 evoked potentials). Thus we developed a new multisensory paradigm that enables simultaneous, robust, and rapid (6-12 min) measurements of both auditory and visual EEG activity, including auditory brainstem responses, auditory and visual evoked potentials, as well as auditory and visual steady-state responses. This novel method allows us to examine neural activity at various stations along the auditory and visual hierarchies with an ecologically valid continuous speech stimulus, while an unrelated video is playing. Both the speech stimulus and the video can be customized for any population of interest. Furthermore, by using two simultaneous visual steady-state stimulation rates, we demonstrate the ability of this paradigm to track both parafoveal and peripheral visual processing concurrently. We report results from 25 healthy young adults, which validate this new paradigm. NEW & NOTEWORTHY: A novel electroencephalography paradigm enables the rapid, reliable, and noninvasive assessment of neural activity along both auditory and visual pathways concurrently. The paradigm uses an ecologically valid continuous speech stimulus for auditory evaluation and can simultaneously track visual activity to both parafoveal and peripheral visual space. This new methodology may be particularly appealing to researchers and clinicians working with infants and young children and with patient populations with limited communication abilities.


Subjects
Electroencephalography/methods, Evoked Potentials, Auditory, Brain Stem, Evoked Potentials, Visual, Adolescent, Adult, Auditory Pathways/physiology, Female, Humans, Male, Speech Perception, Visual Pathways/physiology
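
Tracking two simultaneous visual steady-state stimulation rates reduces to reading spectral power at each tagging frequency. Below is a minimal FFT-based sketch; the two rates (7 and 12 Hz), epoch length, sampling rate, and simulated channel are assumptions chosen only for illustration, not the paradigm's actual parameters.

```python
import numpy as np

def power_at(eeg, fs, freq):
    """Spectral power of a single-channel epoch at one tagging frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

fs = 500.0
t = np.arange(0, 4.0, 1.0 / fs)                      # 4-s epoch (assumed)
rng = np.random.default_rng(4)
# Hypothetical occipital channel containing responses at two assumed tag rates
eeg = (0.8 * np.sin(2 * np.pi * 7.0 * t)             # e.g., parafoveal stimulus tagged at 7 Hz
       + 0.5 * np.sin(2 * np.pi * 12.0 * t)          # e.g., peripheral stimulus tagged at 12 Hz
       + rng.standard_normal(t.size))

print(power_at(eeg, fs, 7.0), power_at(eeg, fs, 12.0))
```
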
9.
Ear Hear ; 40(5): 1106-1116, 2019.
Article in English | MEDLINE | ID: mdl-30762601

ABSTRACT

OBJECTIVES: The goal of this study was to identify the effects of auditory deprivation (age-related hearing loss) and auditory stimulation (history of hearing aid use) on the neural registration of sound across two stimulus presentation conditions: (1) equal sound pressure level and (2) equal sensation level. DESIGN: We used a between-groups design, involving three groups of 14 older adults (n = 42; 62 to 84 years): (1) clinically defined normal hearing (≤25 dB from 250 to 8000 Hz, bilaterally), (2) bilateral mild-moderate/moderately severe sensorineural hearing loss who have never used hearing aids, and (3) bilateral mild-moderate/moderately severe sensorineural hearing loss who have worn bilateral hearing aids for at least the past 2 years. RESULTS: There were significant delays in the auditory P1-N1-P2 complex in older adults with hearing loss compared with their normal hearing peers when using equal sound pressure levels for all participants. However, when the degree and configuration of hearing loss were accounted for through the presentation of equal sensation level stimuli, no latency delays were observed. These results suggest that stimulus audibility modulates P1-N1-P2 morphology and should be controlled for when defining deprivation and stimulus-related neuroplasticity in people with hearing loss. Moreover, a history of auditory stimulation, in the form of hearing aid use, does not appreciably alter the neural registration of unaided auditory evoked brain activity when quantified by the P1-N1-P2. CONCLUSIONS: When comparing auditory cortical responses in older adults with and without hearing loss, stimulus audibility, and not hearing loss-related neurophysiological changes, results in delayed response latency for those with age-related hearing loss. Future studies should carefully consider stimulus presentation levels when drawing conclusions about deprivation- and stimulation-related neuroplasticity. Additionally, auditory stimulation, in the form of a history of hearing aid use, does not significantly affect the neural registration of sound when quantified using the P1-N1-P2-evoked response.


Subjects
Auditory Cortex/physiopathology, Evoked Potentials, Auditory, Presbycusis/physiopathology, Acoustic Stimulation, Aged, Aged, 80 and over, Case-Control Studies, Female, Hearing Aids, Hearing Loss, Sensorineural/physiopathology, Hearing Loss, Sensorineural/rehabilitation, Humans, Male, Middle Aged, Presbycusis/radiotherapy, Reaction Time, Severity of Illness Index
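
The latency comparison in this study hinges on extracting component peaks from averaged waveforms. Here is a toy sketch of N1 peak-latency measurement within a conventional search window; the window limits, sampling rate, and simulated waveform are assumptions, not the authors' parameters.

```python
import numpy as np

def n1_latency_ms(erp, fs, window_ms=(70, 150)):
    """Latency (ms) of the most negative point in the N1 search window.

    erp : 1-D averaged waveform, time-locked to stimulus onset at sample 0.
    """
    start = int(window_ms[0] / 1000 * fs)
    stop = int(window_ms[1] / 1000 * fs)
    peak = start + np.argmin(erp[start:stop])
    return peak / fs * 1000.0

# Hypothetical averaged ERP: an N1-like trough near 100 ms plus noise
fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)
erp = (-2.0 * np.exp(-((t - 0.100) ** 2) / (2 * 0.012 ** 2))
       + 0.1 * np.random.default_rng(5).standard_normal(t.size))

print(f"N1 latency: {n1_latency_ms(erp, fs):.1f} ms")
```

Group comparisons of this kind would then contrast the extracted latencies across listener groups and presentation conditions (equal sound pressure level versus equal sensation level).
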
11.
Neurobiol Aging ; 66: 1-11, 2018 06.
Article in English | MEDLINE | ID: mdl-29501965

ABSTRACT

We examined the effect of age on listeners' ability to orient attention to an item in auditory short-term memory (ASTM) using high-density electroencephalography, while participants completed a delayed match-to-sample task. During the retention interval, an uninformative or an informative visual retro-cue guided attention to an item in ASTM. Informative cues speeded response times, but only for young adults. In young adults, informative retro-cues generated greater event-related potential amplitude between 450 and 650 ms at parietal sites, and an increased sustained potential over the left central scalp region, thought to index the deployment of attention and maintenance of the cued item in ASTM, respectively. Both modulations were reduced in older adults. Alpha and low beta oscillatory power suppression was greater when the retro-cue was informative than uninformative, especially in young adults. Our results point toward an age-related decline in orienting attention to the cued item in ASTM. Older adults may be dividing their attention between all items in working memory rather than selectively focusing attention on a single cued item.


Subjects
Aging/psychology, Attention/physiology, Auditory Perception/physiology, Orientation/physiology, Adolescent, Aged, Cues (Psychology), Electroencephalography/methods, Evoked Potentials/physiology, Humans, Male, Memory, Short-Term/physiology, Middle Aged, Reaction Time, Retention (Psychology)/physiology, Young Adult
12.
J Neurosci ; 38(7): 1835-1849, 2018 02 14.
Article in English | MEDLINE | ID: mdl-29263241

ABSTRACT

Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT: The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator).


Subjects
Comprehension/physiology, Phonetics, Speech Perception/physiology, Acoustic Stimulation, Auditory Cortex, Auditory Perception/physiology, Electroencephalography, Evoked Potentials, Auditory, Female, Humans, Illusions/psychology, Individuality, Language, Lip/physiology, Male, Photic Stimulation, Reaction Time/physiology, Visual Perception/physiology, Young Adult
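
The N1 contrast described above reduces to a mean-amplitude measurement in a fixed post-stimulus window per condition, followed by a within-subject comparison. A rough sketch under assumed parameters (window, sampling rate, subject count) and with simulated waveforms standing in for real fronto-central ERPs:

```python
import numpy as np
from scipy.stats import ttest_rel

def mean_amp(erps, fs, window_ms=(80, 130)):
    """Mean amplitude per subject in an N1 window; erps has shape (n_subjects, n_samples)."""
    start, stop = (int(ms / 1000 * fs) for ms in window_ms)
    return erps[:, start:stop].mean(axis=1)

fs = 500.0
rng = np.random.default_rng(6)
# Hypothetical per-subject ERPs for the two illusion conditions (20 subjects, 0.8-s epochs)
erp_illusion_ba = rng.standard_normal((20, 400)) - 0.3   # enlarged (more negative) N1
erp_illusion_fa = rng.standard_normal((20, 400)) - 0.1   # reduced N1

t_val, p_val = ttest_rel(mean_amp(erp_illusion_ba, fs), mean_amp(erp_illusion_fa, fs))
print(f"paired t = {t_val:.2f}, p = {p_val:.3f}")
```
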
13.
Ear Hear ; 37 Suppl 1: 155S-62S, 2016.
Article in English | MEDLINE | ID: mdl-27355765

ABSTRACT

Here, we describe some of the ways in which aging negatively affects the way sensory input is transduced and processed within the aging brain and how cognitive work is involved when listening to a less-than-perfect signal. We also describe how audiologic rehabilitation, including hearing aid amplification and listening training, is used to reduce the amount of cognitive resources required for effective auditory communication and conclude with an example of how listening effort is being studied in research laboratories for the purpose(s) of informing clinical practice.


Subjects
Aging, Cognition, Correction of Hearing Impairment, Hearing Aids, Hearing Loss/rehabilitation, Audiometry, Humans, Learning
14.
J Neurosci ; 35(3): 1307-18, 2015 Jan 21.
Article in English | MEDLINE | ID: mdl-25609643

ABSTRACT

Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics.


Subjects
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Evoked Potentials/physiology, Memory, Short-Term/physiology, Orientation/physiology, Acoustic Stimulation, Adolescent, Adult, Cues (Psychology), Electroencephalography, Female, Humans, Male, Neurons/physiology, Reaction Time/physiology, Young Adult
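
The alpha/low-beta desynchronization linked to orienting performance in this study is usually expressed as event-related power change relative to a pre-cue baseline. Below is a generic sketch of that computation; the band limits, baseline window, epoch layout, and simulated data are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power_change(epochs, fs, band=(8.0, 16.0), baseline_s=(0.0, 0.5)):
    """Percent change in band power relative to a baseline window.

    epochs : array, shape (n_trials, n_samples) for one channel -- assumed layout.
    Negative values indicate desynchronization (power suppression).
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, epochs, axis=-1) ** 2
    b0, b1 = (int(s * fs) for s in baseline_s)
    baseline = power[:, b0:b1].mean(axis=-1, keepdims=True)
    return 100.0 * (power - baseline).mean(axis=0) / baseline.mean()

# Hypothetical single-channel epochs: 40 trials, 3-s epochs at 250 Hz
rng = np.random.default_rng(8)
epochs = rng.standard_normal((40, 750))
erd = band_power_change(epochs, fs=250.0)
print(erd.shape)  # (750,)
```

Per-subject averages of this measure over the orienting window could then be correlated with behavioral performance, in the spirit of the relationship this study reports.
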
15.
Neuropsychologia ; 62: 233-44, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25080187

ABSTRACT

The present study was designed to examine listeners' ability to use voice information incidentally during spoken word recognition. We recorded event-related brain potentials (ERPs) during a continuous recognition paradigm in which participants indicated on each trial whether the spoken word was "new" or "old." Old items were presented at 2, 8 or 16 words following the first presentation. Context congruency was manipulated by having the same word repeated by either the same speaker or a different speaker. The different speaker could share the gender, accent or neither feature with the word presented the first time. Participants' accuracy was greatest when the old word was spoken by the same speaker than by a different speaker. In addition, accuracy decreased with increasing lag. The correct identification of old words was accompanied by an enhanced late positivity over parietal sites, with no difference found between voice congruency conditions. In contrast, an earlier voice reinstatement effect was observed over frontal sites, an index of priming that preceded recollection in this task. Our results provide further evidence that acoustic and semantic information are integrated into a unified trace and that acoustic information facilitates spoken word recollection.


Subjects
Brain Mapping, Brain/physiology, Recognition (Psychology)/physiology, Verbal Learning/physiology, Vocabulary, Acoustic Stimulation, Adult, Analysis of Variance, Electroencephalography, Evoked Potentials/physiology, Female, Humans, Male, Psycholinguistics, Reaction Time, Young Adult
16.
Psychol Res ; 78(3): 439-52, 2014.
Article in English | MEDLINE | ID: mdl-24352689

ABSTRACT

Despite a growing acceptance that attention and memory interact, and that attention can be focused on an active internal mental representation (i.e., reflective attention), there has been a paucity of work focusing on reflective attention to 'sound objects' (i.e., mental representations of actual sound sources in the environment). Further research on the dynamic interactions between auditory attention and memory, as well as its degree of neuroplasticity, is important for understanding how sound objects are represented, maintained, and accessed in the brain. This knowledge can then guide the development of training programs to help individuals with attention and memory problems. This review article focuses on attention to memory with an emphasis on behavioral and neuroimaging studies that have begun to explore the mechanisms that mediate reflective attentional orienting in vision and more recently, in audition. Reflective attention refers to situations in which attention is oriented toward internal representations rather than focused on external stimuli. We propose four general principles underlying attention to short-term memory. Furthermore, we suggest that mechanisms involved in orienting attention to visual object representations may also apply for orienting attention to sound object representations.


Subjects
Attention/physiology, Auditory Perception/physiology, Memory/physiology, Orientation/physiology, Acoustic Stimulation, Humans
17.
J Exp Psychol Hum Percept Perform ; 38(6): 1554-66, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22506788

ABSTRACT

According to the object-based account of attention, multiple objects coexist in short-term memory (STM), and we can selectively attend to a particular object of interest. Although there is evidence that attention can be directed to visual object representations, the assumption that attention can be oriented to sound object representations has yet to be validated. Here, we used a delayed match-to-sample task to examine whether orienting attention to sound object representations influences change detection within auditory scenes consisting of 3 concurrent sounds, each occurring at a different location. On some trials, the 2 scenes were identical; in the remaining trials, the locations of 2 sounds were switched. In a control experiment, we first identified auditory scenes, in which the 3 sounds were unambiguously segregated, for the subsequent experiments. In 2 experiments, we showed that orienting attention to a sound object representation during memory retention (via a retro-cue) enhanced performance relative to uncued trials, up to 4 s of memory retention. Our study shows that complex auditory scenes composed of co-occurring sound sources are quickly parsed into sound object representations, which are then available for top-down selective attention. Here, we demonstrate that attention can be guided toward 1 of those representations, thereby attenuating change deafness. Furthermore, the effects of retro-cues in audition extend analogous findings in the visual domain, thereby suggesting that orienting attention to an object within visual or auditory STM may follow similar processing principles.


Subjects
Attention, Auditory Perception, Memory, Short-Term, Orientation, Retention (Psychology), Adolescent, Adult, Concept Formation, Cues (Psychology), Female, Humans, Male, Ontario, Pattern Recognition, Physiological, Reaction Time, Young Adult
18.
J Psychiatr Res ; 45(1): 36-43, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20537351

ABSTRACT

Premutation alleles of the fragile X mental retardation 1 gene (FMR1) are associated with the risk of developing fragile X-associated tremor/ataxia syndrome (FXTAS), a late-onset neurodegenerative disorder that involves neuropsychiatric problems and executive and memory deficits. Although abnormal elevation of FMR1 mRNA has been proposed to underlie these deficits, it remains unknown which brain regions are affected by the disease process of FXTAS and the genetic molecular mechanisms associated with the FMR1 premutation. This study used functional magnetic resonance imaging (fMRI) to identify deficient neural substrates responsible for altered executive and memory functions in some FMR1 premutation individuals. We measured fMRI BOLD signals during the performance of verbal working memory from 15 premutation carriers affected by FXTAS (PFX+), 15 premutation carriers unaffected by FXTAS (PFX-), and 12 matched healthy control individuals (HC). We also examined correlations between brain activation and FMR1 molecular variables (CGG repeat size and mRNA levels) in premutation carriers. Compared with HC, PFX+ and PFX- showed reduced activation in the right ventral inferior frontal cortex and left premotor/dorsal inferior frontal cortex. Reduced activation specific to PFX+ was found in the right premotor/dorsal inferior frontal cortex. Regression analysis combining the two premutation groups demonstrated a significant negative correlation between right ventral inferior frontal cortex activity and FMR1 mRNA levels after excluding the effect of disease severity of FXTAS. These results indicate altered prefrontal cortex activity that may underlie the executive and memory deficits affecting some individuals with the FMR1 premutation, including FXTAS patients.


Subjects
Ataxia, Fragile X Mental Retardation Protein/genetics, Fragile X Syndrome, Magnetic Resonance Imaging, Memory, Short-Term/physiology, Mutation/genetics, Prefrontal Cortex/blood supply, Adult, Aged, Analysis of Variance, Ataxia/complications, Ataxia/genetics, Ataxia/pathology, Brain Mapping, Female, Fragile X Syndrome/complications, Fragile X Syndrome/genetics, Fragile X Syndrome/pathology, Humans, Male, Middle Aged, Neuropsychological Tests, Oxygen/blood, Statistics as Topic
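
"Correlation after excluding the effect of disease severity," as reported in this study, is in essence a partial correlation: residualize both variables on the covariate, then correlate the residuals. Here is a small NumPy/SciPy sketch with invented values; it is not the authors' data or exact model.

```python
import numpy as np
from scipy.stats import pearsonr

def residualize(y, covariate):
    """Residuals of y after ordinary least-squares regression on a single covariate."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(7)
n = 30
severity = rng.uniform(0, 5, n)                     # hypothetical FXTAS severity scores
mrna = rng.uniform(1, 4, n)                         # hypothetical FMR1 mRNA levels
roi_activity = -0.5 * mrna + 0.2 * severity + rng.normal(0, 0.3, n)  # invented BOLD estimates

# Partial correlation between ROI activity and mRNA, controlling for severity
r, p = pearsonr(residualize(roi_activity, severity), residualize(mrna, severity))
print(f"partial r = {r:.2f}, p = {p:.3f}")
```
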
19.
J Neurosci ; 30(5): 1905-13, 2010 Feb 03.
Article in English | MEDLINE | ID: mdl-20130199

ABSTRACT

In reverberant environments, the brain can suppress echoes so that auditory perception is dominated by the primary or leading sounds. Echo suppression comprises at least two distinct phenomena whose neural bases are unknown: spatial translocation of an echo toward the primary sound, and object capture to combine echo and primary sounds into a single event. In an electroencephalography study, we presented subjects with primary-echo (leading-lagging) click pairs in virtual acoustic space, with interclick delay at the individual's 50% suppression threshold. On each trial, subjects reported both click location (one or both hemifields) and the number of clicks they heard (one or two). Thus, the threshold stimulus led to two common percepts: Suppressed and Not Suppressed. On some trials, a subset of subjects reported an intermediate percept, in which two clicks were perceived in the same hemifield as the leading click, providing a dissociation between spatial translocation and object capture. We conducted time-frequency and event-related potential analyses to examine the time course of the neural mechanisms mediating echo suppression. Enhanced gamma band phase synchronization (peaking at approximately 40 Hz) specific to successful echo suppression was evident from 20 to 60 ms after stimulus onset. N1 latency provided a categorical neural marker of spatial translocation, whereas N1 amplitude still reflected the physical presence of a second (lagging) click. These results provide evidence that (1) echo suppression begins early, at the latest when the acoustic signal first reaches cortex, and (2) the brain spatially translocates a perceived echo before the primary sound captures it.


Subjects
Auditory Threshold/physiology, Perceptual Masking/physiology, Adult, Analysis of Variance, Electroencephalography, Evoked Potentials, Humans, Male, Psychoacoustics, Reaction Time, Reference Values, Young Adult
20.
J Neurophysiol ; 103(1): 218-29, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19864443

ABSTRACT

The auditory cortex undergoes functional and anatomical development that reflects specialization for learned sounds. In humans, auditory maturation is evident in transient auditory-evoked potentials (AEPs) elicited by speech or music. However, neural oscillations at specific frequencies are also known to play an important role in perceptual processing. We hypothesized that, if oscillatory activity in different frequency bands reflects different aspects of sound processing, the development of phase-locking to stimulus attributes at these frequencies may have different trajectories. We examined the development of phase-locking of oscillatory responses to music sounds and to pure tones matched to the fundamental frequency of the music sounds. Phase-locking for theta (4-8 Hz), alpha (8-14 Hz), lower-to-mid beta (14-25 Hz), and upper-beta and gamma (25-70 Hz) bands strengthened with age. Phase-locking in the upper-beta and gamma range matured later than in lower frequencies and was stronger for music sounds than for pure tones, likely reflecting the maturation of neural networks that code spectral complexity. Phase-locking for theta, alpha, and lower-to-mid beta was sensitive to temporal onset (rise time) sound characteristics. The data were also consistent with phase-locked oscillatory effects of acoustic (spectrotemporal) complexity and timbre familiarity. Future studies are called for to evaluate developmental trajectories for oscillatory activity, using stimuli selected to address hypotheses related to familiarity and spectral and temporal encoding suggested by the current findings.


Subjects
Auditory Perception/physiology, Brain/growth & development, Brain/physiology, Human Development/physiology, Music, Acoustic Stimulation, Adolescent, Adult, Aging/physiology, Analysis of Variance, Child, Child, Preschool, Electroencephalography, Evoked Potentials, Auditory, Humans, Neural Pathways/physiology, Periodicity, Time Factors, Young Adult