Results 1 - 20 of 37
1.
bioRxiv ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38798574

ABSTRACT

When we speak, we not only make movements with our mouth, lips, and tongue, but we also hear the sound of our own voice. Thus, speech production in the brain involves not only controlling the movements we make, but also processing auditory and sensory feedback. Auditory responses are typically suppressed during speech production compared to perception, but how this manifests across space and time is unclear. Here we recorded intracranial EEG in seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy who performed a reading/listening task to investigate how auditory responses are modulated during speech production. We identified onset and sustained responses to speech in bilateral auditory cortex, with a selective suppression of onset responses during speech production. Onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, the posterior insula responded at sentence onset for both perception and production, suggesting a role in multisensory integration during feedback control.

2.
J Cogn Neurosci ; 35(10): 1538-1556, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37584593

ABSTRACT

Speaking elicits a suppressed neural response when compared with listening to others' speech, a phenomenon known as speaker-induced suppression (SIS). Previous research has focused on investigating SIS at constrained levels of linguistic representation, such as the individual phoneme and word level. Here, we present scalp EEG data from a dual speech perception and production task where participants read sentences aloud then listened to playback of themselves reading those sentences. Playback was separated into immediate repetition of the previous trial and randomized repetition of a former trial to investigate if forward modeling of responses during passive listening suppresses the neural response. Concurrent EMG was recorded to control for movement artifact during speech production. In line with previous research, ERP analyses at the sentence level demonstrated suppression of early auditory components of the EEG for production compared with perception. To evaluate whether linguistic abstractions (in the form of phonological feature tuning) are suppressed during speech production alongside lower-level acoustic information, we fit linear encoding models that predicted scalp EEG based on phonological features, EMG activity, and task condition. We found that phonological features were encoded similarly between production and perception. However, this similarity was only observed when controlling for movement by using the EMG response as an additional regressor. Our results suggest that SIS operates at a sensory representational level and is dissociated from higher order cognitive and linguistic processing that takes place during speech perception and production. We also detail some important considerations when analyzing EEG during continuous speech production.


Subjects
Reading, Speech Perception, Humans, Speech Perception/physiology, Speech/physiology, Electroencephalography, Language
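The encoding approach described above can be sketched in a few lines. The following is a minimal illustration of a time-lagged linear (mTRF-style) encoding model with phonological, EMG, and condition regressors; the array names, shapes, lag range, ridge penalty, and random placeholder data are assumptions for demonstration, not the study's materials.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def lag_features(X, n_lags):
    """Stack time-lagged copies of the stimulus features (delays 0..n_lags-1 samples)."""
    n_samples, n_feat = X.shape
    lagged = np.zeros((n_samples, n_feat * n_lags))
    for lag in range(n_lags):
        lagged[lag:, lag * n_feat:(lag + 1) * n_feat] = X[:n_samples - lag]
    return lagged

# Placeholder data: random arrays stand in for real recordings and annotations.
fs = 128                                   # EEG sampling rate (Hz)
n_samples, n_phn, n_chan = 10000, 14, 64
phn = np.random.rand(n_samples, n_phn)     # phonological feature time series
emg = np.random.rand(n_samples, 1)         # rectified EMG envelope (movement regressor)
cond = np.ones((n_samples, 1))             # task condition (1 = production, 0 = perception)
eeg = np.random.randn(n_samples, n_chan)   # scalp EEG

X = lag_features(np.hstack([phn, emg, cond]), n_lags=int(0.6 * fs))   # 0-600 ms delays
X_tr, X_te, y_tr, y_te = train_test_split(X, eeg, test_size=0.2, shuffle=False)

model = Ridge(alpha=1e3).fit(X_tr, y_tr)   # one set of lagged weights per EEG channel
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, c], y_te[:, c])[0, 1] for c in range(n_chan)]
print("median held-out prediction correlation:", np.median(r))
```

Dropping the EMG column and refitting would show how much of the apparent production/perception difference is attributable to movement artifact, which is the control the abstract describes.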
3.
Neuroimage ; 260: 119438, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35792291

ABSTRACT

Since the second half of the twentieth century, intracranial electroencephalography (iEEG), including both electrocorticography (ECoG) and stereo-electroencephalography (sEEG), has provided an intimate view into the human brain. At the interface between fundamental research and the clinic, iEEG provides both high temporal resolution and high spatial specificity but comes with constraints, such as the sparse electrode sampling tailored to each individual. Over the years, neuroscience researchers have developed practices to make the most of the iEEG approach. Here we offer a critical review of iEEG research practices in a didactic framework for newcomers, as well as addressing issues encountered by proficient researchers. The scope is threefold: (i) review common practices in iEEG research, (ii) suggest potential guidelines for working with iEEG data and answer frequently asked questions based on the most widespread practices, and (iii) based on current neurophysiological knowledge and methodologies, pave the way to good practice standards in iEEG research. The organization of this paper follows the steps of iEEG data processing. The first section contextualizes iEEG data collection. The second section focuses on localization of intracranial electrodes. The third section highlights the main pre-processing steps. The fourth section presents iEEG signal analysis methods. The fifth section discusses statistical approaches. The sixth section draws some unique perspectives on iEEG research. Finally, to ensure a consistent nomenclature throughout the manuscript and to align with other guidelines, e.g., Brain Imaging Data Structure (BIDS) and the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), we provide a glossary to disambiguate terms related to iEEG research.


Subjects
Electrocorticography, Electroencephalography, Brain/physiology, Brain Mapping/methods, Electrocorticography/methods, Electrodes, Electroencephalography/methods, Humans
4.
Curr Biol ; 32(7): R311-R313, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35413255

ABSTRACT

Does the brain perceive song as speech with melody? A new study using intracranial recordings and functional brain imaging in humans suggests that it does not. Instead, singing, instrumental music, and speech are represented by different neural populations.


Subjects
Music, Singing, Brain, Humans, Neural Pathways, Speech
5.
Front Hum Neurosci ; 16: 1001171, 2022.
Article in English | MEDLINE | ID: mdl-36741776

ABSTRACT

In many experiments that investigate auditory and speech processing in the brain using electroencephalography (EEG), the experimental paradigm is often lengthy and tedious. Typically, the experimenter errs on the side of including more data, more trials, and therefore conducting a longer task to ensure that the data are robust and effects are measurable. Recent studies have used naturalistic stimuli to investigate the brain's response to individual speech features or combinations of multiple speech features using system identification techniques, such as multivariate temporal receptive field (mTRF) analyses. The neural data collected from such experiments must be divided into a training set and a test set to fit and validate the mTRF weights. While a good strategy is clearly to collect as much data as is feasible, it is unclear how much data are needed to achieve stable results. Furthermore, it is unclear whether the specific stimulus used for mTRF fitting and the choice of feature representation affect how much data would be required for robust and generalizable results. Here, we used previously collected EEG data from our lab using sentence stimuli and movie stimuli, as well as EEG data from an open-source dataset using audiobook stimuli, to better understand how much data needs to be collected for naturalistic speech experiments measuring acoustic and phonetic tuning. We found that the EEG receptive field structure tested here stabilizes after collecting approximately 200 s of TIMIT sentence training data, around 600 s of movie trailer training data, and approximately 460 s of audiobook training data. Thus, we provide suggestions on the minimum amount of data that would be necessary for fitting mTRFs from naturalistic listening data. Our findings are motivated by highly practical concerns when working with children, patient populations, or others who may not tolerate long study sessions. These findings will aid future researchers who wish to study naturalistic speech processing in healthy and clinical populations while minimizing participant fatigue and retaining signal quality.
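As a rough illustration of the data-amount question addressed above, one can refit the same encoding model on progressively longer training segments and watch when held-out prediction stops improving. This is only a sketch with random placeholder arrays; the durations, shapes, and ridge penalty are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

fs = 128
X = np.random.rand(60000, 300)     # lagged stimulus features (placeholder)
y = np.random.randn(60000, 64)     # EEG channels (placeholder)

X_test, y_test = X[-10000:], y[-10000:]          # fixed held-out segment
X_train, y_train = X[:-10000], y[:-10000]

durations_s = np.arange(60, 420, 60)             # candidate training durations (seconds)
for dur in durations_s:
    n = int(dur * fs)                            # training samples for this duration
    model = Ridge(alpha=1e3).fit(X_train[:n], y_train[:n])
    pred = model.predict(X_test)
    r = np.mean([np.corrcoef(pred[:, c], y_test[:, c])[0, 1] for c in range(y.shape[1])])
    print(f"{dur:4d} s of training data: mean held-out r = {r:.3f}")
# The duration after which r (or the receptive-field weights) stops changing appreciably
# is an estimate of how much data the paradigm actually requires.
```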

6.
Front Hum Neurosci ; 15: 726998, 2021.
Article in English | MEDLINE | ID: mdl-34880738

ABSTRACT

Intracranial recordings in epilepsy patients are increasingly utilized to gain insight into the electrophysiological mechanisms of human cognition. There are currently several practical limitations to conducting research with these patients, including patient and researcher availability and the cognitive abilities of patients, which limit the amount of task-related data that can be collected. Prior studies have synchronized clinical audio, video, and neural recordings to understand naturalistic behaviors, but these recordings are centered on the patient to understand their seizure semiology and thus do not capture and synchronize audiovisual stimuli experienced by patients. Here, we describe a platform for cognitive monitoring of neurosurgical patients during their hospitalization that benefits both patients and researchers. We provide the full specifications for this system and describe some example use cases in perception, memory, and sleep research. We provide results obtained from a patient passively watching TV as proof-of-principle for the naturalistic study of cognition. Our system opens up new avenues to collect more data per patient using real-world behaviors, affording new possibilities to conduct longitudinal studies of the electrophysiological basis of human cognition under naturalistic conditions.
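One recurring ingredient of such a platform is aligning an external stimulus track with the bedside recording. Below is a minimal, hedged sketch of one common approach, cross-correlating amplitude envelopes to estimate the offset between two audio streams; the signals and sampling rate are placeholders rather than the system described here.

```python
import numpy as np
from scipy.signal import correlate, hilbert

def estimate_lag_s(ref, other, fs):
    """Estimate how many seconds `other` lags behind `ref` via envelope cross-correlation."""
    env_ref = np.abs(hilbert(ref))
    env_other = np.abs(hilbert(other))
    xc = correlate(env_other - env_other.mean(), env_ref - env_ref.mean(), mode="full")
    lag_samples = np.argmax(xc) - (len(env_ref) - 1)
    return lag_samples / fs

# Placeholder example: the "room" recording is the stimulus delayed by 2 s.
fs = 1000
stim_audio = np.random.randn(60 * fs)
room_audio = np.concatenate([np.zeros(2 * fs), stim_audio])[: 60 * fs]
print("estimated lag (s):", estimate_lag_s(stim_audio, room_audio, fs))
```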

7.
J Neurosci ; 41(43): 8946-8962, 2021 10 27.
Article in English | MEDLINE | ID: mdl-34503996

ABSTRACT

In natural conversations, listeners must attend to what others are saying while ignoring extraneous background sounds. Recent studies have used encoding models to predict electroencephalography (EEG) responses to speech in noise-free listening situations, sometimes referred to as "speech tracking." Researchers have analyzed how speech tracking changes with different types of background noise. It is unclear, however, whether neural responses from acoustically rich, naturalistic environments with and without background noise can be generalized to more controlled stimuli. If encoding models for acoustically rich, naturalistic stimuli are generalizable to other tasks, this could aid in data collection from populations of individuals who may not tolerate listening to more controlled and less engaging stimuli for long periods of time. We recorded noninvasive scalp EEG while 17 human participants (8 male/9 female) listened to speech without noise and audiovisual speech stimuli containing overlapping speakers and background sounds. We fit multivariate temporal receptive field encoding models to predict EEG responses to pitch, the acoustic envelope, phonological features, and visual cues in both stimulus conditions. Our results suggested that neural responses to naturalistic stimuli were generalizable to more controlled datasets. EEG responses to speech in isolation were predicted accurately using phonological features alone, while responses to speech in a rich acoustic background were predicted more accurately when including both phonological and acoustic features. Our findings suggest that naturalistic audiovisual stimuli can be used to measure receptive fields that are comparable and generalizable to more controlled audio-only stimuli. SIGNIFICANCE STATEMENT: Understanding spoken language in natural environments requires listeners to parse acoustic and linguistic information in the presence of other distracting stimuli. However, most studies of auditory processing rely on highly controlled stimuli with no background noise, or with background noise inserted at specific times. Here, we compare models where EEG data are predicted based on a combination of acoustic, phonetic, and visual features in highly disparate stimuli: sentences from a speech corpus and speech embedded within movie trailers. We show that modeling neural responses to highly noisy, audiovisual movies can uncover tuning for acoustic and phonetic information that generalizes to simpler stimuli typically used in sensory neuroscience experiments.


Subjects
Acoustic Stimulation/methods, Brain/physiology, Electroencephalography/methods, Electrooculography/methods, Photic Stimulation/methods, Speech Perception/physiology, Adult, Female, Humans, Male, Motion Pictures, Young Adult
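The feature-set comparison reported above (phonological-only versus phonological plus acoustic regressors) amounts to fitting nested encoding models and comparing held-out prediction accuracy. A minimal sketch follows; the feature blocks, shapes, and regularization are placeholders, not the study's actual regressors.

```python
import numpy as np
from sklearn.linear_model import Ridge

n = 20000
phonological = np.random.rand(n, 14 * 40)   # time-lagged phonological features (placeholder)
acoustic = np.random.rand(n, 3 * 40)        # time-lagged envelope + pitch features (placeholder)
eeg = np.random.randn(n, 64)                # scalp EEG channels (placeholder)

def held_out_r(X, y, n_test=4000, alpha=1e3):
    """Fit ridge on the first part of the recording; return mean prediction r on the rest."""
    model = Ridge(alpha=alpha).fit(X[:-n_test], y[:-n_test])
    pred = model.predict(X[-n_test:])
    return np.mean([np.corrcoef(pred[:, c], y[-n_test:, c])[0, 1] for c in range(y.shape[1])])

r_phn = held_out_r(phonological, eeg)
r_joint = held_out_r(np.hstack([phonological, acoustic]), eeg)
print(f"phonological-only model:       r = {r_phn:.3f}")
print(f"phonological + acoustic model: r = {r_joint:.3f}")
# A reliable improvement for the joint model indicates that acoustic features explain variance
# beyond the phonological features, as reported for speech in rich acoustic backgrounds.
```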
8.
Cell ; 184(18): 4626-4639.e13, 2021 09 02.
Article in English | MEDLINE | ID: mdl-34411517

ABSTRACT

Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed with electrocortical stimulation: stimulation of the primary auditory cortex evokes auditory hallucinations but does not distort or interfere with speech perception. Opposite effects were observed during stimulation of nonprimary cortex in the superior temporal gyrus. Ablation of the primary auditory cortex does not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential independent role for nonprimary auditory cortex in speech processing.


Subjects
Auditory Cortex/physiology, Speech/physiology, Audiometry, Pure-Tone, Electrodes, Electronic Data Processing, Humans, Phonetics, Pitch Perception, Reaction Time/physiology, Temporal Lobe/physiology
9.
Epilepsia ; 62(4): 947-959, 2021 04.
Article in English | MEDLINE | ID: mdl-33634855

ABSTRACT

OBJECTIVE: Intracranial electroencephalography (ICEEG) recordings are performed for seizure localization in medically refractory epilepsy. Signal quantifications such as frequency power can be projected as heatmaps on personalized three-dimensional (3D) reconstructed cortical surfaces to distill these complex recordings into intuitive cinematic visualizations. However, simultaneously reconciling deep recording locations and reliably tracking evolving ictal patterns remain significant challenges. METHODS: We fused oblique magnetic resonance imaging (MRI) slices along depth probe trajectories with cortical surface reconstructions and projected dynamic heatmaps using a simple mathematical metric of epileptiform activity (line-length). This omni-planar and surface casting of epileptiform activity approach (OPSCEA) thus illustrated seizure onset and spread among both deep and superficial locations simultaneously with minimal need for signal processing supervision. We utilized the approach on 41 patients at our center implanted with grid, strip, and/or depth electrodes for localizing medically refractory seizures. Peri-ictal data were converted into OPSCEA videos with multiple 3D brain views illustrating all electrode locations. Five people of varying expertise in epilepsy (medical student through epilepsy attending level) attempted to localize the seizure-onset zones. RESULTS: We retrospectively compared this approach with the original ICEEG study reports for validation. Accuracy ranged from 73.2% to 97.6% for complete or overlapping onset lobe(s), respectively, and ~56.1% to 95.1% for the specific focus (or foci). Higher answer certainty for a given case predicted better accuracy, and scorers had similar accuracy across different training levels. SIGNIFICANCE: In an era of increasing stereo-EEG use, cinematic visualizations fusing omni-planar and surface functional projections appear to provide a useful adjunct for interpreting complex intracranial recordings and subsequent surgery planning.


Subjects
Drug-Resistant Epilepsy/diagnostic imaging, Drug-Resistant Epilepsy/physiopathology, Electrocorticography/standards, Magnetic Resonance Imaging/standards, Seizures/diagnostic imaging, Seizures/physiopathology, Adolescent, Adult, Brain/diagnostic imaging, Brain/physiopathology, Child, Child, Preschool, Electrocorticography/methods, Female, Follow-Up Studies, Humans, Magnetic Resonance Imaging/methods, Male, Middle Aged, Retrospective Studies, Young Adult
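The line-length metric mentioned in the abstract is simple to compute: within a sliding window, sum the absolute sample-to-sample differences of each channel. A rough sketch, with window and step sizes chosen arbitrarily for illustration:

```python
import numpy as np

def line_length(ieeg, fs, win_s=1.0, step_s=0.1):
    """Sliding-window line length (sum of absolute first differences) per channel.

    ieeg: (n_channels, n_samples) array. Returns (n_channels, n_windows) and window start times (s).
    """
    win, step = int(win_s * fs), int(step_s * fs)
    diffs = np.abs(np.diff(ieeg, axis=1))
    starts = np.arange(0, diffs.shape[1] - win + 1, step)
    ll = np.stack([diffs[:, s:s + win].sum(axis=1) for s in starts], axis=1)
    return ll, starts / fs

# Placeholder clip: random data stands in for a real intracranial recording.
fs = 512
ieeg = np.random.randn(16, 30 * fs)
ll, t = line_length(ieeg, fs)
# Per-channel z-scoring gives values suitable for projection as a surface heatmap.
ll_z = (ll - ll.mean(axis=1, keepdims=True)) / ll.std(axis=1, keepdims=True)
print(ll_z.shape, t[:3])
```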
10.
Lang Cogn Neurosci ; 35(5): 573-582, 2020.
Article in English | MEDLINE | ID: mdl-32656294

ABSTRACT

Humans have a unique ability to produce and consume rich, complex, and varied language in order to communicate ideas to one another. Still, outside of natural reading, the most common methods for studying how our brains process speech or understand language use only isolated words or simple sentences. Recent studies have upset this status quo by employing complex natural stimuli and measuring how the brain responds to language as it is used. In this article we argue that natural stimuli offer many advantages over simplified, controlled stimuli for studying how language is processed by the brain. Furthermore, the downsides of using natural language stimuli can be mitigated using modern statistical and computational techniques.

12.
Nat Hum Behav ; 3(4): 327-328, 2019 04.
Article in English | MEDLINE | ID: mdl-30971803
13.
Front Hum Neurosci ; 12: 360, 2018.
Article in English | MEDLINE | ID: mdl-30279650

ABSTRACT

Background: Numerous studies have demonstrated that individuals exhibit structured neural activity in many brain regions during rest that is also observed during different tasks; however, it is still not clear whether and how resting-state activity patterns may relate to underlying tuning for specific stimuli. In the posterior superior temporal gyrus (STG), distinct neural activity patterns are observed during the perception of specific linguistic speech features. We hypothesized that spontaneous resting-state neural dynamics of the STG would be structured to reflect its role in speech perception, exhibiting an organization along speech features as seen during speech perception. Methods: Human cortical local field potentials were recorded from the STG in 8 patients undergoing surgical treatment of epilepsy. Signals were recorded during speech perception and rest. Patterns of neural activity (high gamma power: 70-150 Hz) during rest, extracted with spatiotemporal principal component analysis, were compared to spatiotemporal neural responses to speech features during perception. Hierarchical clustering was applied to look for patterns in rest that corresponded to speech feature tuning. Results: Significant correlations were found between neural responses to speech features (sentence onsets, consonants, and vowels) and the spontaneous neural activity in the STG. Across subjects, these correlations clustered into five groups, demonstrating tuning for speech features, most robustly for acoustic onsets. These correlations were not seen in other brain areas, or during motor and spectrally rotated speech control tasks. Conclusions: In this study, we present evidence that the resting-state structure of STG activity robustly recapitulates its stimulus-evoked response to acoustic onsets. Further, secondary patterns in resting-state activity appear to correlate with stimulus-evoked responses to speech features. The role of these spontaneous spatiotemporal activity patterns remains to be elucidated.
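A rough sketch of two ingredients named in the Methods, extracting 70-150 Hz high-gamma power and summarizing it with spatiotemporal principal component analysis, is given below; the filter settings, window length, and random placeholder data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.decomposition import PCA

def high_gamma_power(lfp, fs, band=(70.0, 150.0)):
    """Band-pass the field potential and take the Hilbert analytic amplitude (high-gamma proxy)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, lfp, axis=-1)
    return np.abs(hilbert(filtered, axis=-1))

# Placeholder resting-state recording: (n_electrodes, n_samples) of random data as a stand-in.
fs = 400
rest_lfp = np.random.randn(32, 120 * fs)
hg = high_gamma_power(rest_lfp, fs)

# Spatiotemporal PCA: treat short electrode-by-time snippets as observations.
win = int(0.5 * fs)
snippets = np.stack([hg[:, s:s + win].ravel() for s in range(0, hg.shape[1] - win, win)])
pcs = PCA(n_components=5).fit(snippets)
print("variance explained by first 5 spatiotemporal components:", pcs.explained_variance_ratio_)
```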

14.
Curr Biol ; 28(12): 1860-1871.e4, 2018 06 18.
Article in English | MEDLINE | ID: mdl-29861132

ABSTRACT

To derive meaning from speech, we must extract multiple dimensions of concurrent information from incoming speech signals. That is, equally important to processing phonetic features is the detection of acoustic cues that give structure and context to the information we hear. How the brain organizes this information is unknown. Using data-driven computational methods on high-density intracranial recordings from 27 human participants, we reveal the functional distinction of neural responses to speech in the posterior superior temporal gyrus according to either onset or sustained response profiles. Though similar response types have been observed throughout the auditory system, we found novel evidence for a major spatial parcellation in which a distinct caudal zone detects acoustic onsets and a rostral-surround zone shows sustained, relatively delayed responses to ongoing speech stimuli. While posterior onset and anterior sustained responses are used substantially during natural speech perception, they are not limited to speech stimuli and are seen even for reversed or spectrally rotated speech. Single-electrode encoding of phonetic features in each zone depended upon whether the sound occurred at sentence onset, suggesting joint encoding of phonetic features and their temporal context. Onset responses in the caudal zone could accurately decode sentence and phrase onset boundaries, providing a potentially important internal mechanism for detecting temporal landmarks in speech and other natural sounds. These findings suggest that onset and sustained responses not only define the basic spatial organization of high-order auditory cortex but also have direct implications for how speech information is parsed in the cortex. VIDEO ABSTRACT.


Subjects
Speech Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Phonetics, Young Adult
15.
Neurosurgery ; 83(4): 683-691, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29040672

ABSTRACT

BACKGROUND: Interictal epileptiform discharges are an important biomarker for localization of focal epilepsy, especially in patients who undergo chronic intracranial monitoring. Manual detection of these pathophysiological events is cumbersome, but is still superior to current rule-based approaches in most automated algorithms. OBJECTIVE: To develop an unsupervised machine-learning algorithm for the improved, automated detection and localization of interictal epileptiform discharges based on spatiotemporal pattern recognition. METHODS: We decomposed 24 h of intracranial electroencephalography signals into basis functions and activation vectors using non-negative matrix factorization (NNMF). Thresholding the activation vector and the basis function of interest detected interictal epileptiform discharges in time and space (specific electrodes), respectively. We used convolutive NNMF, a refined algorithm, to add a temporal dimension to basis functions. RESULTS: The receiver operating characteristics for NNMF-based detection are close to the gold standard of human visual-based detection and superior to currently available alternative automated approaches (93% sensitivity and 97% specificity). The algorithm successfully identified thousands of interictal epileptiform discharges across a full day of neurophysiological recording and accurately summarized their localization into a single map. Adding a temporal window allowed for visualization of the archetypal propagation network of these epileptiform discharges. CONCLUSION: Unsupervised learning offers a powerful approach towards automated identification of recurrent pathological neurophysiological signals, which may have important implications for precise, quantitative, and individualized evaluation of focal epilepsy.


Subjects
Algorithms, Electroencephalography/methods, Epilepsies, Partial/physiopathology, Unsupervised Machine Learning, Adult, Aged, Epilepsies, Partial/diagnosis, Female, Humans, Male, Middle Aged, Retrospective Studies, Seizures/diagnosis, Seizures/physiopathology
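The NNMF decomposition described above factorizes a non-negative channel-by-time matrix into spatial basis functions and activation time courses; thresholding an activation row yields detections in time, and its corresponding basis column gives the localization map. A minimal sketch with scikit-learn (the component choice, threshold, and random input are placeholders):

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder input: non-negative envelope or time-frequency features of iEEG,
# shaped (n_channels, n_timebins); random data stands in for a real recording.
rng = np.random.default_rng(0)
V = rng.random((64, 50000))

# Factorize V ~= W @ H: columns of W are spatial basis functions (electrode weightings),
# rows of H are their activation time courses.
model = NMF(n_components=10, init="nndsvd", max_iter=300)
W = model.fit_transform(V)        # (n_channels, n_components)
H = model.components_             # (n_components, n_timebins)

# Pick the component whose basis resembles the interictal discharge topography (e.g., by
# comparison with a few hand-marked events), then threshold its activation time course.
k = 0                                              # index of the component of interest (assumed)
threshold = H[k].mean() + 3 * H[k].std()
detections = np.flatnonzero(H[k] > threshold)      # candidate discharge times (bin indices)
spatial_map = W[:, k]                              # per-electrode involvement for localization
print(len(detections), "candidate events; top electrodes:", np.argsort(spatial_map)[-5:])
```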
16.
Front Neuroinform ; 11: 62, 2017.
Article in English | MEDLINE | ID: mdl-29163118

ABSTRACT

In this article, we introduce img_pipe, our open-source Python package for preprocessing of imaging data for use in intracranial electrocorticography (ECoG) and stereo-EEG analyses. The process of electrode localization, labeling, and warping for use in ECoG currently varies widely across laboratories, and it is usually performed with custom, lab-specific code. This Python package aims to provide a standardized interface for these procedures, as well as code to plot and display results on 3D cortical surface meshes. It gives the user an easy interface to create anatomically labeled electrodes that can also be warped to an atlas brain, starting with only a preoperative T1 MRI scan and a postoperative CT scan. We describe the full capabilities of our imaging pipeline and present a step-by-step protocol for users.
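One small but central step in any such pipeline is converting electrode voxel coordinates marked on an imaging volume into scanner millimeter space. The sketch below is not img_pipe's API; it only shows the underlying NIfTI affine transform, with hypothetical file names and coordinates.

```python
import numpy as np
import nibabel as nib

# Map electrode voxel indices marked on a volume into scanner RAS millimeter coordinates.
t1 = nib.load("T1.nii.gz")                        # hypothetical preoperative T1 path
vox = np.array([[120, 98, 87],                    # hypothetical electrode voxel coordinates
                [122, 99, 87],
                [124, 100, 88]], dtype=float)

affine = t1.affine                                # 4x4 voxel-to-RAS transform from the NIfTI header
ras_mm = nib.affines.apply_affine(affine, vox)    # electrode positions in scanner RAS (mm)
print(ras_mm)
```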

17.
J Neural Eng ; 13(5): 056013, 2016 10.
Article in English | MEDLINE | ID: mdl-27578414

ABSTRACT

OBJECTIVE: Electrocorticography (ECoG) has become an important tool in human neuroscience and has tremendous potential for emerging applications in neural interface technology. Electrode array design parameters are outstanding issues for both research and clinical applications, and these parameters depend critically on the nature of the neural signals to be recorded. Here, we investigate the functional spatial resolution of neural signals recorded at the human cortical surface. We empirically derive spatial spread functions to quantify the shared neural activity for each frequency band of the electrocorticogram. APPROACH: Five subjects with high-density (4 mm center-to-center spacing) ECoG grid implants participated in speech perception and production tasks while neural activity was recorded from the speech cortex, including superior temporal gyrus, precentral gyrus, and postcentral gyrus. The cortical surface field potential was decomposed into traditional EEG frequency bands. Signal similarity between electrode pairs for each frequency band was quantified using a Pearson correlation coefficient. MAIN RESULTS: The correlation of neural activity between electrode pairs was inversely related to the distance between the electrodes; this relationship was used to quantify spatial falloff functions for cortical subdomains. As expected, lower frequencies remained correlated over larger distances than higher frequencies. However, both the envelope and phase of gamma and high gamma frequencies (30-150 Hz) are largely uncorrelated (<90%) at 4 mm, the smallest spacing of the high-density arrays. Thus, ECoG arrays smaller than 4 mm have significant promise for increasing signal resolution at high frequencies, whereas less additional gain is achieved for lower frequencies. SIGNIFICANCE: Our findings quantitatively demonstrate the dependence of ECoG spatial resolution on the neural frequency of interest. We demonstrate that this relationship is consistent across patients and across cortical areas during activity.


Subjects
Cerebral Cortex/physiology, Electrocorticography/methods, Space Perception/physiology, Speech Perception/physiology, Speech/physiology, Adult, Brain Mapping, Electrodes, Female, Gamma Rhythm/physiology, Humans, Male, Theta Rhythm/physiology
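The falloff analysis described above reduces to band-limiting each channel, taking the amplitude envelope, and relating pairwise Pearson correlations to inter-electrode distance. A coarse sketch follows; the grid geometry, band definitions, and random data are placeholders for the study's recordings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(x, fs, lo, hi):
    """Band-pass each channel and return its Hilbert amplitude envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))

# Placeholder high-density grid: random signals on an 8 x 8 layout with 4 mm pitch.
fs, n_rows = 400, 8
ecog = np.random.randn(n_rows * n_rows, 60 * fs)
xy = np.array([[4.0 * r, 4.0 * c] for r in range(n_rows) for c in range(n_rows)])  # mm
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)                     # pairwise mm

for name, (lo, hi) in {"theta": (4, 8), "high gamma": (70, 150)}.items():
    corr = np.corrcoef(band_envelope(ecog, fs, lo, hi))   # channel-by-channel envelope correlation
    upper = np.triu_indices_from(corr, k=1)               # unique electrode pairs
    near = dist[upper] <= 4.5                              # nearest neighbors (~4 mm apart)
    print(f"{name}: mean envelope correlation at ~4 mm = {corr[upper][near].mean():.3f}")
```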
19.
J Neurosci ; 36(6): 2014-26, 2016 Feb 10.
Article in English | MEDLINE | ID: mdl-26865624

ABSTRACT

The human superior temporal gyrus (STG) is critical for speech perception, yet the organization of spectrotemporal processing of speech within the STG is not well understood. Here, to characterize the spatial organization of spectrotemporal processing of speech across human STG, we use high-density cortical surface field potential recordings while participants listened to natural continuous speech. While synthetic broadband stimuli did not yield sustained activation of the STG, spectrotemporal receptive fields could be reconstructed from vigorous responses to speech stimuli. We find that the human STG displays a robust anterior-posterior spatial distribution of spectrotemporal tuning in which the posterior STG is tuned for temporally fast varying speech sounds that have relatively constant energy across the frequency axis (low spectral modulation), while the anterior STG is tuned for temporally slow varying speech sounds that have a high degree of spectral variation across the frequency axis (high spectral modulation). This work illustrates the organization of spectrotemporal processing in the human STG and illuminates the processing of ethologically relevant speech signals in a region of the brain specialized for speech perception. SIGNIFICANCE STATEMENT: Considerable evidence has implicated the human superior temporal gyrus (STG) in speech processing. However, the gross organization of spectrotemporal processing of speech within the STG is not well characterized. Here we use natural speech stimuli and advanced receptive field characterization methods to show that spectrotemporal features within speech are well organized along the posterior-to-anterior axis of the human STG. These findings demonstrate robust functional organization based on spectrotemporal modulation content, and illustrate that much of the encoded information in the STG represents the physical acoustic properties of speech stimuli.


Subjects
Speech Perception/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Algorithms, Brain Mapping, Energy Metabolism/physiology, Evoked Potentials/physiology, Female, Humans, Male, Phonetics
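The spectral and temporal modulation tuning discussed above can be read off a fitted spectrotemporal receptive field by taking its two-dimensional Fourier transform. The following sketch uses a random placeholder in place of a real STRF, and the bin spacings (1/8 octave, 10 ms) are assumptions:

```python
import numpy as np

n_freq, n_lag = 48, 60
freq_step_oct = 1.0 / 8                  # spectrogram bins spaced 1/8 octave apart (assumed)
lag_step_s = 0.01                        # 10 ms delay resolution (assumed)
strf = np.random.randn(n_freq, n_lag)    # placeholder for a fitted STRF (frequency x delay)

mtf = np.abs(np.fft.fftshift(np.fft.fft2(strf)))                       # modulation transfer function
spec_mod = np.fft.fftshift(np.fft.fftfreq(n_freq, d=freq_step_oct))    # cycles per octave
temp_mod = np.fft.fftshift(np.fft.fftfreq(n_lag, d=lag_step_s))        # Hz

# The MTF of a real-valued STRF is symmetric about the origin, so restrict to
# non-negative spectral modulations and take the peak as the "best" modulation.
pos = spec_mod >= 0
i, j = np.unravel_index(np.argmax(mtf[pos]), mtf[pos].shape)
print(f"best spectral modulation: {spec_mod[pos][i]:.2f} cyc/oct; "
      f"best temporal modulation: {abs(temp_mod[j]):.1f} Hz")
```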
20.
Schizophr Res ; 161(2-3): 357-66, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25497222

ABSTRACT

Declarative memory (DM) impairments are reported in schizophrenia and in unaffected biological relatives of patients. However, the neural correlates of successful and unsuccessful encoding, mediated by the medial temporal lobe (MTL) memory system, and the influence of disease-related genetic liability remain underexplored. This study employed an event-related functional MRI paradigm to compare activations for successfully and unsuccessfully encoded associative face-name stimuli between 26 schizophrenia patients (mean age: 33, 19m/7f), 30 controls (mean age: 29, 24m/6f), and 14 unaffected relatives of patients (mean age: 40, 5m/9f). Compared to controls or unaffected relatives, patients showed hyper-activations in ventral visual stream and temporo-parietal cortical association areas when contrasting successfully encoded events to fixation. Follow-up hippocampal regions-of-interest analysis revealed schizophrenia-related hyper-activations in the right anterior hippocampus during successful encoding; contrasting successful versus unsuccessful events produced schizophrenia-related hypo-activations in the left anterior hippocampus. Similar hippocampal hypo-activations were observed in unaffected relatives during successful versus unsuccessful encoding. Post hoc analyses of hippocampal volume showed reductions in patients, but not in unaffected relatives, compared to controls. Findings suggest that DM encoding deficits are attributable to both disease-specific and genetic liability factors that impact different components of the MTL memory system. Hyper-activations in temporo-occipital and parietal regions observed only in patients suggest the influence of disease-related factors. Regional hyper- and hypo-activations attributable to successful encoding occurring in both patients and unaffected relatives suggest the influence of schizophrenia-related genetic liability factors.


Subjects
Hippocampus/physiopathology, Memory/physiology, Schizophrenia/physiopathology, Adult, Family, Female, Functional Laterality, Hippocampus/pathology, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Organ Size, Parietal Lobe/physiopathology, Schizophrenia/pathology, Schizophrenic Psychology, Temporal Lobe/physiopathology