Results 1 - 20 of 40
1.
Psychon Bull Rev ; 30(5): 1928-1938, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36997717

ABSTRACT

Emotion influences many cognitive processes and plays an important role in our daily life. Previous studies focused on the effects of arousal on subsequent cognitive processing, but the effect of valence on subsequent semantic processing is still not clear. The present study examined the effect of auditory valence on subsequent visual semantic processing while controlling for arousal. We used instrumental music clips that varied in valence but were matched in arousal to induce valence states and asked participants to make natural or man-made judgements about subsequently presented neutral objects. We found that positive and negative valences similarly impaired subsequent semantic processing compared with neutral valence. Linear ballistic accumulator model analyses showed that the valence effects can be attributed to drift rate differences, suggesting that the effects are likely related to attentional selection. Our findings are consistent with a motivated attention model, indicating comparable attentional capture by positive and negative valences in modulating subsequent cognitive processes.


Subjects
Brain Mapping, Semantics, Humans, Visual Perception, Emotions, Attention
2.
Behav Res Methods ; 55(6): 2853-2884, 2023 09.
Article in English | MEDLINE | ID: mdl-35971041

ABSTRACT

The number of databases that provide various measurements of lexical properties for psycholinguistic research has increased rapidly in recent years. The proliferation of lexical variables, and the multitude of associated databases, makes the choice, comparison, and standardization of these variables in psycholinguistic research increasingly difficult. Here, we introduce the South Carolina Psycholinguistic Metabase (SCOPE), a metabase (or meta-database) containing an extensive, curated collection of psycholinguistic variable values from major databases. The metabase currently contains 245 lexical variables, organized into seven major categories: General (e.g., frequency), Orthographic (e.g., bigram frequency), Phonological (e.g., phonological uniqueness point), Orth-Phon (e.g., consistency), Semantic (e.g., concreteness), Morphological (e.g., number of morphemes), and Response variables (e.g., lexical decision latency). We hope that SCOPE will become a valuable resource for researchers in psycholinguistics and affiliated disciplines such as the cognitive neuroscience of language, computational linguistics, and communication disorders. The availability and ease of use of a metabase with a comprehensive set of variables can clarify the unique contribution of each variable to word processing, as well as interactions between variables, and can support new insights and the development of improved models and theories of word processing. It can also help standardize practice in psycholinguistics. We demonstrate use of the metabase by measuring relationships between variables in multiple ways and testing their individual contributions towards a number of dependent measures, in the most comprehensive analysis of this kind to date. The metabase is freely available at go.sc.edu/scope.
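
As a rough illustration of how a metabase like SCOPE might be queried once exported to a flat file, the pandas sketch below correlates a few lexical variables with lexical decision latency; the file name and column names are hypothetical placeholders rather than the actual SCOPE schema.

```python
# Minimal sketch of exploring a SCOPE-style export with pandas.
# File name and column names ("frequency", "concreteness", "num_morphemes",
# "ldt_rt") are hypothetical placeholders, not the metabase's actual schema.
import pandas as pd

scope = pd.read_csv("scope_export.csv")           # hypothetical export from go.sc.edu/scope

predictors = ["frequency", "concreteness", "num_morphemes"]
target = "ldt_rt"                                 # lexical decision latency

# Pairwise Pearson correlations between each predictor and the response variable
corrs = scope[predictors + [target]].corr()[target].drop(target)
print(corrs.sort_values())
```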


Subjects
Language, Psycholinguistics, Humans, South Carolina, Linguistics, Semantics
3.
Cereb Cortex ; 33(9): 5574-5584, 2023 04 25.
Article in English | MEDLINE | ID: mdl-36336347

ABSTRACT

People can seamlessly integrate a vast array of information from what they see and hear in the noisy and uncertain world. However, the neural underpinnings of audiovisual integration continue to be a topic of debate. Using strict inclusion criteria, we performed an activation likelihood estimation meta-analysis on 121 neuroimaging experiments with a total of 2,092 participants. We found that audiovisual integration is linked with the coexistence of multiple integration sites, including early cortical, subcortical, and higher association areas. Although activity was consistently found within the superior temporal cortex, different portions of this cortical region were identified depending on the analytical contrast used, complexity of the stimuli, and modality within which attention was directed. The context-dependent neural activity related to audiovisual integration suggests a flexible rather than fixed neural pathway for audiovisual integration. Together, our findings highlight a flexible multiple pathways model for audiovisual integration, with superior temporal cortex as the central node in these neural assemblies.
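
The abstract describes an activation likelihood estimation (ALE) meta-analysis. A generic coordinate-based ALE could be sketched with the NiMARE library as below; this is not the authors' pipeline, the Sleuth-format coordinate file is a placeholder, and interface details may differ across NiMARE versions.

```python
# Sketch of a coordinate-based ALE meta-analysis with NiMARE (assumptions noted above).
from nimare.io import convert_sleuth_to_dataset
from nimare.meta.cbma import ALE
from nimare.correct import FWECorrector

dset = convert_sleuth_to_dataset("audiovisual_foci.txt")   # hypothetical file of reported coordinates

ale = ALE()                                 # activation likelihood estimation
results = ale.fit(dset)

# Family-wise error correction via Monte Carlo permutation
corrector = FWECorrector(method="montecarlo", n_iters=10000)
corrected = corrector.transform(results)
corrected.save_maps(output_dir="ale_maps")  # write resulting statistical maps to disk
```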


Subjects
Auditory Perception, Visual Perception, Humans, Visual Perception/physiology, Auditory Perception/physiology, Magnetic Resonance Imaging/methods, Brain/physiology, Neuroimaging, Photic Stimulation, Brain Mapping, Acoustic Stimulation
4.
J Exp Psychol Gen ; 151(9): 2144-2159, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35113643

ABSTRACT

Recognizing written or spoken words involves a sequence of processing stages, transforming sensory features into lexical-semantic representations. Whereas the later processing stages are common across modalities, the initial stages are modality-specific. In the visual modality, previous studies have shown that words with positive valence are recognized faster than neutral words. Here, we examined whether the effects of valence on word recognition are specific to the visual modality or are common across visual and auditory modalities. To address this question, we analyzed multiple large databases of visual and auditory lexical decision tasks, relating the valence of words to lexical decision times while controlling for a large number of variables, including arousal and frequency. We found that valence differentially influenced visual and auditory word recognition. Valence had an asymmetric effect on visual lexical decision times, primarily speeding up recognition of positive words. By contrast, valence had a symmetric effect on auditory lexical decision times, with both negative and positive words speeding up word recognition relative to neutral words. The modality-specificity of valence effects was consistent across databases and was observed when the same sets of words were compared across modalities. We interpret these findings as indicating that valence influences word recognition partly at the sensory-perceptual stage. We relate these effects to the effects of positive (reward) and negative (punishment) reinforcers on perception. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
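
A hedged sketch of the kind of regression implied here: lexical decision RT modeled as a function of valence while controlling for arousal and frequency, with valence split into positive and negative components so that asymmetric effects can emerge. The data file, column names, and the assumed 1-9 valence scale with a midpoint of 5 are all illustrative assumptions.

```python
# Illustrative regression of lexical decision RT on valence, controlling for
# arousal and log frequency. Column names and data file are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("ldt_merged.csv")    # one row per word: rt, valence, arousal, log_freq

# Split valence around an assumed scale midpoint of 5 (1-9 scale) so that
# positive and negative valence can carry different (asymmetric) effects.
data["pos_val"] = (data["valence"] - 5).clip(lower=0)
data["neg_val"] = (5 - data["valence"]).clip(lower=0)

model = smf.ols("rt ~ pos_val + neg_val + arousal + log_freq", data=data).fit()
print(model.summary())
```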


Subjects
Arousal, Recognition (Psychology), Humans, Reward, Semantics, Writing
5.
Emotion ; 22(6): 1270-1280, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33211510

ABSTRACT

Negative events have a greater influence on cognitive processing than positive events, a phenomenon referred to as the negativity bias. Previous studies have shown that reaction times (RTs) to negatively valenced items are slower in semantic tasks. According to the automatic vigilance hypothesis, these effects are caused by preferential attention to negative stimuli or features diverting cognitive resources away from semantic processing. However, it is still unclear whether the negativity bias can be modulated by affective context in a crossmodal setting and how that occurs. Experiment 1 examined individually presented pictures and words and established that participants were slower to judge negatively valenced picture and word targets in a semantic task. Experiments 2 and 3 probed the crossmodal influences of valence on subsequent semantic processing by using short music clips as primes and valenced pictures or words as targets. Both experiments demonstrated that priming negative versus positive music slowed RTs in a semantic task, irrespective of target valence. Hierarchical Bayesian drift diffusion model analyses suggest that the slow-down effects for negative conditions are mainly attributable to reduced drift rates. Together, these experiments demonstrate that negative auditory valence can impair subsequent semantic processing of visual targets in an additive fashion. These results provide further support for the automatic vigilance hypothesis. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
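
For readers unfamiliar with hierarchical Bayesian drift diffusion modeling, a minimal sketch using the HDDM package is shown below, letting drift rate vary by prime condition; this is not the authors' exact model, and the data columns and condition labels are assumed for illustration.

```python
# Minimal hierarchical drift diffusion model sketch with the HDDM package.
# The data file and the "prime_valence" column are hypothetical placeholders.
import hddm

data = hddm.load_csv("semantic_task.csv")      # expects columns: rt, response, prime_valence

model = hddm.HDDM(data, depends_on={"v": "prime_valence"})
model.find_starting_values()                   # optimize initial values before sampling
model.sample(2000, burn=500)                   # MCMC sampling of group and subject parameters
model.print_stats()                            # posterior summaries, incl. drift rate per condition
```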


Subjects
Music, Semantics, Bayes Theorem, Humans, Reaction Time
6.
Biol Psychol ; 166: 108222, 2021 11.
Article in English | MEDLINE | ID: mdl-34758371

ABSTRACT

Previous studies have examined the effects of retrieval practice and emotion on associative memory separately. However, it is not yet clear what the related neural mechanisms are, or how the two factors jointly influence associative memory. We examined this question by instructing participants to memorize emotional or neutral words using different learning methods. Behaviorally, source memory was enhanced by retrieval practice compared to restudy, and impaired in the negative compared to the neutral condition, with no interaction between the two factors. Consistent neural effects of retrieval practice were also found: subsequent memory effects (SMEs) in 500-700 ms parietal ERPs and in alpha desynchronization were observed for retrieval practice but not for restudy. No significant differences in SMEs were found in the ERP or time-frequency analyses with respect to the emotion effect. These results clarify the neural mechanisms underlying the effects of emotion and retrieval practice on subsequent memory.


Subjects
Electroencephalography, Mental Recall, Emotions, Evoked Potentials, Humans, Learning
7.
Cogn Emot ; 35(8): 1634-1651, 2021 12.
Article in English | MEDLINE | ID: mdl-34486494

ABSTRACT

Although numerous studies have shown that people are more likely to integrate consistent visual and auditory signals, the role of non-affective congruence in emotion perception is unclear. This registered report examined the influence of non-affective cross-modal congruence on emotion perception. In Experiment 1, non-affective congruence was manipulated by matching or mismatching gender between the visual and auditory modalities. Participants were instructed to attend to emotion information from only one modality while ignoring the other. Experiment 2 tested the inverse effectiveness rule by including both noise and noiseless conditions. Across the two experiments, we found effects of task-irrelevant emotional signals from one modality on emotion perception in the other modality, reflected in affective congruence, facilitation, and affective incongruence effects. The effects were stronger for the attend-auditory than for the attend-visual condition, supporting a visual dominance effect. The effects were also stronger for the noise than for the noiseless condition, consistent with the inverse effectiveness rule. We did not find evidence for effects of non-affective congruence on audiovisual integration of emotion in either experiment, suggesting that audiovisual integration of emotion may not require automatic integration of non-affective congruence information.


Subjects
Auditory Perception, Visual Perception, Acoustic Stimulation, Emotions, Humans, Photic Stimulation
8.
Cortex ; 138: 127-137, 2021 05.
Article in English | MEDLINE | ID: mdl-33684626

ABSTRACT

A fundamental question in affective neuroscience is whether there is a common hedonic system for valence processing independent of modality, or whether there are distinct neural systems for different modalities. To address this question, we used both region-of-interest and whole-brain representational similarity analyses on functional magnetic resonance imaging data to identify modality-general and modality-specific brain areas involved in valence processing across visual and auditory modalities. First, region-of-interest analyses showed that the superior temporal cortex was associated with both the modality-general and the auditory-specific models, while the primary visual cortex was associated with the visual-specific model. Second, whole-brain searchlight analyses also identified both modality-general and modality-specific representations. The modality-general regions included the superior temporal, medial superior frontal, inferior frontal, precuneus, precentral, postcentral, supramarginal, paracentral lobule and middle cingulate cortices. The modality-specific regions included both perceptual cortices and higher-order brain areas. Valence representations derived from individualized behavioral valence ratings were consistent with these results. Together, these findings suggest both modality-general and modality-specific representations of valence.
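
The representational similarity logic can be sketched in a few lines of NumPy/SciPy: compare a neural representational dissimilarity matrix (RDM) from a region of interest against a modality-general valence model RDM. The condition ordering, pattern shapes, and model coding below are illustrative assumptions, not the study's actual design.

```python
# Sketch of representational similarity analysis for a modality-general valence model.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Conditions ordered: visual-pos, visual-neu, visual-neg, auditory-pos, auditory-neu, auditory-neg
valence = np.array([1, 0, -1, 1, 0, -1])

patterns = np.random.rand(6, 500)                     # placeholder for real ROI activity patterns
neural_rdm = pdist(patterns, metric="correlation")    # neural dissimilarities (vectorized upper triangle)

# Modality-general model: same-valence conditions predicted similar, regardless of modality
model_rdm = pdist(valence[:, None], metric="cityblock")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"modality-general valence model fit: rho = {rho:.2f}, p = {p:.3f}")
```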


Subjects
Auditory Perception, Brain Mapping, Brain/diagnostic imaging, Humans, Magnetic Resonance Imaging, Visual Perception
9.
Biol Psychol ; 158: 108006, 2021 01.
Article in English | MEDLINE | ID: mdl-33301827

ABSTRACT

Our affective experiences are influenced by combined multisensory information. Although the enhanced effects of congruent audiovisual information on our affective experiences have been well documented, the role of neural oscillations in the audiovisual integration of affective signals remains unclear. First, it is unclear whether oscillatory activity changes as a function of valence. Second, the function of phase-locked and non-phase-locked power changes in audiovisual integration of affect has not yet been clearly distinguished. To fill this gap, the present study performed time-frequency analyses on EEG data acquired while participants perceived positive, neutral and negative naturalistic video and music clips. A comparison between the congruent audiovisual condition and the sum of unimodal conditions was used to identify supra-additive (Audiovisual > Visual + Auditory) or sub-additive (Audiovisual < Visual + Auditory) integration effects. The results showed that early evoked sub-additive theta and sustained induced supra-additive delta and beta activities are linked to audiovisual integration of affect regardless of affective content.
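
The supra-/sub-additivity test described above amounts to comparing audiovisual power against the sum of the unimodal powers. A schematic version, under assumed array shapes and an assumed frequency band and time window of interest:

```python
# Sketch of the additivity test on time-frequency power, per participant.
import numpy as np
from scipy.stats import ttest_rel

# power arrays: participants x frequencies x times (already baseline-corrected; placeholders)
n_sub, n_freq, n_time = 20, 30, 200
av  = np.random.randn(n_sub, n_freq, n_time)
vis = np.random.randn(n_sub, n_freq, n_time)
aud = np.random.randn(n_sub, n_freq, n_time)

# Average over a band / window of interest (indices here are illustrative, e.g., theta, 100-300 ms)
roi = (slice(None), slice(4, 8), slice(50, 90))
av_mean  = av[roi].mean(axis=(1, 2))
sum_mean = (vis + aud)[roi].mean(axis=(1, 2))

t, p = ttest_rel(av_mean, sum_mean)   # AV > V+A -> supra-additive; AV < V+A -> sub-additive
print(f"t = {t:.2f}, p = {p:.3f}")
```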


Subjects
Auditory Perception, Visual Perception, Acoustic Stimulation, Electroencephalography, Humans, Photic Stimulation
10.
Neuropsychologia ; 143: 107473, 2020 06.
Article in English | MEDLINE | ID: mdl-32333934

ABSTRACT

Previous studies have shown that affective valence states induced by brief stimulus presentations are identifiable from whole brain activation patterns observed with functional magnetic resonance imaging (fMRI). However, it is unclear whether those results will generalize to identification of continuous changes in affective valence states under naturalistic settings, such as watching a movie. We examined neural representations of signed (positive versus negative) and unsigned (valenced versus non-valenced) valence on previously collected fMRI data from 17 participants who watched a TV show episode in a passive viewing task in the scanner (Chen et al., 2017). These data were correlated with behavioral valence ratings from a separate group of 125 participants. We spatially localized both signed and unsigned valence representations and were able to predict valence ratings for most participants based on the signed valence model in a cross-participant cross-validation procedure. These findings extend previous results from controlled experimental studies to naturalistic settings, demonstrating the ecological validity of prior findings.


Subjects
Brain Mapping, Magnetic Resonance Imaging, Affect, Brain/diagnostic imaging, Emotions, Humans, Motion Pictures
11.
J Cogn Neurosci ; 32(7): 1251-1262, 2020 07.
Article in English | MEDLINE | ID: mdl-32108554

ABSTRACT

Evaluating multisensory emotional content is a part of normal day-to-day interactions. We used fMRI to examine brain areas sensitive to congruence of audiovisual valence and their overlap with areas sensitive to valence. Twenty-one participants watched audiovisual clips with either congruent or incongruent valence across visual and auditory modalities. We showed that affective congruence versus incongruence across visual and auditory modalities is identifiable on a trial-by-trial basis across participants. Representations of affective congruence were widely distributed with some overlap with the areas sensitive to valence. Regions of overlap included bilateral superior temporal cortex and right pregenual anterior cingulate. The overlap between the regions identified here and in the emotion congruence literature lends support to the idea that valence may be a key determinant of affective congruence processing across a variety of discrete emotions.


Subjects
Brain Mapping, Magnetic Resonance Imaging, Brain/diagnostic imaging, Emotions, Humans, Visual Perception
12.
Affect Sci ; 1(4): 237-246, 2020 Dec.
Article in English | MEDLINE | ID: mdl-36042819

ABSTRACT

Hedonic valence describes the pleasantness or unpleasantness of psychological states elicited by stimuli and is conceived as a fundamental building block of emotional experience. Multivariate pattern analysis approaches contribute to the study of valence representation by allowing identification of valence from distributed patterns of activity. However, the issue of construct validity arises in that there is always a possibility that classification results from a single study are driven by factors other than valence, such as the idiosyncrasies of the stimuli. In this work, we identify valence across participants from six different fMRI studies that used auditory, visual, or audiovisual stimuli, thus increasing the likelihood that classification is driven by valence and not by the specifics of the experimental paradigm of a particular study. The studies included a total of 93 participants and differed on stimuli, task, trial duration, number of participants, and scanner parameters. In a leave-one-study-out cross-validation procedure, we trained the classifiers on fMRI data from five studies and predicted valence, positive or negative, for each of the participants in the left-out study. Whole-brain classification demonstrated a reliable distinction between positive and negative valence states (72% accuracy). In a searchlight analysis, the representation of valence was localized to the right postcentral and supramarginal gyri, left superior frontal and middle frontal cortices, and right pregenual anterior cingulate and superior medial frontal cortices. The demonstrated cross-study classification of valence enhances the construct validity and generalizability of the findings from the combined studies.
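
A leave-one-study-out classification scheme of this kind can be sketched with scikit-learn as follows; the feature matrix, labels, and study assignments are placeholders standing in for the whole-brain fMRI patterns from the six studies.

```python
# Sketch of leave-one-study-out valence classification with scikit-learn.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X = np.random.randn(93, 10000)            # participants x voxel features (placeholder)
y = np.random.randint(0, 2, 93)           # 0 = negative, 1 = positive valence (placeholder)
study = np.repeat(np.arange(6), [16, 16, 16, 15, 15, 15])   # hypothetical study membership

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, groups=study, cv=LeaveOneGroupOut())
print("accuracy per held-out study:", np.round(scores, 2))
```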

13.
Neuropsychologia ; 133: 107183, 2019 10.
Article in English | MEDLINE | ID: mdl-31493413

ABSTRACT

Studies on the organization of conceptual knowledge have examined categories of concrete nouns extensively. Less is known about the neural basis of verb categories suggested by linguistic theories. We used functional MRI to examine the differences between manner verbs, which encode information about the manner of an action, versus instrument verbs, which encode information about an object as part of their meaning. Using both visual and verbal stimuli and a combination of univariate and multivariate pattern analyses, our results show that accessing conceptual representations of instrument class involves brain regions typically associated with complex action and object perception, including the anterior inferior parietal cortex and occipito-temporal cortex. On the other hand, accessing conceptual representations of the manner class involves regions that are commonly associated with the processing of visual and biological motion, in the posterior superior temporal sulcus. These findings support the idea that the semantics of manner and instrument verbs are supported by distinct neural mechanisms.


Subjects
Brain/physiology, Concept Formation/physiology, Language, Adult, Brain/diagnostic imaging, Female, Functional Neuroimaging, Humans, Knowledge, Magnetic Resonance Imaging, Male, Occipital Lobe/diagnostic imaging, Occipital Lobe/physiology, Parietal Lobe/diagnostic imaging, Parietal Lobe/physiology, Semantics, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Young Adult
14.
Cortex ; 120: 66-77, 2019 11.
Article in English | MEDLINE | ID: mdl-31255920

ABSTRACT

Our brains can integrate emotional signals from visual and auditory modalities, which is important for our daily social interactions and survival. Although behavioral effects of facilitation or interference of visual and auditory affective signals have been widely demonstrated, the underlying neural substrates remain unclear. We identified brain activation related to audiovisual affective processing at a whole-brain level in healthy adults with a quantitative coordinate-based meta-analysis, combining data from 306 participants across 18 neuroimaging studies. The meta-analysis identified a core audiovisual affective processing network including the right posterior superior temporal gyrus (pSTG/STS), left anterior superior temporal gyrus (aSTG/STS), right amygdala, and thalamus. These results support the involvement of STG/STS but not sensory-specific brain regions in audiovisual affective processing, consistent with the supramodal hypothesis. To further characterize these identified regions with regard to their connectivity and function, we conducted meta-analytic connectivity modeling and automated meta-analyses. Across both analyses, results showed co-activation profiles of the identified brain regions and their associations with emotion and audiovisual processes. These findings revealed the brain basis of audiovisual affective processing and can help guide future research in further examining its neural correlates.


Subjects
Affect, Auditory Perception, Brain/physiology, Social Perception, Visual Perception, Brain/diagnostic imaging, Brain Mapping, Emotions, Humans, Likelihood Functions
15.
Neuropsychologia ; : 102-110, 2019 Jun 05.
Article in English | MEDLINE | ID: mdl-31175884

ABSTRACT

Concrete words have been shown to have a processing advantage over abstract words, yet theoretical accounts and neural correlates underlying the distinction between concrete and abstract concepts are still unresolved. In an fMRI study, participants performed a property verification task on abstract and concrete concepts. Property comparisons of concrete concepts were predominantly based on either visual or haptic features. Multivariate pattern analysis successfully distinguished between abstract and concrete stimulus comparisons at the whole brain level. Multivariate searchlight analyses showed that posterior and middle cingulate cortices contained information that distinguished abstract from concrete concepts regardless of feature dominance. These results support the view that supramodal convergence zones play an important role in representation of concrete and abstract concepts.

16.
Biol Psychol ; 139: 59-72, 2018 11.
Article in English | MEDLINE | ID: mdl-30291876

ABSTRACT

This study used event-related potentials (ERPs) to investigate the time course of auditory, visual, and audiovisual affective processing. Stimuli consisted of naturalistic silent videos, instrumental music clips, or combination of the two, with valence varied at three levels for each modality and arousal matched across valence conditions. Affective ratings of the unimodal and multimodal stimuli showed evidence of visual dominance, congruency, and negativity dominance effects. ERP results for unimodal presentations revealed valence effects in early components for both modalities, but only for the visual condition in a late positive potential. The ERP results for multimodal presentations showed effects for both visual valence and auditory valence in three components, early N200, P300 and LPP. A modeling analysis of the N200 component suggested its role in the visual dominance effect, which was further supported by a correlation between behavioral visual dominance scores and the early ERP components. Significant congruency comparisons were also found for N200 amplitudes, suggesting that congruency effects may occur early. Consistent differences between negative and positive valence were found for both visual and auditory modalities in the P300 at anterior electrode clusters, suggesting a potential source for the negativity dominance effect observed behaviorally. The separation between negative and positive valence also occurred at LPP for the visual modality. Significant auditory valence modulation was found for the LPP, implying an integration effect in which valence sensitivity of the LPP emerged for the audiovisual condition. These results provide a basis for mapping out the temporal dynamics of audiovisual affective processing.


Subjects
Affect/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Evoked Potentials/physiology, Visual Perception/physiology, Adult, Electroencephalography, Female, Humans, Male, Time Factors, Young Adult
17.
Neuropsychologia ; 113: 78-84, 2018 05.
Article in English | MEDLINE | ID: mdl-29588225

ABSTRACT

Previous studies have shown that task sets can be identified from functional magnetic resonance imaging (fMRI) data. However, these results may be partially confounded by differences in stimulus features associated with the different tasks. We disentangled stimulus modality and task features by presenting the same stimulus while varying the task, and conversely, presenting different stimuli using the same task. Analyses were conducted on fMRI data previously collected from twenty participants who made either affective or semantic judgments of the same music pieces or the same silent video clips (Kim et al., 2017). Holding stimuli constant, task set was identified from fMRI data across individuals from both task activation data and functional connectivity data. Thus, we were able to identify whether participants made affective or semantic judgments when exposed to identical stimuli, based on the task activation and functional connectivity data from other participants. Moreover, task set was successfully identified for cross-modal prediction in which stimuli in the training set bore no resemblance to those in the test set (e.g., using video data to predict task for music data). Brain regions that were sensitive to task irrespective of sensory modality were identified by univariate and searchlight analyses of fMRI data. Consistent with a frontal-parietal network, the middle frontal gyrus, inferior parietal gyrus, mid-cingulate cortex, and superior temporal sulcus were found to be key regions distinguishing the two task sets.


Subjects
Affect/physiology, Attention/physiology, Brain/physiology, Neural Pathways/physiology, Semantics, Acoustic Stimulation, Adult, Brain/diagnostic imaging, Brain Mapping, Female, Humans, Computer-Assisted Image Processing, Judgment/physiology, Magnetic Resonance Imaging, Male, Neural Pathways/diagnostic imaging, Oxygen, Photic Stimulation, Reaction Time/physiology, Nonparametric Statistics
18.
Cogn Emot ; 32(3): 516-529, 2018 05.
Article in English | MEDLINE | ID: mdl-28463060

ABSTRACT

Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five parameter differential weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three parameter constant weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
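
The averaging models referred to here follow the general information-integration form in which the combined rating is a weighted average of an initial state and the two modality values. The function below illustrates that general form only; the exact five- and three-parameter specifications (which weights are fixed or free, and how weights vary with valence) are not reproduced here.

```python
# General averaging-rule sketch; parameter values below are purely illustrative.
def averaging_model(video_val, music_val, w_video, w_music, s0, w0=1.0):
    """Predicted combined affect rating as a weighted average of the initial
    state (s0, with weight w0) and the video and music values."""
    num = w0 * s0 + w_video * video_val + w_music * music_val
    den = w0 + w_video + w_music
    return num / den

# Example: a video weighted more heavily than the music clip it is paired with
print(averaging_model(video_val=7.0, music_val=3.0, w_video=2.0, w_music=1.0, s0=5.0))
```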


Subjects
Affect, Auditory Perception, Music/psychology, Visual Perception, Acoustic Stimulation, Adult, Arousal, Female, Humans, Male, Photic Stimulation, Videotape Recording, Young Adult
19.
Neuroimage ; 148: 42-54, 2017 03 01.
Article in English | MEDLINE | ID: mdl-28057489

ABSTRACT

This study tested for neural representations of valence that are shared across visual and auditory modalities, referred to as modality-general representations. On a given trial, participants made either affective or semantic judgments of short silent videos or music samples. For each modality, valence was manipulated at three levels (positive, neutral, and negative) while controlling for the level of arousal. Whole-brain crossmodal identification of affect indicated the presence of modality-general valence representations that distinguished (1) positive from negative trials (signed valence) and (2) valenced from non-valenced trials (unsigned valence). These results generalized across the two tasks. Brain regions that were sensitive to valence states in the same way for both modalities were identified by searchlight analysis of the fMRI data, comparing the correlation of voxel responses to the same and different valence conditions across the two modalities. These analyses identified seven clusters that distinguished signed valence, unsigned valence, or both. Signed valence was represented in the precuneus; unsigned valence in the bilateral medial prefrontal cortex, superior temporal sulcus (STS)/postcentral, and middle frontal gyrus (MFG); and both types in the STS/MFG and thalamus. These results support the idea that modality-general valence is represented in a network of several locations throughout the brain.
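
The searchlight comparison described above, correlating voxel responses to same versus different valence conditions across modalities, can be sketched as a simple index computed within each sphere; the shapes and random patterns below are placeholders for real searchlight data.

```python
# Sketch of the crossmodal same- vs different-valence correlation index within a searchlight sphere.
import numpy as np

def crossmodal_valence_index(visual, auditory):
    """visual, auditory: arrays of shape (3, n_voxels), conditions ordered pos, neu, neg."""
    corr = np.corrcoef(np.vstack([visual, auditory]))[:3, 3:]  # visual x auditory condition correlations
    same = np.mean(np.diag(corr))                              # same-valence pairs
    diff = np.mean(corr[~np.eye(3, dtype=bool)])               # different-valence pairs
    return same - diff   # > 0 suggests a modality-general valence code in this sphere

vis = np.random.randn(3, 33)   # placeholder patterns (e.g., 33 voxels in a sphere)
aud = np.random.randn(3, 33)
print(crossmodal_valence_index(vis, aud))
```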


Subjects
Emotions/physiology, Magnetic Resonance Imaging/methods, Music/psychology, Brain Mapping, Cerebral Cortex/physiology, Cluster Analysis, Female, Humans, Computer-Assisted Image Processing, Male, Multimodal Imaging, Psychomotor Performance/physiology, Young Adult
20.
Cogn Neuropsychol ; 33(3-4): 265-75, 2016.
Article in English | MEDLINE | ID: mdl-27686111

ABSTRACT

The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.


Subjects
Brain Mapping/methods, Magnetic Resonance Imaging/methods, Reading, Adult, Female, Humans, Male, Time Factors