Results 1 - 20 of 27
1.
Elife ; 9, 2020 02 28.
Article in English | MEDLINE | ID: mdl-32108572

ABSTRACT

Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to those found in vision. Sound categories were, however, more reliably encoded in the blind than in the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in blind people represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.


The world is full of rich and dynamic visual information. To avoid information overload, the human brain groups inputs into categories such as faces, houses, or tools. A part of the brain called the ventral occipito-temporal cortex (VOTC) helps categorize visual information. Specific parts of the VOTC prefer different types of visual input; for example, one part may tend to respond more to faces, whilst another may prefer houses. However, it is not clear how the VOTC categorizes information.

One idea is that similarities between certain types of visual information may drive how information is organized in the VOTC. For example, looking at faces requires using central vision, while looking at houses requires using peripheral vision. Furthermore, all faces have a roundish shape while houses tend to have a more rectangular shape. Another possibility, however, is that the categorization of different inputs cannot be explained by vision alone and is also driven by higher-level aspects of each category. For instance, how humans use or interact with something may also influence how an input is categorized. If categories are established depending (at least partially) on these higher-level aspects, rather than purely through visual likeness, the VOTC would likely respond similarly to both sounds and images representing these categories.

Now, Mattioni et al. have tested how individuals with and without sight respond to eight different categories of information to find out whether or not categorization is driven purely by visual likeness. Each category was presented to participants using sounds while their brain activity was measured. In addition, a group of participants who could see were also presented with the categories visually. Mattioni et al. then compared what happened in the VOTC of the three groups (sighted people presented with sounds, blind people presented with sounds, and sighted people presented with images) in response to each category.

The experiment revealed that the VOTC organizes both auditory and visual information in a similar way. However, there were more similarities between the way blind people categorized auditory information and how sighted people categorized visual information than between how sighted people categorized each type of input. Mattioni et al. also found that the region of the VOTC that responds to inanimate objects overlapped extensively across the three groups, whereas the part of the VOTC that responds to living things was more variable.

These findings suggest that the way the VOTC organizes information is, at least partly, independent of vision. The experiments also provide some information about how the brain reorganizes in people who are born blind. Further studies may reveal how differences in the VOTC of people with and without sight affect regions typically associated with auditory categorization, and potentially explain how the brain reorganizes in people who become blind later in life.
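As an illustration of the representational similarity logic described in this abstract, the sketch below builds a dissimilarity matrix from a category-by-voxel response matrix for each group and correlates the two geometries. The response matrices, category count, and voxel count are placeholders, not the study's data or pipeline.

```python
# Sketch: compare VOTC representational structure across two groups/modalities.
# Hypothetical inputs: one (n_categories x n_voxels) response matrix per group.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_categories, n_voxels = 8, 200
votc_blind_auditory = rng.standard_normal((n_categories, n_voxels))   # placeholder data
votc_sighted_visual = rng.standard_normal((n_categories, n_voxels))   # placeholder data

def rdm(patterns):
    """Representational dissimilarity matrix as a condensed vector (1 - Pearson r)."""
    return pdist(patterns, metric="correlation")

# Similarity of the two representational geometries (Spearman over RDM entries).
rho, p = spearmanr(rdm(votc_blind_auditory), rdm(votc_sighted_visual))
print(f"RDM correlation between groups: rho = {rho:.2f}, p = {p:.3f}")
```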


Subject(s)
Auditory Perception, Blindness/physiopathology, Occipital Lobe/physiopathology, Temporal Lobe/physiopathology, Acoustic Stimulation, Case-Control Studies, Humans
2.
Cortex ; 99: 330-345, 2018 02.
Article in English | MEDLINE | ID: mdl-29334647

ABSTRACT

Different contexts require us either to react immediately, or to delay (or suppress) a planned movement. Previous studies that aimed at decoding movement plans typically dissociated movement preparation and execution by means of delayed-movement paradigms. Here we asked whether these results can be generalized to the planning and execution of immediate movements. To directly compare delayed, non-delayed, and suppressed reaching and grasping movements, we used a slow event-related functional magnetic resonance imaging (fMRI) design. To examine how neural representations evolved throughout movement planning, execution, and suppression, we performed time-resolved multivariate pattern analysis (MVPA). During the planning phase, we were able to decode upcoming reaching and grasping movements in contralateral parietal and premotor areas. During the execution phase, we were able to decode movements in a widespread bilateral network of motor, premotor, and somatosensory areas. Moreover, we obtained significant decoding across delayed and non-delayed movement plans in contralateral primary motor cortex. Our results demonstrate the feasibility of time-resolved MVPA and provide new insights into the dynamics of the prehension network, suggesting early neural representations of movement plans in the primary motor cortex that are shared between delayed and non-delayed contexts.
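A minimal sketch of time-resolved cross-condition decoding in the spirit of this abstract: a classifier is trained on delayed-movement plans and tested on non-delayed plans at each time point. Array shapes, labels, and the classifier choice are illustrative assumptions, not the authors' implementation.

```python
# Sketch: time-resolved decoding of grasp vs. reach, training on delayed trials
# and testing on non-delayed trials at each time point (cross-condition MVPA).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 60, 100, 50         # placeholder dimensions
X_delayed = rng.standard_normal((n_trials, n_channels, n_times))
X_immediate = rng.standard_normal((n_trials, n_channels, n_times))
y_delayed = rng.integers(0, 2, n_trials)             # 0 = reach, 1 = grasp
y_immediate = rng.integers(0, 2, n_trials)

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_delayed[:, :, t], y_delayed)            # train on delayed plans
    accuracy[t] = clf.score(X_immediate[:, :, t], y_immediate)  # test on immediate plans
print("peak cross-condition accuracy:", accuracy.max())
```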


Subject(s)
Hand Strength, Motor Cortex/physiology, Movement, Somatosensory Cortex/physiology, Adolescent, Adult, Female, Functional Neuroimaging, Humans, Magnetic Resonance Imaging, Male, Motor Cortex/diagnostic imaging, Multivariate Analysis, Somatosensory Cortex/diagnostic imaging, Time Factors, Young Adult
3.
Proc Natl Acad Sci U S A ; 114(51): 13435-13440, 2017 12 19.
Article in English | MEDLINE | ID: mdl-29203678

ABSTRACT

Incoming sensory input is condensed by our perceptual system to optimally represent and store information. In the temporal domain, this process has been described in terms of temporal windows (TWs) of integration/segregation, in which the phase of ongoing neural oscillations determines whether two stimuli are integrated into a single percept or segregated into separate events. However, TWs can vary substantially, raising the question of whether different TWs map onto unique oscillations or, rather, reflect a single, general fluctuation in cortical excitability (e.g., in the alpha band). We used multivariate decoding of electroencephalography (EEG) data to investigate perception of stimuli that either repeated in the same location (two-flash fusion) or moved in space (apparent motion). By manipulating the interstimulus interval (ISI), we created bistable stimuli that caused subjects to perceive either integration (fusion/apparent motion) or segregation (two unrelated flashes). Training a classifier searchlight on the whole channels/frequencies/times space, we found that the perceptual outcome (integration vs. segregation) could be reliably decoded from the phase of prestimulus oscillations in right parieto-occipital channels. The highest decoding accuracy for the two-flash fusion task (ISI = 40 ms) was evident in the phase of alpha oscillations (8-10 Hz), while the highest decoding accuracy for the apparent motion task (ISI = 120 ms) was evident in the phase of theta oscillations (6-7 Hz). These results reveal a precise relationship between specific TW durations and specific oscillations. Such oscillations at different frequencies may provide a hierarchical framework for the temporal organization of perception.
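The following sketch illustrates one way to decode a perceptual outcome from prestimulus phase, as described above: band-pass filtering, Hilbert phase extraction, and cross-validated classification on cosine/sine phase features. The sampling rate, band limits, and data are placeholder assumptions rather than the study's settings.

```python
# Sketch: classify integration vs. segregation percepts from prestimulus phase.
# Phase at each channel is converted to (cos, sin) features so that a linear
# classifier can operate on circular data. All data here are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
fs = 500                                    # sampling rate (Hz), assumed
n_trials, n_channels, n_times = 120, 64, 500
eeg = rng.standard_normal((n_trials, n_channels, n_times))
percept = rng.integers(0, 2, n_trials)      # 0 = segregation, 1 = integration

# Band-pass 8-10 Hz (alpha), then take the analytic phase at one prestimulus sample.
b, a = butter(3, [8, 10], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg, axis=-1)
phase = np.angle(hilbert(filtered, axis=-1))[:, :, n_times // 2]

features = np.concatenate([np.cos(phase), np.sin(phase)], axis=1)  # trials x (2*channels)
scores = cross_val_score(LogisticRegression(max_iter=1000), features, percept, cv=5)
print("mean decoding accuracy:", scores.mean())
```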


Subject(s)
Alpha Rhythm, Theta Rhythm, Visual Perception, Brain/physiology, Female, Humans, Male, Reaction Time, Young Adult
4.
Sci Rep ; 7(1): 14040, 2017 10 25.
Article in English | MEDLINE | ID: mdl-29070901

ABSTRACT

How do humans recognize humans among other creatures? Recent studies suggest that a preference for conspecifics may emerge already in perceptual processing, in regions such as the right posterior superior temporal sulcus (pSTS), implicated in visual perception of biological motion. In the current functional MRI study, participants viewed point-light displays of human and nonhuman creatures moving in their typical bipedal (man and chicken) or quadrupedal mode (crawling-baby and cat). Stronger activity for man and chicken versus baby and cat was found in the right pSTS responsive to biological motion. The novel effect of pedalism suggests that, if right pSTS contributes to the recognition of conspecifics, it does so by detecting perceptual features (e.g. bipedal motion) that reliably correlate with their appearance. A searchlight multivariate pattern analysis could decode humans and nonhumans across pedalism in the left pSTS and bilateral posterior cingulate cortex. This result implies a categorical human-nonhuman distinction, independent from within-category physical/perceptual variation. Thus, recognizing conspecifics involves visual classification based on perceptual features that most frequently co-occur with humans, such as bipedalism, and retrieval of information that determines category membership above and beyond visual appearance. The current findings show that these processes are at work in separate brain networks.
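A hedged sketch of the cross-decoding logic reported here: a human-versus-nonhuman classifier is trained on bipedal exemplars and tested on quadrupedal exemplars (and vice versa), so that above-chance accuracy cannot rest on pedalism. The pattern matrices and labels are placeholders.

```python
# Sketch: test whether a human/nonhuman distinction generalizes across pedalism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_trials, n_voxels = 40, 150
X_bipedal = rng.standard_normal((n_trials, n_voxels))
X_quadrupedal = rng.standard_normal((n_trials, n_voxels))
y_bipedal = rng.integers(0, 2, n_trials)        # 1 = human (man), 0 = nonhuman (chicken)
y_quadrupedal = rng.integers(0, 2, n_trials)    # 1 = human (baby), 0 = nonhuman (cat)

clf = LogisticRegression(max_iter=1000)
acc_bq = clf.fit(X_bipedal, y_bipedal).score(X_quadrupedal, y_quadrupedal)
acc_qb = clf.fit(X_quadrupedal, y_quadrupedal).score(X_bipedal, y_bipedal)
print("cross-pedalism accuracy:", (acc_bq + acc_qb) / 2)
```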


Subject(s)
Gait, Visual Perception, Adult, Animals, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Motion Perception, Photic Stimulation
5.
Cereb Cortex ; 27(8): 4277-4291, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28591837

ABSTRACT

Humans prioritize different semantic qualities of a complex stimulus depending on their behavioral goals. These semantic features are encoded in distributed neural populations, yet it is unclear how attention might operate across these distributed representations. To address this, we presented participants with naturalistic video clips of animals behaving in their natural environments while the participants attended to either behavior or taxonomy. We used models of representational geometry to investigate how attentional allocation affects the distributed neural representation of animal behavior and taxonomy. Attending to animal behavior transiently increased the discriminability of distributed population codes for observed actions in anterior intraparietal, pericentral, and ventral temporal cortices. Attending to animal taxonomy while viewing the same stimuli increased the discriminability of distributed animal category representations in ventral temporal cortex. For both tasks, attention selectively enhanced the discriminability of response patterns along behaviorally relevant dimensions. These findings suggest that behavioral goals alter how the brain extracts semantic features from the visual world. Attention effectively disentangles population responses for downstream read-out by sculpting representational geometry in late-stage perceptual areas.
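One simple way to express the attentional sharpening of representational geometry described above is to compare category discriminability (between-category minus within-category pattern distance) across the two attention tasks; the sketch below does this with placeholder patterns and labels, not the study's data.

```python
# Sketch: quantify how attention changes the discriminability of a category
# distinction in a region's response patterns.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def discriminability(patterns, labels):
    """Between-class minus within-class mean pattern distance (1 - Pearson r)."""
    d = squareform(pdist(patterns, metric="correlation"))
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    return d[~same].mean() - d[same & off_diag].mean()

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 300
labels = rng.integers(0, 4, n_trials)                        # e.g. four behavior categories
attend_behavior = rng.standard_normal((n_trials, n_voxels))  # patterns under each task
attend_taxonomy = rng.standard_normal((n_trials, n_voxels))

print("attend behavior :", discriminability(attend_behavior, labels))
print("attend taxonomy :", discriminability(attend_taxonomy, labels))
```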


Subject(s)
Attention/physiology, Brain/physiology, Motion Perception/physiology, Semantics, Adult, Brain/diagnostic imaging, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging, Male, Models, Statistical, Neural Pathways/diagnostic imaging, Neural Pathways/physiology, Neuropsychological Tests, Pattern Recognition, Visual/physiology
6.
J Neurosci ; 36(41): 10522-10528, 2016 10 12.
Article in English | MEDLINE | ID: mdl-27733605

ABSTRACT

The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. SIGNIFICANCE STATEMENT: Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments.


Subject(s)
Attention/physiology, Visual Perception/physiology, Adult, Female, Humans, Magnetoencephalography, Male, Occipital Lobe/physiology, Photic Stimulation, Temporal Lobe/physiology, Visual Cortex/physiology, Young Adult
7.
Front Neuroinform ; 10: 27, 2016.
Article in English | MEDLINE | ID: mdl-27499741

ABSTRACT

Recent years have seen an increase in the popularity of multivariate pattern (MVP) analysis of functional magnetic resonance (fMRI) data, and, to a much lesser extent, magneto- and electro-encephalography (M/EEG) data. We present CoSMoMVPA, a lightweight MVPA (MVP analysis) toolbox implemented in the intersection of the Matlab and GNU Octave languages, that treats both fMRI and M/EEG data as first-class citizens. CoSMoMVPA supports all state-of-the-art MVP analysis techniques, including searchlight analyses, classification, correlations, representational similarity analysis, and the time generalization method. These can be used to address both data-driven and hypothesis-driven questions about neural organization and representations, both within and across: space, time, frequency bands, neuroimaging modalities, individuals, and species. It uses a uniform data representation of fMRI data in the volume or on the surface, and of M/EEG data at the sensor and source level. Through various external toolboxes, it directly supports reading and writing a variety of fMRI and M/EEG neuroimaging formats, and, where applicable, can convert between them. As a result, it can be integrated readily in existing pipelines and used with existing preprocessed datasets. CoSMoMVPA overloads the traditional volumetric searchlight concept to support neighborhoods for M/EEG and surface-based fMRI data, which supports localization of multivariate effects of interest across space, time, and frequency dimensions. CoSMoMVPA also provides a generalized approach to multiple comparison correction across these dimensions using Threshold-Free Cluster Enhancement with state-of-the-art clustering and permutation techniques. CoSMoMVPA is highly modular and uses abstractions to provide a uniform interface for a variety of MVP measures. Typical analyses require a few lines of code, making it accessible to beginner users. At the same time, expert programmers can easily extend its functionality. CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA.
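For orientation, the sketch below shows the generic volumetric searchlight idea that such toolboxes implement, written as a plain NumPy/scikit-learn illustration; it deliberately does not use the CoSMoMVPA API, and the volume size, radius, and data are placeholders.

```python
# Sketch of a volumetric searchlight: for every voxel, classify conditions
# using only the voxels within a small sphere around it, and store the
# cross-validated accuracy as an information map.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
shape = (10, 10, 10)                        # tiny placeholder volume
n_trials = 40
data = rng.standard_normal((n_trials,) + shape)
labels = rng.integers(0, 2, n_trials)
radius = 2                                  # searchlight radius, in voxels

grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
coords = grid.reshape(-1, 3)
flat = data.reshape(n_trials, -1)

accuracy_map = np.zeros(coords.shape[0])
for i, center in enumerate(coords):
    neighbors = np.where(np.linalg.norm(coords - center, axis=1) <= radius)[0]
    scores = cross_val_score(LogisticRegression(max_iter=1000),
                             flat[:, neighbors], labels, cv=4)
    accuracy_map[i] = scores.mean()
accuracy_map = accuracy_map.reshape(shape)
print("peak searchlight accuracy:", accuracy_map.max())
```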

8.
Neuroimage ; 136: 197-207, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27173760

ABSTRACT

To be able to interact with our environment, we need to transform incoming sensory information into goal-directed motor outputs. Whereas our ability to plan an appropriate movement based on sensory information appears effortless and simple, the underlying brain dynamics are still largely unknown. Here we used magnetoencephalography (MEG) to investigate this issue by recording brain activity during the planning of non-visually guided reaching and grasping actions, performed with either the left or right hand. Adopting a combination of univariate and multivariate analyses, we revealed specific patterns of beta power modulations underlying varying levels of neural representations during movement planning. (1) Effector-specific modulations were evident as a decrease in power in the beta band. Within both hemispheres, this decrease was stronger while planning a movement with the contralateral hand. (2) The comparison of planned grasping and reaching led to a relative increase in power in the beta band. These power changes were localized within temporal, premotor and posterior parietal cortices. Action-related modulations overlapped with effector-related beta power changes within widespread frontal and parietal regions, suggesting the possible integration of these two types of neural representations. (3) Multivariate analyses of action-specific power changes revealed that part of this broadband beta modulation also contributed to the encoding of an effector-independent neural representation of a planned action within fronto-parietal and temporal regions. Our results suggest that beta band power modulations play a central role in movement planning, within both the dorsal and ventral stream, by coding and integrating different levels of neural representations, ranging from the simple representation of the to-be-moved effector up to an abstract, effector-independent representation of the upcoming action.
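A small sketch of how band-limited beta power can be estimated during a planning window (band-pass filter plus Hilbert envelope), so that effector-related power decreases could be contrasted between left- and right-hand plans. The filter settings, sensor grouping, and data are assumptions, not the study's pipeline.

```python
# Sketch: band-limited beta power during movement planning, to be contrasted
# between left- and right-hand plans per sensor group. Data are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(6)
fs = 1000
n_trials, n_sensors, n_times = 50, 20, 2000
meg_left_hand = rng.standard_normal((n_trials, n_sensors, n_times))
meg_right_hand = rng.standard_normal((n_trials, n_sensors, n_times))

def beta_power(x, low=15, high=30):
    """Mean beta-band power over the planning window (Hilbert envelope squared)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
    return (envelope ** 2).mean(axis=(0, 2))        # average over trials and time

# Effector-specific beta suppression would show as lower power for plans with
# the contralateral hand within a hemisphere's sensor group.
print("power, left-hand plans :", beta_power(meg_left_hand).mean())
print("power, right-hand plans:", beta_power(meg_right_hand).mean())
```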


Subject(s)
Anticipation, Psychological/physiology, Attention/physiology, Beta Rhythm/physiology, Cerebral Cortex/physiology, Movement/physiology, Psychomotor Performance/physiology, Brain Mapping, Female, Goals, Hand/physiology, Humans, Magnetoencephalography, Male, Nerve Net/physiology, Young Adult
9.
J Neurosci ; 35(49): 16034-45, 2015 Dec 09.
Article in English | MEDLINE | ID: mdl-26658857

ABSTRACT

Understanding other people's actions is a fundamental prerequisite for social interactions. Whether action understanding relies on simulating the actions of others in the observers' motor system or on the access to conceptual knowledge stored in nonmotor areas is strongly debated. It has been argued previously that areas that play a crucial role in action understanding should (1) distinguish between different actions, (2) generalize across the ways in which actions are performed (Dinstein et al., 2008; Oosterhof et al., 2013; Caramazza et al., 2014), and (3) have access to action information around the time of action recognition (Hauk et al., 2008). Whereas previous studies focused on the first two criteria, little is known about the dynamics underlying action understanding. We examined which human brain regions are able to distinguish between pointing and grasping, regardless of reach direction (left or right) and effector (left or right hand), using multivariate pattern analysis of magnetoencephalography data. We show that the lateral occipitotemporal cortex (LOTC) has the earliest access to abstract action representations, which coincides with the time point from which there was enough information to allow discriminating between the two actions. By contrast, precentral regions, though recruited early, have access to such abstract representations substantially later. Our results demonstrate that in contrast to the LOTC, the early recruitment of precentral regions does not contain the detailed information that is required to recognize an action. We discuss previous theoretical claims of motor theories and how they are incompatible with our data. SIGNIFICANCE STATEMENT: It is debated whether our ability to understand other people's actions relies on the simulation of actions in the observers' motor system, or is based on access to conceptual knowledge stored in nonmotor areas. Here, using magnetoencephalography in combination with machine learning, we examined where in the brain and at which point in time it is possible to distinguish between pointing and grasping actions regardless of the way in which they are performed (effector, reach direction). We show that, in contrast to the predictions of motor theories of action understanding, the lateral occipitotemporal cortex has access to abstract action representations substantially earlier than precentral regions.


Subject(s)
Concept Formation/physiology, Functional Laterality/physiology, Magnetoencephalography, Occipital Lobe/physiology, Psychomotor Performance/physiology, Temporal Lobe/physiology, Adult, Brain Mapping, Female, Hand Strength, Humans, Male, Multivariate Analysis, Photic Stimulation, Time Factors, Young Adult
10.
J Cogn Neurosci ; 27(4): 665-78, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25269114

ABSTRACT

Major theories for explaining the organization of semantic memory in the human brain are premised on the often-observed dichotomous dissociation between living and nonliving objects. Evidence from neuroimaging has been interpreted to suggest that this distinction is reflected in the functional topography of the ventral vision pathway as lateral-to-medial activation gradients. Recently, we observed that similar activation gradients also reflect differences among living stimuli consistent with the semantic dimension of graded animacy. Here, we address whether the salient dichotomous distinction between living and nonliving objects is actually reflected in observable measured brain activity or whether previous observations of a dichotomous dissociation were the illusory result of stimulus sampling biases. Using fMRI, we measured neural responses while participants viewed 10 animal species with high to low animacy and two inanimate categories. Representational similarity analysis of the activity in ventral vision cortex revealed a main axis of variation with high-animacy species maximally different from artifacts and with the least animate species closest to artifacts. Although the associated functional topography mirrored activation gradients observed for animate-inanimate contrasts, we found no evidence for a dichotomous dissociation. We conclude that a central organizing principle of human object vision corresponds to the graded psychological property of animacy with no clear distinction between living and nonliving stimuli. The lack of evidence for a dichotomous dissociation in the measured brain activity challenges theories based on this premise.
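The sketch below illustrates the idea of a main axis of variation in a representational dissimilarity matrix: multidimensional scaling of the RDM followed by a rank correlation of the first dimension with an animacy ordering. Condition counts, patterns, and the animacy ranking are placeholders, not the study's data.

```python
# Sketch: recover the main axis of an RDM and relate it to a graded animacy ranking.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
from sklearn.manifold import MDS

rng = np.random.default_rng(7)
n_conditions, n_voxels = 12, 400             # e.g. 10 species + 2 inanimate categories
patterns = rng.standard_normal((n_conditions, n_voxels))
animacy_rank = np.arange(n_conditions)       # hypothetical high-to-low animacy ordering

rdm = squareform(pdist(patterns, metric="correlation"))
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(rdm)
main_axis = embedding[:, 0]

rho, p = spearmanr(main_axis, animacy_rank)
print(f"main MDS axis vs. animacy rank: rho = {rho:.2f}")
```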


Subject(s)
Brain Mapping, Optical Illusions/physiology, Pattern Recognition, Visual/physiology, Semantics, Visual Cortex/physiology, Visual Pathways/physiology, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Principal Component Analysis, Reaction Time/physiology, Visual Cortex/blood supply, Visual Pathways/blood supply
11.
Behav Brain Sci ; 37(2): 213-5, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24775171

ABSTRACT

Cook et al. overstate the evidence supporting their associative account of mirror neurons in humans: most studies do not address a key property, namely action specificity that generalizes across the visual and motor domains. Multivariate pattern analysis (MVPA) of neuroimaging data can address this concern, and we illustrate how MVPA can be used to test key predictions of their account.


Subject(s)
Biological Evolution, Brain/physiology, Learning/physiology, Mirror Neurons/physiology, Social Perception, Animals, Humans
12.
Neuroimage ; 88: 69-78, 2014 03.
Article in English | MEDLINE | ID: mdl-24246486

ABSTRACT

Studies investigating the role of oscillatory activity in sensory perception are primarily conducted in the visual domain, while the contribution of oscillatory activity to auditory perception is heavily understudied. The objective of the present study was to investigate macroscopic (EEG) oscillatory brain response patterns that contribute to an auditory (Zwicker tone, ZT) illusion. Three different analysis approaches were chosen: 1) a parametric variation of the ZT illusion intensity via three different notch widths of the ZT-inducing noise; 2) contrasts of high-versus-low-intensity ZT illusion trials, excluding physical stimuli differences; 3) a representational similarity analysis to relate source activity patterns to loudness ratings. Depending on the analysis approach, levels of alpha to beta activity (10-20 Hz) reflected illusion intensity, mainly defined by reduced power levels co-occurring with stronger percepts. Consistent across all analysis approaches, source level analysis implicated auditory cortices as main generators, providing evidence that the activity level in the alpha and beta range - at least in part - contributes to the strength of the illusory auditory percept. This study corroborates the notion that alpha to beta activity in the auditory cortex is linked to functionally similar states, as has been proposed for visual, somatosensory and motor regions. Furthermore, our study provides certain theoretical implications for pathological auditory conscious perception (tinnitus).


Subject(s)
Alpha Rhythm/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Beta Rhythm/physiology, Illusions/physiology, Adult, Female, Humans, Male, Young Adult
13.
Trends Cogn Sci ; 17(7): 311-8, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23746574

ABSTRACT

The notion of a frontoparietal human mirror neuron system (HMNS) has been used to explain a range of social phenomena. However, most human neuroimaging studies of this system do not address critical 'mirror' properties: neural representations should be action specific and should generalise across visual and motor modalities. Studies using repetition suppression (RS) and, particularly, multivariate pattern analysis (MVPA) highlight the contribution to action perception of anterior parietal regions. Further, these studies add to mounting evidence that suggests the lateral occipitotemporal cortex plays a role in the HMNS, but they offer less support for the involvement of the premotor cortex. Neuroimaging, particularly through application of MVPA, has the potential to reveal the properties of the HMNS in further detail, which could challenge prevailing views about its neuroanatomical organisation.


Subject(s)
Brain Mapping, Cerebral Cortex/cytology, Imitative Behavior/physiology, Mirror Neurons/physiology, Motor Activity/physiology, Neuroimaging, Functional Laterality, Humans
14.
Emotion ; 13(4): 724-38, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23627724

ABSTRACT

People rapidly form impressions from facial appearance, and these impressions affect social decisions. We argue that data-driven, computational models are the best available tools for identifying the source of such impressions. Here we validate seven computational models of social judgments of faces: attractiveness, competence, dominance, extroversion, likability, threat, and trustworthiness. The models manipulate both face shape and reflectance (i.e., cues such as pigmentation and skin smoothness). We show that human judgments track the models' predictions (Experiment 1) and that the models differentiate between different judgments, though this differentiation is constrained by the similarity of the models (Experiment 2). We also make the validated stimuli available for academic research: seven databases containing 25 identities manipulated in the respective model to take on seven different dimension values, ranging from -3 SD to +3 SD (175 stimuli in each database). Finally, we show how the computational models can be used to control for the variance shared between models. For example, even for highly correlated dimensions (e.g., dominance and threat), we can identify cues specific to each dimension and, consequently, generate faces that vary only on these cues.
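As a minimal illustration of controlling for variance shared between correlated dimensions, the sketch below regresses one dimension's scores (threat) on another (dominance) and keeps the residuals; the scores are synthetic and the variable names are hypothetical, not the published models' values.

```python
# Sketch: isolating cues specific to one of two correlated face dimensions
# by partialling one dimension out of the other via linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n_faces = 175
dominance = rng.standard_normal(n_faces)
threat = 0.8 * dominance + 0.6 * rng.standard_normal(n_faces)   # correlated dimensions

model = LinearRegression().fit(dominance[:, None], threat)
threat_specific = threat - model.predict(dominance[:, None])    # threat with dominance removed

print("corr(threat, dominance)          :", np.corrcoef(threat, dominance)[0, 1].round(2))
print("corr(threat_specific, dominance) :", np.corrcoef(threat_specific, dominance)[0, 1].round(2))
```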


Subject(s)
Computer Simulation, Face, Models, Psychological, Social Perception, Adult, Cues, Databases, Factual, Facial Expression, Female, Humans, Male, Pattern Recognition, Visual/physiology, Young Adult
15.
Neuroimage ; 63(1): 262-71, 2012 Oct 15.
Article in English | MEDLINE | ID: mdl-22766163

ABSTRACT

An important human capacity is the ability to imagine performing an action, and its consequences, without actually executing it. Here we seek neural representations of specific manual actions that are common across visuo-motor performance and imagery. Participants were scanned with fMRI while they performed and observed themselves performing two different manual actions during some trials, and imagined performing and observing themselves performing the same actions during other trials. We used multi-variate pattern analysis to identify areas where representations of specific actions generalize across imagined and performed actions. The left anterior parietal cortex showed this property. In this region, we also found that activity patterns for imagined actions generalize better to performed actions than vice versa, and we provide simulation results that can explain this asymmetry. The present results are the first demonstration of action-specific representations that are similar irrespective of whether actions are actively performed or covertly imagined. Further, they demonstrate concretely how the apparent cross-modal visuo-motor coding of actions identified in studies of a human "mirror neuron system" could, at least partially, reflect imagery.


Subject(s)
Brain Mapping/methods, Cerebral Cortex/physiology, Imagination/physiology, Magnetic Resonance Imaging/methods, Movement/physiology, Pattern Recognition, Automated/methods, Visual Perception/physiology, Female, Humans, Image Interpretation, Computer-Assisted/methods, Male, Multivariate Analysis, Reproducibility of Results, Sensitivity and Specificity
16.
J Cogn Neurosci ; 24(4): 975-89, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22264198

ABSTRACT

The discovery of mirror neurons-neurons that code specific actions both when executed and observed-in area F5 of the macaque provides a potential neural mechanism underlying action understanding. To date, neuroimaging evidence for similar coding of specific actions across the visual and motor modalities in human ventral premotor cortex (PMv)-the putative homologue of macaque F5-is limited to the case of actions observed from a first-person perspective. However, it is the third-person perspective that figures centrally in our understanding of the actions and intentions of others. To address this gap in the literature, we scanned participants with fMRI while they viewed two actions from either a first- or third-person perspective during some trials and executed the same actions during other trials. Using multivoxel pattern analysis, we found action-specific cross-modal visual-motor representations in PMv for the first-person but not for the third-person perspective. Additional analyses showed no evidence for spatial or attentional differences across the two perspective conditions. In contrast, more posterior areas in the parietal and occipitotemporal cortex did show cross-modal coding regardless of perspective. These findings point to a stronger role for these latter regions, relative to PMv, in supporting the understanding of others' actions with reference to one's own actions.


Subject(s)
Attention/physiology, Brain Mapping, Cerebral Cortex/physiology, Imitative Behavior/physiology, Visual Perception/physiology, Adult, Analysis of Variance, Cerebral Cortex/blood supply, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Motion Perception, Oxygen/blood, Pattern Recognition, Visual, Photic Stimulation, Psychomotor Performance, Young Adult
17.
J Neurophysiol ; 107(2): 628-39, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22013235

ABSTRACT

How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance? We investigated these questions with a delayed discrimination paradigm for faces, bodies, flowers, and scenes and applied both univariate and multivariate analyses to functional magnetic resonance imaging (fMRI) data. Activity during encoding followed the well-known specialization in posterior areas. During the delay interval, activity shifted to frontal and parietal regions but was not specialized for category. Conversely, activity in visual areas returned to baseline during that interval but showed some evidence of category specialization on multivariate pattern analysis (MVPA). We conclude that principles of cortical activation differ between encoding and maintenance of visual material. Whereas perceptual processes rely on specialized regions in occipitotemporal cortex, maintenance involves the activation of a frontoparietal network that seems to require little specialization at the category level. We also confirm previous findings that MVPA can extract information from fMRI signals in the absence of suprathreshold activation and that such signals from visual areas can reflect the material stored in memory.


Subject(s)
Brain Mapping, Brain/physiology, Memory, Short-Term/physiology, Pattern Recognition, Visual/physiology, Adult, Brain/blood supply, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Multivariate Analysis, Neuropsychological Tests, Oxygen/blood, Photic Stimulation, Reaction Time, Serial Learning/physiology, Time Factors, Young Adult
18.
J Neurosci ; 31(29): 10701-11, 2011 Jul 20.
Article in English | MEDLINE | ID: mdl-21775613

ABSTRACT

Motivation improves the efficiency of intentional behavior, but how this performance modulation is instantiated in the human brain remains unclear. We used a reward-cued antisaccade paradigm to investigate how motivational goals (the expectation of a reward for good performance) modulate patterns of neural activation and functional connectivity to improve preparation for antisaccade performance. Behaviorally, subjects performed better (faster and more accurate antisaccades) when they knew they would be rewarded for good performance. Reward anticipation was associated with increased activation in the ventral and dorsal striatum and in cortical oculomotor regions. Functional connectivity between the caudate nucleus and cortical oculomotor control structures predicted individual differences in the behavioral benefit of reward anticipation. We conclude that although both dorsal and ventral striatal circuits are involved in the anticipation of reward, only the dorsal striatum and its connected cortical network are involved in the direct modulation of oculomotor behavior by motivational incentive.
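A brief sketch of the individual-differences analysis described here: correlating a per-subject connectivity estimate with the per-subject behavioral benefit of reward. Both vectors are synthetic placeholders rather than the study's measures.

```python
# Sketch: relate individual differences in caudate-oculomotor coupling to the
# behavioral benefit of reward anticipation (e.g. reward-related speeding).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(9)
n_subjects = 24
connectivity = rng.standard_normal(n_subjects)                       # coupling estimate per subject
rt_benefit = 0.5 * connectivity + rng.standard_normal(n_subjects)    # reward-related benefit per subject

r, p = pearsonr(connectivity, rt_benefit)
print(f"connectivity vs. reward benefit: r = {r:.2f}, p = {p:.3f}")
```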


Subject(s)
Basal Ganglia/physiology, Caudate Nucleus/physiology, Eye Movements/physiology, Motivation/physiology, Neural Pathways/physiology, Analysis of Variance, Attention/physiology, Basal Ganglia/blood supply, Brain Mapping, Caudate Nucleus/blood supply, Cues, Female, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Neural Pathways/blood supply, Oxygen/blood, Photic Stimulation/methods, Reaction Time, Reward, Serial Learning/physiology, Time Factors, Young Adult
19.
J Cogn Neurosci ; 23(10): 2766-81, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21254805

ABSTRACT

In two fMRI experiments (n = 44) using tasks with different demands-approach-avoidance versus one-back recognition decisions-we measured the responses to the social value of faces. The face stimuli were produced by a parametric model of face evaluation that reduces multiple social evaluations to two orthogonal dimensions of valence and power [Oosterhof, N. N., & Todorov, A. The functional basis of face evaluation. Proceedings of the National Academy of Sciences, U.S.A., 105, 11087-11092, 2008]. Independent of the task, the response within regions of the occipital, fusiform, and lateral prefrontal cortices was sensitive to the valence dimension, with larger responses to low-valence faces. Additionally, there were extensive quadratic responses in the fusiform gyri and dorsal amygdala, with larger responses to faces at the extremes of the face valence continuum than faces in the middle. In all these regions, participants' avoidance decisions correlated with brain responses, with faces more likely to be avoided evoking stronger responses. The findings suggest that both explicit and implicit face evaluation engage multiple brain regions involved in attention, affect, and decision making.
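The quadratic response profile described above can be tested with a simple regression that includes linear and quadratic valence terms; the sketch below does this on synthetic valence values and responses, not the study's data.

```python
# Sketch: test for linear and quadratic effects of face valence on a region's response.
import numpy as np

rng = np.random.default_rng(10)
valence = np.linspace(-3, 3, 49)                    # face valence in SD units
response = 0.4 * valence**2 - 0.3 * valence + rng.standard_normal(valence.size) * 0.2

design = np.column_stack([np.ones_like(valence), valence, valence**2])
betas, *_ = np.linalg.lstsq(design, response, rcond=None)
print("intercept, linear, quadratic betas:", np.round(betas, 2))
# A reliably positive quadratic beta corresponds to stronger responses at both
# extremes of the valence continuum than in the middle.
```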


Subject(s)
Brain Mapping, Brain/physiology, Facial Expression, Pattern Recognition, Visual/physiology, Social Values, Adolescent, Analysis of Variance, Brain/blood supply, Decision Making/physiology, Face, Female, Functional Laterality/physiology, Humans, Image Processing, Computer-Assisted, Judgment, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Reaction Time/physiology, Young Adult
20.
Neuroimage ; 56(2): 593-600, 2011 May 15.
Article in English | MEDLINE | ID: mdl-20621701

ABSTRACT

For functional magnetic resonance imaging (fMRI), multi-voxel pattern analysis (MVPA) has been shown to be a sensitive method to detect areas that encode certain stimulus dimensions. By moving a searchlight through the volume of the brain, one can continuously map the information content about the experimental conditions of interest to the brain. Traditionally, the searchlight is defined as a volume sphere that does not take into account the anatomy of the cortical surface. Here we present a method that uses a cortical surface reconstruction to guide voxel selection for information mapping. This approach differs in two important aspects from a volume-based searchlight definition. First, it uses only voxels that are classified as grey matter based on an anatomical scan. Second, it uses a surface-based geodesic distance metric to define neighbourhoods of voxels, and does not select voxels across a sulcus. We study here the influence of these two factors onto classification accuracy and onto the spatial specificity of the resulting information map. In our example data set, participants pressed one of four fingers while undergoing fMRI. We used MVPA to identify regions in which local fMRI patterns can successfully discriminate which finger was moved. We show that surface-based information mapping is a more sensitive measure of local information content, and provides better spatial selectivity. This makes surface-based information mapping a useful technique for a data-driven analysis of information representation in the cerebral cortex.
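The sketch below illustrates the geodesic-neighbourhood idea on a toy triangulated mesh: distances are measured as shortest paths along surface edges, so that vertices (and the voxels assigned to them) on the far side of a sulcus are not selected. The mesh, radius, and graph-based approximation are illustrative assumptions, not the paper's implementation.

```python
# Sketch: select a surface-based searchlight neighbourhood using a geodesic
# distance criterion, approximated as shortest paths along mesh edges.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

# Toy triangulated surface: a small flat grid of vertices.
nx, ny = 10, 10
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
vertices = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(nx * ny)]).astype(float)
faces = []
for i in range(nx - 1):
    for j in range(ny - 1):
        a, b, c, d = i * ny + j, i * ny + j + 1, (i + 1) * ny + j, (i + 1) * ny + j + 1
        faces += [(a, b, c), (b, d, c)]
faces = np.array(faces)

# Build an edge graph weighted by Euclidean edge length (deduplicate shared edges).
edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [0, 2]]])
edges = np.unique(np.sort(edges, axis=1), axis=0)
lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
n = len(vertices)
graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))

center, radius = 45, 3.0
geodesic = dijkstra(graph, directed=False, indices=center)
neighborhood = np.where(geodesic <= radius)[0]
print(f"{len(neighborhood)} vertices within geodesic radius {radius} of vertex {center}")
# A searchlight would then use only the grey-matter voxels assigned to these
# vertices, so voxels across a sulcus are never grouped into one neighbourhood.
```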


Subject(s)
Brain Mapping/methods, Brain/anatomy & histology, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging, Female, Humans, Male, Young Adult