Results 1 - 20 of 22
1.
Neuropsychologia; 199: 108900, 2024 Jul 4.
Article in English | MEDLINE | ID: mdl-38697558

ABSTRACT

Whilst previous research has linked attenuation of the mu rhythm to the observation of specific visual categories, and even to a potential role in action observation via a putative mirror neuron system, much of this work has not considered what specific type of information might be coded in this oscillatory response when triggered via vision. Here, we sought to determine whether the mu rhythm contains content-specific information about the identity of familiar (and also unfamiliar) graspable objects. In the present study, right-handed participants (N = 27) viewed images of both familiar (apple, wine glass) and unfamiliar (cubie, smoothie) graspable objects, whilst performing an orthogonal task at fixation. Multivariate pattern analysis (MVPA) revealed significant decoding of familiar, but not unfamiliar, visual object categories in the mu rhythm response. Thus, simply viewing familiar graspable objects may automatically trigger activation of associated tactile and/or motor properties in sensorimotor areas, reflected in the mu rhythm. In addition, we report significant attenuation in the central beta band for both familiar and unfamiliar visual objects, but not in the mu rhythm. Our findings highlight how analysing two different aspects of the oscillatory response - attenuation and the representation of information content - provides complementary views on the role of the mu rhythm in response to viewing graspable object categories.
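To make the core analysis concrete, here is a minimal, hedged sketch of mu-band decoding in the spirit described above: band-pass EEG epochs in the 8-13 Hz range over central electrodes, take log power per channel, and cross-validate a linear classifier on object category. All data, channel counts, and parameters below are simulated placeholders, not values from the study.

```python
# Minimal sketch of mu-rhythm (8-13 Hz) MVPA decoding over central
# electrodes, using synthetic EEG for illustration. Electrode count,
# epoch length, and sampling rate are placeholder assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times, sfreq = 200, 8, 512, 256  # central sites only
X_raw = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)  # e.g., apple vs. wine glass

# Band-pass each epoch in the mu band, then take log power per channel.
b, a = butter(4, [8, 13], btype="bandpass", fs=sfreq)
mu = filtfilt(b, a, X_raw, axis=-1)
features = np.log((mu ** 2).mean(axis=-1))  # trials x channels

# Cross-validated linear classifier on mu-band power patterns.
scores = cross_val_score(SVC(kernel="linear"), features, y, cv=5)
print(f"mu-band decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```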


Subject(s)
Recognition, Psychology; Humans; Male; Female; Young Adult; Adult; Recognition, Psychology/physiology; Brain Waves/physiology; Electroencephalography; Pattern Recognition, Visual/physiology; Photic Stimulation
2.
Sci Rep; 14(1): 9402, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658575

ABSTRACT

Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience/expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulated probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively), while assessing individual levels of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded in a similar paradigm in the same participants, featuring the previously learned contingencies without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation. Specifically, we observed facilitatory and interference effects when targets were correctly or incorrectly expected, and these effects were also associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post-stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when participants viewed identical images. The latency of peak decoding when participants saw faces was directly associated with individual-level facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
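The time-resolved decoding and peak-latency analysis described above can be sketched as follows: fit a classifier independently at each time point and take the time of maximal cross-validated accuracy. This is an illustrative reconstruction on synthetic data; dimensions, the classifier, and the epoch window are assumptions.

```python
# Hedged sketch of time-resolved decoding (expected vs. unexpected trials)
# and extraction of the peak-decoding latency used to relate neural and
# behavioural effects. Synthetic data; all dimensions are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 160, 64, 120   # e.g., -100 to 500 ms
times = np.linspace(-0.1, 0.5, n_times)
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)               # expected vs. unexpected

# Decode separately at every time point from the spatial pattern.
acc = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
peak_latency = times[acc.argmax()]
print(f"peak decoding {acc.max():.2f} at {peak_latency * 1000:.0f} ms")
```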


Subject(s)
Electroencephalography; Facial Recognition; Humans; Male; Female; Adult; Facial Recognition/physiology; Young Adult; Photic Stimulation; Reaction Time/physiology; Visual Perception/physiology; Face/physiology
3.
Biology (Basel); 12(7), 2023 Jul 20.
Article in English | MEDLINE | ID: mdl-37508451

ABSTRACT

Neurons in the primary visual cortex (V1) receive sensory inputs that describe small, local regions of the visual scene, and cortical feedback inputs from higher visual areas processing the global scene context. Investigating the spatial precision of this visual contextual modulation will contribute to our understanding of the functional role of cortical feedback inputs in perceptual computations. We used human functional magnetic resonance imaging (fMRI) to test the spatial precision of contextual feedback inputs to V1 during natural scene processing. We measured brain activity patterns in the stimulated regions of V1 and in regions that we blocked from direct feedforward input, so that they received information only from non-feedforward (i.e., feedback and lateral) inputs. We measured the spatial precision of contextual feedback signals by generalising brain activity patterns across parametrically displaced versions of identical images using an MVPA cross-classification approach. We found that fMRI activity patterns in cortical feedback signals predicted scene-specific features in V1 with a precision of approximately 4 degrees. The stimulated regions of V1 carried more precise scene information than non-stimulated regions; however, these regions also contained information patterns that generalised up to 4 degrees. This result shows that contextual signals relating to the global scene are fed back to V1 in a similar way whether feedforward inputs are present or absent. Our results are in line with contextual feedback signals from extrastriate areas to V1 describing global scene information and contributing to perceptual computations such as the hierarchical representation of feature boundaries within natural scenes.
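The cross-classification logic used to estimate spatial precision can be illustrated as below: train a scene classifier on patterns evoked at one image position and test it on patterns from parametrically displaced versions, reading precision off the displacement at which generalisation fails. The voxel patterns, displacement grid, and signal decay are all simulated assumptions.

```python
# Illustrative sketch of MVPA cross-classification across spatial
# displacement: a scene classifier trained at 0 degrees is tested on
# displaced versions; falling accuracy indexes spatial precision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials_per_pos, n_voxels = 40, 300
displacements = [0, 2, 4, 6]                       # degrees, assumed grid
scene = rng.integers(0, 2, n_trials_per_pos)       # two scene categories

def patterns(shift):
    """Fake V1 patterns whose scene signal decays with displacement."""
    signal = (scene[:, None] - 0.5) * max(0.0, 1 - shift / 5.0)
    return signal + rng.standard_normal((n_trials_per_pos, n_voxels))

clf = LogisticRegression(max_iter=1000).fit(patterns(0), scene)
for d in displacements:
    acc = clf.score(patterns(d), scene)
    print(f"train at 0 deg, test at {d} deg: accuracy {acc:.2f}")
```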

4.
Cortex; 159: 299-312, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36669447

ABSTRACT

Although humans are considered to be face experts, there is well-established, reliable variation in the degree to which neurotypical individuals are able to learn and recognise faces. While many behavioural studies have characterised these differences, studies that seek to relate the neuronal response to standardised behavioural measures of ability remain relatively scarce, particularly for time-resolved approaches and the early response to face stimuli. In the present study we make use of a relatively recent methodological advance, multivariate pattern analysis (MVPA), to decode the time course of the neural response to faces compared to other object categories (inverted faces, objects). Importantly, for the first time, we directly relate metrics of this decoding, assessed at the individual level, to gold-standard measures of behavioural face processing ability assessed in an independent task. Thirty-nine participants completed the behavioural Cambridge Face Memory Test (CFMT), then viewed images of faces and houses (presented upright and inverted) while their neural activity was measured via electroencephalography. Significant decoding of both face orientation and face category was observed in all individual participants. Decoding of face orientation, a marker of more advanced face processing, was earlier and stronger in participants with higher levels of face expertise, while decoding of face category information was earlier but not stronger for individuals with greater face expertise. Taken together, these results provide a marker of significant differences in the early neuronal response to faces from around 100 ms post-stimulus as a function of behavioural expertise with faces.
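A hedged sketch of the individual-differences step: estimate each participant's decoding-onset latency from their decoding time course and correlate it with an independent behavioural score such as the CFMT. The simulated time courses and the 0.55 onset criterion below are illustrative assumptions, not the study's procedure.

```python
# Sketch: per-participant decoding onset (first time point exceeding an
# assumed criterion) correlated with CFMT score. All data simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subjects, n_times = 39, 120
times = np.linspace(-0.1, 0.5, n_times)

cfmt = rng.uniform(45, 72, n_subjects)             # CFMT scores (max 72)
# Simulated decoding time courses: higher CFMT -> earlier rise.
onset_true = 0.16 - 0.001 * (cfmt - cfmt.mean())
acc = 0.5 + 0.2 / (1 + np.exp(-(times[None, :] - onset_true[:, None]) / 0.02))
acc += rng.normal(0, 0.01, acc.shape)

# Onset = first sample where accuracy exceeds the (assumed) criterion.
onsets = times[np.argmax(acc > 0.55, axis=1)]
r, p = pearsonr(cfmt, onsets)
print(f"CFMT vs. decoding onset: r = {r:.2f}, p = {p:.3f}")
```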


Subject(s)
Facial Recognition; Humans; Facial Recognition/physiology; Electroencephalography; Learning; Orientation, Spatial; Pattern Recognition, Visual/physiology; Photic Stimulation/methods
5.
Cereb Cortex; 33(7): 3621-3635, 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36045002

ABSTRACT

Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g., bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from three categories - hand-object interactions, and the control categories of pure tones and animal vocalizations - while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not of either control category. Crucially, in hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions than for pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities, even to primary sensory areas.
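The ROI decoding described above can be sketched as follows, with a leave-one-run-out cross-validation of a linear classifier on single-trial beta patterns, a common arrangement for event-related fMRI. Voxel counts, run structure, and labels are simulated assumptions.

```python
# Minimal sketch of ROI-based decoding of sound category from S1 voxel
# patterns (hand-object interactions vs. pure tones) with leave-one-run-
# out cross-validation. All values are simulated placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, GroupKFold

rng = np.random.default_rng(4)
n_runs, trials_per_run, n_voxels = 8, 24, 150      # voxels in the S1 ROI
n = n_runs * trials_per_run
X = rng.standard_normal((n, n_voxels))             # beta patterns per trial
y = rng.integers(0, 2, n)                          # hand sounds vs. tones
runs = np.repeat(np.arange(n_runs), trials_per_run)

# Leave-one-run-out CV keeps training and test data from separate runs.
cv = GroupKFold(n_splits=n_runs)
scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=cv, groups=runs)
print(f"S1 decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```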


Subject(s)
Hand; Somatosensory Cortex; Animals; Somatosensory Cortex/diagnostic imaging; Somatosensory Cortex/physiology; Touch/physiology; Neurons/physiology; Magnetic Resonance Imaging; Brain Mapping
6.
Sci Rep; 12(1): 9042, 2022 Jun 5.
Article in English | MEDLINE | ID: mdl-35662252

ABSTRACT

Intelligent manipulation of handheld tools marks a major discontinuity between humans and our closest ancestors. Here we identified neural representations of how tools are typically manipulated within left anterior temporal cortex, by shifting a searchlight classifier through whole-brain real-action fMRI data collected while participants grasped 3D-printed tools in ways considered typical for use (i.e., by their handle). These neural representations were automatically evoked, as task performance did not require semantic processing. Indeed, findings from a behavioural motion-capture experiment confirmed that actions with tools (relative to non-tools) incurred additional processing costs, as would be expected if semantic areas were being automatically engaged. These results substantiate theories of semantic cognition that claim the anterior temporal cortex combines sensorimotor and semantic content for advanced behaviours like tool manipulation.
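A searchlight analysis of this kind can be sketched with nilearn's SearchLight, which sweeps a sphere through the volume and maps local cross-validated accuracy. The tiny synthetic volume, radius, and labels below are placeholder assumptions, not the study's data.

```python
# Hedged sketch of a whole-brain searchlight: a classifier is evaluated
# within spheres centred on every voxel, yielding a local accuracy map.
import numpy as np
import nibabel as nib
from nilearn.decoding import SearchLight
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)
shape, n_scans = (10, 10, 10), 40
data = rng.standard_normal(shape + (n_scans,))
affine = np.eye(4)
fmri_img = nib.Nifti1Image(data, affine)
mask_img = nib.Nifti1Image(np.ones(shape, dtype=np.int8), affine)
y = rng.integers(0, 2, n_scans)          # e.g., tool vs. non-tool trials

sl = SearchLight(mask_img, radius=2.0, n_jobs=1, cv=KFold(n_splits=4))
sl.fit(fmri_img, y)
print("peak local accuracy:", sl.scores_.max())
```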


Subject(s)
Brain Mapping; Magnetic Resonance Imaging; Brain Mapping/methods; Humans; Magnetic Resonance Imaging/methods; Multivariate Analysis; Semantics; Temporal Lobe/diagnostic imaging
7.
Sci Rep; 11(1): 14357, 2021 Jul 13.
Article in English | MEDLINE | ID: mdl-34257357

ABSTRACT

Studies on the low-level visual information underlying pain categorization have led to inconsistent findings. Some show an advantage for low spatial frequencies (SFs), others a preponderance of mid SFs. This study aims to clarify this gap, since the competing results have different theoretical and practical implications, such as the maximum distance at which an observer can categorize pain. We address the question using two complementary methods: a data-driven method with no a priori expectations about the most useful SFs for pain recognition, and a more ecological method that simulates the distance at which stimuli are presented. We reveal a broad range of SFs important for pain recognition, extending from low to relatively high SFs, and show that performance is optimal at short to medium distances (1.2-4.8 m) but declines significantly when mid SFs are no longer available. This study reconciles previous results showing an advantage of low over high SFs when arbitrary cutoffs are used, and above all reveals the prominent role of mid SFs in pain recognition across two complementary experimental tasks.
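The "simulated distance" manipulation can be illustrated as follows: viewing an image from farther away removes the higher object-based spatial frequencies, which a low-pass filter with a distance-scaled cutoff can mimic. The cutoff values and function name below are illustrative assumptions.

```python
# Illustration of simulating viewing distance via SF filtering: the
# retained cycles/image shrink in proportion to distance. Values are
# placeholders, not those of the study.
import numpy as np

def simulate_distance(image, base_cutoff=64.0, base_dist=1.2, dist=4.8):
    """Low-pass an image so its SF content matches a farther viewpoint."""
    cutoff = base_cutoff * base_dist / dist        # cycles/image kept
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    keep = np.sqrt(yy ** 2 + xx ** 2) <= cutoff    # ideal circular low-pass
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * keep)))

face = np.random.default_rng(6).standard_normal((256, 256))  # stand-in image
far = simulate_distance(face, dist=4.8)
print("retained energy fraction:", (far ** 2).sum() / (face ** 2).sum())
```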


Subject(s)
Emotions; Facial Expression; Facial Pain/classification; Facial Pain/diagnosis; Pattern Recognition, Visual; Psychophysics/methods; Adolescent; Adult; Distance Perception; Face; Facial Recognition; Female; Humans; Knowledge; Male; Normal Distribution; Recognition, Psychology; Reproducibility of Results; Young Adult
8.
J Neurosci; 41(24): 5263-5273, 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-33972399

ABSTRACT

Most neuroimaging experiments that investigate how tools and their actions are represented in the brain use visual paradigms in which tools or hands are displayed as 2D images and no real movements are performed. These studies discovered selective visual responses in occipitotemporal and parietal cortices for viewing pictures of hands or tools, which are assumed to reflect action processing, but this assumption has rarely been directly investigated. Here, we examined the responses of independently visually defined category-selective brain areas when participants grasped 3D tools (N = 20; 9 females). Using real-action fMRI and multivoxel pattern analysis, we found that grasp typicality representations (i.e., whether a tool is grasped appropriately for use) were decodable from hand-selective areas in occipitotemporal and parietal cortices, but not from tool-, object-, or body-selective areas, even when partially overlapping. Importantly, these effects were exclusive to actions with tools and absent for biomechanically matched actions with control nontools. In addition, grasp typicality decoding was significantly higher in hand-selective than in tool-selective parietal regions. Notably, grasp typicality representations were automatically evoked even when there was no requirement for tool use and participants were naive to object category (tools vs. nontools). Finding a specificity for typical tool grasping in hand-selective, rather than tool-selective, regions challenges the long-standing assumption that activation for viewing tool images reflects sensorimotor processing linked to tool manipulation. Instead, our results show that typicality representations for tool grasping are automatically evoked in visual regions specialized for representing the human hand, the brain's primary tool for interacting with the world.
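One statistical step implied here, testing whether ROI decoding exceeds chance, can be sketched with a label-permutation test (scikit-learn's permutation_test_score). ROI size, trial counts, and labels are simulated assumptions.

```python
# Sketch of a permutation test for above-chance grasp-typicality decoding
# in a (simulated) hand-selective ROI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(7)
n_trials, n_voxels = 96, 200                  # hand-selective ROI patterns
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)              # typical vs. atypical grasp

score, perm_scores, p = permutation_test_score(
    LinearSVC(max_iter=5000), X, y, cv=5, n_permutations=500, random_state=0
)
print(f"accuracy {score:.2f}, permutation p = {p:.3f}")
```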


Subject(s)
Brain Mapping/methods; Hand/physiology; Imaging, Three-Dimensional/methods; Psychomotor Performance/physiology; Adolescent; Adult; Brain/physiology; Female; Hand Strength/physiology; Humans; Magnetic Resonance Imaging; Male; Young Adult
9.
Neuropsychologia; 142: 107440, 2020 May.
Article in English | MEDLINE | ID: mdl-32179101

ABSTRACT

Face recognition ability is often reported to be a relative strength in Williams syndrome (WS). Yet methodological issues associated with the supporting research, and evidence that atypical face processing mechanisms may drive outcomes 'in the typical range', challenge these simplistic characterisations of this important social ability. Detailed investigations of face processing abilities in WS, at both the behavioural and neural levels, provide critical insights. Here, we behaviourally characterised face recognition ability in 18 individuals with WS in comparison to typically developing child and adult control groups. A subset of 11 participants with WS, as well as chronologically age-matched typical adults, further took part in an EEG task in which they attentively viewed a series of upright and inverted faces and houses. State-of-the-art multivariate pattern analysis (MVPA) was used alongside standard ERP analysis to obtain a detailed characterisation of the neural profile associated with 1) viewing faces as an overall category (by examining neural activity associated with upright faces and houses), and 2) the canonical upright configuration of a face, critically associated with expertise in typical development and often linked with holistic processing (upright vs. inverted faces). Our results show that while face recognition ability is not, on average, at a chronological-age-appropriate level in individuals with WS, it nonetheless appears to be a relative strength within their cognitive profile. Furthermore, all participants with WS showed a differential pattern of neural activity to faces compared to objects, revealing a distinct response to faces as a category, as well as a differential neural pattern for upright vs. inverted faces. Nonetheless, an atypical profile of face orientation classification was found in WS, suggesting that this group differs from typical individuals in their face processing mechanisms. Through this innovative application of MVPA, alongside the high temporal resolution of EEG, we provide important new insights into the neural processing of faces in WS.


Subject(s)
Facial Recognition; Williams Syndrome; Adult; Child; Electroencephalography; Evoked Potentials; Humans; Orientation; Orientation, Spatial; Pattern Recognition, Visual; Photic Stimulation
10.
Neuroimage; 211: 116660, 2020 May 1.
Article in English | MEDLINE | ID: mdl-32081784

ABSTRACT

Rapidly and accurately processing information from faces is a critical human function that is known to improve with developmental age. Understanding the underlying drivers of this improvement remains a contentious question, with debate continuing as to the presence of early vs. late maturation of face-processing mechanisms. Recent behavioural evidence suggests that an important 'hallmark' of expert face processing - the face inversion effect - is present in very young children, yet neural support for this remains unclear. To address this, we conducted a detailed investigation of the neural dynamics of face processing in children spanning a range of ages (6-11 years) and in adults. Uniquely, we applied multivariate pattern analysis (MVPA) to the electroencephalogram (EEG) signal to test for the presence of a distinct neural profile associated with canonical upright faces when compared both to other objects (houses) and to inverted faces. Results revealed robust discrimination profiles, at the individual level, of differentiated neural activity associated with broad face categorization, and further with its expert processing as indexed by the face inversion effect, from the youngest ages tested. This result is consistent with early functional maturation of broad face processing mechanisms. Yet clear quantitative differences between the response profiles of children and adults suggest age-related refinement of this system with developing face and general expertise. Standard ERP analysis also provides some support for qualitative differences in the neural response to inverted faces in children as compared to adults. This neural profile is in line with recent behavioural studies that have reported impressively expert face abilities early in childhood, while also providing novel evidence of ongoing neural specialisation between childhood and adulthood.


Subject(s)
Child Development/physiology; Electroencephalography/methods; Evoked Potentials/physiology; Facial Recognition/physiology; Social Perception; Adult; Child; Female; Humans; Male; Young Adult
11.
Neuroimage; 195: 261-271, 2019 Jul 15.
Article in English | MEDLINE | ID: mdl-30940611

ABSTRACT

Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for the perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are perceived under explicit (e.g., decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g., decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; under incidental conditions, however, only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time windows. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a rich degree even under incidental processing conditions, consistent with prior work indicating the relative automaticity with which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
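The error-correlation analysis described above can be sketched by building a confusion matrix from classifier predictions and another from behavioural responses, then correlating their off-diagonal (error) cells. Labels, accuracies, and data below are simulated placeholders.

```python
# Sketch of relating EEG decoding errors to behavioural errors via
# correlation of confusion-matrix error cells.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(8)
n_classes, n_trials = 5, 500                     # e.g., 5 expressions
true = rng.integers(0, n_classes, n_trials)
eeg_pred = np.where(rng.random(n_trials) < 0.6, true,
                    rng.integers(0, n_classes, n_trials))
beh_pred = np.where(rng.random(n_trials) < 0.7, true,
                    rng.integers(0, n_classes, n_trials))

c_eeg = confusion_matrix(true, eeg_pred, normalize="true")
c_beh = confusion_matrix(true, beh_pred, normalize="true")
off = ~np.eye(n_classes, dtype=bool)             # compare error cells only
rho, p = spearmanr(c_eeg[off], c_beh[off])
print(f"EEG-behaviour error correlation: rho = {rho:.2f}, p = {p:.3f}")
```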


Subject(s)
Brain/physiology; Emotions; Facial Expression; Facial Recognition/physiology; Adolescent; Adult; Electroencephalography; Female; Humans; Male; Support Vector Machine; Young Adult
12.
PLoS One; 13(5): e0197160, 2018.
Article in English | MEDLINE | ID: mdl-29847562

ABSTRACT

Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has seen only limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigated facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best-recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also detected well - indeed, fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.


Subject(s)
Facial Recognition/physiology; Fear; Happiness; Vision, Ocular/physiology; Adult; Facial Expression; Female; Humans; Male; Pattern Recognition, Visual; Recognition, Psychology
13.
Cortex; 101: 31-43, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29414459

ABSTRACT

A network of cortical and subcortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs. mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions (dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex) enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs. 'eyes removed'). Furthermore, classification performance correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher-level (e.g., STS, dPFC) and lower-level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion.


Subject(s)
Brain Mapping; Cognition/physiology; Emotions; Facial Expression; Facial Recognition/physiology; Recognition, Psychology/physiology; Analysis of Variance; Face/physiology; Female; Humans; Image Processing, Computer-Assisted/methods; Linear Models; Magnetic Resonance Imaging/methods; Male; Mental Status and Dementia Tests; Temporal Lobe/physiology; Visual Cortex/physiology
14.
Curr Biol; 25(20): 2690-5, 2015 Oct 19.
Article in English | MEDLINE | ID: mdl-26441356

ABSTRACT

Neuronal cortical circuitry comprises feedforward, lateral, and feedback projections, each of which terminates in distinct cortical layers [1-3]. In sensory systems, feedforward processing transmits signals from the external world into the cortex, whereas feedback pathways signal the brain's inference of the world [4-11]. However, the integration of feedforward, lateral, and feedback inputs within each cortical area impedes the investigation of feedback, and to date, no technique has isolated the feedback of visual scene information in distinct layers of healthy human cortex. We masked feedforward input to a region of V1 cortex and studied the remaining internal processing. Using high-resolution functional brain imaging (0.8 mm³) and multivoxel pattern information techniques, we demonstrate that during normal visual stimulation, scene information peaks in mid-layers. Conversely, we found that contextual feedback information peaks in outer, superficial layers. Further, we found that shifting the position of the visual scene surrounding the mask parametrically modulates feedback in superficial layers of V1. Our results reveal the layered cortical organization of external versus internal visual processing streams during perception in healthy human subjects. We provide empirical support for theoretical feedback models such as predictive coding [10, 12] and coherent infomax [13] and reveal the potential of high-resolution fMRI to access internal processing in sub-millimeter human cortex.
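One way to picture an information-by-layer analysis of this kind is sketched below: voxels are binned by a (precomputed, here simulated) cortical-depth estimate and decoding accuracy is evaluated within each depth compartment. This is an illustrative reconstruction, not the study's pipeline.

```python
# Hedged sketch of a laminar information profile: decoding accuracy as a
# function of (simulated) cortical depth bin.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
n_trials, n_voxels = 120, 600
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)             # e.g., two surround positions
depth = rng.uniform(0, 1, n_voxels)          # 0 = white matter, 1 = pial

bins = {"deep": (0.0, 0.33), "middle": (0.33, 0.66),
        "superficial": (0.66, 1.01)}
for name, (lo, hi) in bins.items():
    sel = (depth >= lo) & (depth < hi)
    acc = cross_val_score(LinearSVC(max_iter=5000), X[:, sel], y, cv=5).mean()
    print(f"{name:11s} layers: decoding accuracy {acc:.2f}")
```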


Subject(s)
Feedback, Physiological; Visual Cortex/physiology; Visual Pathways; Humans; Magnetic Resonance Imaging; Photic Stimulation
15.
Cereb Cortex; 25(4): 1020-31, 2015 Apr.
Article in English | MEDLINE | ID: mdl-24122136

ABSTRACT

Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in two fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.


Subject(s)
Magnetic Resonance Imaging/methods; Pattern Recognition, Visual/physiology; Signal Processing, Computer-Assisted; Somatosensory Cortex/physiology; Brain Mapping; Discrimination, Psychological/physiology; Female; Humans; Male; Multivariate Analysis; Neuropsychological Tests
16.
Curr Biol; 24(11): 1256-62, 2014 Jun 2.
Article in English | MEDLINE | ID: mdl-24856208

ABSTRACT

Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast, and spatial frequency via feedforward input from the lateral geniculate nucleus (e.g., [1]). However, the role of nonretinal influences on early visual cortex has so far been insufficiently investigated, despite much evidence that feedback connections greatly outnumber feedforward connections [2-5]. Here, we explored in five fMRI experiments how information originating from audition and imagery affects brain activity patterns in early visual cortex in the absence of any feedforward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of nonretinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multisensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but is affected by orthogonal, cognitively demanding visuospatial processing. Crucially, the information fed down to early visual cortex is category-specific and generalizes to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives nonretinal input from other brain areas when it is generated by auditory perception and/or imagery, and that this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level (e.g., [6]), in line with predictive coding models [7-10].
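The cross-condition generalisation at the heart of this claim, a common code across hearing and imagery, can be sketched by training a sound-category classifier on auditory-perception trials and testing it on imagery trials. All patterns, noise levels, and trial counts below are simulated assumptions.

```python
# Sketch of cross-decoding between conditions: train on hearing, test on
# imagery; above-chance transfer indicates a shared category code.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(11)
n_per_cond, n_voxels = 90, 400               # V1/V2 voxel patterns
y_hear = rng.integers(0, 3, n_per_cond)      # three sound categories
y_imag = rng.integers(0, 3, n_per_cond)

signal = rng.standard_normal((3, n_voxels))  # shared category code
X_hear = signal[y_hear] + rng.standard_normal((n_per_cond, n_voxels))
X_imag = signal[y_imag] + 1.5 * rng.standard_normal((n_per_cond, n_voxels))

clf = LinearSVC(max_iter=5000).fit(X_hear, y_hear)
print(f"train on hearing, test on imagery: {clf.score(X_imag, y_imag):.2f} "
      "(chance = 0.33)")
```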


Subject(s)
Auditory Perception; Visual Cortex/physiology; Visual Perception; Acoustic Stimulation; Humans; Magnetic Resonance Imaging; Memory, Short-Term; Photic Stimulation
17.
Behav Brain Sci; 36(3): 221, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23663531

ABSTRACT

Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
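As a toy worked example of the prediction-machine idea, the sketch below runs one layer of error-driven inference: a generative weight matrix predicts the input, and the prediction error iteratively updates the internal estimate. This is schematic; real predictive coding models are hierarchical and far richer.

```python
# Toy one-layer predictive coding loop: prediction, prediction error,
# and error-driven update of the internal estimate. Constants arbitrary.
import numpy as np

rng = np.random.default_rng(9)
W = rng.standard_normal((16, 4))          # generative model: causes -> input
true_cause = rng.standard_normal(4)
x = W @ true_cause + 0.05 * rng.standard_normal(16)   # sensory input

estimate = np.zeros(4)
for step in range(200):
    prediction = W @ estimate             # top-down prediction of the input
    error = x - prediction                # bottom-up prediction error
    estimate += 0.05 * W.T @ error        # error-driven inference update
print("residual error:", np.linalg.norm(x - W @ estimate))
```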


Subject(s)
Attention/physiology; Brain/physiology; Cognition/physiology; Cognitive Science/trends; Perception/physiology; Humans
18.
Eur J Neurosci; 37(7): 1130-9, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23373719

ABSTRACT

Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear whether V1 is modulated by top-down influences during face discrimination, and whether any such modulation is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements - the eyes and mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from 'eye' and 'mouth' regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local 'diagnostic' and widespread 'non-diagnostic' cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical areas (i.e., fusiform face area, occipital face area) or subcortical areas (amygdala).


Subject(s)
Facial Expression; Form Perception; Visual Cortex/physiology; Adult; Brain Mapping; Eye; Female; Humans; Magnetic Resonance Imaging; Male; Models, Neurological; Mouth
19.
J Neurosci; 31(47): 17149-68, 2011 Nov 23.
Article in English | MEDLINE | ID: mdl-22114283

ABSTRACT

Our present understanding of the neural mechanisms and sensorimotor transformations that govern the planning of arm and eye movements comes predominantly from invasive parieto-frontal neural recordings in nonhuman primates. While functional MRI (fMRI) has motivated investigations of many of these same issues in humans, the highly distributed and multiplexed organization of parieto-frontal neurons necessarily constrains the types of intention-related signals that can be detected with traditional fMRI analysis techniques. Here we employed multivoxel pattern analysis (MVPA), a multivariate technique sensitive to spatially distributed fMRI patterns, to provide a more detailed understanding of how hand and eye movement plans are coded in human parieto-frontal cortex. Subjects performed an event-related delayed movement task requiring that a reach or a saccade be planned and executed toward one of two spatial target positions. We show with MVPA that, even in the absence of signal amplitude differences, the fMRI spatial activity patterns preceding movement onset are predictive of upcoming reaches and saccades and their intended directions. Within certain parieto-frontal regions, we show that these predictive activity patterns reflect a similar spatial target representation for the hand and eye. Within some of the same regions, we further demonstrate that these preparatory spatial signals can be discriminated from nonspatial, effector-specific signals. In contrast to the largely graded effector- and direction-related planning responses found with fMRI subtraction methods, these results reveal considerable consensus with the parieto-frontal network organization suggested by primate neurophysiology and specifically show how predictive spatial and nonspatial movement information coexists within single human parieto-frontal areas.
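The planning-decoding logic can be sketched as two classification problems on delay-period patterns, upcoming effector and upcoming direction, decoded before any movement occurs. Patterns and labels below are simulated placeholders.

```python
# Sketch: classify upcoming effector (reach vs. saccade) and target
# direction (left vs. right) from pre-movement fMRI patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
n_trials, n_voxels = 128, 250                # delay-period ROI patterns
X = rng.standard_normal((n_trials, n_voxels))
effector = rng.integers(0, 2, n_trials)      # reach vs. saccade
direction = rng.integers(0, 2, n_trials)     # left vs. right target

for name, labels in [("effector", effector), ("direction", direction)]:
    acc = cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean()
    print(f"pre-movement {name} decoding: {acc:.2f} (chance = 0.50)")
```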


Subject(s)
Brain Mapping/methods; Frontal Lobe/physiology; Movement/physiology; Parietal Lobe/physiology; Psychomotor Performance/physiology; Saccades/physiology; Female; Humans; Male; Photic Stimulation/methods; Young Adult
20.
Proc Natl Acad Sci U S A; 107(46): 20099-103, 2010 Nov 16.
Article in English | MEDLINE | ID: mdl-21041652

ABSTRACT

Even within the early sensory areas, the majority of the input to any given cortical neuron comes from other cortical neurons. To extend our knowledge of the contextual information that is transmitted by such lateral and feedback connections, we investigated how visually nonstimulated regions in primary visual cortex (V1) and visual area V2 are influenced by the surrounding context. We used functional magnetic resonance imaging (fMRI) and pattern-classification methods to show that the cortical representation of a nonstimulated quarter-field carries information that can discriminate the surrounding visual context. We show further that the activity patterns in these regions are significantly related to those observed with feedforward stimulation and that these effects are driven primarily by V1. These results thus demonstrate that visual context strongly influences early visual areas even in the absence of differential feedforward thalamic stimulation.


Subject(s)
Pattern Recognition, Visual/physiology; Photic Stimulation; Visual Cortex/physiology; Algorithms; Brain Mapping; Discriminant Analysis; Humans