1.
J Cogn Neurosci ; : 1-13, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38527075

ABSTRACT

Humans recognize the facial expressions of others rapidly and effortlessly. Although much is known about how we perceive expressions, the role of facial experience in shaping this remarkable ability remains unclear. Is our perception of expressions linked to how we ourselves make facial expressions? Are we better at recognizing others' facial expressions if we are experts at making the same expressions ourselves? And if we could not make facial expressions at all, would it impact our ability to recognize others' facial expressions? The current article examines these questions by explicating the link between facial experience and facial expression recognition. It includes a comprehensive appraisal of the related literature and examines three main theories that posit a connection between making and recognizing facial expressions. First, recent studies in individuals with Moebius syndrome support the role of facial ability (i.e., the ability to move one's face to make facial expressions) in facial expression recognition. Second, motor simulation theory suggests that humans recognize others' facial expressions by covertly mimicking the observed expression (without overt motor action) and that this facial mimicry helps us identify and feel the associated emotion. Finally, the facial feedback hypothesis provides a framework for enhanced emotional experience via proprioceptive feedback from facial muscles when mimicking a viewed facial expression. Evidence for and against these theories is presented, along with considerations and outstanding questions for future research on the role of facial experience in facial expression perception.

2.
J Cogn Neurosci ; : 1-17, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38261366

ABSTRACT

For primates, expressions of fear are thought to be powerful social signals. In laboratory settings, faces with fearful expressions have reliably evoked valence effects in inferior temporal cortex. However, because macaques use so-called "fear grins" in a variety of different contexts, the deeper question is whether the macaque inferior temporal cortex is tuned to the prototypical fear grin or to conspecifics signaling fear. In this study, we combined neuroimaging with the results of a behavioral task to investigate how macaques encode a wide variety of fearful facial expressions. In Experiment 1, we identified two sets of macaque face stimuli using different approaches: we selected faces based on the emotional context (i.e., calm vs. fearful), and we selected faces based on the engagement of action units (i.e., neutral vs. fear grins). We also included human faces in Experiment 1. Then, using fMRI, we found that the faces selected based on context elicited a larger valence effect in the inferior temporal cortex than faces selected based on visual appearance. Furthermore, human facial expressions elicited only weak valence effects. These observations were further supported by the results of a two-alternative forced-choice task (Experiment 2), suggesting that fear grins vary in their perceived pleasantness. Collectively, these findings indicate that the macaque inferior temporal cortex is more involved in social intelligence than commonly assumed, encoding emergent properties in naturalistic face stimuli that transcend basic visual features. These results demand a rethinking of theories surrounding the function and operationalization of primate inferior temporal cortex.

3.
Emotion ; 24(4): 1109-1124, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38127536

ABSTRACT

Emotional expressions are an evolutionarily conserved means of social communication essential for social interactions. It is important to understand how anxious individuals perceive their social environments, including emotional expressions, especially given the rising prevalence of anxiety during the COVID-19 pandemic. Anxiety is often associated with an attentional bias for threat-related stimuli, such as angry faces. Yet the mechanisms by which anxiety enhances or impairs two key components of spatial attention to emotional expressions, attentional capture and attentional disengagement, are still unclear. Moreover, positive valence is often ignored in studies of threat-related attention and anxiety, despite the high occurrence of happy faces during everyday social interaction. Here, we investigated the relationship between anxiety, emotional valence, and spatial attention in 574 participants across two preregistered studies (data collected in 2021 and 2022; Experiment 1: n = 154, 54.5% male, mean age = 43.5 years; Experiment 2: n = 420, 58% male, mean age = 36.46 years). In the visual search experiment, happy faces captured attention more quickly than angry faces; in the spatial cueing experiment, disengagement from both angry and happy faces was delayed relative to neutral faces. We also show that anxiety has a distinct impact on both attentional capture by and disengagement from emotional faces. Together, our findings highlight the role of positively valenced stimuli in attracting and holding attention and suggest that anxiety is a critical factor in modulating spatial attention to emotional stimuli.
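As a rough illustration of how these two attentional components are typically indexed, the sketch below derives capture and disengagement scores from trial-level reaction times. The column names, condition structure, and simulated data are assumptions for illustration, not the study's actual pipeline.

```python
# A minimal sketch (not the authors' analysis code) of RT-based indices of
# attentional capture and disengagement. All values are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
trials = pd.DataFrame({
    "task": rng.choice(["visual_search", "spatial_cueing"], 400),
    "face": rng.choice(["angry", "happy", "neutral"], 400),
    "rt_ms": rng.normal(550, 60, 400),
})

# Capture (visual search): faster search times indicate stronger capture.
search = trials[trials.task == "visual_search"].groupby("face")["rt_ms"].mean()
capture_happy = search["neutral"] - search["happy"]   # positive -> happy faces capture faster

# Disengagement (spatial cueing): slower responses when attention must leave an emotional cue.
cueing = trials[trials.task == "spatial_cueing"].groupby("face")["rt_ms"].mean()
disengage_angry = cueing["angry"] - cueing["neutral"]  # positive -> delayed disengagement

print(round(capture_happy, 1), round(disengage_angry, 1))
```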


Subject(s)
Anxiety , Emotions , Facial Expression , Humans , Male , Female , Adult , Emotions/physiology , Attentional Bias/physiology , COVID-19/psychology , Anger/physiology , Attention/physiology , Young Adult , Happiness , Middle Aged , Space Perception/physiology
4.
Cortex ; 169: 35-49, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37852041

ABSTRACT

Humans rely heavily on facial expressions for social communication, to convey their thoughts and emotions and to understand them in others. One prominent but controversial view is that humans learn to recognize the significance of facial expressions by mimicking the expressions of others. This view predicts that an inability to make facial expressions (e.g., facial paralysis) would result in reduced perceptual sensitivity to others' facial expressions. To test this hypothesis, we developed a diverse battery of sensitive emotion recognition tasks to characterize expression perception in individuals with Moebius Syndrome (MBS), a congenital neurological disorder that causes facial palsy. Using computer-based detection tasks, we systematically assessed expression perception thresholds for static and dynamic face and body expressions. We found that while MBS individuals were able to perform challenging perceptual control tasks and body expression tasks, they were less efficient than matched controls at extracting emotion from facial expressions. Exploratory analyses of fMRI data from a small group of MBS participants suggested reduced engagement of the amygdala during expression processing relative to matched controls. Collectively, these results suggest a role for facial mimicry, and the consequent facial feedback and motor experience, in the perception of others' facial expressions.
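To make the notion of a perception threshold concrete, here is a minimal sketch of fitting a psychometric function to detection data, assuming a cumulative-Gaussian form; the data points and fitting choices are illustrative, not the authors' procedure.

```python
# A hedged sketch of threshold estimation from a detection task.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(intensity, mu, sigma, lapse=0.02):
    """Probability of detecting the expression as a function of signal intensity."""
    return lapse + (1 - 2 * lapse) * norm.cdf(intensity, loc=mu, scale=sigma)

# Hypothetical proportion-detected data across expression-intensity levels.
levels = np.linspace(0.05, 1.0, 8)
p_detect = np.array([0.05, 0.08, 0.20, 0.45, 0.70, 0.88, 0.95, 0.97])

(mu, sigma), _ = curve_fit(psychometric, levels, p_detect, p0=[0.5, 0.2])
print(f"estimated threshold (50% point): {mu:.2f}")
```

A group difference would then appear as a higher fitted threshold (mu) for one group than the other on the same task.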


Subject(s)
Facial Paralysis , Facial Recognition , Mobius Syndrome , Humans , Facial Expression , Emotions , Mobius Syndrome/complications , Facial Paralysis/etiology , Facial Paralysis/psychology , Perception , Social Perception
5.
Sci Rep ; 13(1): 5383, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37012369

ABSTRACT

Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as 'happy' and 'angry'). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD): a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, in comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high-arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and the real world in studies of expression recognition.
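One simple way to quantify the response variability described above is per-image agreement: the modal label and the entropy of the label distribution. The sketch below is an illustrative assumption about how this could be computed, not the WFD analysis code.

```python
# A minimal sketch of per-image categorization variability.
import numpy as np
from collections import Counter

def response_stats(labels):
    """Return the modal emotion label and the entropy (bits) of the label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    entropy = -np.sum(probs * np.log2(probs))
    return counts.most_common(1)[0][0], entropy

# Hypothetical ratings of one image from 20 observers.
ratings = ["happy"] * 9 + ["surprised"] * 6 + ["neutral"] * 5
modal, h = response_stats(ratings)
print(modal, round(h, 2))  # higher entropy -> less categorical agreement
```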


Subject(s)
Anger , Emotions , Humans , Happiness , Facial Expression , Arousal
6.
Neuroimage ; 273: 120067, 2023 06.
Article in English | MEDLINE | ID: mdl-36997134

ABSTRACT

Both the primate visual system and artificial deep neural network (DNN) models show an extraordinary ability to simultaneously classify facial expression and identity. However, the neural computations underlying the two systems are unclear. Here, we developed a multi-task DNN model that optimally classified both monkey facial expressions and identities. By comparing the fMRI neural representations of the macaque visual cortex with the best-performing DNN model, we found that both systems: (1) share initial stages for processing low-level face features, which segregate into separate branches at later stages for processing facial expression and identity, respectively; and (2) gain more specificity for processing either facial expression or identity as one progresses along each branch toward higher stages. Correspondence analysis between the DNN and monkey visual areas revealed that the amygdala and anterior fundus face patch (AF) matched well with later layers of the DNN's facial expression branch, while the anterior medial face patch (AM) matched well with later layers of the DNN's facial identity branch. Our results highlight the anatomical and functional similarities between the macaque visual system and the DNN model, suggesting a common mechanism between the two systems.
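The shared-trunk, two-branch organization described here maps naturally onto a multi-task architecture. Below is a schematic PyTorch sketch of that idea; the layer sizes, depths, and class counts are illustrative assumptions, not the published model.

```python
# A schematic multi-task network: a shared trunk for low-level face features
# that branches into expression and identity heads.
import torch
import torch.nn as nn

class MultiTaskFaceNet(nn.Module):
    def __init__(self, n_expressions=4, n_identities=8):
        super().__init__()
        # Shared early stages: low-level face features.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Later stages segregate into task-specific branches.
        def branch(n_out):
            return nn.Sequential(
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_out),
            )
        self.expression_head = branch(n_expressions)
        self.identity_head = branch(n_identities)

    def forward(self, x):
        shared = self.trunk(x)
        return self.expression_head(shared), self.identity_head(shared)

model = MultiTaskFaceNet()
expr_logits, id_logits = model(torch.randn(2, 3, 112, 112))
```

Training such a model typically minimizes the sum of the two branch losses, so the trunk is shaped by both tasks while each head specializes, mirroring the shared-then-segregated organization reported above.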


Subject(s)
Facial Expression , Macaca , Animals , Neural Networks, Computer , Primates , Magnetic Resonance Imaging/methods , Pattern Recognition, Visual
7.
Sci Adv ; 8(47): eadd6865, 2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36427322

ABSTRACT

Body language is a powerful tool that we use to communicate how we feel, but it is unclear whether other primates also communicate in this way. Here, we use functional magnetic resonance imaging to show that the body-selective patches in macaques are activated by affective body language. Unexpectedly, we found these regions to be tolerant of naturalistic variation in posture as well as species; the bodies of macaques, humans, and domestic cats all evoked a stronger response when they conveyed fear than when they conveyed no affect. Multivariate analyses confirmed that the neural representation of fear-related body expressions was species-invariant. Collectively, these findings demonstrate that, like humans, macaques have body-selective brain regions in the ventral visual pathway for processing affective body language. These data also indicate that representations of body stimuli in these regions are built on the basis of emergent properties, such as socio-affective meaning, and not just putative image properties.
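A species-invariant representation implies that a decoder trained on one species' body stimuli should generalize to another's. The sketch below simulates that cross-species generalization test; the data, dimensions, and labels are stand-ins, not the study's recordings.

```python
# A hedged sketch of a cross-species decoding test on simulated voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_voxels = 100
shared_pattern = rng.normal(0, 1, n_voxels)   # assumed fear-related pattern common to species

def simulate(n_trials):
    labels = rng.integers(0, 2, n_trials)      # 0 = neutral body, 1 = fearful body
    data = rng.normal(0, 1, (n_trials, n_voxels)) + np.outer(labels, shared_pattern)
    return data, labels

X_macaque, y_macaque = simulate(80)
X_human, y_human = simulate(80)

# Train on macaque bodies, test on human bodies: above-chance accuracy
# indicates a species-invariant fear representation.
clf = LinearSVC().fit(X_macaque, y_macaque)
print("cross-species accuracy:", clf.score(X_human, y_human))
```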

8.
Nat Commun ; 13(1): 6302, 2022 10 22.
Article in English | MEDLINE | ID: mdl-36273204

ABSTRACT

Viewing faces that are perceived as emotionally expressive evokes enhanced neural responses in multiple brain regions, a phenomenon thought to depend critically on the amygdala. This emotion-related modulation is evident even in primary visual cortex (V1), providing a potential neural substrate by which emotionally salient stimuli can affect perception. How does emotional valence information, computed in the amygdala, reach V1? Here we use high-resolution functional MRI to investigate the layer profile and retinotopic distribution of neural activity specific to emotional facial expressions. Across three experiments, human participants viewed centrally presented face stimuli varying in emotional expression and performed a gender judgment task. We found that facial valence sensitivity was evident only in superficial cortical layers and was not restricted to the retinotopic location of the stimuli, consistent with diffuse feedback-like projections from the amygdala. Together, our results point to a feedback mechanism by which the amygdala directly modulates activity at the earliest stage of cortical visual processing.


Subject(s)
Facial Expression , Visual Cortex , Humans , Visual Cortex/physiology , Emotions/physiology , Amygdala/physiology , Visual Perception/physiology , Brain Mapping , Magnetic Resonance Imaging
9.
Sci Data ; 9(1): 518, 2022 08 25.
Article in English | MEDLINE | ID: mdl-36008415

ABSTRACT

The NIMH Healthy Research Volunteer Dataset is a collection of phenotypic data characterizing healthy research volunteers using clinical assessments such as assays of blood and urine; mental health assessments; diagnostic and dimensional measures of mental health; cognitive and neuropsychological functioning; structural and functional magnetic resonance imaging (MRI); diffusion tensor imaging (DTI); and a comprehensive magnetoencephalography (MEG) battery. In addition, blood samples of healthy volunteers are banked for future analyses. All data collected in this protocol are broadly shared in the OpenNeuro repository, in the Brain Imaging Data Structure (BIDS) format. In addition, task paradigms and basic pre-processing scripts are shared on GitHub. There are currently few open-access MEG datasets, and multimodal neuroimaging datasets are even rarer. Due to its depth of characterization of a healthy population in terms of brain health, this dataset may contribute to a wide array of secondary investigations of non-clinical and clinical research questions.
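Because the data are shared in BIDS format, standard tooling can index them once downloaded. A minimal sketch with pybids follows; the local path is a placeholder, and the query is illustrative.

```python
# A minimal sketch of querying a BIDS-formatted dataset after downloading it
# from OpenNeuro; the local path below is a hypothetical placeholder.
from bids import BIDSLayout  # pip install pybids

layout = BIDSLayout("/data/nimh-healthy-volunteer")
subjects = layout.get_subjects()
bold_runs = layout.get(suffix="bold", extension=".nii.gz", return_type="filename")
print(f"{len(subjects)} subjects, {len(bold_runs)} functional runs")
```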


Subject(s)
Diffusion Tensor Imaging , Magnetoencephalography , Brain/diagnostic imaging , Healthy Volunteers , Humans , Magnetic Resonance Imaging , National Institute of Mental Health (U.S.) , Neuroimaging/methods , United States
10.
medRxiv ; 2021 Oct 14.
Article in English | MEDLINE | ID: mdl-34671781

ABSTRACT

BACKGROUND: The COVID-19 pandemic led to dramatic threats to health and social life. Study objectives: to develop a prediction model leveraging a subsample of known Patients/Controls, and to evaluate the relationship of predicted mental health status to clinical outcome measures and pandemic-related psychological and behavioral responses during lockdown (spring/summer 2020). METHODS: Online cohort study conducted by the National Institute of Mental Health Intramural Research Program. Convenience sample of English-speaking adults (enrolled 4/4-5/16/20; n = 1,992). Enrollment measures: demographics, clinical history, functional status, psychiatric and family history, alcohol/drug use. Outcome measures (at enrollment and every 2 weeks for 6 months): distress, loneliness, mental health symptoms, and a COVID-19 survey. NIMH IRP Patient/Control survey responses informed assignment of Patient Probability Scores (PPS) for all participants. Regression models analyzed the relationship between PPS and outcome measures. OUTCOMES: Mean age was 46.0 years (±14.7); participants were 82.4% female and 88.9% white. PPS correlated with distress, loneliness, depression, and mental health factors. PPS was associated with negative psychological responses to COVID-19. Worry about mental health (OR 1.46) exceeded worry about physical health (OR 1.13). PPS was not associated with adherence to social distancing guidelines, but was associated with stress related to social distancing and worries about infection of self/others. INTERPRETATION: Mental health status (PPS) was associated with concurrent clinical ratings and COVID-specific negative responses. A focus on mental health during the pandemic is warranted, especially among those with mental health vulnerabilities. We will include PPS when conducting longitudinal analyses of mental health trajectories and of risk and resilience factors that may account for differing clinical outcomes. FUNDING: NIMH (ZIAMH002922); NCCIH (ZIAAT000030).
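Odds ratios like those above typically come from logistic regression; the sketch below shows the general pattern on simulated data. The variable names, effect size, and data are assumptions, not the study's model.

```python
# A hedged sketch of obtaining an odds ratio from a logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
pps = rng.normal(0, 1, n)                       # standardized patient probability score
logit = -0.5 + 0.4 * pps                        # assumed true effect
worry = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # binary outcome, e.g., worry about mental health

X = sm.add_constant(pps)
fit = sm.Logit(worry, X).fit(disp=0)
print("odds ratio per SD of PPS:", np.exp(fit.params[1]))
```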

11.
Neurosci Biobehav Rev ; 120: 75-77, 2021 01.
Article in English | MEDLINE | ID: mdl-33227326

ABSTRACT

In a recent review, Waller et al. (2020) discuss how the Facial Action Coding System (FACS) can be used to study the evolution of facial behaviors. This is a timely and thought-provoking review that highlights the numerous ways in which FACS could be used to compare the mechanisms responsible for the production of facial behaviors across species. We propose that FACS could also be used to study the recognition of facial behaviors in nonhuman subjects, where one of the key challenges is finding suitable stimuli that convey different emotions. By using FACS-rated images in awake neuroimaging experiments, researchers could accurately identify the brain mechanisms responsible for recognizing expressions across mammalian species. This approach would reveal neural homologs and deepen our understanding of how nonverbal social communication has evolved.


Subject(s)
Facial Expression , Recognition, Psychology , Animals , Emotions , Face , Humans , Nonverbal Communication
12.
J Neurosci ; 40(42): 8119-8131, 2020 10 14.
Article in English | MEDLINE | ID: mdl-32928886

ABSTRACT

When we move the features of our face, or turn our head, we communicate changes in our internal state to the people around us. How this information is encoded and used by an observer's brain is poorly understood. We investigated this issue using a functional MRI adaptation paradigm in awake male macaques. Among face-selective patches of the superior temporal sulcus (STS), we found a double dissociation of areas processing facial expression and those processing head orientation. The face-selective patches in the STS fundus were most sensitive to facial expression, as was the amygdala, whereas those on the lower, lateral edge of the sulcus were most sensitive to head orientation. The results of this study reveal a new dimension of functional organization, with face-selective patches segregating within the STS. The findings thus force a rethinking of the role of the face-processing system in representing subject-directed actions and supporting social cognition.

SIGNIFICANCE STATEMENT: When we are interacting with another person, we make inferences about their emotional state based on visual signals. For example, when a person's facial expression changes, we are given information about their feelings. While primates are thought to have specialized cortical mechanisms for analyzing the identity of faces, less is known about how these mechanisms unpack transient signals, like expression, that can change from one moment to the next. Here, using an fMRI adaptation paradigm, we demonstrate that while the identity of a face is held constant, there are separate mechanisms in the macaque brain for processing transient changes in the face's expression and orientation. These findings shed new light on the function of the face-processing system during social exchanges.
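For readers unfamiliar with fMRI adaptation, the logic is that a region coding a feature shows a release from adaptation when that feature changes. The toy sketch below illustrates that inference pattern with made-up values; it is not the study's analysis.

```python
# A toy illustration of fMRI adaptation logic. Response values are invented
# to show the inference, not drawn from the study.
response = {
    "repeat_same_expression":  1.00,  # adapted baseline (arbitrary units)
    "change_expression":       1.60,  # large release -> region codes expression
    "change_head_orientation": 1.05,  # little release -> region does not code orientation
}
baseline = response["repeat_same_expression"]
release = {cond: round(r - baseline, 2)
           for cond, r in response.items() if cond != "repeat_same_expression"}
print(release)
```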


Subject(s)
Facial Expression , Motion Perception/physiology , Orientation , Social Perception , Amygdala/diagnostic imaging , Amygdala/physiology , Animals , Cognition , Head , Image Processing, Computer-Assisted , Macaca mulatta , Magnetic Resonance Imaging , Male , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology
13.
Neuroimage ; 218: 116878, 2020 09.
Article in English | MEDLINE | ID: mdl-32360168

ABSTRACT

Facial motion plays a fundamental role in the recognition of facial expressions in primates, but the neural substrates underlying this special type of biological motion are not well understood. Here, we used fMRI to investigate the extent to which the specialization for facial motion is represented in the visual system and compared the neural mechanisms for the processing of non-rigid facial motion in macaque monkeys and humans. We defined the areas specialized for facial motion as those significantly more activated when subjects perceived the motion caused by dynamic faces (dynamic faces > static faces) than when they perceived the motion caused by dynamic non-face objects (dynamic objects > static objects). We found that, in monkeys, significant activations evoked by facial motion were in the fundus of anterior superior temporal sulcus (STS), which overlapped the anterior fundus face patch. In humans, facial motion activated three separate foci in the right STS: posterior, middle, and anterior STS, with the anterior STS location showing the most selectivity for facial motion compared with other facial motion areas. In both monkeys and humans, facial motion shows a gradient preference as one progresses anteriorly along the STS. Taken together, our results indicate that monkeys and humans share similar neural substrates within the anterior temporal lobe specialized for the processing of non-rigid facial motion.
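The defining contrast above is an interaction: the motion effect for faces must exceed the motion effect for objects. A small sketch of that voxelwise logic on simulated betas follows; the threshold and values are illustrative.

```python
# A hedged sketch of the facial-motion contrast logic on simulated beta values.
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 1000
beta = {cond: rng.normal(1.0, 0.3, n_voxels)
        for cond in ["dyn_face", "stat_face", "dyn_obj", "stat_obj"]}
beta["dyn_face"][:100] += 0.8            # simulated facial-motion region

face_motion = beta["dyn_face"] - beta["stat_face"]     # dynamic faces > static faces
object_motion = beta["dyn_obj"] - beta["stat_obj"]     # dynamic objects > static objects
selective = (face_motion - object_motion) > 0.5        # illustrative threshold
print("facial-motion-selective voxels:", selective.sum())
```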


Subject(s)
Facial Expression , Facial Recognition/physiology , Temporal Lobe/physiology , Adult , Animals , Emotions/physiology , Female , Humans , Image Processing, Computer-Assisted/methods , Macaca , Magnetic Resonance Imaging/methods , Male , Motion
15.
Neuropsychologia ; 128: 297-304, 2019 05.
Article in English | MEDLINE | ID: mdl-28807647

ABSTRACT

Visuospatial attention often improves task performance by increasing signal gain at attended locations and decreasing noise at unattended locations. Attention is also believed to be the mechanism that allows information to enter awareness. In this experiment, we assessed whether orienting endogenous visuospatial attention with cues differentially affects visual discrimination sensitivity (an objective measure of task performance) and visual awareness (the subjective feeling of perceiving) during the same discrimination task. Gabor patch targets were presented laterally, either at low contrast (contrast stimuli) or at high contrast embedded in noise (noise stimuli). Participants reported their orientation either in a 3-alternative choice task (clockwise, counterclockwise, unknown) that allowed for both objective and subjective reports, or in a 2-alternative choice task (clockwise, counterclockwise) that provided a control for objective reports. Signal detection theory models were fit to the experimental data: estimated perceptual sensitivity reflected objective performance; decision criteria, or subjective biases, were a proxy for visual awareness. Attention increased sensitivity (i.e., improved objective performance) for the contrast stimuli, but not for the noise stimuli. Indeed, with the latter, attention did not further enhance the already high target signal or reduce the already low uncertainty about its position. Interestingly, for both contrast and noise stimuli, attention resulted in more liberal criteria, i.e., increased awareness. The noise condition is thus an experimental configuration in which people think they see the targets they attend to better, even when they do not. This could be explained by an internal representation of their attentional state, which influences awareness independently of objective visual signals.
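The analysis separates sensitivity from criterion in the standard equal-variance signal detection theory way; below is a minimal sketch of that computation. The hit and false-alarm rates shown are hypothetical.

```python
# Standard equal-variance SDT: sensitivity d' and decision criterion c
# from hit and false-alarm rates.
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa                # perceptual sensitivity
    criterion = -0.5 * (z_h + z_fa)     # more negative = more liberal reporting
    return d_prime, criterion

# Hypothetical noise-condition performance: attention leaves sensitivity
# roughly unchanged but liberalizes the criterion.
print(sdt(0.80, 0.20))  # unattended
print(sdt(0.85, 0.30))  # attended: similar d', more liberal (negative) criterion
```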


Subject(s)
Attention/physiology , Awareness/physiology , Discrimination, Psychological/physiology , Signal Detection, Psychological/physiology , Space Perception/physiology , Visual Perception/physiology , Adult , Choice Behavior/physiology , Cues , Female , Humans , Male , Middle Aged , Noise , Orientation/physiology , Photic Stimulation , Psychomotor Performance/physiology , Reaction Time/physiology , Uncertainty , Young Adult
16.
PLoS Biol ; 16(6): e2005399, 2018 06.
Article in English | MEDLINE | ID: mdl-29939981

ABSTRACT

Feature-based attention has a spatially global effect, i.e., responses to stimuli that share features with an attended stimulus are enhanced not only at the attended location but throughout the visual field. However, how feature-based attention modulates cortical neural responses at unattended locations remains unclear. Here we used functional magnetic resonance imaging (fMRI) to examine this issue as human participants performed motion- (Experiment 1) and color- (Experiment 2) based attention tasks. Results indicated that, in both experiments, the respective visual processing areas (middle temporal area [MT+] for motion and V4 for color) as well as early visual, parietal, and prefrontal areas all showed the classic feature-based attention effect, with neural responses to the unattended stimulus significantly elevated when it shared the same feature with the attended stimulus. Effective connectivity analysis using dynamic causal modeling (DCM) showed that this spatially global effect in the respective visual processing areas (MT+ for motion and V4 for color), intraparietal sulcus (IPS), frontal eye field (FEF), medial frontal gyrus (mFG), and primary visual cortex (V1) was driven by feedback from the inferior frontal junction (IFJ). Complementary effective connectivity analysis using Granger causality modeling (GCM) confirmed that, in both experiments, the node with the highest outflow and netflow degree was IFJ, which was thus considered to be the source of the network. These results indicate a source for the spatially global effect of feature-based attention in the human prefrontal cortex.
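Granger-style directed connectivity asks whether past activity in one region improves prediction of another's. A hedged sketch of the bivariate test on simulated region time courses follows; the network-level GCM in the study is more elaborate, and the region names here are only labels.

```python
# A hedged sketch of a bivariate Granger causality test on simulated data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 300
ifj = rng.normal(0, 1, n)
v4 = 0.6 * np.roll(ifj, 2) + rng.normal(0, 1, n)  # V4 lags IFJ -> IFJ "Granger-causes" V4

# The test asks whether the second column Granger-causes the first.
data = np.column_stack([v4, ifj])
results = grangercausalitytests(data, maxlag=3, verbose=False)
print(results[2][0]["ssr_ftest"])  # (F, p, df_denom, df_num) at lag 2
```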


Subject(s)
Prefrontal Cortex/physiology , Adult , Attention/physiology , Brain Mapping , Color Perception/physiology , Connectome , Female , Humans , Magnetic Resonance Imaging , Male , Models, Neurological , Models, Psychological , Motion Perception/physiology , Photic Stimulation , Visual Cortex/physiology , Visual Fields/physiology , Young Adult
17.
Neuroimage ; 163: 231-243, 2017 12.
Article in English | MEDLINE | ID: mdl-28951352

ABSTRACT

Classic theories of object-based attention assume a single object of selection, but real-world tasks, such as driving a car, often require attending to multiple objects simultaneously. However, whether object-based attention can operate on more than one object at a time has remained unexplored. Here, we used functional magnetic resonance imaging (fMRI) to address this question as human participants performed object-based attention tasks that required simultaneous attention to two objects differing in either their features or locations. Simultaneous attention to two objects differing in features (face and house) did not show significantly different responses in the fusiform face area (FFA) or parahippocampal place area (PPA), respectively, compared to attending a single object (face or house), but did enhance the response in the inferior frontal gyrus (IFG). Simultaneous attention to two circular arcs differing in locations did not show significantly different responses in the primary visual cortex (V1) compared to attending a single circular arc, but did enhance the response in the intraparietal sulcus (IPS). These results suggest that object-based attention can simultaneously select at least two objects differing in their features or locations, processes mediated by the frontal and parietal cortex, respectively.


Subject(s)
Attention/physiology , Brain/physiology , Pattern Recognition, Visual/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
18.
J Neurosci ; 37(5): 1156-1161, 2017 02 01.
Article in English | MEDLINE | ID: mdl-28011742

ABSTRACT

Nonhuman primate neuroanatomical studies have identified a cortical pathway from the superior temporal sulcus (STS) projecting into dorsal subregions of the amygdala, but whether this same pathway exists in humans is unknown. Here, we addressed this question by combining theta burst transcranial magnetic stimulation (TBS) with fMRI to test the prediction that the STS and amygdala are functionally connected during face perception. Human participants (N = 17) were scanned, over two sessions, while viewing 3 s video clips of moving faces, bodies, and objects. During these sessions, TBS was delivered over the face-selective right posterior STS (rpSTS) or over the vertex control site. A region-of-interest analysis revealed results consistent with our hypothesis. Namely, TBS delivered over the rpSTS reduced the neural response to faces (but not to bodies or objects) in the rpSTS, right anterior STS (raSTS), and right amygdala, compared with TBS delivered over the vertex. By contrast, TBS delivered over the rpSTS did not significantly reduce the neural response to faces in the right fusiform face area or right occipital face area. This pattern of results is consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models. SIGNIFICANCE STATEMENT: Neuroimaging studies have identified multiple face-selective regions in the brain, but the functional connections between these regions are unknown. In the present study, participants were scanned with fMRI while viewing movie clips of faces, bodies, and objects before and after transient disruption of the face-selective right posterior superior temporal sulcus (rpSTS). Results showed that TBS disruption reduced the neural response to faces, but not to bodies or objects, in the rpSTS, right anterior STS (raSTS), and right amygdala. These results are consistent with the existence of a cortico-amygdala pathway in humans for processing face information projecting from the rpSTS, via the raSTS, into the amygdala. This conclusion is consistent with nonhuman primate neuroanatomy and with existing face perception models.


Subject(s)
Amygdala/anatomy & histology , Temporal Lobe/anatomy & histology , Adult , Brain Mapping , Face , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Neural Pathways/anatomy & histology , Pattern Recognition, Visual/physiology , Photic Stimulation , Transcranial Magnetic Stimulation , Visual Perception/physiology
19.
PLoS Biol ; 14(11): e1002578, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27870851

ABSTRACT

The normalization model of attention proposes that attention can affect performance by response- or contrast-gain changes, depending on the size of the stimulus and attention field. Here, we manipulated the attention field by emotional valence, negative faces versus positive faces, while holding stimulus size constant in a spatial cueing task. We observed changes in the cueing effect consonant with changes in response gain for negative faces and contrast gain for positive faces. Neuroimaging experiments confirmed that subjects' attention fields were narrowed for negative faces and broadened for positive faces. Importantly, across subjects, the self-reported emotional strength of negative faces and positive faces correlated, respectively, both with response- and contrast-gain changes and with primary visual cortex (V1) narrowed and broadened attention fields. Effective connectivity analysis showed that the emotional valence-dependent attention field was closely associated with feedback from the dorsolateral prefrontal cortex (DLPFC) to V1. These findings indicate a crucial involvement of DLPFC in the normalization processes of emotional attention.
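The response-gain vs. contrast-gain distinction can be made concrete with the standard Naka-Rushton contrast-response function, R(c) = Rmax * c^n / (c^n + c50^n): response gain scales Rmax (as with a narrowed attention field), while contrast gain shifts c50 leftward (as with a broadened field). The sketch below uses illustrative parameter values, not values fitted to the study's data.

```python
# Naka-Rushton contrast-response function under two attentional gain modes.
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0):
    return r_max * c**n / (c**n + c50**n)

contrast = np.linspace(0.01, 1.0, 5)
baseline = naka_rushton(contrast)
response_gain = naka_rushton(contrast, r_max=1.3)  # multiplicative scaling: narrow attention field
contrast_gain = naka_rushton(contrast, c50=0.2)    # leftward shift: broad attention field
print(np.round(baseline, 2))
print(np.round(response_gain, 2))  # grows most at high contrast
print(np.round(contrast_gain, 2))  # grows most at mid contrast
```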


Subject(s)
Attention/physiology , Emotions/physiology , Female , Humans , Magnetic Resonance Imaging , Male , Prefrontal Cortex/diagnostic imaging , Prefrontal Cortex/physiology , Psychophysics
20.
Neuroimage ; 130: 77-90, 2016 Apr 15.
Article in English | MEDLINE | ID: mdl-26826513

ABSTRACT

Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions (neutral, fearful, angry, and happy) and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter.
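Decoding analyses of this kind typically cross-validate a linear classifier on trialwise voxel patterns within each ROI. The sketch below reproduces that skeleton on simulated data; it is illustrative, not the study's pipeline.

```python
# A minimal sketch of ROI-based expression decoding with an SVM and
# cross-validation, on simulated stand-ins for voxel patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_trials, n_voxels = 128, 200
y = np.repeat(np.arange(4), n_trials // 4)              # 4 expression categories
X = rng.normal(0, 1, (n_trials, n_voxels))
X += np.eye(4)[y] @ rng.normal(0, 0.5, (4, n_voxels))   # category-specific patterns

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=8)
print("decoding accuracy:", acc.mean(), "(chance = 0.25)")
```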


Subject(s)
Brain/physiology , Discrimination, Psychological/physiology , Pattern Recognition, Visual/physiology , Adult , Brain Mapping , Emotions/physiology , Facial Expression , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Support Vector Machine