1.
Front Psychol ; 15: 1350631, 2024.
Article in English | MEDLINE | ID: mdl-38966733

ABSTRACT

Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese along with manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, along with culture-specific display tendencies: many facial movements differ in intensity in Japan compared with the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail how people in dramatically different cultures experience and express emotion in a fashion that is high-dimensional, categorical, and broadly similar across cultures, yet complex.

2.
iScience ; 27(3): 109175, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38433918

ABSTRACT

Cross-cultural studies of the meaning of facial expressions have largely focused on judgments of small sets of stereotypical images by small numbers of people. Here, we used large-scale data collection and machine learning to map what facial expressions convey in six countries. Using a mimicry paradigm, 5,833 participants formed facial expressions found in 4,659 naturalistic images, resulting in 423,193 participant-generated facial expressions. In their own language, participants also rated each expression in terms of 48 emotions and mental states. A deep neural network tasked with predicting the culture-specific meanings people attributed to facial movements while ignoring physical appearance and context discovered 28 distinct dimensions of facial expression, with 21 dimensions showing strong evidence of universality and the remainder showing varying degrees of cultural specificity. These results capture the underlying dimensions of the meanings of facial expressions within and across cultures in unprecedented detail.
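A rough sketch of the general idea behind this kind of analysis (not the authors' pipeline, which trained a deep neural network on images): predict many-dimensional emotion ratings from facial-movement features, then ask how many dimensions of rated meaning the predictions actually carry. The feature set, array shapes, and the ridge-regression/PCA substitution below are illustrative assumptions; only the 48 rating dimensions come from the abstract.

```python
# Illustrative sketch only: predict 48-dimensional emotion ratings from
# facial-movement features, then estimate how many dimensions of meaning
# the predictions carry. Data are synthetic; the study itself trained a
# deep neural network on raw images.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_expressions, n_features, n_emotions = 5000, 60, 48

X = rng.normal(size=(n_expressions, n_features))  # facial-movement features (e.g., action-unit intensities)
W = rng.normal(size=(n_features, n_emotions))
Y = X @ W + rng.normal(scale=2.0, size=(n_expressions, n_emotions))  # mean emotion ratings per expression

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)

# Rough proxy for "how many dimensions of expression meaning are recovered":
# PCA on the model's predicted ratings for held-out expressions.
pred = model.predict(X_te)
pca = PCA().fit(pred)
n_dims = int(np.sum(np.cumsum(pca.explained_variance_ratio_) < 0.95) + 1)
print(f"~{n_dims} dimensions explain 95% of predicted-rating variance")
```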

3.
Neuroimage ; 268: 119879, 2023 03.
Article in English | MEDLINE | ID: mdl-36642154

ABSTRACT

Thirty years of neuroimaging reveal the set of brain regions consistently associated with pleasant and unpleasant affect in humans, or the neural reference space for valence. Yet some of humans' most potent affective states occur in the context of other humans. Prior work has yet to differentiate how the neural reference space for valence varies as a product of the sociality of affective stimuli. To address this question, we meta-analyzed across 614 social and non-social affective neuroimaging contrasts, summarizing the brain regions that are consistently activated for social and non-social affective information. We demonstrate that across the literature, social and non-social affective stimuli yield overlapping activations within regions associated with visceromotor control, including the amygdala, hypothalamus, anterior cingulate cortex and insula. However, we find that social processing differs from non-social affective processing in that it involves additional cortical activations in the medial prefrontal and posterior cingulum that have been associated with mentalizing and prediction. A Bayesian classifier was able to differentiate unpleasant from pleasant affect, but not social from non-social affective states. Moreover, it was not able to classify unpleasantness from pleasantness at the highest levels of sociality. These findings suggest that highly social scenarios may be equally salient to humans, regardless of their valence.


Subjects
Brain Mapping, Brain, Humans, Bayes Theorem, Brain Mapping/methods, Brain/diagnostic imaging, Emotions, Social Behavior, Magnetic Resonance Imaging/methods
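The Bayesian classification step is only summarized in the abstract; a minimal sketch of one way such a meta-analytic classifier could look, assuming each contrast is coded as a binary vector of reported activations over a handful of regions, is below. The region list, the synthetic data, and the Bernoulli naive Bayes choice are assumptions, not the authors' method; only the count of 614 contrasts comes from the abstract.

```python
# Minimal sketch: classify pleasant vs. unpleasant contrasts from binary
# region-activation vectors, as one might in a meta-analytic database.
# All data here are synthetic; region set and classifier are assumptions.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
regions = ["amygdala", "hypothalamus", "ACC", "insula", "mPFC", "posterior_cingulum"]
n_contrasts = 614

# 1 = region reported active in that contrast, 0 = not reported
X = rng.integers(0, 2, size=(n_contrasts, len(regions)))
valence = rng.integers(0, 2, size=n_contrasts)  # 0 = unpleasant, 1 = pleasant

clf = BernoulliNB()
acc = cross_val_score(clf, X, valence, cv=5).mean()
print(f"cross-validated valence decoding accuracy: {acc:.2f}  (chance = 0.50)")
```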
4.
Nat Hum Behav ; 7(2): 240-250, 2023 02.
Article in English | MEDLINE | ID: mdl-36577898

ABSTRACT

Human social life is rich with sighs, chuckles, shrieks and other emotional vocalizations, called 'vocal bursts'. Nevertheless, the meaning of vocal bursts across cultures is only beginning to be understood. Here, we combined large-scale experimental data collection with deep learning to reveal the shared and culture-specific meanings of vocal bursts. A total of n = 4,031 participants in China, India, South Africa, the USA and Venezuela mimicked vocal bursts drawn from 2,756 seed recordings. Participants also judged the emotional meaning of each vocal burst. A deep neural network tasked with predicting the culture-specific meanings people attributed to vocal bursts while disregarding context and speaker identity discovered 24 acoustic dimensions, or kinds, of vocal expression with distinct emotion-related meanings. The meanings attributed to these complex vocal modulations were 79% preserved across the five countries and three languages. These results reveal the underlying dimensions of human emotional vocalization in remarkable detail.


Subjects
Deep Learning, Voice, Humans, Emotions, Language, Acoustics
5.
Soc Cogn Affect Neurosci ; 17(11): 995-1006, 2022 11 02.
Article in English | MEDLINE | ID: mdl-35445241

ABSTRACT

In the present study, we used an unsupervised classification algorithm to reveal both consistency and degeneracy in neural network connectivity during anger and anxiety. Degeneracy refers to the ability of different biological pathways to produce the same outcomes. Previous research is suggestive of degeneracy in emotion, but little research has explicitly examined whether degenerate functional connectivity patterns exist for emotion categories such as anger and anxiety. Twenty-four subjects underwent functional magnetic resonance imaging (fMRI) while listening to unpleasant music and self-generating experiences of anger and anxiety. A data-driven model building algorithm with unsupervised classification (subgrouping Group Iterative Multiple Model Estimation) identified patterns of connectivity among 11 intrinsic networks that were associated with anger vs anxiety. As predicted, degenerate functional connectivity patterns existed within these overarching consistent patterns. Degenerate patterns were not attributable to differences in emotional experience or other individual-level factors. These findings are consistent with the constructionist account that emotions emerge from flexible functional neuronal assemblies and that emotion categories such as anger and anxiety each describe populations of highly variable instances.


Subjects
Brain, Emotions, Humans, Brain/diagnostic imaging, Brain/physiology, Emotions/physiology, Magnetic Resonance Imaging/methods, Anger/physiology, Neural Networks (Computer), Neural Pathways/diagnostic imaging, Neural Pathways/physiology
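The study's unsupervised classification used subgrouping Group Iterative Multiple Model Estimation (S-GIMME); the sketch below deliberately swaps in a far simpler analogue, k-means clustering of vectorized network connectivity, only to illustrate the general idea of finding data-driven subgroups of participants with shared connectivity patterns. The subject and network counts mirror the abstract, but the data, time-series length, and clustering choice are illustrative assumptions.

```python
# Simplified analogue (not S-GIMME): find data-driven subgroups of
# participants by clustering their network-to-network connectivity
# patterns. All time series are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_subjects, n_networks, n_timepoints = 24, 11, 300

features = []
for _ in range(n_subjects):
    ts = rng.normal(size=(n_timepoints, n_networks))  # network time series for one subject
    conn = np.corrcoef(ts, rowvar=False)              # 11 x 11 connectivity matrix
    iu = np.triu_indices(n_networks, k=1)
    features.append(conn[iu])                         # vectorize the upper triangle
features = np.array(features)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print("subgroup assignments:", kmeans.labels_)
```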
6.
Soc Cogn Affect Neurosci ; 16(8): 827-837, 2021 08 05.
Article in English | MEDLINE | ID: mdl-32986115

ABSTRACT

Across multiple domains of social perception, including social categorization, emotion perception, impression formation, and mentalizing, multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other neuroimaging fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and discuss how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.


Subjects
Brain Mapping, Neurosciences, Brain/diagnostic imaging, Humans, Magnetic Resonance Imaging, Social Perception
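For readers unfamiliar with the technique this review centers on, a bare-bones MVPA decoding analysis looks roughly like the sketch below: a cross-validated linear classifier applied to multi-voxel patterns. The synthetic data, voxel counts, and scikit-learn pipeline are assumptions for illustration, not any specific study's analysis.

```python
# Bare-bones MVPA decoding sketch: can a linear classifier distinguish
# two social conditions from multi-voxel patterns? Data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 500
labels = np.repeat([0, 1], n_trials // 2)      # e.g., two social categories
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5              # weak multivariate signal in a few voxels

decoder = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000))
acc = cross_val_score(decoder, patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```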
7.
Soc Cogn Affect Neurosci ; 16(3): 302-314, 2021 03 05.
Article in English | MEDLINE | ID: mdl-33270131

ABSTRACT

Previous research has shown that social-conceptual associations, such as stereotypes, can influence the visual representation of faces and neural pattern responses in ventral temporal cortex (VTC) regions, such as the fusiform gyrus (FG). Current models suggest that this social-conceptual impact requires medial orbitofrontal cortex (mOFC) feedback signals during perception. Backward masking, a technique known to reduce functional connectivity between VTC regions and regions outside the VTC, can disrupt such signals. During functional magnetic resonance imaging (fMRI), subjects passively viewed masked and unmasked faces, and following the scan, perceptual biases and stereotypical associations were assessed. Multi-voxel representations of faces across the VTC, and in the FG and mOFC, reflected stereotypically biased perceptions when faces were unmasked, but this effect was abolished when faces were masked. However, the VTC still retained the ability to process masked faces and was sensitive to their categorical distinctions. Functional connectivity analyses confirmed that masking disrupted mOFC-FG connectivity, which predicted a reduced impact of stereotypical associations in the FG. Taken together, our findings suggest that the biasing of face representations in line with stereotypical associations does not arise from intrinsic processing within the VTC and FG alone, but instead depends in part on top-down feedback from the mOFC during perception.


Subjects
Facial Recognition/physiology, Prefrontal Cortex/diagnostic imaging, Temporal Lobe/diagnostic imaging, Adult, Brain Mapping/methods, Face/physiology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Photic Stimulation, Prefrontal Cortex/physiology, Stereotyping, Temporal Lobe/physiology, Young Adult
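The connectivity analysis is described only at a high level in the abstract; one common, much simplified way to compare ROI-to-ROI coupling across conditions is to correlate region time series separately for masked and unmasked trials, as sketched below with synthetic placeholder signals (the ROI labels, noise levels, and time-series lengths are assumptions, not the study's pipeline).

```python
# Rough sketch of a simple ROI-to-ROI connectivity comparison: correlate
# mOFC and FG time series separately for masked and unmasked conditions.
# All signals are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_tr = 200
shared = rng.normal(size=n_tr)

# Unmasked condition: mOFC and FG share signal (feedback intact)
mofc_unmasked = shared + rng.normal(scale=0.5, size=n_tr)
fg_unmasked = shared + rng.normal(scale=0.5, size=n_tr)

# Masked condition: little shared signal (feedback disrupted)
mofc_masked = rng.normal(size=n_tr)
fg_masked = rng.normal(size=n_tr)

r_unmasked, _ = pearsonr(mofc_unmasked, fg_unmasked)
r_masked, _ = pearsonr(mofc_masked, fg_masked)
print(f"mOFC-FG connectivity  unmasked r = {r_unmasked:.2f}, masked r = {r_masked:.2f}")
```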
8.
Adv Exp Soc Psychol ; 61: 237-287, 2020.
Article in English | MEDLINE | ID: mdl-34326560

ABSTRACT

The perception of social categories, emotions, and personality traits from others' faces has been studied extensively in each domain, but largely in isolation from the others. We synthesize emerging findings suggesting that, in each of these domains of social perception, both a variety of bottom-up facial features and top-down social cognitive processes play a part in driving initial perceptions. Among such top-down processes, social-conceptual knowledge in particular can have a fundamental structuring role in how we perceive others' faces. Extending the Dynamic Interactive framework (Freeman & Ambady, 2011), we outline a perspective whereby the perception of social categories, emotions, and traits from faces can all be conceived as emerging from an integrated system relying on domain-general cognitive properties. Such an account of social perception envisions perception as a rapid, but gradual, process of negotiation between the variety of visual cues inherent to a person and the social cognitive knowledge an individual perceiver brings to the perceptual process. We describe growing evidence in support of this perspective as well as its theoretical implications for social psychology.

9.
Proc Natl Acad Sci U S A ; 116(32): 15861-15870, 2019 08 06.
Article in English | MEDLINE | ID: mdl-31332015

ABSTRACT

Humans reliably categorize configurations of facial actions into specific emotion categories, leading some to argue that this process is invariant between individuals and cultures. However, growing behavioral evidence suggests that factors such as emotion-concept knowledge may shape the way emotions are visually perceived, leading to variability, rather than universality, in facial-emotion perception. An understanding of this variability in emotion perception is only beginning to emerge, and the neural basis of any impact from the structure of emotion-concept knowledge remains unknown. In a neuroimaging study, we used a representational similarity analysis (RSA) approach to measure the correspondence between the conceptual, perceptual, and neural representational structures of the six emotion categories Anger, Disgust, Fear, Happiness, Sadness, and Surprise. We found that subjects exhibited individual differences in their conceptual structure of emotions, which predicted their own unique perceptual structure. When viewing faces, the representational structure of multivoxel patterns in the right fusiform gyrus was significantly predicted by a subject's unique conceptual structure, even when controlling for potential physical similarity in the faces themselves. Finally, cross-cultural differences in emotion perception were also observed, which could be explained by individual differences in conceptual structure. Our results suggest that the representational structure of emotion expressions in visual face-processing regions may be shaped by idiosyncratic conceptual understanding of emotion categories.


Subjects
Brain/physiology, Emotions/physiology, Facial Expression, Animals, Behavior, Female, Humans, Magnetic Resonance Imaging, Male, Mice, Young Adult
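The core RSA logic, correlating a conceptual dissimilarity matrix with a neural dissimilarity matrix while controlling for physical similarity of the stimuli, can be sketched as follows. The matrices below are synthetic, and the rank-residualization approach to the partial correlation is an illustrative choice, not necessarily the paper's exact procedure.

```python
# RSA sketch: does a subject's conceptual similarity structure for six
# emotions predict the neural similarity structure, controlling for the
# physical similarity of the face stimuli? All matrices are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
n = len(emotions)
iu = np.triu_indices(n, k=1)

def random_rdm(rng, n):
    """Symmetric dissimilarity matrix with a zero diagonal (placeholder data)."""
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0)
    return m

conceptual = random_rdm(rng, n)[iu]   # dissimilarity of emotion concepts (from ratings)
neural = random_rdm(rng, n)[iu]       # dissimilarity of fusiform response patterns
physical = random_rdm(rng, n)[iu]     # dissimilarity of the face images themselves

def rank(x):
    return np.argsort(np.argsort(x)).astype(float)

def residualize(y, x):
    """Remove the linear effect of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Rank-based partial correlation: conceptual vs. neural RDM, controlling
# for the physical-similarity RDM.
r, p = pearsonr(residualize(rank(conceptual), rank(physical)),
                residualize(rank(neural), rank(physical)))
print(f"conceptual-neural RSA (controlling for physical similarity): r = {r:.2f}, p = {p:.3f}")
```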
10.
Neurosci Lett ; 693: 40-43, 2019 02 06.
Article in English | MEDLINE | ID: mdl-29275186

ABSTRACT

The visual system is able to extract an enormous amount of socially relevant information from the face, including social categories, personality traits, and emotion. While facial features may be directly tied to certain perceptions, emerging research suggests that top-down social cognitive factors (e.g., stereotypes, social-conceptual knowledge, prejudice) considerably influence and shape the perceptual process. The rapid integration of higher-order social cognitive processes into visual perception can give rise to systematic biases in face perception and may act as a mediating factor in intergroup behavioral and evaluative biases. Drawing on neuroimaging evidence, we review the ways that top-down social cognitive factors shape visual perception of facial features. This emerging work in social and affective neuroscience builds upon work on predictive coding and perceptual priors in cognitive neuroscience and visual cognition, suggesting domain-general mechanisms that underlie a social-visual interface through which social cognition affects visual perception.


Subjects
Cognition/physiology, Social Perception, Visual Perception/physiology, Amygdala/diagnostic imaging, Amygdala/physiology, Emotions, Face, Facial Expression, Humans, Neuroimaging/methods, Social Behavior
11.
Curr Opin Psychol ; 24: 83-91, 2018 12.
Article in English | MEDLINE | ID: mdl-30388494

ABSTRACT

An emerging focus on the geometry of representational structures is advancing a variety of areas in social perception, including social categorization, emotion perception, and trait impressions. Here, we review recent studies adopting a representational geometry approach, and argue that important advances in social perception can be gained by triangulating on the structure of representations via three levels of analysis: neuroimaging, behavioral measures, and computational modeling. Among other uses, this approach permits broad and comprehensive tests of how bottom-up facial features and visual processes as well as top-down social cognitive factors and conceptual processes shape perceptions of social categories, emotion, and personality traits. Although such work is only in its infancy, a focus on corroborating representational geometry across modalities is allowing researchers to use multiple levels of analysis to constrain theoretical models in social perception. This approach holds promise to further our understanding of the multiply determined nature of social perception and its neural basis.


Subjects
Cognition/physiology, Social Perception, Visual Perception/physiology, Brain/physiology, Emotions, Face, Facial Expression, Humans, Neuroimaging/methods, Social Behavior
12.
Nat Hum Behav ; 2(8): 581-591, 2018 08.
Article in English | MEDLINE | ID: mdl-31209318

ABSTRACT

Recent theoretical accounts argue that conceptual knowledge dynamically interacts with processing of facial cues, fundamentally influencing visual perception of social and emotion categories. Evidence is accumulating for the idea that a perceiver's conceptual knowledge about emotion is involved in emotion perception, even when stereotypic facial expressions are presented in isolation [1-4]. However, existing methods have not allowed a comprehensive assessment of the relationship between conceptual knowledge and emotion perception across individuals and emotion categories. Here we use a representational similarity analysis approach to show that conceptual knowledge predicts the representational structure of facial emotion perception. We conducted three studies using computer mouse-tracking [5] and reverse-correlation [6] paradigms. Overall, we found that when individuals believed two emotions to be conceptually more similar, faces from those categories were perceived with a corresponding similarity, even when controlling for any physical similarity in the stimuli themselves. When emotions were rated conceptually more similar, computer-mouse trajectories during emotion perception exhibited a greater simultaneous attraction to both category responses (despite only one emotion being depicted; studies 1 and 2), and reverse-correlated face prototypes exhibited a greater visual resemblance (study 3). Together, our findings suggest that differences in conceptual knowledge are reflected in the perceptual processing of facial emotion.
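The "simultaneous attraction" in the mouse-tracking studies is typically quantified with trajectory measures such as maximum deviation from a straight start-to-response path; a minimal sketch of that measure is below, using a made-up example trajectory (the specific metric and coordinates are illustrative assumptions, not the paper's analysis code).

```python
# Minimal sketch of a common mouse-tracking measure: maximum deviation
# (MD) of a cursor trajectory from the straight line between start and
# selected response. Larger MD = more attraction toward the other
# category. The trajectory here is a made-up example.
import numpy as np

def max_deviation(x, y):
    """Maximum perpendicular deviation from the start-to-end straight line."""
    start = np.array([x[0], y[0]])
    end = np.array([x[-1], y[-1]])
    line = end - start
    pts = np.column_stack([x, y]) - start
    # signed perpendicular distance of each point from the straight line
    cross = line[0] * pts[:, 1] - line[1] * pts[:, 0]
    dists = cross / np.linalg.norm(line)
    return np.max(np.abs(dists))

# Example: a trajectory that bows toward the competing response before settling
t = np.linspace(0, 1, 101)
x = t + 0.3 * np.sin(np.pi * t)   # horizontal pull toward the other option
y = t
print(f"maximum deviation: {max_deviation(x, y):.3f}")
```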

13.
Soc Cogn Affect Neurosci ; 12(2): 169-183, 2017 02 01.
Article in English | MEDLINE | ID: mdl-27539864

ABSTRACT

Recent behavioral and neuroimaging studies demonstrate that labeling one's emotional experiences and perceptions alters those states. Here, we used a comprehensive meta-analysis of the neuroimaging literature to systematically explore whether the presence of emotion words in experimental tasks has an impact on the neural representation of emotional experiences and perceptions across studies. Using a database of 386 studies, we assessed brain activity when emotion words (e.g. 'anger', 'disgust') and more general affect words (e.g. 'pleasant', 'unpleasant') were present in experimental tasks vs. not present. As predicted, when emotion words were present, we observed more frequent activations in regions related to semantic processing. When emotion words were not present, we observed more frequent activations in the amygdala and parahippocampal gyrus, bilaterally. The presence of affect words did not have the same effect on the neural representation of emotional experiences and perceptions, suggesting that our observed effects are specific to emotion words. These findings are consistent with the psychological constructionist prediction that, in the absence of accessible emotion concepts, the meaning of affective experiences and perceptions is ambiguous. Findings are also consistent with the regulatory role of 'affect labeling'. Implications of the role of language in emotion construction and regulation are discussed.


Subjects
Brain/physiology, Emotions/physiology, Language, Neuroimaging, Semantics, Speech Perception/physiology, Amygdala/physiology, Concept Formation/physiology, Cerebral Dominance/physiology, Humans, Parahippocampal Gyrus/physiology
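The kind of frequency comparison this meta-analysis reports can be illustrated with a simple contingency-table test of how often a region is reported active in tasks with versus without emotion words. The counts below are invented placeholders (only the total of 386 studies comes from the abstract), and the chi-square test stands in for whatever statistics the paper actually used.

```python
# Illustration of the kind of frequency comparison a coordinate-based
# meta-analysis might report: is a region (e.g., amygdala) reported
# active more often when emotion words are absent from the task?
# The counts below are invented placeholders, not the study's data.
from scipy.stats import chi2_contingency

#                       amygdala reported   not reported
counts = [[30, 150],  # emotion words present in task
          [60, 146]]  # emotion words absent from task

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```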