Results 1 - 18 of 18
1.
Sci Rep ; 13(1): 19323, 2023 11 07.
Article in English | MEDLINE | ID: mdl-37935828

ABSTRACT

Face ensemble coding is the perceptual ability to form a quick, overall impression of a group of faces, triggering social and behavioral motivations toward other people (approaching friendly people or avoiding an angry mob). Cultural differences in this ability have been reported, such that Easterners are better at face ensemble coding than Westerners are. The underlying mechanism has been attributed to differences in processing styles, with Easterners allocating attention globally and Westerners focusing on local parts. An open question, however, is how this default attention mode is influenced by salient information during ensemble perception. We created visual displays that resembled a real-world social setting in which one individual in a crowd of different faces drew the viewer's attention while the viewer judged the overall emotion of the crowd. In each trial, one face in the crowd was highlighted by a salient cue, capturing spatial attention before the participants viewed the entire group. American participants' judgments of group emotion weighed the attended individual face more strongly than Korean participants' did, suggesting a greater influence of local information on global perception. Our results show that different attentional modes between cultural groups modulate the social-emotional processing underlying people's perceptions and attributions.


Subject(s)
East Asian People, Judgment, Humans, United States, Facial Expression, Emotions, Anger
2.
Affect Sci ; 3(3): 539-545, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36385905

ABSTRACT

Meeting the demands of a social world is an incredibly complex task. Since humans are able to navigate the social world so effortlessly, our ability to both interpret and signal complex social and emotional information is arguably shaped by evolutionary pressures. Dunbar (1992) tested this assumption in his Social Brain Hypothesis, observing that different primates' neocortical volume predicted their average social network size, suggesting that neocortical evolution was driven at least in part by social demands. Here we examined the Social Face Hypothesis, based on the assumption that the face co-evolved with the brain to signal more complex and nuanced emotional, mental, and behavioral states to others. Despite prior observations suggestive of this conclusion (e.g., Redican, 1982), it has not, to our knowledge, been empirically tested. To do this, we obtained updated metrics of primate facial musculature, facial hair bareness, average social network size, and average brain weight data for a large number of primate genera (N = 63). In this sample, we replicated Dunbar's original observation by finding that average brain weight predicted average social network size. Critically, we also found that perceived facial hair bareness predicted both group size and average brain weight. Finally, we found that all three variables acted as mediators, confirming a complex, interdependent relationship between primate social network size, primate brain weight, and primate facial hair bareness. These findings are consistent with the conclusion that the primate brain and face co-evolved in response to meeting the increased social demands of one's environment. Supplementary Information: The online version contains supplementary material available at 10.1007/s42761-022-00116-7.
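The mediation result described above (brain weight, facial hair bareness, and group size each acting as mediators of the others' relationships) can be illustrated with a simple Baron-Kenny-style decomposition. This is a sketch on synthetic data: the variable names, seed, and effect sizes below are hypothetical stand-ins, not the paper's N = 63 primate dataset or its actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three genus-level variables (hypothetical data).
n = 63
brain = rng.normal(size=n)                                 # avg. brain weight (z-scored)
bareness = 0.6 * brain + rng.normal(scale=0.8, size=n)     # facial hair bareness
group = 0.5 * brain + 0.4 * bareness + rng.normal(scale=0.8, size=n)  # network size

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Baron-Kenny-style paths for brain -> bareness -> group size:
a = slope(bareness, brain)                 # path a: brain weight -> bareness
# path b: bareness -> group size, controlling for brain weight
X = np.column_stack([np.ones(n), brain, bareness])
b = np.linalg.lstsq(X, group, rcond=None)[0][2]
indirect = a * b                           # mediated (indirect) effect
```

A nonzero indirect effect `a * b` is what a mediation claim asserts; the published work tested all three orderings of the variables as mediators.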

3.
Front Psychol ; 13: 997498, 2022.
Article in English | MEDLINE | ID: mdl-36248585

ABSTRACT

Research in person and face perception has broadly focused on the group-level consensus that individuals hold when making judgments of others (e.g., "X type of face looks trustworthy"). However, a growing body of research demonstrates that individual variation is larger than shared, stimulus-level variation for many social trait judgments. Despite this insight, little research to date has focused on building and explaining individual models of face perception. Studies and methodologies that have examined individual models have been limited in the visualizations they can reliably produce, yielding either noisy, blurry images or computer-avatar representations. Methods that produce low-fidelity visual representations inhibit generalizability because the resulting stimuli are clearly computer manipulated and produced. In the present work, we introduce a novel paradigm to visualize individual models of face judgments by leveraging state-of-the-art computer vision methods. Our proposed method can produce a set of photorealistic face images that correspond to an individual's mental representation of a specific attribute across a variety of attribute intensities. We provide a proof-of-concept study which examines perceived trustworthiness/untrustworthiness and masculinity/femininity. We close with a discussion of future work to substantiate our proposed method.

4.
Affect Sci ; 3(1): 46-61, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36046095

ABSTRACT

Machine learning findings suggest Eurocentric (aka White/European) faces structurally resemble anger more than Afrocentric (aka Black/African) faces (e.g., Albohn, 2020; Zebrowitz et al., 2010); however, Afrocentric faces are typically associated with anger more so than Eurocentric faces (e.g., Hugenberg & Bodenhausen, 2003, 2004). Here, we further examine counter-stereotypic associations between Eurocentric faces and anger, and Afrocentric faces and fear. In Study 1, using a computer vision algorithm, we demonstrate that neutral European American faces structurally resemble anger more and fear less than do African American faces. In Study 2, we then found that anger- and fear-resembling facial appearance influences perceived racial prototypicality in this same counter-stereotypic manner. In Study 3, we likewise found that imagined European American versus African American faces were rated counter-stereotypically (i.e., more like anger than fear) on key emotion-related facial characteristics (i.e., size of eyes, size of mouth, overall angularity of features). Finally, in Study 4, we again found counter-stereotypic differences, this time in processing fluency, such that angry Eurocentric versus Afrocentric faces and fearful Afrocentric versus Eurocentric faces were categorized more accurately and quickly. Only in Study 5, using race-ambiguous interior facial cues coupled with Afrocentric versus Eurocentric hairstyles and skin tone, did we find the stereotypical effects commonly reported in the literature. These findings are consistent with the conclusion that the "angry Black" association in face perception is socially constructed in that structural cues considered prototypical of African American appearance conflict with common race-emotion stereotypes.

5.
Atten Percept Psychophys ; 84(7): 2271-2280, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36045309

ABSTRACT

Decades of research show that contextual information from the body, visual scene, and voices can facilitate judgments of facial expressions of emotion. To date, most research suggests that bodily expressions of emotion offer context for interpreting facial expressions, but not vice versa. The present research aimed to investigate the conditions under which mutual processing of facial and bodily displays of emotion facilitates and/or interferes with emotion recognition. In the current two studies, we examined whether body and face emotion recognition are enhanced through integration of shared emotion cues, and/or hindered through mixed signals (i.e., interference). We tested whether faces and bodies facilitate or interfere with emotion processing by pairing briefly presented (33 ms), backward-masked presentations of faces with supraliminally presented bodies (Experiment 1) and vice versa (Experiment 2). Both studies revealed strong support for integration effects, but not interference. Integration effects are most pronounced for low-emotional-clarity facial and bodily expressions, suggesting that when more information is needed in one channel, the other channel is recruited to disentangle any ambiguity. That this occurs for briefly presented, backward-masked presentations reveals low-level visual integration of shared emotional signal value.


Subject(s)
Emotions, Facial Recognition, Cues, Facial Expression, Humans, Photic Stimulation
6.
Cogn Emot ; 36(4): 741-749, 2022 06.
Article in English | MEDLINE | ID: mdl-35175173

ABSTRACT

Social exclusion influences how expressions are perceived and the tendency of the perceiver to mimic them. However, less is known about social exclusion's effect on one's own facial expressions. The aim of the present study was to identify the effects of social exclusion on Duchenne smiling behaviour, defined as activity of both the zygomaticus major and the orbicularis oculi muscles. Utilising a within-subjects design, participants took part in the Cyberball Task in which they were both included and excluded while facial electromyography was measured. We found that during the active experience of social exclusion, participants showed greater orbicularis oculi activation when compared to the social inclusion condition. Further, we found that across both conditions, participants showed greater zygomaticus major muscle activation the longer they engaged in the Cyberball Task. Order of condition also mattered, with those who experienced social exclusion before social inclusion showing the greatest overall muscle activation. These results are consistent with an affiliative function of smiling, particularly as social exclusion engaged activation of muscles associated with a Duchenne smile.


Subject(s)
Facial Muscles, Smiling, Electromyography, Facial Expression, Facial Muscles/physiology, Humans, Social Isolation
7.
Front Psychol ; 12: 612923, 2021.
Article in English | MEDLINE | ID: mdl-33716875

ABSTRACT

Previous research has demonstrated how emotion-resembling cues in the face help shape impression formation (i.e., emotion overgeneralization). Perhaps most notable in the literature to date has been work suggesting that gender-related appearance cues are visually confounded with certain stereotypic expressive cues (see Adams et al., 2015 for review). Only a couple of studies to date have used computer vision to directly map out and test facial structural resemblance to emotion expressions using facial landmark coordinates to estimate face shape. In one study using a Bayesian network classifier trained to detect emotional expressions, structural resemblance to a specific expression on a non-expressive (i.e., neutral) face was found to influence trait impressions of others (Said et al., 2009). In another study, a connectionist model trained to detect emotional expressions found different emotion-resembling cues in male vs. female faces (Zebrowitz et al., 2010). Despite this seminal work, direct evidence confirming the theoretical assertion that humans likewise utilize these emotion-resembling cues when forming impressions has been lacking. Across four studies, we replicate and extend these prior findings using new advances in computer vision to examine gender-related, emotion-resembling structure, color, and texture (as well as their weighted combination) and their impact on gender-stereotypic impression formation. We show that all three (plus their combination) are meaningfully related to human impressions of emotionally neutral faces. Further, when applying the computer vision algorithms to experimentally manipulate faces, we show that humans derive similar impressions from them as did the computer.

8.
Front Psychol ; 11: 264, 2020.
Article in English | MEDLINE | ID: mdl-32180750

ABSTRACT

The evolution of the human brain and visual system is widely believed to have been shaped by the need to process and make sense out of expressive information, particularly via the face. We are so attuned to expressive information in the face that it informs even stable trait inferences (e.g., Knutson, 1996) through a process we refer to here as the face-specific fundamental attribution error (Albohn et al., 2019). We even derive highly consistent beliefs about the emotional lives of others based on emotion-resembling facial appearance (e.g., low versus high brows, big versus small eyes, etc.) in faces we know are completely devoid of overt expression (i.e., emotion overgeneralization effect: see Zebrowitz et al., 2010). The present studies extend these insights to better understand lay beliefs about older and younger adults' emotion dispositions and their impact on behavioral outcomes. In Study 1, we found that older versus younger faces objectively have more negative emotion-resembling cues in the face (using computer vision), and that raters likewise attribute more negative emotional dispositions to older versus younger adults based just on neutral facial appearance (see too Adams et al., 2016). In Study 2, we found that people appear to encode these negative emotional appearance cues in memory more so for older than younger adult faces. Finally, in Study 3 we examine downstream behavioral consequences of these negative attributions, showing that observers' avoidance of older versus younger faces is mediated by emotion-resembling facial appearance.

9.
Emotion ; 20(7): 1244-1254, 2020 Oct.
Article in English | MEDLINE | ID: mdl-31259586

ABSTRACT

Individuals use naïve emotion theories, including stereotypical information on the emotional disposition of an interaction partner, to form social impressions. In view of an aging population in Western societies, beliefs on emotion and age become more and more relevant. Across 10 studies, we thus present findings on how individuals associate specific affective states with young and old adults using the emotion implicit association test. The results of the studies are summarized in 2 separate mini meta-analyses. Participants implicitly associated young adult individuals with positive emotions, that is, happiness and serenity, respectively, and old adult individuals with negative emotions, that is, sadness and anger, respectively (Mini Meta-Analysis 1). Within negative emotions, participants preferentially associated young adult individuals with sadness and old adult individuals with anger (Mini Meta-Analysis 2). Even though young and old adults are stereotypically associated with specific emotions, contextual factors influence which age-emotion stereotype is salient in a given context. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Emotions/physiology, Adolescent, Adult, Age Factors, Female, Humans, Male, Stereotyping, Young Adult
10.
Prog Brain Res ; 247: 71-87, 2019.
Article in English | MEDLINE | ID: mdl-31196444

ABSTRACT

Recently, speed of presentation of facially expressive stimuli was found to influence the processing of compound threat cues (e.g., anger/fear/gaze). For instance, greater amygdala responses were found to clear (e.g., direct-gaze anger/averted-gaze fear) versus ambiguous (averted-gaze anger/direct-gaze fear) combinations of threat cues when rapidly presented (33 and 300 ms), but greater to ambiguous versus clear threat cues when presented for more sustained durations (1, 1.5, and 2 s). A working hypothesis was put forth (Adams et al., 2012) that these effects were due to differential contributions of the magnocellular versus parvocellular pathways to the rapid versus sustained processing of threat, respectively. To test this possibility directly here, we restricted visual stream processing in the fMRI environment using facially expressive stimuli specifically designed to bias visual input exclusively to the magnocellular versus parvocellular pathways. We found that for magnocellular-biased stimuli, activations were predominantly greater to clear versus ambiguous threat-gaze pairs (on par with that previously found for rapid presentations of threat cues), whereas activations to ambiguous versus clear threat-gaze pairs were greater for parvocellular-biased stimuli (on par with that previously found for sustained presentations). We couch these findings in an adaptive dual-process account of threat perception and highlight implications for other dual-process models within psychology.


Subject(s)
Brain/physiology, Facial Expression, Fear/psychology, Adult, Amygdala/physiology, Cues, Female, Humans, Magnetic Resonance Imaging, Male, Nerve Net/physiology, Photic Stimulation/methods
11.
Exp Brain Res ; 237(4): 967-975, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30683957

ABSTRACT

Facial emotion is an important cue for deciding whether an individual is potentially helpful or harmful. However, facial expressions are inherently ambiguous and observers typically employ other cues to categorize emotion expressed on the face, such as race, sex, and context. Here, we explored the effect of increasing or reducing different types of uncertainty associated with a facial expression that is to be categorized. On each trial, observers responded according to the emotion and location of a peripherally presented face stimulus and were provided with either: (1) no information about the upcoming face; (2) its location; (3) its expressed emotion; or (4) both its location and emotion. While cueing emotion or location resulted in faster response times than cueing unpredictive information, cueing face emotion alone resulted in faster responses than cueing face location alone. Moreover, cueing both stimulus location and emotion resulted in a superadditive reduction of response times compared with cueing location or emotion alone, suggesting that feature-based attention to emotion and spatially selective attention interact to facilitate perception of face stimuli. While categorization of facial expressions was significantly affected by stable identity cues (sex and race) in the face, we found that these interactions were eliminated when uncertainty about facial expression, but not spatial uncertainty about stimulus location, was reduced by predictive cueing. This demonstrates that feature-based attention to facial expression greatly attenuates the need to rely on stable identity cues to interpret facial emotion.


Subject(s)
Attention/physiology, Emotions/physiology, Facial Expression, Facial Recognition/physiology, Social Perception, Space Perception/physiology, Adolescent, Adult, Female, Humans, Male, Young Adult
12.
Front Psychol ; 9: 1509, 2018.
Article in English | MEDLINE | ID: mdl-30197614

ABSTRACT

The present study examined how emotional fit with culture - the degree of similarity between an individual's emotional response and the emotional response of others from the same culture - relates to well-being in a sample of Asian American and European American college students. Using a profile correlation method, we calculated three types of emotional fit based on self-reported emotions, facial expressions, and physiological responses. We then examined the relationships between emotional fit and individual well-being (depression, life satisfaction) as well as collective aspects of well-being, namely collective self-esteem (one's evaluation of one's cultural group) and identification with one's group. The results revealed that self-report emotional fit was associated with greater individual well-being across cultures. In contrast, culture moderated the relationship between self-report emotional fit and collective self-esteem, such that emotional fit predicted greater collective self-esteem in Asian Americans, but not in European Americans. Behavioral emotional fit was unrelated to well-being. There was a marginally significant cultural moderation in the relationship between physiological emotional fit in a strong emotional situation and group identification. Specifically, physiological emotional fit predicted greater group identification in Asian Americans, but not in European Americans. However, this finding disappeared after a Bonferroni correction. The current finding extends previous research by showing that, while emotional fit may be closely related to individual aspects of well-being across cultures, the influence of emotional fit on collective aspects of well-being may be unique to cultures that emphasize interdependence and social harmony, and thus alignment with other members of the group.
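The profile correlation method used above amounts to correlating one person's profile of emotion ratings with the average profile of same-culture peers. A minimal sketch, with a hypothetical function name and toy ratings (not the study's data or exact scoring procedure):

```python
import numpy as np

def emotional_fit(individual, group_profiles):
    """Profile-correlation index of emotional fit (illustrative sketch).

    individual: 1-D array of one person's ratings across emotion items.
    group_profiles: 2-D array (n_people x n_items) of same-culture peers' ratings.
    Returns the Pearson correlation between the individual's rating profile
    and the group's average profile.
    """
    group_mean = group_profiles.mean(axis=0)
    # np.corrcoef returns a 2x2 correlation matrix; take the off-diagonal entry.
    return float(np.corrcoef(individual, group_mean)[0, 1])

# Toy example: three emotion items rated by three cultural peers.
peers = np.array([
    [4.0, 1.0, 3.0],
    [5.0, 2.0, 3.0],
    [4.5, 1.5, 3.5],
])
person = np.array([4.2, 1.2, 3.1])
fit = emotional_fit(person, peers)  # close to 1.0: profile tracks the group's
```

Higher values indicate that the person's pattern of emotional responding tracks the cultural average, which is the quantity the study related to well-being.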

13.
Sci Rep ; 8(1): 2776, 2018 02 09.
Article in English | MEDLINE | ID: mdl-29426826

ABSTRACT

Fearful faces convey threat cues whose meaning is contextualized by eye gaze: While averted gaze is congruent with facial fear (both signal avoidance), direct gaze (an approach signal) is incongruent with it. We have previously shown using fMRI that the amygdala is engaged more strongly by fear with averted gaze during brief exposures. However, the amygdala also responds more to fear with direct gaze during longer exposures. Here we examined previously unexplored brain oscillatory responses to characterize the neurodynamics and connectivity during brief (~250 ms) and longer (~883 ms) exposures of fearful faces with direct or averted eye gaze. We performed two experiments: one replicating the exposure time by gaze direction interaction in fMRI (N = 23), and another where we confirmed greater early phase locking to averted-gaze fear (congruent threat signal) with MEG (N = 60) in a network of face processing regions, regardless of exposure duration. Phase locking to direct-gaze fear (incongruent threat signal) then increased significantly for brief exposures at ~350 ms, and at ~700 ms for longer exposures. Our results characterize the stages of congruent and incongruent facial threat signal processing and show that stimulus exposure strongly affects the onset and duration of these stages.


Subject(s)
Amygdala/physiology, Facial Expression, Facial Recognition, Fear/psychology, Fixation, Ocular, Time Factors, Adolescent, Cues, Female, Humans, Magnetic Resonance Imaging/methods, Male, Photic Stimulation, Young Adult
14.
Cult Brain ; 5(2): 125-152, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29230379

ABSTRACT

In many social situations, we make a snap judgment about crowds of people by relying on their overall mood (termed "crowd emotion"). Although reading crowd emotion is critical for interpersonal dynamics, the sociocultural aspects of this process have not been explored. The current study examined how culture modulates the processing of crowd emotion in Korean and American observers. Korean and American (non-East Asian) participants were briefly presented with two groups of faces that varied individually in emotional expression and were asked to choose which of the two groups they would rather avoid. We found that Korean participants were more accurate than American participants overall, in line with the framework of holistic versus analytic processing in East Asians versus Westerners. Moreover, we found a speed advantage for other-race crowds in both cultural groups. Finally, we found different hemispheric lateralization patterns: American participants were more accurate at perceiving the facial crowd to be avoided when it was presented in the left visual field than the right visual field, indicating a right-hemisphere advantage for processing crowd emotion of both European American and Korean facial crowds. However, Korean participants showed weak or nonexistent laterality effects, with a slight right-hemisphere advantage for European American facial crowds and no advantage in perceiving Korean facial crowds. Instead, Korean participants showed a positive emotion bias for own-race faces. This work suggests that culture plays a role in modulating our perception of crowd emotion in groups of faces and our responses to them.

15.
Nat Hum Behav ; 1: 828-842, 2017.
Article in English | MEDLINE | ID: mdl-29226255

ABSTRACT

In crowds, where scrutinizing individual facial expressions is inefficient, humans can make snap judgments about the prevailing mood by reading "crowd emotion". We investigated how the brain accomplishes this feat in a set of behavioral and fMRI studies. Participants were asked to either avoid or approach one of two crowds of faces presented in the left and right visual hemifields. Perception of crowd emotion was improved when crowd stimuli contained goal-congruent cues and was highly lateralized to the right hemisphere. The dorsal visual stream was preferentially activated in crowd emotion processing, with activity in the intraparietal sulcus and superior frontal gyrus predicting perceptual accuracy for crowd emotion perception, whereas activity in the fusiform cortex in the ventral stream predicted better perception of individual facial expressions. Our findings thus reveal significant behavioral differences and differential involvement of the hemispheres and the major visual streams in reading crowd versus individual face expressions.

16.
Curr Dir Psychol Sci ; 26(3): 243-248, 2017 Jun.
Article in English | MEDLINE | ID: mdl-29606807

ABSTRACT

A social-functional approach to face processing comes with a number of assumptions. First, given that humans possess limited cognitive resources, it assumes that we naturally allocate attention to processing and integrating the most adaptively relevant social cues. Second, from these cues, we make behavioral forecasts about others in order to respond in an efficient and adaptive manner. This assumption aligns with broader ecological accounts of vision that highlight a direct action-perception link, even for nonsocial vision. Third, humans are naturally predisposed to process faces in this functionally adaptive manner. This latter contention is implied by our attraction to dynamic aspects of the face, including looking behavior and facial expressions, from which we tend to overgeneralize inferences, even when forming impressions of stable traits. The functional approach helps to address how and why observers are able to integrate functionally related compound social cues in a manner that is ecologically relevant and thus adaptive.

17.
Perspect Psychol Sci ; 11(6): 917-928, 2016 11.
Article in English | MEDLINE | ID: mdl-27784749

ABSTRACT

According to the facial feedback hypothesis, people's affective responses can be influenced by their own facial expression (e.g., smiling, pouting), even when their expression did not result from their emotional experiences. For example, Strack, Martin, and Stepper (1988) instructed participants to rate the funniness of cartoons using a pen that they held in their mouth. In line with the facial feedback hypothesis, when participants held the pen with their teeth (inducing a "smile"), they rated the cartoons as funnier than when they held the pen with their lips (inducing a "pout"). This seminal study of the facial feedback hypothesis has not been replicated directly. This Registered Replication Report describes the results of 17 independent direct replications of Study 1 from Strack et al. (1988), all of which followed the same vetted protocol. A meta-analysis of these studies examined the difference in funniness ratings between the "smile" and "pout" conditions. The original Strack et al. (1988) study reported a rating difference of 0.82 units on a 10-point Likert scale. Our meta-analysis revealed a rating difference of 0.03 units with a 95% confidence interval ranging from -0.11 to 0.16.


Subject(s)
Affect, Facial Expression, Feedback, Psychological, Models, Psychological, Humans, Mouth
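The pooled estimate in the replication report above (a smile-minus-pout difference of 0.03 units, 95% CI [-0.11, 0.16]) is the kind of quantity an inverse-variance meta-analysis produces. A minimal fixed-effect sketch with made-up per-lab numbers — not the actual RRR data, and the published report followed its own vetted analysis plan:

```python
import math

def fixed_effect_meta(diffs, ses):
    """Inverse-variance (fixed-effect) pooled mean difference with a 95% CI.

    diffs: per-study "smile" minus "pout" funniness-rating differences.
    ses: per-study standard errors of those differences.
    Each study is weighted by 1/SE^2, so precise studies count for more.
    """
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical differences from three labs (illustration only).
est, (lo, hi) = fixed_effect_meta([0.05, -0.02, 0.10], [0.10, 0.12, 0.15])
```

A confidence interval that straddles zero, as in the replication report, indicates no reliable facial feedback effect at the meta-analytic level.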
18.
Front Psychol ; 7: 986, 2016.
Article in English | MEDLINE | ID: mdl-27445944

ABSTRACT

It might seem a reasonable assumption that when we are not actively using our faces to express ourselves (i.e., when we display nonexpressive, or neutral faces), those around us will not be able to read our emotions. Herein, using a variety of expression-related ratings, we examined whether age-related changes in the face can accurately reveal one's innermost affective dispositions. In each study, we found that expressive ratings of neutral facial displays predicted self-reported positive/negative dispositional affect, but only for elderly women, and only for positive affect. These findings meaningfully replicate and extend earlier work examining age-related emotion cues in the face of elderly women (Malatesta et al., 1987a). We discuss these findings in light of evidence that women are expected to, and do, smile more than men, and that the quality of their smiles predicts their life satisfaction. Although ratings of old male faces did not significantly predict self-reported affective dispositions, the trend was similar to that found for old female faces. A plausible explanation for this gender difference is that in the process of attenuating emotional expressions over their lifetimes, old men reveal less evidence of their total emotional experiences in their faces than do old women.
