Results 1 - 20 of 102
1.
Sci Rep ; 14(1): 10673, 2024 05 09.
Article in English | MEDLINE | ID: mdl-38724676

ABSTRACT

U.S. immigration discourse has spurred interest in characterizing who illegalized immigrants are or are perceived to be. What are the associated visual representations of migrant illegality? Across two studies with undergraduate and online samples (N = 686), we used face-based reverse correlation and similarity sorting to capture and compare mental representations of illegalized immigrants, native-born U.S. citizens, and documented immigrants. Documentation statuses evoked racialized imagery. Immigrant representations were dark-skinned and perceived as non-white, while citizen representations were light-skinned, evaluated positively, and perceived as white. Legality further differentiated immigrant representations: documentation conjured trustworthy representations, illegality conjured threatening representations. Participants spontaneously sorted unlabeled faces by documentation status in a spatial arrangement task. Faces' spatial similarity correlated with their similarity in pixel luminance and "American" ratings, confirming racialized distinctions. Representations of illegalized immigrants were uniquely racialized as dark-skinned un-American threats, reflecting how U.S. imperialism and colorism set conditions of possibility for existing representations of migrant illegalization.


Subject(s)
Racism , Humans , Male , Female , Adult , Racism/psychology , United States , Young Adult , Emigrants and Immigrants/psychology , Emigration and Immigration , Adolescent , Documentation , Face
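The face-based reverse correlation used above can be sketched in a toy form: average the noise patterns of chosen stimuli and subtract the average of rejected ones to recover a classification image. This is an illustrative simulation, not the study's pipeline; the 8x8 patterns, the simulated template-matching observer, and the function name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_image(noise_patterns, choices):
    """Two-alternative reverse correlation: mean of chosen noise
    patterns minus mean of rejected ones."""
    noise_patterns = np.asarray(noise_patterns, dtype=float)
    choices = np.asarray(choices, dtype=bool)
    return noise_patterns[choices].mean(axis=0) - noise_patterns[~choices].mean(axis=0)

# Toy demo: a hidden "template" drives simulated choices.
template = rng.normal(size=(8, 8))
noise = rng.normal(size=(2000, 8, 8))
# The simulated observer "chooses" patterns that correlate positively with the template.
choices = (noise * template).sum(axis=(1, 2)) > 0

ci = classification_image(noise, choices)
# The recovered classification image should resemble the hidden template.
r = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
```

With enough trials, `r` approaches 1, which is the logic that lets raters' binary choices over noisy faces reveal a shared mental template.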
2.
J Pain ; 24(11): 2040-2051, 2023 11.
Article in English | MEDLINE | ID: mdl-37356606

ABSTRACT

Social context has been shown to influence pain perception. This study aimed to broaden this literature by investigating whether relevant social stimuli, such as faces with different levels of intrinsic (based on physical resemblance to known individuals) and episodic (acquired through a previous experience) familiarity, may lead to hypoalgesia. We hypothesized that familiarity, whether intrinsic or acquired through experience, would increase pain threshold and decrease pain intensity. Sixty-seven participants underwent pain induction (the cold pressor test) while viewing previously seen faces (Episodic Group) or new faces (Non-episodic Group) that differed in the level of intrinsic familiarity (high vs low). Pain threshold was measured in seconds, while pain intensity was measured on a rating scale of 0 to 10. The results did not show an effect of episodic familiarity. However, faces high (vs. low) in intrinsic familiarity attenuated pain intensity, even after controlling for pain expectation. These results suggest that physical features conveying a higher feeling of familiarity induce a top-down hypoalgesic modulation, in line with the idea that familiarity may signal safety and that the presence of familiar others reduces perceived threat-related distress. This study provides further evidence on the social modulation of pain and contributes to the literature on first impressions' influence on social behavior. PERSPECTIVE: Consistent with the idea that familiar others signal safety and reduce the sense of threat, facial features conveying familiarity induce a top-down hypoalgesic modulation. This knowledge may contribute to understanding differences in pain perception in experimental and clinical contexts.


Subject(s)
Pain , Recognition, Psychology , Humans , Pain Perception
3.
Perception ; 52(8): 590-607, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37321648

ABSTRACT

Trustworthy-looking faces are also perceived as more attractive, but are there other meaningful cues that contribute to perceived trustworthiness? Using data-driven models, we identify these cues after removing attractiveness cues. In Experiment 1, we show that both judgments of trustworthiness and attractiveness of faces manipulated by a model of perceived trustworthiness change in the same direction. To control for the effect of attractiveness, we build two new models of perceived trustworthiness: a subtraction model, which forces the perceived attractiveness and trustworthiness to be negatively correlated (Experiment 2), and an orthogonal model, which reduces their correlation (Experiment 3). In both experiments, faces manipulated to appear more trustworthy were indeed perceived to be more trustworthy, but not more attractive. Importantly, in both experiments, these faces were also perceived as more approachable and with more positive expressions, as indicated by both judgments and machine learning algorithms. The current studies show that the visual cues used for trustworthiness and attractiveness judgments can be separated, and that apparent approachability and facial emotion are driving trustworthiness judgments and possibly general valence evaluation.


Subject(s)
Social Perception , Trust , Humans , Trust/psychology , Judgment , Emotions , Effect Modifier, Epidemiologic , Facial Expression
4.
Cognition ; 237: 105452, 2023 08.
Article in English | MEDLINE | ID: mdl-37054490

ABSTRACT

When we look at someone's face, we rapidly and automatically form robust impressions of how trustworthy they appear. Yet while people's impressions of trustworthiness show a high degree of reliability and agreement with one another, evidence for the accuracy of these impressions is weak. How do such appearance-based biases survive in the face of weak evidence? We explored this question using an iterated learning paradigm, in which memories relating (perceived) facial and behavioral trustworthiness were passed through many generations of participants. Stimuli consisted of pairs of computer-generated people's faces and exact dollar amounts that those fictional people shared with partners in a trust game. Importantly, the faces were designed to vary considerably along a dimension of perceived facial trustworthiness. Each participant learned (and then reproduced from memory) some mapping between the faces and the dollar amounts shared (i.e., between perceived facial and behavioral trustworthiness). Much like in the game of 'telephone', their reproductions then became the training stimuli initially presented to the next participant, and so on for each transmission chain. Critically, the first participant in each chain observed some mapping between perceived facial and behavioral trustworthiness, including positive linear, negative linear, nonlinear, and completely random relationships. Strikingly, participants' reproductions of these relationships showed a pattern of convergence in which more trustworthy looks were associated with more trustworthy behavior - even when there was no relationship between looks and behavior at the start of the chain. These results demonstrate the power of facial stereotypes, and the ease with which they can be propagated to others, even in the absence of any reliable origin of these stereotypes.


Subject(s)
Facial Expression , Trust , Humans , Reproducibility of Results , Learning , Conditioning, Operant
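The transmission-chain convergence reported above can be illustrated with a minimal simulation: each "generation" reproduces the observed face-to-amount mapping as a blend of what it saw and a prior that trustworthy-looking faces share more. The blending weight, noise level, and linear prior are illustrative assumptions, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmission_chain(y0, looks, prior_weight=0.3, noise_sd=0.1,
                       n_generations=30, rng=rng):
    """Iterated-learning sketch: each generation's reproduction is a
    blend of the observed amounts and a prior proportional to looks,
    plus reproduction noise."""
    prior = looks  # prior expectation: amount shared tracks facial trustworthiness
    y = np.array(y0, dtype=float)
    for _ in range(n_generations):
        y = (1 - prior_weight) * y + prior_weight * prior
        y = y + rng.normal(scale=noise_sd, size=y.shape)
    return y

looks = np.linspace(-1.0, 1.0, 8)     # perceived facial trustworthiness of 8 faces
y_random = rng.permutation(looks)     # chain starts with no looks-behavior relation
y_final = transmission_chain(y_random, looks)
r_start = np.corrcoef(looks, y_random)[0, 1]  # whatever relation the shuffle produced
r_end = np.corrcoef(looks, y_final)[0, 1]
```

Regardless of the starting mapping, the geometric decay of the initial data leaves the prior dominant, so `r_end` ends up strongly positive, mirroring the reported convergence toward "trustworthy looks, trustworthy behavior."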
5.
Br J Psychol ; 114(2): 511-514, 2023 May.
Article in English | MEDLINE | ID: mdl-36504382

ABSTRACT

In their comprehensive review of research on impressions from faces, Sutherland and Young (this issue) highlight both the remarkable progress and the many challenges facing the field. We focus on two of the challenges: the need for generative, powerful models of impressions and the idiosyncratic nature of complex impressions.


Subject(s)
Face , Facial Recognition , Humans
6.
Front Psychol ; 13: 997498, 2022.
Article in English | MEDLINE | ID: mdl-36248585

ABSTRACT

Research in person and face perception has broadly focused on group-level consensus that individuals hold when making judgments of others (e.g., "X type of face looks trustworthy"). However, a growing body of research demonstrates that individual variation is larger than shared, stimulus-level variation for many social trait judgments. Despite this insight, little research to date has focused on building and explaining individual models of face perception. Studies and methodologies that have examined individual models are limited in the visualizations they can reliably produce, yielding either noisy, blurry images or computer-avatar representations. Methods that produce low-fidelity visual representations inhibit generalizability by being clearly computer manipulated and produced. In the present work, we introduce a novel paradigm to visualize individual models of face judgments by leveraging state-of-the-art computer vision methods. Our proposed method can produce a set of photorealistic face images that correspond to an individual's mental representation of a specific attribute across a variety of attribute intensities. We provide a proof-of-concept study which examines perceived trustworthiness/untrustworthiness and masculinity/femininity. We close with a discussion of future work to substantiate our proposed method.

7.
Mol Psychiatry ; 27(8): 3501-3509, 2022 08.
Article in English | MEDLINE | ID: mdl-35672377

ABSTRACT

People instantaneously evaluate faces with significant agreement on evaluations of social traits. However, the neural basis for such rapid spontaneous face evaluation remains largely unknown. Here, we recorded from 490 neurons in the human amygdala and hippocampus and found that the neuronal activity was associated with the geometry of a social trait space. We further investigated the temporal evolution and modulation of the social trait representation, and we employed encoding and decoding models to reveal the critical social traits for the trait space. We also recorded from another 938 neurons and replicated our findings using different social traits. Together, our results suggest that there exists a neuronal population code for a comprehensive social trait space in the human amygdala and hippocampus that underlies spontaneous first impressions. Changes in such neuronal social trait space may have implications for the abnormal processing of social information observed in some neurological and psychiatric disorders.


Subject(s)
Amygdala , Hippocampus , Humans , Amygdala/physiology , Hippocampus/physiology , Neurons/physiology , Sociological Factors
8.
Proc Natl Acad Sci U S A ; 119(17): e2115228119, 2022 04 26.
Article in English | MEDLINE | ID: mdl-35446619

ABSTRACT

The diversity of human faces and the contexts in which they appear gives rise to an expansive stimulus space over which people infer psychological traits (e.g., trustworthiness or alertness) and other attributes (e.g., age or adiposity). Machine learning methods, in particular deep neural networks, provide expressive feature representations of face stimuli, but the correspondence between these representations and various human attribute inferences is difficult to determine because the former are high-dimensional vectors produced via black-box optimization algorithms. Here we combine deep generative image models with over 1 million judgments to model inferences of more than 30 attributes over a comprehensive latent face space. The predictive accuracy of our model approaches human interrater reliability, which simulations suggest would not have been possible with fewer faces, fewer judgments, or lower-dimensional feature representations. Our model can be used to predict and manipulate inferences with respect to arbitrary face photographs or to generate synthetic photorealistic face stimuli that evoke impressions tuned along the modeled attributes.


Subject(s)
Facial Expression , Judgment , Attitude , Face , Humans , Social Perception , Trust
9.
eNeuro ; 9(1)2022.
Article in English | MEDLINE | ID: mdl-34933946

ABSTRACT

The human amygdala and hippocampus are critically involved in various processes in face perception. However, it remains unclear how task demands or evaluative contexts modulate processes underlying face perception. In this study, we employed two task instructions when participants viewed the same faces and recorded single-neuron activity from the human amygdala and hippocampus. We comprehensively analyzed task modulation for three key aspects of face processing and we found that neurons in the amygdala and hippocampus (1) encoded high-level social traits such as perceived facial trustworthiness and dominance and this response was modulated by task instructions; (2) encoded low-level facial features and demonstrated region-based feature coding, which was not modulated by task instructions; and (3) encoded fixations on salient face parts such as the eyes and mouth, which was not modulated by task instructions. Together, our results provide a comprehensive survey of task modulation of neural processes underlying face perception at the single-neuron level in the human amygdala and hippocampus.


Subject(s)
Amygdala , Facial Recognition , Facial Expression , Hippocampus , Humans , Magnetic Resonance Imaging , Neurons
10.
Cognition ; 211: 104638, 2021 06.
Article in English | MEDLINE | ID: mdl-33740538

ABSTRACT

Perceptual conscious experiences result from non-conscious processes that precede them. We document a new characteristic of the cognitive system: the speed with which visual meaningful stimuli are prioritized to consciousness over competing noise in visual masking paradigms. In ten experiments (N = 399) we find that an individual's non-conscious visual prioritization speed (NVPS) is ubiquitous across a wide variety of stimuli, and generalizes across visual masks, suppression tasks, and time. We also find that variation in NVPS is unique, in that it cannot be explained by variation in general speed, perceptual decision thresholds, short-term visual memory, or three networks of attention (alerting, orienting and executive). Finally, we find that NVPS is correlated with subjective measures of sensitivity, as they are measured by the Highly Sensitive Person scale. We conclude by discussing the implications of variance in NVPS for understanding individual variance in behavior and the neural substrates of consciousness.


Subject(s)
Individuality , Visual Perception , Attention , Consciousness , Humans , Memory, Short-Term , Perceptual Masking
11.
Psychol Res ; 85(4): 1706-1712, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32266544

ABSTRACT

Trait inferences based solely on facial appearance affect many social decisions. Here we tested whether the effects of such inferences extend to the perception of physical sensations. In an actual clinical setting, we show that healthcare providers' facial appearance is a strong predictor of pain experienced by patients during a medical procedure. The effect was specific to familiarity: facial features of healthcare providers that convey feelings of familiarity were associated with a decrease in patients' perception of pain. In addition, caring appearance of the healthcare providers was significantly related to patients' satisfaction with the care they received. Besides indicating that rapid, unreflective trait inferences from facial appearance may affect important healthcare outcomes, these findings contribute to the understanding of the mechanisms underlying social modulation of pain perception.


Subject(s)
Attitude of Health Personnel , Facial Expression , Pain/psychology , Professional-Patient Relations , Adult , Emotions , Humans , Male , Personnel, Hospital/psychology
12.
J Exp Psychol Hum Percept Perform ; 46(11): 1328-1343, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32757588

ABSTRACT

Women prefer male faces with feminine shape and masculine reflectance. Here, we investigated the conceptual correlates of this preference, showing that it might reflect women's preferences for feminine (vs. masculine) personality in a partner. Young heterosexual women reported their preferences for personality traits in a partner and rated male faces (manipulated on masculinity/femininity) on stereotypically masculine (e.g., dominance) and feminine traits (e.g., warmth). Masculine shape and reflectance increased perceptions of masculine traits but had different effects on perceptions of feminine traits and attractiveness. While masculine shape decreased perceptions of both attractiveness and feminine traits, masculine reflectance increased perceptions of attractiveness and, to a weaker extent, perceptions of feminine traits. These findings are consistent with the idea that sex-dimorphic characteristics elicit personality trait judgments, which might in turn affect attractiveness. Importantly, participants found faces attractive to the extent that these faces elicited their preferred personality traits, regardless of gender typicality of the traits. In sum, women's preferences for male faces are associated with their preferences for personality traits.


Subject(s)
Choice Behavior/physiology , Facial Recognition/physiology , Femininity , Masculinity , Personality/physiology , Sexual Behavior/physiology , Social Perception , Adult , Female , Humans , Young Adult
13.
Acta Psychol (Amst) ; 203: 103011, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31981825

ABSTRACT

People's ability to learn about the affective value of others is impressive. However, it is unclear whether this learning solely reflects general affect-based processes or a mixture of affect-based and person-attribution processes. Consistent with the former possibility, people's ability to learn the affective value of people and places has been shown to be comparable (Falvello, Vinson, Ferrari, & Todorov, 2015). To investigate whether general affect-based processes are sufficient to account for this kind of learning, we presented participants with images paired with valenced statements that were either relevant (e.g., a person statement with a person image) or irrelevant (e.g., a person statement with a non-person image). After this presentation, participants evaluated the goodness or badness of the images. In Experiment 1, we found that the learning effects for faces and places were comparable and occurred only when the statements were relevant. However, when we presented the images with multiple statements of the same valence (Experiments 2-4), we found that places acquired affective value from both relevant and irrelevant statements. In contrast, faces were less likely to acquire affective value from irrelevant statements. Our findings suggest that although general affect-based processes might be sufficient to account for affective learning of places, affective learning of faces might involve both affect-based and person-attribution processes.


Subject(s)
Affect/physiology , Association Learning/physiology , Adult , Attention/physiology , Conditioning, Psychological/physiology , Facial Expression , Female , Humans , Male , Psychomotor Performance/physiology , Reaction Time/physiology , Young Adult
14.
Behav Res Methods ; 52(4): 1428-1444, 2020 08.
Article in English | MEDLINE | ID: mdl-31898288

ABSTRACT

Identifying relative idiosyncratic and shared contributions to judgments is a fundamental challenge to the study of human behavior, yet there is no established method for estimating these contributions. Using edge cases of stimuli varying in intrarater reliability and interrater agreement (faces: high on both; objects: high on the former, low on the latter; complex patterns: low on both), we showed that variance component analyses (VCAs) accurately captured the psychometric properties of the data (Study 1). Simulations showed that the VCA generalizes to any arbitrary continuous rating and that both sample and stimulus set size affect estimate precision (Study 2). Generally, a minimum of 60 raters and 30 stimuli provided reasonable estimates within our simulations. Furthermore, VCA estimates stabilized given more than two repeated measures, consistent with the finding that both intrarater reliability and interrater agreement increased nonlinearly with repeated measures (Study 3). The VCA provides a rigorous examination of where variance lies in data, can be implemented using mixed models with crossed random effects, and is general enough to be useful in any judgment domain in which agreement and disagreement are important to quantify and in which multiple raters independently rate multiple stimuli.


Subject(s)
Judgment , Research Design , Humans , Observer Variation , Reproducibility of Results
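The variance decomposition described above can be approximated, for a fully crossed raters x stimuli design with one rating per cell, by a method-of-moments sketch; this is not the authors' mixed-model implementation, and the simulated effect sizes below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def variance_components(ratings):
    """Method-of-moments variance components for a raters x stimuli
    matrix with one rating per cell (interaction and error are
    confounded into the residual)."""
    R = np.asarray(ratings, dtype=float)
    n_raters, n_stim = R.shape
    grand = R.mean()
    ss_rater = n_stim * ((R.mean(axis=1) - grand) ** 2).sum()
    ss_stim = n_raters * ((R.mean(axis=0) - grand) ** 2).sum()
    ss_resid = ((R - grand) ** 2).sum() - ss_rater - ss_stim
    ms_rater = ss_rater / (n_raters - 1)
    ms_stim = ss_stim / (n_stim - 1)
    ms_resid = ss_resid / ((n_raters - 1) * (n_stim - 1))
    return {
        "stimulus": max((ms_stim - ms_resid) / n_raters, 0.0),   # shared variance
        "rater": max((ms_rater - ms_resid) / n_stim, 0.0),       # idiosyncratic bias
        "residual": ms_resid,
    }

# Toy demo at the suggested minimum design size: 60 raters x 30 stimuli.
n_raters, n_stim = 60, 30
stim_effect = rng.normal(scale=1.0, size=n_stim)     # true stimulus variance = 1.0
rater_effect = rng.normal(scale=0.5, size=n_raters)  # true rater variance = 0.25
noise = rng.normal(scale=0.8, size=(n_raters, n_stim))
ratings = rater_effect[:, None] + stim_effect[None, :] + noise

vc = variance_components(ratings)
```

The recovered components land near the simulated values, illustrating how such a decomposition separates shared (stimulus-driven) from idiosyncratic (rater-driven) variance in judgment data.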
15.
Cereb Cortex Commun ; 1(1): tgaa055, 2020.
Article in English | MEDLINE | ID: mdl-34296119

ABSTRACT

An important question in human face perception research is to understand whether the neural representation of faces is dynamically modulated by context. In particular, although there is a plethora of neuroimaging literature that has probed the neural representation of faces, few studies have investigated what low-level structural and textural facial features parametrically drive neural responses to faces and whether the representation of these features is modulated by the task. To answer these questions, we employed 2 task instructions when participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded low-level variation in the faces (shape and skin texture) that drove neural responses. We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results showed a flexible neural representation of faces for both low-level features and high-level social traits in the human brain.

16.
Front Psychol ; 11: 576852, 2020.
Article in English | MEDLINE | ID: mdl-33510667

ABSTRACT

INTRODUCTION: The present study investigates the association of lifetime interpersonal violence (IPV) exposure, related posttraumatic stress disorder (IPV-PTSD), and appraisal of the degree of threat posed by facial avatars. METHODS: We recorded self-rated responses and high-density electroencephalography (HD-EEG) among women, 16 of whom with lifetime IPV-PTSD and 14 with no PTSD, during a face-evaluation task that displayed male face avatars varying in their degree of threat as rated along dimensions of dominance and trustworthiness. RESULTS: The study found a significant association between lifetime IPV exposure, under-estimation of dominance, and over-estimation of trustworthiness. Characterization of EEG microstates indicated that lifetime IPV-PTSD modulates emotional appraisal, specifically in encoding and decoding processing associated with N170 and LPP evoked potentials. EEG source localization demonstrated an overactivation of the limbic system, in particular the parahippocampal gyrus, in response to non-threatening avatars. Additionally, dysfunctional involvement of the attention-related anterior prefrontal cortex (aPFC) was found in response to relatively trustworthy avatars in IPV-PTSD individuals compared with non-PTSD controls. DISCUSSION: This study showed that IPV exposure and related PTSD modulate individuals' evaluation of facial characteristics suggesting threat. Atypical processing of these avatar characteristics was marked by group differences in brain regions linked to facial processing, emotion regulation, and memory.

17.
J Exp Psychol Gen ; 149(2): 323-342, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31294585

ABSTRACT

Trustworthiness and dominance impressions summarize trait judgments from faces. Judgments on these key traits are negatively correlated to each other in impressions of female faces, implying less differentiated impressions of female faces. Here we test whether this is true across many trait judgments and whether less differentiated impressions of female faces originate in different facial information used for male and female impressions or different evaluation of the same information. Using multidimensional rating datasets and data-driven modeling, we show that (a) impressions of women are less differentiated and more valence-laden than impressions of men and find that (b) these impressions are based on similar visual information across face genders. Female face impressions were more highly intercorrelated and were better explained by valence (Study 1). These intercorrelations were higher when raters more strongly endorsed gender stereotypes. Despite the gender difference, male and female impression models (derived from separate trustworthiness and dominance ratings of male and female faces) were similar to each other (Study 2). Further, both male and female models could manipulate impressions of faces of both genders (Study 3). The results highlight the high-level, evaluative effect of face gender in impression formation: women are judged negatively to the extent their looks do not conform to expectations, not because people use different facial information across genders but because people evaluate the information differently across genders.


Subject(s)
Attitude , Facial Expression , Judgment , Sexism/psychology , Sexism/statistics & numerical data , Social Perception , Adult , Computer Simulation , Female , Humans , Male , Students/psychology , Students/statistics & numerical data , Young Adult
18.
Nat Hum Behav ; 4(3): 287-293, 2020 03.
Article in English | MEDLINE | ID: mdl-31819209

ABSTRACT

Impressions of competence from faces predict important real-world outcomes, including electoral success and chief executive officer selection. Presumed competence is associated with social status. Here we show that subtle economic status cues in clothes affect perceived competence from faces. In nine studies, people rated the competence of faces presented in frontal headshots. Faces were shown with different upper-body clothing rated by independent judges as looking 'richer' or 'poorer', although not notably perceived as such when explicitly described. The same face when seen with 'richer' clothes was judged significantly more competent than with 'poorer' clothes. The effect persisted even when perceivers were exposed to the stimuli briefly (129 ms), warned that clothing cues are non-informative and instructed to ignore the clothes (in one study, with considerable incentives). These findings demonstrate the uncontrollable effect of economic status cues on person perception. They add yet another hurdle to the challenges faced by low-status individuals.


Subject(s)
Clothing , Economic Status , Facial Recognition , Social Perception , Adult , Cues , Female , Humans , Male , Young Adult
19.
Vision Res ; 165: 131-142, 2019 12.
Article in English | MEDLINE | ID: mdl-31734634

ABSTRACT

Face perception is based on both shape and reflectance information. However, we know little about the relative contribution of these kinds of information to social judgments of faces. In Experiment 1, we generated faces using validated computational models of attractiveness, competence, dominance, extroversion, and trustworthiness. Faces were manipulated orthogonally on five levels of shape and reflectance for each model. Both kinds of information had linear and additive effects on participants' social judgments. Shape information was more predictive of dominance, extroversion, and trustworthiness judgments, whereas reflectance information was more predictive of competence judgments. In Experiment 2, to test whether the amount of visual information alters the relative contribution of shape and reflectance information, we presented faces (varied on attractiveness, competence, and dominance) for five different durations (33-500 ms). For all judgments, the linear effect of both shape and reflectance increased as duration increased. Importantly, the relative contribution did not change across durations. These findings show that the judged dimension determines which kind of information is weighted more heavily in judgments and that the relative contribution of shape and reflectance is stable across the amount of visual information available.


Subject(s)
Facial Expression , Facial Recognition/physiology , Judgment/physiology , Social Perception , Adult , Humans , Male
20.
Multisens Res ; 32(6): 499-519, 2019 01 01.
Article in English | MEDLINE | ID: mdl-31117046

ABSTRACT

Deaf individuals may compensate for the lack of auditory input by showing enhanced capacities in certain visual tasks. Here we assessed whether this also applies to recognition of emotions expressed by bodily and facial cues. In Experiment 1, we compared deaf participants and hearing controls in a task measuring recognition of the six basic emotions expressed by actors in a series of video-clips in which either the face, the body, or both the face and body were visible. In Experiment 2, we measured the weight of body and face cues in conveying emotional information when intense genuine emotions are expressed, a situation in which face expressions alone may have ambiguous valence. We found that deaf individuals were better at identifying disgust and fear from body cues (Experiment 1) and in integrating face and body cues in case of intense negative genuine emotions (Experiment 2). Our findings support the capacity of deaf individuals to compensate for the lack of auditory input by enhancing perceptual and attentional capacities in the spared modalities, showing that this capacity extends to the affective domain.


Subject(s)
Attention/physiology , Deafness/physiopathology , Emotions/physiology , Facial Expression , Persons With Hearing Impairments/rehabilitation , Recognition, Psychology/physiology , Visual Perception/physiology , Adolescent , Adult , Cues , Deafness/rehabilitation , Female , Hearing , Hearing Tests , Humans , Male , Middle Aged , Young Adult