Results 1 - 20 of 34
1.
Psychol Sci ; 34(12): 1390-1403, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37955384

ABSTRACT

Recent evidence shows that AI-generated faces are now indistinguishable from human faces. However, algorithms are trained disproportionately on White faces, and thus White AI faces may appear especially realistic. In Experiment 1 (N = 124 adults), alongside our reanalysis of previously published data, we showed that White AI faces are judged as human more often than actual human faces, a phenomenon we term AI hyperrealism. Paradoxically, the people who made the most errors in this task were also the most confident (a Dunning-Kruger effect). In Experiment 2 (N = 610 adults), we used face-space theory and participants' qualitative reports to identify key facial attributes that distinguish AI from human faces but were misinterpreted by participants, leading to AI hyperrealism. The same attributes, however, permitted highly accurate classification by machine learning. These findings illustrate how psychological theory can inform our understanding of AI outputs and provide direction for debiasing AI algorithms, thereby promoting the ethical use of AI.
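To make the final claim concrete, here is a minimal sketch of attribute-based classification in Python; the feature matrix, labels, and model choice are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: classify faces as AI-generated vs. human from attribute ratings.
# The feature matrix and labels below are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # 200 faces x 4 hypothetical attribute ratings
y = rng.integers(0, 2, size=200)   # 1 = AI-generated, 0 = human (placeholder labels)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"Mean CV accuracy: {scores.mean():.2f}")
```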


Subject(s)
Algorithms , Machine Learning , Adult , Humans
2.
Front Psychol ; 14: 1221081, 2023.
Article in English | MEDLINE | ID: mdl-37794914

ABSTRACT

A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which this dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or even better than) videos, and which were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power over machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.

3.
Cogn Emot ; 37(7): 1230-1247, 2023.
Article in English | MEDLINE | ID: mdl-37776238

ABSTRACT

Smiles provide information about a social partner's affect and intentions during social interaction. Although smiles are always encountered within a specific situation, the influence of contextual information on smile evaluation has not been widely investigated. Moreover, little is known about the reciprocal effect of smiles on evaluations of their accompanying situations. In this research, we assessed how different smile types and situational contexts affected participants' social evaluations. In Study 1, 85 participants rated reward, affiliation, and dominance smiles embedded within enjoyable, polite, or negative (unpleasant) situations. Context had a strong effect on smile ratings, such that smiles in enjoyable situations were rated as more genuine and joyful, and as indicating less superiority, than those in negative situations. In Study 2, 200 participants evaluated the situations in which these smiles were perceived (rather than the smiles themselves). Although situations paired with reward (vs. affiliation) smiles tended to be rated more positively, this effect was absent for negative situations. Ultimately, the findings point toward a reciprocal relationship between smiles and contexts, whereby the face influences evaluations of the situation and vice versa.


Subject(s)
Facial Expression , Smiling , Humans , Happiness , Reward , Social Interaction
4.
Sensors (Basel) ; 24(1), 2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38202988

ABSTRACT

This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models, and elaborates on their neural architectures, datasets, and performance across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild, such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables, providing context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches to studying contextual influences on emotion, leading to the development of contextualized emotion models.


Subject(s)
Deep Learning , Humans , Facial Expression , Awareness , Emotions , Language
5.
Perspect Psychol Sci ; 17(6): 1566-1575, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35712993

ABSTRACT

We comment on an article by Sheldon et al. from a previous issue of Perspectives (May 2021). They argued that the presence of positive emotion (Hypothesis 1), the intensity of positive emotion (Hypothesis 2), and chronic positive mood (Hypothesis 3) are reliably signaled by the Duchenne smile (DS). We reexamined the literature cited in support of each hypothesis and show that the findings were mostly inconclusive, irrelevant, incomplete, and/or misread. In fact, not a single empirical article unequivocally supports the idea that DSs function solely as indicators of felt positive affect. Additional evidence is reviewed suggesting that DSs can be, and often are, displayed deliberately and in the absence of positive feelings. Although DSs may lead to favorable interpersonal perceptions and positive emotional responses in the observer, we propose a functional view that focuses on what facial actions, here specifically DSs, do rather than what they express.


Subject(s)
Facial Expression , Smiling , Humans , Smiling/physiology , Smiling/psychology , Emotions , Social Perception , Affect
6.
Sensors (Basel) ; 22(9), 2022 May 06.
Article in English | MEDLINE | ID: mdl-35591224

ABSTRACT

In this paper, we introduce an approach for predicting future frames from a single input image. Our method generates an entire video sequence from the information contained in the input frame. We adopt an autoregressive generation process: the output from each time step is fed as the input to the next. Unlike video prediction methods that use "one-shot" generation, our method preserves far more detail from the input image while also capturing the critical pixel-level changes between frames. We overcome the problem of degrading generation quality by introducing a "complementary mask" module into our architecture, and we show that this allows the model to focus only on generating the pixels that need to change, reusing those that should remain static from the previous frame. We empirically validate our method against various video prediction models on the UT Dallas Dataset and show that our approach generates high-quality, realistic video sequences from one static input image. In addition, we validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model on a human action dataset, the Weizmann Action database.
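The core mechanism can be summarized in a few lines. Below is a schematic sketch of the autoregressive loop with a complementary mask; the tiny convolutional modules are placeholders under stated assumptions, not the architecture from the paper.

```python
# Schematic: autoregressive frame prediction with a "complementary mask".
# The mask decides which pixels are regenerated; the rest are copied from
# the previous frame. Both sub-networks here are minimal placeholders.
import torch
import torch.nn as nn

class MaskedFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.change = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # new pixel values
        self.mask = nn.Sequential(
            nn.Conv2d(3, 1, kernel_size=3, padding=1), nn.Sigmoid()
        )  # per-pixel weight: 1 = regenerate, 0 = keep static

    def forward(self, frame):
        m = self.mask(frame)
        generated = torch.sigmoid(self.change(frame))
        return m * generated + (1 - m) * frame  # complementary blending

model = MaskedFramePredictor()
frame = torch.rand(1, 3, 64, 64)  # single input image in [0, 1]
video = [frame]
for _ in range(16):               # feed each output back in as the next input
    with torch.no_grad():
        video.append(model(video[-1]))
```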


Subject(s)
Algorithms , Databases, Factual , Humans
7.
Emotion ; 22(5): 907-919, 2022 Aug.
Article in English | MEDLINE | ID: mdl-32718174

ABSTRACT

The Duchenne marker (crow's feet wrinkles at the corners of the eyes) has a reputation for signaling genuine positive emotion in smiles. Here, we test whether this facial action might be better conceptualized as a marker of emotional intensity, rather than of genuineness per se, and examine its perceptual outcomes beyond smiling, in sad expressions. For smiles, we found that ratings of emotional intensity (how happy a face is) could not fully account for the effect of Duchenne status (present vs. absent) on ratings of emotion genuineness. The Duchenne marker made a unique direct contribution to the perceived genuineness of smiles, supporting its reputation for signaling genuine emotion in smiling. In contrast, across four experiments, we found that Duchenne sad expressions were not rated as any more genuine or sincere than non-Duchenne ones. The Duchenne marker did, however, make sad expressions look sadder and more negative, just as it made smiles look happier and more positive. Together, these findings argue that the Duchenne marker plays an important role in sad as well as smiling expressions, but is interpreted differently in sad expressions (contributing to intensity only) than in smiles (signaling genuineness independently of intensity).


Subject(s)
Emotions , Facial Expression , Happiness , Humans , Sadness , Smiling/psychology
8.
Behav Res Methods ; 54(6): 2678-2692, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34918224

ABSTRACT

The vast majority of research on human emotional tears has relied on posed and static stimulus materials. In this paper, we introduce the Portsmouth Dynamic Spontaneous Tears Database (PDSTD), a free resource comprising video recordings of 24 female encoders depicting a balanced representation of sadness stimuli with and without tears. Encoders watched a neutral film and a self-selected sad film and reported their emotional experience on 9 emotions. Extending this initial validation, we obtained norming data from an independent sample of naïve observers (N = 91, 45 female) who watched videos of the encoders during three time phases (neutral, pre-sadness, sadness), yielding a total of 72 validated recordings. Observers rated the expressions during each phase on 7 discrete emotions, negative and positive valence, arousal, and genuineness. All data were analyzed by means of general linear mixed modelling (GLMM) to account for sources of random variance. Our results confirm the successful elicitation of sadness and demonstrate the presence of a tear effect, i.e., a substantial increase in perceived sadness for spontaneous dynamic weeping. To our knowledge, the PDSTD is the first database of spontaneously elicited dynamic tears and sadness that is openly available to researchers. The stimuli can be accessed free of charge via OSF at https://osf.io/uyjeg/?view_only=24474ec8d75949ccb9a8243651db0abf.
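For readers unfamiliar with mixed-model analyses of rating data, here is a minimal Python sketch; the file and column names are hypothetical, and where the paper fits a full GLMM with crossed random effects for observers and encoders, this sketch includes only a random intercept per observer.

```python
# Sketch: mixed-effects model of sadness ratings across the three time phases.
# Assumes a hypothetical long-format CSV with columns:
# observer, encoder, phase (neutral / pre-sadness / sadness), sadness_rating.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pdstd_ratings.csv")  # hypothetical file name
model = smf.mixedlm("sadness_rating ~ C(phase)", df, groups=df["observer"])
result = model.fit()
print(result.summary())  # fixed effects of phase, observer-level variance
```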


Subject(s)
Female , Humans
9.
Behav Sci (Basel) ; 11(6), 2021 Jun 10.
Article in English | MEDLINE | ID: mdl-34200633

ABSTRACT

Body postures can affect how we process and attend to information. Here, we investigated a novel effect of adopting an open or closed posture on the ability to detect deception. We hypothesized that the posture adopted by judges would affect their social acuity, resulting in differences in the detection of nonverbal behavior (i.e., microexpression recognition) and in the discrimination of deceptive from truthful statements. In Study 1, adopting an open posture produced higher accuracy for detecting naturalistic lies than adopting a closed posture, but no difference in the recognition of brief facial expressions; trait empathy had an additive effect beyond posture, with more empathic judges attaining higher deception detection scores. In Study 2, we used an eye-tracker to measure the effects of posture on gaze behavior when judging both low-stakes and high-stakes lies. Sitting in an open posture reduced judges' average dwell times looking at senders and, in particular, the amount and length of time they focused on senders' hands. The findings suggest that simply shifting posture can impact judges' attention to visual information and their veracity judgments (Mg = 0.40, 95% CI (0.03, 0.78)).

10.
Scand J Pain ; 21(1): 174-182, 2021 Jan 27.
Article in English | MEDLINE | ID: mdl-33583170

ABSTRACT

OBJECTIVES: The decoding of facial expressions of pain plays a crucial role in pain diagnostics and clinical decision making. Decoding studies require facial expressions of pain to be presented in a flexible and controllable fashion. Computer models (avatars) of human facial expressions of pain allow specific facial features to be manipulated systematically. The aim of the present study was to investigate whether avatars can show realistic facial expressions of pain and how the sex of the avatars influences the decoding of pain by human observers. METHODS: For that purpose, 40 female (mean age: 23.9 years) and 40 male (mean age: 24.6 years) observers watched 80 short videos of computer-generated avatars presenting the five clusters of facial expressions of pain (four active and one stoic cluster) identified by Kunz and Lautenbacher (2014). After each clip, observers rated the intensity of pain the avatar appeared to experience and their certainty of judgement, i.e., whether the shown expression truly represented pain. RESULTS: Three of the four active facial clusters were similarly accepted as valid expressions of pain by the observers, whereas only one cluster ("raised eyebrows") was disregarded. The sex of the observed avatars influenced the decoding of pain, as indicated by increased intensity and elevated certainty ratings for female avatars. CONCLUSIONS: The assumption of different valid facial expressions of pain was corroborated in avatars, which contradicts the idea of a single uniform pain face. The observers' ratings of the avatars' pain were influenced by the avatars' sex, resembling known observer biases for humans. The use of avatars appears to be a suitable method in research on the decoding of facial expressions of pain, closely mirroring the known forms of human facial expressions.


Subject(s)
Facial Expression , Facial Pain , Adult , Female , Humans , Male , Observer Variation , Young Adult
11.
Q J Exp Psychol (Hove) ; 74(5): 910-927, 2021 May.
Article in English | MEDLINE | ID: mdl-33234008

ABSTRACT

People hold strong beliefs about the role of emotional cues in detecting deception. While research on the diagnostic value of such cues has yielded mixed results, their influence on human veracity judgements is yet to be fully explored. Here, we address the relationship between emotional information and veracity judgements. In Study 1, we investigated the role of emotion recognition in the process of detecting naturalistic lies. Decoders' veracity judgements were compared based on differences in trait empathy and in their ability to recognise microexpressions and subtle expressions. Accuracy was found to be unrelated to facial cue recognition and negatively related to empathy. In Study 2, we manipulated decoders' emotion recognition ability and the type of lies they saw: experiential or affective (emotional and unemotional). Decoders received either emotion recognition training, bogus training, or no training. In all scenarios, training did not affect veracity judgements. Experiential lies were easier to detect than affective lies; however, affective unemotional lies were overall the hardest to judge. The findings illustrate the complex relationship between emotion recognition and veracity judgements, with facial cue detection abilities being high yet unrelated to deception accuracy.


Subject(s)
Empathy , Facial Expression , Deception , Emotions , Humans , Judgment
12.
Emotion ; 21(2): 247-259, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31886681

ABSTRACT

According to the influential shared signal hypothesis, perceived gaze direction influences the recognition of emotion from the face; for example, gaze averted sideways facilitates the recognition of sad expressions because both the gaze and the expression signal avoidance. Importantly, this approach assumes that gaze direction is an independent cue that influences emotion recognition. But could gaze direction also impact emotion recognition because it is part of the stereotypical representation of the expression itself? In Experiment 1, we measured gaze aversion in participants engaged in a facial expression posing task. In Experiment 2, we examined the use of gaze aversion when participants constructed facial expressions on a computerized avatar. Results from both experiments demonstrated that downward gaze plays a central role in the representation of sad expressions. In Experiment 3, we manipulated gaze direction in perceived facial expressions and found that sadness was the only expression yielding a recognition advantage for downward, but not sideways, gaze. Finally, in Experiment 4, we independently manipulated gaze aversion and eyelid closure, demonstrating that downward gaze enhances sadness recognition irrespective of eyelid position. Together, these findings indicate that (1) gaze and expression are not independent cues and (2) the specific type of averted gaze is critical. In consequence, several premises of the shared signal hypothesis may need revision.


Subject(s)
Facial Expression , Fixation, Ocular/physiology , Adult , Female , Humans , Male , Sadness , Young Adult
13.
Emotion ; 21(2): 447-451, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31829721

ABSTRACT

The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Here, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between humans and machine in a cross-corpora design. For this, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of databases and presented to human observers and a machine classifier. Recognition performance by the machine was superior for posed expressions containing prototypical facial patterns, and comparable to that of humans when classifying emotions from spontaneous displays. In both humans and machine, accuracy rates were generally higher for posed than for spontaneous stimuli. The findings suggest that automated systems rely on expression prototypicality for emotion classification and may perform as well as humans when tested in a cross-corpora context.


Subject(s)
Artificial Intelligence/standards , Behavior Observation Techniques/methods , Emotions/physiology , Facial Expression , Recognition, Psychology/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
14.
Int J Psychol ; 56(3): 466-477, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32996599

ABSTRACT

While previous work has demonstrated that animals are categorised based on their edibility, little research has systematically evaluated the role of religion in the perception of animal edibility, particularly when specific animals are deemed sacred within a religion. In two studies, we explored a key psychological mechanism through which sacred animals are deemed inedible by members of a faith: mind attribution. In Study 1, non-vegetarian Hindus in Singapore (N = 70) evaluated 19 animals that differed in terms of their sacredness and edibility. Results showed that participants categorised animals into three groups: holy animals (high sacredness, low edibility), food animals (low sacredness, high edibility), and neutral animals (low sacredness, low edibility). Holy animals were deemed to possess greater mental life than the other animal categories. In Study 2, we replicated this key finding with Hindus in India (N = 100) and further demonstrated that the observed pattern was specific to Hindus and did not extend to Muslims (N = 90). In both studies, mind attribution mediated the negative association between sacredness and edibility. Our findings illustrate how religious groups diverge in animal perception, highlighting the role of mind attribution as a crucial link between sacredness and edibility.


Subject(s)
Meat/standards , Religion and Psychology , Social Perception/psychology , Adolescent , Adult , Animals , Humans , Middle Aged , Young Adult
15.
Behav Res Methods ; 53(2): 686-701, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32804342

ABSTRACT

With a shift in interest toward dynamic expressions, numerous corpora of dynamic facial stimuli have been developed over the past two decades. The present research aimed to test existing sets of dynamic facial expressions (published between 2000 and 2015) in a cross-corpus validation effort. For this, 14 dynamic databases were selected that featured facial expressions of the six basic emotions (anger, disgust, fear, happiness, sadness, surprise) in posed or spontaneous form. In Study 1, a subset of stimuli from each database (N = 162) was presented to human observers and machine analysis, yielding considerable variance in emotion recognition performance across the databases. Classification accuracy further varied with the perceived intensity and naturalness of the displays, with posed expressions being judged more accurately and as more intense, but less natural, than spontaneous ones. Study 2 aimed for a full validation of the 14 databases by subjecting the entire stimulus set (N = 3812) to machine analysis. A FACS-based Action Unit (AU) analysis revealed that facial AU configurations were more prototypical in posed than in spontaneous expressions. The prototypicality of an expression in turn predicted emotion classification accuracy, with higher performance for more prototypical facial behavior. Furthermore, technical features of each database (i.e., duration, face box size, head rotation, and motion) had a significant impact on recognition accuracy. Together, the findings suggest that existing databases vary in their ability to signal specific emotions, facing a trade-off between realism and ecological validity on the one hand, and expression uniformity and comparability on the other.
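One plausible way to operationalize such a prototypicality measure, sketched under assumptions: cosine similarity between each clip's AU intensity vector and the mean AU configuration of its emotion category. The AU subset and data below are placeholders, not the paper's exact metric.

```python
# Sketch: AU-based prototypicality as cosine similarity to the per-emotion
# mean AU configuration. Data below are random placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
aus = [f"AU{i}" for i in (1, 4, 6, 12, 15, 25)]  # illustrative AU subset
df = pd.DataFrame(rng.random((60, len(aus))), columns=aus)
df["emotion"] = rng.choice(["happiness", "sadness", "anger"], size=60)

prototypes = df.groupby("emotion")[aus].mean()  # mean AU pattern per emotion

def prototypicality(row):
    v = row[aus].to_numpy(dtype=float)
    p = prototypes.loc[row["emotion"]].to_numpy(dtype=float)
    return v @ p / (np.linalg.norm(v) * np.linalg.norm(p))

df["prototypicality"] = df.apply(prototypicality, axis=1)
```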


Subject(s)
Emotions , Facial Expression , Anger , Happiness , Humans , Recognition, Psychology
16.
Front Neurosci ; 14: 400, 2020.
Article in English | MEDLINE | ID: mdl-32410956

ABSTRACT

The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement could be measured reliably and accessibly, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis on posed over spontaneous expressions, and more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with stronger emphasis on understanding naturally occurring spontaneous expressions in naturalistic choice settings. We posit that applied consumer research may be better suited to examining facial behavior in socio-emotional contexts than decontextualized laboratory studies, and highlight how AHAA can be successfully employed in this context. In addition, facial activity should be considered less as a single outcome variable and more as a starting point for further analyses. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.

17.
PLoS One ; 15(4): e0231968, 2020.
Article in English | MEDLINE | ID: mdl-32330178

ABSTRACT

In the wake of rapid advances in automatic affect analysis, commercial automatic classifiers for facial affect recognition have attracted considerable attention in recent years. While several options now exist for analyzing dynamic video data, less is known about the relative performance of these classifiers, in particular when facial expressions are spontaneous rather than posed. In the present work, we tested eight out-of-the-box automatic classifiers and compared their emotion recognition performance to that of human observers. A total of 937 videos were sampled from two large databases conveying the six basic emotions (happiness, sadness, anger, fear, surprise, and disgust) in either posed (BU-4DFE) or spontaneous (UT-Dallas) form. Results revealed a recognition advantage for human observers over automatic classification. Among the eight classifiers, recognition accuracy varied considerably, ranging from 48% to 62%. Subsequent analyses per type of expression revealed that the two best-performing classifiers approximated human performance, suggesting high agreement for posed expressions. However, classification accuracy was consistently lower (although above chance level) for spontaneous affective behavior. The findings indicate potential shortcomings of existing out-of-the-box classifiers for measuring emotions and highlight the need for more spontaneous facial databases that can serve as benchmarks in the training and testing of automatic emotion recognition systems. We further discuss some limitations of analyzing facial expressions recorded in controlled environments.
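The benchmarking logic reduces to per-classifier accuracy compared against the one-in-six chance level for six emotions; a minimal sketch with hypothetical column names follows.

```python
# Sketch: per-classifier recognition accuracy vs. chance (1/6 for six emotions).
# The long-format table is a toy stand-in for parsed classifier outputs.
import pandas as pd

df = pd.DataFrame({
    "classifier": ["A", "A", "B", "B"],
    "true_emotion": ["anger", "fear", "anger", "fear"],
    "predicted": ["anger", "surprise", "anger", "fear"],
})
df["correct"] = df["true_emotion"] == df["predicted"]
print(df.groupby("classifier")["correct"].mean())
print(f"Chance level: {1/6:.2%}")
```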


Subject(s)
Affect , Facial Expression , Recognition, Psychology , Adult , Automation , Female , Humans , Male
18.
Front Psychol ; 11: 612654, 2020.
Article in English | MEDLINE | ID: mdl-33510690

ABSTRACT

Smiles that vary in muscular configuration also vary in how they are perceived. Previous research suggests that "Duchenne smiles," marked by the combined action of the orbicularis oculi (cheek raiser) and zygomaticus major (lip corner puller) muscles, signal enjoyment. That research has compared perceptions of Duchenne and non-Duchenne smiles in individuals voluntarily innervating or inhibiting the orbicularis oculi muscle. Here we tested smile perception using a novel set of highly controlled stimuli: photographs of patients taken before and after botulinum toxin treatment for crow's feet lines, which selectively paralyzed the lateral orbicularis oculi muscle and removed visible lateral eye wrinkles. Smiles in which the orbicularis muscle was active (prior to treatment) were rated as more felt, spontaneous, intense, and happy. Post-treatment, patients looked younger, although not more attractive. We discuss the potential implications of these findings within the context of emotion science and clinical research on botulinum toxin.

19.
Front Psychol ; 11: 611248, 2020.
Article in English | MEDLINE | ID: mdl-33519624

ABSTRACT

People dedicate significant attention to others' facial expressions and to deciphering their meaning. Hence, knowing whether such expressions are genuine or deliberate is important. Early research proposed that authenticity could be discerned based on reliable facial muscle activations unique to genuine emotional experiences that are impossible to produce voluntarily. With an increasing body of research, such claims may no longer hold up to empirical scrutiny. In this article, expression authenticity is considered within the context of senders' ability to produce convincing facial displays that resemble genuine affect and human decoders' judgments of expression authenticity. This includes a discussion of spontaneous vs. posed expressions, as well as appearance- vs. elicitation-based approaches for defining emotion recognition accuracy. We further expand on the functional role of facial displays as neurophysiological states and communicative signals, thereby drawing upon the encoding-decoding and affect-induction perspectives of emotion expressions. Theoretical and methodological issues are addressed with the aim to instigate greater conceptual and operational clarity in future investigations of expression authenticity.

20.
Q J Exp Psychol (Hove) ; 72(4): 729-741, 2019 Apr.
Article in English | MEDLINE | ID: mdl-29471708

ABSTRACT

A happy facial expression makes a person look (more) trustworthy. Do perceptions of happiness and trustworthiness rely on the same face regions and visual attention processes? In an eye-tracking study, eye movements and fixations were recorded while participants judged the un/happiness or the un/trustworthiness of dynamic facial expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. A smiling mouth and happy eyes enhanced perceived happiness and trustworthiness similarly, with a greater contribution of the smile relative to the eyes. This comparable judgement output for happiness and trustworthiness was reached through shared as well as distinct attentional mechanisms: (a) entry times and (b) initial fixation thresholds for each face region were equivalent for both judgements, thereby revealing the same attentional orienting in happiness and trustworthiness processing. However, (c) greater and (d) longer fixation density for the mouth region in the happiness task, and for the eye region in the trustworthiness task, demonstrated different selective attentional engagement. Relatedly, (e) mean fixation duration across face regions was longer in the trustworthiness task, thus showing increased attentional intensity or processing effort.
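As an illustration of two of the gaze measures reported above, here is a sketch computing entry time (onset of the first fixation on a region) and dwell time (summed fixation durations) from a fixation log; the columns and values are hypothetical.

```python
# Sketch: entry time and dwell time per area of interest (AOI) from a
# hypothetical fixation log (one row per fixation).
import pandas as pd

fix = pd.DataFrame({
    "trial": [1, 1, 1, 1],
    "aoi": ["eyes", "mouth", "mouth", "eyes"],
    "onset_ms": [120, 480, 760, 1100],
    "duration_ms": [200, 240, 180, 220],
})
entry_time = fix.groupby(["trial", "aoi"])["onset_ms"].min()     # first arrival
dwell_time = fix.groupby(["trial", "aoi"])["duration_ms"].sum()  # total looking
print(entry_time, dwell_time, sep="\n\n")
```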


Subject(s)
Attention/physiology , Facial Expression , Happiness , Trust/psychology , Adolescent , Adult , Analysis of Variance , Eye Movements/physiology , Female , Functional Laterality/physiology , Humans , Judgment , Male , Pattern Recognition, Visual , Photic Stimulation , Reaction Time/physiology , Videotape Recording , Young Adult