Results 1 - 20 of 45
1.
Plast Reconstr Surg Glob Open ; 12(1): e5382, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38204867

ABSTRACT

Background: The pursuit of understanding facial beauty has been the subject of scientific interest since time immemorial. How beauty is associated with other perceived attributes that affect human interaction remains elusive. This article explores how facial attractiveness correlates with health, happiness, femininity, and perceived age. We review the existing literature and report an empirical study using expert raters. Methods: A peer-reviewed database of 2870 aesthetic female faces with a global ethnic distribution was created. Twenty-one raters scored frontal images on the attributes of health, happiness, femininity, perceived age, and attractiveness on a Likert scale of 0-100. Results: Pearson correlation coefficients (r) were calculated between attributes, together with multiple regression analyses and P values. Strong positive correlations were found between attractiveness and health (r = 0.61, P < 0.05), attractiveness and femininity (r = 0.7, P < 0.05), and health and femininity (r = 0.57, P < 0.05); a medium positive correlation between health and happiness (r = 0.31, P < 0.05); and a small positive correlation between happiness and femininity (r = 0.21, P < 0.05). No meaningful relationship was observed between perceived age and happiness (r = 0.01, P = 0.75), whereas medium negative correlations were found between perceived age and attractiveness (r = -0.32, P < 0.05), health (r = -0.36, P < 0.05), and femininity (r = -0.31, P < 0.05). Conclusions: Our study illustrates a positive correlation among the positive attributes of health, happiness, femininity, and attractiveness, with all characteristics correlating negatively with increasing perceived age. This provides insight into the complexity of human interaction and offers a holistic view of attraction as a gateway to the reflexive perception of other attributes. The implications encourage an aesthetic focus in facial reconstruction.
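As an illustration of the correlation analysis described above, a minimal Python sketch using SciPy is given below. The file name and column names are hypothetical stand-ins, not the study's actual data.

# Minimal sketch of the correlation analysis described above (illustrative only).
# Column names and the input file are hypothetical assumptions.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("face_ratings.csv")  # hypothetical: one row per face,
# columns: health, happiness, femininity, perceived_age, attractiveness (mean 0-100 scores)

pairs = [("attractiveness", "health"),
         ("attractiveness", "femininity"),
         ("health", "femininity"),
         ("health", "happiness"),
         ("happiness", "femininity"),
         ("perceived_age", "attractiveness")]

for a, b in pairs:
    r, p = stats.pearsonr(ratings[a], ratings[b])
    print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3g}")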

2.
Psychol Sci ; 34(12): 1390-1403, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37955384

ABSTRACT

Recent evidence shows that AI-generated faces are now indistinguishable from human faces. However, algorithms are trained disproportionately on White faces, and thus White AI faces may appear especially realistic. In Experiment 1 (N = 124 adults), alongside our reanalysis of previously published data, we showed that White AI faces are judged as human more often than actual human faces, a phenomenon we term AI hyperrealism. Paradoxically, the people who made the most errors in this task were the most confident (a Dunning-Kruger effect). In Experiment 2 (N = 610 adults), we used face-space theory and participants' qualitative reports to identify key facial attributes that distinguish AI from human faces but were misinterpreted by participants, leading to AI hyperrealism. The same attributes nonetheless permitted high classification accuracy using machine learning. These findings illustrate how psychological theory can inform understanding of AI outputs and provide direction for debiasing AI algorithms, thereby promoting the ethical use of AI.
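A hedged sketch of the kind of classification the abstract reports (AI vs. human faces from rated facial attributes). The feature names and data loading are illustrative assumptions, not the study's actual pipeline.

# Illustrative sketch: classifying AI vs. human faces from facial attributes,
# as the abstract reports was possible with machine learning. Features and
# file name are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("face_attributes.csv")   # hypothetical: one row per face image
X = df[["proportionality", "familiarity", "memorability"]]  # example attributes
y = df["is_ai"]                            # 1 = AI-generated, 0 = human photo

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2%} +/- {acc.std():.2%}")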


Subject(s)
Algorithms , Machine Learning , Adult , Humans
3.
Aesthet Surg J Open Forum ; 5: ojad082, 2023.
Article in English | MEDLINE | ID: mdl-37780530

ABSTRACT

Background: Facial reconstruction surgery is often a complex and staged process, leading to lengthy reconstructive journeys for patients. The integration of a clinical pathway can give patients a clearer understanding of what to expect at each stage of their reconstructive journey. Objectives: The authors demonstrate how incorporating multidisciplinary team clinics, three-dimensional (3D) photography, and 3D modeling into an integrated pathway can streamline the process for patients undergoing facial reconstructive surgeries and aid their understanding of their operations. Methods: A novel clinical pathway was developed for patients undergoing facial reconstructive surgery at a tertiary reconstructive unit in London. A case series was collated of 35 patients who had been through the integrated pathway. Patient-reported outcome measures (PROMs) were assessed using FACE-Q scales, the Global Aesthetic Improvement Scale, the Self-Perception of Age score, and Ordinal Rank change in facial aesthetic appearance, determined subjectively and objectively. Statistical analysis was performed to calculate means for each scale and PROM. Results: High patient satisfaction with overall facial appearance, aging appearance, and the decision-making process was demonstrated. The average perceived improvement in age-related facial appearance was -7.7 years postreconstruction compared with prereconstruction. Ordinal Rank scores for facial aesthetic appearance showed considerable improvement, both subjectively and objectively. Conclusions: The authors advocate the implementation of an integrated clinical pathway for facial reconstruction, with positive impacts observed in terms of patient satisfaction and objective assessments of facial appearance. Similar principles can be extrapolated to other aspects of reconstructive surgery.

4.
Front Psychol ; 14: 1221081, 2023.
Article in English | MEDLINE | ID: mdl-37794914

ABSTRACT

A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which this dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared with non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or even better than) videos, and which were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power on machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.

5.
Aesthet Surg J ; 44(1): NP1-NP15, 2023 Dec 14.
Article in English | MEDLINE | ID: mdl-37695808

ABSTRACT

BACKGROUND: To achieve the goal of enhancing facial beauty, it is crucial for aesthetic physicians and plastic surgeons to have a deep understanding of aesthetic ideals. Although numerous aesthetic criteria have been proposed over the years, there is a lack of empirical analysis supporting many of these standards. OBJECTIVES: The aim of this review was to undertake the first exploration of the empirical evidence concerning the aesthetic ideals of the face in the existing literature. METHODS: A comprehensive search of the MEDLINE, Embase, Scopus, and CENTRAL databases was conducted for primary clinical studies, published from January 1962 to November 2022, reporting on facial aesthetic units as per the Gonzales-Ulloa classification. RESULTS: A total of 36 articles were included in the final review: 12 case series, 14 cohort studies, and 10 comparative studies. These described the aesthetic ideals of the following areas: forehead (6 studies; mean level of evidence, 3.33); nose (9 studies; mean level of evidence, 3.6); orbit (6 studies; mean level of evidence, 3); cheek (4 studies; mean level of evidence, 4.07); lips (6 studies; mean level of evidence, 3.33); chin (4 studies; mean level of evidence, 3.75); and ear (1 study; level of evidence, 4). CONCLUSIONS: The units studied most extensively were the nose, forehead, and lips, and these studies also appeared in journals with higher impact factors than those for other subunits. Conversely, the chin and ear subunits had the fewest studies and appeared in journals with lower impact factors. To provide a useful resource for readers, it would be prudent to identify and discuss influential papers for each subunit.
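The per-unit tallies in the Results reduce to a simple grouped aggregation. A minimal sketch follows; the rows are invented placeholders matching the abstract's structure, not the review's dataset.

# Sketch of the per-unit summary reported above: study count and mean level of
# evidence per facial aesthetic unit. Data are invented placeholders.
import pandas as pd

studies = pd.DataFrame({
    "unit":  ["nose", "nose", "forehead", "lips", "chin", "ear"],
    "level": [4, 3, 3, 4, 4, 4],   # level of evidence assigned to each study
})

summary = studies.groupby("unit")["level"].agg(n="count", mean_level="mean")
print(summary)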


Subject(s)
Forehead , Nose , Humans , Esthetics , Cheek , Lip/surgery
6.
Cogn Emot ; 37(7): 1230-1247, 2023.
Article in English | MEDLINE | ID: mdl-37776238

ABSTRACT

Smiles provide information about a social partner's affect and intentions during social interaction. Although smiles are always encountered within a specific situation, the influence of contextual information on smile evaluation has not been widely investigated. Moreover, little is known about the reciprocal effect of smiles on evaluations of their accompanying situations. In this research, we assessed how different smile types and situational contexts affected participants' social evaluations. In Study 1, 85 participants rated reward, affiliation, and dominance smiles embedded within enjoyable, polite, or negative (unpleasant) situations. Context had a strong effect on smile ratings, such that smiles in enjoyable situations were rated as more genuine and joyful, and as indicating less superiority, than those in negative situations. In Study 2, 200 participants evaluated the situations in which these smiles appeared (rather than the smiles themselves). Although situations paired with reward (vs. affiliation) smiles tended to be rated more positively, this effect was absent for negative situations. Ultimately, the findings point toward a reciprocal relationship between smiles and contexts, whereby the face influences evaluations of the situation and vice versa.


Subject(s)
Facial Expression , Smiling , Humans , Happiness , Reward , Social Interaction
7.
Aesthet Surg J Open Forum ; 5: ojad072, 2023.
Article in English | MEDLINE | ID: mdl-37638342

ABSTRACT

Background: Understanding differences in facial shape among individuals of different races is relevant across several fields, from cosmetic and reconstructive medicine to anthropometric studies. Objectives: To determine, using novel computer modeling, whether the faces in an aesthetic female face database share common features and whether these correlate with racial demographics. Methods: The database was formed using the "top 100 most beautiful women" lists released by "For Him Magazine" over the last 15 years. Principal component analysis (PCA) of 158 parameters was carried out to check for clustering and for any racial correlation with the resulting clusters. PCA is a machine-learning tool used to reduce the number of variables in a large data set, allowing easier analysis while retaining as much information as possible from the original data. A review of the literature on craniofacial anthropometric differences across ethnicities was also undertaken to complement the computational data. Results: The database comprised 2870 aesthetic faces, matched in racial proportion to a baseline of 10,000 faces from the general population. PCA illustrated grouping by latent-space parameters for facial dimensions but showed no correlation with racial demographics. There was a commonality of facial features within the aesthetic cohort, which differed from the general population. Fourteen papers, covering 8142 individuals, were included in the review. Conclusions: Aesthetic female faces have commonalities in facial features regardless of racial demographic, and the dimensions of these features vary from those of the baseline population. There may even be a common human aesthetic proportion that transcends racial boundaries, but this is yet to be elucidated.
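A minimal sketch of the PCA step described above: reducing 158 facial parameters to a low-dimensional latent space and checking how much variance is retained. The input array and its layout are assumptions for illustration.

# Illustrative PCA on facial parameters; the .npy file is a hypothetical
# array of shape (2870 faces, 158 parameters).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.load("facial_parameters.npy")       # hypothetical input data
X_std = StandardScaler().fit_transform(X)  # put all parameters on one scale

pca = PCA(n_components=0.95)               # keep components explaining 95% of variance
Z = pca.fit_transform(X_std)
print(f"{pca.n_components_} components retain "
      f"{pca.explained_variance_ratio_.sum():.0%} of the variance")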

8.
Aesthet Surg J Open Forum ; 5: ojad062, 2023.
Article in English | MEDLINE | ID: mdl-37575889

ABSTRACT

Background: Reconstructive surgery operations are often complex, staged, and subject to a steep learning curve. Because this vocational training requires a thorough three-dimensional (3D) understanding of reconstructive techniques, 3D photography and computer modeling can accelerate learning for surgical trainees. Objectives: The authors illustrate the benefits of introducing a streamlined reconstructive pathway that integrates 3D photography and computer modeling to create a learning database for use by trainees and patients alike, improving learning and comprehension. Methods: A computer database of 3D photographs and associated computer models was developed for 35 patients undergoing reconstructive facial surgery at the Royal Free Hospital, London, UK. This was used as a training and teaching tool for 20 surgical trainees, with a multiple-choice questionnaire (MCQ) assessing knowledge and a Likert-scale questionnaire assessing satisfaction with the understanding of core reconstructive techniques, administered before and after teaching sessions. Data were analyzed using the Mann-Whitney U test for trainee knowledge and the Wilcoxon rank sum test for trainee satisfaction. Results: Trainee (n = 20) knowledge showed a statistically significant improvement (P < .01), as did trainee satisfaction (P < .05), after a teaching session using 3D photography and computer models for facial reconstruction. Conclusions: Three-dimensional photography and computer modeling are useful teaching and training tools for reconstructive facial surgery. The authors advocate the implementation of an integrated pathway for patients with facial defects, including 3D photography and computer modeling wherever possible, to develop internal databases for the education of trainees and patients alike. This algorithm can be extrapolated to other aspects of reconstructive surgery.
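A hedged sketch of the pre/post comparisons named above, using SciPy. Scores are invented placeholders; the paired signed-rank test on the satisfaction ratings assumes that pre and post scores come from the same trainees.

# Illustrative SciPy versions of the tests named in the Methods. Data are
# invented. mannwhitneyu is the Mann-Whitney U (rank-sum) test for
# independent samples; wilcoxon is the paired signed-rank test, shown on
# the assumption that pre/post ratings are paired within trainees.
from scipy import stats

mcq_before = [10, 12, 9, 14, 11, 13, 10, 12, 11, 9]
mcq_after  = [15, 16, 14, 18, 15, 17, 14, 16, 15, 13]
u, p_knowledge = stats.mannwhitneyu(mcq_before, mcq_after, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p_knowledge:.3f}")

likert_before = [3, 2, 3, 2, 3, 3, 2, 3, 2, 3]
likert_after  = [4, 4, 5, 4, 4, 5, 4, 4, 4, 5]
w, p_satisfaction = stats.wilcoxon(likert_before, likert_after)
print(f"Wilcoxon W = {w}, p = {p_satisfaction:.3f}")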

10.
Sensors (Basel) ; 24(1)2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38202988

ABSTRACT

This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performance across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild, such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches to studying contextual influences on emotions, leading to the development of contextualized emotion models.
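As a generic illustration of the kind of deep-learning FER pipeline this review surveys, here is a PyTorch sketch: a pretrained CNN backbone with a 7-class emotion head. This is not any specific toolbox from the paper, and the head here is untrained; a real system would fine-tune on a FER dataset.

# Generic FER inference sketch (illustrative only; not a named toolbox).
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, len(EMOTIONS))  # emotion head (untrained here)
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("face.jpg").convert("RGB")   # hypothetical input image
with torch.no_grad():
    logits = backbone(preprocess(img).unsqueeze(0))
    probs = logits.softmax(dim=1).squeeze()
print({e: round(p, 2) for e, p in zip(EMOTIONS, probs.tolist())})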


Subject(s)
Deep Learning , Humans , Facial Expression , Awareness , Emotions , Language
11.
J Oral Biol Craniofac Res ; 12(5): 512-515, 2022.
Article in English | MEDLINE | ID: mdl-35774231

ABSTRACT

Advances in high-resolution 3D photography and computer modelling are revolutionising patient workup, surgical planning, patient satisfaction, clinical outcomes, and surgical training. We present a case in which this technology was utilised for a patient undergoing a forehead flap for reconstruction of a nasal defect, allowing us to develop a novel reconstructive algorithm. 3D photographs were taken preoperatively, a computer model was rendered, and follow-up photographs were taken at each stage of the reconstruction using a Vectra XT camera. Patient satisfaction was measured qualitatively postoperatively. Prior to each stage, we were able to use the 3D photographs to make thorough preoperative plans while minimising the number of outpatient appointments the patient required. With the images always at hand, we had much more time to make measurements and consider alterations. Utilising the 3D models in clinic and in multidisciplinary team (MDT) meetings allowed for more insightful outpatient appointments, in which we were able to discuss and illustrate each subsequent stage. The use of 3D photography and computer modelling allows a greater level of care by improving patient understanding and satisfaction and alleviating anxiety. It also reduces operative time, improves surgical planning, and acts as an excellent resource for surgical trainees and future patients.

12.
Perspect Psychol Sci ; 17(6): 1566-1575, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35712993

ABSTRACT

We comment on an article by Sheldon et al. from a previous issue of Perspectives (May 2021). They argued that the presence of positive emotion (Hypothesis 1), the intensity of positive emotion (Hypothesis 2), and chronic positive mood (Hypothesis 3) are reliably signaled by the Duchenne smile (DS). We reexamined the cited literature in support of each hypothesis and show that the study findings were mostly inconclusive, irrelevant, incomplete, and/or misread. In fact, there is no single empirical article that unequivocally supports the idea that DSs function solely as indicators of felt positive affect. Additional evidence is reviewed suggesting that DSs can be, and often are, displayed deliberately and in the absence of positive feelings. Although DSs may lead to favorable interpersonal perceptions and positive emotional responses in the observer, we propose a functional view that focuses on what facial actions (here specifically DSs) do rather than what they express.


Subject(s)
Facial Expression , Smiling , Humans , Smiling/physiology , Smiling/psychology , Emotions , Social Perception , Affect
13.
Sensors (Basel) ; 22(9)2022 May 06.
Article in English | MEDLINE | ID: mdl-35591224

ABSTRACT

In this paper, we introduce an approach to future-frame prediction based on a single input image. Our method is able to generate an entire video sequence from the information contained in the input frame. We adopt an autoregressive generation process, i.e., the output from each time step is fed as the input to the next step. Unlike other video prediction methods that use "one shot" generation, our method preserves much more detail from the input image while also capturing the critical pixel-level changes between frames. We overcome the problem of generation-quality degradation by introducing a "complementary mask" module in our architecture, and we show that this allows the model to focus only on generating the pixels that need to change, reusing those that should remain static from the previous frame. We empirically validate our method against various video prediction models on the UT Dallas Dataset and show that our approach generates high-quality, realistic video sequences from one static input image. We also validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset, and we provide qualitative results of our model on a human action dataset, the Weizmann Action database.
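A conceptual PyTorch sketch of the "complementary mask" idea described above: at each autoregressive step the network predicts candidate pixels and a change mask, and static pixels are reused from the previous frame. The network is a toy placeholder; only the masking arithmetic is the point, not the paper's actual architecture.

# Complementary-mask rollout sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn

class MaskedFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # placeholder generator: predicts 3 RGB channels + 1 mask channel
        self.net = nn.Conv2d(3, 4, kernel_size=3, padding=1)

    def forward(self, prev_frame):
        out = self.net(prev_frame)
        generated = torch.tanh(out[:, :3])      # candidate new pixel values
        mask = torch.sigmoid(out[:, 3:4])       # ~1 where pixels should change
        # complementary mask: keep static pixels from the previous frame
        return mask * generated + (1.0 - mask) * prev_frame

model = MaskedFramePredictor()
frame = torch.rand(1, 3, 64, 64)                # single input image
video = [frame]
for _ in range(9):                              # autoregressive rollout
    video.append(model(video[-1]))
print(torch.stack(video).shape)                 # (10, 1, 3, 64, 64)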


Subject(s)
Algorithms , Databases, Factual , Humans
14.
Emotion ; 22(5): 907-919, 2022 Aug.
Article in English | MEDLINE | ID: mdl-32718174

ABSTRACT

The Duchenne marker (crow's feet wrinkles at the corners of the eyes) has a reputation for signaling genuine positive emotion in smiles. Here, we test whether this facial action might be better conceptualized as a marker of emotional intensity, rather than genuineness per se, and examine its perceptual outcomes beyond smiling, in sad expressions. For smiles, we found that ratings of emotional intensity (how happy a face is) were unable to fully account for the effect of Duchenne status (present vs. absent) on ratings of emotion genuineness. The Duchenne marker made a unique direct contribution to the perceived genuineness of smiles, supporting its reputation for signaling genuine emotion in smiling. In contrast, across 4 experiments, we found Duchenne sad expressions were not rated as any more genuine or sincere than non-Duchenne ones. The Duchenne marker did, however, make sad expressions look sadder and more negative, just as it made smiles look happier and more positive. Together, these findings argue that the Duchenne marker plays an important role in sad as well as smiling expressions but is interpreted differently in sad expressions (contributing to intensity only) compared with smiles (contributing to emotion genuineness independently of intensity). (PsycInfo Database Record (c) 2022 APA, all rights reserved).
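A hedged sketch of the kind of test implied above: does Duchenne status still predict rated genuineness once rated intensity is controlled for? Data are invented placeholders, and the paper's actual modelling may differ.

# Illustrative regression: genuineness ~ intensity + Duchenne status.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "genuineness": [6.1, 6.8, 5.2, 7.0, 4.9, 6.5, 5.5, 7.2],
    "intensity":   [5.8, 6.2, 5.0, 6.6, 4.8, 6.0, 5.4, 6.9],
    "duchenne":    [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = Duchenne marker present
})

model = smf.ols("genuineness ~ intensity + duchenne", data=df).fit()
print(model.summary().tables[1])  # a surviving duchenne coefficient would
                                  # indicate a unique direct contribution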


Subject(s)
Emotions , Facial Expression , Happiness , Humans , Sadness , Smiling/psychology
15.
Behav Res Methods ; 54(6): 2678-2692, 2022 12.
Article in English | MEDLINE | ID: mdl-34918224

ABSTRACT

The vast majority of research on human emotional tears has relied on posed and static stimulus materials. In this paper, we introduce the Portsmouth Dynamic Spontaneous Tears Database (PDSTD), a free resource comprising video recordings of 24 female encoders depicting a balanced representation of sadness stimuli with and without tears. Encoders watched a neutral film and a self-selected sad film and reported their emotional experience for 9 emotions. Extending this initial validation, we obtained norming data from an independent sample of naïve observers (N = 91, 45 females) who watched videos of the encoders during three time phases (neutral, pre-sadness, sadness), yielding a total of 72 validated recordings. Observers rated the expressions during each phase on 7 discrete emotions, negative and positive valence, arousal, and genuineness. All data were analyzed by means of general linear mixed modelling (GLMM) to account for sources of random variance. Our results confirm the successful elicitation of sadness and demonstrate the presence of a tear effect, i.e., a substantial increase in perceived sadness for spontaneous dynamic weeping. To our knowledge, the PDSTD is the first database of spontaneously elicited dynamic tears and sadness that is openly available to researchers. The stimuli can be accessed free of charge via OSF: https://osf.io/uyjeg/?view_only=24474ec8d75949ccb9a8243651db0abf
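A minimal sketch in the spirit of the mixed-model analysis described above: sadness ratings modelled by phase and tears, with a random intercept per observer. The data layout is an assumption, and the authors' full model accounts for additional random sources (e.g., encoders).

# Illustrative linear mixed model with statsmodels; file and columns are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("pdstd_ratings.csv")  # hypothetical long-format file:
# columns: observer_id, encoder_id, phase (neutral/pre_sadness/sadness),
# tears (0/1), sadness_rating

m = smf.mixedlm("sadness_rating ~ phase * tears",
                data=ratings, groups=ratings["observer_id"]).fit()
print(m.summary())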


Subject(s)
Female , Humans
16.
Behav Sci (Basel) ; 11(6)2021 Jun 10.
Article in English | MEDLINE | ID: mdl-34200633

ABSTRACT

Body postures can affect how we process and attend to information. Here, a novel effect of adopting an open or closed posture on the ability to detect deception was investigated. It was hypothesized that the posture adopted by judges would affect their social acuity, resulting in differences in the detection of nonverbal behavior (i.e., microexpression recognition) and the discrimination of deceptive and truthful statements. In Study 1, adopting an open posture produced higher accuracy for detecting naturalistic lies than adopting a closed posture, but no difference was observed in the recognition of brief facial expressions; trait empathy had an additive effect alongside posture, with more empathic judges achieving higher deception detection scores. In Study 2, an eye-tracker was used to measure posture effects on gazing behavior when judging both low-stakes and high-stakes lies. Sitting in an open posture reduced judges' average dwell times looking at senders and, in particular, the amount and length of time they focused on the senders' hands. The findings suggest that simply shifting posture can impact judges' attention to visual information and veracity judgments (Mg = 0.40, 95% CI (0.03, 0.78)).
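A sketch of a standardized mean difference (Hedges' g) with a bootstrap confidence interval, the kind of effect-size summary reported above. The scores are invented placeholders, and the paper's exact estimator may differ.

# Hedges' g with bootstrap CI (illustrative; data are simulated placeholders).
import numpy as np

def hedges_g(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    d = (a.mean() - b.mean()) / sp                 # Cohen's d, pooled SD
    return d * (1 - 3 / (4 * (na + nb) - 9))       # small-sample correction

rng = np.random.default_rng(0)
open_scores = rng.normal(0.62, 0.15, 40)     # invented accuracy, open posture
closed_scores = rng.normal(0.55, 0.15, 40)   # invented accuracy, closed posture

boots = [hedges_g(rng.choice(open_scores, 40), rng.choice(closed_scores, 40))
         for _ in range(5000)]                # resample with replacement
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"g = {hedges_g(open_scores, closed_scores):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")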

17.
Br J Soc Psychol ; 60(4): 1262-1278, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33604913

ABSTRACT

Previous research on money and prosociality has described a monotonic pattern, showing that money reduces generosity. The present research aimed to examine whether money differently impairs generosity when arising from altruistic versus egoistic motives. To this end, we employed economic games designed to study generosity (e.g., the Dictator game) and varied experimental currency (i.e., money vs. candy/food). The results (N = 850) showed that although money made people ignore others when others were not crucial for their future gain, generosity was not impacted when egoistic motives (Study 1: avoiding sanctions; Studies 2 and 3: building reputation) were present. In other words, although people in general showed flexible prosociality by adjusting their generosity level according to game type, this was much more strongly the case when money rather than candy/food was the currency. In addition, we demonstrate a boundary condition of money on flexible generosity, namely imbuing money with prosocial meaning (Study 3). Some implications are discussed.


Subject(s)
Altruism , Motivation , Humans
18.
Scand J Pain ; 21(1): 174-182, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33583170

ABSTRACT

OBJECTIVES: The decoding of facial expressions of pain plays a crucial role in pain diagnostics and clinical decision making. For decoding studies, it is necessary to present facial expressions of pain in a flexible and controllable fashion. Computer models (avatars) of human facial expressions of pain allow specific facial features to be manipulated systematically. The aim of the present study was to investigate whether avatars can show realistic facial expressions of pain and how the sex of the avatars influences the decoding of pain by human observers. METHODS: For that purpose, 40 female (mean age: 23.9 years) and 40 male (mean age: 24.6 years) observers watched 80 short videos showing computer-generated avatars presenting the five clusters of facial expressions of pain (four active and one stoic cluster) identified by Kunz and Lautenbacher (2014). After each clip, observers were asked to rate the intensity of pain the avatar seemed to experience and the certainty of their judgement, i.e., whether the shown expression truly represented pain. RESULTS: Results show that three of the four active facial clusters were similarly accepted as valid expressions of pain by the observers, whereas only one cluster ("raised eyebrows") was disregarded. The sex of the observed avatars influenced the decoding of pain, as indicated by increased intensity and elevated certainty ratings for female avatars. CONCLUSIONS: The assumption of different valid facial expressions of pain was corroborated in avatars, which contradicts the idea of a single uniform pain face. The observers' ratings of the avatars' pain were influenced by the avatars' sex, resembling known observer biases for humans. The use of avatars appears to be a suitable method for research on the decoding of facial expressions of pain, closely mirroring the known forms of human facial expressions.


Subject(s)
Facial Expression , Facial Pain , Adult , Female , Humans , Male , Observer Variation , Young Adult
19.
Q J Exp Psychol (Hove) ; 74(5): 910-927, 2021 May.
Article in English | MEDLINE | ID: mdl-33234008

ABSTRACT

People hold strong beliefs about the role of emotional cues in detecting deception. While findings on the diagnostic value of such cues have been mixed, their influence on human veracity judgements is yet to be fully explored. Here, we address the relationship between emotional information and veracity judgements. In Study 1, the role of emotion recognition in the process of detecting naturalistic lies was investigated. Decoders' veracity judgements were compared based on differences in trait empathy and their ability to recognise microexpressions and subtle expressions. Accuracy was found to be unrelated to facial cue recognition and negatively related to empathy. In Study 2, we manipulated decoders' emotion recognition ability and the type of lies they saw: experiential or affective (emotional and unemotional). Decoders received either emotion recognition training, bogus training, or no training. In all scenarios, training did not affect veracity judgements. Experiential lies were easier to detect than affective lies; however, affective unemotional lies were the hardest to judge overall. The findings illustrate the complex relationship between emotion recognition and veracity judgements, with abilities for facial cue detection being high yet unrelated to deception accuracy.


Subject(s)
Empathy , Facial Expression , Deception , Emotions , Humans , Judgment
20.
Emotion ; 21(2): 247-259, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31886681

ABSTRACT

According to the influential shared signal hypothesis, perceived gaze direction influences the recognition of emotion from the face, for example, gaze averted sideways facilitates the recognition of sad expressions because both gaze and expression signal avoidance. Importantly, this approach assumes that gaze direction is an independent cue that influences emotion recognition. But could gaze direction also impact emotion recognition because it is part of the stereotypical representation of the expression itself? In Experiment 1, we measured gaze aversion in participants engaged in a facial expression posing task. In Experiment 2, we examined the use of gaze aversion when constructing facial expressions on a computerized avatar. Results from both experiments demonstrated that downward gaze plays a central role in the representation of sad expressions. In Experiment 3, we manipulated gaze direction in perceived facial expressions and found that sadness was the only expression yielding a recognition advantage for downward, but not sideways gaze. Finally, in Experiment 4 we independently manipulated gaze aversion and eyelid closure, thereby demonstrating that downward gaze enhances sadness recognition irrespective of eyelid position. Together, these findings indicate that (1) gaze and expression are not independent cues and (2) the specific type of averted gaze is critical. In consequence, several premises of the shared signal hypothesis may need revision. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Facial Expression , Fixation, Ocular/physiology , Adult , Female , Humans , Male , Sadness , Young Adult