Results 1 - 20 of 1,286
1.
PEC Innov ; 4: 100302, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38966314

ABSTRACT

Objective: Machine learning models were employed to discern patients' impressions from the therapists' facial expressions during a virtual online video counselling session. Methods: Eight therapists simulated an online video counselling session for the same patient. The facial emotions of the therapists were extracted from the session videos; we then utilized a random forest model to determine the therapist's impression as perceived by the patients. Results: The therapists' neutral facial expressions were important controlling factors for patients' impressions. A predictive model with three neutral facial features achieved an accuracy of 83% in identifying patients' impressions. Conclusions: Neutral facial expressions may contribute to patient impressions in an online video counselling environment with spatiotemporal disconnection. Innovation: Expression recognition techniques were applied innovatively to an online counselling setting where therapists' expressions are limited. Our findings have the potential to enhance psychiatric clinical practice using Information and Communication Technology.
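The random-forest step described above can be sketched as follows; the feature names, toy data, and labelling rule are illustrative assumptions, not the study's actual dataset or pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Each row: per-session means of hypothetical emotion-probability features
# extracted from a therapist video (e.g. neutral, happy, surprised).
X = rng.random((80, 3))
# Toy label: whether patients rated the impression favourably, driven here
# entirely by the "neutral" feature to mirror the abstract's finding.
y = (X[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
clf.fit(X, y)
# Feature importances show which expression features drive the prediction.
importances = dict(zip(["neutral", "happy", "surprised"], clf.feature_importances_))
print(scores.mean(), importances)
```

With a label constructed from the "neutral" feature, the fitted forest concentrates its importance on that feature, which is the kind of evidence the abstract reports.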

2.
Comput Biol Med ; 179: 108822, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38986286

ABSTRACT

Facial Expression Analysis (FEA) plays a vital role in diagnosing and treating early-stage neurological disorders (NDs) such as Alzheimer's and Parkinson's disease. Manual FEA is hindered by the expertise, time, and training it requires, while automatic methods struggle with the unavailability of real patient data, high computational cost, and irrelevant feature extraction. To address these challenges, this paper proposes a novel approach: an efficient, lightweight deep learning network (DLN) based on a convolutional block attention module (CBAM) to aid doctors in diagnosing ND patients. The method comprises two stages: collection of data from real ND patients, and pre-processing involving face detection followed by an attention-enhanced DLN for feature extraction and refinement. Extensive experiments validated on real patient data show compelling performance, achieving an accuracy of up to 73.2%. Despite its efficacy, the proposed model is lightweight, occupying only 3 MB, making it suitable for deployment on resource-constrained mobile healthcare devices. Moreover, the method improves substantially on existing FEA approaches, holding considerable promise for effectively diagnosing and treating ND patients. By accurately recognizing emotions and extracting relevant features, this approach supports medical professionals in early ND detection and management, overcoming the challenges of manual analysis and heavy models. In conclusion, this research presents a significant advance in FEA, promising to enhance ND diagnosis and care. The code and data used in this work are available at: https://github.com/munsif200/Neurological-Health-Care.
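As a rough illustration of the attention mechanism named above, here is a minimal numpy sketch of the channel-attention half of a CBAM-style block; the layer sizes, random weights, and input shape are assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat: np.ndarray, reduction: int = 2) -> np.ndarray:
    """feat: (C, H, W) feature map -> channel-reweighted (C, H, W).
    Squeeze spatial dims with average- and max-pooling, pass both through a
    shared bottleneck MLP, and rescale each channel by a (0, 1) weight."""
    c = feat.shape[0]
    avg = feat.mean(axis=(1, 2))          # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))            # (C,) max-pooled descriptor
    rng = np.random.default_rng(0)        # stand-in for learned weights
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # bottleneck layer
    w2 = rng.standard_normal((c, c // reduction)) * 0.1   # expansion layer
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)          # shared ReLU MLP
    scale = sigmoid(mlp(avg) + mlp(mx))   # (C,) per-channel attention
    return feat * scale[:, None, None]

feat = np.ones((4, 8, 8))
out = channel_attention(feat)
print(out.shape)
```

In a real CBAM block the MLP weights are learned and a spatial-attention stage follows; this sketch only shows the channel-reweighting idea.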

3.
Front Med (Lausanne) ; 11: 1309720, 2024.
Article in English | MEDLINE | ID: mdl-38994344

ABSTRACT

Background: Pain management is an essential and complex issue for non-communicative patients undergoing sedation in the intensive care unit (ICU). The Behavioral Pain Scale (BPS), although imperfect for assessing behavioral pain, is the gold standard, based partly on clinical facial expression. NEVVA©, an automatic pain assessment tool based on the facial expressions of critically ill patients, is a much-needed innovative medical device. Methods: In this prospective pilot study, we recorded the facial expressions of critically ill patients in the medical ICU of Caen University Hospital using an iPhone and Smart Motion Tracking System (SMTS) software with the Facial Action Coding System (FACS) to measure human facial expressions metrically during sedation weaning. Recordings ran continuously, and BPS scores were collected hourly over two 8 h periods per day for 3 consecutive days. For this first stage, the algorithm of the innovative NEVVA© medical device was calibrated against the reference pain scale (BPS). Results: Thirty participants were enrolled between March and July 2022. To assess acute severity of illness, the Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score (SAPS II) were recorded on ICU admission and were 9 and 47, respectively. All participants were deeply sedated, with a Richmond Agitation and Sedation Scale (RASS) score of -4 or less at inclusion. One thousand and six BPS recordings were obtained, of which 130 were retained for final calibration: 108 corresponding to the absence of pain and 22 to the presence of pain. Given the small dataset, a leave-one-subject-out cross-validation (LOSO-CV) strategy was used, and training yielded a receiver operating characteristic (ROC) curve with an area under the curve (AUC) of 0.792. The model has a sensitivity of 81.8% and a specificity of 72.2%. Conclusion: This pilot study calibrated the NEVVA© medical device and showed the feasibility of continuous facial expression analysis for pain monitoring in ICU patients. The next step will be to correlate this device with the BPS scale.
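The LOSO-CV validation strategy described above can be sketched as follows; the classifier, features, and synthetic data are assumptions, not the NEVVA© pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
n = 120
X = rng.random((n, 4))                 # e.g. facial action-unit intensities
y = (X[:, 0] > 0.5).astype(int)        # toy pain / no-pain label
groups = np.repeat(np.arange(30), 4)   # 30 subjects, 4 recordings each

# Leave-one-subject-out: every fold holds out all recordings of one subject,
# so no subject appears in both training and test sets.
y_true, y_score = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    y_true.extend(y[test_idx])
    y_score.extend(model.predict_proba(X[test_idx])[:, 1])

auc = roc_auc_score(y_true, y_score)
print(f"LOSO-CV AUC: {auc:.3f}")
```

Pooling held-out scores across folds before computing the AUC, as here, is one common way to summarise LOSO-CV performance with few samples per subject.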

4.
Geriatr Psychol Neuropsychiatr Vieil ; 22(2): 200-208, 2024 Jun 01.
Article in French | MEDLINE | ID: mdl-39023155

ABSTRACT

Younger adults have difficulty identifying emotional facial expressions when faces are covered by face masks. It is important to evaluate how mask wearing might specifically affect older people, because they show lower emotion identification performance than younger adults even without masks. We compared the performance of 62 young and 38 older adults in an online emotional facial expression identification task using masked or unmasked pictures of faces with fearful, happy, angry, surprised, and neutral expressions, shown from different viewpoints. Face masks impaired performance in both age groups, but more so in older adults, specifically for negative emotions (anger, fear), supporting the saliency hypothesis as an explanation of the positivity advantage. Additionally, face masks impaired emotion recognition more for profile views than for three-quarter or full-face views. Our results encourage the use of clearer and full-face expressions when interacting with older people while wearing face masks.


Subject(s)
Emotions , Facial Expression , Facial Recognition , Masks , Humans , Aged , Male , Female , Adult , Young Adult , Aged, 80 and over , Middle Aged
5.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000930

ABSTRACT

Convolutional neural networks (CNNs) have made significant progress in facial expression recognition (FER). However, challenges such as occlusion, lighting variations, and changes in head pose make FER in real-world environments difficult. Moreover, methods based solely on CNNs rely heavily on local spatial features, lack global information, and struggle to balance computational complexity against recognition accuracy; consequently, CNN-based models still fall short of addressing FER adequately. To address these issues, we propose a lightweight facial expression recognition method based on a hybrid vision transformer. The method captures multi-scale facial features through an improved attention module, achieving richer feature integration, enhancing the network's perception of key facial expression regions, and improving feature extraction. To further enhance performance, we designed a patch dropping (PD) module, which emulates the human visual system's allocation of attention to local features, guiding the network to focus on the most discriminative features, reducing the influence of irrelevant ones, and lowering computational cost. Extensive experiments demonstrate that our approach significantly outperforms other methods, achieving an accuracy of 86.51% on RAF-DB and nearly 70% on FER2013 with a model size of only 3.64 MB. These results show that our method offers a new perspective for the field of facial expression recognition.
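The patch-dropping idea can be illustrated with a minimal sketch that keeps only the most-attended patches; the shapes, scores, and keep ratio are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def drop_patches(patches: np.ndarray, attn: np.ndarray, keep_ratio: float = 0.7):
    """patches: (N, D) patch embeddings; attn: (N,) attention scores.
    Keeps the top keep_ratio fraction of patches by attention score."""
    n_keep = max(1, int(len(patches) * keep_ratio))
    keep_idx = np.argsort(attn)[::-1][:n_keep]   # most-attended first
    return patches[np.sort(keep_idx)]            # preserve original order

patches = np.arange(10 * 4).reshape(10, 4).astype(float)  # 10 patches, dim 4
attn = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5, 0.05])
kept = drop_patches(patches, attn, keep_ratio=0.5)
print(kept.shape)
```

Dropping low-attention patches shrinks the token sequence before later transformer layers, which is where the computational saving comes from.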


Subject(s)
Facial Expression , Neural Networks, Computer , Humans , Automated Facial Recognition/methods , Algorithms , Image Processing, Computer-Assisted/methods , Face , Pattern Recognition, Automated/methods
6.
Behav Sci (Basel) ; 14(6)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38920840

ABSTRACT

Ensemble coding allows observers to form an average to represent a set of elements. However, it is unclear whether observers can extract an average from a cross-category set. Previous investigations of this issue using low-level stimuli yielded contradictory results. The current study addressed the question by presenting high-level stimuli (i.e., a crowd of facial expressions) simultaneously (Experiment 1) or sequentially (Experiment 2) and asking participants to complete a member judgment task. The results showed that participants could extract average information from a group of cross-category facial expressions with a short perceptual distance. These findings demonstrate cross-category ensemble coding of high-level stimuli, contributing to the understanding of ensemble coding and providing inspiration for future research.

7.
Ergonomics ; : 1-21, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38832783

ABSTRACT

The affective experience generated when users play computer games can influence their attitude toward, and preference for, the game. Existing evaluation methods mainly depend on subjective scales and physiological signals, both of which have limitations that should not be ignored (e.g. subjective scales are not objective, and physiological signals are complicated). In this paper, we 1) propose a novel method to assess user affective experience when playing single-player games based on pleasure-arousal-dominance (PAD) emotions, facial expressions, and gaze directions, and 2) build an artificial intelligence model to identify user preference. Fifty-four subjects participated in a basketball experiment with three difficulty levels. Their expressions, gaze directions, and subjective PAD emotions were collected and analysed. Experimental results showed that the intensities of angry, sad, and neutral expressions, the yaw angles of gaze direction, and PAD emotions varied significantly across difficulty levels. In addition, the proposed model outperformed other machine-learning algorithms on the collected dataset.


This paper considers the limitations of existing methods for assessing user affective experience when playing computer games. It demonstrates a novel approach using subjective emotion and objective facial cues to identify user affective experience and user preference for the game.

8.
Cogn Neurodyn ; 18(3): 863-875, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826642

ABSTRACT

The human brain can perform Facial Expression Recognition (FER) effectively from only a few samples by exploiting its cognitive abilities. Unlike the human brain, however, even a well-trained deep neural network is data-dependent and lacks cognitive ability. To tackle this challenge, this paper proposes a novel framework, Brain Machine Generative Adversarial Networks (BM-GAN), which uses the brain's cognitive ability to guide a convolutional neural network to generate LIKE-electroencephalograph (EEG) features. Specifically, we first obtain EEG signals elicited by facial emotion images, then adopt BM-GAN to carry out the mutual generation of image visual features and EEG cognitive features. BM-GAN uses the cognitive knowledge learnt from EEG signals to teach the model to perceive LIKE-EEG features, giving it human-like performance on FER. The proposed model consists of VisualNet, which obtains image visual features from facial emotion images; EEGNet, which obtains EEG cognitive features from EEG signals; and BM-GAN, which completes the mutual generation of the two feature types. Finally, the predicted LIKE-EEG features of test images are used for FER. After learning, and without any EEG signals at test time, an average classification accuracy of 96.6% is obtained on the Chinese Facial Affective Picture System dataset using LIKE-EEG features. Experiments demonstrate that the proposed method performs excellently on FER.

9.
Psych J ; 13(3): 398-406, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38830603

ABSTRACT

Facial expressions in infants have been noted to create a spatial attention bias when compared with adult faces. Yet, there is limited understanding of how adults perceive the timing of infant facial expressions. To investigate this, we used both infant and adult facial expressions in a temporal bisection task. In Experiment 1, we compared duration judgments of neutral infant and adult faces. The results revealed that participants felt that neutral infant faces lasted for a shorter time than neutral adult faces, independent of participant sex. Experiment 2 employed sad (crying) facial expressions. Here, the female participants perceived that the infants' faces were displayed for a longer duration than the adults' faces, whereas this distinction was not evident among the male participants. These findings highlight the influence of the babyface schema on time perception, nuanced by emotional context and sex-based individual variances.


Subject(s)
Crying , Facial Expression , Time Perception , Humans , Female , Male , Adult , Infant , Facial Recognition/physiology , Emotions , Attention , Sex Factors
11.
Front Psychol ; 15: 1281857, 2024.
Article in English | MEDLINE | ID: mdl-38845772

ABSTRACT

The rapid detection of neutral faces with emotional value plays an important role in social relationships for both young and older adults. Recent psychological studies have indicated that young adults show efficient value learning for neutral faces and the detection of "value-associated faces," while older adults show slightly different patterns of value learning and value-based detection of neutral faces. However, the mechanisms underlying these processes remain unknown. To investigate this, we applied hierarchical reinforcement learning and diffusion models to a value learning task and value-driven detection task that involved neutral faces; the tasks were completed by young and older adults. The results for the learning task suggested that the sensitivity of learning feedback might decrease with age. In the detection task, the younger adults accumulated information more efficiently than the older adults, and the perceptual time leading to motion onset was shorter in the younger adults. In younger adults only, the reward sensitivity during associative learning might enhance the accumulation of information during a visual search for neutral faces in a rewarded task. These results provide insight into the processing linked to efficient detection of faces associated with emotional values, and the age-related changes therein.

12.
Neurosci Biobehav Rev ; 162: 105684, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38710425

ABSTRACT

Facial expression is a critical form of nonverbal social communication which promotes emotional exchange and affiliation among humans. Facial expressions are generated via precise contraction of the facial muscles, guided by sensory feedback. While the neural pathways underlying facial motor control are well characterized in humans and primates, it remains unknown how tactile and proprioceptive information reaches these pathways to guide facial muscle contraction. Thus, despite the importance of facial expressions for social functioning, little is known about how they are generated as a unique sensorimotor behavior. In this review, we highlight current knowledge about sensory feedback from the face and how it is distinct from other body regions. We describe connectivity between the facial sensory and motor brain systems, and call attention to the other brain systems which influence facial expression behavior, including vision, gustation, emotion, and interoception. Finally, we petition for more research on the sensory basis of facial expressions, asserting that incomplete understanding of sensorimotor mechanisms is a barrier to addressing atypical facial expressivity in clinical populations.


Subject(s)
Facial Expression , Humans , Feedback, Sensory/physiology , Facial Muscles/physiology , Animals , Emotions/physiology , Brain/physiology
13.
Biology (Basel) ; 13(5)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38785773

ABSTRACT

The evolution of facial muscles in dogs has been linked to humans' preferential selection of dogs whose faces appear to communicate information and emotion. Dogs who convey, especially with their eyes, a sense of perceived helplessness can elicit a caregiving response from humans. However, the facial muscles used to generate such expressions may not be uniquely present in all dogs, but rather specifically cultivated among various taxa and individuals. In a preliminary, qualitative gross anatomical evaluation of 10 canid specimens of various species, we find that two facial muscles previously implicated in human-directed canine communication, the levator anguli oculi medialis (LAOM) and the retractor anguli oculi lateralis (RAOL), are not unique to domesticated dogs (Canis familiaris). Our results suggest that these aspects of facial musculature do not necessarily reflect selection via human domestication and breeding. In addition to quantitatively evaluating more members of the Canidae family, future directions should include analyses of the impact of superficial facial features on canine communication and on interspecies communication between dogs and humans.

14.
Behav Sci (Basel) ; 14(5)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38785846

ABSTRACT

Uncertainties and discrepant results in identifying the areas crucial for emotional facial expression recognition may stem from the eye tracking data analysis methods used. Many studies employ analysis parameters that prioritize the foveal visual angle, ignoring the potential influence of simultaneous parafoveal and peripheral information. To explore possible causes of these discrepancies, we investigated the role of the visual field aperture in emotional facial expression recognition with 163 volunteers randomly assigned to three groups: no visual restriction (NVR), parafoveal and foveal vision (PFFV), and foveal vision only (FV). Employing eye tracking and gaze contingency, we collected visual inspection and judgment data for 30 frontal face images, equally distributed among five emotions. Raw eye tracking data were processed with Eye Movements Metrics and Visualizations (EyeMMV). Visual inspection time, number of fixations, and fixation duration all increased as the visual field was restricted. Accuracy, however, differed significantly between the NVR and FV groups and between the PFFV and FV groups, but not between NVR and PFFV. The findings underscore the impact of specific visual field areas on facial expression recognition, highlighting the importance of parafoveal vision, and suggest that eye tracking analyses should incorporate projection angles extending at least to the parafoveal level.
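Fixation-based metrics of the kind mentioned above (number of fixations, fixation duration) can be sketched with a toy velocity-threshold detector; the sampling rate, threshold, and data are arbitrary assumptions, not EyeMMV's actual algorithm:

```python
import numpy as np

def fixation_metrics(gaze: np.ndarray, hz: float = 100.0, vel_thresh: float = 30.0):
    """gaze: (T, 2) screen coordinates in pixels, sampled at `hz` Hz.
    Returns (number of fixations, mean fixation duration in seconds)."""
    # Sample-to-sample gaze velocity in px/s; slow samples count as fixating.
    vel = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * hz
    fixating = vel < vel_thresh
    # Find contiguous runs of fixation samples.
    edges = np.diff(fixating.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if fixating[0]:
        starts = np.r_[0, starts]
    if fixating[-1]:
        ends = np.r_[ends, len(fixating)]
    if len(starts) == 0:
        return 0, 0.0
    durations = (ends - starts) / hz
    return len(starts), float(durations.mean())

# Two stationary gaze clusters separated by one saccade-like jump.
gaze = np.vstack([np.tile([100.0, 100.0], (50, 1)),
                  np.tile([400.0, 400.0], (50, 1))])
n_fix, mean_dur = fixation_metrics(gaze)
print(n_fix, mean_dur)
```

Production eye-tracking toolkits use dispersion- or velocity-based detectors with calibrated thresholds; this sketch only shows how fixation count and duration fall out of run-length analysis.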

15.
Front Psychol ; 15: 1379652, 2024.
Article in English | MEDLINE | ID: mdl-38725946

ABSTRACT

The development of facial expression recognition ability in children is crucial for their emotional cognition and social interactions. In this study, 510 children aged 6 to 15 completed a two-alternative forced-choice facial expression recognition task. The findings indicated that recognition of the six basic facial expressions reaches a relatively stable, mature level around 8-9 years of age. Model fitting showed that children improved most markedly in recognizing disgust, closely followed by fear, whereas recognition of happiness and sadness improved more slowly across age groups. Regarding gender differences, girls exhibited a more pronounced advantage. Further model fitting revealed that boys improved more markedly in recognizing disgust, fear, and anger, while girls improved more markedly in recognizing surprise, sadness, and happiness. These findings outline the developmental trajectory of facial expression recognition from childhood to adolescence, likely shaped by socialization processes and brain maturation.

16.
Sci Rep ; 14(1): 12250, 2024 05 28.
Article in English | MEDLINE | ID: mdl-38806507

ABSTRACT

Mona Lisa's ambiguous expression, oscillating between melancholy and contentment, has captivated viewers for centuries and prompted diverse explanations. This article proposes a novel interpretation grounded in the psychological theory of perceptual organization. Central to the investigation is the "Ambiguity-Nuance", a subtly shaded, blended region framing the upper part of the lips, hypothesized to influence perceived expression through perceptual organization. Through carefully crafted artwork and systematic manipulations of Mona Lisa reproductions, experiments reveal how alterations in the perceptual relationships of the Ambiguity-Nuance yield significant shifts in perceived expression, explaining why Mona Lisa's appearance changes and under which conditions she looks content versus melancholic. These findings underscore the pivotal role of psychological principles in shaping ambiguous expressions in the Mona Lisa, and extend to other portraits by Leonardo, namely La Bella Principessa and the Scapigliata. This study sheds light on the intersection of psychology and art, offering new perspectives on timeless masterpieces.


Subject(s)
Smiling , Humans , Female , Facial Expression , Famous Persons , Paintings
17.
Cortex ; 175: 1-11, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691922

ABSTRACT

Studies have reported substantial variability in emotion recognition ability (ERA) - an important social skill - but the possible neural underpinnings of such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) selected for inclusion based on their high or low performance in previous ERA testing. Participants judged brief video recordings in a forced-choice emotion recognition task, with stimuli presented in visual, auditory, and multimodal (audiovisual) blocks. Recognition rates during scanning confirmed that individuals with high (vs. low) ERA achieved higher accuracy in all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in processing multimodal emotion expressions, based on previous meta-analyses. Contrasting emotional with neutral stimuli, individuals with high (vs. low) ERA showed greater activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, the results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS), and evaluation (IFC) of emotional information.


Subject(s)
Brain Mapping , Emotions , Individuality , Magnetic Resonance Imaging , Recognition, Psychology , Humans , Male , Female , Emotions/physiology , Young Adult , Adult , Recognition, Psychology/physiology , Brain/physiology , Brain/diagnostic imaging , Facial Expression , Photic Stimulation/methods , Facial Recognition/physiology
18.
Cogn Neurodyn ; 18(2): 317-335, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699622

ABSTRACT

Facial expressions convey a person's internal emotions within a given scenario and play a major role in human social interaction. In automatic Facial Expression Recognition (FER) systems, the feature extraction method largely determines system performance. Drawing inspiration from the Swastik symbol, three texture-based feature descriptors named Symbol Patterns (SP1, SP2, and SP3) are proposed for facial feature extraction. SP1 generates one pattern value by comparing eight pixels within a 3×3 neighborhood, whereas SP2 and SP3 generate two pattern values each by comparing twelve and sixteen pixels, respectively, within a 5×5 neighborhood. The proposed Symbol Patterns (SP) were evaluated with natural, Fibonacci, odd, prime, square, and binary weights to determine the optimal recognition accuracy. The SP methods were tested on the MUG, TFEID, CK+, KDEF, FER2013, and FERG datasets, and the experimental results demonstrated improved recognition accuracy compared to existing FER methods.
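A pattern value in the spirit of SP1 can be sketched as a local binary comparison with configurable weights; the neighbour ordering and default weights here are assumptions — the paper's exact Swastik-inspired sampling is not reproduced:

```python
import numpy as np

def pattern_value(window: np.ndarray, weights=None) -> int:
    """window: 3x3 array of pixel intensities -> one texture pattern value.
    Compares eight neighbours against the centre pixel and weights the bits."""
    centre = window[1, 1]
    # Eight neighbours, clockwise from top-left.
    neighbours = window.flatten()[[0, 1, 2, 5, 8, 7, 6, 3]]
    bits = (neighbours >= centre).astype(int)
    if weights is None:
        weights = 2 ** np.arange(8)  # binary weights, one of the listed options
    return int(np.dot(bits, weights))

w = np.array([[5, 9, 1],
              [3, 4, 7],
              [2, 8, 6]])
print(pattern_value(w))
```

Swapping the weight vector (e.g. for Fibonacci or prime weights) changes how the comparison bits are encoded, which is the design axis the abstract says was evaluated.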

19.
Eur Eat Disord Rev ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38708578

ABSTRACT

OBJECTIVE: The study investigated interpersonal distance in patients with anorexia nervosa (AN), focussing on the role of others' facial expressions and body morphology, while also assessing physiological and subjective responses. METHOD: Twenty-nine patients with AN and 30 controls (CTL) were exposed to virtual characters with either an angry, neutral, or happy facial expression, or an overweight, normal-weight, or underweight morphology, presented in near or far space while electrodermal activity was recorded. Participants judged their preferred interpersonal distance from the characters and rated them in terms of valence and arousal. RESULTS: Unlike CTL, patients with AN exhibited heightened electrodermal activity for morphological stimuli only when these were presented in near space. They also preferred larger interpersonal distances from overweight characters and smaller distances from underweight characters, although they rated both negatively. Finally, like CTL, they preferred a larger interpersonal distance from angry than from neutral or happy characters. DISCUSSION: Although patients with AN exhibited behavioural responses to emotional stimuli similar to CTL, they lacked the corresponding physiological response, indicating emotional blunting towards emotional social stimuli. Moreover, they showed distinct behavioural and physiological adjustments in response to body shape, confirming the specific emotional significance attached to body shape.

20.
Article in English | MEDLINE | ID: mdl-38761256

ABSTRACT

A better understanding of social deficits in alcohol use disorder (AUD) has the potential to improve our understanding of the disorder. Clinical research shows that AUD is associated with interpersonal problems and the loss of a social network which impedes response to treatment. Translational research between animal models and clinical research may benefit from a discussion of the models and methods that currently guide research into social cognition in AUD. We propose that research in AUD should harness recent technological developments to improve ecological validity while maintaining experimental control. Novel methods allow us to parse naturalistic social cognition into tangible components, and to investigate previously neglected aspects of social cognition. Furthermore, to incorporate social cognition as a defining element of AUD, it is critical to clarify the timing of these social disturbances. Currently, there is limited evidence to distinguish factors that influence social cognition as a consequence of AUD, and those that precede the onset of the disorder. Both increasing the focus on operationalization of social cognition into objective components and adopting a perspective that spans the clinical spectrum will improve our understanding in humans, but also possibly increase methodological consistency and translational dialogue across species. This commentary underscores current challenges and perspectives in this area of research.
