Results 1 - 8 of 8
1.
Behav Sci (Basel) ; 14(5)2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38785846

ABSTRACT

Uncertainties and discrepant results in identifying the areas crucial for emotional facial expression recognition may stem from the eye tracking data analysis methods used. Many studies employ analysis parameters that prioritize the foveal vision angle, ignoring the potential influence of simultaneous parafoveal and peripheral information. To explore the possible causes of these discrepancies, we investigated the role of the visual field aperture in emotional facial expression recognition with 163 volunteers randomly assigned to three groups: no visual restriction (NVR), parafoveal and foveal vision (PFFV), and foveal vision (FV). Employing eye tracking and gaze contingency, we collected visual inspection and judgment data for 30 frontal face images, equally distributed among five emotions. Raw eye tracking data underwent Eye Movements Metrics and Visualizations (EyeMMV) processing. Visual inspection time, number of fixations, and fixation duration increased as the visual field was restricted. Accuracy, however, differed significantly between the NVR/FV and PFFV/FV group pairs, with no difference between NVR and PFFV. The findings underscore the impact of specific visual field areas on facial expression recognition, highlighting the importance of parafoveal vision, and suggest that eye tracking data analysis methods should incorporate projection angles extending to at least the parafoveal level.
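To make the aperture sizes in such gaze-contingent designs concrete, a minimal Python sketch is given below that converts a visual angle into an on-screen radius in pixels from viewing distance and screen geometry. The specific angles (about 2° for foveal and about 5° for parafoveal vision), the viewing distance, and the screen dimensions are illustrative assumptions, not parameters reported by the study.

```python
import math

def visual_angle_to_pixels(angle_deg, viewing_distance_cm, screen_width_cm, screen_width_px):
    """Convert a visual angle to an on-screen radius in pixels.

    Assumes a centred stimulus viewed orthogonally; all parameter
    values used below are illustrative, not from the study.
    """
    # Physical radius subtended on the screen by half the visual angle.
    radius_cm = viewing_distance_cm * math.tan(math.radians(angle_deg / 2))
    pixels_per_cm = screen_width_px / screen_width_cm
    return radius_cm * pixels_per_cm

# Hypothetical setup: 60 cm viewing distance, 53 cm wide screen at 1920 px.
for label, angle in [("foveal (~2 deg)", 2.0), ("parafoveal (~5 deg)", 5.0)]:
    r = visual_angle_to_pixels(angle, 60.0, 53.0, 1920)
    print(f"{label}: aperture radius ~{r:.0f} px")
```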

2.
Sensors (Basel) ; 22(10)2022 May 14.
Article in English | MEDLINE | ID: mdl-35632160

ABSTRACT

Social robotics is an emerging area that is becoming present in social spaces through the introduction of autonomous social robots. Social robots offer services, perform tasks, and interact with people in such social environments, demanding more efficient and complex Human-Robot Interaction (HRI) designs. One strategy to improve HRI is to give robots the capacity to detect the emotions of the people around them, so that they can plan a trajectory, modify their behaviour, and generate an appropriate interaction based on the analysed information. However, in social environments where groups of people are common, new approaches are needed so that robots can recognise groups of people and the emotion of those groups, which can also be associated with the scene in which the group is participating. Some existing studies focus on detecting group cohesion and recognising group emotions; nevertheless, these works do not perform the recognition tasks from a robocentric perspective that considers the sensory capacity of robots. In this context, a system is presented that recognises scenes in terms of groups of people and then detects the global (prevailing) emotion in each scene. The proposed approach to visualising and recognising emotions in typical HRI is based on the face size of the people recognised by the robot during its navigation (face sizes decrease as the robot moves away from a group of people). On each frame of the visual sensor's video stream, individual emotions are recognised with the Visual Geometry Group (VGG) neural network pre-trained to recognise faces (VGGFace). To detect the emotion of a frame, the individual emotions are aggregated with a fusion method; to detect the global (prevalent) emotion in the scene (group of people), the emotions of the scene's constituent frames are aggregated in turn. Additionally, this work proposes a strategy to create image/video datasets for validating the estimation of scene and personal emotions. Both datasets are generated in a simulated environment based on the Robot Operating System (ROS) from videos captured by robots through their sensory capabilities. Tests are performed in two simulated environments in ROS/Gazebo: a museum and a cafeteria. Results show that the accuracy of individual emotion detection is 99.79%, and the accuracy of group (scene) emotion detection in each frame is 90.84% and 89.78% in the cafeteria and the museum scenarios, respectively.
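One way to picture the two-level fusion described above is as a weighted aggregation of per-face emotion probabilities into a frame-level distribution, followed by aggregation across frames into a scene-level (prevailing) emotion. The sketch below weights each face by its detected area, echoing the face-size cue mentioned in the abstract; the emotion labels, the area weighting, and the simple averaging are illustrative assumptions rather than the paper's actual fusion method.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def fuse_frame(face_probs, face_areas):
    """Aggregate per-face emotion probabilities into one frame-level
    distribution, weighting each face by its detected area (a proxy for
    proximity to the robot). face_probs: (n_faces, n_emotions)."""
    weights = np.asarray(face_areas, dtype=float)
    weights /= weights.sum()
    return weights @ np.asarray(face_probs, dtype=float)

def fuse_scene(frame_distributions):
    """Aggregate frame-level distributions into a scene-level (prevailing)
    emotion by averaging them and taking the arg-max."""
    mean = np.mean(frame_distributions, axis=0)
    return EMOTIONS[int(np.argmax(mean))], mean

# Toy example: one frame with two faces; the closer (larger) face looks happy.
frame = fuse_frame(
    face_probs=[[0.1, 0, 0, 0.7, 0.2, 0, 0],
                [0.6, 0, 0, 0.1, 0.3, 0, 0]],
    face_areas=[9000, 1500],
)
label, dist = fuse_scene([frame])
print(label, np.round(dist, 2))
```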


Subject(s)
Robotics , Emotions , Humans , Robotics/methods , Social Interaction , Social Perception
3.
Healthcare (Basel) ; 10(4)2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35455835

ABSTRACT

Humans express their emotions verbally and through actions, and emotions therefore play a fundamental role in facial expressions and body gestures. Facial expression recognition is a popular topic in security, healthcare, entertainment, advertisement, education, and robotics. Detecting facial expressions via gesture recognition is a complex and challenging problem, especially in persons with facial impairments, such as patients with facial paralysis. Facial palsy, or facial paralysis, refers to the inability to move the facial muscles on one or both sides of the face. This work proposes a methodology based on neural networks and handcrafted features to recognize six gestures in patients with facial palsy. The proposed facial palsy gesture recognition system is designed and evaluated on a publicly available database, with good results, as a first attempt to perform this task in the medical field. We conclude that, to recognize facial gestures in patients with facial paralysis, the severity of the damage has to be considered, because paralyzed facial structures behave differently from healthy ones, and any recognition system must be capable of discerning these behaviors.
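As a rough illustration of combining handcrafted features with a neural classifier, the sketch below feeds vectors of landmark-derived measurements into a small multilayer perceptron over six placeholder gesture labels. The gesture names, feature dimensionality, network shape, and synthetic data are assumptions for illustration only, not the methodology reported in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder gesture labels; the paper's six gestures may differ.
GESTURES = ["smile", "eyebrow_raise", "eye_closure", "frown", "lip_pucker", "cheek_puff"]

# Synthetic data: each row is a vector of handcrafted, landmark-derived
# measurements (e.g., normalized distances/asymmetries between face halves).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = rng.integers(0, len(GESTURES), size=300)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
clf.fit(X, y)
print("training accuracy on synthetic data:", clf.score(X, y))
```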

4.
Sensors (Basel) ; 20(17)2020 Aug 27.
Article in English | MEDLINE | ID: mdl-32867182

ABSTRACT

An essential aspect of the interaction between people and computers is the recognition of facial expressions. A key issue in this process is selecting relevant features to classify facial expressions accurately. This study examines the selection of optimal geometric features to classify six basic facial expressions: happiness, sadness, surprise, fear, anger, and disgust. Inspired by the Facial Action Coding System (FACS) and the Moving Picture Experts Group 4th standard (MPEG-4), an initial set of 89 features was proposed. These features are normalized distances and angles in 2D and 3D computed from 22 facial landmarks. To select a minimum set of features with the maximum classification accuracy, two selection methods and four classifiers were tested. The first selection method, principal component analysis (PCA), obtained 39 features. The second, a genetic algorithm (GA), obtained 47 features. The experiments ran on the Bosphorus and UIVBFED data sets with 86.62% and 93.92% median accuracy, respectively. Our main finding is that the reduced feature set obtained by the GA is the smallest among methods of comparable accuracy, which has implications for reducing recognition time.
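A minimal sketch of the geometric-feature idea, assuming only 2D landmarks and pairwise distances normalized by inter-ocular distance, followed by PCA for dimensionality reduction, might look like the following. The landmark indices, the normalization choice, and the synthetic data are illustrative; the paper's full feature set also includes angles and 3D measurements.

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA

def geometric_features(landmarks, left_eye=0, right_eye=1):
    """Compute pairwise distances between 2D landmarks, normalized by the
    inter-ocular distance so the features are scale-invariant.
    landmarks: (n_points, 2) array; eye indices are illustrative."""
    pts = np.asarray(landmarks, dtype=float)
    iod = np.linalg.norm(pts[left_eye] - pts[right_eye])
    dists = [np.linalg.norm(pts[i] - pts[j]) / iod
             for i, j in combinations(range(len(pts)), 2)]
    return np.array(dists)

# Toy data: 200 faces, each with 22 landmarks in 2D.
rng = np.random.default_rng(0)
faces = rng.normal(size=(200, 22, 2))
X = np.stack([geometric_features(f) for f in faces])   # 231 pairwise distances

pca = PCA(n_components=39)   # 39 components, echoing the count reported for PCA
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)
```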


Subject(s)
Automated Facial Recognition , Emotions , Facial Expression , Humans
5.
Foods ; 9(6)2020 Jun 11.
Article in English | MEDLINE | ID: mdl-32545344

ABSTRACT

Sensory experiences play an important role in consumer response, purchase decisions, and loyalty towards food products. Consumer studies conducted when launching new food products must incorporate physiological response assessment to be more precise and thus increase the products' chances of success in the market. This paper introduces a novel sensory analysis system that incorporates facial emotion recognition (FER), galvanic skin response (GSR), and cardiac pulse to determine consumer acceptance of food samples. Taste and smell experiments were conducted with 120 participants, recording facial images, biometric signals, and reported liking while they tried a set of pleasant and unpleasant flavors and odors. Data fusion and analysis by machine learning models allow prediction of the acceptance elicited by the samples. The results confirm that FER alone is not sufficient to determine consumers' acceptance. However, when combined with GSR and, to a lesser extent, with pulse signals, acceptance prediction can be improved. This research targets predicting consumers' acceptance without the continuous use of liking scores. In addition, the findings of this work may be used to explore the relationships between facial expressions and physiological reactions in non-rational decision-making when interacting with new food products.
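The data fusion step can be pictured as feature-level concatenation of FER, GSR, and pulse descriptors followed by a standard classifier. In the sketch below, the per-modality feature summaries, their dimensions, and the random forest are placeholders chosen for illustration, not the models evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # observations (synthetic here, one per participant for simplicity)

# Placeholder modality features: per-sample summaries of facial emotion
# probabilities, galvanic skin response, and cardiac pulse.
fer   = rng.normal(size=(n, 7))   # e.g., mean emotion probabilities
gsr   = rng.normal(size=(n, 4))   # e.g., phasic peaks, amplitude, latency
pulse = rng.normal(size=(n, 3))   # e.g., mean heart rate, variability
accepted = rng.integers(0, 2, size=n)  # liked / disliked label

X = np.hstack([fer, gsr, pulse])  # feature-level fusion by concatenation
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, accepted, cv=5)
print("cross-validated accuracy on synthetic data:", scores.mean().round(2))
```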

6.
Sensors (Basel) ; 19(13)2019 Jun 26.
Article in English | MEDLINE | ID: mdl-31248004

ABSTRACT

Child-Robot Interaction (CRI) has been increasingly addressed in research and applications. This work proposes a system for emotion recognition in children that records facial images with both visual (RGB: red, green, and blue) and Infrared Thermal Imaging (IRTI) cameras. For this purpose, the Viola-Jones algorithm is used on the color images to detect facial regions of interest (ROIs), which are transferred to the thermal camera plane by applying a homography matrix obtained through the calibration of the camera system. As a novelty, we propose computing the error probability for each ROI located over the thermal images, using a reference frame manually marked by a trained expert, in order to choose the ROI best placed according to the expert criteria. This selected ROI is then used to relocate the other ROIs, increasing their concordance with the reference manual annotations. Afterwards, further methods for feature extraction, dimensionality reduction through Principal Component Analysis (PCA), and pattern classification by Linear Discriminant Analysis (LDA) are applied to infer emotions. The results show that our approach to ROI location can track facial landmarks with significantly lower errors than the traditional Viola-Jones algorithm. These ROIs have been shown to be relevant for the recognition of five emotions, specifically disgust, fear, happiness, sadness, and surprise, with our PCA- and LDA-based recognition system achieving mean accuracy (ACC) and Kappa values of 85.75% and 81.84%, respectively. As a second stage, the proposed recognition system was trained with a dataset of thermal images collected from 28 typically developing children, in order to infer one of five basic emotions (disgust, fear, happiness, sadness, and surprise) during a child-robot interaction. The results show that our system can be integrated into a social robot to infer child emotions during child-robot interaction.
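Two steps of this pipeline lend themselves to a short sketch: transferring an RGB-detected ROI into the thermal image plane with a calibration homography, and classifying ROI features with PCA followed by LDA. The homography values, ROI coordinates, and feature dimensions below are placeholders, and OpenCV's perspectiveTransform and scikit-learn's PCA/LDA stand in for the authors' implementation.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# --- ROI transfer: map RGB ROI corners into the thermal image plane. ---
H = np.array([[1.02, 0.01, -12.0],          # placeholder calibration homography
              [0.00, 1.01,  -8.0],
              [0.00, 0.00,   1.0]])
x, y, w, h = 120, 80, 60, 60                 # placeholder ROI from Viola-Jones on the RGB frame
corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
thermal_corners = cv2.perspectiveTransform(corners, H)
print("thermal-plane ROI corners:\n", thermal_corners.reshape(-1, 2))

# --- Emotion classification: PCA for reduction, LDA for classification. ---
rng = np.random.default_rng(0)
X = rng.normal(size=(250, 100))              # placeholder thermal ROI features
labels = rng.integers(0, 5, size=250)        # disgust, fear, happiness, sadness, surprise
model = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
model.fit(X, labels)
print("training accuracy on synthetic data:", model.score(X, labels))
```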


Subject(s)
Emotions/physiology , Face/diagnostic imaging , Facial Expression , Image Processing, Computer-Assisted , Algorithms , Child , Discriminant Analysis , Fear/physiology , Female , Humans , Male , Pattern Recognition, Visual/physiology , Robotics
7.
Arq. neuropsiquiatr ; 73(5): 383-389, 05/2015. tab
Article in English | LILACS | ID: lil-746495

ABSTRACT

Facial expression recognition is one of the most important aspects of social cognition. In this study, we investigated the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions to picture stimuli, labelling emotions, and recognizing the emotion conveyed by a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. Linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia.
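The regression analysis can be sketched as an ordinary least-squares model with a global cognition score predicting the change in emotion-recognition performance between the two assessments; the variable names, score scales, and synthetic values below are assumptions, since the abstract does not specify the instruments used.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30  # participants with mild AD, assessed twice

cognition = rng.normal(20, 4, size=n)                 # hypothetical global cognition score
change = 0.5 * cognition + rng.normal(0, 3, size=n)   # synthetic change in emotion-recognition score

X = sm.add_constant(cognition)            # intercept + cognition as the predictor
model = sm.OLS(change, X).fit()
print(model.params, model.pvalues)        # is cognition a significant predictor?
```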




Subject(s)
Aged , Aged, 80 and over , Female , Humans , Male , Alzheimer Disease/psychology , Cognition/physiology , Emotions/physiology , Facial Expression , Recognition, Psychology , Alzheimer Disease/physiopathology , Epidemiologic Methods , Neuropsychological Tests , Quality of Life , Task Performance and Analysis , Time Factors
8.
Univ. psychol ; 6(2): 295-308, May-Aug. 2007. tab
Article in Spanish | LILACS | ID: lil-571883

ABSTRACT



Human emotional interchange involves the expression and recognition of emotions. The human face is a conspicuous place to express and read emotion. Certain emotions are associated with emotional tearing, which is differentiable from basal and reflex tearing. Murube, Murube and Murube (1999) classified emotional tearing into requesting-help and offering-help types. The validity of that typology was evaluated using faces of people of both sexes crying because of their own suffering or because of others' suffering. A group of judges classified the crying shown by those faces. Discrimination hit rates were calculated and chi-square tests were run by sex. The results do not support a human ability to distinguish the two types of crying and are interpreted from a cultural point of view.
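The hit-rate and chi-square analysis can be illustrated, with invented counts, as a 2x2 contingency table of actual versus judged crying type for one sex; the table values below are purely illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = actual crying type of the face
# (requesting-help vs. offering-help), columns = type assigned by judges.
table = np.array([[34, 30],
                  [29, 35]])

hit_rate = np.trace(table) / table.sum()       # proportion of correct classifications
chi2, p, dof, expected = chi2_contingency(table)
print(f"hit rate = {hit_rate:.2f}, chi2 = {chi2:.2f}, p = {p:.3f}")
```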


Subject(s)
Crying/psychology , Nonverbal Communication , Facial Expression