1.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676235

ABSTRACT

Most human emotion recognition methods largely depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden, cannot be expressed, or may have lower arousal manifested by less pronounced facial expressions, as may occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely from short facial video data without relying on stereotypical facial expressions or contact-based methods. In this approach, we aim to remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. In this paper, we propose several improvements, which include better remote heart rate estimation via a preliminary skin segmentation, an improved heartbeat peak-and-trough detection process, and better emotion classification accuracy achieved by employing an appropriate deep-learning classifier using only RGB camera input data. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos that elicited the following five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution spatiotemporal, physiologically affected features and examined them as input features with different deep-learning approaches.
An EfficientNet-B0 model type was able to classify participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.


Subject(s)
Deep Learning , Emotions , Facial Expression , Heart Rate , Humans , Emotions/physiology , Heart Rate/physiology , Video Recording/methods , Image Processing, Computer-Assisted/methods , Face/physiology , Female , Male
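The remote heart-rate estimation step this abstract describes (skin segmentation followed by extraction of a cardiovascular signal from RGB video) can be sketched as a minimal rPPG pipeline. The rule-based skin mask and the green-channel averaging below are illustrative simplifications, not the authors' actual segmenter or feature extractor:

```python
import numpy as np

def estimate_heart_rate(frames, fps=30.0):
    """Estimate heart rate (BPM) from facial video frames via a crude
    rPPG pipeline: skin segmentation, green-channel averaging over skin
    pixels, and a dominant-frequency search in the cardiac band (0.7-4 Hz).
    """
    signal = []
    for frame in frames:  # each frame: (H, W, 3) uint8 RGB array
        r = frame[..., 0].astype(float)
        g = frame[..., 1].astype(float)
        b = frame[..., 2].astype(float)
        # Simple rule-based skin mask (illustrative thresholds only)
        mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
        if not mask.any():
            mask = np.ones(r.shape, dtype=bool)  # fall back to whole frame
        # Green channel carries the strongest pulse-induced color change
        signal.append(g[mask].mean())
    signal = np.asarray(signal)
    signal -= signal.mean()
    # Dominant frequency within the physiologically plausible band
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    hr_hz = freqs[band][np.argmax(power[band])]
    return hr_hz * 60.0  # beats per minute
```

In practice the spatiotemporal feature maps fed to the classifier would be built per facial region rather than from a single global average, but the band-limited frequency search is the common core of camera-based heart-rate estimation.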
2.
IEEE J Biomed Health Inform ; 27(12): 5755-5766, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37703166

ABSTRACT

Standard recordings of electrocardiographic signals are contaminated by a large variety of noises and interferences, which impair their analysis and the related diagnosis. In this article, we propose a method, based on compressive sensing techniques, to remove the main noise artifacts and to locate the main features of the pulses in the electrocardiogram (ECG). The motivation is to use trend filtering with a varying proximal parameter in order to sequentially capture the peaks of the ECG, which have different functional regularities. The practical implementation is based on an adaptive version of the alternating direction method of multipliers (ADMM) algorithm. We present results obtained on simulated signals and on real data illustrating the validity of this approach, showing that peak-localization results are very good in both cases and comparable to state-of-the-art approaches.


Subject(s)
Data Compression , Signal Processing, Computer-Assisted , Humans , Algorithms , Electrocardiography/methods , Artifacts , Signal-To-Noise Ratio
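The ADMM machinery this abstract builds on can be sketched for standard l1 trend filtering, which denoises a signal toward a piecewise-linear fit by penalizing second differences. This is a generic fixed-parameter sketch, not the authors' adaptive varying-proximal-parameter variant:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def l1_trend_filter(y, lam=1.0, rho=1.0, n_iter=300):
    """Solve  min_x 0.5*||x - y||^2 + lam*||D x||_1  via ADMM,
    where D is the second-difference operator, yielding a
    piecewise-linear denoised estimate of y."""
    n = len(y)
    # Second-order difference matrix D: (n-2) x n
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    z = np.zeros(n - 2)
    u = np.zeros(n - 2)  # scaled dual variable
    A = np.eye(n) + rho * D.T @ D  # x-update system matrix (fixed for fixed rho)
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))  # quadratic x-update
        z = soft_threshold(D @ x, lam / rho)             # l1 proximal z-update
        u = u + D @ x - z                                # dual ascent
    return x
```

The adaptive scheme described in the paper would vary the proximal parameter across iterations to track ECG features of different regularities (sharp QRS complexes versus smooth T waves); in the fixed-parameter version above, `A` can be prefactorized once, which is what makes ADMM efficient here.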