Results 1 - 4 of 4
1.
Sci Data; 10(1): 351, 2023 Jun 2.
Article in English | MEDLINE | ID: mdl-37268686

ABSTRACT

With the popularization of low-cost mobile and wearable sensors, several studies have used them to track and analyze mental well-being, productivity, and behavioral patterns. However, open datasets collected in real-world contexts with affective and cognitive state labels such as emotion, stress, and attention remain scarce, and this scarcity limits research advances in affective computing and human-computer interaction. This study presents K-EmoPhone, a real-world multimodal dataset collected from 77 students over seven days. The dataset contains (1) continuously sensed peripheral physiological signals and mobility data measured by commercial off-the-shelf devices, (2) context and interaction data collected from individuals' smartphones, and (3) 5,582 self-reported affect states, including emotions, stress, attention, and task disturbance, acquired via the experience sampling method. We anticipate that the dataset will contribute to advances in affective computing, emotion intelligence technologies, and attention management based on mobile and wearable sensor data.
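As an illustration of how such labeled streams might be used, here is a minimal sketch of pairing an experience-sampling self-report with the window of a peripheral signal that preceded it; the data layout, column names, and window length are assumptions for illustration, not the released K-EmoPhone schema.

```python
import pandas as pd

# Hypothetical ESM responses (one row per prompt) and a continuous EDA stream;
# the actual K-EmoPhone files and column names will differ.
esm = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-01 10:00", "2023-05-01 12:30"]),
    "stress": [3, 5],
})
eda = pd.DataFrame({
    "timestamp": pd.date_range("2023-05-01 09:50", periods=200, freq="1min"),
    "eda": range(200),
})

def window_before(label_time, signal, minutes=10):
    """Return the sensor samples in the `minutes` preceding an ESM prompt."""
    start = label_time - pd.Timedelta(minutes=minutes)
    mask = (signal["timestamp"] >= start) & (signal["timestamp"] < label_time)
    return signal.loc[mask]

# Pair each self-reported stress label with the EDA window that preceded it.
pairs = [(row.stress, window_before(row.timestamp, eda)) for row in esm.itertuples()]
print(pairs[0][0], len(pairs[0][1]))   # first label with its 10-minute EDA window
```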


Subjects
Emotions, Wearable Electronic Devices, Humans, Attention, Self Report, Smartphone
2.
IEEE J Biomed Health Inform; 27(2): 912-923, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36446009

ABSTRACT

The automated recognition of human emotions plays an important role in developing machines with emotional intelligence. Major research efforts are dedicated to the development of emotion recognition methods. However, most affective computing models are based on images, audio, video, and brain signals. The literature lacks works that utilize only peripheral signals for emotion recognition (ER), which could ideally be deployed in daily-life settings. Therefore, this paper presents a framework for ER in the arousal-valence space based on multimodal peripheral signals. The data used in this work were collected with wearable devices during debates between two people. The emotions of the participants were rated by multiple raters and converted into classes corresponding to the arousal-valence space; the use of a dynamic threshold for this conversion was investigated. An ER model based on a Long Short-Term Memory (LSTM) architecture is proposed. The model takes heart rate (HR), temperature (T), and electrodermal activity (EDA) signals, which carry emotional cues, as inputs. Additionally, a post-processing prediction mechanism is introduced to enhance recognition performance. The model is used to study individual peripheral signals and their combinations, as well as annotations from different raters, and it classifies valence and arousal both independently and jointly, under subject-dependent and subject-independent scenarios. The experimental results demonstrate the strong performance of the proposed framework, which achieves classification accuracies of 96% and 93% for the independent and combined classification scenarios, respectively. Comparison against baseline methods shows the superiority of the proposed framework and its ability to recognize arousal-valence levels with high accuracy from peripheral signals in real-life scenarios.
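To make the described pipeline concrete, below is a minimal PyTorch sketch of an LSTM classifier over windows of HR, temperature, and EDA, plus a simple majority-vote smoothing pass standing in for the post-processing mechanism; the layer sizes, window length, and smoothing rule are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class PeripheralLSTM(nn.Module):
    """Sketch of an LSTM classifier over peripheral-signal windows."""
    def __init__(self, n_channels=3, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, channels) = HR, T, EDA
        _, (h_n, _) = self.lstm(x)     # final hidden state summarizes the window
        return self.head(h_n[-1])      # logits over arousal (or valence) classes

def smooth(preds, k=3):
    """Hypothetical post-processing: majority vote over k consecutive
    predictions; the paper's exact mechanism may differ."""
    out = preds.clone()
    for i in range(len(preds)):
        lo, hi = max(0, i - k // 2), min(len(preds), i + k // 2 + 1)
        out[i] = torch.mode(preds[lo:hi]).values
    return out

model = PeripheralLSTM()
windows = torch.randn(8, 120, 3)       # 8 windows of 120 time steps x 3 signals
preds = model(windows).argmax(dim=1)   # per-window class predictions
print(smooth(preds))
```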


Subjects
Brain, Emotions, Humans, Emotions/physiology, Communication, Arousal, Heart Rate, Electroencephalography
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 686-689, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891385

ABSTRACT

The automated recognition of human emotions plays an important role in developing machines with emotional intelligence. However, most affective computing models are based on images, audio, video, and brain signals. Few prior studies focus on utilizing only peripheral physiological signals for emotion recognition, which can ideally be implemented in daily-life settings using wearables, e.g., smartwatches. Here, an emotion classification method using peripheral physiological signals, obtained from wearable devices that enable continuous monitoring of emotional states, is presented. A Long Short-Term Memory neural network-based classification model is proposed to predict emotions in real time as binary levels and quadrants of the arousal-valence space. The peripheral sensor data used here were collected from 20 participants who engaged in a naturalistic debate. Different annotation schemes were adopted and their impact on classification performance was explored. Evaluation results demonstrate the capability of the method, with measured accuracies of >93% and >89% for binary levels and quadrant classes, respectively. This paves the way for enhancing the role of wearable devices in recognizing emotional states in everyday life.
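The conversion of continuous ratings into binary levels and quadrant classes can be sketched as follows; the thresholding rule used here (a per-sequence median, one plausible form of "dynamic" threshold) is an assumption, not necessarily the annotation scheme used in the paper.

```python
import numpy as np

def to_classes(arousal, valence, a_thr=None, v_thr=None):
    """Map continuous arousal/valence ratings to binary levels and quadrants."""
    a_thr = np.median(arousal) if a_thr is None else a_thr   # assumed dynamic threshold
    v_thr = np.median(valence) if v_thr is None else v_thr
    high_arousal = arousal >= a_thr                          # binary arousal level
    positive_valence = valence >= v_thr                      # binary valence level
    quadrant = 2 * high_arousal.astype(int) + positive_valence.astype(int)  # 0..3
    return high_arousal, positive_valence, quadrant

ratings_a = np.array([2.0, 4.5, 3.0, 1.5])
ratings_v = np.array([3.5, 2.0, 4.0, 2.5])
print(to_classes(ratings_a, ratings_v))
```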


Subjects
Electroencephalography, Short-Term Memory, Arousal, Emotions, Humans, Neural Networks (Computer)
4.
Sci Data; 7(1): 293, 2020 Sep 8.
Article in English | MEDLINE | ID: mdl-32901038

ABSTRACT

With the popularization of low-cost mobile sensors, recognizing emotions during social interactions has many potential applications, but the lack of naturalistic affective interaction data remains a challenge. Most existing emotion datasets were collected in constrained environments and therefore do not support studying idiosyncratic emotions arising in the wild. Studying emotions in the context of social interactions thus requires a novel dataset, and K-EmoCon is such a multimodal dataset, with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices during 16 sessions of approximately 10-minute paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset to accommodate multiperspective assessment of emotions during social interactions.
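A minimal sketch of working with such multiperspective annotations follows, aligning the three rating sources on the 5-second grid and taking a per-interval median as one possible consensus; the column names and consensus rule are assumptions for illustration, not the released K-EmoCon schema.

```python
import pandas as pd

# Hypothetical 5-second-interval arousal annotations from the three perspectives.
self_ann = pd.DataFrame({"seconds": [5, 10, 15], "self": [3, 4, 2]})
partner_ann = pd.DataFrame({"seconds": [5, 10, 15], "partner": [2, 4, 3]})
external_ann = pd.DataFrame({"seconds": [5, 10, 15], "external": [3, 3, 2]})

# Align the perspectives on the shared 5-second grid.
merged = self_ann.merge(partner_ann, on="seconds").merge(external_ann, on="seconds")

# One possible consensus label: the per-interval median across perspectives.
merged["consensus"] = merged[["self", "partner", "external"]].median(axis=1)
print(merged)
```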


Subjects
Emotions, Social Behavior, Speech, Arousal, Humans