Results 1 - 4 of 4

1.
Front Neurosci ; 17: 1188434, 2023.
Article in English | MEDLINE | ID: mdl-37292164

ABSTRACT

Introduction: Deep-learning methods based on convolutional neural networks (CNNs) have demonstrated impressive performance in depression analysis. Nevertheless, some critical challenges remain: (1) because of spatial locality, it is still difficult for CNNs to learn long-range inductive biases in the low-level feature extraction of different facial regions; (2) a model with only a single attention head struggles to concentrate on several parts of the face simultaneously, making it less sensitive to other important facial regions associated with depression. In facial depression recognition, many of the cues come from a few areas of the face at once, e.g., the mouth and eyes. Methods: To address these issues, we present an end-to-end integrated framework called the Hybrid Multi-head Cross Attention Network (HMHN), which consists of two stages. The first stage comprises the Grid-Wise Attention block (GWA) and the Deep Feature Fusion block (DFF) for low-level visual depression feature learning. In the second stage, we obtain a global representation by encoding high-order interactions among local features with the Multi-head Cross Attention block (MAB) and the Attention Fusion block (AFB). Results: We experimented on the AVEC2013 and AVEC2014 depression datasets. The results on AVEC 2013 (RMSE = 7.38, MAE = 6.05) and AVEC 2014 (RMSE = 7.60, MAE = 6.01) demonstrate the efficacy of our method, which outperforms most state-of-the-art video-based depression recognition approaches. Discussion: We proposed a hybrid deep learning model for depression recognition that captures higher-order interactions between the depression features of multiple facial regions, which can effectively reduce recognition error and holds great potential for clinical application.
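
The second-stage cross-attention described above can be illustrated with a minimal PyTorch sketch. This is not the authors' released code; the module name, tensor shapes, head count, and the learnable global query are assumptions made only to show how a global representation can be formed by cross-attending to local facial-region features.

import torch
import torch.nn as nn

class MultiHeadCrossAttentionSketch(nn.Module):
    """Illustrative stand-in for a multi-head cross-attention aggregation stage (assumed design)."""
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # A learnable "global" query token that cross-attends to local region features.
        self.global_query = nn.Parameter(torch.randn(1, 1, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 1)  # regression head for the depression score

    def forward(self, local_feats: torch.Tensor) -> torch.Tensor:
        # local_feats: (batch, num_regions, dim) low-level features, one vector per facial region
        q = self.global_query.expand(local_feats.size(0), -1, -1)
        global_repr, _ = self.cross_attn(q, local_feats, local_feats)  # encode interactions among regions
        global_repr = self.norm(global_repr.squeeze(1))
        return self.head(global_repr).squeeze(-1)  # predicted depression score per sample

# Usage with dummy data: 2 samples, 49 facial-region features of dimension 256.
scores = MultiHeadCrossAttentionSketch()(torch.randn(2, 49, 256))  # shape (2,)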

2.
Comput Biol Med ; 157: 106589, 2023 05.
Article in English | MEDLINE | ID: mdl-36934531

ABSTRACT

Artificial intelligence methods are widely applied to depression recognition and provide an objective solution. Many effective automated methods for detecting depression use facial expressions, which are strong indicators of psychiatric disorders. However, these methods suffer from insufficient representations of depression. To this end, we propose a novel Part-and-Relation Attention Network (PRA-Net), which can enhance depression representations by accurately focusing on features that are highly correlated with depression. Specifically, we first partition the feature map, rather than the original image, to obtain part features rich in semantic information. Self-attention is then used to calculate a weight for each part feature. Next, relation attention refines these weights by exploring the relationship between each part feature and the global content representation. Finally, all features are aggregated, using both sets of weights, into a more compact and depression-informative representation for depression score prediction. Extensive experiments demonstrate the superiority of our method: compared to other end-to-end methods, it achieves state-of-the-art performance on AVEC2013 and AVEC2014.


Subjects
Artificial Intelligence, Facial Expression, Humans, Depression/diagnosis, Semantics
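
The part-and-relation attention idea described in entry 2 (partition a feature map into part features, weight each part by self-attention, refine the weights by each part's relation to a global content vector, then aggregate) can be roughly sketched as below. The shapes, the mean-pooled global content vector, and the linear scoring layers are illustrative assumptions, not the published PRA-Net implementation.

import torch
import torch.nn as nn

class PartRelationPoolingSketch(nn.Module):
    """Assumed illustration of part-based attention pooling with relation refinement."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.part_score = nn.Linear(dim, 1)          # self-attention score per part feature
        self.relation_score = nn.Linear(2 * dim, 1)  # score of each part relative to global content
        self.head = nn.Linear(dim, 1)                # depression score prediction

    def forward(self, fmap: torch.Tensor) -> torch.Tensor:
        # fmap: (batch, dim, H, W) feature map from a CNN backbone
        parts = fmap.flatten(2).transpose(1, 2)            # (batch, H*W, dim) part features
        global_content = parts.mean(dim=1, keepdim=True)   # (batch, 1, dim) global content vector
        s = self.part_score(parts)                         # initial per-part weights
        r = self.relation_score(
            torch.cat([parts, global_content.expand_as(parts)], dim=-1))  # relation refinement
        weights = torch.softmax(s + r, dim=1)              # refined weights over parts
        pooled = (weights * parts).sum(dim=1)              # compact, depression-informative representation
        return self.head(pooled).squeeze(-1)

# Usage with dummy data: 2 samples, each a 512-channel 7x7 feature map.
score = PartRelationPoolingSketch()(torch.randn(2, 512, 7, 7))  # shape (2,)
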
3.
Article in English | MEDLINE | ID: mdl-36417750

ABSTRACT

In recent years, with the widespread reach of the Internet, social media has become an indispensable part of people's lives, serving as an essential tool for interaction and communication. Because data are convenient to acquire from social media, mental health research based on social media has received considerable attention. Early detection of psychological disorders from social media can help prevent further deterioration in at-risk people. In this paper, depression detection is performed based on non-verbal (acoustic and visual) behaviors in vlogs. We propose a time-aware attention-based multimodal fusion depression detection network (TAMFN) to fully mine and fuse multimodal features. TAMFN is built from a temporal convolutional network with global information (GTCN), an intermodal feature extraction (IFE) module, and a time-aware attention multimodal fusion (TAMF) module. The GTCN captures richer temporal behavior information by combining local and global temporal information. The IFE module extracts early interaction information between modalities to enrich the feature representation. The TAMF module guides multimodal feature fusion by mining the temporal importance of the different modalities. Our experiments are carried out on the D-Vlog dataset, and the comparative results show that the proposed TAMFN outperforms all benchmark models, indicating its effectiveness.


Subjects
Benchmarking, Depression, Humans, Depression/diagnosis, Communication, Internet
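
A minimal sketch of the time-aware multimodal fusion idea in entry 3 — each modality passes through a temporal convolution, then per-timestep attention decides how much each modality contributes before pooling — is given below. Feature dimensions, kernel size, and the fusion details are assumptions for illustration; this is not the TAMFN implementation.

import torch
import torch.nn as nn

class TimeAwareFusionSketch(nn.Module):
    """Assumed illustration of per-timestep attention over acoustic and visual streams."""
    def __init__(self, acoustic_dim: int = 25, visual_dim: int = 136, dim: int = 128):
        super().__init__()
        self.acoustic_tcn = nn.Conv1d(acoustic_dim, dim, kernel_size=3, padding=1)
        self.visual_tcn = nn.Conv1d(visual_dim, dim, kernel_size=3, padding=1)
        self.modal_attn = nn.Linear(dim, 1)   # scores each modality at each timestep
        self.classifier = nn.Linear(dim, 1)   # depressed vs. non-depressed logit

    def forward(self, acoustic: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # acoustic: (batch, T, acoustic_dim); visual: (batch, T, visual_dim)
        a = self.acoustic_tcn(acoustic.transpose(1, 2)).transpose(1, 2)  # (batch, T, dim)
        v = self.visual_tcn(visual.transpose(1, 2)).transpose(1, 2)      # (batch, T, dim)
        stacked = torch.stack([a, v], dim=2)                 # (batch, T, 2, dim)
        w = torch.softmax(self.modal_attn(stacked), dim=2)   # time-aware modality weights
        fused = (w * stacked).sum(dim=2).mean(dim=1)         # fuse modalities, pool over time
        return self.classifier(fused).squeeze(-1)

# Usage with dummy data: 2 vlogs, 50 timesteps of acoustic and visual features each.
logit = TimeAwareFusionSketch()(torch.randn(2, 50, 25), torch.randn(2, 50, 136))  # shape (2,)
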
4.
Article in English | MEDLINE | ID: mdl-36067098

ABSTRACT

Depression is a common mental illness that causes great harm to individuals. Given recent evidence that many objective physiological signals are associated with depression, automated depression detection is urgent and important in light of the growing concern about mental illness. We investigate the problem of classifying depression from facial expressions, which may aid in online diagnosis and rehabilitation engineering for depression. In this work, we propose a weakly supervised learning approach employing multiple instance learning (MIL) on 150 videos from 75 depressed and 75 healthy subjects. In addition, we present a novel MIL dual-stream aggregator that considers both the instance level and the bag level in order to emphasize symptom-related information. Specifically, our method, named ADDMIL, uses max-pooling at the instance level to capture symptom information and integrates the contribution of each instance at the bag level using attention weights. Our method achieves 74.7% accuracy and 74.5% recall on the collected dataset, which not only improves accuracy by 10.1% and recall by 9.8% over the baseline but also exceeds the best accuracy of prior MIL-based methods by 2.1%. Our work achieves results comparable to state-of-the-art methods and demonstrates that multiple instance learning has great potential for depression classification. To our knowledge, this is the first weakly supervised learning approach to detecting depression from raw facial expressions, and it may provide a new framework for detecting other psychiatric disorders.


Subjects
Algorithms, Facial Expression, Humans, Depression/diagnosis, Computer-Assisted Image Interpretation/methods, Mental Recall
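
The dual-stream MIL aggregator described in entry 4 — an instance-level stream that max-pools per-frame scores and a bag-level stream that forms an attention-weighted bag embedding — can be sketched as follows. The feature dimension, the layer choices, and the averaging of the two streams are illustrative assumptions rather than the ADDMIL implementation.

import torch
import torch.nn as nn

class DualStreamMILSketch(nn.Module):
    """Assumed illustration of a dual-stream MIL aggregator (instance-level + bag-level)."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.instance_clf = nn.Linear(dim, 1)   # instance-level scores, max-pooled over the bag
        self.attn = nn.Linear(dim, 1)           # attention weights over instances
        self.bag_clf = nn.Linear(dim, 1)        # bag-level classifier

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_instances, dim) frame-level features from one video (the bag)
        inst_stream = self.instance_clf(bag).max(dim=0).values   # max-pooling stream captures symptom frames
        weights = torch.softmax(self.attn(bag), dim=0)           # (num_instances, 1) contribution of each instance
        bag_embed = (weights * bag).sum(dim=0)                   # attention-weighted bag embedding
        bag_stream = self.bag_clf(bag_embed)                     # bag-level stream
        return (inst_stream + bag_stream) / 2                    # combined bag logit (illustrative combination)

# Usage with dummy data: one video treated as a bag of 300 frame features.
logit = DualStreamMILSketch()(torch.randn(300, 512))  # shape (1,)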