Results 1 - 2 of 2
1.
Clin Transl Oncol; 26(6): 1438-1445, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38194018

ABSTRACT

BACKGROUND: Lung adenocarcinoma is a common cause of cancer-related death worldwide, and accurate EGFR genotyping is crucial for optimal treatment outcomes. Conventional methods for identifying the EGFR genotype have several limitations. We therefore propose a deep learning model that predicts EGFR mutation status from non-invasive CT images with robustness and generalizability.

METHODS: A total of 525 patients were enrolled at the local hospital to serve as the internal data set for model training and validation. In addition, a cohort of 30 patients from the publicly available Cancer Imaging Archive data set was selected for external testing. All patients underwent plain chest CT, and their EGFR mutation status was labeled as either mutant or wild type. The CT images were analyzed with a self-attention-based ViT-B/16 model to predict EGFR mutation status, and the model's performance was evaluated. Grad-CAM was used to produce an attention map indicating the locations suspicious for EGFR mutation.

RESULTS: On the validation cohort, the ViT model achieved an accuracy of 0.848, an AUC of 0.868, a sensitivity of 0.924, and a specificity of 0.718. On the external test cohort, it achieved comparable performance: an accuracy of 0.833, an AUC of 0.885, a sensitivity of 0.900, and a specificity of 0.800.

CONCLUSIONS: The ViT model predicts the EGFR mutation status of lung adenocarcinoma patients with high accuracy. With the aid of attention maps, it can also assist clinicians in making informed clinical decisions.


Subjects
Adenocarcinoma of Lung, Deep Learning, ErbB Receptors, Lung Neoplasms, Mutation, Tomography, X-Ray Computed, Humans, ErbB Receptors/genetics, Adenocarcinoma of Lung/genetics, Adenocarcinoma of Lung/pathology, Lung Neoplasms/genetics, Lung Neoplasms/pathology, Female, Male, Middle Aged, Aged, Adult
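The core mechanism the abstract names, self-attention over image patches in a ViT-B/16, can be sketched in a few lines. This is an illustrative NumPy toy, not the authors' implementation: the image, patch size, and randomly initialized projection matrices are assumptions chosen only to show the patch-tokenization and scaled dot-product attention steps.

```python
import numpy as np

def patchify(image, patch=16):
    """Split a square (H, W) image into flattened non-overlapping patches."""
    h, w = image.shape
    grid = image.reshape(h // patch, patch, w // patch, patch)
    return grid.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over patch tokens."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ v

rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224))   # stand-in for one plain-CT slice
tokens = patchify(img)                  # (196, 256): 14x14 patches of 16x16
d = tokens.shape[1]
out = self_attention(
    tokens,
    rng.standard_normal((d, d)) * 0.01,
    rng.standard_normal((d, d)) * 0.01,
    rng.standard_normal((d, d)) * 0.01,
)
print(out.shape)  # (196, 256)
```

In a real ViT-B/16 the tokens would additionally get a learned linear embedding, positional encodings, a class token, and 12 multi-head attention blocks; the attention weights computed here are also what Grad-CAM-style methods visualize as suspicious-region maps.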
2.
Sensors (Basel); 23(14), 2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37514677

ABSTRACT

Human activity recognition substantially impacts people's day-to-day lives because of its capacity to gather vast, high-level data about human activity from wearable or stationary sensors. A video may show multiple people and objects acting at once, dispersed across the frame, so visual reasoning for the action recognition task requires modeling the spatial interactions among many entities. The main aim of this paper is to evaluate and map the current scenario of human action recognition in RGB (red, green, and blue) videos based on deep learning models. A residual network (ResNet) and a vision transformer architecture (ViT) are evaluated with a semi-supervised learning approach, and DINO (self-DIstillation with NO labels) is used to enhance the potential of both. The evaluated benchmark is the human motion database HMDB51, which aims to better capture the richness and complexity of human actions. The results obtained for video classification with the proposed ViT are promising with respect to performance metrics and the recent literature. A two-dimensional ViT combined with long short-term memory performed strongly in human action recognition on the HMDB51 dataset, reaching accuracies (mean ± standard deviation) of 96.7 ± 0.35% in the training phase and 41.0 ± 0.27% in the test phase.


Subjects
Deep Learning, Humans, Neural Networks, Computer, Supervised Machine Learning, Human Activities, Motion (Physics)
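The temporal half of the pipeline described above, an LSTM aggregating per-frame ViT features into one video-level representation, can be sketched as follows. This is a minimal NumPy illustration under assumed toy dimensions (16-dim frame features, 8-dim hidden state), not the paper's architecture: the random vectors stand in for the per-frame embeddings a 2D ViT would produce.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_aggregate(frames, w, u, b, hidden=8):
    """Run a single-layer LSTM over per-frame feature vectors.

    Returns the last hidden state, which a classifier head would
    map to action classes.
    """
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in frames:
        z = w @ x + u @ h + b            # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)       # input, forget, output, candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)        # update cell state
        h = o * np.tanh(c)                # emit hidden state
    return h

rng = np.random.default_rng(1)
feat_dim, hidden, t = 16, 8, 10
frames = rng.standard_normal((t, feat_dim))  # stand-in per-frame ViT features
w = rng.standard_normal((4 * hidden, feat_dim)) * 0.1
u = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
h_last = lstm_aggregate(frames, w, u, b, hidden)
print(h_last.shape)  # (8,)
```

The design choice this illustrates is the division of labor in the paper's model: the ViT handles spatial reasoning within each frame, while the recurrent state `c`/`h` carries information across frames to capture motion.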