1.
Clin Transl Oncol ; 26(6): 1438-1445, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38194018

ABSTRACT

BACKGROUND: Lung adenocarcinoma is a common cause of cancer-related death worldwide, and accurate EGFR genotyping is crucial for optimal treatment outcomes. Conventional methods for identifying the EGFR genotype have several limitations. We therefore propose a deep learning model that predicts EGFR mutation status from non-invasive CT images with robustness and generalizability.

METHODS: A total of 525 patients were enrolled at the local hospital as the internal data set for model training and validation. In addition, a cohort of 30 patients from the publicly available Cancer Imaging Archive was selected for external testing. All patients underwent plain chest CT, and their EGFR mutation status was labeled as either mutant or wild type. The CT images were analyzed with a self-attention-based ViT-B/16 model to predict EGFR mutation status, and the model's performance was evaluated. Grad-CAM was used to produce attention maps indicating the locations suspected of harboring EGFR mutations.

RESULTS: The ViT model achieved an accuracy of 0.848, an AUC of 0.868, a sensitivity of 0.924, and a specificity of 0.718 on the validation cohort. On the external test cohort, it achieved comparable performance: an accuracy of 0.833, an AUC of 0.885, a sensitivity of 0.900, and a specificity of 0.800.

CONCLUSIONS: The ViT model predicts the EGFR mutation status of lung adenocarcinoma patients with a high level of accuracy. Moreover, with the aid of attention maps, it can assist clinicians in making informed clinical decisions.
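The core of this pipeline is a standard ViT-B/16 classifier with a two-class head. A minimal PyTorch sketch of that setup follows; it is not the authors' code, and the ImageNet-pretrained weights, head replacement, and dummy CT batch are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Load an ImageNet-pretrained ViT-B/16 backbone.
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Replace the classification head for two classes: mutant vs. wild type.
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

# Dummy batch standing in for preprocessed 224x224 CT slices,
# replicated to 3 channels to match the pretrained input format.
ct_batch = torch.randn(4, 3, 224, 224)
logits = model(ct_batch)                # shape: (4, 2)
probs = torch.softmax(logits, dim=-1)   # per-class probabilities
```

From here, fine-tuning proceeds as ordinary supervised classification (cross-entropy loss over the mutant/wild-type labels), and Grad-CAM-style maps can be computed against the final attention blocks.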


Subject(s)
Adenocarcinoma of Lung , Deep Learning , ErbB Receptors , Lung Neoplasms , Mutation , Tomography, X-Ray Computed , Humans , ErbB Receptors/genetics , Adenocarcinoma of Lung/genetics , Adenocarcinoma of Lung/pathology , Lung Neoplasms/genetics , Lung Neoplasms/pathology , Female , Male , Middle Aged , Aged , Adult
2.
Front Med (Lausanne) ; 10: 1241484, 2023.
Article in English | MEDLINE | ID: mdl-37746081

ABSTRACT

Introduction: The use of deep convolutional neural networks for analyzing skin lesion images has shown promising results. Identifying skin cancer by faster and less expensive means can lead to early diagnosis, saving lives and avoiding treatment costs. However, to implement this technology in a clinical context, specialists must understand why a model makes a given prediction; it must be explainable. Explainability techniques can be used to highlight the patterns of interest behind a prediction.

Methods: Our goal was to test five techniques: Grad-CAM, Grad-CAM++, Score-CAM, Eigen-CAM, and LIME. On 100 melanoma images, we analyzed the agreement rate between the features highlighted by the visual explanation maps and three clinical criteria important for melanoma classification: asymmetry, border irregularity, and color heterogeneity (the ABC rule). Two dermatologists scored the visual maps and the clinical images using a semi-quantitative scale, and the results were compared. They also ranked their preferred techniques.

Results: The techniques differed in agreement rate and acceptance. In the overall analysis, Grad-CAM showed the best total-plus-partial agreement rate (93.6%), followed by LIME (89.8%), Grad-CAM++ (88.0%), Eigen-CAM (86.4%), and Score-CAM (84.6%). The dermatologists ranked Grad-CAM and Grad-CAM++ as their favorite options, followed by Score-CAM, LIME, and Eigen-CAM.

Discussion: Saliency maps are one of the few methods available for visual explanation. Evaluating explainability with humans is the ideal way to assess the understanding and applicability of these methods. Our results demonstrate significant agreement between the clinical features dermatologists use to diagnose melanomas and the visual explanation techniques, especially Grad-CAM.
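Four of the five techniques compared here are implemented in the pytorch-grad-cam package (LIME lives in a separate library). The sketch below shows how a single image can be run through each CAM variant; the stock ResNet-50 backbone and the class index are assumptions for illustration, not the study's setup.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights
from pytorch_grad_cam import GradCAM, GradCAMPlusPlus, ScoreCAM, EigenCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).eval()
target_layers = [model.layer4[-1]]     # last conv block of the backbone
lesion = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed lesion image
targets = [ClassifierOutputTarget(0)]  # assumed "melanoma" class index

# Produce one saliency heatmap per CAM variant for the same input.
for Method in (GradCAM, GradCAMPlusPlus, ScoreCAM, EigenCAM):
    cam = Method(model=model, target_layers=target_layers)
    heatmap = cam(input_tensor=lesion, targets=targets)[0]  # (224, 224) array
    print(Method.__name__, heatmap.shape)
```

Each heatmap can then be overlaid on the lesion photograph for the kind of side-by-side dermatologist scoring described above.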

3.
Curr Issues Mol Biol ; 44(12): 5963-5985, 2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36547067

ABSTRACT

Neurodegenerative diseases such as the tauopathies constitute a serious global health problem. Their etiology is unclear, and an increase in their incidence has been projected over the next 30 years. The study of molecular mechanisms that might halt these neurodegenerative processes is therefore highly relevant. Classification of neurodegenerative diseases using machine and deep learning algorithms has been widely studied for medical imaging such as magnetic resonance imaging; however, post-mortem immunofluorescence imaging studies of patients' brains have not yet been used for this purpose. Such studies may represent a valuable tool for monitoring aberrant chemical changes or pathological post-translational modifications of the Tau polypeptide. We propose a convolutional neural network pipeline for classifying the Tau pathology of Alzheimer's disease and Progressive Supranuclear Palsy by analyzing post-mortem immunofluorescence images with different Tau biomarkers, using models built on the ResNet-IFT architecture with transfer learning. The models' outputs were interpreted with interpretability algorithms such as Guided Grad-CAM and occlusion analysis. Four different architectures were tested to determine the best classifier. We demonstrate that our design classified the diseases with an average accuracy of 98.41% while providing an interpretation of the classification in terms of the structural patterns of Tau immunoreactivity in the neurofibrillary tangles (NFTs) present in the brains of patients with Progressive Supranuclear Palsy and Alzheimer's disease.
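The occlusion analysis mentioned here is straightforward to illustrate with Captum's Occlusion API. A minimal sketch follows, assuming a stock ResNet-50 standing in for the (non-public) ResNet-IFT models, a two-class AD/PSP head, and a random tensor in place of an immunofluorescence image.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from captum.attr import Occlusion

# Transfer-learning setup: pretrained backbone, new two-class head.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # assumed AD vs. PSP head
model.eval()

image = torch.randn(1, 3, 224, 224)  # stand-in immunofluorescence image

# Slide an occluding patch over the image and record how the predicted
# class score changes; large drops mark regions the model relies on.
occlusion = Occlusion(model)
attribution = occlusion.attribute(
    image,
    target=0,                         # assumed AD class index
    sliding_window_shapes=(3, 16, 16),
    strides=(3, 8, 8),
)
print(attribution.shape)              # (1, 3, 224, 224), same as the input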

4.
Sensors (Basel) ; 22(15)2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35957201

ABSTRACT

Given the popularity of wearables, human activity recognition (HAR) plays a significant role in everyday applications. Many deep learning (DL) approaches to HAR have been proposed for classifying human activities, and previous studies employ two validation schemes: subject-dependent (SD) and subject-independent (SI). Using accelerometer data, this paper shows how to generate visual explanations of trained models' decision making on both HAR and biometric user identification (BUI) tasks, and examines the correlation between them. We adapted gradient-weighted class activation mapping (Grad-CAM) to one-dimensional convolutional neural network (CNN) architectures to produce visual explanations for HAR and BUI models. Our proposed networks achieved 0.978 accuracy under SD validation and 0.755 under SI validation, and the proposed BUI network achieved 0.937 average accuracy. We demonstrate that HAR's high performance under SD validation comes not only from learning physical activities but also from learning an individual's signature, as in BUI models. Our experiments show that the CNN focuses on larger signal sections in BUI, while in HAR it focuses on smaller signal segments. We also used Grad-CAM to identify database bias problems, such as signal discontinuities. Combining explainable techniques with deep learning can help guide model design, avoid overestimating results, uncover bias problems, and improve generalization capability.
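Grad-CAM is usually presented for 2-D feature maps; adapting it to a 1-D CNN amounts to averaging gradients over the time axis instead of over spatial positions. Below is a self-contained sketch of that adaptation; the toy architecture and the 3-channel, 128-sample accelerometer window are assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class HAR1DCNN(nn.Module):
    """Toy 1-D CNN over accelerometer windows of shape (3, 128)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        fmap = self.features(x)        # (B, 64, T): last conv feature maps
        out = self.fc(self.pool(fmap).squeeze(-1))
        return out, fmap

model = HAR1DCNN().eval()
window = torch.randn(1, 3, 128, requires_grad=True)
logits, fmap = model(window)
fmap.retain_grad()                     # keep gradients on the feature maps
logits[0, logits.argmax()].backward()  # gradient of the top class score

# Grad-CAM weights: average each channel's gradient over the time axis.
weights = fmap.grad.mean(dim=2, keepdim=True)            # (1, 64, 1)
cam = torch.relu((weights * fmap).sum(dim=1)).detach()   # (1, T)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # per-timestep importance along the signal
```

Plotting this 1-D map under the raw signal shows which segments drive the prediction, which is how discontinuity-style database bias can be spotted.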


Subject(s)
Biometric Identification , Neural Networks, Computer , Databases, Factual , Human Activities , Humans