Results 1 - 5 of 5
1.
PLoS One ; 19(6): e0296985, 2024.
Article in English | MEDLINE | ID: mdl-38889117

ABSTRACT

Deep neural networks have been widely adopted in numerous domains due to their high performance and accessibility to developers and application-specific end-users. Fundamental to image-based applications is the development of Convolutional Neural Networks (CNNs), which possess the ability to automatically extract features from data. However, comprehending these complex models and their learned representations, which typically comprise millions of parameters and numerous layers, remains a challenge for both developers and end-users. This challenge arises due to the absence of interpretable and transparent tools to make sense of black-box models. There exists a growing body of Explainable Artificial Intelligence (XAI) literature, including a collection of methods denoted Class Activation Maps (CAMs), that seek to demystify what representations the model learns from the data, how it informs a given prediction, and why it, at times, performs poorly in certain tasks. We propose a novel XAI visualization method denoted CAManim that seeks to simultaneously broaden and focus end-user understanding of CNN predictions by animating the CAM-based network activation maps through all layers, effectively depicting from end-to-end how a model progressively arrives at the final layer activation. Herein, we demonstrate that CAManim works with any CAM-based method and various CNN architectures. Beyond qualitative model assessments, we additionally propose a novel quantitative assessment that expands upon the Remove and Debias (ROAD) metric, pairing the qualitative end-to-end network visual explanations assessment with our novel quantitative "yellow brick ROAD" assessment (ybROAD). This builds upon prior research to address the increasing demand for interpretable, robust, and transparent model assessment methodology, ultimately improving an end-user's trust in a given model's predictions. Examples and source code can be found at: https://omni-ml.github.io/pytorch-grad-cam-anim/.


Subjects
Neural Networks, Computer; Artificial Intelligence; Humans; Algorithms; Deep Learning
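The per-layer animation idea can be sketched in a few lines of NumPy: each frame is a CAM-style map built from one layer's activations and gradients, normalized so frames are comparable across the network. This is a minimal illustration of the CAM arithmetic only, not the authors' CAManim implementation (their source code is at the URL above); the arrays here are random stand-ins for a real layer's tensors.

```python
import numpy as np

def cam_from_layer(activations, gradients):
    """Grad-CAM-style map: weight each feature map by its mean gradient,
    sum over channels, then keep only positive evidence (ReLU)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k, one per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    return np.maximum(cam, 0.0)                       # ReLU

def normalize_frame(cam, eps=1e-8):
    """Rescale a per-layer CAM to [0, 1] so animation frames are comparable."""
    lo, hi = cam.min(), cam.max()
    return (cam - lo) / (hi - lo + eps)

# One hypothetical layer: 4 channels of 8x8 feature maps and their gradients.
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
frame = normalize_frame(cam_from_layer(acts, grads))
```

Animating through a CNN amounts to repeating this for every layer's activation/gradient pair and playing the frames in order.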
2.
Sci Rep ; 14(1): 9013, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641713

ABSTRACT

Deep learning algorithms have demonstrated remarkable potential in clinical diagnostics, particularly in the field of medical imaging. In this study, we investigated the application of deep learning models to the early detection of fetal kidney anomalies. To provide an enhanced interpretation of those models' predictions, we proposed an adapted two-class representation and developed a multi-class model interpretation approach for problems with more than two labels and variable hierarchical grouping of labels. Additionally, we employed the explainable AI (XAI) visualization tools Grad-CAM and HiResCAM to gain insights into model predictions and identify reasons for misclassifications. The study dataset consisted of 969 ultrasound images from unique patients: 646 control images and 323 cases of kidney anomalies, including 259 cases of unilateral urinary tract dilation and 64 cases of unilateral multicystic dysplastic kidney. The best performing model achieved a cross-validated area under the ROC curve of 91.28% ± 0.52%, with an overall accuracy of 84.03% ± 0.76%, sensitivity of 77.39% ± 1.99%, and specificity of 87.35% ± 1.28%. Our findings emphasize the potential of deep learning models in predicting kidney anomalies from limited prenatal ultrasound imagery. The proposed adaptations in model representation and interpretation represent a novel solution to multi-class prediction problems.


Subjects
Deep Learning; Kidney Diseases; Urinary Tract; Pregnancy; Female; Humans; Ultrasonography, Prenatal/methods; Prenatal Diagnosis/methods; Kidney Diseases/diagnostic imaging; Urinary Tract/abnormalities
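For readers checking reported figures like these, accuracy, sensitivity, and specificity follow directly from a fold's confusion matrix. The counts below are hypothetical, chosen only to match the study's class sizes (323 anomaly cases, 646 controls); they are not taken from the paper.

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on anomaly cases),
    and specificity (recall on controls) from raw counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical fold over 969 images: 323 anomalies, 646 controls.
acc, sens, spec = binary_metrics(tp=250, fp=82, tn=564, fn=73)
```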
3.
BMC Bioinformatics ; 23(1): 38, 2022 Jan 13.
Article in English | MEDLINE | ID: mdl-35026982

ABSTRACT

BACKGROUND: Accurate cancer classification is essential for correct treatment selection and better prognostication. microRNAs (miRNAs) are small RNA molecules that negatively regulate gene expression, and their dysregulation is a common disease mechanism in many cancers. Through a clearer understanding of miRNA dysregulation in cancer, improved mechanistic knowledge and better treatments can be sought. RESULTS: We present a topology-preserving deep learning framework to study miRNA dysregulation in cancer. Our study comprises miRNA expression profiles from 3685 cancer and non-cancer tissue samples and hierarchical annotations on organ and neoplasticity status. Using unsupervised learning, a two-dimensional topological map is trained to cluster similar tissue samples. Labelled samples are used after training to identify clustering accuracy in terms of tissue-of-origin and neoplasticity status. In addition, an approach using activation gradients is developed to determine the attention of the networks to miRNAs that drive the clustering. Using this deep learning framework, we classify the neoplasticity status of held-out test samples with an accuracy of 91.07%, the tissue-of-origin with 86.36%, and combined neoplasticity status and tissue-of-origin with an accuracy of 84.28%. The topological maps display the ability of miRNAs to recognize tissue types and neoplasticity status. Importantly, when our approach identifies samples that do not cluster well with their respective classes, activation gradients provide further insight into cancer subtypes or grades. CONCLUSIONS: An unsupervised deep learning approach is developed for cancer classification and interpretation. This work provides an intuitive approach for understanding molecular properties of cancer and has significant potential for cancer classification and treatment selection.


Subjects
MicroRNAs; Neoplasms; Cluster Analysis; Gene Expression Profiling; Gene Expression Regulation, Neoplastic; Humans; MicroRNAs/genetics; Neoplasms/genetics
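The topology-preserving two-dimensional map described here is in the family of self-organizing maps (SOMs). Below is a minimal sketch of a single SOM update, assuming a 2-D grid of units and toy 3-D vectors in place of real miRNA expression profiles; the study's actual architecture and training schedule are defined in the paper.

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One update of a 2-D self-organizing map: find the best-matching
    unit (BMU), then pull every unit toward sample x, scaled by a
    Gaussian of its grid distance from the BMU."""
    rows, cols, _ = weights.shape
    dists = np.linalg.norm(weights - x, axis=2)           # sample-to-unit distances
    bmu = np.unravel_index(dists.argmin(), (rows, cols))  # winning grid cell
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    g = np.exp(-np.sum((grid - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
    return weights + lr * g[..., None] * (x - weights)

rng = np.random.default_rng(1)
w = rng.random((5, 5, 3))        # 5x5 map over toy 3-D "expression profiles"
x = np.array([0.9, 0.1, 0.5])    # one hypothetical sample
w2 = som_step(w, x)
```

Repeating this step over many samples, with decaying `lr` and `sigma`, is what makes nearby grid units end up representing similar tissue samples.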
4.
Am J Pathol ; 192(2): 344-352, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34774515

ABSTRACT

Next-generation sequencing has enabled the collection of large biological data sets, allowing novel molecular-based classification methods to be developed for increased understanding of disease. miRNAs are small regulatory RNA molecules that can be quantified using next-generation sequencing and are excellent classificatory markers. Herein, a deep cancer classifier (DCC) was adapted to differentiate neoplastic from nonneoplastic samples using comprehensive miRNA expression profiles from 1031 human breast and skin tissue samples. The classifier was fine-tuned and evaluated using 750 neoplastic and 281 nonneoplastic breast and skin tissue samples. Performance of the DCC was compared with two machine-learning classifiers: support vector machine and random forests. In addition, performance of feature extraction through the DCC was also compared with a newly developed feature selection algorithm, cancer specificity. The DCC had the highest area under the receiver operating characteristic curve and high performance in both sensitivity and specificity, unlike the machine-learning and feature selection models, which often performed well in one metric compared with the other. In particular, deep learning had noticeable advantages with highly heterogeneous data sets. In addition, our cancer specificity algorithm identified candidate biomarkers for differentiating neoplastic and nonneoplastic tissue samples (eg, miR-144 and miR-375 in breast cancer and miR-375 and miR-451 in skin cancer).


Subjects
Breast Neoplasms; Gene Expression Profiling; Machine Learning; MicroRNAs; RNA, Neoplasm; Breast Neoplasms/classification; Breast Neoplasms/genetics; Breast Neoplasms/metabolism; Female; Humans; MicroRNAs/genetics; MicroRNAs/metabolism; RNA, Neoplasm/genetics; RNA, Neoplasm/metabolism
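As a rough illustration of ranking features by how cleanly they separate neoplastic from nonneoplastic samples, the sketch below scores each feature by its standardized group-mean difference. This is a stand-in ranking for intuition only; the paper defines its own cancer-specificity score, and the data here are synthetic.

```python
import numpy as np

def separation_scores(expr, labels):
    """Rank features (columns) by how cleanly they separate the two
    groups: absolute difference of group means over overall std.
    (A stand-in; the paper's cancer-specificity score differs.)"""
    pos, neg = expr[labels == 1], expr[labels == 0]
    return np.abs(pos.mean(axis=0) - neg.mean(axis=0)) / (expr.std(axis=0) + 1e-8)

rng = np.random.default_rng(2)
labels = np.array([1] * 20 + [0] * 20)      # 20 neoplastic, 20 nonneoplastic
expr = rng.standard_normal((40, 5))         # 5 toy "miRNA" features
expr[labels == 1, 0] += 3.0                 # make feature 0 strongly group-specific
scores = separation_scores(expr, labels)
```

The top-scoring features under such a ranking are the candidate biomarkers one would inspect further.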
5.
Pac Symp Biocomput ; 27: 373-384, 2022.
Article in English | MEDLINE | ID: mdl-34890164

ABSTRACT

Next-generation sequencing has provided rapid collection and quantification of 'big' biological data. In particular, multi-omics and integration of different molecular data such as miRNA and mRNA can provide important insights into disease classification and processes. There is a need for computational methods that can correctly model and interpret these relationships, and handle the difficulties of large-scale data. In this study, we develop a novel method of representing miRNA-mRNA interactions to classify cancer. Specifically, graphs are designed to account for the interactions and biological communication between miRNAs and mRNAs, using message-passing and attention mechanisms. Patient-matched miRNA and mRNA expression data is obtained from The Cancer Genome Atlas for 12 cancers, and targeting information is incorporated from TargetScan. A Graph Transformer Network (GTN) is selected to provide high interpretability of classification through self-attention mechanisms. The GTN is able to classify the 12 different cancers with an accuracy of 93.56% and is compared to a Graph Convolutional Network, Random Forest, Support Vector Machine, and Multilayer Perceptron. While the GTN does not outperform all of the other classifiers in terms of accuracy, it allows high interpretability of results. Multi-omics models are compared and generally outperform their respective single-omics performance. Extensive analysis of attention identifies important targeting pathways and molecular biomarkers based on integrated miRNA and mRNA expression.


Subjects
MicroRNAs; Neoplasms; Computational Biology; High-Throughput Nucleotide Sequencing; Humans; MicroRNAs/genetics; Neoplasms/genetics; RNA, Messenger/genetics
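The attention mechanism that makes such a graph model interpretable can be illustrated with a single message-passing step: a receiving node scores its neighbors, softmaxes the scores into weights, and mixes neighbor features accordingly. The toy vectors below are hypothetical, standing in for learned miRNA/mRNA embeddings rather than TCGA or TargetScan data.

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(v - v.max())
    return e / e.sum()

def attend(h_node, h_neighbors):
    """One attention-weighted message pass: score each neighbor by dot
    product with the receiving node, normalize with softmax, and return
    the weighted mix of neighbor features plus the weights themselves."""
    scores = h_neighbors @ h_node
    alpha = softmax(scores)
    return alpha @ h_neighbors, alpha

# Toy bipartite edge set: one mRNA node attends over 3 targeting miRNAs.
mrna = np.array([1.0, 0.0])
mirnas = np.array([[0.9, 0.1],    # most aligned with the mRNA embedding
                   [0.0, 1.0],
                   [0.5, 0.5]])
msg, alpha = attend(mrna, mirnas)
```

Inspecting the learned weights (here `alpha`) per edge is what lets attention-based models surface which miRNA-mRNA interactions drove a prediction.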