1.
J Digit Imaging; 36(6): 2567-2577, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37787869

ABSTRACT

Deep neural networks (DNNs) have already impacted the field of medicine in data analysis, classification, and image processing. Unfortunately, their performance drops drastically when datasets are scarce (e.g., rare diseases or early-stage research data). In such scenarios, DNNs display poor capacity for generalization and often produce highly biased estimates and silent failures. Moreover, deterministic systems cannot provide epistemic uncertainty, a key component for assessing a model's reliability. In this work, we developed a probabilistic classification system as a framework for addressing these criticalities. Specifically, we implemented a Bayesian convolutional neural network (BCNN) for the classification of cardiac amyloidosis (CA) subtypes. We prepared four different CNNs: base-deterministic, dropout-deterministic, dropout-Bayesian, and Bayesian. We then trained them on a dataset of 1107 PET images from 47 CA and control patients (a data-scarcity scenario). The Bayesian model achieved performance (78.28 (1.99)% test accuracy) comparable to the base-deterministic, dropout-deterministic, and dropout-Bayesian models, while showing markedly improved out-of-distribution input detection (a reduced validation-test accuracy mismatch). Additionally, both the dropout-Bayesian and Bayesian models enriched the classification with confidence estimates, reducing the criticalities of the dropout-deterministic and base-deterministic approaches. This in turn increased the model's reliability and provided much-needed insight into the network's estimates. These results suggest that a Bayesian CNN is a promising solution for addressing the challenges posed by data scarcity in medical imaging classification tasks.
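The epistemic-uncertainty idea described above can be illustrated with Monte Carlo dropout, one common way to approximate Bayesian inference in a CNN. The sketch below is illustrative only and assumes a generic PyTorch classifier containing dropout layers; the function name mc_dropout_predict and the parameter n_samples are hypothetical, and the paper's BCNN may instead use proper variational layers.

import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=30):
    """Monte Carlo dropout inference (illustrative sketch): keep dropout
    stochastic at test time, average the softmax outputs over several
    forward passes, and use the spread across passes as a proxy for
    epistemic uncertainty."""
    model.train()  # shortcut to keep dropout active; in practice re-enable only dropout layers
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                        # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)               # predictive distribution
    epistemic = probs.var(dim=0).sum(dim=-1)     # variance across passes, per sample
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, epistemic, entropy

A sample whose predictions vary strongly across passes (high variance or entropy) can then be flagged as a likely out-of-distribution input rather than silently classified.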


Subject(s)
Deep Learning , Humans , Reproducibility of Results , Bayes Theorem , Neural Networks, Computer , Diagnostic Imaging
2.
Sensors (Basel); 23(6), 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36992032

ABSTRACT

Left Ventricle (LV) detection from Cardiac Magnetic Resonance (CMR) imaging is a fundamental step, preliminary to myocardium segmentation and characterization. This paper focuses on the application of a Visual Transformer (ViT), a novel neural network architecture, to automatically detect the LV in CMR relaxometry sequences. We implemented an object detector based on the ViT model to identify the LV in CMR multi-echo T2* sequences. We evaluated performance, differentiated by slice location according to the American Heart Association model, using 5-fold cross-validation and on an independent dataset of CMR T2*, T2, and T1 acquisitions. To the best of our knowledge, this is the first attempt to localize the LV from relaxometry sequences and the first application of a ViT to LV detection. We obtained an Intersection over Union (IoU) index of 0.68 and a Correct Identification Rate (CIR) of the blood pool centroid of 0.99, comparable with other state-of-the-art methods. IoU and CIR values were significantly lower in apical slices. No significant differences in performance were found on the independent T2* dataset (IoU = 0.68, p = 0.405; CIR = 0.94, p = 0.066). Performance was significantly worse on the independent T2 and T1 datasets (T2: IoU = 0.62, CIR = 0.95; T1: IoU = 0.67, CIR = 0.98), but still encouraging given the different acquisition types. This study confirms the feasibility of ViT architectures for LV detection and defines a benchmark for relaxometry imaging.
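As a point of reference for the evaluation metrics quoted above, the following sketch shows how an IoU index and one plausible centroid-based CIR criterion can be computed for axis-aligned bounding boxes. This is a minimal, assumed formulation (the function names and the definition of a centroid "hit" are hypothetical), not the paper's exact evaluation code.

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def centroid_hit(pred_box, gt_centroid):
    """One plausible CIR criterion: the ground-truth blood-pool centroid
    falls inside the predicted bounding box. Averaging this boolean over
    a test set would give a correct-identification rate."""
    cx, cy = gt_centroid
    return pred_box[0] <= cx <= pred_box[2] and pred_box[1] <= cy <= pred_box[3]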


Subject(s)
Heart Ventricles , Heart , Heart Ventricles/diagnostic imaging , Magnetic Resonance Imaging/methods , Myocardium/pathology , Magnetic Resonance Spectroscopy
3.
J Digit Imaging; 36(1): 189-203, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36344633

ABSTRACT

Convolutional Neural Networks (CNNs) that support the diagnosis of Alzheimer's Disease from 18F-FDG PET images are obtaining promising results; however, one of the main challenges in this domain is that these models work as black-box systems. We developed a CNN that performs multiclass classification of volumetric 18F-FDG PET images, and we experimented with two post hoc explanation techniques from the field of Explainable Artificial Intelligence: Saliency Maps (SM) and Layer-wise Relevance Propagation (LRP). Finally, we quantitatively analyzed the returned explanations and inspected their relationship with the PET signal. We collected 2552 scans from the Alzheimer's Disease Neuroimaging Initiative labeled as Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD), and we developed and tested a 3D CNN that classifies the 3D PET scans according to their final clinical diagnosis. To the best of our knowledge, the model achieves test-set performance comparable with the relevant literature, with average Area Under the Curve (AUC) values for the prediction of CN, MCI, and AD of 0.81, 0.63, and 0.77, respectively. We registered the heatmaps with the Talairach Atlas to perform a regional quantitative analysis of the relationship between heatmaps and PET signal. From the quantitative analysis of the post hoc explanation techniques, we observed that LRP maps were more effective at mapping the importance metrics onto the anatomic atlas. No clear relationship was found between the heatmaps and the PET signal.
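To make the Saliency Map technique mentioned above concrete, here is a minimal sketch of vanilla gradient saliency for a volumetric PyTorch classifier. The model, input shape, and function name are assumptions for illustration; the paper's SM and LRP implementations may differ in detail.

import torch

def saliency_map(model, volume, target_class):
    """Vanilla gradient saliency (illustrative sketch): the magnitude of
    d(logit_target)/d(input voxel) for a 3D PET scan shaped (1, 1, D, H, W).
    Large gradient magnitudes mark voxels the prediction is most sensitive to."""
    model.eval()
    volume = volume.clone().requires_grad_(True)
    logits = model(volume)                      # (1, num_classes)
    logits[0, target_class].backward()          # gradients w.r.t. the input volume
    return volume.grad.abs().squeeze()          # voxel-wise importance heatmap

The resulting heatmap can then be registered to an anatomical atlas (as done with the Talairach Atlas above) to aggregate voxel importance by brain region.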


Subject(s)
Alzheimer Disease , Humans , Fluorodeoxyglucose F18 , Artificial Intelligence , Positron-Emission Tomography/methods , Neural Networks, Computer , Early Diagnosis