Results 1 - 2 of 2
1.
Neuroimage; 273: 120073, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37037063

ABSTRACT

Identifying Alzheimer's disease (AD) involves a deliberate diagnostic process owing to its irreversibility and its subtle, gradual progression. These characteristics make identifying AD biomarkers from structural brain imaging (e.g., structural MRI) scans challenging. Using clinically guided prototype learning, we propose a novel deep-learning approach, eXplainable AD Likelihood Map Estimation (XADLiME), for AD progression modeling over 3D sMRIs. Specifically, we establish a set of topologically aware prototypes on clusters of latent clinical features, uncovering an AD spectrum manifold. Treating this pseudo map as an enriched reference, we employ an estimating network to approximate the AD likelihood map over a 3D sMRI scan. Additionally, we promote the explainability of the likelihood map by revealing a comprehensible overview from clinical and morphological perspectives. During inference, the estimated likelihood map serves as a substitute for unseen sMRI scans, effectively supporting the downstream task while providing thoroughly explainable states.


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Humans, Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, Learning, Biomarkers, Neuroimaging/methods
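
The abstract above describes soft assignment of latent clinical features to a set of learned prototypes. The paper's actual architecture is not reproduced here; the sketch below only illustrates the generic idea of turning prototype distances into a likelihood distribution, with all names and the softmax-over-negative-distances choice being assumptions for illustration.

```python
import numpy as np

def prototype_likelihood(latent, prototypes, temperature=1.0):
    """Soft assignment of a latent feature vector to a set of prototypes.

    Closer prototypes receive higher likelihood. This is a generic
    illustration, not the XADLiME implementation.
    """
    # Squared Euclidean distance from the latent vector to each prototype.
    d2 = np.sum((prototypes - latent) ** 2, axis=1)
    # Softmax over negative (temperature-scaled) distances.
    logits = -d2 / temperature
    logits -= logits.max()  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Toy example: 3 prototypes in a 2-D latent space.
protos = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
p = prototype_likelihood(np.array([0.1, 0.1]), protos)
```

Here `p` is a length-3 probability vector whose mass concentrates on the nearest prototype; in a prototype-learning setup the prototypes themselves would be trained, e.g., initialized at cluster centroids of the latent features.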
2.
IEEE Trans Pattern Anal Mach Intell; 45(4): 4843-4857, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35947563

ABSTRACT

Existing studies on disease diagnostic models focus either on diagnostic model learning for performance improvement or on the visual explanation of a trained diagnostic model. We propose a novel learn-explain-reinforce (LEAR) framework that unifies diagnostic model learning, visual explanation generation (explanation unit), and trained diagnostic model reinforcement (reinforcement unit) guided by the visual explanation. For the visual explanation, we generate a counterfactual map that transforms an input sample so that it is identified as an intended target label. For example, a counterfactual map can localize hypothetical abnormalities within a normal brain image that would cause it to be diagnosed with Alzheimer's disease (AD). We believe that the generated counterfactual maps represent data-driven knowledge about a target task, i.e., AD diagnosis using structural MRI, which can be a vital source of information for reinforcing the generalization of the trained diagnostic model. To this end, we devise an attention-based feature refinement module guided by the counterfactual maps. The explanation and reinforcement units are reciprocal and can be applied iteratively. Our proposed approach was validated via qualitative and quantitative analysis on the ADNI dataset. Its comprehensibility and fidelity were demonstrated through ablation studies and comparisons with existing methods.


Subject(s)
Alzheimer Disease, Humans, Alzheimer Disease/diagnostic imaging, Algorithms, Learning, Brain/diagnostic imaging, Magnetic Resonance Imaging, Neuroimaging
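
The abstract above centers on a counterfactual map: a perturbation that, added to an input, flips the model's prediction to a chosen target label. The paper generates these maps with a trained network; the sketch below is only a minimal gradient-based stand-in on a toy logistic classifier, and every name and hyperparameter in it is an assumption for illustration.

```python
import numpy as np

def counterfactual_map(x, w, b, target=1, steps=200, lr=0.5):
    """Gradient-based sketch of a counterfactual map for a logistic classifier.

    Returns a perturbation `m` such that x + m is classified as `target`.
    Generic illustration only, not the LEAR counterfactual-map generator.
    """
    m = np.zeros_like(x)
    for _ in range(steps):
        # P(class 1 | x + m) under the logistic model.
        p = 1.0 / (1.0 + np.exp(-(w @ (x + m) + b)))
        # Ascend the log-likelihood of the target label:
        # d log P(target | x+m) / dm = (target - p) * w.
        m += lr * (target - p) * w
    return m

# Toy input initially classified as class 0 (w @ x + b = -0.9 < 0).
x = np.array([0.2, -0.1])
w = np.array([1.0, 1.0])
b = -1.0
m = counterfactual_map(x, w, b, target=1)
```

The nonzero entries of `m` localize which input features must change, and by how much, to reach the target decision; in the imaging setting of the paper, the analogous map highlights hypothetical abnormalities within the scan.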