1.
IEEE Trans Med Imaging ; 43(5): 1664-1676, 2024 May.
Article in English | MEDLINE | ID: mdl-38109240

ABSTRACT

Structural magnetic resonance imaging (sMRI) has been widely applied in computer-aided Alzheimer's disease (AD) diagnosis, owing to its ability to provide detailed brain morphometric patterns and anatomical features in vivo. Although previous works have validated the effectiveness of incorporating metadata (e.g., age, gender, and years of education) into sMRI-based AD diagnosis, existing methods attend only to the metadata's correlation with AD (e.g., gender bias in AD prevalence) or to its confounding effects (e.g., normal aging and metadata-related heterogeneity), so the influence of metadata on AD diagnosis is difficult to exploit fully. To address these issues, we constructed a novel Multi-template Meta-information Regularized Network (MMRN) for AD diagnosis. Specifically, considering the diagnostic variation that results from spatially transforming images onto different brain templates, we first treated the different transformations as data augmentation for self-supervised learning after template selection. Since confounding effects may arise from excessive attention to meta-information owing to its correlation with AD, we then designed weakly supervised meta-information learning and mutual information minimization modules to learn meta-information and disentangle it from the learned class-related representations, which serves as meta-information regularization for disease diagnosis. We evaluated the proposed MMRN on two public multi-center cohorts: the Alzheimer's Disease Neuroimaging Initiative (ADNI) with 1,950 subjects and the National Alzheimer's Coordinating Center (NACC) with 1,163 subjects. The experimental results show that our method outperformed state-of-the-art approaches on all three tasks: AD diagnosis, mild cognitive impairment (MCI) conversion prediction, and normal control (NC) vs. MCI vs. AD classification.
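The disentanglement step in this abstract rests on minimizing the mutual information between the learned class representation and the metadata. A toy, non-differentiable histogram estimate can illustrate what that quantity measures; the function, binning, and variables below are illustrative, not the authors' implementation (which would use a differentiable MI estimator inside training):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based mutual information estimate (in nats) between
    two 1-D variables. A toy stand-in for the differentiable estimator
    the paper's mutual-information-minimization module would need."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
z = rng.normal(size=5000)                       # toy learned representation
meta_corr = z + 0.1 * rng.normal(size=5000)     # metadata entangled with z
meta_ind = rng.normal(size=5000)                # metadata independent of z

mi_corr = mutual_information(z, meta_corr)      # large: z leaks metadata
mi_ind = mutual_information(z, meta_ind)        # near zero: disentangled
```

Driving `mi_corr` down toward the `mi_ind` regime is, in spirit, what the regularization objective asks of the representation.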


Subject(s)
Alzheimer Disease , Brain , Magnetic Resonance Imaging , Alzheimer Disease/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Aged , Female , Male , Image Interpretation, Computer-Assisted/methods , Aged, 80 and over , Algorithms
2.
Phys Med Biol ; 67(8)2022 04 01.
Article in English | MEDLINE | ID: mdl-35299163

ABSTRACT

Capitalizing on structural magnetic resonance imaging (sMRI), existing deep learning methods (especially convolutional neural networks, CNNs) have been widely and successfully applied to computer-aided diagnosis of Alzheimer's disease (AD) and its prodromal stage (i.e., mild cognitive impairment, MCI). However, to improve the generalization capability of models trained on a limited number of samples, we construct a multi-task multi-level feature adversarial network (M2FAN) for joint diagnosis and atrophy localization using baseline sMRI. Specifically, linearly aligned T1 MR images are first processed by a lightweight CNN backbone to capture shared intermediate feature representations, which are then branched into a global subnet for preliminary dementia diagnosis and a multi-instance learning network for brain atrophy localization in a multi-task learning manner. As the global discriminative information captured by the global subnet might be unstable for disease diagnosis, we further designed a multi-level feature adversarial learning module that acts as a regularizer, making the global features robust against adversarial perturbations synthesized from the local/instance features and thereby improving diagnostic performance. The proposed method was evaluated on three public datasets (ADNI-1, ADNI-2, and AIBL), demonstrating competitive performance compared with several state-of-the-art methods on both AD diagnosis and MCI conversion prediction.
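The feature-level adversarial regularization described here can be sketched as a consistency penalty: perturb the global feature in a direction derived from the local features and penalize any change in the prediction. Everything below (linear classifier, perturbation direction, symmetric KL penalty) is an illustrative simplification, not the M2FAN architecture:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def adversarial_consistency_loss(W, g, l, eps=0.1):
    """Toy feature-level adversarial regularizer: perturb global feature g
    along the (normalized) local feature l, then penalize the symmetric KL
    divergence between clean and perturbed class predictions of a linear
    classifier W. Names and the perturbation rule are illustrative."""
    p_clean = softmax(W @ g)
    g_adv = g + eps * l / (np.linalg.norm(l) + 1e-8)
    p_adv = softmax(W @ g_adv)
    kl = lambda p, q: float((p * np.log(p / q)).sum())
    return 0.5 * (kl(p_clean, p_adv) + kl(p_adv, p_clean))

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))            # toy 3-class linear head
g = rng.normal(size=4)                 # toy global feature
l = rng.normal(size=4)                 # toy local/instance feature
loss_off = adversarial_consistency_loss(W, g, l, eps=0.0)   # no perturbation
loss_on = adversarial_consistency_loss(W, g, l, eps=0.5)    # perturbed
```

Minimizing such a penalty during training pushes the global features to be stable under perturbations informed by the local branch, which is the intuition the abstract describes.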


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Alzheimer Disease/diagnostic imaging , Atrophy/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neuroimaging/methods
3.
Phys Med Biol ; 66(21)2021 11 02.
Article in English | MEDLINE | ID: mdl-34663762

ABSTRACT

Significance. Gliomas are the most common type of primary brain tumor and occur in different grades; accurate grading of a glioma with multimodal magnetic resonance imaging (MRI) is therefore important for clinical treatment planning and prognostic assessment. Objective and approach. In this study, we developed a noninvasive deep-learning method for grading gliomas from multimodal MRI, focusing on effective multimodal fusion that leverages collaborative and diverse high-order statistical information. Specifically, a novel high-order multimodal interaction module was designed to promote interactive learning of multimodal knowledge for more efficient fusion. For more powerful feature expression and feature-correlation learning, a high-order attention mechanism is embedded in the interaction module to model complex, high-order statistics and enhance the network's classification capability. Moreover, we applied increasing orders at different levels to hierarchically recalibrate each modality stream through diverse-order attention statistics, thus encouraging all-sided attention knowledge with fewer parameters. Main results. To evaluate the effectiveness of the proposed scheme, extensive experiments were conducted on The Cancer Imaging Archive (TCIA) and the Multimodal Brain Tumor Image Segmentation Benchmark 2017 (BraTS2017) datasets with five-fold cross-validation. The proposed method achieved high prediction performance, with area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity values of 95.2%, 94.28%, 95.24%, and 92.00% on BraTS2017 and 93.50%, 92.86%, 97.14%, and 90.48% on TCIA, respectively.
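The "high-order statistical information" this abstract builds on starts, at second order, with pairwise interactions between the features of two modalities. A minimal sketch of such an interaction is a bilinear (outer-product) fusion; the feature names and the plain L2 normalization below are illustrative choices, not the paper's module:

```python
import numpy as np

def second_order_fusion(f1, f2):
    """Bilinear (second-order) interaction between two modality feature
    vectors: the outer product enumerates all pairwise cross-modal
    products, which is the simplest high-order statistic an attention
    module could then weight. Normalization choice is illustrative."""
    interaction = np.outer(f1, f2)               # every pairwise product
    fused = interaction.flatten()                # one fused descriptor
    return fused / (np.linalg.norm(fused) + 1e-8)

f_t1 = np.array([1.0, 0.0])                      # toy T1 feature
f_flair = np.array([0.0, 1.0, 0.0])              # toy FLAIR feature
fused = second_order_fusion(f_t1, f_flair)       # length 2 * 3 = 6
```

Higher orders extend the same idea to triple-wise and beyond, which is where the paper's hierarchical, diverse-order attention comes in.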


Subject(s)
Brain Neoplasms , Glioma , Brain Neoplasms/diagnostic imaging , Glioma/diagnostic imaging , Glioma/pathology , Humans , Magnetic Resonance Imaging/methods , Neoplasm Grading , Neural Networks, Computer , ROC Curve
4.
Phys Med Biol ; 66(8)2021 04 16.
Article in English | MEDLINE | ID: mdl-33765665

ABSTRACT

Magnetic resonance imaging (MRI) has been widely used to assess the development of Alzheimer's disease (AD) by providing structural information about disease-associated regions (e.g., atrophic regions). In this paper, we propose a lightweight cross-view hierarchical fusion network (CvHF-net), consisting of local patch subnets and a global subject subnet, for jointly localizing and identifying the discriminative local patches and regions in whole-brain MRI; feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis. First, based on the extracted class-discriminative 3D patches, the local patch subnets represent each 3D patch by multiple 2D views through an attention-aware hierarchical fusion structure, in a divide-and-conquer manner. Since different local patches have varying abilities to identify AD, the global subject subnet is developed to bias the allocation of available resources toward the most informative of these local patches and so obtain global information for AD identification. In addition, an instance-declined pruning algorithm is embedded in the CvHF-net to adaptively select the most discriminative patches in a task-driven manner. The proposed method was evaluated on the AD Neuroimaging Initiative dataset, and the experimental results show that it achieves good performance on AD diagnosis.
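Biasing resources toward the most informative patches, as the global subject subnet does here, is commonly realized as attention-weighted pooling over patch features. The sketch below shows that generic mechanism with a learned scoring vector; the shapes and scoring rule are illustrative, not the CvHF-net architecture:

```python
import numpy as np

def attention_pool(patch_feats, w):
    """Attention-weighted pooling over local patch features.
    patch_feats: (n_patches, dim) array; w: (dim,) scoring vector
    (a stand-in for a learned scorer). Softmax over patch scores
    yields weights that favor the most informative patches; the
    subject-level feature is the weighted sum."""
    scores = patch_feats @ w                      # one score per patch
    e = np.exp(scores - scores.max())             # stable softmax
    att = e / e.sum()
    return att, patch_feats.T @ att               # (weights, pooled feature)

rng = np.random.default_rng(0)
patches = rng.normal(size=(5, 3))                 # 5 toy patches, dim 3
w = rng.normal(size=3)
att, subject_feat = attention_pool(patches, w)
```

Task-driven pruning, in this picture, amounts to dropping the patches that persistently receive negligible attention weight.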


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Algorithms , Alzheimer Disease/diagnostic imaging , Brain/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Humans , Magnetic Resonance Imaging , Neuroimaging
5.
Clin Gastroenterol Hepatol ; 18(13): 2998-3007.e5, 2020 12.
Article in English | MEDLINE | ID: mdl-32205218

ABSTRACT

BACKGROUND & AIMS: Noninvasive and accurate methods are needed to identify patients with clinically significant portal hypertension (CSPH). We investigated the ability of deep convolutional neural network (CNN) analysis of computed tomography (CT) or magnetic resonance (MR) images to identify patients with CSPH. METHODS: We collected liver and spleen images from patients who underwent contrast-enhanced CT or MR analysis within 14 days of transjugular catheterization for hepatic venous pressure gradient measurement. The CT cohort comprised participants with cirrhosis in the CHESS1701 study, performed at 4 university hospitals in China from August 2016 through September 2017. The MR cohort comprised participants with cirrhosis in the CHESS1802 study, performed at 8 university hospitals in China and 1 in Turkey from December 2018 through April 2019. Patients with CSPH were identified as those with a hepatic venous pressure gradient of 10 mm Hg or higher. In total, we analyzed 10,014 liver images and 899 spleen images from 679 participants who underwent CT analysis, and 45,554 liver and spleen images from 271 participants who underwent MR analysis. For each cohort, participants were shuffled and then randomly and equiprobably sampled six times into training, validation, and test data sets (ratio, 3:1:1); a total of 6 deep CNN models were therefore developed for each cohort for identification of CSPH. RESULTS: The CT-based CNN analysis identified patients with CSPH with an area under the receiver operating characteristic curve (AUC) of 0.998 in the training set (95% CI, 0.996-1.000), 0.912 in the validation set (95% CI, 0.854-0.971), and 0.933 (95% CI, 0.883-0.984) in the test data sets. The MR-based CNN analysis identified patients with CSPH with an AUC of 1.000 in the training set (95% CI, 0.999-1.000), 0.924 in the validation set (95% CI, 0.833-1.000), and 0.940 in the test data set (95% CI, 0.880-0.999). When the model development procedure was repeated 6 times, AUC values for all CNN analyses were 0.888 or greater, with no significant differences between rounds (P > .05). CONCLUSIONS: We developed a deep CNN that analyzes CT or MR images of the liver and spleen from patients with cirrhosis and identifies patients with CSPH with an AUC value of 0.9, providing a noninvasive and rapid method for detection of CSPH (ClinicalTrials.gov numbers: NCT03138915 and NCT03766880).
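Every result in this abstract is an AUC, which is just the Mann-Whitney rank statistic over the model's scores. As a reference for what those numbers mean, here is a minimal tie-aware implementation (inputs are toy values, not the study's data):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) identity: the probability
    that a random positive is scored above a random negative.
    Ties get averaged ranks, so an uninformative scorer yields 0.5."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):                   # average tied ranks
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

labels = np.array([0, 0, 1, 1])
perfect = auc(labels, np.array([0.1, 0.2, 0.8, 0.9]))   # full separation
chance = auc(labels, np.array([0.5, 0.5, 0.5, 0.5]))    # all-tied scorer
```

On this scale, the reported test-set AUCs of 0.933 (CT) and 0.940 (MR) sit well above chance and close to perfect discrimination.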


Subject(s)
Hypertension, Portal , Humans , Hypertension, Portal/complications , Hypertension, Portal/diagnosis , Liver Cirrhosis/complications , Liver Cirrhosis/diagnosis , Neural Networks, Computer , Portal Pressure