Results 1 - 20 of 62
1.
iScience ; 27(2): 108881, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38318348

ABSTRACT

Automated tools to detect large vessel occlusion (LVO) in acute ischemic stroke patients using brain computed tomography angiography (CTA) have been shown to reduce the time to treatment, leading to better clinical outcomes. A single CTA contains a large amount of information, and deep learning models have no obvious way of being conditioned on the areas most relevant for LVO detection, i.e., the vasculature. In this work, we compare and contrast strategies to make convolutional neural networks focus on the vasculature without discarding the context information of the brain parenchyma, and we propose an attention-inspired strategy to encourage this. We use brain CTAs from which we obtain 3D vasculature images. Then, we compare ways of combining the vasculature and the CTA images using a general-purpose network trained to detect LVO. The results show that the proposed strategies improve LVO detection and could potentially help in learning other cerebrovascular-related tasks.
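The attention-inspired conditioning described above can be illustrated with a toy NumPy sketch: the vasculature map softly re-weights the CTA intensities, while the unweighted CTA is kept as a second channel so parenchyma context is not discarded. This is only an illustration under assumed conventions, not the authors' network; the function name and the weighting rule are assumptions.

```python
import numpy as np

def combine_cta_and_vessels(cta, vessels, alpha=1.0):
    """Attention-inspired input conditioning: amplify CTA intensities
    where the vasculature map is active, while keeping the unweighted
    CTA (parenchyma context) as a separate channel."""
    attended = cta * (1.0 + alpha * vessels)   # soft emphasis on vessels
    return np.stack([cta, attended], axis=0)   # (channels, *spatial)

# Toy 3D volumes: a CTA patch and a binary vasculature mask.
cta = np.random.rand(8, 8, 8).astype(np.float32)
vessels = (np.random.rand(8, 8, 8) > 0.9).astype(np.float32)
x = combine_cta_and_vessels(cta, vessels)
```

A real network would consume `x` as a two-channel input volume.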

2.
J Magn Reson Imaging ; 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803817

ABSTRACT

BACKGROUND: The combination of anatomical MRI and deep learning-based methods such as convolutional neural networks (CNNs) is a promising strategy to build predictive models of multiple sclerosis (MS) prognosis. However, studies assessing the effect of different input strategies on model performance are lacking. PURPOSE: To compare whole-brain input sampling strategies and regional/specific-tissue strategies, which focus on a priori known relevant areas for disability accrual, to stratify MS patients based on their disability level. STUDY TYPE: Retrospective. SUBJECTS: Three hundred nineteen MS patients (382 brain MRI scans) with clinical assessment of disability level performed within the following 6 months (~70% training/~15% validation/~15% inference in-house dataset) and 440 MS patients from multiple centers (independent external validation cohort). FIELD STRENGTH/SEQUENCE: Single vendor 1.5 T or 3.0 T. Magnetization-Prepared Rapid Gradient-Echo and Fluid-Attenuated Inversion Recovery sequences. ASSESSMENT: A 7-fold patient cross-validation strategy was used to train a 3D-CNN to classify patients into two groups, Expanded Disability Status Scale (EDSS) score ≥ 3.0 or EDSS < 3.0. Two strategies were investigated: 1) a global approach, taking the whole brain volume as input, and 2) regional approaches using five different regions of interest: white matter, gray matter, subcortical gray matter, ventricles, and brainstem structures. The performance of the models was assessed in the in-house and the independent external cohorts. STATISTICAL TESTS: Balanced accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC). RESULTS: With the in-house dataset, the gray matter regional model showed the highest stratification accuracy (81%), followed by the global approach (79%). In the external dataset, without any further retraining, an accuracy of 72% was achieved for the white matter model and 71% for the global approach.
DATA CONCLUSION: The global approach offered the best trade-off between internal performance and external validation to stratify MS patients based on accumulated disability. EVIDENCE LEVEL: 4 TECHNICAL EFFICACY: Stage 2.
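The reported metrics for the binary EDSS stratification (sensitivity, specificity, balanced accuracy) can be computed as below. This is a generic sketch with hypothetical predictions, not the study's evaluation code; labels follow the convention 1 = EDSS ≥ 3.0.

```python
import numpy as np

def stratification_metrics(y_true, y_pred):
    """Sensitivity, specificity, and balanced accuracy for a binary
    EDSS >= 3.0 vs. EDSS < 3.0 stratification (1 = EDSS >= 3.0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    return sens, spec, 0.5 * (sens + spec)

# Hypothetical labels/predictions for four patients.
sens, spec, bacc = stratification_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

Balanced accuracy is the metric of choice above because the two EDSS groups need not be the same size.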

3.
Neuroimage Clin ; 38: 103376, 2023.
Article in English | MEDLINE | ID: mdl-36940621

ABSTRACT

The application of convolutional neural networks (CNNs) to MRI data has emerged as a promising approach to achieving unprecedented levels of accuracy when predicting the course of neurological conditions, including multiple sclerosis, by means of extracting image features not detectable through conventional methods. Additionally, the study of CNN-derived attention maps, which indicate the most relevant anatomical features for CNN-based decisions, has the potential to uncover key disease mechanisms leading to disability accumulation. From a cohort of patients prospectively followed up after a first demyelinating attack, we selected those with T1-weighted and T2-FLAIR brain MRI sequences available for image analysis and a clinical assessment performed within the following six months (N = 319). Patients were divided into two groups according to Expanded Disability Status Scale (EDSS) score: ≥3.0 and < 3.0. A 3D-CNN model predicted the class using whole-brain MRI scans as input. A comparison with a logistic regression (LR) model using volumetric measurements as explanatory variables and a validation of the CNN model on an independent dataset with similar characteristics (N = 440) were also performed. The layer-wise relevance propagation method was used to obtain individual attention maps. The CNN model achieved a mean accuracy of 79% and proved to be superior to the equivalent LR model (77%). Additionally, the model was successfully validated in the independent external cohort without any re-training (accuracy = 71%). Attention-map analyses revealed the predominant role of the frontotemporal cortex and cerebellum in CNN decisions, suggesting that the mechanisms leading to disability accrual exceed the mere presence of brain lesions or atrophy and probably involve how damage is distributed in the central nervous system.
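Layer-wise relevance propagation, used above to obtain the attention maps, redistributes the network's output relevance backwards layer by layer in proportion to each input's contribution. A minimal epsilon-rule sketch for a single linear layer follows; this is a simplification of the full method, and the function name and toy weights are assumptions.

```python
import numpy as np

def lrp_linear(x, W, b, relevance_out, eps=1e-6):
    """Epsilon-rule layer-wise relevance propagation through one linear
    layer: each output's relevance is split among the inputs in
    proportion to their contributions z_ij = x_i * W_ij."""
    z = x[:, None] * W                            # contributions, (in, out)
    zsum = z.sum(axis=0) + b
    denom = zsum + eps * np.sign(zsum)            # stabilised denominator
    return (z / denom * relevance_out).sum(axis=1)

# Toy layer: two inputs feeding two outputs through identity weights.
x = np.array([1.0, 2.0])
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
R = lrp_linear(x, W, b, relevance_out=np.array([1.0, 1.0]))
```

A key property checked below is (approximate) conservation: the input relevances sum to the relevance that entered the layer.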


Subject(s)
Deep Learning , Multiple Sclerosis , Humans , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/pathology , Attention , Blindness/pathology
4.
Comput Med Imaging Graph ; 103: 102157, 2023 01.
Article in English | MEDLINE | ID: mdl-36535217

ABSTRACT

Automated methods for segmentation-based brain volumetry may be confounded by the presence of white matter (WM) lesions, which introduce abnormal intensities that can alter the classification of not only neighboring but also distant brain tissue. These lesions are common in pathologies where brain volumetry is also an important prognostic marker, such as in multiple sclerosis (MS), and thus reducing their effects is critical for improving volumetric accuracy and reliability. In this work, we analyze the effect of WM lesions on deep learning-based brain tissue segmentation methods for brain volumetry and introduce techniques to reduce the error these lesions produce on the measured volumes. We propose a 3D patch-based deep learning framework for brain tissue segmentation which is trained on the outputs of a reference classical method. To deal more robustly with pathological cases having WM lesions, we use a combination of small patches and a percentile-based input normalization. To minimize the effect of WM lesions, we also propose a multi-task double U-Net architecture performing end-to-end inpainting and segmentation, along with a training data generation procedure. In the evaluation, we first analyze the error introduced by artificial WM lesions on our framework as well as in the reference segmentation method without the use of lesion inpainting techniques. To the best of our knowledge, this is the first analysis of the WM lesion effect on a deep learning-based tissue segmentation approach for brain volumetry. The proposed framework shows a significantly smaller and more localized error introduced by WM lesions than the reference segmentation method, which displays much larger global differences. We also evaluated the proposed lesion effect minimization technique by comparing the measured volumes before and after introducing artificial WM lesions to healthy images.
The proposed approach performing end-to-end inpainting and segmentation effectively reduces the error introduced by small and large WM lesions in the resulting volumetry, obtaining absolute volume differences of 0.01 ± 0.03% for gray matter (GM) and 0.02 ± 0.04% for WM. Increasing the accuracy and reliability of automated brain volumetry methods will reduce the sample size needed to establish meaningful correlations in clinical studies and allow its use in individualized assessments as a diagnostic and prognostic marker for neurodegenerative pathologies.
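The percentile-based input normalization mentioned above can be sketched as clipping to robust percentiles and rescaling to [0, 1], which bounds the influence of abnormal lesion intensities on the input distribution. The exact percentiles used by the authors are not stated in the abstract, so the values here are assumptions.

```python
import numpy as np

def percentile_normalize(img, low=0.5, high=99.5):
    """Clip intensities to robust percentiles and rescale to [0, 1],
    so outlier intensities (e.g. inside WM lesions) have a bounded
    effect on the normalized input."""
    lo, hi = np.percentile(img, [low, high])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Toy volume with heavy-tailed intensities.
img = np.random.randn(16, 16, 16) * 100.0
out = percentile_normalize(img)
```

Unlike min-max normalization, a single extreme lesion voxel cannot compress the intensity range of the rest of the image.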


Subject(s)
Deep Learning , Multiple Sclerosis , White Matter , Humans , White Matter/diagnostic imaging , White Matter/pathology , Reproducibility of Results , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/pathology , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Image Processing, Computer-Assisted/methods
5.
Front Neurosci ; 16: 1007619, 2022.
Article in English | MEDLINE | ID: mdl-36507318

ABSTRACT

Longitudinal magnetic resonance imaging (MRI) has an important role in multiple sclerosis (MS) diagnosis and follow-up. Specifically, the presence of new lesions on brain MRI scans is considered a robust predictive biomarker for disease progression. New lesions are a high-impact prognostic factor to predict evolution to MS or risk of disability accumulation over time. However, the detection of this disease activity is performed visually by comparing the follow-up and baseline scans. Due to the presence of small lesions, misregistration, and high inter-/intra-observer variability, this detection of new lesions is prone to errors. In this direction, a recent Medical Image Computing and Computer Assisted Intervention (MICCAI) challenge addressed this automatic new-lesion quantification. The MSSEG-2: MS new lesions segmentation challenge offers an evaluation framework for this new-lesion segmentation task with a large database (100 patients, each with two time points) compiled from the OFSEP (Observatoire français de la sclérose en plaques) cohort, the French MS registry, including 3D T2-w fluid-attenuated inversion recovery (T2-FLAIR) images from different centers and scanners. Apart from changes in centers, MRI scanners, and acquisition protocols, there are more challenges that hinder the automated detection of new lesions, such as the need for large annotated datasets, which may not be easily available, or the fact that new lesions are small areas, producing a class imbalance problem that could bias trained models toward the non-lesion class. In this article, we present a novel automated method for new-lesion detection in MS patient images. Our approach is based on a cascade of two 3D patch-wise fully convolutional neural networks (FCNNs). The first FCNN is trained to be highly sensitive, revealing possible candidate new-lesion voxels, while the second FCNN is trained to reduce the number of misclassified voxels coming from the first network.
3D T2-FLAIR images from the two time points were pre-processed and linearly co-registered. Afterward, a fully convolutional neural network, whose inputs were only the baseline and follow-up images, was trained to detect new MS lesions. Our approach obtained a mean segmentation Dice similarity coefficient of 0.42 with a detection F1-score of 0.5. Compared to the challenge participants, we obtained one of the highest precision scores (PPVL = 0.52), the best PPVL rate (0.53), and a lesion detection sensitivity (SensL) of 0.53.
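The Dice similarity coefficient reported above measures voxel overlap between a predicted mask and a reference mask; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((4, 4), dtype=int); a[:2] = 1    # 8 voxels (rows 0-1)
b = np.zeros((4, 4), dtype=int); b[1:3] = 1   # 8 voxels (rows 1-2), 4 overlap
d = dice(a, b)                                # 2*4 / 16 = 0.5
```

Lesion-wise detection scores (F1, SensL, PPVL) are computed analogously, but counting connected lesion components rather than voxels.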

6.
Front Neurosci ; 16: 954662, 2022.
Article in English | MEDLINE | ID: mdl-36248650

ABSTRACT

The assessment of disease activity using serial brain MRI scans is one of the most valuable strategies for monitoring treatment response in patients with multiple sclerosis (MS) receiving disease-modifying treatments. Recently, several deep learning approaches have been proposed to improve this analysis, obtaining a good trade-off between sensitivity and specificity, especially when using T1-w and T2-FLAIR images as inputs. However, the need to acquire two different types of images is time-consuming and costly, and both are not always available in clinical practice. In this paper, we investigate an approach to generate synthetic T1-w images from T2-FLAIR images and subsequently analyse the impact of using original and synthetic T1-w images on the performance of a state-of-the-art approach for longitudinal MS lesion detection. We evaluate our approach on a dataset containing 136 images from MS patients, 73 of which show lesion activity (the appearance of new T2 lesions in follow-up scans). To evaluate the synthesis of the images, we analyse the structural similarity index metric and the median absolute error, obtaining consistent results. To study the impact of synthetic T1-w images, we evaluate the performance of the new-lesion detection approach when using (1) both T2-FLAIR and T1-w original images, (2) only T2-FLAIR images, and (3) both T2-FLAIR and synthetic T1-w images. Sensitivities of 0.75, 0.63, and 0.81, respectively, were obtained at the same false-positive rate (0.14) for all experiments. In addition, we also present the results obtained when using the data from the international MSSEG-2 challenge, also showing an improvement when including synthetic T1-w images. In conclusion, we show that the use of synthetic images can compensate for the lack of data or even be used instead of the original image to homogenize the contrast of the different acquisitions in new-T2-lesion detection algorithms.
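The two synthesis metrics mentioned above can be sketched as follows. Note that the standard SSIM is computed over sliding windows; this single-window global form (together with the median absolute error) is only an illustration of the quantities involved, not the paper's evaluation code.

```python
import numpy as np

def median_absolute_error(x, y):
    """Median of the voxel-wise absolute differences."""
    return np.median(np.abs(x - y))

def global_ssim(x, y, L=1.0):
    """Single-window SSIM over the whole image; the standard SSIM
    averages this quantity over local sliding windows."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = np.random.rand(32, 32)
s_same = global_ssim(x, x)        # identical images give SSIM ~= 1
m_same = median_absolute_error(x, x)
```

Higher SSIM and lower median absolute error indicate the synthetic T1-w image better matches the original.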

7.
Clin Pract ; 12(3): 350-362, 2022 May 20.
Article in English | MEDLINE | ID: mdl-35645317

ABSTRACT

The aim of this study is to show the usefulness of collaborative work in the evaluation of prostate cancer from T2-weighted MRI using a dedicated software tool. The variability of annotations on images of the prostate gland (central and peripheral zones as well as tumour) by two independent experts was first evaluated, and then compared with a consensus between the two experts. Using a prostate MRI database, the experts drew regions of interest (ROIs) corresponding to healthy prostate (peripheral and central zones) and cancer. One of the experts then drew the ROIs with knowledge of the other expert's ROIs. The Dice coefficient was computed from the surface area of each ROI, and the Hausdorff distance was measured from the respective contours. Both were evaluated between the different experiments, taking the annotations of the second expert as the reference. The results showed that the significant differences between the two experts disappeared with collaborative work. In conclusion, this study shows that collaborative work with a dedicated tool enables consensus between experts in the evaluation of prostate cancer from T2-weighted MRI.
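The contour-agreement measure used above can be illustrated with point sets: the symmetric Hausdorff distance is the largest distance from any point on one contour to the nearest point on the other. A brute-force sketch with hypothetical 2D contour points (the area-based Dice computation is analogous):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two contour point sets:
    the max of the two directed Hausdorff distances."""
    # Pairwise Euclidean distances, shape (len(A), len(B)).
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(),   # directed A -> B
               D.min(axis=0).max())   # directed B -> A

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 3.0]])
h = hausdorff(A, B)   # point (0, 3) in B is 3.0 from its nearest A point
```

Unlike Dice, the Hausdorff distance is sensitive to a single outlying point on a contour, which is why both measures are reported together.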

8.
Mult Scler ; 28(8): 1209-1218, 2022 07.
Article in English | MEDLINE | ID: mdl-34859704

ABSTRACT

BACKGROUND: Active (new/enlarging) T2 lesion counts are routinely used in the clinical management of multiple sclerosis. Thus, automated tools able to accurately identify active T2 lesions would be of high interest to neuroradiologists for assisting in their clinical activity. OBJECTIVE: To compare the accuracy of different visual and automated methods in detecting active T2 lesions and in identifying radiologically active patients. METHODS: One hundred multiple sclerosis patients underwent two magnetic resonance imaging examinations within 12 months. Four approaches were assessed for detecting active T2 lesions: (1) conventional neuroradiological reports; (2) prospective visual analyses performed by an expert; (3) an automated unsupervised tool; and (4) a supervised convolutional neural network. As a gold standard, a reference outcome was created by the consensus of two observers. RESULTS: The automated methods detected a higher number of active T2 lesions and a higher number of active patients, but also a higher number of false-positive active patients, than the visual methods. The convolutional neural network model was more sensitive in detecting active T2 lesions and active patients than the other automated method. CONCLUSION: Automated convolutional neural network models show potential as an aid to neuroradiological assessment in clinical practice, although visual supervision of the outcomes is still required.


Subject(s)
Multiple Sclerosis , Humans , Magnetic Resonance Imaging/methods , Multiple Sclerosis/pathology , Prospective Studies
9.
J Magn Reson Imaging ; 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34137113

ABSTRACT

BACKGROUND: Manual brain extraction from magnetic resonance (MR) images is time-consuming and prone to intra- and inter-rater variability. Several automated approaches have been developed to alleviate these constraints, including deep learning pipelines. However, these methods tend to lose performance on unseen magnetic resonance imaging (MRI) scanner vendors and different imaging protocols. PURPOSE: To present PARIETAL, a pre-trained deep learning brain extraction method, and to evaluate it for clinical use. We compare its reproducibility in a scan/rescan analysis and its robustness among scanners of different manufacturers. STUDY TYPE: Retrospective. POPULATION: Twenty-one subjects (12 women), aged 22-48 years, each scanned on three different MRI machines, including a scan/rescan acquisition on each. FIELD STRENGTH/SEQUENCE: T1-weighted images acquired on a 3-T Siemens scanner with a magnetization-prepared rapid gradient-echo sequence and on two 1.5-T scanners, Philips and GE, with spin-echo and spoiled gradient-recalled (SPGR) sequences, respectively. ASSESSMENT: Analysis of the intracranial cavity volumes obtained for each subject on the three different scanners and the scan/rescan acquisitions. STATISTICAL TESTS: Parametric permutation tests of the differences in volumes to rank and statistically evaluate the performance of PARIETAL compared to state-of-the-art methods. RESULTS: The mean absolute intracranial volume differences obtained by PARIETAL in the scan/rescan analysis were 1.88 mL, 3.91 mL, and 4.71 mL for the Siemens, GE, and Philips scanners, respectively. PARIETAL was the best-ranked method on the Siemens and GE scanners, dropping to rank 2 on the Philips images. Intracranial volume differences for the same subject between scanners were 5.46 mL, 27.16 mL, and 30.44 mL for the GE/Philips, Siemens/Philips, and Siemens/GE comparisons, respectively.
The permutation tests revealed that PARIETAL always ranked first, obtaining the most similar volumetric results between scanners. DATA CONCLUSION: PARIETAL accurately segments the brain and generalizes to images acquired at different sites without the need for retraining or fine-tuning. PARIETAL is publicly available. LEVEL OF EVIDENCE: 2 TECHNICAL EFFICACY STAGE: 2.
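A paired (sign-flip) permutation test of the kind used to rank the methods can be sketched as follows; the volume differences below are hypothetical illustration values, not the study's data, and the exact test the authors ran may differ.

```python
import numpy as np

def paired_permutation_test(diffs, n_perm=10000, seed=0):
    """Sign-flip permutation test on paired volume differences:
    p-value for the null hypothesis that the mean difference is 0."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    observed = abs(diffs.mean())
    # Randomly flip the sign of each paired difference.
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    perm_means = np.abs((signs * diffs).mean(axis=1))
    # Add-one correction so the p-value is never exactly zero.
    return (np.sum(perm_means >= observed) + 1) / (n_perm + 1)

# Hypothetical scan/rescan intracranial volume differences (mL).
p = paired_permutation_test([1.9, 2.1, 1.7, 2.3, 1.8, 2.0])
```

With all six differences consistently positive, only the all-positive and all-negative sign assignments reach the observed mean, so the p-value lands near 2/64 ≈ 0.031.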

10.
Front Neurosci ; 15: 608808, 2021.
Article in English | MEDLINE | ID: mdl-33994917

ABSTRACT

Segmentation of brain images from Magnetic Resonance Images (MRI) is an indispensable step in clinical practice. Morphological changes of sub-cortical brain structures and quantification of brain lesions are considered biomarkers of neurological and neurodegenerative disorders and are used for diagnosis, treatment planning, and monitoring disease progression. In recent years, deep learning methods have shown outstanding performance in medical image segmentation. However, these methods suffer from a generalisability problem due to inter-centre and inter-scanner variability of MRI images. The main objective of this study is to develop an automated deep learning segmentation approach that is accurate and robust to variability in scanners and acquisition protocols. In this paper, we propose a transductive transfer learning approach for domain adaptation to reduce the domain-shift effect in brain MRI segmentation. The transductive scenario assumes that there are sets of images from two different domains: (1) source images with manually annotated labels; and (2) target images without expert annotations. The network is then jointly optimised, integrating both source and target images into the transductive training process, to segment the regions of interest and to minimise the domain-shift effect. We propose using a histogram loss at the feature level to carry out the latter optimisation. In order to demonstrate the benefit of the proposed approach, the method has been tested on two different brain MRI segmentation problems using multi-centre and multi-scanner databases: (1) sub-cortical brain structure segmentation; and (2) white matter hyperintensities segmentation. The experiments showed that the segmentation performance of a pre-trained model could be significantly improved, by up to 10%.
For the first segmentation problem, it was possible to achieve a maximum improvement from 0.680 to 0.799 in the average Dice Similarity Coefficient (DSC) metric, and for the second problem the average DSC improved from 0.504 to 0.602. Moreover, the improvements after domain adaptation were on par with, or better than, those of commonly used traditional unsupervised segmentation methods (FIRST and LST), while also achieving faster execution times. Taking this into account, this work presents one more step toward the practical implementation of deep learning algorithms in the clinical routine.
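The feature-level histogram loss described above can be sketched as a distance between normalised feature histograms of the source and target domains; minimising it pulls the two feature distributions together. The binning and the squared-difference form are assumptions, since the paper's exact formulation is not given in the abstract.

```python
import numpy as np

def histogram_loss(feat_src, feat_tgt, bins=32, value_range=(0.0, 1.0)):
    """Mean squared difference between normalised histograms of
    source- and target-domain feature activations."""
    hs, _ = np.histogram(feat_src, bins=bins, range=value_range, density=True)
    ht, _ = np.histogram(feat_tgt, bins=bins, range=value_range, density=True)
    return float(np.mean((hs - ht) ** 2))

rng = np.random.default_rng(0)
a = rng.uniform(0, 1, 10000)          # "source" feature activations
loss_same = histogram_loss(a, a)      # identical distributions
loss_diff = histogram_loss(a, a ** 4) # skewed "target" distribution
```

In the transductive setting, a loss of this kind needs no target-domain labels, only the target images' feature activations.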

11.
Comput Med Imaging Graph ; 90: 101908, 2021 06.
Article in English | MEDLINE | ID: mdl-33901919

ABSTRACT

Hemorrhagic stroke is a condition involving the rupture of a vessel inside the brain and is characterized by high mortality rates. Even if the patient survives, stroke can cause temporary or permanent disability depending on how long blood flow has been interrupted, so it is crucial to act fast to prevent irreversible damage. In this work, a deep learning-based approach to automatically segment hemorrhagic stroke lesions in CT scans is proposed. Our approach is based on a 3D U-Net architecture that incorporates the recently proposed squeeze-and-excitation blocks. Moreover, a restrictive patch sampling is proposed to alleviate the class imbalance problem and to deal with intra-ventricular hemorrhage, which has not been considered a stroke lesion in our study. We also analyzed the effect of patch size, the use of different modalities, data augmentation, and the incorporation of different loss functions on the segmentation results. All analyses were performed using a five-fold cross-validation strategy on a clinical dataset composed of 76 cases. The obtained results demonstrate that the introduction of squeeze-and-excitation blocks, together with the restrictive patch sampling and symmetric modality augmentation, significantly improved the results, achieving a mean DSC of 0.86±0.074 and showing promising automated segmentation performance.
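A squeeze-and-excitation block, of the kind incorporated into the 3D U-Net above, can be sketched in NumPy: each channel is "squeezed" by global average pooling, passed through a small two-layer bottleneck, and the resulting sigmoid gates rescale the channels. The weights here are random placeholders, not trained parameters, and the real block sits inside a convolutional network.

```python
import numpy as np

def squeeze_excite(x, W1, W2):
    """Squeeze-and-excitation on a (C, D, H, W) feature map."""
    s = x.mean(axis=(1, 2, 3))               # squeeze: per-channel mean, (C,)
    h = np.maximum(W1 @ s, 0.0)              # bottleneck + ReLU
    w = 1.0 / (1.0 + np.exp(-(W2 @ h)))      # sigmoid channel gates, (C,)
    return x * w[:, None, None, None]        # channel-wise rescale

C, r = 4, 2                                  # channels, reduction ratio
x = np.random.rand(C, 5, 5, 5)               # toy non-negative feature map
W1 = np.random.randn(C // r, C)              # reduction weights (placeholder)
W2 = np.random.randn(C, C // r)              # expansion weights (placeholder)
y = squeeze_excite(x, W1, W2)
```

Because the gates lie in (0, 1), the block can only attenuate channels, letting the network learn which feature channels to emphasise per input.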


Subject(s)
Hemorrhagic Stroke , Stroke , Humans , Image Processing, Computer-Assisted , Stroke/diagnostic imaging , Tomography, X-Ray Computed
12.
Neuroinformatics ; 19(3): 477-492, 2021 07.
Article in English | MEDLINE | ID: mdl-33389607

ABSTRACT

Brain atrophy quantification plays a fundamental role in neuroinformatics since it permits studying brain development and neurological disorders. However, the lack of a ground truth prevents testing the accuracy of longitudinal atrophy quantification methods. We propose a deep learning framework to generate longitudinal datasets by deforming T1-w brain magnetic resonance imaging scans as requested through segmentation maps. Our proposal incorporates a cascaded multi-path U-Net optimised with a multi-objective loss which allows its paths to generate different brain regions accurately. We provided our model with baseline scans and real follow-up segmentation maps from two longitudinal datasets, ADNI and OASIS, and observed that our framework could produce synthetic follow-up scans that matched the real ones (Total scans= 584; Median absolute error: 0.03 ± 0.02; Structural similarity index: 0.98 ± 0.02; Dice similarity coefficient: 0.95 ± 0.02; Percentage of brain volume change: 0.24 ± 0.16; Jacobian integration: 1.13 ± 0.05). Compared to two relevant works generating brain lesions using U-Nets and conditional generative adversarial networks (CGAN), our proposal outperformed them significantly in most cases (p < 0.01), except in the delineation of brain edges where the CGAN took the lead (Jacobian integration: Ours - 1.13 ± 0.05 vs CGAN - 1.00 ± 0.02; p < 0.01). We examined whether changes induced with our framework were detected by FAST, SPM, SIENA, SIENAX, and the Jacobian integration method. We observed that induced and detected changes were highly correlated (Adj. R2 > 0.86). Our preliminary results on harmonised datasets showed the potential of our framework to be applied to various data collections without further adjustment.
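Jacobian integration, one of the reported measures, builds on the voxel-wise Jacobian determinant of the deformation: values near 1 indicate volume preservation, below 1 local shrinkage (atrophy), and above 1 local expansion. A 2D finite-difference sketch follows (the 3D case adds one row and column to the Jacobian matrix); the axis conventions are assumptions.

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 2D displacement field with
    shape (2, H, W), for the transform phi(p) = p + disp(p)."""
    dudy, dudx = np.gradient(disp[0])   # gradients of x-displacement
    dvdy, dvdx = np.gradient(disp[1])   # gradients of y-displacement
    # det [[1 + du/dx, du/dy], [dv/dx, 1 + dv/dy]]
    return (1 + dudx) * (1 + dvdy) - dudy * dvdx

disp = np.zeros((2, 8, 8))              # identity transform
jd = jacobian_determinant(disp)         # determinant is 1 everywhere
```

Integrating the determinant over a region gives the region's volume change induced by the deformation, which is how the synthetic atrophy above can be quantified.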


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Atrophy , Brain/diagnostic imaging , Brain/pathology , Humans , Image Processing, Computer-Assisted
13.
Neuroimage Clin ; 27: 102306, 2020.
Article in English | MEDLINE | ID: mdl-32585568

ABSTRACT

Accurate volume measurements of the brain structures are important for treatment evaluation and disease follow-up in multiple sclerosis (MS) patients. With the aim of obtaining reproducible measurements and avoiding the intra-/inter-rater variability that manual delineations introduce, several automated brain structure segmentation strategies have been proposed in recent years. However, most of these strategies tend to be affected by the abnormal MS lesion intensities, which corrupt the structure segmentation result. To address this problem, we recently reformulated two state-of-the-art label fusion strategies, improving their segmentation performance on the lesion areas. Here, we integrate these reformulated strategies into a completely automated pipeline that includes pre-processing (inhomogeneity correction and intensity normalization), atlas selection, masked registration, and label fusion, and combine them with a state-of-the-art automated lesion segmentation method. We study the effect of automating the lesion mask acquisition on the structure segmentation result, analyzing the output of the proposed pipeline when used in combination with manually and automatically segmented lesion masks. We further analyze the effect of those masks on the segmentation result of the original label fusion strategies when combined with the well-established pre-processing step of lesion filling. The experiments performed show that, when the original methods are used to segment the lesion-filled images, significant structure volume differences are observed in a comparison between manually and automatically segmented lesion masks. The results indicate a mean volume decrease of 1.13%±1.93 in the cerebrospinal fluid, and mean volume increases of 0.13%±0.14 and 0.05%±0.08 in the cerebral white matter and cerebellar gray matter, respectively.
On the other hand, no significant volume differences were found when the proposed automated pipeline was used for segmentation, which demonstrates its robustness against variations in the lesion mask used.
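Label fusion, the last step of the pipeline above, combines several atlas-propagated label maps into a single segmentation. The reformulated strategies in the paper are more elaborate (intensity-weighted); the simplest rule, voxel-wise majority voting, can be sketched as:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse N atlas-propagated label maps (stacked on axis 0) by
    voxel-wise majority vote, the simplest label fusion rule."""
    stack = np.asarray(label_maps)
    n_labels = int(stack.max()) + 1
    # Count votes per label at every voxel, then pick the winner.
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 label maps with labels {0, 1, 2}.
maps = [np.array([[0, 1], [2, 2]]),
        np.array([[0, 1], [1, 2]]),
        np.array([[0, 0], [2, 2]])]
fused = majority_vote_fusion(maps)
```

The lesion mask matters to this step because abnormal intensities inside lesions can mislead the registration and weighting that precede the vote.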


Subject(s)
Brain/pathology , Gray Matter/pathology , Image Processing, Computer-Assisted , Multiple Sclerosis/pathology , Adult , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Middle Aged , White Matter/pathology
14.
Comput Methods Programs Biomed ; 194: 105521, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32434099

ABSTRACT

BACKGROUND AND OBJECTIVE: Acute stroke lesion segmentation tasks are of great clinical interest as they can help doctors make better informed, time-critical treatment decisions. Magnetic resonance imaging (MRI) is time demanding but can provide images that are considered the gold standard for diagnosis. Automated stroke lesion segmentation can provide an estimate of the location and volume of the lesioned tissue, which can help clinicians better assess and evaluate the risks of each treatment. METHODS: We propose a deep learning methodology for acute and sub-acute stroke lesion segmentation using multimodal MR imaging. We pre-process the data to facilitate learning features based on the symmetry of the brain hemispheres. The issue of class imbalance is tackled using small patches with a balanced training patch sampling strategy and a dynamically weighted loss function. Moreover, a combination of whole-patch predictions, using a U-Net based CNN architecture, and a high degree of patch overlap reduces the need for additional post-processing. RESULTS: The proposed method is evaluated using two public datasets from the 2015 Ischemic Stroke Lesion Segmentation challenge (ISLES 2015). These involve the tasks of sub-acute stroke lesion segmentation (SISS) and acute stroke penumbra estimation (SPES) from multiple diffusion, perfusion, and anatomical MRI modalities. The performance is compared against state-of-the-art methods with a blind online testing set evaluation on each of the challenges. At the time of submitting this manuscript, our approach ranks first in the online rankings for both the SISS (DSC=0.59 ± 0.31) and SPES sub-tasks (DSC=0.84 ± 0.10). When compared with the rest of the submitted strategies, we achieve top-rank performance with a lower Hausdorff distance.
CONCLUSIONS: Better segmentation results are obtained by leveraging the anatomy and pathophysiology of acute stroke lesions and using a combined approach to minimize the effects of class imbalance. The same training procedure is used for both tasks, showing the proposed methodology can generalize well enough to deal with different unrelated tasks and imaging modalities without hyper-parameter tuning. In order to promote the reproducibility of our results, a public version of the proposed method has been released to the scientific community.
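The dynamically weighted loss used above against class imbalance can be sketched as a binary cross-entropy whose class weights are derived from the class frequencies of the current batch, so the rare lesion voxels are not drowned out by background. The exact weighting scheme here is an assumption, not the paper's formula.

```python
import numpy as np

def weighted_cross_entropy(p_lesion, y, eps=1e-12):
    """Binary cross-entropy with per-batch class weights set inversely
    to each class's frequency in the batch."""
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p_lesion, dtype=float), eps, 1 - eps)
    f1 = max(y.mean(), eps)                     # lesion-voxel frequency
    f0 = max(1 - y.mean(), eps)                 # background frequency
    w1, w0 = 1.0 / f1, 1.0 / f0                 # rare class gets large weight
    losses = -(w1 * y * np.log(p) + w0 * (1 - y) * np.log(1 - p))
    return losses.mean()

# 1 lesion voxel among 8: its error term is up-weighted 8x.
loss = weighted_cross_entropy([0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
                              [1, 0, 0, 0, 0, 0, 0, 0])
```

Recomputing the weights per batch ("dynamically") keeps the balance correct even when the lesion fraction varies between sampled patches.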


Subject(s)
Neural Networks, Computer , Stroke , Humans , Magnetic Resonance Imaging , Multimodal Imaging , Reproducibility of Results , Stroke/diagnostic imaging
15.
Neuroimage Clin ; 25: 102181, 2020.
Article in English | MEDLINE | ID: mdl-31982680

ABSTRACT

Autism Spectrum Disorder (ASD) is a brain disorder typically characterized by deficits in social communication and interaction, as well as restrictive and repetitive behaviors and interests. In recent years, there has been an increase in the use of magnetic resonance imaging (MRI) to help detect common patterns in autism subjects versus typical controls for classification purposes. In this work, we propose a method for the classification of ASD patients versus control subjects using both functional and structural MRI information. Functional connectivity patterns among brain regions, together with volumetric correspondences of gray matter volumes among cortical parcels, are used as features for the functional and structural processing pipelines, respectively. The classification network is a combination of stacked autoencoders trained in an unsupervised manner and multilayer perceptrons trained in a supervised manner. Quantitative analysis is performed on 817 cases from the multisite international Autism Brain Imaging Data Exchange I (ABIDE I) dataset, consisting of 368 ASD patients and 449 control subjects, obtaining a classification accuracy of 85.06 ± 3.52% when using an ensemble of classifiers. Merging information from functional and structural sources significantly outperforms the implemented individual pipelines.


Subject(s)
Autism Spectrum Disorder/diagnostic imaging , Brain/diagnostic imaging , Image Interpretation, Computer-Assisted/standards , Machine Learning , Neuroimaging/standards , Adolescent , Adult , Autism Spectrum Disorder/pathology , Autism Spectrum Disorder/physiopathology , Brain/pathology , Brain/physiopathology , Child , Connectome/methods , Connectome/standards , Female , Humans , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging , Male , Middle Aged , Neuroimaging/methods , Reproducibility of Results , Young Adult
16.
Neuroimage Clin ; 25: 102149, 2020.
Article in English | MEDLINE | ID: mdl-31918065

ABSTRACT

INTRODUCTION: Longitudinal magnetic resonance imaging (MRI) has an important role in multiple sclerosis (MS) diagnosis and follow-up. Specifically, the presence of new T2-w lesions on brain MR scans is considered a predictive biomarker for the disease. In this study, we propose a fully convolutional neural network (FCNN) to detect new T2-w lesions in longitudinal brain MR images. METHODS: Multichannel brain MR scans (T1-w, T2-w, PD-w, and FLAIR) were obtained one year apart for 60 patients, 36 of them with new T2-w lesions. Modalities from both time points were preprocessed and linearly coregistered. Afterwards, an FCNN, whose inputs were the baseline and follow-up images, was trained to detect new MS lesions. The first part of the network consisted of U-Net blocks that learned the deformation fields (DFs) and nonlinearly registered the baseline image to the follow-up image for each input modality. The learned DFs together with the baseline and follow-up images were then fed to the second part, another U-Net that performed the final detection and segmentation of new T2-w lesions. The model was trained end-to-end, simultaneously learning both the DFs and the new T2-w lesions, using a combined loss function. We evaluated the performance of the model following a leave-one-out cross-validation scheme. RESULTS: In terms of the detection of new lesions, we obtained a mean Dice similarity coefficient of 0.83 with a true positive rate of 83.09% and a false positive detection rate of 9.36%. In terms of segmentation, we obtained a mean Dice similarity coefficient of 0.55. The performance of our model was significantly better compared to the state-of-the-art methods (p < 0.05). CONCLUSIONS: Our proposal shows the benefits of combining a learning-based registration network with a segmentation network. Compared to other methods, the proposed model decreases the number of false positives.
During testing, the proposed model operates faster than the other two state-of-the-art methods based on the DF obtained by Demons.
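As a minimal illustration of the evaluation metrics reported above, the Dice similarity coefficient and lesion-wise detection rates can be computed from binary masks as follows; the mask shapes and helper names are illustrative and not taken from the paper's code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def detection_rates(pred_lesions, true_lesions):
    """Lesion-wise detection: a true lesion counts as detected if any
    predicted lesion overlaps it; a predicted lesion with no overlap
    with any true lesion is a false positive. Lesions are given as
    lists of binary masks (one mask per connected lesion)."""
    detected = sum(any(np.logical_and(t, p).any() for p in pred_lesions)
                   for t in true_lesions)
    false_pos = sum(not any(np.logical_and(p, t).any() for t in true_lesions)
                    for p in pred_lesions)
    tpr = detected / len(true_lesions) if true_lesions else 1.0
    fpr = false_pos / len(pred_lesions) if pred_lesions else 0.0
    return tpr, fpr
```

With per-lesion masks from a connected-component analysis, these two functions reproduce the kind of detection-rate and overlap figures quoted in the RESULTS section.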


Subject(s)
Brain/diagnostic imaging; Deep Learning; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Multiple Sclerosis/diagnostic imaging; Neuroimaging/methods; Adult; Biomarkers; Female; Humans; Longitudinal Studies; Male; Middle Aged; Multiple Sclerosis/pathology
17.
Comput Biol Med ; 115: 103487, 2019 12.
Article in English | MEDLINE | ID: mdl-31629272

ABSTRACT

The use of Computed Tomography (CT) imaging for patients with stroke symptoms is an essential step for triaging and diagnosis in many hospitals. However, the subtle expression of ischemia in acute CT images has made it hard for automated methods to extract potentially quantifiable information. In this work, we present and evaluate an automated deep learning tool for acute stroke lesion core segmentation from CT and CT perfusion images. For evaluation, the Ischemic Stroke Lesion Segmentation (ISLES) 2018 challenge dataset is used, which includes 94 cases for training and 62 for testing. The presented method is an improved version of our challenge approach, which ranked among the workshop finalists. The introduced contributions include a more regularized network training procedure, symmetric modality augmentation, and uncertainty filtering. Each of these steps is quantitatively evaluated by cross-validation on the training set. Moreover, our proposal is evaluated against other state-of-the-art methods through a blind testing-set evaluation on the challenge website, which maintains an ongoing leaderboard for fair and direct method comparison. The tool reaches competitive performance, ranking among the top-performing methods of the ISLES 2018 testing leaderboard with an average Dice similarity coefficient of 49%. In the clinical setting, this method can provide an estimate of lesion core size and location without time-consuming magnetic resonance imaging. The presented tool is made publicly available for the research community.
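Two of the contributions named above lend themselves to short sketches. The snippet below shows one plausible reading of symmetric modality augmentation (mirroring each modality along an assumed left-right axis and appending it as extra channels, so the network can compare a voxel with its contralateral counterpart) and a simple entropy-based uncertainty filter; both are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def add_symmetric_channels(volume):
    """Append a left-right mirrored copy of each modality as extra
    channels. volume: array of shape (channels, x, y, z); axis 1 is
    assumed to be the left-right axis (an assumption of this sketch)."""
    mirrored = volume[:, ::-1, :, :]          # flip along the L-R axis
    return np.concatenate([volume, mirrored], axis=0)

def uncertainty_filter(prob_map, low=0.5, max_entropy=0.9):
    """Keep voxels classified as lesion core only when the binary
    predictive entropy (in bits) is below a threshold, so highly
    uncertain voxels are discarded from the final mask."""
    p = np.clip(prob_map, 1e-7, 1 - 1e-7)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return (p > low) & (entropy < max_entropy)
```

The entropy threshold and the channel layout are hypothetical parameters; the point is only to make the two named training-time and inference-time steps concrete.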


Subject(s)
Brain Ischemia/diagnostic imaging; Deep Learning; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Stroke/diagnostic imaging; Tomography, X-Ray Computed; Humans
18.
Sci Rep ; 9(1): 6742, 2019 05 01.
Article in English | MEDLINE | ID: mdl-31043688

ABSTRACT

In recent years, several convolutional neural networks (CNNs) have been proposed to segment sub-cortical brain structures from magnetic resonance images (MRIs). Although these methods provide accurate segmentation, there is a reproducibility issue when segmenting MRI volumes from different image domains, e.g., with differing protocols, scanners, and intensity profiles. Thus, the network must be retrained from scratch to perform similarly in different imaging domains, limiting the applicability of such methods in clinical settings. In this paper, we employ a transfer learning strategy to solve the domain shift problem. We reduced the number of training images by leveraging the knowledge obtained by a pretrained network, and improved the training speed by reducing the number of trainable parameters of the CNN. We tested our method on two publicly available datasets, MICCAI 2012 and IBSR, and compared it with a commonly used approach, FIRST. Our method achieved results similar to those of a fully trained CNN while using remarkably fewer images from the target domain. Moreover, training the network with only one image from the MICCAI 2012 dataset and three images from the IBSR dataset was sufficient to significantly outperform FIRST (p < 0.001 and p < 0.05, respectively).
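The parameter-freezing idea behind this transfer learning strategy can be sketched on a toy two-layer linear model, where only the last layer is updated on the new domain and the "pretrained" layer stays frozen; the model, names, and hyperparameters are illustrative, not the paper's CNN:

```python
import numpy as np

def fine_tune_last_layer(W1, W2, X, y, lr=0.02, steps=2000):
    """Fine-tune a tiny two-layer linear model f(x) = W2 @ (W1 @ x)
    by gradient descent on the mean-squared error, updating only W2.
    W1 plays the role of the frozen pretrained feature extractor,
    mimicking the reduced trainable-parameter strategy."""
    W2 = W2.copy()                      # leave the caller's array intact
    H = W1 @ X                          # frozen features, computed once
    for _ in range(steps):
        pred = W2 @ H
        grad = 2 * (pred - y) @ H.T / X.shape[1]   # dMSE/dW2
        W2 -= lr * grad
    return W2
```

Since only W2 receives gradients, the number of trainable parameters and the per-step cost drop sharply, which is the mechanism the abstract credits for faster training with fewer target-domain images.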


Subject(s)
Brain/diagnostic imaging; Image Processing, Computer-Assisted; Neural Networks, Computer; User-Computer Interface; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging
19.
Med Image Anal ; 54: 76-87, 2019 05.
Article in English | MEDLINE | ID: mdl-30836308

ABSTRACT

Breast magnetic resonance imaging (MRI) and X-ray mammography are two image modalities widely used for early detection and diagnosis of breast diseases in women. The combination of these modalities, traditionally done using intensity-based registration algorithms, leads to a more accurate diagnosis and treatment, due to the capability of co-localizing lesions and susceptible areas between the two image modalities. In this work, we present the first attempt to register breast MRI and X-ray mammographic images using intensity gradients as the similarity measure. Specifically, a patient-specific biomechanical model of the breast, extracted from the MRI image, is used to mimic the mammographic acquisition. The intensity gradients of the glandular tissue are directly projected from the 3D MRI volume to the 2D mammographic space, and two different gradient-based metrics are tested to guide the registration: the normalized cross-correlation of the scalar gradient values and the gradient correlation of the vectorial gradients. We compare these two approaches to an intensity-based algorithm, where the MRI volume is transformed to a synthetic computed tomography (pseudo-CT) image using the partial volume effect obtained from the glandular tissue segmentation performed by means of an Expectation-Maximization algorithm. This allows us to obtain the digitally reconstructed radiographies by a direct intensity projection. The best results are obtained using the scalar gradient approach along with a transversely isotropic material model, obtaining a target registration error (TRE), in millimeters, of 5.65 ± 2.76 for CC- and of 7.83 ± 3.04 for MLO-mammograms, while the TRE is 7.33 ± 3.62 in the 3D MRI. We also evaluate the effect of the glandularity of the breast as well as the landmark position on the TRE, obtaining moderate correlation values (0.65 and 0.77, respectively), concluding that these aspects need to be considered to increase the accuracy of future approaches.
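The scalar-gradient metric described above, normalized cross-correlation between gradient magnitudes, can be sketched in 2D as follows; in the registration this score would be evaluated between the simulated projection and the target mammogram, and the function names and preprocessing here are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def gradient_ncc(projected, mammogram):
    """Normalized cross-correlation between the gradient magnitudes of
    a simulated projection and a target 2D image. A registration would
    search the biomechanical-model parameters that maximize this score.
    Returns 0.0 for featureless (zero-gradient) inputs."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))   # axis-0, axis-1 derivatives
        return np.hypot(gx, gy)
    a, b = grad_mag(projected), grad_mag(mammogram)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0
```

Because the score depends only on edge structure, not absolute intensities, it is robust to the intensity differences between the projected MRI and the mammogram, which is the motivation the abstract gives for gradient-based metrics.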


Subject(s)
Breast/diagnostic imaging; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging; Mammography; Multimodal Imaging; Algorithms; Anatomic Landmarks; Artifacts; Breast Neoplasms/diagnostic imaging; Contrast Media; Female; Humans; Imaging, Three-Dimensional
20.
Neuroimage Clin ; 22: 101709, 2019.
Article in English | MEDLINE | ID: mdl-30822719

ABSTRACT

Intensity-based multi-atlas segmentation strategies have proven particularly successful in segmenting brain images of healthy subjects. However, like most state-of-the-art methods, their performance tends to be affected by the presence of MRI-visible lesions, such as those found in multiple sclerosis (MS) patients. Here, we present an approach to minimize the effect of abnormal lesion intensities on multi-atlas segmentation. We propose a new voxel/patch correspondence model for intensity-based multi-atlas label fusion strategies that leads to more accurate similarity measures, which play a key role in the final brain segmentation. We present the theory of this model and integrate it into two well-known fusion strategies: Non-local Spatial STAPLE (NLSS) and Joint Label Fusion (JLF). The experiments performed show that our proposal improves the segmentation performance in the lesion areas. The results indicate a mean Dice Similarity Coefficient (DSC) improvement of 1.96% for NLSS (3.29% inside and 0.79% around the lesion masks) and an improvement of 2.06% for JLF (2.31% inside and 1.42% around lesions). Furthermore, we show that, with the proposed strategy, the well-established preprocessing step of lesion filling can be disregarded, obtaining similar or even more accurate segmentation results.
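The core idea, weighting each atlas's vote by intensity similarity while shielding the correspondence model from abnormal lesion intensities, can be sketched for a single voxel as follows; this is a simplified illustration with hypothetical names, not the exact NLSS or JLF formulation:

```python
import numpy as np

def weighted_label_fusion(target_patch, atlas_patches, atlas_labels,
                          lesion_mask=None, beta=1.0):
    """Intensity-weighted label fusion for one voxel: each atlas votes
    with a weight derived from patch similarity to the target, and
    voxels flagged as lesion are excluded from the similarity term so
    abnormal intensities do not corrupt the correspondence weights."""
    valid = ~lesion_mask if lesion_mask is not None \
        else np.ones_like(target_patch, bool)
    weights = []
    for patch in atlas_patches:
        ssd = np.mean((patch[valid] - target_patch[valid]) ** 2)
        weights.append(np.exp(-beta * ssd))       # similarity -> weight
    weights = np.array(weights)
    weights /= weights.sum()
    labels = np.asarray(atlas_labels)
    # return the candidate label with the largest total weighted vote
    return max(set(labels.tolist()), key=lambda l: weights[labels == l].sum())
```

Excluding the masked voxels from the sum-of-squared-differences term is what lets an atlas with healthy-looking tissue still match a lesioned target patch, which is the mechanism the abstract credits for making lesion filling unnecessary.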


Subject(s)
Brain/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; Atlases as Topic; Brain/pathology; Datasets as Topic; Humans; Magnetic Resonance Imaging/methods