1.
Sci Rep ; 14(1): 21348, 2024 09 12.
Article in English | MEDLINE | ID: mdl-39266642

ABSTRACT

Segmentation of multiple sclerosis (MS) lesions on brain MRI scans is crucial for diagnosis and for disease and treatment monitoring, but it is a time-consuming task. Although several automated algorithms have been proposed, there is still no consensus on the most effective method. Here, we applied a consensus-based framework to improve lesion segmentation on T1-weighted and FLAIR scans. The framework is designed to combine publicly available state-of-the-art deep learning models by running multiple segmentation tasks before merging the outputs of each algorithm. To assess the effectiveness of the approach, we applied it to MRI datasets from two different centers: a private dataset (131 MS patients) and a public dataset (30 MS patients), each with manually segmented lesion masks available. No further training was performed for any of the included algorithms. Overlap and detection scores were improved, with Dice increasing by 4-8% and precision by 3-4% for the private and public datasets, respectively. High agreement was obtained between estimated and true lesion load (ρ = 0.92 and ρ = 0.97) and count (ρ = 0.83 and ρ = 0.94). Overall, this framework ensures accurate and reliable results, exploiting complementary features and overcoming some of the limitations of individual algorithms.
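The merging step of such a consensus framework can be pictured as voxel-wise voting over the binary masks produced by the individual models. A minimal numpy sketch under the assumption of simple majority voting (the paper's exact merging rule may differ):

```python
import numpy as np

def consensus_mask(masks, min_votes=None):
    """Merge binary lesion masks from several models by voxel-wise voting.

    masks: list of equally shaped binary numpy arrays (one per algorithm).
    min_votes: votes needed to keep a voxel; defaults to a strict majority.
    """
    stack = np.stack(masks).astype(np.uint8)
    if min_votes is None:
        min_votes = len(masks) // 2 + 1  # strict majority
    return (stack.sum(axis=0) >= min_votes).astype(np.uint8)

# Three toy one-dimensional "masks" from three hypothetical models.
a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
c = np.array([1, 1, 1, 0])
print(consensus_mask([a, b, c]))  # majority of 3 -> [1 1 1 0]
```

Raising `min_votes` trades sensitivity for precision: requiring all models to agree keeps only the most confidently detected voxels.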


Subject(s)
Algorithms , Brain , Magnetic Resonance Imaging , Multiple Sclerosis , Humans , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/pathology , Female , Consensus , Male , Image Processing, Computer-Assisted/methods , Adult , Deep Learning , Image Interpretation, Computer-Assisted/methods , Middle Aged
2.
Pattern Recognit ; 138: None, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37781685

ABSTRACT

Supervised machine learning methods have been widely developed for segmentation tasks in recent years. However, the quality of labels has a high impact on the predictive performance of these algorithms. This issue is particularly acute in the medical image domain, where both the cost of annotation and the inter-observer variability are high. Different human experts contribute estimates of the "actual" segmentation labels in a typical label acquisition process, influenced by their personal biases and competency levels. The performance of automatic segmentation algorithms is limited when these noisy labels are used as the expert consensus label. In this work, we use two coupled CNNs to jointly learn, from noisy observations alone, the reliability of individual annotators and the expert consensus label distributions. The separation of the two is achieved by maximally describing the annotator's "unreliable behavior" (we call it "maximally unreliable") while achieving high fidelity with the noisy training data. We first create a toy segmentation dataset using MNIST and investigate the properties of the proposed algorithm. We then use three public medical imaging segmentation datasets to demonstrate our method's efficacy, including both simulated (where necessary) and real-world annotations: 1) ISBI2015 (multiple-sclerosis lesions); 2) BraTS (brain tumors); 3) LIDC-IDRI (lung abnormalities). Finally, we create a real-world multiple sclerosis lesion dataset (QSMSC at UCL: Queen Square Multiple Sclerosis Center at UCL, UK) with manual segmentations from 4 different annotators (3 radiologists with different skill levels and 1 expert to generate the expert consensus label). In all datasets, our method consistently outperforms competing methods and relevant baselines, especially when the number of annotations is small and the amount of disagreement is large.
The studies also reveal that the system is capable of capturing the complicated spatial characteristics of annotators' mistakes.
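The coupled-CNN idea above factors each noisy annotation into an expert consensus label distribution and a per-annotator reliability model. A toy numpy sketch of that factorization, with per-annotator confusion matrices standing in for the learned reliability (all numbers invented; the actual model learns both components jointly with CNNs):

```python
import numpy as np

# Consensus label distribution at one pixel: P(background), P(lesion).
consensus = np.array([0.9, 0.1])

# One confusion matrix per annotator: rows = true label, cols = observed label.
annotators = {
    "expert": np.array([[0.98, 0.02],
                        [0.05, 0.95]]),
    "novice": np.array([[0.90, 0.10],
                        [0.40, 0.60]]),   # tends to miss lesions
}

# Predicted noisy label distribution = consensus marginalized through
# each annotator's confusion matrix.
for name, cm in annotators.items():
    observed = consensus @ cm
    print(name, observed.round(3))
```

Fitting the consensus and the confusion matrices jointly, while pushing the matrices to explain as much of the disagreement as possible, is the "maximally unreliable" intuition described in the abstract.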

3.
Med Image Anal ; 81: 102532, 2022 10.
Article in English | MEDLINE | ID: mdl-35872359

ABSTRACT

The performance of deep learning for cardiac magnetic resonance imaging (MRI) segmentation is oftentimes degraded when using small datasets and sparse annotations for training or adapting a pre-trained model to previously unseen datasets. Here, we developed and evaluated an approach to addressing some of these issues to facilitate broader use of deep learning for short-axis cardiac MRI segmentation. We developed a globally optimal label fusion (GOLF) algorithm that enforced spatial smoothness to generate consensus segmentation from segmentation predictions provided by a deep learning ensemble algorithm. The GOLF consensus was entered into an uncertainty-guided coupled continuous kernel cut (ugCCKC) algorithm that employed normalized cut, image-grid continuous regularization, and "nesting" and circular shape priors of the left ventricular myocardium (LVM) and cavity (LVC). In addition, the uncertainty measurements derived from the segmentation predictions were used to constrain the similarity of GOLF and final segmentation. We optimized ugCCKC through upper bound relaxation, for which we developed an efficient coupled continuous max-flow algorithm implemented in an iterative manner. We showed that GOLF yielded average symmetric surface distance (ASSD) 0.2-0.8 mm lower than an averaging method with higher or similar Dice similarity coefficient (DSC). We also demonstrated that ugCCKC incorporating the shape priors improved DSC by 0.01-0.05 and reduced ASSD by 0.1-0.9 mm. In addition, we integrated GOLF and ugCCKC into a deep learning ensemble algorithm by refining the segmentation of an unannotated dataset and using the refined segmentation to update the trained models. 
With the proposed framework, we demonstrated the capability of using relatively small datasets (5-10 subjects) with sparse (5-25% slices labeled) annotations to train a deep learning algorithm, while achieving DSC of 0.871-0.893 for LVM and 0.933-0.959 for LVC on the LVQuan dataset, and these were 0.844-0.871 for LVM and 0.923-0.931 for LVC on the ACDC dataset. Furthermore, we showed that the proposed approach can be adapted to substantially alleviate the domain shift issue. Moreover, we calculated a number of commonly used LV function measurements using the derived segmentation and observed strong correlations (Pearson r=0.77-1.00, p<0.001) between algorithm and manual LV function analyses. These results suggest that the developed approaches can be used to facilitate broader application of deep learning in research and clinical cardiac MR imaging workflow.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Heart/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Uncertainty
4.
Alzheimers Dement (Amst) ; 14(1): e12324, 2022.
Article in English | MEDLINE | ID: mdl-35634535

ABSTRACT

Research suggests a link between Alzheimer's disease in Down syndrome (DS) and the overproduction of amyloid plaques. Using positron emission tomography (PET), we can assess the in-vivo regional amyloid load using several available ligands. To measure amyloid distributions in specific brain regions, a brain atlas is used. A popular method of creating a brain atlas is to segment a participant's structural magnetic resonance imaging (MRI) scan. Acquiring an MRI is often challenging in intellectually impaired populations because of contraindications or data exclusion due to significant motion artifacts or incomplete sequences related to general discomfort. When an MRI cannot be acquired, it is typically replaced with a standardized brain atlas derived from neurotypical populations (i.e., healthy individuals without DS), which may be inappropriate for use in DS. In this project, we create a series of disease- and diagnosis-specific (cognitively stable (CS-DS), mild cognitive impairment (MCI-DS), and dementia (DEM-DS)) probabilistic group atlases of participants with DS and evaluate their accuracy in quantifying regional amyloid load compared to the individually based MRI segmentations. Further, we compare the diagnosis-specific atlases with a probabilistic atlas constructed from similar-aged cognitively stable neurotypical participants. We hypothesized that regional PET signals would best match the individually based MRI segmentations when using a DS group atlas that aligns with a participant's disorder and disease status (e.g., DS and MCI-DS). Our results vary by brain region but generally show that using a disorder-specific atlas in DS better matches the individually based MRI segmentations than using an atlas constructed from cognitively stable neurotypical participants. We found no additional benefit of using diagnosis-specific atlases matching disease status. All atlases are made publicly available for the research community.
Highlights: Down syndrome (DS) joint-label-fusion atlases provide accurate positron emission tomography (PET) amyloid measurements. A disorder-specific DS atlas is better than a neurotypical atlas for PET quantification. It is not necessary to use a disease-state-specific atlas for quantification in aged DS. Dorsal striatum results vary, possibly reflecting changes in this region with dementia progression.

5.
Comput Methods Programs Biomed ; 208: 106197, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34102562

ABSTRACT

Accurate and automatic segmentation of the hippocampus plays a vital role in the diagnosis and treatment of nervous system diseases. However, due to the anatomical variability of different subjects, the registered atlas images are not always perfectly aligned with the target image, so hippocampus segmentation still faces great challenges. In this paper, we propose a robust discriminative label fusion method under the multi-atlas framework. It is a patch-embedding label fusion method based on a conditional random field (CRF) model that integrates metric learning and graph cuts in a single formulation. Unlike most current label fusion methods with fixed (non-learned) distance metrics, a novel distance metric learning scheme is presented to enhance discriminative observation and embed it into the unary potential function. In particular, Bayesian inference is utilized to extend a classic distance metric learning approach, in which large-margin constraints are used instead of pairwise constraints to obtain a more robust distance metric. Pairwise homogeneity is fully considered in the spatial prior term based on classification labels and voxel intensity. The resulting integrated formulation is globally minimized by the efficient graph cuts algorithm. Further, a sparse patch-based method is utilized to polish the obtained segmentation results in label space. The proposed method is evaluated on the IABA and ADNI datasets for hippocampus segmentation. The Dice scores achieved by our method are 87.2%, 87.8%, 88.2% and 88.9% for the left and right hippocampus on the two datasets, while the best Dice scores obtained by other methods are 86.0%, 86.9%, 86.8% and 88.0% on the IABA and ADNI datasets, respectively. Experiments show that our approach achieves higher accuracy than state-of-the-art methods. We hope the proposed model can be combined with other promising distance measurement algorithms.


Subject(s)
Image Interpretation, Computer-Assisted , Magnetic Resonance Imaging , Algorithms , Bayes Theorem , Hippocampus/diagnostic imaging , Humans
6.
Med Image Anal ; 68: 101906, 2021 02.
Article in English | MEDLINE | ID: mdl-33260117

ABSTRACT

Although spectral-domain OCT (SDOCT) is now in clinical use for glaucoma management, published clinical trials relied on time-domain OCT (TDOCT), which is characterized by a low signal-to-noise ratio, leading to low statistical power. For this reason, such trials require large numbers of patients observed over long intervals and become more costly. We propose a probabilistic ensemble model and a cycle-consistent perceptual loss for improving the statistical power of trials utilizing TDOCT. TDOCT images are converted to synthesized SDOCT images and segmented via Bayesian fusion of an ensemble of GANs. The final retinal nerve fibre layer segmentation is obtained automatically on an averaged synthesized image using label fusion. We benchmark different networks using (i) GAN, (ii) Wasserstein GAN (WGAN), (iii) GAN + perceptual loss and (iv) WGAN + perceptual loss. For training and validation, an independent dataset is used, while testing is performed on the UK Glaucoma Treatment Study (UKGTS), i.e., a TDOCT-based trial. We quantify the statistical power of the measurements obtained with our method, as compared with those derived from the original TDOCT. The results provide new insights into the UKGTS, showing a significantly better separation between treatment arms, while improving the statistical power of TDOCT to a level on par with visual field measurements.


Subject(s)
Glaucoma , Bayes Theorem , Glaucoma/diagnostic imaging , Humans , Signal-To-Noise Ratio , Tomography, Optical Coherence , Visual Fields
7.
Neuroimage Clin ; 27: 102306, 2020.
Article in English | MEDLINE | ID: mdl-32585568

ABSTRACT

Accurate volume measurements of brain structures are important for treatment evaluation and disease follow-up in multiple sclerosis (MS) patients. With the aim of obtaining reproducible measurements and avoiding the intra-/inter-rater variability that manual delineations introduce, several automated brain structure segmentation strategies have been proposed in recent years. However, most of these strategies tend to be affected by the abnormal MS lesion intensities, which corrupt the structure segmentation result. To address this problem, we recently reformulated two state-of-the-art label fusion strategies, improving their segmentation performance in the lesion areas. Here, we integrate these reformulated strategies into a completely automated pipeline that includes pre-processing (inhomogeneity correction and intensity normalization), atlas selection, masked registration and label fusion, and combine them with a state-of-the-art automated lesion segmentation method. We study the effect of automating the lesion mask acquisition on the structure segmentation result, analyzing the output of the proposed pipeline when used in combination with manually and automatically segmented lesion masks. We further analyze the effect of those masks on the segmentation result of the original label fusion strategies when combined with the well-established pre-processing step of lesion filling. The experiments performed show that, when the original methods are used to segment the lesion-filled images, significant structure volume differences are observed in a comparison between manually and automatically segmented lesion masks. The results indicate a mean volume decrease of 1.13 ± 1.93% in the cerebrospinal fluid, and mean volume increases of 0.13 ± 0.14% and 0.05 ± 0.08% in the cerebral white matter and cerebellar gray matter, respectively.
On the other hand, no significant volume differences were found when the proposed automated pipeline was used for segmentation, which demonstrates its robustness against variations in the lesion mask used.


Subject(s)
Brain/pathology , Gray Matter/pathology , Image Processing, Computer-Assisted , Multiple Sclerosis/pathology , Adult , Female , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Middle Aged , White Matter/pathology
8.
J Med Imaging (Bellingham) ; 7(1): 014004, 2020 Jan.
Article in English | MEDLINE | ID: mdl-32118089

ABSTRACT

Purpose: Placental size in early pregnancy has been associated with important clinical outcomes, including fetal growth. However, extraction of placental size from three-dimensional ultrasound (3DUS) requires time-consuming interactive segmentation methods and is prone to user variability. We propose a semiautomated segmentation technique that requires minimal user input to robustly measure placental volume from 3DUS images. Approach: For semiautomated segmentation, a single, central 2D slice was manually annotated to initialize an automated multi-atlas label fusion (MALF) algorithm. The dataset consisted of 47 3DUS volumes obtained at 11 to 14 weeks in singleton pregnancies (28 anterior and 19 posterior). Twenty-six of these subjects were imaged twice within the same session. Dice overlap and surface distance were used to quantify the automated segmentation accuracy compared to expert manual segmentations. The mean placental volume measurements obtained by our method and VOCAL (virtual organ computer-aided analysis), a leading commercial semiautomated method, were compared to the manual reference set. The test-retest reliability was also assessed. Results: The overlap between our automated segmentation and manual (mean Dice: 0.824 ± 0.061, median: 0.831) was within the range reported by other methods requiring extensive manual input. The average surface distance was 1.66 ± 0.96 mm. The correlation coefficient between test-retest volumes was r = 0.88, and the intraclass correlation was ICC(1) = 0.86. Conclusions: MALF is a promising method that can allow accurate and reliable segmentation of the placenta with minimal user interaction. Further refinement of this technique may allow for placental biometry to be incorporated into clinical pregnancy surveillance.
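The Dice overlap and test-retest correlation reported above can be computed directly from the masks and repeated volume measurements. A small numpy sketch with toy masks and invented volume pairs:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.array([0, 1, 1, 1, 0])
manual = np.array([0, 1, 1, 0, 0])
print(dice(auto, manual))  # 2*2 / (3+2) = 0.8

# Test-retest reliability on invented placental volumes (cm^3).
vol_test   = np.array([101.0, 88.0, 120.0, 95.0])
vol_retest = np.array([ 99.0, 90.0, 118.0, 97.0])
print(np.corrcoef(vol_test, vol_retest)[0, 1])  # Pearson r, close to 1 here
```

The intraclass correlation ICC(1) additionally penalizes systematic offsets between sessions, which plain Pearson correlation ignores.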

9.
Neuroinformatics ; 18(2): 319-331, 2020 04.
Article in English | MEDLINE | ID: mdl-31898145

ABSTRACT

Segmentation of medical images using multiple atlases has recently gained considerable attention due to its increased robustness against anatomical variability across subjects. These atlas-based methods typically comprise three steps: atlas selection, image registration, and finally label fusion. Image registration is one of the core steps in this process, and its accuracy directly affects the final labeling performance. However, due to inter-subject anatomical variations, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between an image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify potential errors in the warped atlas labels and correct them. Then, we use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that our proposed methods outperform state-of-the-art segmentation methods.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Neuroimaging/methods , Atlases as Topic , Hippocampus/anatomy & histology , Hippocampus/physiology , Humans , Magnetic Resonance Imaging/methods
10.
Proc IEEE Int Symp Biomed Imaging ; 2020: 363-367, 2020 Apr.
Article in English | MEDLINE | ID: mdl-35261721

ABSTRACT

In this work, we improve the performance of multi-atlas segmentation (MAS) by integrating the recently proposed VoteNet model with the joint label fusion (JLF) approach. Specifically, we first illustrate that using a deep convolutional neural network to predict atlas probabilities can better distinguish correct atlas labels from incorrect ones than relying on image intensity difference as is typical in JLF. Motivated by this finding, we propose VoteNet+, an improved deep network to locally predict the probability of an atlas label to differ from the label of the target image. Furthermore, we show that JLF is more suitable for the VoteNet framework as a label fusion method than plurality voting. Lastly, we use Platt scaling to calibrate the probabilities of our new model. Results on LPBA40 3D MR brain images show that our proposed method can achieve better performance than VoteNet.
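The final calibration step can be illustrated with a minimal stand-in for Platt scaling: fitting a sigmoid P(correct | score) = sigmoid(a·score + b) to raw scores against 0/1 correctness labels. The scores and labels below are invented, and the gradient-descent fit stands in for the regularized solver a real pipeline would use:

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit P(y=1 | s) = sigmoid(a*s + b) by gradient descent on the
    logistic loss -- a minimal stand-in for Platt scaling."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        grad = p - labels                  # derivative of the loss w.r.t. logits
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

# Invented uncalibrated scores with ground-truth 0/1 labels.
scores = np.array([-2.0, -1.0, -0.5, 0.3, 1.1, 2.4])
labels = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
a, b = platt_scale(scores, labels)
calibrated = 1.0 / (1.0 + np.exp(-(a * scores + b)))
print(calibrated.round(2))  # monotone in the raw scores, all within (0, 1)
```

Calibration does not change which atlas label wins a comparison; it makes the predicted probabilities match empirical frequencies, which matters when they are fed into a downstream fusion rule such as JLF.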

11.
Article in English | MEDLINE | ID: mdl-34354762

ABSTRACT

This work proposes a novel framework for brain tumor segmentation prediction in longitudinal multi-modal MRI scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with a tumor cell density feature to obtain tumor segmentation predictions at follow-up timepoints using data from the baseline pre-operative timepoint. The cell density feature is obtained by solving the 3D reaction-diffusion equation for biophysical tumor growth modelling using the Lattice-Boltzmann method. The second method uses JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method; and (ii) another state-of-the-art tumor growth and segmentation method, known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). We quantitatively evaluate both proposed methods using the Dice Similarity Coefficient (DSC) in longitudinal scans of 9 patients from the public BraTS 2015 multi-institutional dataset. The evaluation results for the feature-based fusion method show improved tumor segmentation prediction for the whole tumor (DSC WT = 0.314, p = 0.1502), tumor core (DSC TC = 0.332, p = 0.0002), and enhancing tumor (DSC ET = 0.448, p = 0.0002) regions. The feature-based fusion shows some improvement in the prediction of longitudinal brain tumor tracking, whereas JLF offers a statistically significant improvement in the actual segmentation of WT and ET (DSC WT = 0.85 ± 0.055, DSC ET = 0.837 ± 0.074), and also improves the results of GB. The novelty of this work is two-fold: (a) exploiting tumor cell density as a feature to predict brain tumor segmentation, using a stochastic multi-resolution RF-based method, and (b) improving the performance of another successful tumor segmentation method, GB, by fusing it with the RF-based segmentation labels.

12.
Stat Atlases Comput Models Heart ; 11395: 142-151, 2019.
Article in English | MEDLINE | ID: mdl-31579311

ABSTRACT

Ischemic mitral regurgitation (IMR) is primarily a left ventricular disease in which the mitral valve is dysfunctional due to ventricular remodeling after myocardial infarction. Current automated methods have focused on analyzing the mitral valve and left ventricle independently. While these methods have allowed for valuable insights into mechanisms of IMR, they do not fully integrate pathological features of the left ventricle and mitral valve. Thus, there is an unmet need to develop an automated segmentation algorithm for the left ventricular mitral valve complex, in order to allow for a more comprehensive study of this disease. The objective of this study is to generate and evaluate segmentations of the left ventricular mitral valve complex in pre-operative 3D transesophageal echocardiography using multi-atlas label fusion. These patient-specific segmentations could enable future statistical shape analysis for clinical outcome prediction and surgical risk stratification. In this study, we demonstrate a preliminary segmentation pipeline that achieves an average Dice coefficient of 0.78 ± 0.06.

13.
Artif Intell Med ; 96: 12-24, 2019 05.
Article in English | MEDLINE | ID: mdl-31164205

ABSTRACT

Label fusion is one of the key steps in multi-atlas based segmentation of structural magnetic resonance (MR) images. Although a number of label fusion methods have been developed in the literature, most existing methods fail to address two important problems, i.e., (1) compared with boundary voxels, inner voxels usually have higher probability (or reliability) to be correctly segmented, and (2) voxels with high segmentation reliability (after initial segmentation) can help refine the segmentation of voxels with low segmentation reliability in the target image. To this end, we propose a general reliability-based robust label fusion framework for multi-atlas based MR image segmentation. Specifically, in the first step, we perform initial segmentation for MR images using a conventional multi-atlas label fusion method. In the second step, for each voxel in the target image, we define two kinds of reliability, namely the label reliability and the spatial reliability, estimated from the soft label and the spatial information of the initial segmentation, respectively. Finally, we employ voxels with high label-spatial reliability to help refine the label fusion process of those with low reliability in the target image. We incorporate our proposed framework into four well-known label fusion methods, including locally-weighted voting (LWV), the non-local mean patch-based method (PBM), joint label fusion (JLF) and the sparse patch-based method (SPBM), and obtain four novel label-spatial reliability-based label fusion approaches (called ls-LWV, ls-PBM, ls-JLF, and ls-SPBM). We validate the proposed methods in segmenting ROIs of brain MR images from the NIREP, LONI-LPBA40 and ADNI datasets. The experimental results demonstrate that our label-spatial reliability-based label fusion methods outperform the state-of-the-art methods in multi-atlas image segmentation.
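Locally-weighted voting (LWV), the first of the four base methods named above, weights each atlas's proposed label by the intensity similarity between its patch and the target patch. A single-voxel numpy sketch with toy numbers (Gaussian similarity weighting assumed; real implementations run this over every voxel and tune the decay parameter):

```python
import numpy as np

def locally_weighted_vote(target_patch, atlas_patches, atlas_labels, beta=1.0):
    """Locally-weighted voting at one voxel: each registered atlas proposes a
    label, weighted by exp(-beta * squared patch distance) to the target."""
    dists = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
    weights = np.exp(-beta * dists)        # more similar atlas -> larger weight
    votes = {}
    for label, w in zip(atlas_labels, weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

target  = np.array([0.9, 1.0, 0.8])
patches = [np.array([0.9, 1.0, 0.8]),    # nearly identical atlas patch
           np.array([0.1, 0.2, 0.0])]    # poorly matching atlas patch
print(locally_weighted_vote(target, patches, atlas_labels=[1, 0]))  # -> 1
```

The reliability framework in the abstract then reweights such votes a second time, letting confidently labeled voxels influence their less reliable neighbors.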


Subject(s)
Brain/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Humans , Reproducibility of Results
14.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 36(3): 453-459, 2019 Jun 25.
Article in Chinese | MEDLINE | ID: mdl-31232549

ABSTRACT

A multi-label based level set model for multiple sclerosis lesion segmentation is proposed based on the shape, position and other information of lesions from magnetic resonance images. First, a fuzzy c-means model is applied to extract the initial lesion region. Second, an intensity prior term and a label fusion term are constructed using intensity information of the initial lesion region, and both terms are integrated into a region-based level set model. The final lesion segmentation is achieved by evolving the level set contour. The experimental results show that the proposed method can accurately and robustly extract brain lesions from magnetic resonance images. The proposed method helps to significantly reduce the workload of radiologists, which is useful in clinical applications.


Subject(s)
Magnetic Resonance Imaging , Multiple Sclerosis/diagnostic imaging , Algorithms , Humans
15.
J Med Syst ; 43(8): 241, 2019 Jun 21.
Article in English | MEDLINE | ID: mdl-31227923

ABSTRACT

The multi-atlas method is one of the most efficient and common automatic labeling methods; it uses the prior information provided by expert-labeled images to guide the labeling of the target. However, most multi-atlas-based methods depend on a registration step that may not provide correct information during label propagation. To address this issue, we designed a new automatic labeling method using a hashing-retrieval-based atlas forest. The proposed method propagates labels without registration to reduce errors, and constructs a target-oriented learning model to integrate information among the atlases. The method introduces a coarse classification strategy to preprocess the dataset, which retains the integrity of the dataset and reduces computing time. Furthermore, the method treats each voxel in the atlas as a sample and encodes these samples with hashing for fast sample retrieval. In the labeling stage, the method selects suitable samples through hashing learning and trains atlas forests by integrating information from the dataset. The trained model is then used to predict the labels of the target. Experimental results on two datasets show that the proposed method is promising for the automatic labeling of MR brain images.
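The hashing step can be pictured with random-projection hashing: each voxel's feature vector is reduced to a short binary code, and retrieval compares codes by Hamming distance instead of scanning all samples. A toy numpy sketch (the paper's actual hashing scheme may differ; all dimensions here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(features, planes):
    """Random-projection hashing: each row of features -> a short binary code,
    one bit per hyperplane (which side of the plane the sample falls on)."""
    return (features @ planes.T > 0).astype(np.uint8)

# Toy atlas voxel features (e.g., flattened intensity patches) and one query.
atlas = rng.normal(size=(1000, 16))
query = atlas[42].copy()                 # query identical to a stored sample

planes = rng.normal(size=(8, 16))        # 8 random hyperplanes -> 8-bit codes
codes = hash_codes(atlas, planes)
q = hash_codes(query[None, :], planes)[0]

# Retrieve candidates whose code agrees in all but at most one bit.
ham = (codes != q).sum(axis=1)
candidates = np.flatnonzero(ham <= 1)
print(42 in candidates)  # True: an identical sample always hashes identically
```

Only the retrieved candidates then need to be fed into the atlas-forest training, which is what makes the per-voxel sample selection tractable.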


Subject(s)
Image Processing, Computer-Assisted , Neuroimaging , Pattern Recognition, Automated/methods , Algorithms , Humans , Machine Learning
16.
J Med Syst ; 43(7): 225, 2019 Jun 12.
Article in English | MEDLINE | ID: mdl-31190229

ABSTRACT

Melanoma is a life-threatening disease when it grows beyond the corium layer of the skin. Mortality rates for melanoma are the highest among skin cancers. The cost of treating advanced melanoma is very high and the survival rate is low. Numerous computerized dermoscopy systems have been developed based on combinations of shape, texture and color features to facilitate early diagnosis of melanoma. The availability and cost of dermoscopic imaging systems remain an issue. To mitigate this issue, this paper presents an integrated segmentation and three-dimensional (3D) feature extraction approach for the accurate diagnosis of melanoma. A multi-atlas method is applied for image segmentation. The patch-based label fusion model is expressed in a Bayesian framework to improve segmentation accuracy. A depth map is obtained from the two-dimensional (2D) dermoscopic image to reconstruct the 3D skin lesion, represented as structure tensors. 3D shape features, including relative depth features, are obtained. Streaks are significant morphological features of melanoma in the radial growth phase. The proposed method yields higher segmentation accuracy, sensitivity and specificity, and a lower cost function, than existing segmentation techniques and classifiers.


Subject(s)
Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Melanoma/diagnosis , Bayes Theorem , Color , Humans , Melanoma/diagnostic imaging , Pattern Recognition, Automated/methods , Sensitivity and Specificity
17.
Med Image Anal ; 55: 88-102, 2019 07.
Article in English | MEDLINE | ID: mdl-31035060

ABSTRACT

Accurate and robust segmentation of abdominal organs on CT is essential for many clinical applications such as computer-aided diagnosis and computer-aided surgery. However, this task is challenging due to the weak boundaries of organs, the complexity of the background, and the variable sizes of different organs. To address these challenges, we introduce a novel framework for multi-organ segmentation of abdominal regions using organ-attention networks with reverse connections (OAN-RCs), which are applied to 2D views of the 3D CT volume and output estimates that are combined by statistical fusion exploiting structural similarity. More specifically, OAN is a two-stage deep convolutional network in which deep network features from the first stage are combined with the original image in a second stage to reduce the complex background and enhance the discriminative information for the target organs. Intuitively, OAN reduces the effect of the complex background by focusing attention so that each organ only needs to be discriminated from its local background. RCs are added to the first stage to give the lower layers more semantic information, thereby enabling them to adapt to the sizes of different organs. Our networks are trained on 2D views (slices), enabling us to use holistic information and allowing efficient computation (compared to using 3D patches). To compensate for the limited cross-sectional information of the original 3D volumetric CT, e.g., the connectivity between neighboring slices, multi-sectional images are reconstructed from the three different 2D view directions. We then combine the segmentation results from the different views using statistical fusion, with a novel term relating the structural similarity of the 2D views to the original 3D structure. To train the network and evaluate results, 13 structures were manually annotated by four human raters and confirmed by a senior expert on 236 normal cases.
We tested our algorithm by 4-fold cross-validation and computed Dice-Sørensen similarity coefficients (DSC) and surface distances for evaluating our estimates of the 13 structures. Our experiments show that the proposed approach gives strong results and outperforms 2D- and 3D-patch based state-of-the-art methods in terms of DSC and mean surface distances.
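The view-fusion step can be pictured as weighted voxel-wise averaging of the per-view probability maps; the paper's statistical fusion additionally weights the views by structural similarity to the 3D volume, which the uniform weights below stand in for. A simplified numpy sketch with invented probabilities:

```python
import numpy as np

def fuse_views(prob_maps, weights=None):
    """Fuse per-view probability maps by weighted voxel-wise averaging,
    then threshold the fused map at 0.5 to get a label mask."""
    prob_maps = np.stack(prob_maps)
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    fused = np.tensordot(weights, prob_maps, axes=1)
    return (fused > 0.5).astype(np.uint8), fused

# Toy per-voxel organ probabilities from three hypothetical view networks.
axial    = np.array([0.9, 0.6, 0.2])
coronal  = np.array([0.8, 0.4, 0.1])
sagittal = np.array([0.7, 0.7, 0.4])
mask, fused = fuse_views([axial, coronal, sagittal])
print(mask)  # fused probabilities ~[0.80, 0.57, 0.23] -> mask [1 1 0]
```

Replacing the uniform weights with per-view structural-similarity scores lets a view that matches the 3D anatomy better dominate the vote, which is the intuition behind the novel fusion term described above.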


Subject(s)
Abdomen/diagnostic imaging , Algorithms , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans , Models, Statistical
18.
Neuroimage Clin ; 22: 101709, 2019.
Article in English | MEDLINE | ID: mdl-30822719

ABSTRACT

Intensity-based multi-atlas segmentation strategies have been shown to be particularly successful in segmenting brain images of healthy subjects. However, like most state-of-the-art methods, their performance tends to be affected by the presence of MRI-visible lesions, such as those found in multiple sclerosis (MS) patients. Here, we present an approach to minimize the effect of abnormal lesion intensities on multi-atlas segmentation. We propose a new voxel/patch correspondence model for intensity-based multi-atlas label fusion strategies that leads to more accurate similarity measures, which play a key role in the final brain segmentation. We present the theory behind this model and integrate it into two well-known fusion strategies: Non-local Spatial STAPLE (NLSS) and Joint Label Fusion (JLF). Our experiments show that the proposal improves segmentation performance in the lesion areas. The results indicate a mean Dice Similarity Coefficient (DSC) improvement of 1.96% for NLSS (3.29% inside and 0.79% around the lesion masks) and of 2.06% for JLF (2.31% inside and 1.42% around lesions). Furthermore, we show that, with the proposed strategy, the well-established preprocessing step of lesion filling can be disregarded, obtaining similar or even more accurate segmentation results.
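The Dice Similarity Coefficient reported above is the standard overlap measure for comparing a predicted mask against a reference: DSC = 2|A ∩ B| / (|A| + |B|). A minimal implementation, with a toy example (the arrays are illustrative, not data from the study):

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt   = np.array([[1, 1, 0],
                 [0, 0, 0]])
print(dice(pred, gt))  # 2*2 / (3+2) = 0.8
```

A reported "DSC improvement of 1.96%" thus corresponds to an absolute increase of roughly 0.02 in this 0-to-1 overlap score.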


Subject(s)
Brain/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Multiple Sclerosis/diagnostic imaging , Multiple Sclerosis/pathology , Atlases as Topic , Brain/pathology , Datasets as Topic , Humans , Magnetic Resonance Imaging/methods
19.
Neuroimage ; 194: 105-119, 2019 07 01.
Article in English | MEDLINE | ID: mdl-30910724

ABSTRACT

Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, providing a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels), but their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) only a limited number of manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks are used in the SLANT method, each of which learns contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min.
The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
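SLANT's core idea of assigning each network a fixed, overlapping spatial tile of the registered volume can be sketched as follows. The grid size, overlap, and volume shape below are illustrative placeholders, not the values used in the paper:

```python
import itertools

def tile_coords(shape, tiles=(2, 2, 2), overlap=4):
    """Enumerate overlapping subvolume bounds covering a 3D volume,
    mirroring the idea of one fixed spatial tile per network.
    `tiles` and `overlap` are illustrative, not the paper's values."""
    bounds = []
    for idx in itertools.product(*(range(t) for t in tiles)):
        lo, hi = [], []
        for d, i in enumerate(idx):
            step = shape[d] // tiles[d]
            # Extend each tile by `overlap` voxels on both sides,
            # clipped to the volume boundary.
            lo.append(max(0, i * step - overlap))
            hi.append(min(shape[d], (i + 1) * step + overlap))
        bounds.append((tuple(lo), tuple(hi)))
    return bounds

coords = tile_coords((96, 96, 96))
print(len(coords))   # 8 tiles for a 2x2x2 grid
print(coords[0])     # first tile: ((0, 0, 0), (52, 52, 52))
```

Each tile would be segmented by its own independent FCN, and the overlapping predictions fused (e.g., by label averaging) to produce the final whole-brain labeling.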


Subject(s)
Brain/anatomy & histology , Deep Learning , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Atlases as Topic , Humans , Magnetic Resonance Imaging/methods , Neuroimaging/methods
20.
Curr Med Imaging Rev ; 15(5): 443-452, 2019.
Article in English | MEDLINE | ID: mdl-32008551

ABSTRACT

BACKGROUND: This review traces the development of algorithms for brain tissue and structure segmentation in MRI images. DISCUSSION: Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at Medical Image Computing and Computer-Assisted Intervention (MICCAI), this review analyses the development of the algorithms and discusses the shift from multi-atlas label fusion to deep learning. The intrinsic characteristics of the winning algorithms of the Grand Challenges from 2012 to 2018 are analyzed and their results carefully compared. CONCLUSION: Although deep learning has achieved higher rankings in the challenges, it has not yet met expectations in terms of accuracy. More effective and specialized work should be done in the future.


Subject(s)
Brain/diagnostic imaging , Deep Learning , Magnetic Resonance Imaging , Neuroimaging/methods , Algorithms , Brain Mapping/methods , Humans