1.
Front Cardiovasc Med ; 10: 1167500, 2023.
Article in English | MEDLINE | ID: mdl-37904806

ABSTRACT

Introduction: As the life expectancy of children with congenital heart disease (CHD) rapidly increases and the adult population with CHD grows, there is an unmet need to improve clinical workflow and efficiency of analysis. Cardiovascular magnetic resonance (CMR) is a noninvasive imaging modality for monitoring patients with CHD. A CMR exam is based on multiple breath-hold 2-dimensional (2D) cine acquisitions that must be precisely prescribed and are expert- and institution-dependent. Moreover, 2D cine images have relatively thick slices, which does not allow isotropic delineation of ventricular structures. The present work therefore aims to establish an isotropic 3D cine acquisition and an automatic segmentation method to make the CMR workflow more straightforward and efficient. Methods: Ninety-nine patients with many types of CHD were imaged using a non-angulated 3D cine CMR sequence covering the whole heart and great vessels. Automatic supervised and semi-supervised deep-learning-based methods were developed for whole-heart segmentation of 3D cine images, separately delineating the cardiac structures: both atria, both ventricles, the aorta, the pulmonary arteries, and the superior and inferior venae cavae. The segmentation results of the two methods were compared with manual segmentation in terms of Dice score, a measure of overlap agreement, and atrial and ventricular volume measurements. Results: The semi-supervised method showed better overlap agreement with the manual segmentation than the supervised method for all 8 structures (Dice score 83.23 ± 16.76% vs. 77.98 ± 19.64%; P ≤ 0.001). The mean difference in atrial and ventricular volume measurements between manual segmentation and the semi-supervised method was also lower (bias ≤ 5.2 ml) than for the supervised method (bias ≤ 10.1 ml).
Discussion: The proposed semi-supervised method is capable of cardiac segmentation and chamber volume quantification in a CHD population with wide anatomical variability. It accurately delineates the heart chambers and great vessels and can be used to calculate ventricular and atrial volumes throughout the cardiac cycle. Such a segmentation method can reduce inter- and intra-observer variability and make CMR exams more standardized and efficient.
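The two evaluation measures reported above are standard and easy to reproduce. As a minimal sketch (the function names and the empty-mask convention are illustrative, not from the paper), the Dice score and the signed volume bias in millilitres can be computed from binary masks as follows:

```python
import numpy as np

def dice_score(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def volume_bias_ml(auto_mask: np.ndarray, manual_mask: np.ndarray,
                   voxel_volume_mm3: float) -> float:
    """Signed volume difference (automatic - manual) in millilitres."""
    diff_voxels = int(auto_mask.sum()) - int(manual_mask.sum())
    return diff_voxels * voxel_volume_mm3 / 1000.0  # 1 ml = 1000 mm^3
```

A bias of ≤ 5.2 ml, as reported for the semi-supervised method, corresponds to a small systematic over- or under-segmentation accumulated over a chamber's voxels.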

2.
Med Image Anal ; 80: 102469, 2022 08.
Article in English | MEDLINE | ID: mdl-35640385

ABSTRACT

Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.
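The core idea of evolving a segmentation over multiple steps from a single click, with an automatically determined stopping point and a retained history the user can scroll through, can be sketched with a toy update rule. The learned recurrent network is replaced here by a hand-written region-growing step, so this illustrates only the control flow, not the paper's model (note also that `np.roll` wraps around the volume edge, which is acceptable for this toy but not for real data):

```python
import numpy as np

def grow_step(seg: np.ndarray, image: np.ndarray, threshold: float) -> np.ndarray:
    """One evolution step: add face-neighbours of the current mask whose
    intensity exceeds `threshold` (a stand-in for the learned RNN update)."""
    grown = seg.copy()
    for axis in range(seg.ndim):           # 2-connected dilation per axis
        for shift in (1, -1):
            grown |= np.roll(seg, shift, axis=axis)
    return grown & (image > threshold)

def iterative_segmentation(image, seed, threshold=0.5, max_steps=50):
    """Evolve a segmentation from a single seed click until it stops changing,
    keeping every intermediate mask so the user can pick an earlier or later
    stage of the sequence as the final result."""
    seg = np.zeros(image.shape, dtype=bool)
    seg[seed] = image[seed] > threshold
    history = [seg]
    for _ in range(max_steps):
        nxt = grow_step(seg, image, threshold)
        if np.array_equal(nxt, seg):       # automatically determined stop
            break
        history.append(nxt)
        seg = nxt
    return history
```

The paper's contribution is precisely that the update step and the stopping decision are learned from (image, partial segmentation, next step) training triples rather than hand-coded as above.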


Subject(s)
Heart Defects, Congenital; Neural Networks, Computer; Heart/diagnostic imaging; Heart Defects, Congenital/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Thorax
3.
BMC Med Imaging ; 21(1): 69, 2021 04 13.
Article in English | MEDLINE | ID: mdl-33849483

ABSTRACT

BACKGROUND: In oncology, correct determination of nodal metastatic disease is essential for patient management, as treatment and prognosis are closely linked to disease stage. The aim of this study was to develop a tool for automatic 3D detection and segmentation of lymph nodes (LNs) in computed tomography (CT) scans of the thorax using a fully convolutional neural network based on 3D foveal patches. METHODS: The training dataset was collected from the Computed Tomography Lymph Nodes Collection of the Cancer Imaging Archive, containing 89 contrast-enhanced CT scans of the thorax. A total of 4275 LNs were segmented semi-automatically by a radiologist, assessing the entire 3D volume of each LN. Using these data, a fully convolutional neural network based on 3D foveal patches was trained with fourfold cross-validation. Testing was performed on an unseen dataset containing 15 contrast-enhanced CT scans of patients referred upon suspicion or for staging of bronchial carcinoma. RESULTS: The algorithm achieved good overall performance, with a detection rate of 76.9% for enlarged LNs during fourfold cross-validation on the training dataset (10.3 false positives per volume) and of 69.9% on the unseen testing dataset. In the training dataset, a better detection rate was observed for enlarged LNs than for smaller LNs: detection rates for LNs with a short-axis diameter (SAD) ≥ 20 mm and SAD 5-10 mm were 91.6% and 62.2%, respectively (p < 0.001). The best detection rates were obtained for LNs located in level 4R (83.6%) and level 7 (80.4%). CONCLUSIONS: The proposed 3D deep learning approach achieves good overall performance in automatic detection and segmentation of thoracic LNs and shows reasonable generalizability, with the potential to facilitate detection during routine clinical work and to enable radiomics research without observer bias.
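Detection rate and false positives per volume both follow from matching predicted detections to reference nodes. The abstract does not state the paper's matching criterion, so the sketch below uses a hypothetical centroid-distance rule with greedy one-to-one assignment; treat the threshold and the greedy order as assumptions:

```python
import numpy as np

def match_detections(pred_centroids, ref_centroids, max_dist):
    """Greedily match predicted to reference node centroids.
    A reference node counts as detected if an unmatched prediction lies
    within `max_dist` (mm); leftover predictions are false positives."""
    pred = [np.asarray(p, dtype=float) for p in pred_centroids]
    used = set()
    detected = 0
    for r in ref_centroids:
        r = np.asarray(r, dtype=float)
        best, best_d = None, max_dist
        for i, p in enumerate(pred):
            if i in used:
                continue
            d = np.linalg.norm(p - r)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            detected += 1
    false_positives = len(pred) - len(used)
    return detected, false_positives
```

Dividing `detected` by the number of reference nodes gives the detection rate, and averaging `false_positives` over scans gives the false positives per volume reported above.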


Subject(s)
Carcinoma, Bronchogenic/diagnostic imaging; Deep Learning; Lung Neoplasms/diagnostic imaging; Lymph Nodes/diagnostic imaging; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Adult; Aged; Axilla; Contrast Media/administration & dosage; Datasets as Topic; Female; Humans; Lymphatic Metastasis/diagnostic imaging; Male; Mediastinum; Middle Aged; Thorax
4.
Neuroimage Clin ; 17: 169-178, 2018.
Article in English | MEDLINE | ID: mdl-29071211

ABSTRACT

Myelin imaging is a form of quantitative magnetic resonance imaging (MRI) that measures myelin content and can potentially allow demyelinating diseases such as multiple sclerosis (MS) to be detected earlier. Although focal lesions are the most visible signs of MS pathology on conventional MRI, it has been shown that even tissues that appear normal may exhibit decreased myelin content as revealed by myelin-specific images (i.e., myelin maps). Current methods for analyzing myelin maps typically use global or regional mean myelin measurements to detect abnormalities, but ignore finer spatial patterns that may be characteristic of MS. In this paper, we present a machine learning method to automatically learn, from multimodal MR images, latent spatial features that can potentially improve the detection of MS pathology at an early stage. More specifically, 3D image patches are extracted from myelin maps and the corresponding T1-weighted (T1w) MRIs, and are used to learn a latent joint myelin-T1w feature representation via unsupervised deep learning. Using a data set of images from MS patients and healthy controls, a common set of patches is selected via a voxel-wise t-test performed between the two groups. In each MS image, any patches overlapping with focal lesions are excluded, and a feature imputation method is used to fill in the missing values. A feature selection process (LASSO) is then used to construct a sparse representation. The resulting normal-appearing features are used to train a random forest classifier.
Using the myelin and T1w images of 55 relapse-remitting MS patients and 44 healthy controls in an 11-fold cross-validation experiment, the proposed method achieved an average classification accuracy of 87.9% (SD = 8.4%), which is higher and more consistent across folds than those attained by regional mean myelin (73.7%, SD = 13.7%) and T1w measurements (66.7%, SD = 10.6%), or deep-learned features in either the myelin (83.8%, SD = 11.0%) or T1w (70.1%, SD = 13.6%) images alone, suggesting that the proposed method has strong potential for identifying image features that are more sensitive and specific to MS pathology in normal-appearing brain tissues.
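The group-wise t-test used to select a common set of patch locations can be sketched directly. This is a generic Welch two-sample t statistic ranked by magnitude, not the paper's exact implementation; the function name and the `n_keep` parameter are illustrative:

```python
import numpy as np

def select_discriminative_locations(features_ms, features_hc, n_keep):
    """Rank candidate patch locations by a Welch two-sample t statistic
    between the MS and healthy-control groups, keeping the n_keep strongest.
    features_*: arrays of shape (n_subjects, n_locations)."""
    m1, m2 = features_ms.mean(axis=0), features_hc.mean(axis=0)
    v1, v2 = features_ms.var(axis=0, ddof=1), features_hc.var(axis=0, ddof=1)
    n1, n2 = len(features_ms), len(features_hc)
    t = (m1 - m2) / np.sqrt(v1 / n1 + v2 / n2 + 1e-12)  # Welch t statistic
    order = np.argsort(-np.abs(t))   # largest |t| first
    return order[:n_keep]
```

In the paper's pipeline, the retained locations then feed the imputation, LASSO selection, and random forest stages described above.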


Subject(s)
Brain Mapping; Brain/diagnostic imaging; Magnetic Resonance Imaging; Multiple Sclerosis/diagnostic imaging; Myelin Sheath/pathology; Adult; Case-Control Studies; Female; Humans; Image Processing, Computer-Assisted; Machine Learning; Male; Middle Aged
5.
Shape Med Imaging (2018) ; 11167: 291-299, 2018 Sep.
Article in English | MEDLINE | ID: mdl-31093609

ABSTRACT

Organ-at-risk (OAR) segmentation is a key step in radiotherapy treatment planning. Model-based segmentation (MBS) has been successfully used for fully automatic segmentation of anatomical structures and has proven robust to noise due to its incorporated shape prior knowledge. In this work, we investigate the advantages of combining neural networks with the prior anatomical shape knowledge of model-based segmentation of organs-at-risk for brain radiotherapy (RT) on magnetic resonance imaging (MRI). We train our boundary detectors using two different approaches: classic strong gradients, as described in [4], and a locally adaptive regression task in which a convolutional neural network (CNN) was trained for each triangle to estimate the distances between the mesh triangles and the organ boundary, with the per-triangle networks then combined into a single network, as described in [1]. We evaluate both methods using 5-fold cross-validation on both T1w and T2w brain MRI data from sixteen primary and metastatic brain cancer patients (some post-surgical). Using CNN-based boundary detectors improved the results for all structures in both T1w and T2w data. The improvements were statistically significant (p < 0.05) for all segmented structures in the T1w images and only for the auditory system in the T2w images.

6.
Article in English | MEDLINE | ID: mdl-31172133

ABSTRACT

We propose a new iterative segmentation model which can be accurately learned from a small dataset. A common approach is to train a model to directly segment an image, requiring a large collection of manually annotated images to capture the anatomical variability in a cohort. In contrast, we develop a segmentation model that recursively evolves a segmentation in several steps, and implement it as a recurrent neural network. We learn model parameters by optimizing the intermediate steps of the evolution in addition to the final segmentation. To this end, we train our segmentation propagation model by presenting incomplete and/or inaccurate input segmentations paired with a recommended next step. Our work aims to alleviate challenges in segmenting heart structures from cardiac MRI for patients with congenital heart disease (CHD), which encompasses a range of morphological deformations and topological changes. We demonstrate the advantages of this approach on a dataset of 20 images from CHD patients, learning a model that accurately segments individual heart chambers and great vessels. Compared to direct segmentation, the iterative method yields more accurate segmentation for patients with the most severe CHD malformations.

7.
Neuroimage ; 152: 312-329, 2017 05 15.
Article in English | MEDLINE | ID: mdl-28286318

ABSTRACT

An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. Several semi- or fully-automated segmentation methods exist for cervical cord cross-sectional area measurement, with excellent performance close or equal to that of manual segmentation. However, grey matter segmentation remains challenging due to its small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. A grey matter spinal cord segmentation challenge was therefore organised to test the capabilities of various methods on the same multi-centre, multi-vendor dataset, acquired with distinct 3D gradient-echo sequences. The challenge aimed to characterize the state of the art in the field and to identify new opportunities for future improvement. Six spinal cord grey matter segmentation methods, developed independently by research groups across the world, were compared against manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance on certain quality-of-segmentation metrics. The data have been made publicly available, and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication.


Subject(s)
Brain Mapping/methods; Cervical Cord/anatomy & histology; Gray Matter/anatomy & histology; Image Processing, Computer-Assisted/methods; Adult; Algorithms; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Reproducibility of Results; White Matter/anatomy & histology
8.
IEEE Trans Med Imaging ; 35(5): 1229-1239, 2016 05.
Article in English | MEDLINE | ID: mdl-26886978

ABSTRACT

We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network consisting of two interconnected pathways: a convolutional pathway, which learns increasingly abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. Joint training of the feature extraction and prediction pathways allows features at different scales to be learned automatically, optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We evaluated our method on two publicly available data sets (the MICCAI 2008 and ISBI 2015 challenges); the results show that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
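Why shortcut connections help small lesions survive to the output can be shown with a deliberately tiny 1-D toy, in which pooling plays the role of the convolutional pathway's resolution loss and upsampling the role of the deconvolutional pathway. This is an illustration of the skip-connection principle only, not the paper's architecture:

```python
import numpy as np

def down(x):
    """2x average pooling: the convolutional pathway's resolution loss."""
    return x.reshape(-1, 2).mean(axis=1)

def up(x):
    """Nearest-neighbour upsampling: the deconvolutional pathway."""
    return np.repeat(x, 2)

def encode_decode(x, use_shortcut=True):
    """Toy 1-D encoder-decoder. The deep path only sees a blurred signal;
    the shortcut re-injects the full-resolution features, so fine detail
    (a small lesion) is preserved in the output."""
    coarse = down(x)        # low-resolution, high-level pathway
    restored = up(coarse)   # back to input resolution
    return restored + x if use_shortcut else restored
```

Without the shortcut, a one-voxel spike is smeared across its pooling window; with it, the spike remains sharply localized, which mirrors why the full model segments lesions across a wide range of sizes.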


Subject(s)
Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Multiple Sclerosis/diagnostic imaging; Neural Networks, Computer; Algorithms; Brain/diagnostic imaging; Brain/pathology; Humans; Machine Learning; Magnetic Resonance Imaging; Multiple Sclerosis/pathology
9.
Neural Comput ; 27(1): 211-27, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25380341

ABSTRACT

Deep learning has traditionally been computationally expensive, and advances in training methods have been a prerequisite for improving its efficiency and thereby expanding its application to a variety of image classification problems. In this letter, we address the problem of efficiently training convolutional deep belief networks by learning the weights in the frequency domain, which eliminates the time-consuming calculation of convolutions. An essential consideration in the design of the algorithm is minimizing the number of transformations to and from frequency space. We evaluated the running-time improvements using two standard benchmark data sets, showing a speed-up of up to 8 times on 2D images and up to 200 times on 3D volumes. Our training algorithm makes training convolutional deep belief networks on 3D medical images with a resolution of up to 128×128×128 voxels practical, opening new directions for using deep learning in medical image analysis.
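The identity that makes frequency-domain training possible is that pointwise multiplication of spectra equals circular convolution, and the speed-up comes from transforming each weight kernel once and reusing the spectrum. The sketch below demonstrates only that equivalence with NumPy FFTs, not the paper's training algorithm itself:

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution computed in the frequency domain.
    The kernel is transformed once (zero-padded to the image size),
    minimising trips to and from frequency space."""
    h, w = image.shape[-2:]
    K = np.fft.rfft2(kernel, s=(h, w))   # one forward transform for the kernel
    return np.fft.irfft2(np.fft.rfft2(image) * K, s=(h, w))

def circular_conv2d_direct(image, kernel):
    """Reference direct circular convolution, O(N^2 K^2), for checking."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += kernel[a, b] * image[(i - a) % h, (j - b) % w]
            out[i, j] = s
    return out
```

For a 3D volume the same idea applies with `np.fft.rfftn`/`irfftn`, and the cost of the spatial loop nest grows much faster than the FFT cost, which is consistent with the larger speed-ups reported on 3D volumes.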


Subject(s)
Image Interpretation, Computer-Assisted; Learning/physiology; Neural Networks, Computer; Algorithms; Animals; Computer Graphics; Humans
10.
Article in English | MEDLINE | ID: mdl-25485412

ABSTRACT

Changes in brain morphology and white matter lesions are two hallmarks of multiple sclerosis (MS) pathology, but their variability beyond volumetrics is poorly characterized. To further our understanding of complex MS pathology, we aim to build a statistical model of brain images that can automatically discover spatial patterns of variability in brain morphology and lesion distribution. We propose building such a model using a deep belief network (DBN), a layered network whose parameters can be learned from training images. In contrast to other manifold learning algorithms, the DBN approach does not require a prebuilt proximity graph, which is particularly advantageous for modeling lesions, because their sparse and random nature makes defining a suitable distance measure between lesion images challenging. Our model consists of a morphology DBN, a lesion DBN, and a joint DBN that models concurring morphological and lesion patterns. Our results show that this model can automatically discover the classic patterns of MS pathology, as well as more subtle ones, and that the parameters computed have strong relationships to MS clinical scores.


Subject(s)
Artificial Intelligence; Brain/pathology; Diffusion Tensor Imaging/methods; Models, Statistical; Multiple Sclerosis/pathology; Nerve Fibers, Myelinated/pathology; Pattern Recognition, Automated/methods; Computer Simulation; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Neurological; Reproducibility of Results; Sensitivity and Specificity
11.
Med Image Comput Comput Assist Interv ; 16(Pt 2): 633-40, 2013.
Article in English | MEDLINE | ID: mdl-24579194

ABSTRACT

Manifold learning of medical images plays a potentially important role in modeling anatomical variability within a population, with applications that include segmentation, registration, and prediction of clinical parameters. This paper describes a novel method for learning the manifold of 3D brain images that, unlike most existing manifold learning methods, does not require the manifold space to be locally linear and does not require a predefined similarity measure or a prebuilt proximity graph. Our manifold learning method is based on deep learning, a machine learning approach that uses layered networks (here, deep belief networks, or DBNs) and has recently received much attention in the computer vision field due to its success in object recognition tasks. DBNs have traditionally been too computationally expensive to apply to 3D images because of the large number of trainable parameters. Our primary contributions are (1) a much more computationally efficient training method for DBNs that makes training on 3D medical images with a resolution of up to 128×128×128 practical, and (2) a demonstration that DBNs can learn a low-dimensional manifold of brain volumes that detects modes of variation correlating with demographic and disease parameters.


Subject(s)
Artificial Intelligence; Brain Diseases/pathology; Brain/pathology; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Algorithms; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Sensitivity and Specificity