1.
Patch Based Tech Med Imaging (2017) ; 10530: 12-19, 2017 Sep.
Article in English | MEDLINE | ID: mdl-29104969

ABSTRACT

Automatic labeling of anatomical structures in brain images plays an important role in neuroimaging analysis. Among existing approaches, multi-atlas based segmentation methods are widely used because of their robustness in propagating prior label information. However, they always require non-linear registration, which is time-consuming. Alternatively, patch-based methods have been proposed to relax the requirement of image registration, but their labeling is often determined by the target image information alone, without direct assistance from the atlases. To address these limitations, we propose a multi-atlas guided 3D fully convolutional network (FCN) for brain image labeling. Specifically, multi-atlas based guidance is incorporated during network learning, which boosts the discriminative power of the FCN and eventually contributes to more accurate prediction. Experiments show that the use of multi-atlas guidance improves brain labeling performance.
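
The abstract does not describe how the atlas guidance enters the network; one common design is to concatenate registered atlas intensity patches and label maps with the target patch as extra input channels. The following PyTorch sketch illustrates that reading only; the layer sizes, channel layout, and the num_atlases/num_labels parameters are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AtlasGuided3DFCN(nn.Module):
    """Toy 3D FCN whose input stacks the target patch with atlas guidance.

    Channels: 1 target intensity channel, plus one intensity channel and one
    (soft) label-map channel per atlas. All sizes are illustrative only.
    """
    def __init__(self, num_atlases=3, num_labels=4):
        super().__init__()
        in_ch = 1 + 2 * num_atlases  # target + atlas intensities + atlas labels
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, num_labels, kernel_size=1),  # per-voxel label scores
        )

    def forward(self, target, atlas_imgs, atlas_labels):
        # target: (B,1,D,H,W); atlas_imgs/atlas_labels: (B,num_atlases,D,H,W)
        x = torch.cat([target, atlas_imgs, atlas_labels], dim=1)
        return self.net(x)

# Minimal smoke test on a random 32^3 patch.
model = AtlasGuided3DFCN()
out = model(torch.randn(1, 1, 32, 32, 32),
            torch.randn(1, 3, 32, 32, 32),
            torch.randn(1, 3, 32, 32, 32))
print(out.shape)  # torch.Size([1, 4, 32, 32, 32])
```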

2.
Med Phys ; 44(5): 1661-1677, 2017 May.
Article in English | MEDLINE | ID: mdl-28177548

ABSTRACT

PURPOSE: High-resolution MR images can depict rich details of brain anatomical structures and show subtle changes in longitudinal data. 7T MRI scanners can acquire MR images with higher resolution and better tissue contrast than routine 3T MRI scanners. However, 7T MRI scanners are currently more expensive and less available in clinical and research centers. To address this, we propose a method to generate super-resolution 3T MRI that resembles 7T MRI, referred to as a 7T-like MR image in this paper. METHODS: First, we propose a mapping from 3T MRI to 7T MRI space using random forest regression. The mapped 3T MR images serve as intermediate results with an appearance similar to 7T MR images. Second, we predict the final higher-resolution 7T-like MR images based on sparse representation, using paired local dictionaries for both the mapped 3T MR images and the 7T MR images. RESULTS: Based on 15 subjects with both 3T and 7T MR images, the 7T-like MR images predicted by our method match the ground-truth 7T MR images better than those produced by other methods. Meanwhile, an experiment on brain tissue segmentation shows that our 7T-like MR images lead to the highest accuracy in the segmentation of WM, GM, and CSF brain tissues, compared to segmentation of the 3T MR images as well as of the 7T-like MR images reconstructed by other methods. CONCLUSIONS: We propose a novel method for predicting high-resolution 7T-like MR images from low-resolution 3T MR images. Our predicted 7T-like MR images demonstrate better spatial resolution than both the 3T MR images and the predictions of the comparison methods. Such high-quality 7T-like MR images could better facilitate disease diagnosis and intervention.
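
To make the two-step pipeline concrete, below is a minimal sketch of the first step, a patch-wise random forest regression from 3T to 7T intensities using scikit-learn. The patch size, the choice of flattened 3T patches predicting the 7T center voxel, and all variable names are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(volume, radius=2):
    """Return flattened cubic patches and their center coordinates."""
    r = radius
    patches, coords = [], []
    for z in range(r, volume.shape[0] - r):
        for y in range(r, volume.shape[1] - r):
            for x in range(r, volume.shape[2] - r):
                patches.append(volume[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1].ravel())
                coords.append((z, y, x))
    return np.asarray(patches), coords

# Paired training volumes (aligned 3T and 7T scans of the same subject).
vol_3t = np.random.rand(16, 16, 16)   # stand-in for a real 3T volume
vol_7t = np.random.rand(16, 16, 16)   # stand-in for the aligned 7T volume

X, coords = extract_patches(vol_3t)
y = np.array([vol_7t[c] for c in coords])   # 7T center-voxel intensities

rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X, y)

# Map a new 3T volume to an intermediate "7T-like" estimate, voxel by voxel.
X_new, coords_new = extract_patches(np.random.rand(16, 16, 16))
pred = rf.predict(X_new)
```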


Subject(s)
Brain/diagnostic imaging; Magnetic Resonance Imaging; Humans; Sensitivity and Specificity
3.
Med Image Comput Comput Assist Interv ; 10433: 764-772, 2017 Sep.
Article in English | MEDLINE | ID: mdl-30009281

ABSTRACT

7T MRI scanners provide MR images with higher resolution and better contrast than 3T MR scanners, which helps many medical analysis tasks, including tissue segmentation. However, the number of 7T MRI scanners worldwide is currently very limited. This motivates us to propose a novel image post-processing framework that jointly generates high-resolution 7T-like images and their corresponding high-quality 7T-like tissue segmentation maps, solely from routine 3T MR images. Our framework comprises two parallel components: (1) reconstruction and (2) segmentation. The reconstruction component uses multi-step cascaded convolutional neural networks (CNNs) that map the input 3T MR image to a 7T-like MR image in terms of both resolution and contrast. The segmentation component involves another set of cascaded CNNs, with a different architecture, to generate high-quality segmentation maps. Cascaded feedback between the two parallel CNNs allows both tasks to benefit from each other when learning their respective reconstruction and segmentation mappings. For evaluation, we tested our framework on 15 subjects (with paired 3T and 7T images) using leave-one-out cross-validation. The experimental results show that our estimated 7T-like images have richer anatomical details and yield better segmentation results than the 3T MRI. Furthermore, our method also achieves better results than state-of-the-art methods in both the reconstruction and segmentation tasks.
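
The abstract describes cascaded feedback between a reconstruction CNN and a segmentation CNN without architectural details; the sketch below shows one plausible reading in PyTorch, where each cascade stage feeds the other branch's previous output back in as extra input channels. The network depth, channel counts, and number of stages are assumptions.

```python
import torch
import torch.nn as nn

def small_cnn(in_ch, out_ch):
    """Tiny 2D CNN used as a stand-in for each cascade stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

class JointReconSeg(nn.Module):
    """Two parallel cascades exchanging their intermediate outputs."""
    def __init__(self, num_stages=3, num_labels=4):
        super().__init__()
        # Stage inputs: 3T slice + previous reconstruction + previous segmentation.
        self.recon_stages = nn.ModuleList(
            [small_cnn(1 + 1 + num_labels, 1) for _ in range(num_stages)])
        self.seg_stages = nn.ModuleList(
            [small_cnn(1 + 1 + num_labels, num_labels) for _ in range(num_stages)])
        self.num_labels = num_labels

    def forward(self, x3t):
        b, _, h, w = x3t.shape
        recon = torch.zeros(b, 1, h, w, device=x3t.device)
        seg = torch.zeros(b, self.num_labels, h, w, device=x3t.device)
        for r_stage, s_stage in zip(self.recon_stages, self.seg_stages):
            inp = torch.cat([x3t, recon, seg], dim=1)
            recon = r_stage(inp)   # updated 7T-like estimate
            seg = s_stage(inp)     # updated segmentation scores
        return recon, seg

recon, seg = JointReconSeg()(torch.randn(2, 1, 64, 64))
print(recon.shape, seg.shape)  # (2,1,64,64) (2,4,64,64)
```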

4.
Med Image Anal ; 36: 162-171, 2017 02.
Article in English | MEDLINE | ID: mdl-27914302

ABSTRACT

Accurate segmentation of anatomical structures in medical images is important in many imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between patch appearance in the image domain and patch structure in the label domain, the (patch) representation coefficients estimated in the image domain may not be optimal for the final label fusion, thus reducing labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of the (patch) representation coefficients from the image domain to the label domain. Our multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our progressive label fusion method achieves more accurate hippocampal segmentation on the ADNI dataset, compared to the counterpart methods using only a single-layer static dictionary.
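
One way to realize such a progressive image-to-label transition is to sparse-code against a dictionary that is gradually blended from image-domain patches toward label-domain patches, carrying the reconstructed target along. The sketch below implements that reading with a lasso solver; the linear blending schedule and the lasso penalty are assumptions, not the paper's formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def progressive_label_fusion(target_patch, D_img, D_lab, num_layers=4, alpha=0.01):
    """Steer sparse-coding weights from the image domain to the label domain.

    D_img: (p, k) atlas intensity patches (columns);
    D_lab: (p, k) corresponding atlas label patches (same column order).
    Returns fused per-voxel label probabilities for the target patch.
    """
    x = target_patch.copy()
    for layer in range(num_layers):
        t = layer / max(num_layers - 1, 1)     # 0 -> image domain, 1 -> label domain
        D = (1.0 - t) * D_img + t * D_lab      # intermediate dynamic dictionary
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
        coder.fit(D, x)
        w = coder.coef_
        # Carry the current reconstruction forward as the next layer's target.
        x = D @ w
    # Final weights are applied to the label dictionary for fusion.
    w = w / (w.sum() + 1e-12)
    return D_lab @ w

# Toy example: 27-voxel patches, 50 atlas patches with binary label patches.
rng = np.random.default_rng(0)
D_img = rng.random((27, 50))
D_lab = (rng.random((27, 50)) > 0.5).astype(float)
fused = progressive_label_fusion(rng.random(27), D_img, D_lab)
```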


Subject(s)
Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Algorithms; Alzheimer Disease/diagnostic imaging; Hippocampus/diagnostic imaging; Humans; Neuroimaging/methods
5.
IEEE Trans Med Imaging ; 35(9): 2085-97, 2016 09.
Article in English | MEDLINE | ID: mdl-27046894

ABSTRACT

In recent MRI practice, ultra-high-field (7T) MR imaging provides higher resolution and better tissue contrast than routine 3T MRI, which may help in earlier and more accurate diagnosis of brain diseases. However, 7T MRI scanners are currently more expensive and less available at clinical and research centers. This motivates us to propose a method for reconstructing images close to the quality of 7T MRI, called 7T-like images, from 3T MRI, improving quality in terms of resolution and contrast. In this way, post-processing tasks such as tissue segmentation can be performed more accurately, and brain tissue details can be seen with higher resolution and contrast. To do this, we acquired a unique dataset of paired 3T and 7T images scanned from the same subjects, and propose a hierarchical reconstruction based on group sparsity in a novel multi-level Canonical Correlation Analysis (CCA) space, to improve the quality of a 3T MR image to that of a 7T-like MR image. First, overlapping patches are extracted from the input 3T MR image. Then, by extracting the most similar patches from all aligned 3T and 7T images in the training set, paired 3T and 7T dictionaries are constructed for each patch. Note that, for training, we use pairs of 3T and 7T MR images from each training subject. We then propose multi-level CCA to map the paired 3T and 7T patch sets to a common space in order to increase their correlation. In this space, each input 3T MRI patch is sparsely represented by the 3T dictionary, and the obtained sparse coefficients are used together with the corresponding 7T dictionary to reconstruct the 7T-like patch. Group sparsity is also employed to enforce structural consistency between adjacent patches. This reconstruction is performed with changing patch sizes in a hierarchical framework. Experiments were conducted on 13 subjects with both 3T and 7T MR images. The results show that our method outperforms previous methods and recovers better structural details. In addition, to place the proposed method in a medical application context, we evaluated the influence of post-processing methods, such as brain tissue segmentation, on the reconstructed 7T-like MR images. The results show that our 7T-like images lead to higher accuracy in the segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and skull, compared to segmentation of 3T MR images.
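
Below is a minimal sketch of the core CCA-plus-sparse-coding step using scikit-learn: the paired 3T/7T patch dictionaries are projected to a common correlated space, the input 3T patch is sparse-coded against the projected 3T dictionary, and the coefficients are applied to the 7T dictionary to synthesize the 7T-like patch. The single-level CCA, the component count, and the lasso penalty are simplifying assumptions; the paper uses a multi-level, hierarchical variant with group sparsity.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
k, p = 200, 125                       # dictionary atoms, voxels per 5^3 patch
D3 = rng.random((k, p))               # 3T patch dictionary (rows are patches)
D7 = D3 + 0.1 * rng.random((k, p))    # paired 7T patches (toy correlated data)

# Map both patch sets into a common space where their correlation is maximized.
cca = CCA(n_components=20)
cca.fit(D3, D7)
D3_c, _ = cca.transform(D3, D7)       # projected 3T dictionary, shape (k, 20)

def reconstruct_7t_patch(patch_3t, alpha=0.01):
    """Sparse-code a 3T patch in CCA space, then combine the paired 7T patches."""
    z = cca.transform(patch_3t.reshape(1, -1))[0]   # project the input 3T patch
    coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
    coder.fit(D3_c.T, z)                            # atoms as columns
    w = coder.coef_
    w = w / (w.sum() + 1e-12)
    return D7.T @ w                                 # 7T-like patch estimate

patch7_like = reconstruct_7t_patch(rng.random(p))
```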


Subject(s)
Magnetic Resonance Imaging; Brain; Humans
6.
Med Image Comput Comput Assist Interv ; 9901: 572-580, 2016 Oct.
Article in English | MEDLINE | ID: mdl-28149968

ABSTRACT

The emerging era of ultra-high-field MRI using 7T MRI scanners has dramatically improved sensitivity, image resolution, and tissue contrast compared to 3T MRI scanners in examining various anatomical structures. Among the advantages of these high-resolution MR images is higher segmentation accuracy for brain tissues. However, access to 7T MRI scanners currently remains much more limited than to 3T MRI scanners due to technological and economic constraints. Hence, we propose in this work the first learning-based model that improves the segmentation of an input 3T MR image with any conventional segmentation method, through the reconstruction of a higher-quality 7T-like MR image, without actually acquiring an ultra-high-field 7T MRI. Our proposed framework comprises two main steps. First, we estimate a non-linear mapping from 3T MRI to 7T MRI space, using a random forest regression model with novel weighting and ensembling schemes, to reconstruct initial 7T-like MR images. Second, we use a group sparse representation with a new pre-selection approach to further refine the 7T-like MR image reconstruction. We evaluated our 7T MRI reconstruction results, along with their segmentation results, using 13 subjects with both 3T and 7T MR images. For tissue segmentation, we applied two widely used segmentation methods (FAST and SPM) in the experiments. Our results show (1) improved WM, GM, and CSF brain tissue segmentation when guided by the reconstructed 7T-like images rather than the 3T MR images, and (2) that the proposed 7T MRI reconstruction method outperforms other state-of-the-art methods.
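
The second step relies on pre-selecting candidate training patches before group sparse coding; the abstract does not specify the selection criterion, so the sketch below simply ranks candidates by normalized correlation with the initial 7T-like estimate and keeps the top K, which is one plausible form of pre-selection. The value of K and the similarity measure are assumptions.

```python
import numpy as np

def preselect_patches(query_patch, candidate_patches, top_k=30):
    """Keep the top_k candidates most correlated with the query patch.

    query_patch: (p,) initial 7T-like patch estimate (e.g., from the RF step).
    candidate_patches: (n, p) training 7T patches.
    Returns the selected patches and their indices.
    """
    q = query_patch - query_patch.mean()
    q /= (np.linalg.norm(q) + 1e-12)
    c = candidate_patches - candidate_patches.mean(axis=1, keepdims=True)
    c /= (np.linalg.norm(c, axis=1, keepdims=True) + 1e-12)
    scores = c @ q                       # normalized correlation per candidate
    idx = np.argsort(scores)[::-1][:top_k]
    return candidate_patches[idx], idx

rng = np.random.default_rng(1)
selected, idx = preselect_patches(rng.random(125), rng.random((1000, 125)))
print(selected.shape)  # (30, 125): reduced dictionary for the sparse-coding step
```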


Subject(s)
Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Algorithms; Brain/anatomy & histology; Humans; Reproducibility of Results; Sensitivity and Specificity
7.
IEEE Trans Cybern ; 46(1): 39-50, 2016 Jan.
Article in English | MEDLINE | ID: mdl-25647763

ABSTRACT

Blur is a key determinant in the perception of image quality. Generally, blur spreads edges, which changes object shapes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments, since noticeable blur affects the magnitudes of the moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for shape, which is more effective for blur representation. The gradient image is then divided into equal-size blocks, and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, computed with the guidance of a visual saliency model to adapt to the characteristics of the human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method produces blur scores highly consistent with subjective evaluations. It also outperforms state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.
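
The pipeline in this abstract is concrete enough to sketch end to end: compute a gradient image, split it into blocks, take the discrete Tchebichef moments of each block, sum the squared non-DC moments as the block energy, and normalize by the block variance. The sketch below omits the visual-saliency weighting and builds the orthonormal discrete polynomial basis by QR decomposition of a Vandermonde matrix, which matches the normalized Tchebichef polynomials up to sign (irrelevant here, since only squared moments are used). The block size and gradient operator are assumptions.

```python
import numpy as np

def tchebichef_basis(n):
    """Orthonormal discrete polynomial basis on n points (Tchebichef up to sign)."""
    x = np.arange(n, dtype=float)
    V = np.vander(x, n, increasing=True)   # columns: x^0, x^1, ..., x^(n-1)
    Q, _ = np.linalg.qr(V)
    return Q                               # (n, n), columns are basis polynomials

def moment_energy_score(image, block=8, eps=1e-8):
    """Variance-normalized Tchebichef moment energy of the gradient image."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    Q = tchebichef_basis(block)
    num, den = 0.0, 0.0
    h, w = grad.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = grad[i:i + block, j:j + block]
            T = Q.T @ blk @ Q                         # 2D moments of the block
            num += (T ** 2).sum() - T[0, 0] ** 2      # drop the DC moment
            den += blk.var() + eps
    # Normalized non-DC moment energy of the gradient; the paper maps this
    # (with saliency weighting) to its final blur score.
    return num / den

rng = np.random.default_rng(0)
print(moment_energy_score(rng.random((64, 64))))
```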


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Humans; Models, Theoretical; Photography
8.
Med Image Comput Comput Assist Interv ; 9351: 190-197, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26942233

ABSTRACT

Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; these methods generally represent each target patch using an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the representation coefficients estimated in the image domain may not remain optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply the proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with a single-layer dictionary.
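
The multi-layer idea itself is sketched after entry 4 above; for contrast, here is the single-layer counterpart that this work extends: sparse-code the target patch once against the atlas intensity dictionary and apply those coefficients directly to the atlas labels. The lasso penalty and the binary center-voxel labels are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def single_layer_label_fusion(target_patch, D_img, D_lab, alpha=0.01):
    """Baseline patch-based label fusion with one static dictionary.

    D_img: (p, k) atlas intensity patches (columns);
    D_lab: (k,) labels of the atlas patches' center voxels (binary here).
    Coefficients estimated purely in the image domain are reused in the
    label domain, which is exactly the gap the multi-layer method targets.
    """
    coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
    coder.fit(D_img, target_patch)
    w = coder.coef_
    w = w / (w.sum() + 1e-12)
    return float(D_lab @ w)          # fused probability of the foreground label

rng = np.random.default_rng(2)
prob = single_layer_label_fusion(rng.random(27),
                                 rng.random((27, 100)),
                                 (rng.random(100) > 0.5).astype(float))
```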

9.
Med Image Comput Comput Assist Interv ; 9350: 659-666, 2015 Oct.
Article in English | MEDLINE | ID: mdl-30101232

ABSTRACT

Advances in 7T MR imaging bring higher spatial resolution and clearer tissue contrast compared to conventional 3T and 1.5T MR scanners. However, 7T MRI scanners are currently less accessible due to higher costs. By analyzing the appearance of 7T images, we can improve both the resolution and quality of 3T images by properly mapping them to 7T-like images, thereby promoting more accurate post-processing tasks such as segmentation. Using a unique dataset in which both 3T and 7T images were acquired from the same subjects, we propose a novel multi-level Canonical Correlation Analysis (CCA) method combined with group sparsity in a hierarchical framework to reconstruct 7T-like MRI from 3T MRI. First, the input 3T MR image is partitioned into a set of overlapping patches. For each patch, local coupled 3T and 7T dictionaries are constructed by extracting patches from a neighboring region of all aligned 3T and 7T images in the training set; in the training phase, both 3T and 7T MR images are available for each training subject. These two patch sets are then mapped to the same space using multi-level CCA. Next, each input 3T MRI patch is sparsely represented by the 3T dictionary, and the obtained sparse coefficients are used to reconstruct the 7T patch with the corresponding 7T dictionary. Group sparsity is further employed to maintain consistency between neighboring patches. The reconstruction is performed hierarchically with adaptive patch sizes. Experiments were performed on 10 subjects who had both 3T and 7T MR images. The results demonstrate that our method recovers rich structural details and outperforms other methods, including the sparse representation method and the CCA method.
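
The CCA-plus-sparse-coding step is sketched after entry 5 above; the additional ingredient emphasized here is group sparsity across neighboring patches, which can be read as forcing adjacent patches to select the same dictionary atoms. scikit-learn's MultiTaskLasso applies such an l2,1 penalty when each neighboring patch is treated as a separate task, as in the sketch below; treating a patch neighborhood as the task group and the penalty value are assumptions.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(3)
p, k, n_neighbors = 125, 200, 4
D3 = rng.random((p, k))                      # 3T dictionary (atoms as columns)
D7 = rng.random((p, k))                      # paired 7T dictionary
neighborhood = rng.random((p, n_neighbors))  # a target patch and its neighbors

# l2,1 penalty: the same atoms are (de)activated jointly for all patches
# in the neighborhood, enforcing consistency between adjacent patches.
coder = MultiTaskLasso(alpha=0.01, max_iter=5000)
coder.fit(D3, neighborhood)                  # X: (p, k), Y: (p, n_neighbors)
W = coder.coef_.T                            # (k, n_neighbors) shared-support weights

patches_7t_like = D7 @ W                     # one 7T-like estimate per patch
print(patches_7t_like.shape)                 # (125, 4)
```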
