1.
Magn Reson Imaging ; 87: 38-46, 2022 04.
Article in English | MEDLINE | ID: mdl-34968699

ABSTRACT

Recently, deep learning approaches with various network architectures have drawn significant attention from the magnetic resonance imaging (MRI) community because of their great potential for image reconstruction from undersampled k-space data in fast MRI. However, the robustness of a trained network when applied to test data that deviate from the training data remains an important open question. In this work, we quantitatively evaluate the influence of image contrast, human anatomy, sampling pattern, undersampling factor, and noise level on the generalization of a trained network composed of a cascade of several CNNs and a data consistency layer, called a deep cascade of convolutional neural networks (DC-CNN). The DC-CNN is trained on datasets with different image contrasts, anatomies, sampling patterns, undersampling factors, and noise levels, and then applied to test datasets that are either consistent or inconsistent with the training datasets to assess the generalizability of the learned network. Our experiments show that reconstruction quality from the DC-CNN network is highly sensitive to sampling pattern, undersampling factor, and noise level, which are closely related to signal-to-noise ratio (SNR), and is relatively less sensitive to image contrast. We also show that a deviation in human anatomy between training and test data leads to a substantial reduction of image quality for the brain dataset, whereas performance is comparable for the chest and knee datasets, which contain fewer anatomical details than brain images. This work provides further empirical understanding of the generalizability of trained networks when training and test data deviate, and demonstrates the potential of transfer learning for image reconstruction from datasets different from those used to train the network.
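The data consistency layer in a DC-CNN-style network has a simple closed form when sampled k-space data are trusted exactly: replace the network's k-space values at acquired locations with the measurements. A minimal NumPy sketch, assuming a single-coil Cartesian acquisition (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def data_consistency(x_recon, k_measured, mask):
    """Enforce consistency with measured k-space samples.

    x_recon    : current image estimate from the CNN stage (2D array)
    k_measured : undersampled k-space measurements (zeros where not sampled)
    mask       : boolean sampling mask (True where k-space was acquired)
    """
    k_recon = np.fft.fft2(x_recon)
    # At acquired locations, trust the measurement; elsewhere keep the CNN output.
    k_dc = np.where(mask, k_measured, k_recon)
    return np.fft.ifft2(k_dc)

# Toy example: undersample a random image and apply one DC step.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
mask = rng.random((8, 8)) < 0.5              # ~50% of k-space acquired
k_full = np.fft.fft2(img)
k_meas = np.where(mask, k_full, 0.0)

x0 = np.zeros((8, 8), dtype=complex)         # stand-in for a CNN output
x1 = data_consistency(x0, k_meas, mask)
```

In the full cascade this step alternates with CNN blocks; here it simply fills the acquired k-space samples back in, which is why the acquired locations of `fft2(x1)` match the measurement exactly.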


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Signal-To-Noise Ratio
2.
J Xray Sci Technol ; 28(4): 751-771, 2020.
Article in English | MEDLINE | ID: mdl-32597827

ABSTRACT

BACKGROUND: Triple-energy computed tomography (TECT) obtains x-ray attenuation measurements at three energy spectra, thereby allowing identification of different material compositions with the same or very similar attenuation coefficients. This ability, known as material decomposition, can decompose TECT images into different basis material images. However, the basis material images are severely degraded when material decomposition is performed directly on noisy TECT measurements using a matrix inversion method. OBJECTIVE: To achieve high-quality basis material images, we present a statistical image-based material decomposition method for TECT that uses a penalized weighted least-squares (PWLS) criterion with total variation (TV) regularization (PWLS-TV). METHODS: The weighted least-squares term models the noise statistics of the material decomposition process, and the TV regularization penalizes differences between neighboring pixels in a decomposed image, thereby improving the quality of the basis material images. An alternating optimization method is used to minimize the objective function. RESULTS: The performance of PWLS-TV is quantitatively evaluated using digital and mouse thorax phantoms. The experimental results show that the PWLS-TV material decomposition method greatly improves the quality of decomposed basis material images compared with competing methods in terms of suppressing noise and preserving edges and fine structural details. CONCLUSIONS: The PWLS-TV method performs noise reduction and material decomposition simultaneously in one iterative step, resulting in a considerable improvement in basis material image quality.
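The direct matrix-inversion decomposition that PWLS-TV improves upon can be sketched in a few lines: at each pixel, the three attenuation values are mapped to three basis-material fractions by inverting a 3x3 mixing matrix. The matrix entries below are made-up placeholders, not calibrated attenuation coefficients:

```python
import numpy as np

# Illustrative (uncalibrated) attenuation of 3 basis materials at 3 spectra:
# row = energy spectrum, column = basis material.
A = np.array([[0.50, 0.20, 0.30],
              [0.30, 0.40, 0.20],
              [0.20, 0.30, 0.45]])

def decompose(mu_images):
    """mu_images: (3, H, W) attenuation images at three energy spectra.
    Returns (3, H, W) basis-material images via per-pixel matrix inversion."""
    A_inv = np.linalg.inv(A)
    return np.tensordot(A_inv, mu_images, axes=([1], [0]))

# Noiseless data are recovered exactly; measurement noise, however, is
# amplified by the conditioning of A, which is what degrades the images.
rng = np.random.default_rng(1)
f_true = np.ones((3, 4, 4))
mu = np.tensordot(A, f_true, axes=([1], [0]))
f_rec = decompose(mu)
f_noisy = decompose(mu + 0.01 * rng.standard_normal(mu.shape))
```

PWLS-TV replaces this pointwise inversion with a regularized fit, which is why it can suppress the amplified noise that the direct inverse passes through.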


Subject(s)
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Animals , Least-Squares Analysis , Mice , Phantoms, Imaging , Radiography, Thoracic , Signal-To-Noise Ratio
3.
Med Biol Eng Comput ; 57(9): 1933-1946, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31254175

ABSTRACT

A direct application of compressed sensing (CS) theory to dynamic magnetic resonance imaging (MRI) reconstruction requires vectorization or matricization of the dynamic MRI data, which consist of a stack of 2D images and can be naturally regarded as a tensor. This 1D/2D model may destroy the inherent spatial structure of the data. An alternative way to exploit the multidimensional structure in dynamic MRI is to employ tensor decomposition for dictionary learning, that is, learning multiple dictionaries along each dimension (mode) and sparsely representing the multidimensional data with respect to the Kronecker product of these dictionaries. In this work, we introduce a novel tensor dictionary learning method for dynamic MRI reconstruction under an orthonormal constraint on the elementary matrices of the tensor dictionary. The proposed algorithm alternates among sparse coding, tensor dictionary learning, and reconstruction update, and each subproblem is efficiently solved by a closed-form solution. Numerical experiments on phantom and synthetic data show significant improvements in reconstruction accuracy and computational efficiency over the existing method that uses the 1D/2D model with overcomplete dictionary learning. [Graphical abstract, Fig. 1: comparison between (a) the traditional method and (b) the proposed dictionary-learning-based method for dynamic MRI reconstruction.]
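Under the orthonormal constraint, the sparse-coding subproblem has the closed form the abstract mentions: transform the tensor to the coefficient domain by mode products with the transposed dictionaries, threshold, and transform back. A hedged NumPy sketch with random orthonormal matrices standing in for learned dictionaries (hard thresholding shown; soft thresholding works analogously for an l1 penalty):

```python
import numpy as np

def mode_product(X, D, mode):
    """Multiply tensor X by matrix D along the given mode (mode-n product)."""
    X = np.moveaxis(X, mode, 0)
    shape = X.shape
    Y = D @ X.reshape(shape[0], -1)
    return np.moveaxis(Y.reshape((D.shape[0],) + shape[1:]), 0, mode)

rng = np.random.default_rng(0)
dims = (4, 5, 6)
# Random orthonormal "dictionaries", one per mode (stand-ins for learned ones).
Ds = [np.linalg.qr(rng.standard_normal((d, d)))[0] for d in dims]

X = rng.standard_normal(dims)
C = X
for m, D in enumerate(Ds):
    C = mode_product(C, D.T, m)               # analysis: coefficient tensor
C_sparse = np.where(np.abs(C) > 0.5, C, 0.0)  # hard threshold (closed form)
X_hat = C_sparse
for m, D in enumerate(Ds):
    X_hat = mode_product(X_hat, D, m)         # synthesis: reconstruction
```

Because each dictionary is orthogonal, analysis followed by synthesis without thresholding recovers the tensor exactly; the threshold is the only lossy step.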


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Humans , Phantoms, Imaging
4.
Comput Biol Med ; 103: 167-182, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30384175

ABSTRACT

In this paper, we present an iterative reconstruction method for photon-counting CT using prior image constrained total generalized variation (PICTGV). This work aims to exploit structural correlation in the energy domain to reduce image noise in photon-counting CT with narrow energy bins, motivated by the fact that the similarity between a high-quality full-spectrum image and the target image is important prior knowledge for photon-counting CT reconstruction. The PICTGV method is implemented using a splitting-based fast iterative shrinkage-thresholding algorithm (FISTA). Evaluations conducted with simulated and real photon-counting CT data demonstrate that the PICTGV method outperforms the existing prior image constrained compressed sensing (PICCS) method in terms of noise reduction, artifact suppression, and resolution preservation. In the simulated head data study, the average relative root mean squared error is reduced from 2.3% with PICCS to 1.2% with PICTGV, and the average universal quality index increases from 0.67 with PICCS to 0.76 with PICTGV. The results show that the PICTGV method improves on the PICCS method for photon-counting CT reconstruction with narrow energy bins.
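FISTA itself is easy to state on a simpler problem. The sketch below applies it to an l1-regularized least-squares toy; the TGV prior in PICTGV would require its own proximal step, which is omitted here, and all names are illustrative:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x, 1.0
    for _ in range(n_iter):
        # Gradient step on the smooth term, then prox of the l1 term.
        x_new = soft(z - (A.T @ (A @ z - b)) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = fista(A, b, lam=0.1)
```

The momentum update on `z` is what distinguishes FISTA from plain ISTA, improving the convergence rate from O(1/k) to O(1/k^2).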


Subject(s)
Head/diagnostic imaging , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Animals , Humans , Muscle, Skeletal/diagnostic imaging , Phantoms, Imaging , Photons , Sheep
5.
Inverse Probl ; 34(2)2018 Feb.
Article in English | MEDLINE | ID: mdl-30294061

ABSTRACT

Spectral computed tomography (CT) has become a promising technique in research and the clinic because of its ability to produce images with improved energy resolution using narrow energy bins. However, narrow-energy-bin images are often affected by serious quantum noise because of the limited number of photons in each energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches collected across multi-energy images. Specifically, each set of patches is decomposed into a low-rank component and a sparse component: the low-rank component represents the stationary background shared across energy bins, while the sparse component represents the spectral features specific to individual energy bins. An effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression, and resolution preservation.
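The low-rank-plus-sparse split can be sketched with generic singular-value and soft thresholding. This is a simplified alternating scheme in the spirit of robust PCA, not the exact NLSMD objective; parameters and data are toy values:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau*||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: prox of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def lowrank_sparse(M, tau_l=0.5, tau_s=0.1, n_iter=100):
    """Alternately fit L (shared background) and S (bin-specific features)
    so that M ~ L + S, shrinking singular values of L and entries of S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau_l)
        S = soft(M - L, tau_s)
    return L, S

rng = np.random.default_rng(0)
# Columns play the role of vectorized patches from different energy bins.
base = rng.standard_normal((20, 1)) @ np.ones((1, 6))   # rank-1 shared background
spikes = np.zeros((20, 6))
spikes[3, 2] = 4.0                                       # bin-specific features
spikes[11, 5] = -3.0
L_hat, S_hat = lowrank_sparse(base + spikes)
```

The shared background is absorbed into the low-rank part while the isolated spectral features land in the sparse part, which is the separation NLSMD exploits per patch group.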

6.
J Xray Sci Technol ; 2017 Apr 05.
Article in English | MEDLINE | ID: mdl-28387700

ABSTRACT

BACKGROUND: An accurate statistical model of the measured projection data is essential for computed tomography (CT) image reconstruction. The transmission data can be described by a compound Poisson distribution on an electronic noise background; however, such a statistical distribution is numerically intractable for image reconstruction. OBJECTIVE: Although the sinogram data are easily manipulated, they lack a statistical description suited to image reconstruction. To address this problem, we present an alpha-divergence constrained total generalized variation (AD-TGV) method for sparse-view x-ray CT image reconstruction. METHODS: The AD-TGV method is formulated as an optimization problem that balances the alpha-divergence (AD) fidelity and total generalized variation (TGV) regularization in one framework. The alpha-divergence measures the discrepancy between the measured and estimated projection data, while the TGV regularization effectively eliminates the staircase and patchy artifacts often observed with total variation (TV) regularization. A modified proximal forward-backward splitting algorithm was proposed to minimize the associated objective function. RESULTS: Qualitative and quantitative evaluations were carried out on both phantom and patient data. Compared with the original TV-based method, the evaluations clearly demonstrate that the AD-TGV method achieves higher accuracy and lower noise while preserving structural details. CONCLUSIONS: The experimental results show that the presented AD-TGV method achieves greater gains than the AD-TV method in preserving structural details and suppressing image noise and undesired patchy artifacts. We conclude that the AD-TGV method has potential for radiation dose reduction by lowering the milliampere-seconds (mAs) and/or reducing the number of projection views.
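The alpha-divergence fidelity can be written down directly. The sketch below uses Amari's common parameterization; whether AD-TGV uses exactly this form is an assumption here, and the inputs are toy projection values:

```python
import numpy as np

def alpha_divergence(p, q, alpha=0.5):
    """Amari alpha-divergence between nonnegative arrays p and q.

    Nonnegative by the weighted AM-GM inequality, zero iff p == q, and
    interpolating between KL (alpha -> 1) and reverse KL (alpha -> 0).
    """
    return np.sum(alpha * p + (1 - alpha) * q
                  - p ** alpha * q ** (1 - alpha)) / (alpha * (1 - alpha))

# Toy "measured" vs "estimated" projection data.
p = np.array([4.0, 2.0, 1.0])
q = np.array([3.5, 2.5, 1.0])
d = alpha_divergence(p, q)
```

The `alpha` parameter tunes how asymmetrically the fidelity penalizes over- versus under-estimation of the projections, which is the flexibility the AD term adds over a plain least-squares or KL fit.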

7.
Nan Fang Yi Ke Da Xue Xue Bao ; 37(12): 1585-1591, 2017 Dec 20.
Article in Chinese | MEDLINE | ID: mdl-29292249

ABSTRACT

OBJECTIVE: To obtain high-quality low-dose CT images using total generalized variation regularization in the projection domain for low-dose CT reconstruction. METHODS: The projection data of the CT images were transformed from a Poisson distribution to a Gaussian distribution using the linear Anscombe transform. The transformed data were then restored by an efficient total generalized variation minimization algorithm. Reconstruction was finally achieved by the inverse Anscombe transform and the filtered back projection (FBP) method. RESULTS: The image quality of low-dose CT was greatly improved by the proposed algorithm on both the Clock and Shepp-Logan phantoms. The signal-to-noise ratios (SNRs) of the Clock and Shepp-Logan images reconstructed by the FBP algorithm were 17.752 dB and 19.379 dB, which were increased by the proposed algorithm to 24.0352 dB and 23.4181 dB, respectively. The NMSEs of the Clock and Shepp-Logan images reconstructed by the FBP algorithm were 0.86% and 0.58%, which were reduced by the proposed algorithm to 0.2% and 0.23%, respectively. CONCLUSION: The proposed method can effectively suppress noise and streak artifacts in low-dose CT images when the piecewise-constant assumption does not hold.
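The Anscombe step in the pipeline above is one line. A sketch of the forward transform and the simple algebraic inverse (the exact unbiased inverse used in practice adds small correction terms):

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: approximately maps Poisson(lam)
    counts to Gaussian data with unit variance (for lam not too small)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the forward transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
lam = 50.0
counts = rng.poisson(lam, size=100_000)
stabilized = anscombe(counts)
# After the transform, the sample variance is close to 1 regardless of lam,
# so a Gaussian-noise restoration method (here, TGV minimization) applies.
```

Stabilizing the variance is what licenses running a Gaussian-model denoiser on Poisson projection data before FBP.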


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Phantoms, Imaging , Radiation Dosage , Signal-To-Noise Ratio
8.
Neurocomputing (Amst) ; 197: 143-160, 2016 Jul 12.
Article in English | MEDLINE | ID: mdl-27440948

ABSTRACT

Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, because the PCT imaging protocol involves repeated dynamic sequential scans, the associated radiation dose unavoidably increases compared with conventional CT examinations. Minimizing radiation exposure in PCT examinations is therefore a major task in the CT field. In this paper, exploiting the rich similarity and redundancy among enhanced sequential PCT images, we propose a low-dose PCT image restoration model that incorporates the low-rank and sparse matrix characteristics of sequential PCT images. Specifically, the sequential PCT images are first stacked into a matrix (i.e., a low-rank matrix), and a non-convex spectral norm regularization and a spatio-temporal total variation regularization are then imposed on this matrix to describe the low rank and sparsity of the sequential PCT images, respectively. An improved split Bregman method is adopted to minimize the associated objective function at a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method achieves several noticeable advantages over existing methods in terms of noise reduction and universal quality index. More importantly, the present method produces more accurate kinetic enhancement details and diagnostic hemodynamic parameter maps.
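Why stacking the sequential frames yields a low-rank matrix is easy to verify numerically: frames sharing a static background plus a smoothly scaled enhancement pattern span only two spatial patterns. A toy simulation (not the paper's data; the construction is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 16, 16, 12
# T sequential frames = static background + a contrast-enhancement pattern
# whose amplitude grows smoothly over time.
background = rng.standard_normal((H, W))
enhancement = rng.standard_normal((H, W))
frames = [background + (t / T) * enhancement for t in range(T)]

# Casorati matrix: one vectorized frame per column (space x time).
M = np.stack([f.ravel() for f in frames], axis=1)
s = np.linalg.svd(M, compute_uv=False)
# Only two singular values are non-negligible: the matrix is rank 2.
```

Real perfusion sequences are only approximately low rank, which is why a spectral-norm penalty (shrinking small singular values) is used rather than a hard rank constraint.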

9.
PLoS One ; 10(10): e0140579, 2015.
Article in English | MEDLINE | ID: mdl-26495975

ABSTRACT

Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear objective function, most existing SIR algorithms suffer from a heavy computational load and a slow convergence rate, especially when an edge-preserving or sparsity-based penalty is incorporated. In this work, to address these issues, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed "ALM-ANAD". The algorithm combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that ALM-ANAD achieves noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index.
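The augmented-Lagrangian machinery can be illustrated on a problem small enough to check by hand: a toy equality-constrained quadratic, not the SIR objective. The inner minimization is solved exactly here, so the nonmonotone line search of ALM-ANAD is not needed; all names are illustrative:

```python
import numpy as np

def alm_min_norm(a, rho=1.0, n_iter=50):
    """Augmented Lagrangian method for min 0.5*||x||^2  s.t.  a.x = 1.

    Each iteration minimizes the augmented Lagrangian
        0.5*||x||^2 + mu*(a.x - 1) + (rho/2)*(a.x - 1)^2
    in closed form, then updates the multiplier mu with the residual.
    """
    mu = 0.0
    x = np.zeros_like(a)
    for _ in range(n_iter):
        # Stationarity: x + mu*a + rho*a*(a.x - 1) = 0  =>  closed form below.
        x = a * (rho - mu) / (1.0 + rho * (a @ a))
        mu += rho * (a @ x - 1.0)       # multiplier (dual) update
    return x

a = np.array([3.0, 4.0])
x_hat = alm_min_norm(a)
# The analytic solution is a / ||a||^2 = [0.12, 0.16].
```

ALM-ANAD uses the same outer structure (minimize the augmented Lagrangian, update multipliers) but splits the inner minimization across variables with an alternating direction step and a line search, since the SIR objective has no closed-form minimizer.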


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Radiographic Image Enhancement/methods , Tomography, X-Ray Computed/methods , Humans , Imaging, Three-Dimensional/methods , Phantoms, Imaging , Reproducibility of Results
10.
Phys Med Biol ; 59(12): 2997-3017, 2014 Jun 21.
Article in English | MEDLINE | ID: mdl-24842150

ABSTRACT

Sparse-view CT reconstruction algorithms based on total variation (TV) iteratively optimize the reconstruction under a noise- and artifact-reducing model, enabling significant radiation dose reduction while maintaining image quality. However, the piecewise-constant assumption of TV minimization often leads to noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme that retains image quality by incorporating the newer concept of total generalized variation (TGV) regularization, referred to as 'PWLS-TGV' for simplicity. Specifically, the TGV regularization utilizes higher-order derivatives of the objective image, and the weighted least-squares term incorporates data-dependent variance estimation, both of which contribute to improving image quality with sparse-view projection measurements. An alternating optimization algorithm was adopted to minimize the associated objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present PWLS-TGV method achieves several noticeable gains over the original TV-based method in terms of accuracy and resolution.
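The patchy-artifact mechanism is visible in one dimension: first-order TV charges a smooth ramp as much as a genuine edge, while the higher-order differences that TGV incorporates vanish on ramps. A sketch using a plain second-difference penalty as a stand-in for the full TGV infimal convolution:

```python
import numpy as np

def tv(u):
    """First-order TV of a 1D signal: penalizes any slope, so a smooth
    ramp costs as much as an abrupt step of the same height."""
    return np.abs(np.diff(u)).sum()

def second_order(u):
    """Second-difference penalty (the higher-order term TGV adds):
    zero on linear ramps, so smooth gradients are not penalized."""
    return np.abs(np.diff(u, n=2)).sum()

ramp = np.linspace(0.0, 1.0, 11)          # smooth intensity gradient
step = np.r_[np.zeros(5), np.ones(6)]     # a genuine edge
# TV cannot distinguish the two, pushing solutions toward piecewise-constant
# (patchy) images; the second-order term separates them.
```

TGV balances both orders of differences, so it keeps edges sharp like TV while letting smooth gradients through, which is why it avoids the patchy look.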


Subject(s)
Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans , Phantoms, Imaging , Torso/diagnostic imaging