Results 1 - 5 of 5
1.
NMR Biomed ; 34(1): e4405, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32875668

ABSTRACT

Highly accelerated real-time cine MRI using compressed sensing (CS) is a promising approach to achieve high spatio-temporal resolution and clinically acceptable image quality in patients with arrhythmia and/or dyspnea. However, its lengthy image reconstruction time may hinder its clinical translation. The purpose of this study was to develop a neural network for reconstruction of non-Cartesian real-time cine MRI k-space data faster (<1 min per slice with 80 frames) than graphics processing unit (GPU)-accelerated CS reconstruction, without significant loss in image quality or accuracy in left ventricular (LV) functional parameters. We introduce a perceptual complex neural network (PCNN) that trains on complex-valued MRI signals and incorporates a perceptual loss term to suppress incoherent image details. This PCNN was trained and tested with multi-slice, multi-phase cine images from 40 patients (20 for training, 20 for testing), where the zero-filled images were used as input and the corresponding CS-reconstructed images were used as the practical ground truth. The resulting images were compared using quantitative metrics (structural similarity index (SSIM) and normalized root mean square error (NRMSE)) and visual scores (conspicuity, temporal fidelity, artifacts, and noise scores), each graded on a five-point scale (1, worst; 3, acceptable; 5, best), as well as LV ejection fraction (LVEF). The mean processing time per slice with 80 frames for PCNN was 23.7 ± 1.9 s for pre-processing (Step 1, same as CS) and 0.822 ± 0.004 s for dealiasing (Step 2, 166 times faster than CS). Relative to the CS reference, our PCNN produced high data-fidelity metrics (SSIM = 0.88 ± 0.02, NRMSE = 0.014 ± 0.004). While all the visual scores were significantly different (P < 0.05), the median scores were all 4.0 or higher for both CS and PCNN.
LVEFs measured from CS and PCNN were strongly correlated (R² = 0.92) and in good agreement (mean difference = -1.4% [2.3% of mean]; limit of agreement = 10.6% [17.6% of mean]). The proposed PCNN is capable of rapid reconstruction (25 s per slice with 80 frames) of non-Cartesian real-time cine MRI k-space data, without significant loss in image quality or accuracy in LV functional parameters.
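The quantitative comparisons reported above (SSIM and NRMSE against the CS reference, plus Bland-Altman agreement of LVEF) can be sketched in NumPy. This is an illustrative implementation, not the authors' code; in particular, the single-window SSIM below is a simplification of the locally windowed SSIM normally used in image-quality studies.

```python
import numpy as np

def nrmse(ref, est):
    """Root-mean-square error normalized by the reference intensity range."""
    rmse = np.sqrt(np.mean((ref - est) ** 2))
    return rmse / (ref.max() - ref.min())

def global_ssim(ref, est, data_range=None, k1=0.01, k2=0.03):
    """Single-window (global) SSIM between two magnitude images."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_x, mu_y = ref.mean(), est.mean()
    var_x, var_y = ref.var(), est.var()
    cov = ((ref - mu_x) * (est - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods
    (e.g. LVEF measured from CS vs. PCNN reconstructions)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```

For identical images, `nrmse` returns 0 and `global_ssim` returns 1; the limits of agreement collapse to the bias when the two methods agree perfectly.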


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging, Cine; Neural Networks, Computer; Aged; Data Compression; Female; Humans; Male
2.
Radiol Cardiothorac Imaging ; 2(3): e190205, 2020 Jun 25.
Article in English | MEDLINE | ID: mdl-32656535

ABSTRACT

PURPOSE: To implement an integrated reconstruction pipeline including a graphics processing unit (GPU)-based convolutional neural network (CNN) architecture and test whether it reconstructs four-dimensional non-Cartesian, non-contrast material-enhanced MR angiographic k-space data faster than a central processing unit (CPU)-based compressed sensing (CS) reconstruction pipeline, without significant losses in data fidelity, summed visual score (SVS), or arterial vessel-diameter measurements. MATERIALS AND METHODS: Raw k-space data of 24 patients (18 men and 6 women; mean age, 56.8 years ± 11.8 [standard deviation]) suspected of having thoracic aortic disease were used to evaluate the proposed reconstruction pipeline, which was derived from an open-source three-dimensional CNN. For training, 4800 zero-filled images and the corresponding CS-reconstructed images from 10 patients were used as input-output pairs. For testing, 6720 zero-filled images from 14 different patients were used as inputs to the trained CNN. Metrics for evaluating the agreement between the CNN and CS images included reconstruction times, structural similarity index (SSIM), normalized root-mean-square error (NRMSE), SVS (3 = nondiagnostic, 9 = clinically acceptable, 15 = excellent), and vessel diameters. RESULTS: The mean reconstruction time was 65 times and 69 times shorter for the CPU-based and GPU-based CNN pipelines (216.6 seconds ± 40.5 and 204.9 seconds ± 40.5), respectively, than for CS (14 152.3 seconds ± 1708.6) (P < .001). Compared with CS as the practical ground truth, the CNNs produced high data fidelity (SSIM = 0.94 ± 0.02, NRMSE = 2.8% ± 0.4) and SVS and aortic diameters that were not significantly different (P = .25), except at one of seven locations, where the percentage difference was only 3% (i.e., clinically irrelevant).
CONCLUSION: The proposed integrated reconstruction pipeline including a CNN architecture is capable of rapidly reconstructing time-resolved volumetric cardiovascular MRI k-space data, without a significant loss in data quality, thereby supporting the clinical translation of non-contrast-enhanced MR angiography. Supplemental material is available for this article. © RSNA, 2020.
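The zero-filled images used as CNN training inputs in both studies above can be illustrated on a Cartesian grid: unsampled k-space locations are set to zero before an inverse FFT. This is a hypothetical sketch; the papers use non-Cartesian acquisitions, which would first require gridding onto a Cartesian grid.

```python
import numpy as np

def zero_filled_recon(kspace, mask):
    """Inverse-FFT reconstruction with unsampled k-space set to zero.

    kspace : complex ndarray, centered Cartesian k-space (illustrative;
             non-Cartesian data would be gridded first).
    mask   : boolean ndarray, True where k-space was actually sampled.
    """
    undersampled = np.where(mask, kspace, 0)
    return np.fft.ifft2(np.fft.ifftshift(undersampled))
```

With a fully sampled mask this recovers the original image exactly; undersampling introduces the aliasing artifacts that the CNN is trained to remove, with the CS reconstruction serving as the target.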

3.
IEEE Trans Pattern Anal Mach Intell ; 36(10): 1909-21, 2014 Oct.
Article in English | MEDLINE | ID: mdl-26352624

ABSTRACT

Over the last decade, a number of computational imaging (CI) systems have been proposed for tasks such as motion deblurring, defocus deblurring and multispectral imaging. These techniques increase the amount of light reaching the sensor via multiplexing and then undo the deleterious effects of multiplexing by appropriate reconstruction algorithms. Given the widespread appeal and the considerable enthusiasm generated by these techniques, a detailed performance analysis of the benefits conferred by this approach is important. Unfortunately, a detailed analysis of CI has proven to be a challenging problem because performance depends equally on three components: (1) the optical multiplexing, (2) the noise characteristics of the sensor, and (3) the reconstruction algorithm, which typically uses signal priors. A few recent papers [12], [30], [49] have performed analysis taking multiplexing and noise characteristics into account. However, analysis of CI systems under state-of-the-art reconstruction algorithms, most of which exploit signal prior models, has proven to be unwieldy. In this paper, we present a comprehensive analysis framework incorporating all three components. In order to perform this analysis, we model the signal priors using a Gaussian Mixture Model (GMM). A GMM prior confers two unique characteristics. First, a GMM satisfies the universal approximation property, which states that any prior density function can be approximated to any fidelity using a GMM with an appropriate number of mixture components. Second, a GMM prior lends itself to analytical tractability, allowing us to derive simple expressions for the 'minimum mean square error' (MMSE), which we use as a metric to characterize the performance of CI systems.
We use our framework to analyze several previously proposed CI techniques (focal sweep, flutter shutter, parabolic exposure, etc.), giving a conclusive answer to the question: 'How much performance gain is due to the use of a signal prior, and how much is due to multiplexing?' Our analysis also clearly shows that multiplexing provides significant performance gains above and beyond the gains obtained from the use of signal priors.
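The analytical tractability of a GMM prior can be illustrated in the simplest scalar case, y = x + n with x drawn from a GMM and Gaussian noise n: the posterior is again a GMM, so the MMSE estimator E[x | y] is a responsibility-weighted sum of per-component Wiener (linear MMSE) estimates. This sketch is only the scalar special case, not the paper's full analysis of linear multiplexing matrices.

```python
import numpy as np

def gmm_mmse_estimate(y, weights, means, variances, noise_var):
    """MMSE estimate E[x | y] for y = x + n, x ~ GMM, n ~ N(0, noise_var)."""
    weights = np.asarray(weights, float)
    means = np.asarray(means, float)
    variances = np.asarray(variances, float)
    # Evidence of each component: y ~ N(mean_k, var_k + noise_var)
    total_var = variances + noise_var
    lik = np.exp(-0.5 * (y - means) ** 2 / total_var) / np.sqrt(2 * np.pi * total_var)
    resp = weights * lik          # posterior responsibilities (unnormalized)
    resp /= resp.sum()
    # Per-component posterior mean: Wiener shrinkage toward the component mean
    post_means = means + variances / total_var * (y - means)
    return float(resp @ post_means)
```

With a single zero-mean, unit-variance component and unit noise variance this reduces to the familiar Wiener estimate y/2; for a vector measurement y = Hx + n the per-component estimates become matrix Wiener filters, which is what makes the MMSE expressions in the paper's framework tractable.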

4.
J Opt Soc Am A Opt Image Sci Vis ; 28(12): 2540-53, 2011 Dec 01.
Article in English | MEDLINE | ID: mdl-22193267

ABSTRACT

The resolution of a camera system determines the fidelity of visual features in captured images. Higher resolution implies greater fidelity and, thus, greater accuracy when performing automated vision tasks, such as object detection, recognition, and tracking. However, the resolution of any camera is fundamentally limited by geometric aberrations. In the past, it has generally been accepted that the resolution of lenses with geometric aberrations cannot be increased beyond a certain threshold. We derive an analytic scaling law showing that, for lenses with spherical aberrations, resolution can be increased beyond the aberration limit by applying a postcapture deblurring step. We then show that resolution can be further increased when image priors are introduced. Based on our analysis, we advocate for computational camera designs consisting of a spherical lens shared by several small planar sensors. We show example images captured with a proof-of-concept gigapixel camera, demonstrating that high resolution can be achieved with a compact form factor and low complexity. We conclude with an analysis on the trade-off between performance and complexity for computational imaging systems with spherical lenses.
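The postcapture deblurring step the analysis relies on can be sketched as a frequency-domain Wiener deconvolution. This is an illustrative stand-in, not the authors' method; the image prior enters only through the assumed constant noise-to-signal ratio `nsr`.

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution of an aberration blur.

    blurred : real 2-D image blurred (circularly) by the point spread function.
    psf     : point spread function, padded/cropped to the image shape.
    nsr     : assumed noise-to-signal power ratio (regularization).
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

For a known blur kernel and low noise, the deblurred image is substantially closer to the original than the blurred input, which is the mechanism by which resolution can be pushed beyond the aberration limit.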

5.
Appl Opt ; 46(8): 1244-50, 2007 Mar 10.
Article in English | MEDLINE | ID: mdl-17318244

ABSTRACT

Volumetric 3D displays are frequently purported to lack the ability to reconstruct scenes with viewer-position-dependent effects such as occlusion. To counter these claims, a swept-screen 198-view horizontal-parallax-only 3D display is reported here that is capable of viewer-position-dependent effects. A digital projector illuminates a rotating vertical diffuser with a series of multiperspective 768 x 768 pixel renderings of a 3D scene. Evidence of near-far object occlusion is reported. The aggregate virtual screen surface for a stationary observer is described, as are guidelines to construct a full-parallax system and the theoretical ability of the present system to project imagery outside of the volume swept by the screen.
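The view geometry of such a display can be sketched by mapping a viewer's horizontal angle around the rotation axis to one of the 198 multiperspective renderings. This assumes, hypothetically, that the views are spaced uniformly over a full 360° revolution of the screen; the actual system's angular layout may differ.

```python
NUM_VIEWS = 198  # horizontal-parallax-only views per screen revolution

def view_index(azimuth_deg):
    """Index of the multiperspective rendering seen from a given horizontal
    viewing angle, assuming uniform angular spacing over 360 degrees."""
    spacing = 360.0 / NUM_VIEWS          # about 1.82 degrees between views
    return int(round((azimuth_deg % 360.0) / spacing)) % NUM_VIEWS
```

Under this uniform-spacing assumption, adjacent views are separated by roughly 1.8°, which is what produces distinct occlusion relationships for viewers at different positions.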
