Results 1 - 16 of 16
1.
Placenta; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39153938

ABSTRACT

Fetal growth restriction (FGR), defined as in utero fetal growth below the 10th percentile, is the leading cause of perinatal mortality. Insufficient exchange of oxygen and nutrients at the maternal-fetal interface is associated with FGR. This transport occurs through the vasculature of the placenta, particularly in the terminal villi, where the vascular membranes have a large surface area and are at their thinnest. Altered structure of the placental villi is thought to contribute to decreased oxygen-exchange efficiency; however, understanding how the three-dimensional microstructure and properties decrease this efficiency remains a challenge. Here, a novel multiscale workflow is presented to quantify patient-specific biophysical properties, 3D structural features, and blood flow of the villous tissue. Specifically, nanoindentation, optical coherence tomography, and ultrasound imaging were employed to measure the time-dependent material properties of placental tissue, the 3D structure of villous tissue, and blood flow through the villi, characterizing the microvasculature of the placenta at increasing length scales. Quantifying the biophysical properties, the 3D architecture, and the blood flow in the villous tissue can be used to infer changes in maternal-fetal oxygen transport at the villous membrane. Overall, this multiscale understanding will advance knowledge of how microvascular changes in the placenta ultimately lead to FGR, opening opportunities for diagnosis and intervention.

2.
Magn Reson Med; 92(3): 1048-1063, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38725383

ABSTRACT

PURPOSE: To introduce a novel deep model-based architecture (DMBA), SPICER, that uses pairs of noisy and undersampled k-space measurements of the same object to jointly train a model for MRI reconstruction and automatic coil sensitivity estimation. METHODS: SPICER consists of two modules that simultaneously reconstruct accurate MR images and estimate high-quality coil sensitivity maps (CSMs). The first module, the CSM estimation module, uses a convolutional neural network (CNN) to estimate CSMs from the raw measurements. The second module, the DMBA-based MRI reconstruction module, forms reconstructed images from the input measurements and the estimated CSMs using both the physical measurement model and a learned CNN prior. With the benefit of our self-supervised learning strategy, SPICER can be efficiently trained without any fully sampled reference data. RESULTS: We validate SPICER on both open-access datasets and experimentally collected data, showing that it can achieve state-of-the-art performance in highly accelerated data-acquisition settings (up to 10×). Our results also highlight the importance of the different modules of SPICER (the DMBA, the CSM estimation, and the SPICER training loss) for the final performance of the method. Moreover, SPICER can estimate better CSMs than pre-estimation methods, especially when the ACS data are limited. CONCLUSION: Despite being trained on noisy undersampled data, SPICER can reconstruct high-quality images and CSMs in highly undersampled settings, outperforming other self-supervised learning methods and matching the performance of the well-known E2E-VarNet trained on fully sampled ground-truth data.
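
The pairing idea at the heart of SPICER's self-supervised training can be illustrated with a short sketch: reconstruct an image from one noisy, undersampled measurement, build the multicoil forward model from the estimated CSMs, and penalize disagreement with the second measurement of the same object (and vice versa). The PyTorch fragment below is a minimal illustration under assumed tensor shapes; recon_net, csm_net, and the sampling masks are toy placeholders, not the authors' architecture or loss.

```python
import torch
import torch.fft as fft

def forward_model(x, csms, mask):
    """A(x) = mask * FFT2(csm * x), per coil.
    x: (H, W) complex image; csms: (C, H, W) complex; mask: (H, W) in {0, 1}."""
    coil_images = csms * x
    kspace = fft.fftshift(fft.fft2(fft.ifftshift(coil_images, dim=(-2, -1))),
                          dim=(-2, -1))
    return mask * kspace

def paired_self_supervised_loss(recon_net, csm_net, y1, y2, mask1, mask2):
    """Reconstruct from one noisy undersampled measurement (with CSMs estimated
    from it) and enforce k-space agreement with the held-out second measurement
    of the same object, then swap roles. No fully sampled reference is used."""
    x1, csm1 = recon_net(y1, mask1), csm_net(y1)
    x2, csm2 = recon_net(y2, mask2), csm_net(y2)
    loss_12 = torch.mean(torch.abs(forward_model(x1, csm1, mask2) - y2) ** 2)
    loss_21 = torch.mean(torch.abs(forward_model(x2, csm2, mask1) - y1) ** 2)
    return loss_12 + loss_21

# Toy demo with placeholder "networks" (simple callables), 4 coils, 64x64.
C, H, W = 4, 64, 64
y1 = torch.randn(C, H, W, dtype=torch.complex64)
y2 = torch.randn(C, H, W, dtype=torch.complex64)
mask1 = (torch.rand(H, W) < 0.25).float()
mask2 = (torch.rand(H, W) < 0.25).float()
recon_net = lambda y, m: torch.sum(y, dim=0)      # stand-in for the DMBA module
csm_net = lambda y: torch.ones(C, H, W, dtype=torch.complex64) / C ** 0.5
print(paired_self_supervised_loss(recon_net, csm_net, y1, y2, mask1, mask2))
```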


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neural Networks, Computer; Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Brain/diagnostic imaging; Deep Learning; Phantoms, Imaging
3.
Med Phys; 50(10): 6163-6176, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37184305

ABSTRACT

BACKGROUND: MRI has a rapidly growing role in radiation therapy (RT) for treatment planning, real-time image guidance, and beam gating (e.g., MRI-Linac). Free-breathing 4D-MRI is desirable for respiratory motion management during therapy. Moreover, high-quality 3D-MRIs without motion artifacts are needed to delineate lesions. Existing MRI methods require multiple scans with lengthy acquisition times or are limited by low spatial resolution, contrast, and signal-to-noise ratio. PURPOSE: We developed a novel method to obtain motion-resolved 4D-MRIs and a motion-integrated 3D-MRI reconstruction using a single rapid scan (35-45 s) on a 0.35 T MRI-Linac. METHODS: Golden-angle radial stack-of-stars MRI scans were acquired from a respiratory motion phantom and 12 healthy volunteers on a 0.35 T MRI-Linac. A self-navigated method was employed to detect respiratory motion using 2000 spokes (acquisition time = 5-7 min) and the first 200 spokes (acquisition time = 35-45 s). Multi-coil non-uniform fast Fourier transform (MCNUFFT), compressed sensing (CS), and deep-learning Phase2Phase (P2P) methods were employed to reconstruct motion-resolved 4D-MRI using 2000 spokes (MCNUFFT2000) and 200 spokes (CS200 and P2P200). Deformable motion vector fields (MVFs) were computed from the 4D-MRIs and used to reconstruct motion-corrected 3D-MRIs with the MOtion Transformation Integrated forward-Fourier (MOTIF) method. Image quality was evaluated quantitatively using the structural similarity index measure (SSIM) and the root-mean-square error (RMSE), and qualitatively in a blinded radiological review. RESULTS: The respiratory motion phantom experiment showed that the proposed method reversed the effects of motion blurring and restored edge sharpness. In the human study, P2P200 had smaller errors in MVF estimation than CS200. P2P200 had significantly greater SSIMs (p < 0.0001) and smaller RMSEs (p < 0.001) than CS200 in both motion-resolved 4D-MRI and motion-corrected 3D-MRI. The radiological review found that MOTIF 3D-MRIs using MCNUFFT2000 exhibited the highest image quality (scoring > 8 out of 10), followed by P2P200 (scoring > 5 out of 10) and then the motion-uncorrected images (scoring < 3 out of 10), in sharpness, contrast, and freedom from artifacts. CONCLUSIONS: We have successfully demonstrated a method for respiratory motion management in MRI-guided RT. The method integrates self-navigated respiratory motion detection, deep-learning P2P 4D-MRI reconstruction, and motion-integrated reconstruction (MOTIF) for 3D-MRI using a single rapid MRI scan (35-45 s) on a 0.35 T MRI-Linac system.
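
The self-navigation and respiratory-binning step can be sketched as follows: extract a respiratory surrogate from the k-space center of each radial spoke and sort the spokes into respiratory phases before reconstruction. This NumPy sketch uses simulated spokes; the surrogate extraction and the amplitude binning shown here are generic choices assumed for illustration, not necessarily the exact procedure used on the 0.35 T MRI-Linac.

```python
import numpy as np

def respiratory_surrogate(spokes):
    """Self-navigation signal: magnitude of the k-space center of each spoke,
    smoothed over time. spokes: (n_spokes, n_readout) complex."""
    center = np.abs(spokes[:, spokes.shape[1] // 2])
    kernel = np.ones(11) / 11.0
    return np.convolve(center, kernel, mode="same")

def bin_spokes_by_phase(surrogate, n_phases=10):
    """Amplitude binning: assign each spoke to one of n_phases respiratory bins."""
    edges = np.quantile(surrogate, np.linspace(0, 1, n_phases + 1))
    bins = np.clip(np.digitize(surrogate, edges[1:-1]), 0, n_phases - 1)
    return [np.where(bins == p)[0] for p in range(n_phases)]

# Simulate 2000 golden-angle spokes whose k-space center is modulated by breathing.
n_spokes, n_readout = 2000, 256
t = np.arange(n_spokes) * 0.005                       # ~5 ms per spoke
breathing = 1.0 + 0.3 * np.sin(2 * np.pi * 0.25 * t)  # ~15 breaths/min
spokes = (breathing[:, None] *
          (np.random.randn(n_spokes, n_readout) + 1j * np.random.randn(n_spokes, n_readout)))
spokes[:, n_readout // 2] = breathing * 100.0         # strong DC term carries the motion

phase_indices = bin_spokes_by_phase(respiratory_surrogate(spokes))
print([len(idx) for idx in phase_indices])            # ~200 spokes per respiratory phase
```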


Subject(s)
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Humans; Imaging, Three-Dimensional/methods; Motion; Magnetic Resonance Imaging/methods; Respiration; Phantoms, Imaging
4.
Med Phys; 50(2): 808-820, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36412165

ABSTRACT

BACKGROUND: Motion-compensated (MoCo) reconstruction shows great promise in improving four-dimensional cone-beam computed tomography (4D-CBCT) image quality. MoCo reconstruction of 4D-CBCT can be more accurate when it uses motion information obtained at the time of CBCT imaging rather than from a prior 4D-CT scan. However, such data-driven approaches are hampered by the quality of the initial 4D-CBCT images used for motion modeling. PURPOSE: This study aims to develop a deep-learning method to generate high-quality motion models for MoCo reconstruction and thereby improve the quality of the final 4D-CBCT images. METHODS: A 3D artifact-reduction convolutional neural network (CNN) was proposed to improve conventional phase-correlated Feldkamp-Davis-Kress (PCF) reconstructions by reducing undersampling-induced streaking artifacts while maintaining motion information. The CNN-generated, artifact-mitigated 4D-CBCT images (CNN-enhanced) were then used to build a motion model that was used in MoCo reconstruction (CNN+MoCo). The proposed procedure was evaluated using in vivo patient datasets, an extended cardiac-torso (XCAT) phantom, and the public SPARE challenge datasets. The quality of the reconstructed images for the XCAT phantom and SPARE datasets was quantitatively assessed using the root-mean-square error (RMSE) and the normalized cross-correlation (NCC). RESULTS: The trained CNN effectively reduced the streaking artifacts of PCF CBCT images for all datasets. More detailed structures can be recovered using the proposed CNN+MoCo reconstruction procedure. The XCAT phantom experiments showed that the accuracy of the motion model estimated from CNN-enhanced images was greatly improved over that estimated from PCF images. CNN+MoCo showed lower RMSE and higher NCC compared with PCF, CNN-enhanced, and conventional MoCo. For the SPARE datasets, the average (± standard deviation) RMSE in mm⁻¹ for the body region of PCF, CNN-enhanced, conventional MoCo, and CNN+MoCo was 0.0040 ± 0.0009, 0.0029 ± 0.0002, 0.0024 ± 0.0003, and 0.0021 ± 0.0003, respectively. The corresponding NCCs were 0.84 ± 0.05, 0.91 ± 0.05, 0.91 ± 0.05, and 0.93 ± 0.04. CONCLUSIONS: CNN-based artifact reduction can substantially reduce the artifacts in the initial 4D-CBCT images. The improved images can be used to enhance motion modeling and ultimately improve the quality of the final 4D-CBCT images reconstructed using MoCo.
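
Because the quantitative comparison above rests on RMSE and NCC, a short sketch of the two metrics may be useful; this is a generic implementation under their usual definitions, not code from the study.

```python
import numpy as np

def rmse(img, ref):
    """Root-mean-square error between a reconstruction and its reference."""
    return np.sqrt(np.mean((np.asarray(img, dtype=float) - np.asarray(ref, dtype=float)) ** 2))

def ncc(img, ref):
    """Normalized cross-correlation (zero-mean, unit-norm correlation)."""
    a = np.asarray(img, dtype=float).ravel()
    b = np.asarray(ref, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy check: a noisy copy of a phantom stays highly correlated with the original.
ref = np.zeros((64, 64)); ref[16:48, 16:48] = 1.0
noisy = ref + 0.05 * np.random.randn(64, 64)
print(rmse(noisy, ref), ncc(noisy, ref))
```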


Subject(s)
Deep Learning; Lung Neoplasms; Spiral Cone-Beam Computed Tomography; Humans; Four-Dimensional Computed Tomography/methods; Cone-Beam Computed Tomography/methods; Motion; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Algorithms
5.
NMR Biomed; 36(5): e4883, 2023 May.
Article in English | MEDLINE | ID: mdl-36442839

ABSTRACT

The purpose of the current study was to introduce a Deep learning-based Accelerated and Noise-Suppressed Estimation (DANSE) method for reconstructing quantitative maps of the biological-tissue cellular-specific (R2t*) and hemodynamic-specific (R2') metrics of quantitative gradient-recalled echo (qGRE) MRI. The DANSE method adapts a supervised learning paradigm to train a convolutional neural network for robust estimation of R2t* and R2' maps, with significantly reduced sensitivity to noise and to the adverse effects of macroscopic (B0) magnetic field inhomogeneities, directly from gradient-recalled echo (GRE) magnitude images. The R2t* and R2' maps for training were generated by means of a voxel-by-voxel fitting of a previously developed biophysical qGRE model, which accounts for tissue, hemodynamic, and B0-inhomogeneity contributions to the multi-gradient-echo GRE signal, using a nonlinear least squares (NLLS) algorithm. We show that the DANSE model efficiently estimates the aforementioned qGRE maps and preserves all the features of the NLLS approach, with significant improvements including noise suppression and computation speed (from many hours to seconds). The noise-suppression feature of DANSE is especially prominent for data with low signal-to-noise ratio (SNR ~ 50-100), where DANSE-generated R2t* and R2' maps had up to three times smaller errors than those of the NLLS method. The DANSE method enables fast reconstruction of qGRE maps with significantly reduced sensitivity to noise and magnetic field inhomogeneities, and it does not require any information about field inhomogeneities during application. It exploits spatial and echo-time-dependent patterns in the GRE data together with prior knowledge from the biophysical model, thus producing high-quality qGRE maps even in environments with high noise levels. These features, along with fast computational speed, can lead to broad clinical and research applications of qGRE.
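
The NLLS fitting used to generate the training targets can be illustrated on a simplified signal model. The sketch below fits a mono-exponential decay S(TE) = S0·exp(-R2*·TE) voxel by voxel with SciPy; the actual qGRE model additionally separates R2t* from R2' and includes the B0-inhomogeneity F-function, so this is only a toy stand-in for the target-generation step.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, r2star):
    """Simplified gradient-echo magnitude decay: S(TE) = S0 * exp(-R2* * TE)."""
    return s0 * np.exp(-r2star * te)

def fit_r2star_map(magnitudes, te):
    """Voxel-wise NLLS fit. magnitudes: (n_echoes, H, W); te in seconds."""
    n_echoes, H, W = magnitudes.shape
    r2star = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            sig = magnitudes[:, i, j]
            try:
                popt, _ = curve_fit(mono_exp, te, sig, p0=[sig[0], 20.0], maxfev=200)
                r2star[i, j] = popt[1]
            except RuntimeError:
                r2star[i, j] = np.nan        # fit failed (e.g., pure-noise voxel)
    return r2star

# Toy example: 10 echoes, TE = 4-40 ms, ground-truth R2* = 25 s^-1, plus noise.
te = np.linspace(0.004, 0.040, 10)
truth = 25.0 * np.ones((8, 8))
mags = mono_exp(te[:, None, None], 100.0, truth[None]) + np.random.randn(10, 8, 8)
print(np.nanmean(fit_r2star_map(mags, te)))  # close to 25 s^-1
```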


Subject(s)
Deep Learning; Humans; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio; Hemodynamics
6.
IEEE Trans Med Imaging; 41(9): 2371-2384, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35344490

ABSTRACT

Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to ground truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses the issue by proposing the deformation-compensated learning (DeCoLearn) method, which trains deep reconstruction networks while compensating for object deformations. A key component of DeCoLearn is a deep registration module that is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
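
The core idea, jointly training a registration module so that the reconstruction of one measurement, once warped, stays consistent with the other measurement, can be written as a loss function. The PyTorch sketch below is schematic: warp uses grid_sample with a dense displacement field, the forward operators are passed in as callables, and recon_net/reg_net are placeholders rather than the DeCoLearn architecture (which also handles complex multicoil MRI data).

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp a batch of images with a dense displacement field (in pixels).
    image: (B, 1, H, W); flow: (B, 2, H, W) holding (dx, dy) displacements."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    grid_x = (xs[None] + flow[:, 0]) / (W - 1) * 2 - 1   # normalize to [-1, 1]
    grid_y = (ys[None] + flow[:, 1]) / (H - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)          # (B, H, W, 2), (x, y) order
    return F.grid_sample(image, grid, align_corners=True)

def deformation_compensated_loss(recon_net, reg_net, y1, y2, A1, A2):
    """Reconstruct from y1, estimate the deformation toward the state seen in y2,
    warp, and enforce consistency with y2 through its forward operator A2
    (and symmetrically for y2 -> y1). No ground-truth images are used."""
    x1, x2 = recon_net(y1), recon_net(y2)
    flow12 = reg_net(x1, x2)                               # deformation x1 -> x2 frame
    flow21 = reg_net(x2, x1)
    return (torch.mean((A2(warp(x1, flow12)) - y2) ** 2) +
            torch.mean((A1(warp(x2, flow21)) - y1) ** 2))

# Minimal run with identity operators and zero-flow "networks" as placeholders.
y1, y2 = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
recon_net = lambda y: y                                    # stand-in reconstruction
reg_net = lambda a, b: torch.zeros(1, 2, 32, 32)           # stand-in registration
identity = lambda x: x
print(deformation_compensated_loss(recon_net, reg_net, y1, y2, identity, identity))
```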


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
7.
Magn Reson Med; 88(2): 676-690, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35344592

ABSTRACT

PURPOSE: We evaluated the impact of PET respiratory motion correction (MoCo) in a phantom and in patients. Moreover, we proposed and examined a PET MoCo approach that uses motion vector fields (MVFs) from a deep-learning-reconstructed short MRI scan. METHODS: PET MoCo was evaluated in a respiratory motion phantom study with varying lesion sizes and tumor-to-background ratios (TBRs), using a static scan as the ground truth. MRI-based MVFs were derived either from 2000 spokes (MoCo2000, 5-6 min acquisition time) using a Fourier transform reconstruction or from 200 spokes (MoCoP2P200, 30-40 s acquisition time) using a deep-learning Phase2Phase (P2P) reconstruction, and were then incorporated into the PET MoCo reconstruction. For six patients with hepatic lesions, the performance of PET MoCo was evaluated using quantitative metrics (SUVmax, SUVpeak, SUVmean, lesion volume) and a blinded radiological review of lesion conspicuity. RESULTS: The MRI-assisted PET MoCo methods provided results similar to static scans across most lesions with varying TBRs in the phantom. Both MoCo2000 and MoCoP2P200 PET images had significantly higher SUVmax, SUVpeak, and SUVmean and significantly lower lesion volume than non-motion-corrected (non-MoCo) PET images. There was no statistically significant difference between MoCo2000 and MoCoP2P200 PET images in SUVmax, SUVpeak, SUVmean, or lesion volume. Both radiological reviewers found that MoCo2000 and MoCoP2P200 PET significantly improved lesion conspicuity. CONCLUSION: An MRI-assisted PET MoCo method was evaluated against a static ground truth in a phantom study. In patients with hepatic lesions, PET MoCo images improved quantitative and qualitative metrics based on only 30-40 s of MRI motion-modeling data.
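
A motion vector field is applied to an image by resampling it at displaced coordinates; the short SciPy sketch below shows that warping step in 2D with map_coordinates. It is a generic illustration of how MVFs can be used, not the study's reconstruction, where the MVFs enter the PET system model rather than being applied to an already reconstructed image.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_mvf(image, mvf):
    """Warp a 2D image with a motion vector field.
    mvf[0], mvf[1]: per-pixel displacements (rows, cols) in pixels."""
    rows, cols = np.meshgrid(np.arange(image.shape[0]),
                             np.arange(image.shape[1]), indexing="ij")
    coords = np.stack([rows + mvf[0], cols + mvf[1]])
    return map_coordinates(image, coords, order=1, mode="nearest")

# Toy example: shift a hot "lesion" down by 3 pixels with a constant MVF.
img = np.zeros((64, 64)); img[20:26, 30:36] = 10.0
mvf = np.stack([-3.0 * np.ones_like(img), np.zeros_like(img)])
warped = apply_mvf(img, mvf)
print(np.unravel_index(np.argmax(img), img.shape),
      np.unravel_index(np.argmax(warped), warped.shape))
```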


Subject(s)
Deep Learning; Positron-Emission Tomography; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Motion; Positron-Emission Tomography/methods
8.
Magn Reson Med; 88(1): 106-119, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35257400

ABSTRACT

PURPOSE: To introduce two novel learning-based motion artifact removal networks (LEARN) for the estimation of quantitative motion- and B0-inhomogeneity-corrected R2* maps from motion-corrupted multi-gradient-recalled echo (mGRE) MRI data. METHODS: We train two convolutional neural networks (CNNs) to correct motion artifacts for high-quality estimation of quantitative B0-inhomogeneity-corrected R2* maps from mGRE sequences. The first CNN, LEARN-IMG, performs motion correction on complex mGRE images to enable the subsequent computation of high-quality, motion-free quantitative R2* (and any other mGRE-enabled) maps using standard voxel-wise analysis or machine-learning-based analysis. The second CNN, LEARN-BIO, is trained to directly generate motion- and B0-inhomogeneity-corrected quantitative R2* maps from motion-corrupted, magnitude-only mGRE images by taking advantage of the biophysical model describing the mGRE signal decay. RESULTS: We show that both CNNs, trained on synthetic MR images, are capable of suppressing motion artifacts while preserving details in the predicted quantitative R2* maps. Our trained models also achieved significant reduction of motion artifacts on experimental in vivo motion-corrupted data. CONCLUSION: Both LEARN-IMG and LEARN-BIO can enable the computation of high-quality motion- and B0-inhomogeneity-corrected R2* maps. LEARN-IMG performs motion correction on mGRE images and relies on subsequent analysis for the estimation of R2* maps, whereas LEARN-BIO directly performs motion- and B0-inhomogeneity-corrected R2* estimation. Both LEARN-IMG and LEARN-BIO jointly process all the available gradient echoes, which enables them to exploit spatial patterns available in the data. The high computational speed of LEARN-BIO is an advantage that can lead to broader clinical application.
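
Training on synthetic motion-corrupted images requires a corruption model. A common, simple choice, assumed here purely for illustration, is a rigid in-plane translation during part of the acquisition, which multiplies the affected k-space lines by a linear phase; the NumPy sketch below implements that and is not necessarily the corruption model used to train LEARN.

```python
import numpy as np

def simulate_translation_corruption(image, corrupted_rows, shift_pixels):
    """Corrupt k-space of a 2D image as if the object translated by
    `shift_pixels` (dy, dx) while the phase-encode lines in `corrupted_rows`
    were acquired. A translation in image space is a linear phase in k-space."""
    H, W = image.shape
    k = np.fft.fftshift(np.fft.fft2(image))
    ky = np.fft.fftshift(np.fft.fftfreq(H))[:, None]      # cycles/pixel
    kx = np.fft.fftshift(np.fft.fftfreq(W))[None, :]
    phase = np.exp(-2j * np.pi * (ky * shift_pixels[0] + kx * shift_pixels[1]))
    k_corrupt = k.copy()
    k_corrupt[corrupted_rows, :] = (k * phase)[corrupted_rows, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))

# Corrupt the second half of the phase-encode lines with a 4-pixel shift.
clean = np.zeros((128, 128)); clean[40:90, 50:80] = 1.0
corrupted = simulate_translation_corruption(clean, np.arange(64, 128), (4.0, 0.0))
print(float(np.abs(corrupted - clean).mean()))            # nonzero: ghosting artifacts
```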


Asunto(s)
Artefactos , Procesamiento de Imagen Asistido por Computador , Procesamiento de Imagen Asistido por Computador/métodos , Imagen por Resonancia Magnética/métodos , Movimiento (Física) , Redes Neurales de la Computación
9.
J Mech Behav Biomed Mater; 126: 105046, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34953435

ABSTRACT

Artificial neural networks (ANNs), established tools in machine learning, are applied to the problem of estimating the parameters of a transversely isotropic (TI) material model from magnetic resonance elastography (MRE) and diffusion tensor imaging (DTI) data. We use neural networks to estimate parameters from experimental measurements of ultrasound-induced shear waves after training on analogous data from simulations of a computer model with similar loading, geometry, and boundary conditions. Strain ratios and shear-wave speeds (from MRE) and fiber direction (the direction of maximum diffusivity from DTI) are used as inputs to neural networks trained to estimate the parameters of a TI material (baseline shear modulus µ, shear anisotropy φ, and tensile anisotropy ζ). Ensembles of neural networks are applied to obtain distributions of parameter estimates. The robustness of this approach is assessed by quantifying the sensitivity of the property estimates to modeling assumptions (such as the assumed loss factor) and to choices in fitting (such as the size of the neural network). This study demonstrates the successful application of simulation-trained neural networks to estimate anisotropic material parameters from complementary MRE and DTI imaging data.
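
The simulation-trained ensemble idea can be sketched with scikit-learn: train several small networks on (feature, parameter) pairs from simulations, then read the spread of their predictions as a distribution of estimates. The simulated mapping below is a made-up placeholder for the finite-element data; only the ensemble mechanics are meant to carry over.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder "simulation": random TI parameters (mu, phi, zeta) mapped to toy
# wave-derived features. In the actual workflow the features (strain ratios,
# shear-wave speeds, fiber direction) come from finite-element simulations.
n = 2000
params = np.column_stack([rng.uniform(1.0, 5.0, n),     # baseline shear modulus (kPa)
                          rng.uniform(0.0, 1.0, n),     # shear anisotropy phi
                          rng.uniform(0.0, 1.0, n)])    # tensile anisotropy zeta
features = np.column_stack([np.sqrt(params[:, 0]) * (1 + 0.5 * params[:, 1]),
                            np.sqrt(params[:, 0]) * (1 + 0.5 * params[:, 2]),
                            params[:, 1] / (1 + params[:, 2])])
features += 0.02 * rng.standard_normal(features.shape)  # measurement noise

# Ensemble of networks -> a distribution of parameter estimates per measurement.
ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=seed).fit(features[:1500], params[:1500])
            for seed in range(5)]
preds = np.stack([net.predict(features[1500:]) for net in ensemble])
print("mean estimate:", preds.mean(axis=0)[0], "spread:", preds.std(axis=0)[0])
```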


Asunto(s)
Imagen de Difusión Tensora , Diagnóstico por Imagen de Elasticidad , Anisotropía , Simulación por Computador , Elasticidad , Redes Neurales de la Computación
10.
Invest Radiol; 56(12): 809-819, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34038064

ABSTRACT

OBJECTIVES: Respiratory binning of free-breathing magnetic resonance imaging data reduces motion blurring; however, it exacerbates noise and introduces severe artifacts due to undersampling. Deep neural networks can remove artifacts and noise but usually require high-quality ground-truth images for training. This study aimed to develop a network that can be trained without this requirement. MATERIALS AND METHODS: This retrospective study was conducted on 33 participants enrolled between November 2016 and June 2019. Free-breathing magnetic resonance imaging was performed using a radial acquisition. Self-navigation was used to bin the k-space data into 10 respiratory phases. To simulate short acquisitions, subsets of radial spokes were used to reconstruct images with multicoil nonuniform fast Fourier transform (MCNUFFT), compressed sensing (CS), and two deep learning methods: UNet3DPhase and Phase2Phase (P2P). UNet3DPhase was trained using a high-quality ground truth, whereas P2P was trained using noisy images with streaking artifacts. Two radiologists blinded to the reconstruction methods independently reviewed the sharpness, contrast, and artifact-freeness of the end-expiration images reconstructed from data collected at 16% of the Nyquist sampling rate. The generalized estimating equation method was used for statistical comparison. Motion vector fields were derived to examine the respiratory motion range of 4-dimensional images reconstructed using the different methods. RESULTS: A total of 15 healthy participants and 18 patients with hepatic malignancy (50 ± 15 years, 6 women) were enrolled. Both reviewers found that the UNet3DPhase and P2P images had higher contrast (P < 0.01) and fewer artifacts (P < 0.01) than the CS images. The UNet3DPhase and P2P images were reported to be sharper than the CS images by one reviewer (P < 0.01) but not by the other reviewer (P = 0.22, P = 0.18). UNet3DPhase and P2P were similar in sharpness and contrast, whereas UNet3DPhase had fewer artifacts (P < 0.01). The motion vector lengths for the MCNUFFT800 and P2P800 images were comparable (10.5 ± 4.2 mm and 9.9 ± 4.0 mm, respectively), whereas both were significantly larger than those for the CS2000 (7.0 ± 3.9 mm; P < 0.0001) and UNet3DPhase800 (6.9 ± 3.2 mm; P < 0.0001) images. CONCLUSIONS: Without a ground truth, P2P can reconstruct sharp, artifact-free, and high-contrast respiratory motion-resolved images from highly undersampled data. Unlike the CS and UNet3DPhase methods, P2P did not artificially reduce the respiratory motion range.
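
The Phase2Phase idea, using one respiratory phase's noisy, streaky reconstruction as the training target for another phase, can be compressed into a few lines of a training loop. The PyTorch sketch below uses a deliberately tiny CNN and random stand-in data; it illustrates only the pairing strategy, not the actual network or data pipeline.

```python
import torch
import torch.nn as nn

# Tiny stand-in denoising/de-streaking CNN (the real network is much larger).
net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stand-in data: 10 respiratory phases of undersampled (noisy, streaky) images.
# One noisy phase is the input and a neighboring noisy phase is the target;
# no clean ground-truth volume is ever used.
phases = torch.rand(10, 1, 64, 64)

for epoch in range(3):
    for p in range(phases.shape[0]):
        q = (p + 1) % phases.shape[0]        # neighboring respiratory phase
        inp, target = phases[p:p + 1], phases[q:q + 1]
        loss = torch.mean((net(inp) - target) ** 2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
print(float(loss))
```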


Subject(s)
Deep Learning; Artifacts; Female; Humans; Image Processing, Computer-Assisted/methods; Liver; Magnetic Resonance Imaging/methods; Respiration; Retrospective Studies
11.
Magn Reson Med; 84(6): 2932-2942, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32767489

ABSTRACT

PURPOSE: To introduce a novel deep learning method for Robust and Accelerated Reconstruction (RoAR) of quantitative, B0-inhomogeneity-corrected R2* maps from multi-gradient-recalled echo (mGRE) MRI data. METHODS: RoAR trains a convolutional neural network (CNN) to generate quantitative R2* maps free from field-inhomogeneity artifacts by adopting a self-supervised learning strategy given (a) mGRE magnitude images, (b) the biophysical model describing the mGRE signal decay, and (c) a preliminarily evaluated F-function accounting for the contribution of macroscopic B0 field inhomogeneities. Importantly, no ground-truth R2* images are required, and the F-function is needed only during RoAR training, not during application. RESULTS: We show that RoAR preserves all features of R2* maps while offering significant improvements over existing methods in computation speed (seconds vs. hours) and reduced sensitivity to noise. Even for data with SNR = 5, RoAR produced R2* maps with an accuracy of 22%, whereas the accuracy of voxel-wise analysis was 47%. For SNR = 10, the RoAR accuracy improved to 17%, vs. 24% for direct voxel-wise analysis. CONCLUSIONS: RoAR is trained to recognize macroscopic magnetic field inhomogeneities directly from the input magnitude-only mGRE data and to eliminate their effect on R2* measurements. RoAR training is based on the biophysical model and does not require ground-truth R2* maps. Because RoAR utilizes signal information not just from individual voxels but also from the spatial patterns of the signals in the images, it reduces the sensitivity of R2* maps to noise in the data. These features, plus high computational speed, provide significant benefits for the potential use of RoAR in clinical settings.
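
RoAR's self-supervised strategy fits the biophysical model through the network output instead of matching ground-truth maps: predict S0 and R2*, synthesize S(TE) = S0·exp(-R2*·TE)·F(TE), and compare with the measured magnitudes. The PyTorch sketch below writes that loss for a simplified pixel-wise setting in which the "network outputs" are just learnable maps; the data, the F-function values, and the optimization are toy placeholders.

```python
import torch

def model_based_loss(pred_s0, pred_r2s, magnitudes, te, f_function):
    """Self-supervised loss: synthesize S(TE) = S0 * exp(-R2* * TE) * F(TE)
    from the predicted maps and compare with the measured mGRE magnitudes.
    magnitudes, f_function: (n_echoes, H, W); te: (n_echoes,); maps: (H, W)."""
    te = te[:, None, None]
    model = pred_s0[None] * torch.exp(-pred_r2s[None] * te) * f_function
    return torch.mean((model - magnitudes) ** 2)

# Toy data: 10 echoes, TE = 4-40 ms, true R2* = 25 s^-1, mild B0-induced decay F(TE).
te = torch.linspace(0.004, 0.040, 10)
true_r2s = 25.0 * torch.ones(16, 16)
f_function = torch.exp(-5.0 * te)[:, None, None].expand(10, 16, 16)
mags = 100.0 * torch.exp(-true_r2s[None] * te[:, None, None]) * f_function
mags = mags + 0.5 * torch.randn_like(mags)

# Placeholder "network outputs": learnable maps optimized directly by the loss.
s0 = torch.full((16, 16), 90.0, requires_grad=True)
r2s = torch.full((16, 16), 15.0, requires_grad=True)
opt = torch.optim.Adam([s0, r2s], lr=0.5)
for _ in range(500):
    opt.zero_grad()
    loss = model_based_loss(s0, r2s, mags, te, f_function)
    loss.backward()
    opt.step()
print(float(r2s.mean()))   # approaches ~25 s^-1 without any ground-truth R2* map
```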


Asunto(s)
Artefactos , Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador , Imagen por Resonancia Magnética , Redes Neurales de la Computación
12.
Opt Express; 26(11): 14678-14688, 2018 May 28.
Article in English | MEDLINE | ID: mdl-29877404

ABSTRACT

Image reconstruction under multiple light scattering is crucial in a number of applications such as diffraction tomography. The reconstruction problem is often formulated as a nonconvex optimization, where a nonlinear measurement model is used to account for multiple scattering and regularization is used to enforce prior constraints on the object. In this paper, we propose a powerful alternative to this optimization-based view of image reconstruction by designing and training a deep convolutional neural network that can invert multiple scattered measurements to produce a high-quality image of the refractive index. Our results on both simulated and experimental datasets show that the proposed approach is substantially faster and achieves higher imaging quality compared to the state-of-the-art methods based on optimization.

13.
IEEE Trans Image Process; 26(4): 1723-1731, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28129158

ABSTRACT

Multi-modal sensing is becoming increasingly important in a number of applications, providing new capabilities and processing challenges. In this paper, we explore the benefit of combining a low-resolution depth sensor with a high-resolution optical video sensor in order to provide a high-resolution depth map of the scene. We propose a new formulation that is able to incorporate temporal information and exploit the motion of objects in the video to significantly improve the results over existing methods. In particular, our approach exploits the space-time redundancy in the depth and intensity data using motion-adaptive low-rank regularization. We provide experiments to validate our approach and confirm that the quality of the estimated high-resolution depth is improved substantially. Our approach can serve as a first component in systems using vision techniques that rely on high-resolution depth information.
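
The motion-adaptive low-rank regularization can be sketched with the standard singular-value soft-thresholding step: collect depth patches that follow the same motion trajectory into a matrix and shrink its singular values. The NumPy fragment below shows only that proximal step on a toy patch group; in the full method it is embedded in a joint depth-estimation cost, so this is an assumed, simplified illustration.

```python
import numpy as np

def sv_soft_threshold(patch_matrix, tau):
    """Singular-value soft-thresholding: the proximal operator of tau * nuclear
    norm, used as a low-rank penalty on a group of motion-tracked patches.
    patch_matrix: (n_pixels_per_patch, n_patches_in_trajectory)."""
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# Toy group: the "same" 8x8 depth patch tracked over 20 frames, plus noise.
rng = np.random.default_rng(1)
base_patch = rng.random(64)
group = np.tile(base_patch[:, None], (1, 20)) + 0.1 * rng.standard_normal((64, 20))
denoised = sv_soft_threshold(group, tau=2.0)
print(np.linalg.matrix_rank(denoised),
      float(np.abs(denoised - base_patch[:, None]).mean()))
```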

14.
IEEE Trans Image Process; 26(2): 539-548, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27875224

ABSTRACT

Total variation (TV) is one of the most popular regularizers for stabilizing the solution of ill-posed inverse problems. This paper proposes a novel proximal-gradient algorithm for minimizing TV-regularized least-squares cost functionals. Unlike traditional methods that require nested iterations for computing the proximal step of TV, our algorithm approximates the latter with several simple proximals that have closed-form solutions. We theoretically prove that the proposed parallel proximal method achieves the TV solution with arbitrarily high precision at a global rate of convergence equivalent to that of fast proximal-gradient methods. The results in this paper have the potential to enhance the applicability of TV for solving very large-scale imaging inverse problems.
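
For orientation, the sketch below shows the overall TV-regularized least-squares structure that the paper targets, using a plain proximal-gradient loop. Note that the TV proximal here is computed with a few classical dual (Chambolle-style) inner iterations, i.e., exactly the kind of nested scheme the paper replaces with closed-form parallel proximals, so this is a baseline illustration rather than the proposed algorithm.

```python
import numpy as np

def grad2d(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy = np.zeros_like(u); gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div2d(px, py):
    """Discrete divergence, the negative adjoint of grad2d."""
    dx = np.zeros_like(px); dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy = np.zeros_like(py); dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_prox(g, lam, n_iter=30):
    """prox of lam*TV at g, via dual projection iterations (Chambolle-style)."""
    px = np.zeros_like(g); py = np.zeros_like(g); tau = 0.124
    for _ in range(n_iter):
        gx, gy = grad2d(div2d(px, py) - g / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return g - lam * div2d(px, py)

def tv_least_squares(y, A, At, lam, step, n_iter=50):
    """Proximal-gradient for min_x 0.5*||A(x) - y||^2 + lam*TV(x)."""
    x = At(y)
    for _ in range(n_iter):
        x = tv_prox(x - step * At(A(x) - y), lam * step)
    return x

# Toy TV denoising (A = identity): recover a piecewise-constant square from noise.
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
y = truth + 0.2 * np.random.randn(64, 64)
x_hat = tv_least_squares(y, lambda v: v, lambda v: v, lam=0.15, step=1.0)
print(np.abs(x_hat - truth).mean(), np.abs(y - truth).mean())
```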

15.
J Opt Soc Am A Opt Image Sci Vis; 32(6): 1092-1100, 2015 Jun 01.
Article in English | MEDLINE | ID: mdl-26367043

ABSTRACT

We propose a new technique for two-dimensional phase unwrapping. The unwrapped phase is found as the solution of an inverse problem that consists in minimizing an energy functional. The latter includes a weighted data-fidelity term that favors sparsity in the error between the true and wrapped phase differences, as well as a regularizer based on higher-order total variation. One desirable feature of our method is its rotation invariance, which allows it to unwrap a much larger class of images than the state of the art. We demonstrate the effectiveness of our method through several experiments on simulated and real data obtained with a tomographic phase microscope. The proposed method can enhance the applicability and outreach of techniques that rely on quantitative phase evaluation.
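
For context, the sketch below implements the classical unweighted least-squares baseline for 2D phase unwrapping (a Poisson equation solved with the DCT), the kind of approach methods like the one above aim to improve on; it is not the proposed sparsity-promoting, higher-order-TV algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(phase):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * phase))

def unwrap_ls(psi):
    """Unweighted least-squares 2D phase unwrapping: solve a Poisson equation
    whose right-hand side is built from the wrapped differences of the wrapped
    phase, diagonalized by the DCT (Neumann boundary conditions)."""
    M, N = psi.shape
    dx = wrap(np.diff(psi, axis=1)); dy = wrap(np.diff(psi, axis=0))
    rho = np.zeros_like(psi)
    rho[:, :-1] += dx; rho[:, 1:] -= dx      # divergence of wrapped gradients
    rho[:-1, :] += dy; rho[1:, :] -= dy
    rho_hat = dctn(rho, norm="ortho")
    i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    denom = 2 * np.cos(np.pi * i / M) + 2 * np.cos(np.pi * j / N) - 4
    denom[0, 0] = 1.0                        # the constant offset is undetermined
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, norm="ortho")

# Toy example: a smooth ramp exceeding 2*pi is wrapped, then unwrapped.
x, y = np.meshgrid(np.linspace(0, 6 * np.pi, 128), np.linspace(0, 4 * np.pi, 128))
true_phase = x + 0.5 * y
recovered = unwrap_ls(wrap(true_phase))
offset = np.mean(true_phase - recovered)     # recovery is up to a constant
print(np.abs(true_phase - recovered - offset).max())
```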


Subject(s)
Image Processing, Computer-Assisted/methods; Optical Imaging/methods; Algorithms; Signal-To-Noise Ratio
16.
IEEE Trans Image Process; 22(7): 2699-2710, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23549896

ABSTRACT

We present a novel statistically based discretization paradigm and derive a class of maximum a posteriori (MAP) estimators for solving ill-conditioned linear inverse problems. We are guided by the theory of sparse stochastic processes, which specifies continuous-domain signals as solutions of linear stochastic differential equations. Accordingly, we show that the class of admissible priors for the discretized version of the signal is confined to the family of infinitely divisible distributions. Our estimators not only cover the well-studied Tikhonov and l1-type regularizations as particular cases, but also open the door to a broader class of sparsity-promoting regularization schemes that are typically nonconvex. We provide an algorithm that handles the corresponding nonconvex problems and illustrate the use of our formalism by applying it to deconvolution, magnetic resonance imaging, and X-ray tomographic reconstruction problems. Finally, we compare the performance of estimators associated with models of increasing sparsity.
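
The family of regularizers discussed above can be illustrated with a small MAP denoising experiment on a one-dimensional signal with sparse increments: the same quadratic data term is combined with a Tikhonov (quadratic), a smoothed l1, and a nonconvex log potential applied to the finite differences. The sketch below uses plain gradient descent on smoothed potentials purely for illustration; it is not the estimation algorithm developed in the paper.

```python
import numpy as np

def denoise_map(y, potential_grad, lam, step=0.04, n_iter=3000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum(potential(diff(x)))."""
    x = y.copy()
    for _ in range(n_iter):
        g = potential_grad(np.diff(x))
        grad = (x - y) + lam * (np.concatenate(([0.0], g)) -
                                np.concatenate((g, [0.0])))
        x -= step * grad
    return x

eps = 0.01
tikhonov = lambda d: 2.0 * d                       # quadratic potential d^2
l1_smooth = lambda d: d / np.sqrt(d ** 2 + eps)    # smoothed absolute value
log_ncvx = lambda d: 2.0 * d / (d ** 2 + 1.0)      # nonconvex log(1 + d^2)

# Piecewise-constant signal (sparse increments) observed in Gaussian noise.
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(60), 2.0 * np.ones(70), 0.5 * np.ones(70)])
y = truth + 0.3 * rng.standard_normal(truth.size)

for name, pg in [("Tikhonov", tikhonov), ("l1", l1_smooth), ("log", log_ncvx)]:
    x_hat = denoise_map(y, pg, lam=1.0)
    print(name, float(np.abs(x_hat - truth).mean()))
```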


Subject(s)
Image Processing, Computer-Assisted/methods; Linear Models; Signal Processing, Computer-Assisted; Stochastic Processes; Algorithms; Bayes Theorem; Humans; Magnetic Resonance Imaging; Models, Biological; Neurons/physiology; Phantoms, Imaging; Stem Cells/physiology; Tomography, X-Ray Computed