1.
Magn Reson Med ; 91(5): 2028-2043, 2024 May.
Article in English | MEDLINE | ID: mdl-38173304

ABSTRACT

PURPOSE: To develop a framework that jointly estimates rigid motion and polarizing magnetic field (B0) perturbations (δB0) for brain MRI using a single navigator of a few milliseconds in duration, and to additionally allow navigator acquisition at arbitrary timings within any type of sequence to obtain high-temporal-resolution estimates. THEORY AND METHODS: Existing methods match navigator data to a low-resolution single-contrast image (scout) to estimate either motion or δB0. In this work, called QUEEN (QUantitatively Enhanced parameter Estimation from Navigators), we propose combined motion and δB0 estimation from a fast, tailored trajectory with arbitrary-contrast navigator data. To this end, the concept of a quantitative scout (Q-Scout) acquisition is proposed, from which contrast-matched scout data are predicted for each navigator. Finally, the navigator trajectories, contrast-matched scout, and δB0 are integrated into a motion-informed parallel-imaging framework. RESULTS: Simulations and in vivo experiments show that δB0 must be modeled to obtain accurate motion estimates in the presence of strong field perturbations. Simulations confirm that tailored navigator trajectories are needed to robustly estimate both motion and δB0. Furthermore, experiments show that a contrast-matched scout is needed for parameter estimation from multi-contrast navigator data. A retrospective in vivo reconstruction experiment shows improved image quality when using the proposed Q-Scout acquisition and QUEEN estimation. CONCLUSIONS: We developed a framework to jointly estimate rigid motion parameters and δB0 from navigators. Combining a contrast-matched scout with the proposed trajectory allows navigator deployment in almost any sequence and at almost any timing, enabling higher-temporal-resolution motion and δB0 estimates.


Subject(s)
Algorithms , Magnetic Resonance Imaging , Retrospective Studies , Motion , Magnetic Resonance Imaging/methods , Neuroimaging , Artifacts , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging
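The joint estimation at the heart of this abstract can be illustrated with a deliberately simplified 1D sketch: a navigator is modeled as the scout's k-space profile modified by a rigid shift and a global δB0-induced phase, and the two coupled parameters are fitted jointly rather than sequentially. The signal model, the grid search, and all variable names below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

# Toy 1D joint estimation of a rigid shift and a B0-induced phase from a
# navigator, using a known "scout" profile as reference.
rng = np.random.default_rng(0)
n = 64
x = np.linspace(-1, 1, n)
scout = np.exp(-x**2 / 0.1)             # reference scout profile
k = np.fft.fftfreq(n, d=x[1] - x[0])    # k-space coordinates (cycles/unit)

true_shift, true_phase = 0.12, 0.8      # rigid translation, δB0-induced phase
scout_k = np.fft.fft(scout)
nav = scout_k * np.exp(-2j * np.pi * k * true_shift) * np.exp(1j * true_phase)
nav += 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Joint grid search: shift and phase are coupled in the navigator data,
# so they are estimated together by minimizing one residual.
shifts = np.linspace(-0.3, 0.3, 121)
phases = np.linspace(-np.pi, np.pi, 121)
best = (np.inf, 0.0, 0.0)
for s in shifts:
    model = scout_k * np.exp(-2j * np.pi * k * s)
    for p in phases:
        resid = np.linalg.norm(nav - model * np.exp(1j * p))
        if resid < best[0]:
            best = (resid, s, p)

_, est_shift, est_phase = best
```

The linear phase across k (shift) and the constant phase (δB0) are distinguishable precisely because the navigator samples many k-space locations; a real implementation would use gradient-based optimization and a multi-coil forward model rather than a grid search.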
2.
Magn Reson Med ; 91(3): 987-1001, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37936313

ABSTRACT

PURPOSE: This study aims to develop a high-efficiency, high-resolution 3D imaging approach for simultaneous mapping of multiple key tissue parameters for routine brain imaging, including T1, T2, proton density (PD), ADC, and fractional anisotropy (FA). The proposed method is intended to push routine clinical brain imaging from weighted imaging to quantitative imaging and can also be particularly useful for diffusion-relaxometry studies, which typically suffer from lengthy acquisition times. METHODS: To address challenges associated with diffusion weighting, such as shot-to-shot phase variation and low SNR, we integrated several innovative data acquisition and reconstruction techniques. Specifically, we used M1-compensated diffusion gradients, cardiac gating, and navigators to mitigate phase variations caused by cardiac motion. We also introduced a data-driven pre-pulse gradient to cancel out eddy currents induced by the diffusion gradients. Additionally, to enhance image quality within a limited acquisition time, we proposed a data-sharing joint reconstruction approach coupled with a corresponding sequence design. RESULTS: The phantom and in vivo studies indicated that the T1 and T2 values measured by the proposed method are consistent with a conventional MR fingerprinting sequence, and the diffusion results (including diffusivity, ADC, and FA) are consistent with a spin-echo EPI DWI sequence. CONCLUSION: The proposed method can achieve whole-brain T1, T2, diffusivity, ADC, and FA maps at 1-mm isotropic resolution within 10 min, providing a powerful tool for investigating the microstructural properties of brain tissue, with potential applications in clinical and research settings.


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Phantoms, Imaging , Mathematical Concepts
3.
bioRxiv ; 2023 Mar 28.
Article in English | MEDLINE | ID: mdl-37034586

ABSTRACT

Introduction: Spatio-temporal MRI methods enable whole-brain multi-parametric mapping at ultra-fast acquisition times through efficient k-space encoding, but can have very long reconstruction times, which limit their integration into clinical practice. Deep learning (DL) is a promising approach to accelerate reconstruction, but can be computationally intensive to train and deploy due to the large dimensionality of spatio-temporal MRI. DL methods also need large training data sets and can produce results that do not match the acquired data if data consistency is not enforced. The aim of this project is to reduce reconstruction time using DL while simultaneously limiting the risk of DL-induced hallucinations, all with modest hardware requirements. Methods: Deep Learning Initialized Compressed Sensing (Deli-CS) is proposed to reduce the reconstruction time of iterative reconstructions by "kick-starting" the iterative reconstruction with a DL-generated starting point. The proposed framework is applied to volumetric multi-axis spiral projection MRF, which achieves whole-brain T1 and T2 mapping at 1-mm isotropic resolution in a 2-minute acquisition. First, the traditional reconstruction is optimized from over two hours to less than 40 minutes, using more than 90% less RAM and only 4.7 GB of GPU memory, through a memory-efficient GPU implementation. The Deli-CS framework is then implemented and evaluated against this reconstruction. Results: Deli-CS achieves comparable reconstruction quality with 50% fewer iterations, bringing the full reconstruction time to 20 minutes. Conclusion: Deli-CS reduces the reconstruction time of subspace reconstructions of volumetric spatio-temporal acquisitions by providing a warm start to the iterative reconstruction algorithm.
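The warm-start principle behind Deli-CS can be shown on a toy least-squares problem: the same iterative solver reaches a given data-consistency tolerance in fewer iterations when initialized near the solution (as a DL prediction would be) than when initialized at zero. The operator and the "network prediction" below are stand-ins, not the paper's pipeline.

```python
import numpy as np

# Warm-start sketch: count solver iterations from a cold start (zeros)
# versus a warm start (a mocked "DL prediction" near the truth).
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40)) / np.sqrt(80)   # toy encoding operator
x_true = rng.standard_normal(40)
y = A @ x_true

step = 1.0 / np.linalg.norm(A.T @ A, 2)           # safe gradient step size

def iterations_to_tol(x0, tol=1e-3, max_iter=10_000):
    """Plain gradient descent on ||Ax - y||^2; return iterations used."""
    x = x0.copy()
    for i in range(max_iter):
        if np.linalg.norm(A @ x - y) < tol:
            return i
        x = x - step * A.T @ (A @ x - y)
    return max_iter

cold = iterations_to_tol(np.zeros(40))
# Mock DL initializer: the true solution plus a small perturbation.
warm = iterations_to_tol(x_true + 0.05 * rng.standard_normal(40))
```

The iteration savings scale with how much closer the initializer is to the solution, which is why a learned starting point can halve the iteration count of the full subspace reconstruction.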

4.
Bioengineering (Basel) ; 10(3)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36978725

ABSTRACT

Cardiac magnetic resonance (CMR) is an essential clinical tool for the assessment of cardiovascular disease. Deep learning (DL) has recently revolutionized the field through image reconstruction techniques that allow unprecedented data undersampling rates. These fast acquisitions have the potential to considerably impact the diagnosis and treatment of cardiovascular disease. Herein, we provide a comprehensive review of DL-based reconstruction methods for CMR. We place special emphasis on state-of-the-art unrolled networks, which are heavily based on a conventional image reconstruction framework. We review the main DL-based methods and connect them to the relevant conventional reconstruction theory. Next, we review several methods developed to tackle specific challenges that arise from the characteristics of CMR data. Then, we focus on DL-based methods developed for specific CMR applications, including flow imaging, late gadolinium enhancement, and quantitative tissue characterization. Finally, we discuss the pitfalls and future outlook of DL-based reconstructions in CMR, focusing on the robustness, interpretability, clinical deployment, and potential for new methods.
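The unrolled-network template this review centers on alternates a data-consistency step with a learned regularizer. The sketch below runs that alternation on a toy undersampled-Fourier problem, with a soft-threshold (an ISTA-style proximal step) standing in for the trained CNN denoiser; everything here is illustrative, not a specific published network.

```python
import numpy as np

# One unrolled cascade = data-consistency gradient step + learned regularizer.
# Here the "regularizer" is soft-thresholding, and the cascade is iterated.
rng = np.random.default_rng(2)
n = 64
x_true = np.zeros(n)
idx = rng.choice(n, 6, replace=False)
x_true[idx] = 2.0 + 2.0 * rng.random(6)          # sparse ground-truth signal

mask = rng.random(n) < 0.5                       # k-space sampling mask
F = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT as encoding op
A = F[mask]                                      # undersampled forward operator
y = A @ x_true

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n, dtype=complex)
for _ in range(200):
    x = x - A.conj().T @ (A @ x - y)             # data-consistency step
    x = soft_threshold(x.real, 0.05)             # "denoiser" (learned in practice)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

In an actual unrolled network the loop has a small fixed depth (e.g., 5-10 cascades), the threshold is replaced by a CNN with trained weights, and step sizes are learned, but the alternating structure is exactly this.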

5.
IEEE Trans Med Imaging ; 41(12): 3895-3906, 2022 12.
Article in English | MEDLINE | ID: mdl-35969576

ABSTRACT

Learning-based translation between MRI contrasts involves supervised deep models trained on high-quality source- and target-contrast images derived from fully sampled acquisitions, which can be difficult to collect under limitations on scan costs or time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly using undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in the image, k-space, and adversarial domains. Unlike traditional losses in single-coil synthesis models, the multi-coil losses are selectively enforced on acquired k-space samples. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN yields performance on par with a supervised model while outperforming single-coil models trained on coil-combined magnitude images. It also outperforms cascaded reconstruction-synthesis models in which a supervised synthesis model is trained following self-supervised reconstruction of undersampled data. Thus, ssGAN holds great promise for improving the feasibility of learning-based multi-contrast MRI synthesis.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Retrospective Studies , Magnetic Resonance Imaging/methods , Supervised Machine Learning
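The selective loss idea in this abstract is simple to state in code: when the target contrast is only available undersampled, the k-space loss is evaluated exclusively at acquired sample locations, so the unknown (unacquired) k-space never contributes a spurious error. The single-coil toy below is illustrative; the paper's losses are multi-coil.

```python
import numpy as np

# Selective k-space loss: penalize synthesis error only where target
# k-space samples were actually acquired.
rng = np.random.default_rng(3)
n = 32
target = rng.standard_normal((n, n))                 # target-contrast image
synth = target + 0.1 * rng.standard_normal((n, n))   # mocked network output

mask = rng.random((n, n)) < 0.4                      # acquired k-space locations
target_k = np.fft.fft2(target)
synth_k = np.fft.fft2(synth)

# Masked loss over acquired samples only, vs. a (hypothetical) full loss
# that would wrongly require the unacquired ground truth.
kspace_loss = np.mean(np.abs((synth_k - target_k)[mask]) ** 2)
full_loss = np.mean(np.abs(synth_k - target_k) ** 2)
```

During training only `kspace_loss` (plus its image-domain and adversarial counterparts) would be backpropagated; `full_loss` is shown here only to make the contrast with fully supervised training explicit.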
6.
IEEE Trans Med Imaging ; 41(10): 2598-2614, 2022 10.
Article in English | MEDLINE | ID: mdl-35436184

ABSTRACT

Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate superiority of ResViT against competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.


Subject(s)
Data Compression , Image Processing, Computer-Assisted , Endoscopy , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
7.
Med Image Anal ; 78: 102429, 2022 05.
Article in English | MEDLINE | ID: mdl-35367713

ABSTRACT

Magnetic resonance imaging (MRI) offers the flexibility to image a given anatomic volume under a multitude of tissue contrasts. Yet, scan time considerations put stringent limits on the quality and diversity of MRI data. The gold-standard approach to alleviate this limitation is to recover high-quality images from data undersampled across various dimensions, most commonly the Fourier domain or contrast sets. A primary distinction among recovery methods is whether the anatomy is processed per volume or per cross-section. Volumetric models offer enhanced capture of global contextual information, but they can suffer from suboptimal learning due to elevated model complexity. Cross-sectional models with lower complexity offer improved learning behavior, yet they ignore contextual information across the longitudinal dimension of the volume. Here, we introduce a novel progressive volumetrization strategy for generative models (ProvoGAN) that serially decomposes complex volumetric image recovery tasks into successive cross-sectional mappings task-optimally ordered across individual rectilinear dimensions. ProvoGAN effectively captures global context and recovers fine-structural details across all dimensions, while maintaining low model complexity and improved learning behavior. Comprehensive demonstrations on mainstream MRI reconstruction and synthesis tasks show that ProvoGAN yields superior performance to state-of-the-art volumetric and cross-sectional models.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Cross-Sectional Studies , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
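The progressive volumetrization strategy described above replaces one heavy 3D model with a sequence of cheap cross-sectional (2D) mappings, one stage per rectilinear axis. The sketch below mimics that decomposition on a toy denoising task, with a fixed 2D smoothing filter standing in for each learned cross-sectional mapping; all of it is illustrative, not ProvoGAN itself.

```python
import numpy as np

# Progressive per-axis processing: apply a 2D mapping slice-by-slice along
# axis 0, then axis 1, then axis 2, so every voxel sees context from all
# three rectilinear orientations without ever running a 3D model.
rng = np.random.default_rng(4)
clean = np.zeros((16, 16, 16))
clean[4:12, 4:12, 4:12] = 1.0
noisy = clean + 0.5 * rng.standard_normal(clean.shape)

def smooth2d(sl):
    """Stand-in for a learned cross-sectional mapping: 5-point cross average."""
    out = sl.copy()
    for shift in (-1, 1):
        out += np.roll(sl, shift, axis=0) + np.roll(sl, shift, axis=1)
    return out / 5.0

x = noisy.copy()
for axis in (0, 1, 2):                        # one progressive stage per axis
    x = np.moveaxis(x, axis, 0)
    x = np.stack([smooth2d(s) for s in x])    # process every cross-section
    x = np.moveaxis(x, 0, axis)

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(x - clean)
```

Each stage only ever holds 2D slices in memory, which is the source of the complexity advantage over volumetric models, while the changing slicing axis restores cross-slice context that a single-orientation 2D model would miss.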
8.
IEEE Trans Med Imaging ; 41(7): 1747-1763, 2022 07.
Article in English | MEDLINE | ID: mdl-35085076

ABSTRACT

Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
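The zero-shot inference step described in this abstract can be reduced to its essential form: freeze a pretrained generative prior, then optimize only its input so that the generated image agrees with the undersampled measurements. In the toy below a linear map stands in for the adversarial transformer generator, and the imaging operator is a simple sampling mask; all names are illustrative.

```python
import numpy as np

# Zero-shot reconstruction sketch: optimize the latent code of a frozen
# "generator" to maximize consistency with undersampled measurements.
rng = np.random.default_rng(5)
n, d = 64, 8
G = rng.standard_normal((n, d))        # frozen generator: latent -> image
z_true = rng.standard_normal(d)
image = G @ z_true                     # ground-truth image (for evaluation)

mask = rng.random(n) < 0.5             # undersampling imaging operator
y = image[mask]                        # acquired measurements

z = np.zeros(d)
step = 0.01
for _ in range(2000):                  # gradient descent on data consistency
    grad = G[mask].T @ (G[mask] @ z - y)
    z -= step * grad

recon = G @ z
rel_err = np.linalg.norm(recon - image) / np.linalg.norm(image)
```

Because only the latent variables (and, in the actual method, the prior's weights) are updated at inference time, no paired or fully sampled training data specific to the imaging operator is required.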
9.
Med Image Anal ; 70: 101944, 2021 05.
Article in English | MEDLINE | ID: mdl-33690024

ABSTRACT

Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts are limited in practice by various factors including scan time and patient motion. Synthesis of missing or corrupted contrasts from other high-quality ones can alleviate this limitation. When a single target contrast is of interest, common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods depending on their input. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary feature maps generated in the one-to-one streams and the shared feature maps generated in the many-to-one stream are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Quantitative and radiological assessments on T1-, T2-, PD-weighted, and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans
10.
IEEE Trans Med Imaging ; 38(10): 2375-2388, 2019 10.
Article in English | MEDLINE | ID: mdl-30835216

ABSTRACT

Acquiring images of the same anatomy with multiple different contrasts increases the diversity of diagnostic information available in an MR exam. Yet, scan time limitations may prohibit the acquisition of certain contrasts, and some contrasts may be corrupted by noise and artifacts. In such cases, the ability to synthesize unacquired or corrupted contrasts can improve diagnostic utility. For multi-contrast synthesis, current methods learn a nonlinear intensity transformation between the source and target images, either via nonlinear regression or deterministic neural networks. These methods can, in turn, suffer from loss of structural details in the synthesized images. Here we propose a new approach for multi-contrast MRI synthesis based on conditional generative adversarial networks. The proposed approach preserves intermediate-to-high-frequency details via an adversarial loss, and it offers enhanced synthesis performance via pixel-wise and perceptual losses for registered multi-contrast images and a cycle-consistency loss for unregistered images. Information from neighboring cross-sections is utilized to further improve synthesis quality. Demonstrations on T1- and T2-weighted images from healthy subjects and patients clearly indicate the superior performance of the proposed approach compared to previous state-of-the-art methods. Our synthesis approach can help improve the quality and versatility of multi-contrast MRI exams without the need for prolonged or repeated examinations.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Glioma/diagnostic imaging , Humans
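The composite objective this abstract describes combines a pixel-wise loss for registered image pairs, a cycle-consistency loss for unregistered pairs, and an adversarial term. The sketch below assembles such an objective with mock linear "networks"; the weights, function names, and the omission of the perceptual term are all illustrative simplifications.

```python
import numpy as np

# Composite cGAN-style synthesis objective with mock generators/discriminator.
rng = np.random.default_rng(6)
n = 16
source = rng.standard_normal((n, n))   # source-contrast image
target = rng.standard_normal((n, n))   # registered target-contrast image

def G_fwd(img):    # mock source -> target generator
    return 0.9 * img

def G_bwd(img):    # mock target -> source generator (imperfect on purpose)
    return img / 0.85

def D(img):        # mock discriminator score in (0, 1)
    return 1.0 / (1.0 + np.exp(-img.mean()))

synth = G_fwd(source)
pixel_loss = np.mean(np.abs(synth - target))           # registered pairs
cycle_loss = np.mean(np.abs(G_bwd(synth) - source))    # unregistered pairs
adv_loss = -np.log(D(synth) + 1e-12)                   # fool the discriminator

lam_pix, lam_cyc, lam_adv = 100.0, 10.0, 1.0           # illustrative weights
total_loss = lam_pix * pixel_loss + lam_cyc * cycle_loss + lam_adv * adv_loss
```

In practice the pixel-wise term applies only when source and target are registered and the cycle term takes over when they are not, which is what lets one framework handle both data regimes.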