Results 1 - 20 of 65
1.
Article in English | MEDLINE | ID: mdl-38885104

ABSTRACT

Learning-based methods offer performance leaps over traditional methods in classification analysis of high-dimensional functional MRI (fMRI) data. In this domain, deep-learning models that analyze functional connectivity (FC) features among brain regions have been particularly promising. However, many existing models receive as input temporally static FC features that summarize inter-regional interactions across an entire scan, reducing the temporal sensitivity of classifiers by limiting their ability to leverage information on dynamic FC features of brain activity. To improve the performance of baseline classification models without compromising efficiency, here we propose a novel plug-in based on a graph neural network, GraphCorr, to provide enhanced input features to baseline models. The proposed plug-in computes a set of latent FC features with enhanced temporal information while maintaining comparable dimensionality to static features. Taking brain regions as nodes and blood-oxygen-level-dependent (BOLD) signals as node inputs, GraphCorr leverages a node embedder module based on a transformer encoder to capture dynamic latent representations of BOLD signals. GraphCorr also leverages a lag filter module to account for delayed interactions across nodes by learning correlational features of windowed BOLD signals across time delays. These two feature groups are then fused via a message passing algorithm executed on the formulated graph. Comprehensive demonstrations on three public datasets indicate improved classification performance for several state-of-the-art graph and convolutional baseline models when they are augmented with GraphCorr.
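The lag filter described above can be illustrated with a minimal pure-Python sketch: correlations of windowed signals computed across candidate time delays. This is illustrative only, not the authors' implementation; the function names, the Pearson-correlation choice, and the non-overlapping windows are assumptions.

```python
import math

def lagged_corr(x, y, lag):
    # Pearson correlation between x(t + lag) and y(t); lag = 0 compares aligned samples.
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def lag_features(x, y, window, max_lag):
    # Correlational features of windowed signals across delays 0..max_lag,
    # one feature vector per (non-overlapping) window.
    feats = []
    for start in range(0, len(x) - window + 1, window):
        xs, ys = x[start:start + window], y[start:start + window]
        feats.append([lagged_corr(xs, ys, lag) for lag in range(max_lag + 1)])
    return feats
```

A signal paired with a delayed copy of itself yields its highest correlation at the matching lag, which is the delayed inter-node interaction the module is meant to capture.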

2.
Med Image Anal ; 94: 103121, 2024 May.
Article in English | MEDLINE | ID: mdl-38402791

ABSTRACT

Curation of large, diverse MRI datasets via multi-institutional collaborations can help improve learning of generalizable synthesis models that reliably translate source- onto target-contrast images. To facilitate collaborations, federated learning (FL) adopts decentralized model training while mitigating privacy concerns by avoiding sharing of imaging data. However, conventional FL methods can be impaired by the inherent heterogeneity in the data distribution, with domain shifts evident within and across imaging sites. Here we introduce the first personalized FL method for MRI Synthesis (pFLSynth) that improves reliability against data heterogeneity via model specialization to individual sites and synthesis tasks (i.e., source-target contrasts). To do this, pFLSynth leverages an adversarial model equipped with novel personalization blocks that control the statistics of generated feature maps across the spatial/channel dimensions, given latent variables specific to sites and tasks. To further promote communication efficiency and site specialization, partial network aggregation is employed over later generator stages while earlier generator stages and the discriminator are trained locally. As such, pFLSynth enables multi-task training of multi-site synthesis models with high generalization performance across sites and tasks. Comprehensive experiments demonstrate the superior performance and reliability of pFLSynth in MRI synthesis against prior federated methods.
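The partial network aggregation idea above can be sketched in a few lines: federated averaging restricted to a shared subset of parameters while the rest stay site-specific. A minimal sketch with parameters as plain dicts of floats; the function name and dict representation are illustrative assumptions, not pFLSynth's actual code.

```python
def partial_aggregate(site_models, shared_keys):
    # FedAvg-style averaging over the shared (e.g., later-generator) parameters only;
    # parameters outside shared_keys remain local to each site.
    n = len(site_models)
    shared = {k: sum(m[k] for m in site_models) / n for k in shared_keys}
    return [{**m, **shared} for m in site_models]
```

After one round, every site holds identical copies of the shared parameters but keeps its own local ones, which is what enables site specialization alongside cross-site knowledge transfer.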


Subject(s)
Learning , Magnetic Resonance Imaging , Humans , Reproducibility of Results
3.
IEEE Trans Med Imaging ; 43(1): 321-334, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37527298

ABSTRACT

Magnetic particle imaging (MPI) offers unparalleled contrast and resolution for tracing magnetic nanoparticles. A common imaging procedure calibrates a system matrix (SM) that is used to reconstruct data from subsequent scans. The ill-posed reconstruction problem can be solved by simultaneously enforcing data consistency based on the SM and regularizing the solution based on an image prior. Traditional hand-crafted priors cannot capture the complex attributes of MPI images, whereas recent MPI methods based on learned priors can suffer from extensive inference times or limited generalization performance. Here, we introduce a novel physics-driven method for MPI reconstruction based on a deep equilibrium model with learned data consistency (DEQ-MPI). DEQ-MPI reconstructs images by augmenting neural networks into an iterative optimization, as inspired by unrolling methods in deep learning. Yet, conventional unrolling methods are computationally restricted to few iterations resulting in non-convergent solutions, and they use hand-crafted consistency measures that can yield suboptimal capture of the data distribution. DEQ-MPI instead trains an implicit mapping to maximize the quality of a convergent solution, and it incorporates a learned consistency measure to better account for the data distribution. Demonstrations on simulated and experimental data indicate that DEQ-MPI achieves superior image quality and competitive inference time to state-of-the-art MPI reconstruction methods.
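The contrast drawn above between a fixed number of unrolled iterations and a convergent solution is the core of deep equilibrium models: the forward pass iterates a mapping to its fixed point instead of a preset iteration count. A scalar toy sketch (illustrative names; DEQ-MPI's actual mapping is a learned network over images):

```python
def deq_forward(f, x0, tol=1e-9, max_iter=1000):
    # Iterate x <- f(x) until the update stalls, approximating the
    # equilibrium x* = f(x*) rather than stopping after a fixed unroll depth.
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For a contractive map such as f(x) = 0.5x + 1, the iteration converges to the unique fixed point x* = 2 regardless of initialization, which is the convergence property unrolling methods give up.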


Subject(s)
Diagnostic Imaging , Nanoparticles , Neural Networks, Computer , Magnetics , Magnetic Phenomena , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
4.
IEEE J Biomed Health Inform ; 28(3): 1273-1284, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38051612

ABSTRACT

Monitoring of prevalent airborne diseases such as COVID-19 characteristically involves respiratory assessments. While auscultation is a mainstream method for preliminary screening of disease symptoms, its utility is hampered by the need for dedicated hospital visits. Remote monitoring based on recordings of respiratory sounds on portable devices is a promising alternative, which can assist in early assessment of COVID-19 that primarily affects the lower respiratory tract. In this study, we introduce a novel deep learning approach to distinguish patients with COVID-19 from healthy controls given audio recordings of cough or breathing sounds. The proposed approach leverages a novel hierarchical spectrogram transformer (HST) on spectrogram representations of respiratory sounds. HST embodies self-attention mechanisms over local windows in spectrograms, and window size is progressively grown over model stages to capture local to global context. HST is compared against state-of-the-art conventional and deep-learning baselines. Demonstrations on crowd-sourced multi-national datasets indicate that HST outperforms competing methods, achieving over 90% area under the receiver operating characteristic curve (AUC) in detecting COVID-19 cases.
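The progressively grown windows described above can be sketched as a schedule of window boundaries over the spectrogram's time axis, with the window size doubling each stage. The doubling factor and function name are illustrative assumptions, not HST's exact schedule.

```python
def stage_windows(n_frames, base_window, n_stages):
    # Per-stage window boundaries over the time axis; the window doubles each
    # stage so self-attention moves from local to global context.
    stages = []
    for s in range(n_stages):
        w = base_window * (2 ** s)
        stages.append([(i, min(i + w, n_frames)) for i in range(0, n_frames, w)])
    return stages
```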


Subject(s)
COVID-19 , Respiratory Sounds , Humans , Respiratory Sounds/diagnosis , COVID-19/diagnosis , Auscultation , Cough , Electric Power Supplies
6.
Comput Biol Med ; 167: 107610, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37883853

ABSTRACT

Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdening inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining competitive inference times to SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling that uses serially alternated projections, causing error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3 × lower RMSE than baselines. 
Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude lower samples compared to SG methods, and enables an order of magnitude faster inference compared to SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.
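The parallel-stream combination with learnable fusion parameters reduces, in its simplest form, to a weighted sum of the two stream outputs rather than serially alternated projections. A minimal sketch with a single scalar weight (in PSFNet the fusion parameters are learned; the scalar form and name here are illustrative):

```python
def fuse_streams(ss_out, sg_out, alpha):
    # Elementwise convex combination of the scan-specific (SS) and
    # scan-general (SG) stream outputs; alpha stands in for a learnable weight.
    return [alpha * a + (1.0 - alpha) * b for a, b in zip(ss_out, sg_out)]
```

Because the streams run in parallel rather than in series, an error in one stream is damped by the weighting instead of being propagated through subsequent projections.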


Subject(s)
Image Processing, Computer-Assisted , Rivers , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Radionuclide Imaging , Magnetic Resonance Imaging/methods
7.
Med Image Anal ; 88: 102872, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37384951

ABSTRACT

Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on par within-domain performance.
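The adaptation phase above amounts to updating the estimate by descending a data-consistency loss against the acquired measurements. A scalar toy sketch using a numerical gradient (the forward operator, step size, and iteration count are illustrative assumptions; AdaDiff updates network weights of the prior, not a scalar):

```python
def adapt_estimate(x, y, forward, step=0.05, iters=200, eps=1e-6):
    # Gradient descent on the data-consistency loss (forward(x) - y)**2,
    # with a finite-difference gradient for self-containment.
    for _ in range(iters):
        loss = (forward(x) - y) ** 2
        grad = ((forward(x + eps) - y) ** 2 - loss) / eps
        x = x - step * grad
    return x
```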


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Image Processing, Computer-Assisted/methods , Reproducibility of Results , Magnetic Resonance Imaging/methods , Neuroimaging , Learning , Brain/diagnostic imaging
8.
IEEE Trans Med Imaging ; 42(12): 3524-3539, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37379177

ABSTRACT

Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance against competing baselines.


Subject(s)
Magnetic Resonance Imaging , Tomography, X-Ray Computed , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
9.
Med Image Anal ; 88: 102841, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37224718

ABSTRACT

Deep-learning models have enabled performance leaps in analysis of high-dimensional functional MRI (fMRI) data. Yet, many previous methods are suboptimally sensitive for contextual representations across diverse time scales. Here, we present BolT, a blood-oxygen-level-dependent transformer model, for analyzing multi-variate fMRI time series. BolT leverages a cascade of transformer encoders equipped with a novel fused window attention mechanism. Encoding is performed on temporally-overlapped windows within the time series to capture local representations. To integrate information temporally, cross-window attention is computed between base tokens in each window and fringe tokens from neighboring windows. To gradually transition from local to global representations, the extent of window overlap and thereby number of fringe tokens are progressively increased across the cascade. Finally, a novel cross-window regularization is employed to align high-level classification features across the time series. Comprehensive experiments on large-scale public datasets demonstrate the superior performance of BolT against state-of-the-art methods. Furthermore, explanatory analyses to identify landmark time points and regions that contribute most significantly to model decisions corroborate prominent neuroscientific findings in the literature.
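The base/fringe token layout described above can be sketched as windowing with borrowed context: each window carries its own base tokens plus fringe tokens taken from its neighbors. The function name and symmetric fringe are illustrative assumptions, not BolT's exact tokenization.

```python
def windows_with_fringe(tokens, window, fringe):
    # Each window's base tokens plus up to `fringe` tokens borrowed from each
    # neighboring window, enabling cross-window attention at the boundaries.
    out = []
    for start in range(0, len(tokens), window):
        lo = max(0, start - fringe)
        hi = min(len(tokens), start + window + fringe)
        out.append(tokens[lo:hi])
    return out
```

Growing `fringe` across the cascade increases window overlap, which is how the model transitions from local to global representations.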


Subject(s)
Magnetic Resonance Imaging , Humans , Time Factors
11.
Z Med Phys ; 33(2): 203-219, 2023 May.
Article in English | MEDLINE | ID: mdl-35216887

ABSTRACT

PURPOSE: Image quality in accelerated MRI rests on careful selection of various reconstruction parameters. A common yet tedious and error-prone practice is to hand-tune each parameter to attain visually appealing reconstructions. Here, we propose a parameter tuning strategy to automate hybrid parallel imaging (PI) - compressed sensing (CS) reconstructions via low-rank modeling of local k-space neighborhoods (LORAKS) supplemented with sparsity regularization in wavelet and total variation (TV) domains. METHODS: For low-rank regularization, we leverage a soft-thresholding operation based on singular values for matrix rank selection in LORAKS. For sparsity regularization, we employ Stein's unbiased risk estimate criterion to select the wavelet regularization parameter and local standard deviation of reconstructions to select the TV regularization parameter. Comprehensive demonstrations are presented on a numerical brain phantom and in vivo brain and knee acquisitions. Quantitative assessments are performed via PSNR, SSIM and NMSE metrics. RESULTS: The proposed hybrid PI-CS method improves reconstruction quality compared to PI-only techniques, and it achieves on par image quality to reconstructions with brute-force optimization of reconstruction parameters. These results are prominent across several different datasets and the range of examined acceleration rates. CONCLUSION: A data-driven parameter tuning strategy to automate hybrid PI-CS reconstructions is presented. The proposed method achieves reliable reconstructions of accelerated multi-coil MRI datasets without the need for exhaustive hand-tuning of reconstruction parameters.
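The soft-thresholding operation for matrix rank selection mentioned in METHODS acts on the singular values directly: shrink each by a threshold and clip at zero, so small singular values vanish. A minimal sketch on a precomputed list of singular values (the SVD itself is omitted for brevity):

```python
def soft_threshold(svals, tau):
    # Soft-threshold singular values: s -> max(s - tau, 0). Values below tau
    # are zeroed, which performs an automatic rank selection.
    return [max(s - tau, 0.0) for s in svals]
```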


Subject(s)
Algorithms , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Multimodal Imaging , Phantoms, Imaging , Image Processing, Computer-Assisted/methods
12.
IEEE Trans Med Imaging ; 42(7): 1996-2009, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36350868

ABSTRACT

Multi-institutional efforts can facilitate training of deep MRI reconstruction models, albeit privacy risks arise during cross-site sharing of imaging data. Federated learning (FL) has recently been introduced to address privacy concerns by enabling distributed training without transfer of imaging data. Existing FL methods employ conditional reconstruction models to map from undersampled to fully-sampled acquisitions via explicit knowledge of the accelerated imaging operator. Since conditional models generalize poorly across different acceleration rates or sampling densities, imaging operators must be fixed between training and testing, and they are typically matched across sites. To improve patient privacy, performance and flexibility in multi-site collaborations, here we introduce Federated learning of Generative IMage Priors (FedGIMP) for MRI reconstruction. FedGIMP leverages a two-stage approach: cross-site learning of a generative MRI prior, and prior adaptation following injection of the imaging operator. The global MRI prior is learned via an unconditional adversarial model that synthesizes high-quality MR images based on latent variables. A novel mapper subnetwork produces site-specific latents to maintain specificity in the prior. During inference, the prior is first combined with subject-specific imaging operators to enable reconstruction, and it is then adapted to individual cross-sections by minimizing a data-consistency loss. Comprehensive experiments on multi-institutional datasets clearly demonstrate enhanced performance of FedGIMP against both centralized and FL methods based on conditional models.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
13.
IEEE Trans Med Imaging ; 41(12): 3895-3906, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35969576

ABSTRACT

Learning-based translation between MRI contrasts involves supervised deep models trained using high-quality source- and target-contrast images derived from fully-sampled acquisitions, which might be difficult to collect under limitations on scan costs or time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly using undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in image, k-space, and adversarial domains. The multi-coil losses are selectively enforced on acquired k-space samples unlike traditional losses in single-coil synthesis models. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN yields on par performance to a supervised model, while outperforming single-coil models trained on coil-combined magnitude images. It also outperforms cascaded reconstruction-synthesis models where a supervised synthesis model is trained following self-supervised reconstruction of undersampled data. Thus, ssGAN holds great promise to improve the feasibility of learning-based multi-contrast MRI synthesis.
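The selective enforcement of losses on acquired k-space samples can be sketched as a masked loss: unacquired positions contribute nothing. A minimal single-coil, real-valued L1 sketch (illustrative; ssGAN's losses are multi-coil and span image, k-space, and adversarial domains):

```python
def masked_l1(pred, target, mask):
    # L1 loss restricted to acquired k-space samples (mask == 1);
    # unacquired positions are excluded from the average.
    pairs = [(p, t) for p, t, m in zip(pred, target, mask) if m]
    return sum(abs(p - t) for p, t in pairs) / max(len(pairs), 1)
```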


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Retrospective Studies , Magnetic Resonance Imaging/methods , Supervised Machine Learning
14.
J Neurosci ; 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35863889

ABSTRACT

Object and action perception in cluttered dynamic natural scenes relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. It has been suggested that during visual search for objects, distributed semantic representation of hundreds of object categories is warped to expand the representation of targets. Yet, little is known about whether and where in the brain visual search for action categories modulates semantic representations. To address this fundamental question, we studied brain activity recorded from five subjects (1 female) via functional magnetic resonance imaging while they viewed natural movies and searched for either communication or locomotion actions. We find that attention directed to action categories elicits tuning shifts that warp semantic representations broadly across neocortex, and that these shifts interact with intrinsic selectivity of cortical voxels for target actions. These results suggest that attention serves to facilitate task performance during social interactions by dynamically shifting semantic selectivity towards target actions, and that tuning shifts are a general feature of conceptual representations in the brain. SIGNIFICANCE STATEMENT: The ability to swiftly perceive the actions and intentions of others is a crucial skill for humans, which relies on efficient allocation of limited brain resources to prioritize the attended targets over distractors. However, little is known about the nature of high-level semantic representations during natural visual search for action categories. Here we provide the first evidence showing that attention significantly warps semantic representations by inducing tuning shifts in single cortical voxels, broadly spread across occipitotemporal, parietal, prefrontal, and cingulate cortices.
This dynamic attentional mechanism can facilitate action perception by efficiently allocating neural resources to accentuate the representation of task-relevant action categories.

15.
IEEE Trans Med Imaging ; 41(12): 3562-3574, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35816533

ABSTRACT

Magnetic particle imaging (MPI) offers exceptional contrast for magnetic nanoparticles (MNP) at high spatio-temporal resolution. A common procedure in MPI starts with a calibration scan to measure the system matrix (SM), which is then used to set up an inverse problem to reconstruct images of the MNP distribution during subsequent scans. This calibration enables the reconstruction to sensitively account for various system imperfections. Yet time-consuming SM measurements have to be repeated under notable changes in system properties. Here, we introduce a novel deep learning approach for accelerated MPI calibration based on Transformers for SM super-resolution (TranSMS). Low-resolution SM measurements are performed using large MNP samples for improved signal-to-noise ratio efficiency, and the high-resolution SM is super-resolved via model-based deep learning. TranSMS leverages a vision transformer module to capture contextual relationships in low-resolution input images, a dense convolutional module for localizing high-resolution image features, and a data-consistency module to ensure measurement fidelity. Demonstrations on simulated and experimental data indicate that TranSMS significantly improves SM recovery and MPI reconstruction for up to 64-fold acceleration in two-dimensional imaging.
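The data-consistency module's role, keeping the super-resolved estimate faithful to the low-resolution measurements, can be illustrated in 1D with block-average downsampling: shift each high-resolution block so its average matches the measured value. The downsampling model and function names here are simplifying assumptions, not TranSMS's actual operator.

```python
def block_average(x, factor):
    # Simple 1D downsampling by block averaging (stand-in for the low-res measurement).
    return [sum(x[i:i + factor]) / factor for i in range(0, len(x), factor)]

def data_consistency(hr, lr_meas, factor):
    # Shift each high-res block by a constant so its block average exactly
    # matches the corresponding low-res measurement.
    out = list(hr)
    for j, m in enumerate(lr_meas):
        block = out[j * factor:(j + 1) * factor]
        delta = m - sum(block) / factor
        for i in range(j * factor, (j + 1) * factor):
            out[i] += delta
    return out
```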


Subject(s)
Diagnostic Imaging , Magnetics , Calibration , Signal-To-Noise Ratio , Magnetic Phenomena , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
16.
IEEE J Biomed Health Inform ; 26(9): 4679-4690, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35767499

ABSTRACT

Melanoma is a potentially fatal skin cancer that is curable, with dramatically higher survival rates, when diagnosed at early stages. Learning-based methods hold significant promise for the detection of melanoma from dermoscopic images. However, since melanoma is a rare disease, existing databases of skin lesions predominantly contain highly imbalanced numbers of benign versus malignant samples. In turn, this imbalance introduces substantial bias in classification models due to the statistical dominance of the majority class. To address this issue, we introduce a deep clustering approach based on the latent-space embedding of dermoscopic images. Clustering is achieved using a novel center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone. The proposed method aims to form maximally-separated cluster centers as opposed to minimizing classification error, so it is less sensitive to class imbalance. To avoid the need for labeled data, we further propose to implement COM-Triplet based on pseudo-labels generated by a Gaussian mixture model (GMM). Comprehensive experiments show that deep clustering with COM-Triplet loss outperforms clustering with triplet loss, and competing classifiers in both supervised and unsupervised settings.
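A center-oriented, margin-free triplet objective of the kind described above can be sketched as: pull each embedding toward its own cluster center and push it away from the other center, with no margin hyperparameter. The squared-distance form and function name are illustrative assumptions, not the paper's exact loss.

```python
def com_triplet_loss(embedding, own_center, other_center):
    # Margin-free triplet on cluster centers: minimize distance to the
    # embedding's own (pseudo-labeled) center minus distance to the other center.
    sqdist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return sqdist(embedding, own_center) - sqdist(embedding, other_center)
```

Driving this quantity down separates the two centers without tuning a margin, which is why the objective is less sensitive to class imbalance than classification error.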


Subject(s)
Melanoma , Skin Neoplasms , Cluster Analysis , Humans , Melanoma/diagnostic imaging , Melanoma/pathology , Neural Networks, Computer , Normal Distribution , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology
17.
IEEE Trans Med Imaging ; 41(10): 2598-2614, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35436184

ABSTRACT

Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate superiority of ResViT against competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.


Subject(s)
Data Compression , Image Processing, Computer-Assisted , Endoscopy , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
18.
Med Image Anal ; 78: 102429, 2022 May.
Article in English | MEDLINE | ID: mdl-35367713

ABSTRACT

Magnetic resonance imaging (MRI) offers the flexibility to image a given anatomic volume under a multitude of tissue contrasts. Yet, scan time considerations put stringent limits on the quality and diversity of MRI data. The gold-standard approach to alleviate this limitation is to recover high-quality images from data undersampled across various dimensions, most commonly the Fourier domain or contrast sets. A primary distinction among recovery methods is whether the anatomy is processed per volume or per cross-section. Volumetric models offer enhanced capture of global contextual information, but they can suffer from suboptimal learning due to elevated model complexity. Cross-sectional models with lower complexity offer improved learning behavior, yet they ignore contextual information across the longitudinal dimension of the volume. Here, we introduce a novel progressive volumetrization strategy for generative models (ProvoGAN) that serially decomposes complex volumetric image recovery tasks into successive cross-sectional mappings task-optimally ordered across individual rectilinear dimensions. ProvoGAN effectively captures global context and recovers fine-structural details across all dimensions, while maintaining low model complexity and improved learning behavior. Comprehensive demonstrations on mainstream MRI reconstruction and synthesis tasks show that ProvoGAN yields superior performance to state-of-the-art volumetric and cross-sectional models.


Subject(s)
Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Cross-Sectional Studies , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
19.
IEEE Trans Med Imaging ; 41(7): 1747-1763, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35085076

ABSTRACT

Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
20.
IEEE Trans Med Imaging ; 41(1): 14-26, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34351856

ABSTRACT

Balanced steady-state free precession (bSSFP) imaging enables high scan efficiency in MRI, but differs from conventional sequences in terms of elevated sensitivity to main field inhomogeneity and nonstandard T2/T1-weighted tissue contrast. To address these limitations, multiple bSSFP images of the same anatomy are commonly acquired with a set of different RF phase-cycling increments. Joint processing of phase-cycled acquisitions serves to mitigate sensitivity to field inhomogeneity. Recently, phase-cycled bSSFP acquisitions were also leveraged to estimate relaxation parameters based on explicit signal models. While effective, these model-based methods often involve a large number of acquisitions (N ≈ 10-16), degrading scan efficiency. Here, we propose a new constrained ellipse fitting method (CELF) for parameter estimation with improved efficiency and accuracy in phase-cycled bSSFP MRI. CELF is based on the elliptical signal model framework for complex bSSFP signals; and it introduces geometrical constraints on ellipse properties to improve estimation efficiency, and dictionary-based identification to improve estimation accuracy. CELF generates maps of T1, T2, off-resonance and on-resonant bSSFP signal by employing a separate B1 map to mitigate sensitivity to flip angle variations. Our results indicate that CELF can produce accurate off-resonance and banding-free bSSFP maps with as few as N = 4 acquisitions, while estimation accuracy for relaxation parameters is notably limited by biases from microstructural sensitivity of bSSFP imaging.


Subject(s)
Algorithms , Magnetic Resonance Imaging , Artifacts , Phantoms, Imaging