1.
J Med Imaging (Bellingham) ; 10(5): 051808, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37235130

ABSTRACT

Purpose: High-resolution late gadolinium enhanced (LGE) cardiac magnetic resonance imaging (MRI) volumes are difficult to acquire due to the limit on the maximal breath-hold time achievable by the patient. This results in anisotropic 3D volumes of the heart with high in-plane resolution but low through-plane resolution. Thus, we propose a 3D convolutional neural network (CNN) approach to improve the through-plane resolution of cardiac LGE-MRI volumes. Approach: We present a 3D CNN-based framework with two branches: a super-resolution branch that learns the mapping between low-resolution and high-resolution LGE-MRI volumes, and a gradient branch that learns the mapping between the gradient map of low-resolution LGE-MRI volumes and the gradient map of high-resolution LGE-MRI volumes. The gradient branch provides structural guidance to the CNN-based super-resolution framework. To assess the performance of the proposed framework, we train two CNN models with and without gradient guidance, namely, the dense deep back-projection network (DBPN) and the enhanced deep super-resolution network. We train and evaluate our method on the 2018 atrial segmentation challenge dataset. We also evaluate these trained models on the 2022 left atrial and scar quantification and segmentation challenge dataset to assess their generalization ability. Finally, we investigate the effect of the proposed CNN-based super-resolution framework on the 3D segmentation of the left atrium (LA) from these cardiac LGE-MRI volumes. Results: Experimental results demonstrate that our proposed CNN method with gradient guidance consistently outperforms bicubic interpolation and the CNN models without gradient guidance. Furthermore, the segmentation results, evaluated using the Dice score, obtained using the super-resolved images generated by our proposed method are superior to those obtained using images generated by bicubic interpolation (p<0.01) and by the CNN models without gradient guidance (p<0.05). Conclusion: The presented CNN-based super-resolution method with gradient guidance improves the through-plane resolution of LGE-MRI volumes, and the structural guidance provided by the gradient branch can aid the 3D segmentation of cardiac chambers, such as the LA, from 3D LGE-MRI images.
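A minimal sketch (assuming PyTorch; not the authors' released implementation) of the two-branch idea described above: a super-resolution branch and a gradient branch whose features are fused back into the super-resolution path to provide structural guidance. The simple convolutional blocks, the fusion by concatenation and the residual output are illustrative assumptions; the paper uses DBPN- and EDSR-style backbones.

```python
# Illustrative two-branch 3D super-resolution network with gradient guidance.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_magnitude_3d(vol):
    """Approximate 3D gradient-magnitude map via forward differences (padded to keep shape)."""
    dz = F.pad(vol[:, :, 1:] - vol[:, :, :-1], (0, 0, 0, 0, 0, 1))
    dy = F.pad(vol[:, :, :, 1:] - vol[:, :, :, :-1], (0, 0, 0, 1))
    dx = F.pad(vol[:, :, :, :, 1:] - vol[:, :, :, :, :-1], (0, 1))
    return torch.sqrt(dx ** 2 + dy ** 2 + dz ** 2 + 1e-8)

class GradientGuidedSR3D(nn.Module):
    def __init__(self, feats=32, scale=2):
        super().__init__()
        self.scale = scale
        self.sr_head = nn.Sequential(nn.Conv3d(1, feats, 3, padding=1), nn.ReLU(inplace=True),
                                     nn.Conv3d(feats, feats, 3, padding=1), nn.ReLU(inplace=True))
        self.grad_head = nn.Sequential(nn.Conv3d(1, feats, 3, padding=1), nn.ReLU(inplace=True),
                                       nn.Conv3d(feats, feats, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv3d(2 * feats, feats, 3, padding=1)
        self.sr_out = nn.Conv3d(feats, 1, 3, padding=1)
        self.grad_out = nn.Conv3d(feats, 1, 3, padding=1)

    def forward(self, lr_vol):
        # Upsample along the (assumed) through-plane axis, then refine with both branches.
        up = F.interpolate(lr_vol, scale_factor=(self.scale, 1, 1),
                           mode='trilinear', align_corners=False)
        f_sr = self.sr_head(up)
        f_gr = self.grad_head(gradient_magnitude_3d(up))
        fused = F.relu(self.fuse(torch.cat([f_sr, f_gr], dim=1)))
        sr = self.sr_out(fused) + up        # residual super-resolved volume
        grad_pred = self.grad_out(f_gr)     # auxiliary high-resolution gradient-map prediction
        return sr, grad_pred
```

Training would then typically pair an intensity loss between sr and the high-resolution volume with a second loss between grad_pred and the gradient map of the high-resolution volume, mirroring the two mappings the branches are described as learning.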

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1707-1710, 2022 07.
Article in English | MEDLINE | ID: mdl-36086376

ABSTRACT

In this paper, we describe a 3D convolutional neural network (CNN) framework to generate super-resolution late gadolinium enhanced (LGE) cardiac magnetic resonance imaging (MRI) images. The proposed CNN framework consists of two branches: a super-resolution branch with a 3D dense deep back-projection network (DBPN) as the backbone to learn the mapping of low-resolution LGE cardiac volumes to high-resolution LGE cardiac volumes, and a gradient branch that learns the mapping of the gradient map of low-resolution LGE cardiac volumes to the gradient map of their high-resolution counterparts. The gradient branch provides additional cardiac structure information to the super-resolution branch to generate structurally more accurate super-resolution LGE MRI images. We conducted our experiments on the 2018 atrial segmentation challenge dataset. The proposed CNN framework achieved a mean peak signal-to-noise ratio (PSNR) of 30.91 and 25.66 and a mean structural similarity index measure (SSIM) of 0.91 and 0.75 when trained on low-resolution images downsampled by scale factors of 2 and 4, respectively.


Subject(s)
Gadolinium; Image Processing, Computer-Assisted; Heart Atria; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer
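For context, a sketch of how PSNR and SSIM figures like those reported above are commonly computed on 3D volumes with scikit-image; the data_range handling and the absence of any masking are assumptions, not the paper's exact evaluation protocol.

```python
# Standard PSNR/SSIM evaluation of a super-resolved volume against its high-resolution reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sr_metrics(hr_vol: np.ndarray, sr_vol: np.ndarray):
    """Return (PSNR, SSIM) for a reference HR volume and a super-resolved volume."""
    data_range = hr_vol.max() - hr_vol.min()
    psnr = peak_signal_noise_ratio(hr_vol, sr_vol, data_range=data_range)
    ssim = structural_similarity(hr_vol, sr_vol, data_range=data_range)
    return psnr, ssim

# Example with synthetic data:
rng = np.random.default_rng(0)
hr = rng.random((44, 128, 128)).astype(np.float32)
sr = hr + 0.05 * rng.standard_normal(hr.shape).astype(np.float32)
print(sr_metrics(hr, sr))
```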
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3795-3799, 2021 11.
Article in English | MEDLINE | ID: mdl-34892062

ABSTRACT

Cine cardiac magnetic resonance (CMR) imaging has brought about a significant paradigm shift in medical imaging, thanks to its capability of acquiring high spatial and temporal resolution images of the different structures within the heart, which can be used to reconstruct patient-specific ventricular computational models. In this work, we describe the development of dynamic patient-specific right ventricle (RV) models for normal subjects and patients with RV abnormalities, to be subsequently used to assess RV function based on motion and kinematic analysis. We first construct static RV models using segmentation masks of the cardiac chambers generated by CondenseUNet, our accurate, memory-efficient deep neural architecture featuring both a learned group structure and a regularized weight-pruner. To estimate the motion of the right ventricle, we use a deep learning-based deformable registration network that takes 3D input volumes and outputs a motion field, which is then used to generate isosurface meshes of the cardiac geometry at all cardiac frames by propagating the end-diastole (ED) isosurface mesh. The proposed model was trained and tested on the Automated Cardiac Diagnosis Challenge (ACDC) dataset featuring 150 cine cardiac MRI patient datasets. The isosurface meshes generated using the proposed pipeline were compared to those obtained via motion propagation using traditional non-rigid registration, based on several performance metrics including the Dice score and the mean absolute distance (MAD).


Subject(s)
Deep Learning; Magnetic Resonance Imaging, Cine; Heart Ventricles/diagnostic imaging; Humans; Magnetic Resonance Imaging; Neural Networks, Computer
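As a rough illustration of the mean absolute distance (MAD) surface metric mentioned above, a symmetric nearest-neighbour formulation over mesh vertices is sketched below; the paper's exact definition may differ.

```python
# Symmetric mean nearest-neighbour distance between two surface meshes (vertex sets).
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_distance(verts_a: np.ndarray, verts_b: np.ndarray) -> float:
    """verts_a: (N, 3), verts_b: (M, 3) vertex coordinates in mm. Returns MAD in mm."""
    d_ab = cKDTree(verts_b).query(verts_a)[0]   # each vertex of A to its closest vertex of B
    d_ba = cKDTree(verts_a).query(verts_b)[0]   # and vice versa
    return 0.5 * (d_ab.mean() + d_ba.mean())
```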
4.
Article in English | MEDLINE | ID: mdl-34079155

ABSTRACT

Late gadolinium enhanced (LGE) cardiac magnetic resonance (CMR) imaging, the current benchmark for assessment of myocardium viability, enables the identification and quantification of compromised myocardial tissue regions, as they appear hyper-enhanced compared to the surrounding, healthy myocardium. However, in LGE CMR images, the reduced contrast between the left ventricle (LV) myocardium and the LV blood-pool hampers accurate delineation of the LV myocardium. On the other hand, balanced steady-state free precession (bSSFP) cine CMR imaging provides high-resolution images ideal for accurate segmentation of the cardiac chambers. In the interest of generating patient-specific hybrid 3D and 4D anatomical models of the heart to identify and quantify the compromised myocardial tissue regions for revascularization therapy planning, in our previous work we presented a spatial transformer network (STN) based convolutional neural network (CNN) architecture for registration of the LGE and bSSFP cine CMR image datasets made available through the 2019 Multi-Sequence Cardiac Magnetic Resonance Segmentation Challenge (MS-CMRSeg). We performed a supervised registration by leveraging region of interest (RoI) information from the manual annotations of the LV blood-pool, LV myocardium and right ventricle (RV) blood-pool provided for both the LGE and the bSSFP cine CMR images. To reduce the reliance on the number of manual annotations needed to train such a network, we propose a joint deep learning framework consisting of three branches: an STN-based RoI-guided CNN for registration of LGE and bSSFP cine CMR images, a U-Net model for segmentation of bSSFP cine CMR images, and a U-Net model for segmentation of LGE CMR images. This results in the learning of a joint multi-scale feature encoder by optimizing all three branches of the network architecture simultaneously. Our experiments show that the registration results obtained by training the joint deep learning framework on 25 of the available 45 image datasets are comparable to those obtained by training the stand-alone STN-based CNN model on 35 of the available 45 image datasets, and show a significant improvement in registration performance over the stand-alone STN-based CNN model trained on 25 of the available 45 image datasets.
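A loose sketch of the joint-training idea: a single weighted objective over the registration branch and the two segmentation branches, so that gradients from all three tasks update a shared encoder. The loss terms, their weights and the 2D tensor shapes are illustrative assumptions rather than the framework's actual formulation.

```python
# Combined objective for one registration branch and two segmentation branches.
import torch
import torch.nn.functional as F

def soft_dice_loss(pred_logits, target_onehot, eps=1e-6):
    """1 - soft Dice; pred_logits and target_onehot are (B, C, H, W)."""
    prob = torch.softmax(pred_logits, dim=1)
    inter = (prob * target_onehot).sum(dim=(2, 3))
    denom = prob.sum(dim=(2, 3)) + target_onehot.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def joint_loss(warped_lge, fixed_bssfp, seg_bssfp_logits, seg_bssfp_gt,
               seg_lge_logits, seg_lge_gt, w_reg=1.0, w_seg=0.5):
    reg_term = F.mse_loss(warped_lge, fixed_bssfp)            # image similarity after STN warping
    seg_terms = soft_dice_loss(seg_bssfp_logits, seg_bssfp_gt) + \
                soft_dice_loss(seg_lge_logits, seg_lge_gt)     # both segmentation branches
    return w_reg * reg_term + w_seg * seg_terms
```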

5.
Article in English | MEDLINE | ID: mdl-35662880

ABSTRACT

Cardiac magnetic resonance imaging (MRI) provides 3D images with high-resolution in-plane information; however, they are known to have low through-plane resolution due to the trade-off between resolution, image acquisition time and signal-to-noise ratio. This results in anisotropic 3D images, which can complicate diagnosis, especially in late gadolinium enhanced (LGE) cardiac MRI, the reference imaging modality for locating the extent of myocardial fibrosis in various cardiovascular diseases such as myocardial infarction and atrial fibrillation. To address this issue, we propose a self-supervised deep learning-based approach to enhance the through-plane resolution of LGE MRI images. We train a convolutional neural network (CNN) model on randomly extracted patches of short-axis LGE MRI images, and this trained CNN model is used to leverage the information learned from the high-resolution in-plane data to improve the through-plane resolution. We conducted experiments on the LGE MRI dataset made available through the 2018 atrial segmentation challenge. Our proposed method achieved a mean peak signal-to-noise ratio (PSNR) of 36.99 and 35.92 and a mean structural similarity index measure (SSIM) of 0.9 and 0.84 when the CNN model was trained using low-resolution images downsampled by scale factors of 2 and 4, respectively.
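A small sketch of the inference step this self-supervised scheme implies: a model trained on high-resolution in-plane (short-axis) patches is applied to the orthogonal slices of the anisotropic volume, so the learned in-plane detail is transferred to the through-plane direction. The axis conventions and the model's input/output interface are assumptions.

```python
# Apply a 2D super-resolution model, trained on in-plane data, along the through-plane axis.
import torch

@torch.no_grad()
def super_resolve_through_plane(model, vol, scale=2):
    """vol: (D, H, W) tensor with D the low-resolution through-plane axis.
    Returns a (D*scale, H, W) volume by super-resolving each (D, W) orthogonal slice."""
    slices = []
    for y in range(vol.shape[1]):                      # iterate over one in-plane axis
        sl = vol[:, y, :].unsqueeze(0).unsqueeze(0)    # (1, 1, D, W) through-plane slice
        sr = model(sl)                                 # assumed to return (1, 1, D*scale, W)
        slices.append(sr[0, 0])
    return torch.stack(slices, dim=1)                  # reassemble into (D*scale, H, W)
```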

6.
Funct Imaging Model Heart ; 12738: 253-263, 2021 Jun.
Article in English | MEDLINE | ID: mdl-37216301

ABSTRACT

Patient-specific left ventricle (LV) myocardial models have the potential to be used in a variety of clinical scenarios for improved diagnosis and treatment planning. Cine cardiac magnetic resonance (MR) imaging provides high-resolution images for reconstructing patient-specific geometric models of the LV myocardium. With the advent of deep learning, accurate segmentation of the cardiac chambers from cine cardiac MR images and unsupervised image registration for cardiac motion estimation are attainable on large numbers of image datasets. Here, we propose a deep learning-based framework for the development of patient-specific geometric models of the LV myocardium from cine cardiac MR images, using the Automated Cardiac Diagnosis Challenge (ACDC) dataset. We use the deformation field estimated by a VoxelMorph-based convolutional neural network (CNN) to propagate the isosurface mesh and volume mesh of the end-diastole (ED) frame to the subsequent frames of the cardiac cycle. We assess the CNN-propagated models against segmented models at each cardiac phase, as well as against models propagated using a traditional nonrigid image registration technique. Additionally, we generate dynamic LV myocardial volume meshes at all phases of the cardiac cycle using the log barrier-based mesh warping (LBWARP) method and compare them with the CNN-propagated volume meshes.
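A brief sketch (with assumed voxel-space coordinates and (z, y, x) displacement ordering) of how an end-diastole mesh can be propagated with a dense displacement field of the kind produced by a VoxelMorph-style network: each vertex is moved by the displacement interpolated at its location.

```python
# Warp ED mesh vertices with a dense displacement field via trilinear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_vertices(verts: np.ndarray, disp: np.ndarray) -> np.ndarray:
    """verts: (N, 3) vertex coordinates in voxel units (z, y, x);
    disp: (3, D, H, W) displacement field in voxels. Returns the warped (N, 3) vertices."""
    coords = verts.T                                   # (3, N) sampling locations
    moved = np.empty_like(verts)
    for axis in range(3):
        # trilinear interpolation of each displacement component at the vertex positions
        moved[:, axis] = verts[:, axis] + map_coordinates(disp[axis], coords, order=1)
    return moved
```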

7.
Article in English | MEDLINE | ID: mdl-32699460

ABSTRACT

Cine cardiac magnetic resonance imaging (CMRI), the current gold standard for cardiac function analysis, provides images with high spatio-temporal resolution. Computing clinical cardiac parameters such as ventricular blood-pool volumes, ejection fraction and myocardial mass from these high-resolution images is an important step in cardiac disease diagnosis, therapy planning and monitoring of cardiac health. Accurate segmentation of the left ventricle blood-pool, myocardium and right ventricle blood-pool is crucial for computing these clinical cardiac parameters. U-Net inspired models are the current state-of-the-art for medical image segmentation. SegAN, a novel adversarial network architecture with a multi-scale loss function, has shown superior segmentation performance over U-Net models with a single-scale loss function. In this paper, we compare the performance of stand-alone U-Net models and U-Net models trained in the SegAN framework for segmentation of the left ventricle blood-pool, myocardium and right ventricle blood-pool from the 2017 ACDC segmentation challenge dataset. The mean Dice scores achieved by training the stand-alone U-Net models were 89.03%, 89.32% and 88.71% for the left ventricle blood-pool, myocardium and right ventricle blood-pool, respectively. The mean Dice scores achieved by training the U-Net models in the SegAN framework were 91.31%, 88.68% and 90.93% for the left ventricle blood-pool, myocardium and right ventricle blood-pool, respectively.
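For reference, a minimal sketch of the Dice overlap behind the per-structure scores quoted above, computed from integer label maps; the label convention (1 = RV blood-pool, 2 = myocardium, 3 = LV blood-pool) follows the ACDC dataset but is an assumption here.

```python
# Per-structure Dice overlap between a predicted and a ground-truth label map.
import numpy as np

def dice_per_label(pred: np.ndarray, gt: np.ndarray, labels=(1, 2, 3)):
    scores = {}
    for lab in labels:
        p, g = pred == lab, gt == lab
        denom = p.sum() + g.sum()
        # Convention: empty structures in both maps count as a perfect match.
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores
```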

8.
Article in English | MEDLINE | ID: mdl-34079839

ABSTRACT

In this work, we describe an unsupervised deep learning framework featuring a Laplacian-based operator as a smoothing loss for deformable registration of 3D cine cardiac magnetic resonance (CMR) images. Before registration, the input 3D images are corrected for slice misalignment by segmenting the left ventricle (LV) blood-pool, LV myocardium and right ventricle (RV) blood-pool using a U-Net model and aligning the 2D slices along the center of the LV blood-pool. We conducted experiments using the Automated Cardiac Diagnosis Challenge (ACDC) dataset. We used the registration deformation field to warp the manually segmented LV blood-pool, LV myocardium and RV blood-pool labels from the end-diastole (ED) frame to the other frames in the cardiac cycle. We achieved mean Dice scores of 94.84%, 85.22% and 84.36%, and Hausdorff distances (HD) of 2.74 mm, 5.88 mm and 9.04 mm, for the LV blood-pool, LV myocardium and RV blood-pool, respectively. We also introduce a pipeline to estimate patient tractography using the proposed CNN-based cardiac motion estimation.
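A compact sketch of a Laplacian-based smoothness penalty on a dense 3D displacement field, using a discrete six-neighbour Laplacian stencil applied per displacement component; the exact operator and weighting used in this framework may differ.

```python
# Laplacian smoothness penalty for a dense 3D displacement field.
import torch
import torch.nn.functional as F

def laplacian_smoothing_loss(disp: torch.Tensor) -> torch.Tensor:
    """disp: (B, 3, D, H, W) displacement field; returns the mean squared discrete Laplacian."""
    kernel = torch.zeros(1, 1, 3, 3, 3, device=disp.device, dtype=disp.dtype)
    kernel[0, 0, 1, 1, 1] = -6.0
    for dz, dy, dx in [(0, 1, 1), (2, 1, 1), (1, 0, 1), (1, 2, 1), (1, 1, 0), (1, 1, 2)]:
        kernel[0, 0, dz, dy, dx] = 1.0            # six-neighbour stencil
    b, c = disp.shape[:2]
    lap = F.conv3d(disp.reshape(b * c, 1, *disp.shape[2:]), kernel, padding=1)
    return lap.pow(2).mean()
```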

9.
Med Image Underst Anal ; 1248: 208-220, 2020 Jul.
Article in English | MEDLINE | ID: mdl-34278386

ABSTRACT

Late gadolinium enhanced (LGE) cardiac magnetic resonance (CMR) imaging is the current gold standard for assessing myocardium viability in patients diagnosed with myocardial infarction, myocarditis or cardiomyopathy. This imaging method enables the identification and quantification of myocardial tissue regions that appear hyper-enhanced. However, the delineation of the myocardium is hampered by the reduced contrast between the myocardium and the left ventricle (LV) blood-pool due to the gadolinium-based contrast agent. Balanced steady-state free precession (bSSFP) cine CMR imaging provides high-resolution images with superior contrast between the myocardium and the LV blood-pool. Hence, registration of the LGE CMR images and the bSSFP cine CMR images is a vital step for accurate localization and quantification of the compromised myocardial tissue. Here, we propose a Spatial Transformer Network (STN) inspired convolutional neural network (CNN) architecture to perform supervised registration of bSSFP cine CMR and LGE CMR images. We evaluate our proposed method on the 2019 Multi-Sequence Cardiac Magnetic Resonance Segmentation Challenge (MS-CMRSeg) dataset using several evaluation metrics, including the center-to-center LV and right ventricle (RV) blood-pool distance, and the contour-to-contour blood-pool and myocardium distance between the LGE and bSSFP CMR images. Specifically, our registration method reduced the bSSFP-to-LGE LV blood-pool center distance from 3.28 mm before registration to 2.27 mm after registration, and the RV blood-pool center distance from 4.35 mm before registration to 2.52 mm after registration. We also show that the average surface distance (ASD) between bSSFP and LGE is reduced from 2.53 mm to 2.09 mm, 1.78 mm to 1.40 mm and 2.42 mm to 1.73 mm for the LV blood-pool, LV myocardium and RV blood-pool, respectively.
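A short sketch of the center-to-center blood-pool distance used as an evaluation metric above: the Euclidean distance between the centroids of corresponding binary masks, scaled by the voxel spacing. The (z, y, x) spacing order is an assumption.

```python
# Center-to-center distance between corresponding blood-pool masks, in millimetres.
import numpy as np
from scipy.ndimage import center_of_mass

def center_distance_mm(mask_a: np.ndarray, mask_b: np.ndarray,
                       spacing=(1.0, 1.0, 1.0)) -> float:
    """Euclidean distance (mm) between the centroids of two binary masks."""
    ca = np.array(center_of_mass(mask_a))
    cb = np.array(center_of_mass(mask_b))
    return float(np.linalg.norm((ca - cb) * np.asarray(spacing)))
```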

10.
Funct Imaging Model Heart ; 11504: 415-424, 2019 Jun.
Article in English | MEDLINE | ID: mdl-32699845

ABSTRACT

Cardiac magnetic resonance imaging (CMRI) provides high-resolution images ideal for assessing cardiac function and diagnosing cardiovascular diseases. To assess cardiac function, estimation of the ejection fraction, ventricular volume, mass and stroke volume is crucial, and segmentation of the left ventricle from CMRI is the first critical step. Fully convolutional neural network architectures have proved very effective for medical image segmentation, with U-Net inspired architectures as the current state-of-the-art. Generative adversarial network (GAN) inspired architectures have recently gained popularity in medical image segmentation, one of them being SegAN, a novel end-to-end adversarial neural network architecture. In this paper, we investigate SegAN with three different types of U-Net inspired architectures for left ventricle segmentation from cardiac MRI data. We performed our experiments on the 2017 ACDC segmentation challenge dataset. Our results show that the performance of the U-Net architectures is better when trained in the SegAN framework than when trained stand-alone. The mean Dice scores achieved by the three different U-Net architectures trained in the SegAN framework were 93.62%, 92.49% and 94.57%, showing a significant improvement over their Dice scores following stand-alone training: 92.58%, 91.46% and 93.81%, respectively.
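A rough sketch of the SegAN-style multi-scale objective referred to above: a critic extracts features at several scales from the image weighted by the predicted mask and by the ground-truth mask, and the segmentor is trained to minimise the L1 gap between the two feature sets while the critic is trained to maximise it. The critic architecture and the 2D single-channel inputs here are illustrative assumptions.

```python
# Multi-scale L1 adversarial objective in the spirit of SegAN.
import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, feats=32):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, feats, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True)),
            nn.Sequential(nn.Conv2d(feats, 2 * feats, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True)),
            nn.Sequential(nn.Conv2d(2 * feats, 4 * feats, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True)),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)        # collect feature maps at each scale
        return feats

def multiscale_l1(critic, image, pred_mask, gt_mask):
    """L1 distance between critic features of the predicted-mask-weighted and GT-mask-weighted images."""
    f_pred = critic(image * pred_mask)
    f_gt = critic(image * gt_mask)
    return sum(torch.mean(torch.abs(a - b)) for a, b in zip(f_pred, f_gt))
```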
