1.
Med Image Anal; 97: 103276, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39068830

ABSTRACT

Radiation therapy plays a crucial role in cancer treatment, requiring radiation to be delivered precisely to tumors over multiple days while sparing healthy tissue. Computed tomography (CT) is integral to treatment planning, providing the electron density data needed for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast but lacks electron density information, while cone-beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents its own challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground-truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy, and showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
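The gamma pass rates quoted above combine a dose-difference criterion with a distance-to-agreement criterion. As a rough illustration only (not the challenge's evaluation code), a minimal 1D global gamma analysis can be sketched in Python; the 2%/2 mm criteria and all function names are assumptions:

```python
import math

def gamma_index_1d(ref, evalu, spacing_mm, dd=0.02, dta_mm=2.0):
    """Simplified 1D global gamma analysis (a sketch, not the
    challenge's actual evaluation pipeline).

    ref, evalu : dose profiles on the same grid (lists of floats)
    spacing_mm : grid spacing in millimetres
    dd         : dose-difference criterion, fraction of max reference dose
    dta_mm     : distance-to-agreement criterion in millimetres
    Returns per-point gamma values for the evaluated profile.
    """
    dmax = max(ref)
    gammas = []
    for i, de in enumerate(evalu):
        best = float("inf")
        # search the reference profile for the closest agreement
        for j, dr in enumerate(ref):
            dist = (i - j) * spacing_mm
            ddiff = de - dr
            g2 = (dist / dta_mm) ** 2 + (ddiff / (dd * dmax)) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (both criteria satisfied)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

A point passes when its gamma is at most 1, i.e., some reference point lies within the combined dose/distance tolerance ellipse; the pass rate is the fraction of passing points.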

2.
Med Phys; 50(9): 5331-5342, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37527331

ABSTRACT

BACKGROUND: Respiratory-resolved four-dimensional magnetic resonance imaging (4D-MRI) provides essential motion information for accurate radiation treatments of mobile tumors. However, obtaining high-quality 4D-MRI suffers from long acquisition and reconstruction times. PURPOSE: To develop a deep learning architecture to quickly acquire and reconstruct high-quality 4D-MRI, enabling accurate motion quantification for MRI-guided radiotherapy (MRIgRT). METHODS: A small convolutional neural network called MODEST is proposed to reconstruct 4D-MRI by performing a spatial and temporal decomposition, avoiding the need for 4D convolutions to exploit the spatio-temporal information present in 4D-MRI. The network is trained on undersampled 4D-MRI after respiratory binning to reconstruct high-quality 4D-MRI obtained by compressed sensing reconstruction. It is trained, validated, and tested on 4D-MRI of 28 lung cancer patients (18, 5, and 5 patients, respectively) acquired with a T1-weighted golden-angle radial stack-of-stars (GA-SOS) sequence. Network performance is evaluated on image quality, measured by the structural similarity index (SSIM), and on motion consistency, by comparing the position of the lung-liver interface on undersampled 4D-MRI before and after respiratory binning. The network is compared to conventional architectures such as a U-Net, which has 30 times more trainable parameters. RESULTS: MODEST can reconstruct 4D-MRI with higher image quality than a U-Net, despite a thirty-fold reduction in trainable parameters. High-quality 4D-MRI can be obtained using MODEST in approximately 2.5 min, including acquisition, processing, and reconstruction. CONCLUSION: High-quality accelerated 4D-MRI can be obtained using MODEST, which is particularly interesting for MRIgRT.
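The SSIM used above compares the means, variances, and covariance of two images. A minimal global (single-window) sketch of the formula, assuming images flattened to lists and the standard stabilizing constants — not the windowed implementation typically used in practice:

```python
def ssim_global(x, y, data_range=1.0):
    """Global (single-window) structural similarity between two images
    flattened to equal-length lists of floats. A simplified sketch of
    the metric, not the study's evaluation code."""
    n = len(x)
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical images score exactly 1; anti-correlated or mismatched images score lower. Practical implementations slide a Gaussian window over the image and average the local SSIM values.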


Subject(s)
Lung Neoplasms , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Motion , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
3.
Med Image Anal; 80: 102509, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35688047

ABSTRACT

Convolutional neural networks (CNNs) are increasingly adopted in medical imaging, e.g., to reconstruct high-quality images from undersampled magnetic resonance imaging (MRI) acquisitions or to estimate subject motion during an examination. MRI is naturally acquired in the complex domain ℂ, obtaining magnitude and phase information in k-space. However, CNNs in complex regression tasks are almost exclusively trained to minimize the L2 loss or to maximize the magnitude structural similarity (SSIM), which may be suboptimal as these objectives do not take full advantage of the magnitude and phase information present in the complex domain. This work shows that minimizing the L2 loss in the complex domain yields an asymmetric magnitude/phase loss landscape and is biased, underestimating the reconstructed magnitude. To resolve this, we propose a new loss function for regression in the complex domain called ⊥-loss, which adds a novel phase term to established magnitude loss functions, e.g., L2 or SSIM. We show that ⊥-loss is symmetric in the magnitude/phase domain and has favourable properties when applied to regression in the complex domain. Specifically, we evaluate the ⊥+ℓ2-loss and ⊥+SSIM-loss on complex undersampled MR image reconstruction and MR image registration tasks. Training a model to minimize the ⊥+ℓ2-loss outperforms models trained to minimize the L2 loss, and yields performance similar to models trained to maximize the magnitude SSIM while offering high-quality phase reconstruction. Moreover, ⊥-loss is defined in ℝⁿ, and we apply the loss function to the ℝ² domain by learning 2D deformation vector fields for image registration. A model trained to minimize the ⊥+ℓ2-loss outperforms models trained to minimize the end-point error loss.
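The abstract describes ⊥-loss as a phase term added to a magnitude loss. A minimal sketch of a perpendicular-style complex loss of that general form — the perpendicular distance from the prediction to the line spanned by the target, plus a squared magnitude error. The exact weighting and normalization here are assumptions for illustration, not the authors' implementation:

```python
def perp_l2_loss(pred, target, lam=1.0, eps=1e-12):
    """Sketch of a perpendicular-style loss for complex-valued
    regression (assumed form, not the paper's exact definition).

    pred, target : equal-length lists of Python complex numbers
    lam          : weight of the magnitude term
    """
    total = 0.0
    for p, t in zip(pred, target):
        # phase-sensitive term: perpendicular distance from p to the
        # line through the origin and t
        perp = abs((p * t.conjugate()).imag) / (abs(t) + eps)
        # magnitude term: plain squared magnitude error
        mag = (abs(p) - abs(t)) ** 2
        total += perp + lam * mag
    return total / len(pred)
```

Unlike a magnitude-only loss, this penalizes a prediction whose magnitude is correct but whose phase is rotated, which is the kind of phase sensitivity the abstract motivates.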


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
4.
Med Phys; 48(11): 6597-6613, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34525223

ABSTRACT

PURPOSE: To enable real-time adaptive magnetic resonance imaging-guided radiotherapy (MRIgRT) by obtaining time-resolved three-dimensional (3D) deformation vector fields (DVFs) with high spatiotemporal resolution and low latency (<500 ms). THEORY AND METHODS: Respiratory-resolved T1-weighted 4D-MRI of 27 patients with lung cancer were acquired using a golden-angle radial stack-of-stars readout. A multiresolution convolutional neural network (CNN) called TEMPEST was trained on up to 32× retrospectively undersampled MRI of 17 patients, reconstructed with a nonuniform fast Fourier transform, to learn optical flow DVFs. TEMPEST was validated using 4D respiratory-resolved MRI, a digital phantom, and a physical motion phantom. The time-resolved motion estimation was evaluated in vivo using two volunteer scans, acquired on a hybrid MR scanner with an integrated linear accelerator. Finally, we evaluated the model's robustness on a publicly available four-dimensional computed tomography (4D-CT) dataset. RESULTS: TEMPEST produced accurate DVFs on respiratory-resolved MRI at 20-fold acceleration, with an average end-point error <2 mm, both on respiratory-sorted MRI and on a digital phantom. TEMPEST estimated accurate time-resolved DVFs on MRI of a motion phantom, with an error <2 mm at 28× undersampling. On two volunteer scans, TEMPEST accurately estimated motion compared to the self-navigation signal using 50 spokes per dynamic (366× undersampling). At this undersampling factor, DVFs were estimated within 200 ms, including MRI acquisition. On fully sampled CT data, we achieved a target registration error of 1.87 ± 1.65 mm without retraining the model. CONCLUSION: A CNN trained on undersampled MRI produced accurate 3D DVFs with high spatiotemporal resolution for MRIgRT.
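The end-point error used to validate TEMPEST is the mean Euclidean distance between estimated and reference deformation vectors. A minimal sketch, assuming DVFs given as lists of (dx, dy, dz) displacements in millimetres — illustrative only, not the authors' evaluation code:

```python
import math

def end_point_error(dvf_est, dvf_ref):
    """Mean end-point error (EPE) between two deformation vector
    fields, each a list of (dx, dy, dz) tuples in mm."""
    total = 0.0
    for (ex, ey, ez), (rx, ry, rz) in zip(dvf_est, dvf_ref):
        # Euclidean distance between the two displacement vectors
        total += math.sqrt((ex - rx) ** 2 + (ey - ry) ** 2 + (ez - rz) ** 2)
    return total / len(dvf_est)
```

A mean EPE below 2 mm, as reported above, means the estimated displacement at a voxel ends up on average within 2 mm of the reference displacement.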


Subject(s)
Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Imaging, Three-Dimensional , Motion , Phantoms, Imaging , Respiration , Retrospective Studies
5.
Phys Med Biol; 65(15): 155015, 2020 Aug 7.
Article in English | MEDLINE | ID: mdl-32408295

ABSTRACT

To enable magnetic resonance imaging (MRI)-guided radiotherapy with real-time adaptation, motion must be estimated quickly and with low latency. The motion estimate is used to adapt the radiation beam to the current anatomy, yielding a more conformal dose distribution. As the MR acquisition is the largest component of the latency, deep learning (DL) may reduce the total latency by enabling much higher undersampling factors than conventional reconstruction and motion estimation methods. The benefit of DL for image reconstruction and motion estimation was investigated for obtaining accurate deformation vector fields (DVFs) with high temporal resolution and minimal latency. 2D cine MRI acquired at 1.5 T from 135 abdominal cancer patients were retrospectively included in this study, and undersampled radial golden-angle acquisitions were retrospectively simulated. DVFs were computed using different combinations of conventional and DL-based methods for image reconstruction and motion estimation, allowing a comparison of four approaches to achieve real-time motion estimation. The four approaches were evaluated on the end-point error and root-mean-square error relative to a ground-truth optical flow estimate on fully sampled images, the structural similarity (SSIM) after registration, and the time necessary to acquire k-space, reconstruct an image, and estimate motion. The lowest DVF error and highest SSIM were obtained using conventional methods up to [Formula: see text]. For undersampling factors [Formula: see text], the lowest DVF error and highest SSIM were obtained using conventional image reconstruction and DL-based motion estimation. With this combination, accurate DVFs can be obtained up to [Formula: see text] with an average root-mean-square error up to 1 mm and an SSIM greater than 0.8 after registration, taking 60 ms. High-quality 2D DVFs from highly undersampled k-space can thus be obtained at high temporal resolution by combining conventional image reconstruction with a deep learning-based motion estimation approach for real-time adaptive MRI-guided radiotherapy.
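The latency budgets discussed above are dominated by k-space acquisition, which for a radial readout scales with the number of spokes per dynamic. A small illustrative calculator of that arithmetic — the TR and processing times in the example are assumptions, not values from the study:

```python
def acceleration(n_nyquist, n_acquired):
    """Undersampling factor relative to a fully sampled acquisition."""
    return n_nyquist / n_acquired

def radial_latency_ms(n_spokes, tr_ms, recon_ms, motion_ms):
    """Rough total latency for one dynamic of a radial acquisition:
    acquisition time (spokes x repetition time) plus reconstruction
    and motion-estimation time. Illustrative arithmetic only."""
    acq_ms = n_spokes * tr_ms
    return acq_ms + recon_ms + motion_ms
```

For example, halving the spokes per dynamic halves the acquisition component of the latency while doubling the undersampling factor, which is the trade-off the DL-based motion estimation above is designed to tolerate.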


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine , Movement , Radiotherapy, Image-Guided , Abdominal Neoplasms/diagnostic imaging , Abdominal Neoplasms/physiopathology , Abdominal Neoplasms/radiotherapy , Humans , Retrospective Studies , Time Factors