Results 1 - 20 of 68
1.
Med Phys ; 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38865687

ABSTRACT

BACKGROUND: Dual-energy computed tomography (DECT) and material decomposition play vital roles in quantitative medical imaging. However, the decomposition process may suffer from significant noise amplification, leading to severely degraded image signal-to-noise ratios (SNRs). While existing iterative algorithms perform noise suppression using different image priors, these heuristic image priors cannot accurately represent the features of the target image manifold. Although deep learning-based decomposition methods have been reported, these methods are in the supervised-learning framework requiring paired data for training, which is not readily available in clinical settings. PURPOSE: This work aims to develop an unsupervised-learning framework with data-measurement consistency for image-domain material decomposition in DECT. METHODS: The proposed framework combines iterative decomposition and deep learning-based image prior in a generative adversarial network (GAN) architecture. In the generator module, a data-fidelity loss is introduced to enforce the measurement consistency in material decomposition. In the discriminator module, the discriminator is trained to differentiate the low-noise material-specific images from the high-noise images. In this scheme, paired images of DECT and ground-truth material-specific images are not required for the model training. Once trained, the generator can perform image-domain material decomposition with noise suppression in a single step. RESULTS: In the simulation studies of head and lung digital phantoms, the proposed method reduced the standard deviation (SD) in decomposed images by 97% and 91% from the values in direct inversion results. It also generated decomposed images with structural similarity index measures (SSIMs) greater than 0.95 against the ground truth. In the clinical head and lung patient studies, the proposed method suppressed the SD by 95% and 93% compared to the decomposed images of matrix inversion. 
CONCLUSIONS: Since the invention of DECT, noise amplification during material decomposition has been one of the biggest challenges, impeding its quantitative use in clinical practice. The proposed method performs accurate material decomposition with efficient noise suppression. Furthermore, the proposed method is within an unsupervised-learning framework, which does not require paired data for model training and resolves the issue of lack of ground-truth data in clinical scenarios.
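The noise amplification that motivates this work is easy to see in the direct matrix inversion the authors benchmark against. The sketch below is a minimal numpy illustration; the 2×2 mixing matrix and material fractions are made-up values for illustration, not the paper's calibration:

```python
import numpy as np

# Hypothetical 2x2 mixing matrix: attenuation of (water, bone) at low/high kVp.
A = np.array([[0.28, 0.45],
              [0.20, 0.25]])

def direct_inversion(low_kvp, high_kvp, A):
    """Image-domain material decomposition by per-pixel matrix inversion."""
    meas = np.stack([low_kvp, high_kvp])       # (2, H, W) dual-energy measurements
    Ainv = np.linalg.inv(A)
    basis = np.tensordot(Ainv, meas, axes=1)   # (2, H, W) material-specific images
    return basis[0], basis[1]

# Noiseless phantom: water fraction 0.7, bone fraction 0.3 everywhere.
water, bone = np.full((8, 8), 0.7), np.full((8, 8), 0.3)
low = A[0, 0] * water + A[0, 1] * bone
high = A[1, 0] * water + A[1, 1] * bone

w_hat, b_hat = direct_inversion(low, high, A)
# The two energy channels are nearly collinear, so the inverse has large
# entries: small measurement noise is magnified in the decomposed images.
noise_gain = np.linalg.norm(np.linalg.inv(A), 2)
```

The large spectral norm of the inverse is exactly the SNR degradation the proposed unsupervised GAN framework is designed to suppress.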

2.
Med Phys ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889368

ABSTRACT

BACKGROUND: Iodine maps, derived from image processing of contrast-enhanced dual-energy computed tomography (DECT) scans, highlight the differences in tissue iodine uptake. They find multiple applications in radiology, including vascular imaging, pulmonary evaluation, kidney assessment, and cancer diagnosis. In radiation oncology, they can contribute to designing more accurate and personalized treatment plans. However, DECT scanners are not commonly available in radiation therapy centers. Additionally, the use of iodine contrast agents is not suitable for all patients, especially those allergic to iodine agents, further limiting the accessibility of this technology. PURPOSE: The purpose of this work is to generate synthetic iodine map images from non-contrast single-energy CT (SECT) images using a conditional denoising diffusion probabilistic model (DDPM). METHODS: One hundred twenty-six head-and-neck patients' images were retrospectively investigated in this work. Each patient underwent non-contrast SECT and contrast-enhanced DECT scans. Ground-truth iodine maps were generated from the contrast-enhanced DECT scans using the commercial software syngo.via installed in the clinic. A conditional DDPM was implemented in this work to synthesize iodine maps. Three-fold cross-validation was conducted, with each iteration selecting the data from 42 patients as the test dataset and the remainder as the training dataset. Pixel-to-pixel generative adversarial network (GAN) and CycleGAN served as reference methods for evaluating the proposed DDPM method. RESULTS: The accuracy of the proposed DDPM was evaluated using three quantitative metrics: mean absolute error (MAE) (1.039 ± 0.345 mg/mL), structural similarity index measure (SSIM) (0.89 ± 0.10), and peak signal-to-noise ratio (PSNR) (25.4 ± 3.5 dB). Compared to the reference methods, the proposed technique showed superior performance across the evaluated metrics, further validated by paired two-tailed t-tests.
CONCLUSION: The proposed conditional DDPM framework has demonstrated the feasibility of generating synthetic iodine map images from non-contrast SECT images. This method presents a potential clinical application, which is providing accurate iodine contrast map in instances where only non-contrast SECT is accessible.
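The forward half of a conditional DDPM such as the one used here has a simple closed form. The schedule values below are the common linear-beta defaults, assumed for illustration rather than taken from the paper:

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and the cumulative-product alphas used by DDPM."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas_bar = np.cumprod(1.0 - betas)
    return betas, alphas_bar

def q_sample(x0, t, alphas_bar, rng):
    """Forward diffusion: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps, eps

rng = np.random.default_rng(0)
betas, abar = make_schedule()
x0 = rng.standard_normal((16, 16))   # stand-in for a ground-truth iodine map
xT, _ = q_sample(x0, 999, abar, rng)
# At t = T the sample is almost pure Gaussian noise (abar_T is near zero);
# the reverse network then denoises conditioned on the non-contrast SECT.
```

The conditioning enters only in the learned reverse process, which is why no paired noise schedule changes are needed per patient.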

3.
ArXiv ; 2024 May 04.
Article in English | MEDLINE | ID: mdl-38745706

ABSTRACT

Background: Stereotactic body radiotherapy (SBRT) is a well-established treatment modality for liver metastases in patients unsuitable for surgery. Both CT and MRI are useful during treatment planning for accurate target delineation and to reduce potential organs-at-risk (OAR) toxicity from radiation. MRI-CT deformable image registration (DIR) is required to propagate the contours defined on high-contrast MRI to CT images. An accurate DIR method could lead to more precisely defined treatment volumes and superior OAR sparing on the treatment plan. Therefore, it is beneficial to develop an accurate MRI-CT DIR for liver SBRT. Purpose: To create a new deep learning model that can estimate the deformation vector field (DVF) for directly registering abdominal MRI-CT images. Methods: The proposed method assumed a diffeomorphic deformation. By using topology-preserved deformation features extracted from the probabilistic diffeomorphic registration model, abdominal motion can be accurately obtained and utilized for DVF estimation. The model integrated Swin transformers, which have demonstrated superior performance in motion tracking, into the convolutional neural network (CNN) for deformation feature extraction. The model was optimized using a cross-modality image similarity loss and a surface matching loss. To compute the image loss, a modality-independent neighborhood descriptor (MIND) was used between the deformed MRI and CT images. The surface matching loss was determined by measuring the distance between the warped coordinates of the surfaces of contoured structures on the MRI and CT images. To evaluate the performance of the model, a retrospective study was carried out on a group of 50 liver cases that underwent rigid registration of MRI and CT scans. 
The deformed MRI image was assessed against the CT image using the target registration error (TRE), Dice similarity coefficient (DSC), and mean surface distance (MSD) between the deformed contours of the MRI image and manual contours of the CT image. Results: When compared to only rigid registration, DIR with the proposed method resulted in an increase of the mean DSC values of the liver and portal vein from 0.850±0.102 and 0.628±0.129 to 0.903±0.044 and 0.763±0.073, a decrease of the mean MSD of the liver from 7.216±4.513 mm to 3.232±1.483 mm, and a decrease of the TRE from 26.238±2.769 mm to 8.492±1.058 mm. Conclusion: The proposed DIR method based on a diffeomorphic transformer provides an effective and efficient way to generate an accurate DVF from an MRI-CT image pair of the abdomen. It could be utilized in the current treatment planning workflow for liver SBRT.

4.
Med Phys ; 2024 May 31.
Article in English | MEDLINE | ID: mdl-38820286

ABSTRACT

BACKGROUND: Stereotactic body radiotherapy (SBRT) is a well-established treatment modality for liver metastases in patients unsuitable for surgery. Both CT and MRI are useful during treatment planning for accurate target delineation and to reduce potential organs-at-risk (OAR) toxicity from radiation. MRI-CT deformable image registration (DIR) is required to propagate the contours defined on high-contrast MRI to CT images. An accurate DIR method could lead to more precisely defined treatment volumes and superior OAR sparing on the treatment plan. Therefore, it is beneficial to develop an accurate MRI-CT DIR for liver SBRT. PURPOSE: To create a new deep learning model that can estimate the deformation vector field (DVF) for directly registering abdominal MRI-CT images. METHODS: The proposed method assumed a diffeomorphic deformation. By using topology-preserved deformation features extracted from the probabilistic diffeomorphic registration model, abdominal motion can be accurately obtained and utilized for DVF estimation. The model integrated Swin transformers, which have demonstrated superior performance in motion tracking, into the convolutional neural network (CNN) for deformation feature extraction. The model was optimized using a cross-modality image similarity loss and a surface matching loss. To compute the image loss, a modality-independent neighborhood descriptor (MIND) was used between the deformed MRI and CT images. The surface matching loss was determined by measuring the distance between the warped coordinates of the surfaces of contoured structures on the MRI and CT images. To evaluate the performance of the model, a retrospective study was carried out on a group of 50 liver cases that underwent rigid registration of MRI and CT scans. 
The deformed MRI image was assessed against the CT image using the target registration error (TRE), Dice similarity coefficient (DSC), and mean surface distance (MSD) between the deformed contours of the MRI image and manual contours of the CT image. RESULTS: When compared to only rigid registration, DIR with the proposed method resulted in an increase of the mean DSC values of the liver and portal vein from 0.850 ± 0.102 and 0.628 ± 0.129 to 0.903 ± 0.044 and 0.763 ± 0.073, a decrease of the mean MSD of the liver from 7.216 ± 4.513 mm to 3.232 ± 1.483 mm, and a decrease of the TRE from 26.238 ± 2.769 mm to 8.492 ± 1.058 mm. CONCLUSION: The proposed DIR method based on a diffeomorphic transformer provides an effective and efficient way to generate an accurate DVF from an MRI-CT image pair of the abdomen. It could be utilized in the current treatment planning workflow for liver SBRT.
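The registration metrics reported in this entry (and in the ArXiv preprint above, entry 3) are straightforward to compute. A minimal sketch with toy masks and landmarks, not the study's data:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def tre(pts_moving, pts_fixed):
    """Target registration error: mean Euclidean distance between paired landmarks."""
    return float(np.linalg.norm(pts_moving - pts_fixed, axis=1).mean())

# Toy "deformed MRI" liver mask vs. "CT" manual contour, offset by one row.
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 2:8] = True

# Two paired 3D landmarks (mm): one off by a 3-4-5 displacement, one matched.
p = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
q = np.array([[3.0, 4.0, 0.0], [10.0, 0.0, 0.0]])

d = dice(a, b)    # overlap of 30 voxels out of 36 + 36
t = tre(p, q)     # mean of distances 5 mm and 0 mm
```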

5.
Phys Med Biol ; 69(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38744300

ABSTRACT

Objectives. In this work, we proposed a deep-learning segmentation algorithm for cardiac magnetic resonance imaging to aid in contouring of the left ventricle, right ventricle, and myocardium (Myo). Approach. We proposed a shifted-window multilayer perceptron (Swin-MLP) mixer network built upon a 3D U-shaped symmetric encoder-decoder structure. We evaluated our proposed network using public data from 100 individuals. The network performance was quantitatively evaluated using 3D volume similarity between the ground-truth contours and the predictions using Dice score coefficient, sensitivity, and precision, as well as 2D surface similarity using Hausdorff distance (HD), mean surface distance (MSD), and residual mean square distance (RMSD). We benchmarked the performance against two other current leading-edge networks, Dynamic UNet and Swin-UNetr, on the same public dataset. Results. The proposed network achieved the following volume similarity metrics when averaged over three cardiac segments: Dice = 0.952 ± 0.017, precision = 0.948 ± 0.016, sensitivity = 0.956 ± 0.022. The average surface similarities were HD = 1.521 ± 0.121 mm, MSD = 0.266 ± 0.075 mm, and RMSD = 0.668 ± 0.288 mm. The network shows statistically significant improvement over the Dynamic UNet and Swin-UNetr algorithms for most volumetric and surface metrics with p-value less than 0.05. Overall, the proposed Swin-MLP mixer network demonstrates better or comparable performance than competing methods. Significance. The proposed Swin-MLP mixer network demonstrates more accurate segmentation performance compared to current leading-edge methods. This robust method demonstrates the potential to streamline clinical workflows for multiple applications.
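The shifted-window idea underlying the Swin-MLP mixer partitions a feature map into fixed windows, mixes within each, then cyclically shifts so information crosses window borders. A toy 2D numpy sketch of the partition and shift; window size and shapes here are arbitrary illustrations:

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping w x w windows."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w * w, C)

def cyclic_shift(x, s):
    """Cyclic shift applied between consecutive window-mixing blocks."""
    return np.roll(x, shift=(-s, -s), axis=(0, 1))

x = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
wins = window_partition(x, 4)                      # 4 windows of 16 tokens each
shifted_wins = window_partition(cyclic_shift(x, 2), 4)
```

Within each window an MLP mixes the 16 tokens; the shift means a token mixed in one block sits in a different window in the next block.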


Subjects
Heart; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Heart/diagnostic imaging; Neural Networks, Computer; Deep Learning; Algorithms
6.
Med Phys ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38588512

ABSTRACT

PURPOSE: Positron Emission Tomography (PET) has been a commonly used imaging modality in broad clinical applications. One of the most important tradeoffs in PET imaging is between image quality and radiation dose: high image quality comes with high radiation exposure. Improving image quality is desirable for all clinical applications while minimizing radiation exposure is needed to reduce risk to patients. METHODS: We introduce PET Consistency Model (PET-CM), an efficient diffusion-based method for generating high-quality full-dose PET images from low-dose PET images. It employs a two-step process, adding Gaussian noise to full-dose PET images in the forward diffusion, and then denoising them using a PET Shifted-window Vision Transformer (PET-VIT) network in the reverse diffusion. The PET-VIT network learns a consistency function that enables direct denoising of Gaussian noise into clean full-dose PET images. PET-CM achieves state-of-the-art image quality while requiring significantly less computation time than other methods. Evaluation with normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), multi-scale structure similarity index (SSIM), normalized cross-correlation (NCC), and clinical evaluation including Human Ranking Score (HRS) and Standardized Uptake Value (SUV) Error analysis shows its superiority in synthesizing full-dose PET images from low-dose inputs. RESULTS: In experiments comparing eighth-dose to full-dose images, PET-CM demonstrated impressive performance with NMAE of 1.278 ± 0.122%, PSNR of 33.783 ± 0.824 dB, SSIM of 0.964 ± 0.009, NCC of 0.968 ± 0.011, HRS of 4.543, and SUV Error of 0.255 ± 0.318%, with an average generation time of 62 s per patient. This is a significant improvement compared to the state-of-the-art diffusion-based model with PET-CM reaching this result 12× faster. 
Similarly, in the quarter-dose to full-dose image experiments, PET-CM delivered competitive outcomes, achieving an NMAE of 0.973 ± 0.066%, PSNR of 36.172 ± 0.801 dB, SSIM of 0.984 ± 0.004, NCC of 0.990 ± 0.005, HRS of 4.428, and SUV Error of 0.151 ± 0.192% using the same generation process, underlining its high quantitative and clinical precision in both denoising scenarios. CONCLUSIONS: We propose PET-CM, the first efficient diffusion-model-based method for estimating full-dose PET images from low-dose images. PET-CM provides quality comparable to the state-of-the-art diffusion model with higher efficiency. By utilizing this approach, it becomes possible to maintain high-quality PET images suitable for clinical use while mitigating the risks associated with radiation. The code is available at https://github.com/shaoyanpan/Full-dose-Whole-body-PET-Synthesis-from-Low-dose-PET-Using-Consistency-Model.
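A consistency model learns a function that maps any noise level directly to the clean image, with the boundary condition f(x, σ_min) = x enforced by a skip parameterization. The sketch below uses commonly published coefficient forms and stands in for the actual PET-VIT network with a dummy function; none of the constants are from this paper:

```python
import numpy as np

SIGMA_MIN, SIGMA_DATA = 0.002, 0.5   # illustrative schedule constants

def c_skip(t):
    return SIGMA_DATA**2 / ((t - SIGMA_MIN)**2 + SIGMA_DATA**2)

def c_out(t):
    return SIGMA_DATA * (t - SIGMA_MIN) / np.sqrt(t**2 + SIGMA_DATA**2)

def consistency_apply(F, x, t):
    """Consistency-model output: skip connection enforces f(x, sigma_min) = x."""
    return c_skip(t) * x + c_out(t) * F(x, t)

F = lambda x, t: np.tanh(x)   # stand-in for the trained denoising network
x = np.random.default_rng(1).standard_normal((4, 4))
y_min = consistency_apply(F, x, SIGMA_MIN)   # identity at the boundary
```

Because one evaluation maps noise straight to a clean image, sampling takes a single step, which is the source of the reported 12x speedup over iterative diffusion sampling.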

7.
Med Phys ; 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38346111

ABSTRACT

BACKGROUND: Prostate cancer (PCa) is the most common cancer in men and the second leading cause of male cancer-related death. Gleason score (GS) is the primary driver of PCa risk-stratification and medical decision-making, but can only be assessed at present via biopsy under anesthesia. Magnetic resonance imaging (MRI) is a promising non-invasive method to further characterize PCa, providing additional anatomical and functional information. Meanwhile, the diagnostic power of MRI is limited by qualitative or, at best, semi-quantitative interpretation criteria, leading to inter-reader variability. PURPOSES: Computer-aided diagnosis employing quantitative MRI analysis has yielded promising results in non-invasive prediction of GS. However, convolutional neural networks (CNNs) do not implicitly impose a frame of reference to the objects. Thus, CNNs do not encode the positional information properly, limiting method robustness against simple image variations such as flipping, scaling, or rotation. Capsule network (CapsNet) has been proposed to address this limitation and achieves promising results in this domain. In this study, we develop a 3D Efficient CapsNet to stratify GS-derived PCa risk using T2-weighted (T2W) MRI images. METHODS: In our method, we used 3D CNN modules to extract spatial features and primary capsule layers to encode vector features. We then propose to integrate fully-connected capsule layers (FC Caps) to create a deeper hierarchy for PCa grading prediction. FC Caps comprises a secondary capsule layer which routes active primary capsules and a final capsule layer which outputs PCa risk. To account for data imbalance, we propose a novel dynamic weighted margin loss. We evaluate our method on a public PCa T2W MRI dataset from the Cancer Imaging Archive containing data from 976 patients. 
RESULTS: Two groups of experiments were performed: (1) we first identified high-risk disease by classifying low + medium risk versus high risk; (2) we then stratified disease in one-versus-one fashion: low versus high risk, medium versus high risk, and low versus medium risk. Five-fold cross validation was performed. Our model achieved an area under receiver operating characteristic curve (AUC) of 0.83 and 0.64 F1-score for low versus high grade, 0.79 AUC and 0.75 F1-score for low + medium versus high grade, 0.75 AUC and 0.69 F1-score for medium versus high grade and 0.59 AUC and 0.57 F1-score for low versus medium grade. Our method outperformed state-of-the-art radiomics-based classification and deep learning methods with the highest metrics for each experiment. Our divide-and-conquer strategy achieved weighted Cohen's Kappa score of 0.41, suggesting moderate agreement with ground truth PCa risks. CONCLUSIONS: In this study, we proposed a novel 3D Efficient CapsNet for PCa risk stratification and demonstrated its feasibility. This developed tool provided a non-invasive approach to assess PCa risk from T2W MR images, which might have potential to personalize the treatment of PCa and reduce the number of unnecessary biopsies.
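Capsule networks encode features as vectors whose length represents activation probability, which is what the squashing nonlinearity enforces. A minimal sketch of that nonlinearity; the routing between capsule layers and the paper's dynamic weighted margin loss are omitted:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule squashing: preserves direction, maps vector length into [0, 1)."""
    n2 = np.sum(v * v, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

# Two toy capsules: one strongly active (length 5), one silent.
caps = np.array([[3.0, 4.0],
                 [0.0, 0.0]])
out = squash(caps)
# Length of the first capsule becomes 25/26 (close to 1); the zero
# capsule stays zero, so length can be read as class probability.
```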

8.
J Med Imaging (Bellingham) ; 11(1): 014503, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38370421

ABSTRACT

Purpose: Glioblastoma (GBM) is aggressive and malignant. The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter in GBM tissue is considered an important biomarker for developing the most effective treatment plan. Although the standard method for assessing the MGMT promoter methylation status is via bisulfite modification and deoxyribonucleic acid (DNA) sequencing of biopsy or surgical specimens, a secondary automated method based on medical imaging may improve the efficiency and accuracy of those tests. Approach: We propose a deep vision graph neural network (ViG) using multiparametric magnetic resonance imaging (MRI) to predict the MGMT promoter methylation status noninvasively. Our model was compared to the RSNA radiogenomic classification winners. The dataset includes 583 usable patient cases. Combinations of MRI sequences were compared. Our multi-sequence fusion strategy was compared with those using single MR sequences. Results: Our best model [Fluid Attenuated Inversion Recovery (FLAIR), T1-weighted pre-contrast (T1w), T2-weighted (T2)] outperformed the winning models with a test area under the curve (AUC) of 0.628, an accuracy of 0.632, a precision of 0.646, a recall of 0.677, a specificity of 0.581, and an F1 score of 0.661. Compared to the winning models with single MR sequences, our ViG utilizing fused-MRI showed a significant improvement statistically in AUC scores, which are FLAIR (p=0.042), T1w (p=0.017), T1wCE (p=0.001), and T2 (p=0.018). Conclusions: Our model is superior to challenge champions. A graph representation of the medical images enabled good handling of complexity and irregularity. Our work provides an automatic secondary check pipeline to ensure the correctness of MGMT methylation status prediction.
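A vision GNN such as ViG treats image patches as graph nodes connected by feature-space k-nearest neighbors. The sketch below shows the graph construction and a simplified max-relative aggregation; the features and k are toy values, not the model's configuration:

```python
import numpy as np

def knn_graph(feats, k):
    """Build a k-NN neighbor index over patch features, as in a vision GNN."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]  # (N, k) neighbor indices

def aggregate(feats, nbrs):
    """Simplified max-relative graph conv: concat feature with max neighbor diff."""
    diff = feats[nbrs] - feats[:, None]  # (N, k, C) relative features
    return np.concatenate([feats, diff.max(axis=1)], axis=-1)

# Four 1-D patch features forming two natural clusters.
feats = np.array([[0.0], [0.1], [5.0], [5.1]])
nbrs = knn_graph(feats, k=1)
out = aggregate(feats, nbrs)   # (4, 2): original feature + aggregated context
```

The graph is rebuilt from features at each layer, which is what lets the model handle the irregular structure the abstract credits for its performance.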

9.
Phys Med Biol ; 69(4)2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38241726

ABSTRACT

Objective. High-resolution magnetic resonance imaging (MRI) can enhance lesion diagnosis, prognosis, and delineation. However, gradient power and hardware limitations prohibit recording thin slices or sub-1 mm resolution. Furthermore, long scan times are not clinically acceptable. Conventional high-resolution images generated using statistical or analytical methods are limited in their ability to capture complex, high-dimensional image data with intricate patterns and structures. This study aims to harness cutting-edge diffusion probabilistic deep learning techniques to create a framework for generating high-resolution MRI from low-resolution counterparts, improving the uncertainty of denoising diffusion probabilistic models (DDPM). Approach. DDPM includes two processes. The forward process employs a Markov chain to systematically introduce Gaussian noise to low-resolution MRI images. In the reverse process, a U-Net model is trained to denoise the forward-process images and produce high-resolution images conditioned on the features of their low-resolution counterparts. The proposed framework was demonstrated using T2-weighted MRI images from institutional prostate patients and brain patients collected in the Brain Tumor Segmentation Challenge 2020 (BraTS2020). Main results. For the prostate dataset, the bicubic interpolation model (Bicubic), conditional generative-adversarial network (CGAN), and our proposed DDPM framework improved the noise quality measure from low-resolution images by 4.4%, 5.7%, and 12.8%, respectively. Our method enhanced the signal-to-noise ratios by 11.7%, surpassing Bicubic (9.8%) and CGAN (8.1%). In the BraTS2020 dataset, the proposed framework and Bicubic enhanced the peak signal-to-noise ratio from resolution-degraded images by 9.1% and 5.8%. The multi-scale structural similarity indexes were 0.970 ± 0.019, 0.968 ± 0.022, and 0.967 ± 0.023 for the proposed method, CGAN, and Bicubic, respectively. Significance.
This study explores a deep learning-based diffusion probabilistic framework for improving MR image resolution. Such a framework can be used to improve clinical workflow by obtaining high-resolution images without penalty of the long scan time. Future investigation will likely focus on prospectively testing the efficacy of this framework with different clinical indications.
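Peak signal-to-noise ratio, the headline comparison metric in these results, is defined from the mean squared error against a reference image. A small sketch, assuming images normalized to a [0, 1] data range:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

# Toy case: uniform error of 0.1 everywhere gives MSE = 0.01.
ref = np.zeros((4, 4))
test = np.full((4, 4), 0.1)
val = psnr(ref, test)
```

Because the metric is logarithmic, the reported 9.1% vs. 5.8% PSNR gains correspond to a substantially larger reduction in raw mean squared error.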


Subjects
Bisacodyl/analogs & derivatives; Magnetic Resonance Imaging; Models, Statistical; Male; Humans; Signal-To-Noise Ratio; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods
10.
J Appl Clin Med Phys ; 25(4): e14260, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38243628

ABSTRACT

PURPOSE: To investigate bolus design and VMAT optimization settings for total scalp irradiation. METHODS: Three silicone bolus designs (flat, hat, and custom) from .decimal were evaluated for adherence to five anthropomorphic head phantoms. Flat bolus was cut from a silicone sheet. Generic hat bolus resembles an elongated swim cap while custom bolus is manufactured by injecting silicone into a 3D printed mold. Bolus placement time was recorded. Air gaps between bolus and scalp were quantified on CT images. The dosimetric effect of air gaps on target coverage was evaluated in a treatment planning study where the scalp was planned to 60 Gy in 30 fractions. A noncoplanar VMAT technique based on gEUD penalties was investigated that explored the full range of gEUD alpha values to determine which settings achieve sufficient target coverage while minimizing brain dose. ANOVA and the t-test were used to evaluate statistically significant differences (threshold = 0.05). RESULTS: The flat bolus took 32 ± 5.9 min to construct and place, which was significantly longer (p < 0.001) compared with 0.67 ± 0.2 min for the generic hat bolus or 0.53 ± 0.10 min for the custom bolus. The air gap volumes were 38 ± 9.3 cc, 32 ± 14 cc, and 17 ± 7.0 cc for the flat, hat, and custom boluses, respectively. While the air gap differences between the flat and custom boluses were significant (p = 0.011), there were no significant dosimetric differences in PTV coverage at V57Gy or V60Gy. In the VMAT optimization study, a gEUD alpha of 2 was found to minimize the mean brain dose. CONCLUSIONS: Two challenging aspects of total scalp irradiation were investigated: bolus design and plan optimization. Results from this study show opportunities to shorten bolus fabrication time during simulation and create high quality treatment plans using a straightforward VMAT template with simple optimization settings.
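The gEUD penalty explored in the VMAT optimization reduces to a power mean of the voxel doses: a = 1 gives the mean dose, and large a approaches the maximum dose. A minimal sketch with toy dose values:

```python
import numpy as np

def geud(dose, a):
    """Generalized equivalent uniform dose: power mean of voxel doses."""
    dose = np.asarray(dose, float)
    return float(np.mean(dose ** a) ** (1.0 / a))

d = np.array([10.0, 20.0, 30.0])   # toy voxel doses in Gy
mean_dose = geud(d, 1.0)           # a = 1 reduces to the mean dose
near_max = geud(d, 40.0)           # large a is dominated by the hottest voxel
```

This is why the choice of alpha matters in the optimization study: small values penalize the mean brain dose while large values chase hot spots.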


Subjects
Radiotherapy, Intensity-Modulated; Humans; Radiotherapy Dosage; Radiotherapy, Intensity-Modulated/methods; Radiotherapy Planning, Computer-Assisted/methods; Scalp/radiation effects; Silicones
12.
Med Phys ; 51(3): 1847-1859, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37646491

ABSTRACT

BACKGROUND: Daily or weekly cone-beam computed tomography (CBCT) scans are commonly used for accurate patient positioning during the image-guided radiotherapy (IGRT) process, making them an ideal option for adaptive radiotherapy (ART) replanning. However, the presence of severe artifacts and inaccurate Hounsfield unit (HU) values prevents their use for quantitative applications such as organ segmentation and dose calculation. To enable the clinical practice of online ART, it is crucial to obtain CBCT scans with a quality comparable to that of a CT scan. PURPOSE: This work aims to develop a conditional diffusion model to perform image translation from the CBCT to the CT distribution for the image quality improvement of CBCT. METHODS: The proposed method is a conditional denoising diffusion probabilistic model (DDPM) that utilizes a time-embedded U-net architecture with residual and attention blocks to gradually transform a white Gaussian noise sample to the target CT distribution conditioned on the CBCT. The model was trained on deformed planning CT (dpCT) and CBCT image pairs, and its feasibility was verified in a brain patient study and a head-and-neck (H&N) patient study. The performance of the proposed algorithm was evaluated using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) metrics on generated synthetic CT (sCT) samples. The proposed method was also compared to four other diffusion model-based sCT generation methods. RESULTS: In the brain patient study, the MAE, PSNR, and NCC of the generated sCT were 25.99 HU, 30.49 dB, and 0.99, respectively, compared to 40.63 HU, 27.87 dB, and 0.98 for the CBCT images. In the H&N patient study, the metrics were 32.56 HU, 27.65 dB, and 0.98 for sCT, and 38.99 HU, 27.00 dB, and 0.98 for CBCT, respectively.
Compared to the other four diffusion models and one Cycle generative adversarial network (Cycle GAN), the proposed method showed superior results in both visual quality and quantitative analysis. CONCLUSIONS: The proposed conditional DDPM method can generate sCT from CBCT with accurate HU numbers and reduced artifacts, enabling accurate CBCT-based organ segmentation and dose calculation for online ART.
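Normalized cross-correlation, one of the reported metrics, is the mean product of the z-scored images and is invariant to linear intensity rescaling, which makes it useful when CBCT and CT intensity scales differ. A small sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of the same shape."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

x = np.random.default_rng(0).standard_normal((16, 16))
self_score = ncc(x, x)            # identical images score 1
affine_score = ncc(x, 2 * x + 5)  # linear rescaling leaves NCC unchanged
```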


Subjects
Bisacodyl/analogs & derivatives; Image Processing, Computer-Assisted; Spiral Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted/methods; Cone-Beam Computed Tomography; Tomography, X-Ray Computed; Models, Statistical; Radiotherapy Planning, Computer-Assisted/methods
13.
Med Phys ; 51(3): 1974-1984, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37708440

ABSTRACT

BACKGROUND: An automated, accurate, and efficient lung four-dimensional computed tomography (4DCT) image registration method is clinically important to quantify respiratory motion for optimal motion management. PURPOSE: The purpose of this work is to develop a weakly supervised deep learning method for 4DCT lung deformable image registration (DIR). METHODS: The landmark-driven cycle network is proposed as a deep learning platform that performs DIR of individual phase datasets in a simulation 4DCT. This proposed network comprises a generator and a discriminator. The generator accepts moving and target CTs as input and outputs the deformation vector fields (DVFs) to match the two CTs. It is optimized during both forward and backward paths to enhance the bi-directionality of DVF generation. Further, the landmarks are used to weakly supervise the generator network. Landmark-driven loss is used to guide the generator's training. The discriminator then judges the realism of the deformed CT to provide extra DVF regularization. RESULTS: We performed four-fold cross-validation on 10 4DCT datasets from the public DIR-Lab dataset and a hold-out test on our clinic dataset, which included 50 4DCT datasets. The DIR-Lab dataset was used to evaluate the performance of the proposed method against other methods in the literature by calculating the DIR-Lab Target Registration Error (TRE). The proposed method outperformed other deep learning-based methods on the DIR-Lab datasets in terms of TRE. Bi-directional and landmark-driven loss were shown to be effective for obtaining high registration accuracy. The mean and standard deviation of TRE for the DIR-Lab datasets was 1.20 ± 0.72 mm and the mean absolute error (MAE) and structural similarity index (SSIM) for our datasets were 32.1 ± 11.6 HU and 0.979 ± 0.011, respectively. 
CONCLUSION: The landmark-driven cycle network has been validated and tested for automatic deformable image registration of patients' lung 4DCTs with results comparable to or better than competing methods.
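The landmark-driven supervision can be sketched as sampling the predicted DVF at landmark voxels and penalizing the residual distance to the paired targets. The shapes, the uniform shift, and the nearest-voxel sampling below are toy simplifications, not the paper's network output:

```python
import numpy as np

def landmark_loss(dvf, moving_pts, fixed_pts):
    """Mean distance between landmarks warped by the DVF and their targets."""
    # Sample the DVF at the nearest voxel to each moving landmark.
    idx = tuple(np.round(moving_pts).astype(int).T)
    warped = moving_pts + dvf[idx]
    return float(np.linalg.norm(warped - fixed_pts, axis=1).mean())

# Toy 2D DVF: uniform 1-voxel shift along the first axis.
dvf = np.zeros((8, 8, 2))
dvf[:, :, 0] = 1.0

mov = np.array([[2.0, 2.0], [5.0, 5.0]])
fix = mov + np.array([1.0, 0.0])     # targets displaced by exactly that shift
loss = landmark_loss(dvf, mov, fix)  # DVF fully explains the motion
```

In the paper this term only weakly supervises the generator; the adversarial discriminator and bi-directional image losses carry the rest of the training signal.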


Subjects
Four-Dimensional Computed Tomography; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Computer Simulation; Motion; Algorithms
14.
Phys Med Biol ; 69(2)2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38091613

ABSTRACT

The advantage of proton therapy as compared to photon therapy stems from the Bragg peak effect, which allows protons to deposit most of their energy directly at the tumor while sparing healthy tissue. However, even with such benefits, proton therapy does present certain challenges. The biological effectiveness differences between protons and photons are not fully incorporated into clinical treatment planning processes. In current clinical practice, the relative biological effectiveness (RBE) between protons and photons is set as a constant 1.1. Numerous studies have suggested that the RBE of protons can exhibit significant variability. Given these findings, there is substantial interest in refining proton therapy treatment planning to better account for the variable RBE. Dose-averaged linear energy transfer (LETd) is a key physical parameter for evaluating the RBE of proton therapy and aids in optimizing proton treatment plans. Calculating precise LETd distributions necessitates the use of intricate physical models and the execution of specialized Monte Carlo simulation software, which is a computationally intensive and time-consuming process. In response to these challenges, we propose a deep learning-based framework designed to predict the LETd distribution map from the dose distribution map. This approach aims to simplify the process and increase the speed of LETd map generation in clinical settings. The proposed CycleGAN model has demonstrated superior performance over other GAN-based models. The mean absolute error (MAE), peak signal-to-noise ratio, and normalized cross-correlation of the LETd maps generated by the proposed method are 0.096 ± 0.019 keV µm⁻¹, 24.203 ± 2.683 dB, and 0.997 ± 0.002, respectively. The MAE of the proposed method in the clinical target volume, bladder, and rectum are 0.193 ± 0.103, 0.277 ± 0.112, and 0.211 ± 0.086 keV µm⁻¹, respectively.
The proposed framework has demonstrated the feasibility of generating synthetic LETdmaps from dose maps and has the potential to improve proton therapy planning by providing accurate LETdinformation.
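The image-quality metrics reported above (MAE, PSNR, and normalized cross correlation) are standard map-comparison measures; a minimal NumPy sketch of how they are typically computed (the arrays below are synthetic stand-ins, not the study's data):

```python
import numpy as np

def mae(pred, ref):
    """Mean absolute error between two maps."""
    return float(np.mean(np.abs(pred - ref)))

def psnr(pred, ref, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    mse = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(pred, ref):
    """Normalized cross correlation (Pearson correlation of voxel values)."""
    p = pred - pred.mean()
    r = ref - ref.mean()
    return float(np.sum(p * r) / np.sqrt(np.sum(p ** 2) * np.sum(r ** 2)))

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 10.0, size=(64, 64))          # stand-in "ground-truth" LETd map
pred = ref + rng.normal(0.0, 0.1, size=ref.shape)    # synthetic prediction with noise
scores = (mae(pred, ref), psnr(pred, ref), ncc(pred, ref))
```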


Subjects
Deep Learning , Proton Therapy , Proton Therapy/methods , Protons , Linear Energy Transfer , Relative Biological Effectiveness , Monte Carlo Method , Radiotherapy Planning, Computer-Assisted/methods
15.
Med Phys ; 51(4): 2538-2548, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38011588

ABSTRACT

BACKGROUND AND PURPOSE: Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning by eliminating the need for CT simulation and error-prone image registration, ultimately reducing patient radiation dose and setup uncertainty. In this work, we propose an MRI-to-CT transformer-based improved denoising diffusion probabilistic model (MC-IDDPM) to translate MRI into high-quality sCT to facilitate radiation treatment planning. METHODS: MC-IDDPM implements diffusion processes with a shifted-window transformer network to generate sCT from MRI. The proposed model consists of two processes: a forward process, which adds Gaussian noise to real CT scans to create noisy images, and a reverse process, in which a shifted-window transformer V-net (Swin-Vnet) denoises the noisy CT scans conditioned on the MRI from the same patient to produce noise-free CT scans. With an optimally trained Swin-Vnet, the reverse diffusion process was used to generate noise-free sCT scans matching the MRI anatomy. We evaluated the proposed method by generating sCT from MRI on an institutional brain dataset and an institutional prostate dataset. Quantitative evaluations were conducted using several metrics, including Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Multi-scale Structural Similarity Index (SSIM), and Normalized Cross Correlation (NCC). Dosimetry analyses were also performed, including comparisons of mean dose and target dose coverage at the 95% and 99% levels. RESULTS: MC-IDDPM generated brain sCTs with state-of-the-art quantitative results: MAE 48.825 ± 21.491 HU, PSNR 26.491 ± 2.814 dB, SSIM 0.947 ± 0.032, and NCC 0.976 ± 0.019. For the prostate dataset: MAE 55.124 ± 9.414 HU, PSNR 28.708 ± 2.112 dB, SSIM 0.878 ± 0.040, and NCC 0.940 ± 0.039.
MC-IDDPM demonstrates a statistically significant improvement (p < 0.05) in most metrics compared to competing networks, for both brain and prostate synthetic CT. Dosimetry analyses indicated that the target dose coverage differences between CT and sCT were within ± 0.34%. CONCLUSIONS: We have developed and validated a novel approach for generating CT images from routine MRIs using a transformer-based improved DDPM. This model effectively captures the complex relationship between CT and MRI images, allowing robust, high-quality synthetic CT images to be generated in a matter of minutes. This approach has the potential to greatly simplify the treatment planning process for radiation therapy by eliminating the need for additional CT scans, reducing the time patients spend in treatment planning, and enhancing the accuracy of treatment delivery.
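The forward process described above — progressively adding Gaussian noise to real CT scans — has a standard closed form in the DDPM framework; a minimal sketch (the noise schedule and array sizes below are illustrative, not the paper's settings):

```python
import numpy as np

# Linear noise schedule over T steps (illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative products, decreasing toward 0

def q_sample(x0, t, rng):
    """Forward diffusion in one shot:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps

rng = np.random.default_rng(0)
ct_slice = rng.uniform(-1.0, 1.0, size=(32, 32))  # stand-in for a normalized CT slice
x_t, eps = q_sample(ct_slice, t=500, rng=rng)
```

In the reverse process, the denoising network (here, the Swin-Vnet conditioned on the same patient's MRI) is trained to predict the noise `eps` from `x_t` and then iteratively removes it.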


Subjects
Head , Tomography, X-Ray Computed , Male , Humans , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging/methods , Radiotherapy Planning, Computer-Assisted/methods , Radiometry , Image Processing, Computer-Assisted/methods
16.
Front Oncol ; 13: 1278180, 2023.
Article in English | MEDLINE | ID: mdl-38074686

ABSTRACT

Background: The number of patients undergoing proton therapy has increased in recent years. Current treatment planning systems (TPS) calculate dose maps using three-dimensional (3D) maps of relative stopping power (RSP) and mass density. These patient-specific maps are obtained by translating the CT number (HU) acquired with single-energy computed tomography (SECT) using appropriate conversions and coefficients; the proton dose calculation uncertainty of this approach is 2.5%-3.5% plus a 1 mm margin. Because SECT remains the major clinical modality for proton therapy treatment planning, there is strong motivation to enhance proton dose calculation accuracy with a deep learning (DL) approach centered on SECT. Objectives: The purpose of this work is to develop a deep learning method to generate mass density and RSP maps from clinical SECT data for proton dose calculation in proton therapy treatment planning. Methods: Artificial neural networks (ANN), fully convolutional neural networks (FCNN), and residual neural networks (ResNet) were used to learn the correlation between voxel-specific mass density, RSP, and SECT CT number (HU). A stoichiometric calibration method based on SECT data and an empirical model based on dual-energy CT (DECT) images were chosen as reference models to evaluate the performance of the deep learning networks. SECT images of a CIRS 062M electron density phantom were used as the training dataset, and CIRS anthropomorphic M701 and M702 phantoms were used for testing. Results: For M701, the mean absolute percentage errors (MAPE) of the mass density map by FCNN are 0.39%, 0.92%, 0.68%, 0.92%, and 1.57% in the brain, spinal cord, soft tissue, bone, and lung, respectively, whereas with the SECT stoichiometric method they are 0.99%, 2.34%, 1.87%, 2.90%, and 12.96%.
For RSP maps, the MAPE of FCNN on M701 are 0.85%, 2.32%, 0.75%, 1.22%, and 1.25%, whereas with the SECT reference model they are 0.95%, 2.61%, 2.08%, 7.74%, and 8.62%. Conclusion: The results show that deep learning neural networks have the potential to generate accurate voxel-specific material property information, which can be used to improve the accuracy of proton dose calculation. Advances in knowledge: Deep learning-based frameworks are proposed to estimate material mass density and RSP from SECT with improved accuracy compared with conventional methods.
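The reference stoichiometric approach maps each voxel's CT number to a material property through a calibration curve; a minimal sketch of such an HU-to-mass-density lookup (the breakpoints below are hypothetical illustrations, not a clinical calibration):

```python
import numpy as np

# Hypothetical calibration breakpoints: a clinical curve would come from
# a stoichiometric fit to phantom measurements.
hu_points  = np.array([-1000.0, -100.0, 0.0, 100.0, 1500.0])   # CT numbers (HU)
rho_points = np.array([0.001, 0.93, 1.0, 1.07, 1.85])          # mass density (g/cm^3)

def hu_to_density(hu_map):
    """Piecewise-linear interpolation of CT number to mass density."""
    return np.interp(hu_map, hu_points, rho_points)

# Example: convert a toy HU image to a density map.
hu_image = np.array([[-800.0, -50.0], [0.0, 900.0]])
density_map = hu_to_density(hu_image)
```

The deep learning models in the study replace this fixed voxel-wise curve with a learned mapping from SECT CT numbers to mass density and RSP.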

17.
Front Oncol ; 13: 1274803, 2023.
Article in English | MEDLINE | ID: mdl-38156106

ABSTRACT

Background and purpose: A novel radiotracer, 18F-fluciclovine (anti-3-18F-FACBC), has been associated with significantly improved survival when used in PET/CT imaging to guide postprostatectomy salvage radiotherapy for prostate cancer. We aimed to investigate the feasibility of using a deep learning method to automatically detect and segment lesions on 18F-fluciclovine PET/CT images. Materials and methods: We retrospectively identified 84 patients enrolled in Arm B of the Emory Molecular Prostate Imaging for Radiotherapy Enhancement (EMPIRE-1) trial. All 84 patients had prostate adenocarcinoma and underwent prostatectomy and 18F-fluciclovine PET/CT imaging, with lesions identified and delineated by physicians. Three neural networks of increasing complexity (U-net, Cascaded U-net, and a cascaded detection-segmentation network) were trained and tested on the 84 patients with a fivefold cross-validation strategy and a hold-out test, using manual contours as the ground truth. We also investigated using either PET and CT or PET only as input to the neural networks. Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), center-of-mass distance (CMD), and volume difference (VD) were used to quantify segmentation quality against the ground-truth contours provided by physicians. Results: The deep learning methods successfully detected 144/155 lesions with PET+CT as input and 153/155 lesions with PET only. Quantitative results demonstrated that the best-performing network segmented lesions with an average DSC of 0.68 ± 0.15 and HD95 of 4 ± 2 mm. The center of mass of the segmented contours deviated from physician contours by approximately 2 mm on average, and the volume difference was less than 1 cc. Our proposed network achieved the best performance among the networks compared.
Adding CT as input contributed more failure cases (DSC = 0); among cases with DSC > 0, it produced no statistically significant difference from PET-only input for our proposed method. Conclusion: These quantitative results demonstrate the feasibility of deep learning methods for automatically segmenting lesions on 18F-fluciclovine PET/CT images, indicating the great potential of 18F-fluciclovine PET/CT combined with deep learning to provide a second check in identifying lesions and to save physicians time and effort in contouring.
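The DSC and volume-difference metrics used above are straightforward to compute from binary masks; a minimal sketch (mask shapes and voxel volume are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

def volume_difference(a, b, voxel_cc):
    """Absolute volume difference in cc, given the per-voxel volume in cc."""
    return abs(int(a.sum()) - int(b.sum())) * voxel_cc

# Toy example: two overlapping 3D masks standing in for a physician contour
# and a network prediction.
gt = np.zeros((16, 16, 16), dtype=bool)
gt[4:12, 4:12, 4:12] = True
pred = np.zeros_like(gt)
pred[5:13, 4:12, 4:12] = True
score = dice(gt, pred)
```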

18.
Phys Med Biol ; 68(23)2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37972414

ABSTRACT

The hippocampus plays a crucial role in memory and cognition. Because of the toxicity associated with whole-brain radiotherapy, more advanced treatment planning techniques prioritize hippocampal avoidance, which depends on accurate segmentation of the small and complexly shaped hippocampus. To achieve accurate segmentation of the anterior and posterior regions of the hippocampus from T1-weighted (T1w) MR images, we developed a novel model, Hippo-Net, which uses a cascaded strategy. The proposed model consists of two major parts: (1) a localization model detects the volume-of-interest (VOI) of the hippocampus; (2) an end-to-end morphological vision transformer network (Franchi et al 2020 Pattern Recognit. 102 107246; Ranem et al 2022 IEEE/CVF Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW) pp 3710-3719) performs substructure segmentation within the hippocampus VOI. The substructures are the anterior and posterior regions of the hippocampus, defined as the hippocampus proper and parts of the subiculum. The vision transformer incorporates the dominant features extracted from MR images, which are further improved by learning-based morphological operators; integrating these operators into the vision transformer improves the accuracy and the ability to separate the hippocampus into its two distinct substructures. A total of 260 T1w MRI datasets from the Medical Segmentation Decathlon were used in this study. We conducted fivefold cross-validation on the first 200 T1w MR images and then performed a hold-out test on the remaining 60 with the model trained on the first 200. In fivefold cross-validation, the Dice similarity coefficients were 0.900 ± 0.029 and 0.886 ± 0.031 for the hippocampus proper and parts of the subiculum, respectively.
The mean surface distances (MSDs) were 0.426 ± 0.115 mm and 0.401 ± 0.100 mm for the hippocampus proper and parts of the subiculum, respectively. The proposed method shows great promise in automatically delineating hippocampus substructures on T1w MR images and may facilitate the current clinical workflow and reduce physicians' effort.


Subjects
Hippocampus , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Hippocampus/diagnostic imaging , Artificial Intelligence , Image Processing, Computer-Assisted/methods
19.
ArXiv ; 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-38013889

ABSTRACT

BACKGROUND: Dual-energy CT (DECT) and material decomposition play vital roles in quantitative medical imaging. However, the decomposition process may suffer from significant noise amplification, leading to severely degraded image signal-to-noise ratios (SNRs). While existing iterative algorithms perform noise suppression using different image priors, these heuristic priors cannot accurately represent the features of the target image manifold. Although deep learning-based decomposition methods have been reported, they operate in a supervised-learning framework that requires paired training data, which is not readily available in clinical settings. PURPOSE: This work aims to develop an unsupervised-learning framework with data-measurement consistency for image-domain material decomposition in DECT.
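Image-domain material decomposition by direct matrix inversion — the baseline whose noise amplification motivates this work — can be sketched as a per-voxel 2×2 linear solve (the mixing coefficients below are hypothetical, not calibrated values):

```python
import numpy as np

# Hypothetical mixing matrix: rows are the low-/high-kVp images, columns
# are the contributions of the two basis materials at each energy.
A = np.array([[1.00, 0.50],
              [0.60, 0.45]])
A_inv = np.linalg.inv(A)

def decompose(low_kvp, high_kvp):
    """Direct inversion: solve A x = y independently at every voxel."""
    y = np.stack([low_kvp.ravel(), high_kvp.ravel()])  # shape (2, Nvox)
    x = A_inv @ y                                      # basis-material images
    return x[0].reshape(low_kvp.shape), x[1].reshape(low_kvp.shape)

# The condition number of A bounds how much relative measurement noise can
# be amplified in the decomposed images -- the SNR degradation that the
# iterative and learning-based methods aim to suppress.
noise_amplification_bound = float(np.linalg.cond(A))
```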

20.
Cureus ; 15(7): e41260, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37529805

ABSTRACT

This study evaluated the feasibility of using artificial intelligence (AI) segmentation software for volumetric-modulated arc therapy (VMAT) prostate planning in conjunction with knowledge-based planning to facilitate a fully automated workflow. Two commercially available AI software programs, Radformation AutoContour (Radformation, New York, NY) and Siemens AI-Rad Companion (Siemens Healthineers, Malvern, PA), were used to auto-segment the rectum, bladder, femoral heads, and bowel bag on 30 retrospective clinical cases (10 intact prostate, 10 prostate bed, and 10 prostate and lymph node). Physician-segmented target volumes were transferred to the AI structure sets. In-house RapidPlan models were used to generate plans using the original, physician-segmented structure sets as well as the Radformation and Siemens AI-generated structure sets, yielding three plans for each of the 30 cases, or 90 plans in total. Following RapidPlan optimization, planning target volume (PTV) coverage was set to 95%. The plans optimized using AI structures were then recalculated on the physician structure set with fixed monitor units; in this way, physician contours served as the gold standard for identifying any clinically relevant differences in dose distributions. One-way analysis of variance (ANOVA) was used for statistical analysis. No statistically significant differences were observed across the three sets of plans for intact prostate, prostate bed, or prostate and lymph nodes. The results indicate that an automated VMAT prostate planning workflow can consistently achieve high plan quality. However, our results also show that small but consistent differences in contouring preferences may lead to subtle differences in planning results, so the clinical implementation of auto-contouring should be carefully validated.
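The one-way ANOVA comparison across the three sets of plans is a single SciPy call; a minimal sketch with hypothetical dose-metric samples (the values below are illustrative, not the study's data):

```python
from scipy import stats

# Hypothetical bladder mean-dose values (Gy) for the same five cases
# planned on each of the three structure sets.
physician    = [22.1, 24.3, 21.8, 23.5, 22.9]
radformation = [22.4, 24.0, 22.1, 23.2, 23.1]
siemens      = [22.0, 24.5, 21.6, 23.8, 22.7]

# Null hypothesis: all three groups share the same mean for this metric.
f_stat, p_value = stats.f_oneway(physician, radformation, siemens)
```

A p-value above the chosen significance level (here 0.05) would indicate no statistically significant difference among the three plan sets for this metric.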
