Results 1 - 5 of 5
1.
Adv Radiat Oncol; 9(1): 101340, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38260236

ABSTRACT

Purpose: Deep learning can be used to automatically digitize interstitial needles in high-dose-rate (HDR) brachytherapy for patients with cervical cancer. The aim of this study was to design a novel attention-gated deep-learning model that may further improve digitization accuracy and better differentiate needles. Methods and Materials: Seventeen patients with cervical cancer, with 56 computed tomography-based interstitial HDR brachytherapy plans from the local hospital, were retrospectively chosen with the local institutional review board's approval. Among them, 50 plans were randomly selected as the training set and the rest as the validation set. Spatial and channel attention gates (AGs) were added to 3-dimensional convolutional neural networks (CNNs) to highlight needle features and suppress irrelevant regions, which was expected to facilitate convergence and improve the accuracy of automatic needle digitization. Subsequently, the automatically digitized needles were exported to the Oncentra treatment planning system (Elekta Solutions AB, Stockholm, Sweden) for dose evaluation. The geometric and dosimetric accuracy of automatic needle digitization was compared among 3 methods: (1) clinically approved plans with manual needle digitization (ground truth); (2) the conventional deep-learning (CNN) method; and (3) the attention-added deep-learning (CNN + AG) method, in terms of the Dice similarity coefficient (DSC), tip and shaft positioning errors, dose distribution in the high-risk clinical target volume (HR-CTV), organs at risk, and so on. Results: The attention-gated CNN model was superior to the CNN without AGs, with a higher DSC (approximately 94% for CNN + AG vs 89% for CNN). The needle tip and shaft errors of the CNN + AG method (1.1 mm and 1.8 mm, respectively) were also much smaller than those of the CNN method (2.0 mm and 3.3 mm, respectively). Finally, the HR-CTV D90 dose difference from the ground truth was much smaller with the CNN + AG method than with the CNN method (0.4% vs 1.7%). Conclusions: The attention-added deep-learning model was successfully implemented for automatic needle digitization in HDR brachytherapy, with clinically acceptable geometric and dosimetric accuracy. Compared with conventional deep-learning neural networks, attention-gated deep learning may offer superior performance and great clinical potential.
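
As a rough illustration of the attention-gating idea described above, the following PyTorch sketch adds a channel gate and a spatial gate after a 3D convolution block. This is not the authors' architecture; the layer sizes, reduction ratio, and gating formulation are illustrative assumptions.

# Minimal sketch of spatial and channel attention gates on 3D feature maps,
# in the spirit of the CNN + AG model described above. Layer sizes and the
# gating formulation are illustrative assumptions, not the published model.
import torch
import torch.nn as nn


class ChannelGate3D(nn.Module):
    """Re-weight feature channels with a squeeze-and-excitation-style gate."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, D, H, W) -> global average pool over the spatial dims
        w = x.mean(dim=(2, 3, 4))
        w = self.fc(w).view(x.size(0), x.size(1), 1, 1, 1)
        return x * w


class SpatialGate3D(nn.Module):
    """Highlight voxel locations (e.g., needle trajectories) with a 1-channel mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.conv(x))   # (N, 1, D, H, W)
        return x * mask


class AttentionGatedBlock(nn.Module):
    """A 3D conv block followed by channel and spatial attention gates."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.channel_gate = ChannelGate3D(out_ch)
        self.spatial_gate = SpatialGate3D(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)
        x = self.channel_gate(x)
        return self.spatial_gate(x)


if __name__ == "__main__":
    # A single CT sub-volume: batch of 1, 1 channel, 32 x 64 x 64 voxels (toy size).
    block = AttentionGatedBlock(in_ch=1, out_ch=16)
    volume = torch.randn(1, 1, 32, 64, 64)
    print(block(volume).shape)  # torch.Size([1, 16, 32, 64, 64])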

2.
Quant Imaging Med Surg; 13(4): 2065-2080, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37064379

ABSTRACT

Background: The aim of this study was to establish a correlation model between external surface motion and internal diaphragm apex movement using machine learning and to realize online automatic prediction of the diaphragm motion trajectory based on optical surface monitoring. Methods: The optical body surface parameters and kilovoltage (kV) X-ray fluoroscopic images of 7 liver tumor patients were captured synchronously for 50 seconds. The location of the diaphragm apex was manually delineated by a radiation oncologist and automatically detected with a convolutional network model in the fluoroscopic images. The correlation model between the body surface parameters and the diaphragm apex of each patient was developed through linear regression (LR) based on synchronous datasets acquired before radiotherapy. Model 1 (M1) was trained with data from the first 30 seconds of the datasets and tested with data from the following 20 seconds of the datasets in the first fraction to evaluate the intra-fractional prediction accuracy. Model 2 (M2) was trained with data from the first 30 seconds of the datasets in the next fraction. The motion trajectory of the diaphragm apex during the following 20 seconds in the next fraction was predicted with M1 and M2, respectively, to evaluate the inter-fractional prediction accuracy. The prediction errors of the 2 models were compared to analyze whether the correlation model needed to be re-established. Results: The average mean absolute error (MAE) and root mean square error (RMSE) of M1 trained with the automatically detected locations for the first fraction were 3.12 ± 0.80 and 3.82 ± 0.98 mm in the superior-inferior (SI) direction and 1.38 ± 0.24 and 1.74 ± 0.32 mm in the anterior-posterior (AP) direction, respectively. The average MAE and RMSE of M1 versus M2 in the AP direction were 2.63 ± 0.71 versus 1.28 ± 0.48 mm and 3.26 ± 0.90 versus 1.61 ± 0.60 mm, respectively. The average MAE and RMSE of M1 versus M2 in the SI direction were 5.84 ± 1.22 versus 3.37 ± 0.43 mm and 7.22 ± 1.45 versus 4.07 ± 0.54 mm, respectively. The prediction accuracy of M2 was significantly higher than that of M1. Conclusions: This study shows that it is feasible to use optical body surface information to automatically predict the diaphragm motion trajectory. However, a new correlation model should be established for the current fraction before each treatment.
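
To make the modeling workflow concrete, here is a minimal sketch of the per-patient correlation model: an ordinary linear regression fitted on the first 30 seconds of synchronized data and evaluated on the following 20 seconds with MAE and RMSE, mirroring the study design above. The 25 Hz sampling rate and the synthetic surface/diaphragm signals are assumptions for illustration only and do not reproduce the reported numbers.

# Minimal sketch of the surface-to-diaphragm correlation model (assumed setup).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
fs = 25                                   # assumed sampling rate (Hz)
t = np.arange(0, 50, 1 / fs)              # 50 s of synchronized samples

# Toy stand-ins for the real signals: optical surface parameters and the
# diaphragm apex SI position (mm) extracted from fluoroscopy.
surface = np.column_stack([
    np.sin(2 * np.pi * 0.25 * t),         # breathing-like surface trace
    np.cos(2 * np.pi * 0.25 * t),
])
apex_si = 8.0 * np.sin(2 * np.pi * 0.25 * t - 0.3) + rng.normal(0, 0.3, t.size)

train = t < 30                            # first 30 s -> model fitting
test = ~train                             # following 20 s -> evaluation

model = LinearRegression().fit(surface[train], apex_si[train])
pred = model.predict(surface[test])

mae = mean_absolute_error(apex_si[test], pred)
rmse = np.sqrt(mean_squared_error(apex_si[test], pred))
print(f"SI MAE = {mae:.2f} mm, RMSE = {rmse:.2f} mm")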

3.
Med Phys; 50(2): 922-934, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36317870

ABSTRACT

PURPOSE: To investigate the prognostic performance of multi-level computed tomography (CT)-dose fusion dosiomics at the image, matrix, and feature levels from the gross tumor volume (GTV) at the nasopharynx and the involved lymph node for nasopharyngeal carcinoma (NPC) patients. METHODS: Two hundred and nineteen NPC patients (175 for training vs. 44 for internal validation) were used to train the prediction model, and 32 NPC patients were used for external validation. We first extracted CT and dose information from the intratumoral nasopharynx (GTV_nx) and lymph node (GTV_nd) regions. Then, the corresponding peritumoral regions (RING_3 mm and RING_5 mm) were also considered. Thus, the individual and combined intratumoral and peritumoral regions were as follows: GTV_nx, GTV_nd, RING_3 mm_nx, RING_3 mm_nd, RING_5 mm_nx, RING_5 mm_nd, GTV_nxnd, RING_3 mm_nxnd, RING_5 mm_nxnd, GTV + RING_3 mm_nxnd, and GTV + RING_5 mm_nxnd. For each region, 11 models were built by combining five clinical parameters and 127 features from: (1) dose images alone; (2-7) fused dose and CT images via wavelet-based fusion using CT weights of 0.2, 0.4, 0.6, and 0.8, gradient transfer fusion, and guided-filtering-based fusion (GFF); (8) fused matrices (sumMat); (9-10) fused features derived via feature averaging (avgFea) and feature concatenation (conFea); and finally, (11) CT images alone. The concordance index (C-index) and Kaplan-Meier curves with the log-rank test were used to assess model performance. RESULTS: The fusion models performed better than the single CT or dose models on both internal and external validation. Models that combined information from both the GTV_nx and GTV_nd regions outperformed the single-region models. For internal validation, the GTV + RING_3 mm_nxnd GFF model achieved the highest C-index for both recurrence-free survival (RFS) and metastasis-free survival (MFS) predictions (RFS: 0.822; MFS: 0.786). The highest C-index in the external validation set was achieved by the RING_3 mm_nxnd model (RFS: 0.762; MFS: 0.719). The GTV + RING_3 mm_nxnd GFF model was able to significantly separate patients into high-risk and low-risk groups compared with the dose-only or CT-only models. CONCLUSION: The fusion dosiomics model combining the primary tumor, the involved lymph node, and 3 mm peritumoral information outperformed single-modality models for different outcome predictions, which is helpful for clinical decision-making and the development of personalized treatment.


Subjects
Nasopharyngeal Neoplasms; Tomography, X-Ray Computed; Humans; Nasopharyngeal Carcinoma/diagnostic imaging; Nasopharyngeal Carcinoma/pathology; Prognosis; Tomography, X-Ray Computed/methods; Nasopharyngeal Neoplasms/diagnostic imaging; Lymph Nodes/diagnostic imaging; Lymph Nodes/pathology
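
For readers unfamiliar with image-level fusion, the sketch below shows one plausible reading of the wavelet-based CT-dose fusion with a tunable CT weight mentioned in entry 3: both co-registered volumes are decomposed with a 3D discrete wavelet transform, the coefficients are averaged with weights w and 1 - w, and the fused volume is reconstructed before radiomics feature extraction. The wavelet family, decomposition level, and weighting of the detail coefficients are assumptions, not the authors' exact implementation.

# Minimal sketch of weighted wavelet-based CT-dose image fusion (assumed scheme).
import numpy as np
import pywt


def wavelet_fuse(ct: np.ndarray, dose: np.ndarray,
                 ct_weight: float = 0.6,
                 wavelet: str = "db1", level: int = 2) -> np.ndarray:
    """Fuse two co-registered, intensity-normalized 3D arrays in the wavelet domain."""
    ct_coeffs = pywt.wavedecn(ct, wavelet, level=level)
    dose_coeffs = pywt.wavedecn(dose, wavelet, level=level)

    # Approximation coefficients: weighted average with the CT weight.
    fused = [ct_weight * ct_coeffs[0] + (1.0 - ct_weight) * dose_coeffs[0]]
    # Detail coefficients per level: same weighted average, key by key.
    for c_ct, c_dose in zip(ct_coeffs[1:], dose_coeffs[1:]):
        fused.append({k: ct_weight * c_ct[k] + (1.0 - ct_weight) * c_dose[k]
                      for k in c_ct})
    return pywt.waverecn(fused, wavelet)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.normal(size=(32, 64, 64))      # toy normalized CT sub-volume
    dose = rng.normal(size=(32, 64, 64))    # toy normalized dose sub-volume
    fused = wavelet_fuse(ct, dose, ct_weight=0.6)
    print(fused.shape)                       # (32, 64, 64)
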
4.
Front Oncol; 12: 827991, 2022.
Article in English | MEDLINE | ID: mdl-35387126

ABSTRACT

Purpose: Accurate segmentation of the gross target volume (GTV) from computed tomography (CT) images is a prerequisite in radiotherapy for nasopharyngeal carcinoma (NPC). However, this task is very challenging due to the low contrast at the tumor boundary and the great variety of tumor sizes and morphologies across different stages. Meanwhile, the data source also seriously affects the segmentation results. In this paper, we propose a novel three-dimensional (3D) automatic segmentation algorithm that adopts cascaded multiscale local enhancement of convolutional neural networks (CNNs), and we conduct experiments on multi-institutional datasets to address the above problems. Materials and Methods: In this study, we retrospectively collected CT images of 257 NPC patients to test the performance of the proposed automatic segmentation model and conducted experiments on two additional multi-institutional datasets. Our novel segmentation framework consists of three parts. First, the segmentation framework is based on a 3D Res-UNet backbone model with excellent segmentation performance. Then, we adopt a multiscale dilated convolution block to enhance the receptive field and focus on the target area and boundary to improve segmentation. Finally, a central localization cascade model for local enhancement is designed to concentrate on the GTV region for fine segmentation and improved robustness. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD95) are utilized as quantitative evaluation criteria to estimate the performance of our automated segmentation algorithm. Results: The experimental results show that, compared with other state-of-the-art methods, our modified 3D Res-UNet backbone has excellent performance and achieves the best results in terms of the quantitative metrics DSC, PPV, ASSD, and HD95, which reached 74.49 ± 7.81%, 79.97 ± 13.90%, 1.49 ± 0.65 mm, and 5.06 ± 3.30 mm, respectively. It should be noted that the receptive-field enhancement mechanism and the cascade architecture have a great impact on the stable output of highly accurate automatic segmentation results, which is critical for an algorithm. The final DSC, SEN, ASSD, and HD95 values improved to 76.23 ± 6.45%, 79.14 ± 12.48%, 1.39 ± 5.44 mm, and 4.72 ± 3.04 mm, respectively. In addition, the outcomes of the multi-institution experiments demonstrate that our model is robust and generalizable and can achieve good performance through transfer learning. Conclusions: The proposed algorithm can accurately segment NPC in CT images from multi-institutional datasets and thereby may improve and facilitate clinical applications.
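
The receptive-field enhancement described above can be pictured with a small PyTorch sketch: a multiscale block of parallel 3D convolutions with increasing dilation rates, fused by a 1 x 1 x 1 convolution and wrapped in a residual connection so it can slot into a Res-UNet. The number of branches and the dilation rates (1, 2, 4) are illustrative assumptions rather than the paper's configuration.

# Minimal sketch of a multiscale dilated-convolution block (assumed configuration).
import torch
import torch.nn as nn


class MultiScaleDilatedBlock3D(nn.Module):
    """Parallel 3D convolutions with increasing dilation, fused by a 1x1x1 conv."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=3,
                          padding=d, dilation=d),
                nn.InstanceNorm3d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv3d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        # Residual connection keeps the block drop-in compatible with a Res-UNet stage.
        return x + self.fuse(multi)


if __name__ == "__main__":
    block = MultiScaleDilatedBlock3D(channels=16)
    x = torch.randn(1, 16, 32, 64, 64)   # toy 3D feature map
    print(block(x).shape)                # torch.Size([1, 16, 32, 64, 64])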

5.
Front Oncol; 11: 725507, 2021.
Article in English | MEDLINE | ID: mdl-34858813

ABSTRACT

PURPOSE: We developed a deep learning model to achieve automatic multitarget delineation on planning CT (pCT) and synthetic CT (sCT) images generated from cone-beam CT (CBCT) images. The geometric and dosimetric impact of the model was evaluated for breast cancer adaptive radiation therapy. METHODS: We retrospectively analyzed 1,127 patients treated with radiotherapy after breast-conserving surgery from two medical institutions. The CBCT images for patient setup, acquired under breath-hold guided by an optical surface monitoring system, were used to generate sCT with a generative adversarial network. Organs at risk (OARs), the clinical target volume (CTV), and the tumor bed (TB) were delineated automatically with a 3D U-Net model on pCT and sCT images. The geometric accuracy of the model was evaluated with metrics including the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95). Dosimetric evaluation was performed by quick dose recalculation on sCT images, relying on gamma analysis and dose-volume histogram (DVH) parameters. The relationship between ΔD95, ΔV95, and DSC-CTV was assessed to quantify the clinical impact of geometric changes of the CTV. RESULTS: The ranges of DSC and HD95 were 0.73-0.97 and 2.22-9.36 mm for pCT and 0.63-0.95 and 2.30-19.57 mm for sCT from institution A, and 0.70-0.97 and 2.10-11.43 mm for pCT from institution B, respectively. The quality of sCT was excellent, with an average mean absolute error (MAE) of 71.58 ± 8.78 HU. The mean gamma pass rate (3%/3 mm criterion) was 91.46 ± 4.63%. A DSC-CTV down to 0.65 accounted for a variation of more than 6% of V95 and 3 Gy of D95. A DSC-CTV up to 0.80 accounted for a variation of less than 4% of V95 and 2 Gy of D95. The mean ΔD90/ΔD95 of the CTV and TB were less than 2 Gy/4 Gy and 4 Gy/5 Gy, respectively, for all patients. The cardiac dose difference in left breast cancer cases was larger than that in right breast cancer cases. CONCLUSIONS: Accurate multitarget delineation is achievable on pCT and sCT via deep learning. The results show that dose distribution needs to be considered when evaluating the clinical impact of geometric variations during breast cancer radiotherapy.
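
As a minimal sketch of the geometric evaluation used in entries 4 and 5, the functions below compute the Dice similarity coefficient and the 95% Hausdorff distance between two binary masks using distance transforms. The voxel spacing and the toy spherical "auto" and "manual" masks are assumptions for illustration.

# Minimal sketch of DSC and HD95 computation on binary masks (assumed spacing).
import numpy as np
from scipy import ndimage


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance in mm."""
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels = mask minus its erosion.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance (in mm) from every voxel to the other structure's surface.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]          # distances from A's surface to B
    d_ba = dist_to_a[surf_b]          # distances from B's surface to A
    return float(np.percentile(np.hstack([d_ab, d_ba]), 95))


if __name__ == "__main__":
    # Two slightly offset spheres as stand-ins for auto and manual CTV masks.
    z, y, x = np.ogrid[:64, :64, :64]
    auto = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 20 ** 2
    manual = (z - 34) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 20 ** 2
    print(f"DSC  = {dice(auto, manual):.3f}")
    print(f"HD95 = {hd95(auto, manual, spacing=(1.0, 1.0, 1.0)):.2f} mm")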
