Results 1 - 2 of 2
1.
Oral Oncol ; 136: 106261, 2023 01.
Article in English | MEDLINE | ID: mdl-36446186

ABSTRACT

OBJECTIVE: We examined a modified encoder-decoder architecture-based fully convolutional neural network, OrganNet, for simultaneous auto-segmentation of 24 organs at risk (OARs) in the head and neck, followed by validation tests and evaluation of its clinical application.

MATERIALS AND METHODS: Computed tomography (CT) images from 310 radiotherapy plans were used as the experimental data set, of which 260 and 50 served as the training and test sets, respectively. An improved U-Net architecture was established by introducing a batch normalization layer, a residual squeeze-and-excitation layer, and a unique organ-specific loss function for deep learning training. The performance of the trained network model was evaluated by comparing manual delineations with the STAPLE contour derived from 10 physicians at different centers.

RESULTS: Our model achieved good segmentation of all 24 OARs in nasopharyngeal cancer radiotherapy planning CT images, with an average Dice similarity coefficient of 83.75%. Specifically, the mean Dice coefficients were 84.97% - 95.00% in large-volume organs (brainstem, spinal cord, left/right parotid glands, left/right temporal lobes, and left/right mandibles) and 55.46% - 91.56% in small-volume organs (pituitary, lens, optic nerve, and optic chiasma). Using the STAPLE contours as the reference standard, OrganNet also achieved Dice coefficients comparable to or better than those of manual delineation.

CONCLUSION: The established OrganNet enables simultaneous automatic segmentation of multiple targets on CT images of head and neck radiotherapy plans and effectively improves the accuracy of U-Net-based segmentation of OARs, especially small-volume organs.


Subject(s)
Deep Learning , Nasopharyngeal Neoplasms , Humans , Organs at Risk , Nasopharyngeal Carcinoma/radiotherapy , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Radiotherapy Planning, Computer-Assisted/methods
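The Dice similarity coefficient used to evaluate OrganNet measures the volumetric overlap between a predicted mask and a reference mask: twice the intersection divided by the sum of the two volumes. A minimal sketch of the metric (the function name, epsilon term, and toy masks are illustrative, not from the paper; production evaluation would run on full 3D organ masks):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for one slice of a segmented organ
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, 4 overlapping
print(round(dice_coefficient(a, b), 2))  # 2*4/(4+6) = 0.8
```

A perfect match gives a coefficient of 1.0 (100%), so the paper's reported 83.75% average corresponds to a value of about 0.84 on this scale.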
2.
J Transl Med ; 20(1): 524, 2022 11 12.
Article in English | MEDLINE | ID: mdl-36371220

ABSTRACT

OBJECTIVE: This paper proposes a method using the TransResSEUnet2.5D network for accurate automatic segmentation of the Gross Target Volume (GTV) in radiotherapy for lung cancer.

METHODS: A total of 11,370 computed tomography (CT) images from 137 cases of lung cancer patients undergoing radiotherapy, with target volumes delineated by radiotherapists, were used as the training set; 1642 CT images from 20 cases were used as the validation set, and 1685 CT images from 20 cases were used as the test set. The proposed network was tuned and trained to obtain the best segmentation model, and its performance was measured by the Dice Similarity Coefficient (DSC) and the 95% Hausdorff distance (HD95). Lastly, to demonstrate the accuracy of the automatic segmentation of the proposed network, all possible mirrors of the input images were put into Unet2D, Unet2.5D, Unet3D, ResSEUnet3D, ResSEUnet2.5D, and TransResUnet2.5D, and their respective segmentation performances were compared and assessed.

RESULTS: On the test set, TransResSEUnet2.5D performed best among the compared networks in DSC (84.08 ± 0.04) %, HD95 (8.11 ± 3.43) mm, and time (6.50 ± 1.31) s.

CONCLUSIONS: The TransResSEUnet2.5D proposed in this study can automatically segment the GTV of radiotherapy for lung cancer patients more accurately.


Subject(s)
Lung Neoplasms , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Image Processing, Computer-Assisted/methods
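The HD95 metric reported above is the 95th percentile of the symmetric Hausdorff distance: for each boundary point of one contour, take the distance to the nearest point of the other contour, then take the 95th percentile of those distances in both directions. A minimal sketch on point sets (the function name and sample points are illustrative; real evaluation typically extracts surface voxels from the masks and uses physical voxel spacing):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets (N x D arrays)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Pairwise Euclidean distances between every point of a and every point of b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # distance from each point of a to its nearest neighbor in b
    d_ba = d.min(axis=0)  # distance from each point of b to its nearest neighbor in a
    # Symmetric: take the worse of the two directed 95th percentiles
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

contour_pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
contour_ref  = np.array([[0.0, 0.5], [1.0, 0.5], [2.0, 0.5]])
print(hd95(contour_pred, contour_ref))
```

Using the 95th percentile rather than the maximum makes the metric robust to a few outlier boundary points, which is why it is preferred over the plain Hausdorff distance in segmentation studies such as this one.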