Results 1 - 3 of 3
1.
Med Phys; 46(5): 2204-2213, 2019 May.
Article in English | MEDLINE | ID: mdl-30887523

ABSTRACT

PURPOSE: This study proposes a lifelong learning-based convolutional neural network (LL-CNN) algorithm as a superior alternative to single-task learning approaches for automatic segmentation of head and neck organs at risk (OARs).

METHODS AND MATERIALS: The LL-CNN was trained on twelve head and neck OARs simultaneously using a multitask learning framework. Once the weights of the shared network were established, the final multitask convolutional layer was replaced by a single-task convolutional layer, and the resulting single-task transfer learning network was trained on each OAR separately with early stopping. The accuracy of LL-CNN was assessed with the Dice score and root-mean-square error (RMSE) against manually delineated contours, which served as the gold standard. LL-CNN was compared with 2D-UNet, 3D-UNet, a single-task CNN (ST-CNN), and a pure multitask CNN (MT-CNN). Training, validation, and testing followed Kaggle competition rules: 160 patients were used for training, 20 for internal validation, and 20 in a separate test set for reporting final prediction accuracies.

RESULTS: On average, contours generated with LL-CNN had higher Dice coefficients and lower RMSE than 2D-UNet, 3D-UNet, ST-CNN, and MT-CNN. LL-CNN required ~72 h to train using a distributed learning framework on 2 Nvidia 1080 Ti graphics processing units, and 20 s to predict all 12 OARs, which approximately matched the fastest alternative methods, with the exception of MT-CNN.

CONCLUSIONS: This study demonstrated that for head and neck organs at risk, LL-CNN achieves prediction accuracy superior to all alternative algorithms tested.
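For readers who want to see the two-phase training scheme in code, the sketch below illustrates the idea in PyTorch: a shared convolutional trunk is first trained with a multitask head predicting all OARs at once, and the multitask layer is then replaced by a fresh single-task layer fine-tuned per OAR. The layer sizes, the 2D simplification, the loss, and the placeholder data are assumptions for illustration, not the published architecture.

```python
# A minimal sketch of the LL-CNN training idea (assumptions, not the authors' code).
import torch
import torch.nn as nn

N_OARS = 12  # twelve head and neck OARs, per the abstract

class SharedTrunk(nn.Module):
    """Shared feature extractor trained jointly on all OARs."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

trunk = SharedTrunk()
multitask_head = nn.Conv2d(32, N_OARS, 1)  # one output channel per OAR

# --- Phase 1: multitask training (all OARs simultaneously) ---
opt = torch.optim.Adam(list(trunk.parameters()) + list(multitask_head.parameters()))
loss_fn = nn.BCEWithLogitsLoss()
# Placeholder batch; real training iterates over the CT dataset.
ct = torch.randn(4, 1, 128, 128)
masks = torch.randint(0, 2, (4, N_OARS, 128, 128)).float()
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(multitask_head(trunk(ct)), masks)
    loss.backward()
    opt.step()

# --- Phase 2: swap the multitask layer for a single-task layer ---
# and fine-tune one head per OAR (early stopping omitted for brevity).
single_task_heads = []
for oar in range(N_OARS):
    head = nn.Conv2d(32, 1, 1)  # fresh single-task output layer
    head_opt = torch.optim.Adam(head.parameters())
    for _ in range(10):
        head_opt.zero_grad()
        # detach() keeps the shared trunk weights fixed during fine-tuning
        loss = loss_fn(head(trunk(ct).detach()), masks[:, oar:oar + 1])
        loss.backward()
        head_opt.step()
    single_task_heads.append(head)
```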


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Organs at Risk/diagnostic imaging , Squamous Cell Carcinoma of Head and Neck/diagnostic imaging , Tomography, X-Ray Computed , Automation , Humans , Organs at Risk/radiation effects , Radiotherapy, Image-Guided , Risk , Squamous Cell Carcinoma of Head and Neck/radiotherapy
2.
Phys Med Biol; 63(23): 235022, 2018 Dec 4.
Article in English | MEDLINE | ID: mdl-30511663

ABSTRACT

The goal of this study is to demonstrate the feasibility of a novel fully-convolutional volumetric dose prediction neural network (DoseNet) and to test its performance on a cohort of prostate stereotactic body radiotherapy (SBRT) patients. DoseNet is proposed as a superior alternative to U-Net and fully connected distance map-based neural networks for non-coplanar prostate SBRT dose prediction. DoseNet uses 3D convolutional downsampling with corresponding 3D deconvolutional upsampling to conserve memory while simultaneously increasing the receptive field of the network. DoseNet was implemented on 2 Nvidia 1080 Ti graphics processing units and uses a 3-phase learning protocol to help achieve convergence and improve generalization. DoseNet was trained, validated, and tested with 151 patients following Kaggle competition rules. The dosimetric quality of DoseNet was evaluated by comparing the predicted dose distribution with the clinically approved delivered dose distribution in terms of conformity index, heterogeneity index, and various clinically relevant dosimetric parameters. The results indicate that the DoseNet algorithm is a superior alternative to U-Net and fully connected methods for prostate SBRT patients. DoseNet required ~50.1 h to train and ~0.83 s to make a prediction on a 128 × 128 × 64 voxel image. In conclusion, DoseNet can make accurate volumetric dose predictions for non-coplanar prostate SBRT patients while preserving computational efficiency.
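The following PyTorch sketch shows the stated design principle: strided 3D convolutions downsample the volume (growing the receptive field while keeping feature maps small enough for GPU memory), and transposed 3D convolutions restore the original resolution for a voxelwise dose output. Channel counts, network depth, and the two-channel input are illustrative assumptions, not the published DoseNet.

```python
# A minimal sketch of a fully-convolutional 3D down/up-sampling dose predictor.
import torch
import torch.nn as nn

class DoseNetSketch(nn.Module):
    def __init__(self, in_ch=2, feat=16):
        super().__init__()
        # Strided 3D convolutions halve each spatial dimension per stage.
        self.down = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transposed 3D convolutions ("deconvolutions") restore full resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose3d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(feat, 1, 4, stride=2, padding=1),  # voxelwise dose
        )
    def forward(self, x):
        return self.up(self.down(x))

# Example on a 128 x 128 x 64 volume, matching the prediction size in the
# abstract; the two input channels (e.g. CT plus a structure mask) are an
# assumption for illustration.
net = DoseNetSketch()
dose = net(torch.randn(1, 2, 64, 128, 128))  # -> (1, 1, 64, 128, 128)
print(dose.shape)
```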


Subject(s)
Neural Networks, Computer , Radiosurgery/methods , Radiotherapy Planning, Computer-Assisted/methods , Algorithms , Humans , Radiotherapy Dosage
3.
Phys Med Biol; 63(18): 185017, 2018 Sep 17.
Article in English | MEDLINE | ID: mdl-30109996

ABSTRACT

The purpose of this work is to develop a deep unsupervised learning strategy for cone-beam CT (CBCT) to CT deformable image registration (DIR). The technique uses a deep convolutional inverse graphics network (DCIGN)-based DIR algorithm implemented on 2 Nvidia 1080 Ti graphics processing units. The model comprises an encoding stage and a decoding stage: the fully-convolutional encoding stage learns hierarchical features and simultaneously forms an information bottleneck, while the decoding stage restores the original dimensionality of the input image. Activations from the encoding stage are used as the input channels to a sparse DIR algorithm. DCIGN was trained with a distributed learning-based convolutional neural network architecture, using 285 head and neck patients to train, validate, and test the algorithm. The accuracy of the DCIGN algorithm was evaluated on 100 synthetic cases and 12 held-out test patient cases. The results indicate that DCIGN performed better than rigid registration, intensity-corrected Demons, and landmark-guided deformable image registration on all evaluation metrics. DCIGN required ~14 h to train and ~3.5 s to make a prediction on a 512 × 512 × 120 voxel image. In conclusion, DCIGN maintains high accuracy in the presence of CBCT noise contamination while preserving high computational efficiency.
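The sketch below illustrates the encoder/decoder structure described in the abstract: a fully-convolutional encoder forms an information bottleneck, a decoder restores the input dimensionality, and training is unsupervised via a reconstruction loss; the encoder activations can then be handed to a downstream sparse DIR step. The 2D single-slice simplification, channel counts, and MSE loss are assumptions for illustration, not the authors' DCIGN.

```python
# A minimal sketch of an unsupervised encoder/decoder whose bottleneck
# features feed a downstream registration algorithm.
import torch
import torch.nn as nn

class DCIGNSketch(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        self.encoder = nn.Sequential(  # downsampling information bottleneck
            nn.Conv2d(1, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(  # restores the original dimensionality
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

net = DCIGNSketch()
cbct = torch.randn(1, 1, 512, 512)          # one CBCT slice (2D simplification)
recon = net(cbct)                            # unsupervised: the target is the input
loss = nn.functional.mse_loss(recon, cbct)   # reconstruction loss drives training
features = net.encoder(cbct)                 # activations passed to the sparse DIR step
print(recon.shape, features.shape)
```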


Subject(s)
Algorithms , Cone-Beam Computed Tomography/methods , Head and Neck Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Humans