1.
Radiother Oncol; 200: 110500, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39236985

ABSTRACT

BACKGROUND AND PURPOSE: To evaluate the impact of a deep learning (DL)-assisted interactive contouring tool on inter-observer variability and the time taken to complete tumour contouring.

MATERIALS AND METHODS: Nine clinicians contoured the gross tumour volume (GTV) on the PET-CT scans of 10 non-small cell lung cancer (NSCLC) patients, using either the DL-assisted or the manual contouring tool. After contouring a case with one method, the same case was contoured one week later with the other. The resulting contours and the time taken were compared.

RESULTS: Use of the DL-assisted tool led to a statistically significant 23% decrease in active contouring time relative to the standard manual segmentation method (p < 0.01). Across all clinicians and cases, mean observation time made up nearly 60% of interaction time for both contouring approaches. On average, the time spent contouring per case was reduced from 22 min to 19 min with the DL-assisted tool. Additionally, the DL-assisted tool reduced contour variability in the parts of the tumour where clinicians tended to disagree the most, while the consensus contour was similar whichever of the two contouring approaches was used.

CONCLUSIONS: A DL-assisted interactive contouring approach decreased active contouring time and local inter-observer variability when used to delineate lung cancer GTVs, compared with a standard manual method. Integration of this tool into the clinical workflow could assist clinicians in contouring tasks and improve contouring efficiency.
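The timing comparison above comes from a paired design: each clinician contoured the same case once with each method. The abstract does not name the statistical test used; as a hedged illustration, a paired comparison like this is often run as a Wilcoxon signed-rank test. All timing values below are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-case active contouring times in minutes (NOT the study's data):
# each case was contoured once manually and once with the DL-assisted tool.
manual = np.array([24.1, 22.3, 25.0, 21.2, 23.4, 20.1, 26.2, 22.8, 24.5, 23.3])
dl_assisted = np.array([19.0, 18.1, 20.5, 17.9, 20.0, 16.9, 21.5, 18.9, 20.2, 19.2])

# Two-sided Wilcoxon signed-rank test on the paired per-case differences.
res = wilcoxon(manual, dl_assisted)

# Relative reduction in mean contouring time.
reduction = 1.0 - dl_assisted.mean() / manual.mean()
print(f"mean manual: {manual.mean():.1f} min, "
      f"mean DL-assisted: {dl_assisted.mean():.1f} min")
print(f"relative time reduction: {reduction:.1%}, p = {res.pvalue:.4f}")
```

With every case faster under the assistive tool, the test rejects the null hypothesis of no difference at p < 0.01, mirroring the significance level reported in the abstract.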


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , Lung Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Lung Neoplasms/radiotherapy , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Positron Emission Tomography Computed Tomography/methods , Carcinoma, Non-Small-Cell Lung/radiotherapy , Carcinoma, Non-Small-Cell Lung/pathology , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Observer Variation , Time Factors , Radiotherapy Planning, Computer-Assisted/methods , Tumor Burden
2.
Phys Med Biol; 67(12), 2022 Jun 13.
Article in English | MEDLINE | ID: mdl-35523158

ABSTRACT

Semi-automatic and fully automatic contouring tools have emerged as an alternative to fully manual segmentation, reducing the time spent contouring and increasing contour quality and consistency. In particular, fully automatic segmentation has improved markedly in recent years through the use of deep learning. These fully automatic methods may require no user interaction, but the resulting contours are often not suitable for clinical use without review by a clinician. Furthermore, they need large amounts of labelled data to be available for training. This review presents alternatives to manual and fully automatic segmentation methods along the spectrum of user interactivity and data availability. The challenge lies in determining how much user interaction is necessary and how that interaction can be used most effectively. While deep learning is already widely used in fully automatic tools, interactive methods are only beginning to be transformed by it. Interaction between clinician and machine, mediated by artificial intelligence, can go both ways, and this review presents the avenues being pursued to improve medical image segmentation.


Subject(s)
Artificial Intelligence , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods
3.
Med Phys; 48(6): 2951-2959, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33742454

ABSTRACT

PURPOSE: To investigate a deep learning approach that enables three-dimensional (3D) segmentation of an arbitrary structure of interest given a user-provided two-dimensional (2D) contour for context. Such an approach could decrease delineation times and improve contouring consistency, particularly for anatomical structures for which no automatic segmentation tools exist.

METHODS: A series of deep learning segmentation models using a Recurrent Residual U-Net with attention gates was trained on a successively expanding training set. Contextual information was provided to the models by using a previously contoured slice as an input in addition to the slice to be contoured. In total, six models were developed, and 19 different anatomical structures were used for training and testing. Each model was evaluated on all 19 structures, including those excluded from its training set, in order to assess its ability to segment unseen structures of interest. Performance was evaluated using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and relative added path length (APL).

RESULTS: Segmentation performance for both seen and unseen structures improved when the training set was expanded by adding structures previously excluded from it. A model trained exclusively on heart structures achieved a DSC of 0.33, an HD of 44 mm, and a relative APL of 0.85 when segmenting the spleen, whereas a model trained on a diverse set of structures, still excluding the spleen, achieved a DSC of 0.80, an HD of 13 mm, and a relative APL of 0.35. Iterative prediction performed better than direct prediction on unseen structures.

CONCLUSIONS: Training a contextual deep learning model on a diverse set of structures increases segmentation performance for the structures in the training set and, importantly, enables the model to generalize and make predictions even for unseen structures not represented in the training set. This shows that user-provided context can be incorporated into deep learning contouring to facilitate semi-automatic segmentation of CT images for any given structure. Such an approach could enable faster de novo contouring in clinical practice.
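The DSC and HD used to evaluate these models are standard overlap and surface-distance measures for segmentation masks. As a minimal sketch (not the study's implementation), both can be computed for binary masks with NumPy and SciPy; a toy 2-D example is used here, whereas the study evaluated 3-D structures, and distances are in voxel units rather than millimetres.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground points, in voxel units."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy example: two overlapping 2x2 squares on a 4x4 grid,
# shifted by one column relative to each other.
a = np.zeros((4, 4), dtype=bool)
a[0:2, 0:2] = True
b = np.zeros((4, 4), dtype=bool)
b[0:2, 1:3] = True

print(dice(a, b), hausdorff(a, b))  # 0.5 1.0
```

A perfect match gives DSC = 1 and HD = 0; the one-column shift above halves the overlap (DSC = 0.5) and leaves every stray pixel within one voxel of the other mask (HD = 1).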


Subject(s)
Deep Learning , Heart , Image Processing, Computer-Assisted , Tomography, X-Ray Computed