1.
Diagnostics (Basel) ; 13(3)2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36766468

ABSTRACT

In the context of brain tumour response assessment, deep learning-based three-dimensional (3D) tumour segmentation has shown potential to enter the routine radiological workflow. The purpose of the present study was to perform an external evaluation of a state-of-the-art deep learning 3D brain tumour segmentation algorithm (HD-GLIO) on an independent cohort of consecutive, post-operative patients. For 66 consecutive magnetic resonance imaging examinations, we compared delineations of contrast-enhancing (CE) tumour lesions and non-enhancing T2/FLAIR hyperintense abnormality (NE) lesions by the HD-GLIO algorithm and radiologists using Dice similarity coefficients (Dice). Volume agreement was assessed using concordance correlation coefficients (CCCs) and Bland-Altman plots. The algorithm performed very well regarding the segmentation of NE volumes (median Dice = 0.79) and CE tumour volumes larger than 1.0 cm³ (median Dice = 0.86). When all cases with CE tumour lesions were considered, however, performance dropped significantly (median Dice = 0.40). Volume agreement was excellent, with CCCs of 0.997 (CE tumour volumes) and 0.922 (NE volumes). These findings have implications for the application of the HD-GLIO algorithm in the routine radiological workflow, where small contrast-enhancing tumours constitute a considerable share of follow-up cases. Our study underlines that independent validation on clinical datasets is key to establishing the robustness of deep learning algorithms.
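The Dice similarity coefficient used above compares two delineations as the overlap of their voxel sets: twice the intersection divided by the sum of the two set sizes. A minimal sketch (the voxel coordinates below are hypothetical, not from the study):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient for two binary masks given as voxel-index sets.

    Dice = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (identical delineations).
    """
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # both delineations empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical voxel indices: algorithm vs. radiologist delineation
algo = {(0, 0), (0, 1), (1, 0), (1, 1)}
rad = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_coefficient(algo, rad))  # 2*3 / (4+4) = 0.75
```

This formulation also makes the reported volume effect intuitive: for very small CE lesions, a misplacement of only a few voxels removes a large fraction of the intersection, so the median Dice falls even when absolute volume agreement stays high.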

2.
Diagnostics (Basel) ; 12(12)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36553118

ABSTRACT

Consistent annotation of data is a prerequisite for the successful training and testing of artificial intelligence-based decision support systems in radiology. This can be achieved by standardizing terminology when annotating diagnostic images. The purpose of this study was to evaluate the annotation consistency among radiologists when using a novel diagnostic labeling scheme for chest X-rays. Six radiologists, with experience ranging from one to sixteen years, annotated a set of 100 fully anonymized chest X-rays. The blinded radiologists annotated on two separate occasions. Statistical analyses were performed using Randolph's kappa and PABAK, and the proportions of specific agreements were calculated. Fair-to-excellent agreement was found for all labels among the annotators (Randolph's kappa, 0.40-0.99). The PABAK ranged from 0.12 to 1 for the two-reader inter-rater agreement and from 0.26 to 1 for the intra-rater agreement. Descriptive and broad labels achieved the highest proportion of positive agreement in both the inter- and intra-reader analyses. Annotating findings with specific, interpretive labels was found to be difficult for less experienced radiologists. Annotating images with descriptive labels may increase agreement between radiologists with different experience levels compared to annotation with interpretive labels.
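The PABAK statistic reported above (prevalence- and bias-adjusted kappa) is a simple transform of the observed agreement between two raters: PABAK = 2 * p_o - 1, where p_o is the proportion of images on which the raters assign the same label. A minimal sketch (the binary annotation vectors below are hypothetical, not from the study):

```python
def pabak(labels_a, labels_b):
    """Prevalence- and bias-adjusted kappa for two raters' binary annotations.

    PABAK = 2 * p_o - 1, where p_o is the observed proportion of agreement.
    It ranges from -1 to 1; unlike Cohen's kappa, it is not distorted by
    label prevalence, which matters for rare radiological findings.
    """
    assert len(labels_a) == len(labels_b), "raters must label the same images"
    agree = sum(x == y for x, y in zip(labels_a, labels_b))
    p_o = agree / len(labels_a)
    return 2 * p_o - 1

# Hypothetical labels (1 = finding present) from two radiologists on 8 images
rater1 = [1, 0, 1, 1, 0, 0, 1, 0]
rater2 = [1, 0, 0, 1, 0, 0, 1, 1]
print(pabak(rater1, rater2))  # 6/8 agreement -> 2*0.75 - 1 = 0.5
```

The same function applied to one rater's labels from two sessions gives the intra-rater agreement reported in the abstract.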
