1.
AJNR Am J Neuroradiol; 42(4): 639-647, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33574101

ABSTRACT

BACKGROUND AND PURPOSE: Accurate and reliable detection of white matter hyperintensities and their volume quantification can provide valuable clinical information to assess neurologic disease progression. In this work, a stacked generalization ensemble of orthogonal 3D convolutional neural networks, StackGen-Net, is explored for improving automated detection of white matter hyperintensities in 3D T2-FLAIR images.
MATERIALS AND METHODS: Individual convolutional neural networks in StackGen-Net were trained on 2.5D patches from orthogonal reformatting of 3D-FLAIR (n = 21) to yield white matter hyperintensity posteriors. A meta convolutional neural network was trained to learn the functional mapping from the orthogonal white matter hyperintensity posteriors to the final white matter hyperintensity prediction. The impact of training data and architecture choices on white matter hyperintensity segmentation performance was systematically evaluated on a test cohort (n = 9). The segmentation performance of StackGen-Net was compared with state-of-the-art convolutional neural network techniques on an independent test cohort from the Alzheimer's Disease Neuroimaging Initiative-3 (n = 20).
RESULTS: StackGen-Net outperformed the individual convolutional neural networks in the ensemble and their combination by averaging or majority voting. In a comparison with state-of-the-art white matter hyperintensity segmentation techniques, StackGen-Net achieved a significantly higher Dice score (0.76 [SD, 0.08]), F1-lesion score (0.74 [SD, 0.13]), and area under the precision-recall curve (0.84 [SD, 0.09]), and the lowest absolute volume difference (13.3% [SD, 9.1%]). StackGen-Net performance in Dice scores (median = 0.74) did not significantly differ (P = .22) from interobserver variability (median = 0.73) between 2 experienced neuroradiologists. We found no significant difference (P = .15) between white matter hyperintensity lesion volumes from StackGen-Net predictions and ground truth annotations.
CONCLUSIONS: A stacked generalization of convolutional neural networks, using multiplanar lesion information with 2.5D spatial context, greatly improved segmentation performance compared with traditional ensemble techniques and some state-of-the-art deep learning models for 3D-FLAIR.
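The abstract describes a stacked-generalization (meta-learner) combination of posteriors from networks trained on axial, sagittal, and coronal reformats. The following is only a minimal illustrative sketch of that idea, not the authors' StackGen-Net code; the meta-network size, input shapes, and variable names are assumptions.

# Hypothetical sketch: a small "meta" 3D CNN maps stacked orthogonal-plane
# posteriors to a final white matter hyperintensity posterior, alongside the
# simpler averaging and majority-voting baselines mentioned in the abstract.
import torch
import torch.nn as nn

class MetaCNN(nn.Module):
    """Maps stacked orthogonal posteriors (3 channels) to one final posterior."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Posteriors from three (hypothetical) base CNNs, one per orthogonal reformat,
# resampled to a common grid: shape (batch, 1, depth, height, width).
axial = torch.rand(1, 1, 32, 64, 64)
sagittal = torch.rand(1, 1, 32, 64, 64)
coronal = torch.rand(1, 1, 32, 64, 64)

stacked = torch.cat([axial, sagittal, coronal], dim=1)   # (1, 3, D, H, W)
final_posterior = MetaCNN()(stacked)                     # (1, 1, D, H, W)

# Baseline combinations the paper compares against: averaging and majority voting.
averaged = stacked.mean(dim=1, keepdim=True)
majority = ((stacked > 0.5).float().sum(dim=1, keepdim=True) >= 2).float()

In this sketch the meta-network is trained on the base networks' posteriors (held-out from their own training data), which is what distinguishes stacked generalization from simple averaging or voting.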


Subject(s)
Deep Learning , White Matter , Humans , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Neural Networks, Computer , White Matter/diagnostic imaging
2.
AJNR Am J Neuroradiol; 41(6): 1061-1069, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32439637

ABSTRACT

BACKGROUND AND PURPOSE: Fast and accurate quantification of globe volumes in the event of an ocular trauma can provide clinicians with valuable diagnostic information. In this work, an automated workflow using a deep learning-based convolutional neural network is proposed for prediction of globe contours and their subsequent volume quantification in CT images of the orbits.
MATERIALS AND METHODS: The proposed network, 2D Modified Residual UNET (MRes-UNET2D), was trained on axial CT images from 80 subjects with no imaging or clinical findings of globe injuries. The predicted globe contours and volume estimates were compared with manual annotations by experienced observers on 2 different test cohorts.
RESULTS: On the first test cohort (n = 18), the average Dice, precision, and recall scores were 0.95, 96%, and 95%, respectively. The average 95% Hausdorff distance was only 1.5 mm, with a 5.3% error in globe volume estimates. No statistically significant difference (P = .72) was observed between the median globe volume estimates from our model and the ground truth. On the second test cohort (n = 9), in which a neuroradiologist and 2 residents independently marked the globe contours, MRes-UNET2D (Dice = 0.95) approached human interobserver variability (Dice = 0.94). We also demonstrated the utility of the inter-globe volume difference as a quantitative marker of trauma in 3 subjects with known globe injuries.
CONCLUSIONS: We showed that, with fast prediction times, we can reliably detect and quantify globe volumes in CT images of the orbits across a variety of acquisition parameters.
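The measurements reported above (globe volume from a segmentation mask, Dice overlap, and the inter-globe volume difference used as a trauma marker) follow standard definitions. Below is a minimal NumPy sketch of those computations under assumed voxel spacing and toy masks; it is not the authors' pipeline, and all names are illustrative.

# Hypothetical sketch: globe volume from a binary mask and voxel spacing,
# Dice overlap against a manual annotation, and inter-globe volume difference.
import numpy as np

def volume_ml(mask, spacing_mm):
    """Volume of a binary mask in millilitres (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Toy masks standing in for predicted left/right globe segmentations on a CT grid.
spacing = (0.5, 0.5, 1.0)  # assumed in-plane resolution and slice spacing, in mm
left = np.zeros((64, 64, 40), dtype=bool)
left[20:40, 20:40, 10:30] = True
right = np.zeros((64, 64, 40), dtype=bool)
right[20:40, 20:38, 10:30] = True

v_left = volume_ml(left, spacing)
v_right = volume_ml(right, spacing)
inter_globe_diff_pct = 100.0 * abs(v_left - v_right) / max(v_left, v_right)
print(f"Left {v_left:.2f} mL, right {v_right:.2f} mL, "
      f"inter-globe difference {inter_globe_diff_pct:.1f}%")

A large inter-globe volume difference computed this way is the kind of asymmetry the abstract proposes as a quantitative marker of globe injury.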


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Neuroimaging/methods , Orbit/diagnostic imaging , Tomography, X-Ray Computed/methods , Humans , Observer Variation