Results 1 - 5 of 5
1.
J Clin Oncol ; : JCO2301978, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38843483

ABSTRACT

PURPOSE: Artificial intelligence can reduce the time physicians spend on radiological assessments. For 18F-fluorodeoxyglucose-avid lymphomas, achieving complete metabolic response (CMR) by the end of treatment is prognostic.

METHODS: Here, we present a deep learning-based algorithm for fully automated treatment response assessment according to the Lugano 2014 classification. The proposed four-stage method was trained on a multicountry clinical trial (ClinicalTrials.gov identifier: NCT01287741) and tested on three independent multicenter, multicountry test sets covering different non-Hodgkin lymphoma subtypes and different lines of treatment (ClinicalTrials.gov identifiers: NCT02257567, NCT02500407, and a 20% holdout from NCT01287741). The method outputs the detected lesions at baseline and follow-up to enable focused radiologist review.

RESULTS: The method's response assessment achieved high agreement with the adjudicated radiologic responses (e.g., agreement for overall response assessment of 93%, 87%, and 85% in NCT01287741, NCT02500407, and NCT02257567, respectively), similar to inter-radiologist agreement. It was strongly prognostic of outcomes, with a trend toward higher accuracy for death risk than the adjudicated radiologic responses (hazard ratios for end-of-treatment by-model CMR of 0.123, 0.054, and 0.205 in NCT01287741, NCT02500407, and NCT02257567, compared with 0.226, 0.292, and 0.272, respectively, for CMR by the adjudicated responses). Furthermore, a radiologist review of the algorithm's assessments was conducted. The median review time was 1.38 minutes per assessment, and no statistically significant difference was observed between the radiologist's agreement with the model's responses and the radiologist's agreement with the adjudicated responses.

CONCLUSION: These results suggest that the proposed method can be incorporated into radiologic response assessment workflows in cancer imaging, offering significant time savings with performance similar to that of trained medical experts.
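The agreement and hazard-ratio figures above lend themselves to a short illustration. The sketch below shows how overall response agreement and a Cox proportional-hazards ratio for end-of-treatment CMR could be computed; the column names, response labels, toy data, and the lifelines dependency are assumptions for illustration and are not part of the published four-stage method.

```python
# Sketch: comparing model-derived Lugano responses with adjudicated responses and
# estimating the prognostic value of end-of-treatment CMR. Column names are hypothetical;
# no trial data is reproduced here.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def overall_agreement(model_resp, adjudicated_resp):
    """Fraction of patients whose model response category matches adjudication."""
    return float(np.mean(np.asarray(model_resp) == np.asarray(adjudicated_resp)))

def cmr_hazard_ratio(df):
    """Cox PH hazard ratio for end-of-treatment CMR (1) vs non-CMR (0).

    Expects columns: 'os_months' (follow-up time), 'death' (event indicator),
    'cmr' (binary end-of-treatment CMR flag).
    """
    cph = CoxPHFitter()
    cph.fit(df[["os_months", "death", "cmr"]], duration_col="os_months", event_col="death")
    return float(cph.hazard_ratios_["cmr"])

# Toy usage with made-up responses (Lugano categories: CMR, PMR, NMR/SD, PMD):
responses = pd.DataFrame({
    "model": ["CMR", "CMR", "PMR", "PMD"],
    "adjudicated": ["CMR", "PMR", "PMR", "PMD"],
})
print(overall_agreement(responses["model"], responses["adjudicated"]))  # 0.75
```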

2.
J Digit Imaging ; 36(5): 2060-2074, 2023 10.
Article in English | MEDLINE | ID: mdl-37291384

ABSTRACT

Deep neural networks (DNNs) have recently shown remarkable performance in various computer vision tasks, including classification and segmentation of medical images. Deep ensembles (aggregated predictions of multiple DNNs) have been shown to improve a DNN's performance in various classification tasks. Here we explore how deep ensembles perform in the image segmentation task, in particular organ segmentation in computed tomography (CT) images. Ensembles of V-Nets were trained to segment multiple organs using several in-house and publicly available clinical studies. The ensemble segmentations were tested on images from a different set of studies, and the effects of ensemble size and other ensemble parameters were explored for various organs. Compared with single models, deep ensembles significantly improved the average segmentation accuracy, especially for organs where accuracy was lower. More importantly, deep ensembles strongly reduced both the occasional "catastrophic" segmentation failures characteristic of single models and the image-to-image variability of segmentation accuracy. To quantify this, we defined "high risk images": images for which at least one model produced an outlier metric (in the lowest 5th percentile). These images comprised about 12% of the test images across all organs. Ensembles performed without outliers on 68%-100% of the "high risk images", depending on the performance metric used.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods
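A minimal sketch of the ensembling and "high risk image" logic described in the abstract above, assuming binary organ masks, per-model foreground probability maps, and per-image Dice scores; the array shapes and the interpretation of the 5th-percentile cutoff are assumptions, not the paper's implementation.

```python
# Sketch: probability-averaging ensemble of segmentation models and flagging of
# "high risk" images (any single model scoring in the lowest 5% of the metric).
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return (2.0 * np.logical_and(pred, truth).sum() + eps) / (pred.sum() + truth.sum() + eps)

def ensemble_mask(prob_maps, threshold=0.5):
    """Average per-model foreground probabilities, then threshold.

    prob_maps: array of shape (n_models, D, H, W) with values in [0, 1].
    """
    return prob_maps.mean(axis=0) >= threshold

def high_risk_images(per_model_dice, percentile=5.0):
    """Indices of images where at least one model's Dice falls in the lowest percentile.

    per_model_dice: array of shape (n_models, n_images).
    """
    cutoff = np.percentile(per_model_dice, percentile)
    return np.where((per_model_dice < cutoff).any(axis=0))[0]
```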
3.
iScience ; 25(12): 105712, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36582483

ABSTRACT

Here, we have developed an automated image processing algorithm for segmenting lungs and individual lung tumors in in vivo micro-computed tomography (micro-CT) scans of mouse models of non-small cell lung cancer and lung fibrosis. Over 3000 scans acquired across multiple studies were used to train/validate a 3D U-Net lung segmentation model and a Support Vector Machine (SVM) classifier to segment individual lung tumors. The U-Net lung segmentation algorithm can be used to estimate changes in soft tissue volume within the lungs (primarily tumors and blood vessels), whereas the trained SVM is able to discriminate between tumors and blood vessels and identify individual tumors. The trained segmentation algorithms (1) significantly reduce the time required for lung and tumor segmentation, (2) reduce the bias and error associated with manual image segmentation, and (3) facilitate identification of individual lung tumors and objective assessment of changes in lung and individual tumor volumes under different experimental conditions.
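A rough sketch of the tumor-versus-vessel classification step described above, using connected-component features and an SVM; the feature set, thresholds, and any training data are hypothetical and are not taken from the paper.

```python
# Sketch: label candidate soft-tissue components inside a lung mask and classify
# them as tumor vs vessel with an SVM over simple geometric/intensity features.
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def component_features(soft_tissue_mask, ct_volume):
    """Label connected soft-tissue components and compute crude per-component features."""
    labels, n = ndimage.label(soft_tissue_mask)
    feats = []
    for i in range(1, n + 1):
        comp = labels == i
        coords = np.argwhere(comp)
        extent = coords.max(axis=0) - coords.min(axis=0) + 1
        feats.append([
            int(comp.sum()),                              # size in voxels
            float(ct_volume[comp].mean()),                # mean intensity
            float(extent.max() / max(extent.min(), 1)),   # elongation proxy: vessels tend to be elongated
        ])
    return labels, np.array(feats)

# Hypothetical training on labelled components (tumor = 1, vessel = 0):
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# labels, feats = component_features(lung_soft_tissue_mask, ct_volume)
# tumor_flags = clf.predict(feats)
```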

4.
BMC Med Imaging ; 22(1): 58, 2022 03 30.
Article in English | MEDLINE | ID: mdl-35354384

ABSTRACT

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) has been extensively used to quantify metabolically active tumors in various oncology indications. However, FDG-PET/CT often encounters false positives in tumor detection due to 18F-fluorodeoxyglucose (FDG) accumulation in the heart and bladder, which often exhibit FDG uptake similar to that of tumors. It is therefore necessary to eliminate this source of physiological noise. Major challenges for this task include: (1) large inter-patient variability in the appearance of the heart and bladder; (2) the size and shape of the bladder or heart may appear different on PET and CT; and (3) tumors can be very close to, or connected with, the heart or bladder.

APPROACH: A deep learning-based approach is proposed to automatically segment the heart and bladder on whole-body PET/CT. Two 3D U-Nets were developed separately to segment the heart and bladder, with each network receiving the PET and CT as a multi-modal input. Data sets were obtained from retrospective clinical trials and include 575 PET/CT scans for heart segmentation and 538 for bladder segmentation.

RESULTS: The models were evaluated on a test set from an independent trial and achieved a Dice Similarity Coefficient (DSC) of 0.96 for heart segmentation and 0.95 for bladder segmentation, with an Average Surface Distance (ASD) of 0.44 mm for the heart and 0.90 mm for the bladder.

CONCLUSIONS: This methodology could be a valuable component of the FDG-PET/CT data processing chain, removing FDG physiological noise associated with heart and/or bladder accumulation prior to image analysis by manual, semi-automated, or automated tumor analysis methods.


Subject(s)
Deep Learning , Fluorodeoxyglucose F18 , Humans , Positron Emission Tomography Computed Tomography/methods , Retrospective Studies , Urinary Bladder/diagnostic imaging
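A brief sketch related to the evaluation metric and downstream use described in the abstract above: a symmetric average surface distance computed from binary masks, and suppression of predicted heart/bladder uptake in a PET volume before tumor analysis. Function names, voxel spacing, and the masking strategy are illustrative assumptions, not the paper's code.

```python
# Sketch: average surface distance between binary masks, plus zeroing of
# physiological-uptake regions in a PET volume. Masks are boolean arrays.
import numpy as np
from scipy import ndimage

def surface(mask):
    """Surface voxels of a binary mask (mask minus its erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def average_surface_distance(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Symmetric average surface distance (in mm, given voxel spacing)."""
    pred_s, truth_s = surface(pred.astype(bool)), surface(truth.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_truth = ndimage.distance_transform_edt(~truth_s, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_s, sampling=spacing)
    distances = np.concatenate([dt_truth[pred_s], dt_pred[truth_s]])
    return float(distances.mean())

def suppress_physiological_uptake(pet, heart_mask, bladder_mask):
    """Zero out predicted heart and bladder regions in the PET volume."""
    cleaned = pet.copy()
    cleaned[heart_mask | bladder_mask] = 0.0
    return cleaned
```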
5.
J Digit Imaging ; 33(4): 888-894, 2020 08.
Article in English | MEDLINE | ID: mdl-32378059

ABSTRACT

18F-Fluorodeoxyglucose-positron emission tomography (FDG-PET) is commonly used in clinical practice and clinical drug development to identify and quantify metabolically active tumors. Manual or computer-assisted tumor segmentation in FDG-PET images is a common way to assess tumor burden; however, such approaches are labor intensive and may suffer from high inter-reader variability. We propose an end-to-end method leveraging 2D and 3D convolutional neural networks to rapidly identify and segment tumors and to extract metabolic information in eyes-to-thighs (whole-body) FDG-PET/CT scans. The developed architecture is computationally efficient and devised to accommodate the size of whole-body scans, the extreme imbalance between tumor burden and the volume of healthy tissue, and the heterogeneous nature of the input images. Our dataset consists of 3664 eyes-to-thighs FDG-PET/CT scans from multi-site clinical trials in patients with non-Hodgkin's lymphoma (NHL) and advanced non-small cell lung cancer (NSCLC). Tumors were segmented and reviewed by board-certified radiologists. We report a mean 3D Dice score of 88.6% on an NHL hold-out set of 1124 scans and 93% sensitivity on 274 NSCLC hold-out scans. The method is a potential tool for radiologists to rapidly assess eyes-to-thighs FDG-avid tumor burden.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Fluorodeoxyglucose F18 , Humans , Neural Networks, Computer , Positron Emission Tomography Computed Tomography , Positron-Emission Tomography
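A small sketch of two evaluation quantities in the spirit of the abstract above: a soft Dice score, which tolerates the extreme tumor/background imbalance, and a per-scan sensitivity counted at the lesion (connected-component) level. The component-wise definition of sensitivity is an assumption and may differ from the paper's.

```python
# Sketch: soft Dice between a probability map and a binary mask, and lesion-level
# sensitivity over connected components of the ground truth.
import numpy as np
from scipy import ndimage

def soft_dice(prob, truth, eps=1e-7):
    """Soft Dice between a predicted probability map and a binary ground-truth mask."""
    truth = truth.astype(np.float32)
    inter = (prob * truth).sum()
    return float((2.0 * inter + eps) / (prob.sum() + truth.sum() + eps))

def lesion_sensitivity(pred_mask, truth_mask):
    """Fraction of ground-truth lesions overlapped by at least one predicted voxel."""
    labels, n = ndimage.label(truth_mask)
    if n == 0:
        return float("nan")
    detected = sum(1 for i in range(1, n + 1) if pred_mask[labels == i].any())
    return detected / n
```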