1.
NPJ Precis Oncol ; 6(1): 37, 2022 Jun 15.
Article in English | MEDLINE | ID: mdl-35705792

ABSTRACT

Understanding factors that impact prognosis for cancer patients has high clinical relevance for treatment decisions and monitoring of disease outcome. Advances in artificial intelligence (AI) and digital pathology offer an exciting opportunity to capitalize on whole slide images (WSIs) of hematoxylin and eosin (H&E) stained tumor tissue for objective prognosis and prediction of response to targeted therapies. AI models often require hand-delineated annotations for effective training, which may not be readily available for larger data sets. In this study, we investigated whether AI models can be trained without region-level annotations, using patient-level survival data alone. We present a weakly supervised survival convolutional neural network (WSS-CNN) approach equipped with a visual attention mechanism for predicting overall survival. The inclusion of visual attention provides insight into regions of the tumor microenvironment with pathological interpretation, which may improve our understanding of the disease pathomechanism. We performed this analysis on two independent, multi-center patient data sets of lung carcinoma (publicly available data) and bladder urothelial carcinoma. We performed univariable and multivariable analyses and show that WSS-CNN features are prognostic of overall survival in both tumor indications. The presented results highlight the significance of computational pathology algorithms for predicting prognosis using H&E stained images alone and underpin the use of computational methods to improve the efficiency of clinical trial studies.
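The attention-weighted pooling step underlying this kind of weakly supervised approach can be illustrated with a minimal sketch: patch-level feature vectors are aggregated into a single slide-level representation using softmax-normalized attention scores. This is an illustrative toy example, not the authors' WSS-CNN implementation; in the actual model, both the attention scores and the features would be learned by the network.

```python
import math

def attention_pool(patch_features, attention_scores):
    """Aggregate patch-level feature vectors into one slide-level vector
    using softmax-normalized attention weights (toy sketch)."""
    # Numerically stable softmax over the per-patch attention scores.
    m = max(attention_scores)
    exps = [math.exp(s - m) for s in attention_scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of patch features, dimension by dimension.
    dim = len(patch_features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, patch_features))
              for d in range(dim)]
    return pooled, weights
```

The returned weights are interpretable: patches with higher scores dominate the slide-level representation, which is what makes attention maps useful for pathological inspection of the tumor microenvironment.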

2.
Ann Transl Med ; 9(1): 37, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33553330

ABSTRACT

BACKGROUND: The presence of lymphovascular invasion (LVI) and perineural invasion (PNI) is of great prognostic importance in esophageal squamous cell carcinoma. Currently, positron emission tomography (PET) scans are the only means of functional assessment prior to treatment. We aimed to predict the presence of LVI and PNI in esophageal squamous cell carcinoma from PET imaging data by training a three-dimensional convolutional neural network (3D-CNN). METHODS: Seven hundred and ninety-eight PET scans of patients with esophageal squamous cell carcinoma and 309 PET scans of patients with stage I lung cancer were collected. In the first part of this study, we built a 3D-CNN based on a residual network (ResNet) to classify the scans as esophageal cancer or lung cancer. In the second stage, we collected the PET scans of 278 patients undergoing esophagectomy to classify and predict the presence of LVI/PNI. RESULTS: In the first part, the model attained an area under the receiver operating characteristic curve (AUC) of 0.860. In the second part, we randomly split our dataset into training, validation, and testing sets (80%, 10%, and 10%, respectively) to classify the scans by the presence of LVI/PNI, and evaluated model performance on the testing set. Our 3D-CNN model attained an AUC of 0.668 on the testing set, showing better discriminative ability than random guessing. CONCLUSIONS: A 3D-CNN can be trained on PET imaging datasets to predict LVI/PNI in esophageal cancer with acceptable accuracy.
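The evaluation protocol described above (a random 80/10/10 split and AUC on the held-out test set) can be sketched in stdlib Python. This is a generic illustration of the procedure, not the authors' code; the `seed` parameter is an arbitrary assumption for reproducibility.

```python
import random

def split_indices(n, seed=0):
    """Randomly split n sample indices into 80/10/10 train/val/test subsets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

def auc(labels, scores):
    """AUC = probability a random positive outranks a random negative
    (Mann-Whitney formulation; ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random guessing, which is the baseline the reported 0.668 is compared against.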

3.
J Clin Med ; 8(6)2019 Jun 13.
Article in English | MEDLINE | ID: mdl-31200519

ABSTRACT

In esophageal cancer, few prediction tools can be confidently used in current clinical practice. We developed a deep convolutional neural network (CNN) with 798 positron emission tomography (PET) scans of esophageal squamous cell carcinoma and 309 PET scans of stage I lung cancer. In the first stage, we pretrained a 3D-CNN on all PET scans to classify the scans as esophageal cancer or lung cancer. Overall, 548 of the 798 PET scans of esophageal cancer patients were included in the second stage, with the aim of classifying patients who expired within, versus survived more than, one year after diagnosis. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. The pretrained deep CNN attained an AUC of 0.738 in identifying patients who expired within one year after diagnosis. In the survival analysis, patients who were predicted to expire but were alive at one year after diagnosis had a 5-year survival rate of 32.6%, significantly worse than the 5-year survival rate of patients who were predicted to survive and were alive at one year after diagnosis (50.5%, p < 0.001). These results suggest that the prediction model can identify tumors with more aggressive behavior. In the multivariable analysis, the prediction result remained an independent prognostic factor (hazard ratio: 2.830; 95% confidence interval: 2.252-3.555; p < 0.001). We conclude that a 3D-CNN can be trained with PET image datasets to predict esophageal cancer outcome with acceptable accuracy.
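The group-wise 5-year survival rates reported above come from standard survival analysis; the usual Kaplan-Meier estimate of survival probability, which accounts for censored follow-up, can be computed with a short stdlib sketch. This is a generic textbook estimator shown for illustration, not the authors' statistical pipeline (which would also involve a log-rank comparison and Cox regression for the hazard ratio).

```python
def kaplan_meier(times, events, t):
    """Kaplan-Meier survival probability at time t.
    times: follow-up times; events: 1 = death observed, 0 = censored."""
    # Distinct event (death) times up to and including t.
    death_times = sorted({tt for tt, e in zip(times, events) if e == 1 and tt <= t})
    s = 1.0
    for d_time in death_times:
        n = sum(1 for tt in times if tt >= d_time)  # patients still at risk
        d = sum(1 for tt, e in zip(times, events)   # deaths at this time
                if tt == d_time and e == 1)
        s *= (1 - d / n)                            # product-limit update
    return s
```

With all-censored data the estimate stays at 1.0; each observed death multiplies the running survival probability by the fraction of at-risk patients who survived that time point.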
