1.
Biol Proced Online ; 25(1): 15, 2023 Jun 02.
Article in English | MEDLINE | ID: mdl-37268878

ABSTRACT

BACKGROUND: Deep learning (DL) has been used extensively in digital histopathology. The purpose of this study was to test DL algorithms for predicting the vital status of uveal melanoma (UM) patients from whole-slide images (WSIs). METHODS: We developed a DL model (GoogLeNet) to predict the vital status of UM patients from histopathological images in the TCGA-UVM cohort and validated it in an internal cohort. The histopathological DL features extracted from the model were then applied to classify UM patients into two subtypes. Differences between the two subtypes in clinical outcomes, tumor mutations, tumor microenvironment, and probability of therapeutic response were investigated further. RESULTS: The developed DL model achieved a high accuracy (≥ 90%) for both patch-level and WSI-level prediction. Using 14 histopathological DL features, we classified UM patients into Cluster1 and Cluster2 subtypes. Compared with Cluster2, patients in the Cluster1 subtype had poorer survival, increased expression of immune-checkpoint genes, higher immune infiltration of CD8+ and CD4+ T cells, and greater sensitivity to anti-PD-1 therapy. In addition, we established and verified a prognostic histopathological DL-signature and a gene-signature, both of which outperformed traditional clinical features. Finally, a well-performing nomogram combining the DL-signature and gene-signature was constructed to predict the mortality of UM patients. CONCLUSIONS: Our findings suggest that a DL model can accurately predict vital status in UM patients using only histopathological images. We identified two subgroups based on histopathological DL features, which may help guide immunotherapy and chemotherapy. The nomogram combining the DL-signature and gene-signature provides a straightforward and reliable prognostic tool for the treatment and management of UM patients.
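The abstract reports accuracy both for individual patches and for whole slides, which implies some aggregation of patch-level predictions into a WSI-level call. The paper does not specify the aggregation rule; the sketch below shows one common choice (soft voting, i.e. averaging patch probabilities against a threshold). The function name, labels, and 0.5 threshold are illustrative assumptions, not the authors' method.

```python
def predict_wsi(patch_probs, threshold=0.5):
    """Aggregate per-patch probabilities of the 'deceased' class into a
    single WSI-level label by soft voting (mean probability vs. threshold).

    patch_probs: list of floats in [0, 1], one per tissue patch of the slide.
    Returns (label, mean_probability).
    """
    if not patch_probs:
        raise ValueError("need at least one patch prediction")
    mean_prob = sum(patch_probs) / len(patch_probs)
    label = "deceased" if mean_prob >= threshold else "alive"
    return label, mean_prob

# Hypothetical example: six patch predictions from one slide.
label, score = predict_wsi([0.92, 0.81, 0.67, 0.74, 0.55, 0.88])
```

Majority voting over hard patch labels is an equally common alternative; soft voting keeps the continuous score, which is convenient when the WSI-level probability feeds a downstream signature or nomogram.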

2.
Ophthalmol Ther ; 12(2): 1263-1279, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36826752

ABSTRACT

INTRODUCTION: Deep learning (DL) has been widely used to analyze clinical images. The objective of this project was to create DL models to predict early postoperative visual acuity after small-incision lenticule extraction (SMILE) surgery. METHODS: We enrolled three independent patient cohorts (one retrospective cohort and two prospective cohorts) who underwent the SMILE refractive correction procedure at two refractive surgery centers from July to September 2022. Medical records and surgical videos were collected for analysis. Based on uncorrected visual acuity (UCVA) at 24 h post-surgery, eyes were divided into two groups: good recovery and poor recovery. We then trained a DL model (ResNet50) on surgical videos from the retrospective cohort to predict early postoperative visual acuity, and subsequently validated the model's performance in the two prospective cohorts. Finally, Gradient-weighted Class Activation Mapping (Grad-CAM) was performed to interpret the model. RESULTS: Among the 318 eyes (159 patients) enrolled in the study, 10,176 good-quality femtosecond laser scanning images were obtained from the surgical videos. The developed DL model achieved a high accuracy of 96% for image-level prediction. In the retrospective cohort, the area under the curve (AUC) of the DL model was 0.962 in the training dataset and 0.998 in the validation dataset; the AUC values in the two prospective cohorts were 0.959 and 0.936. At the video level, a trained machine learning (ML) model (XGBoost) also accurately distinguished patients with good or poor recovery, with AUC values of 0.998 and 0.889 in the retrospective cohort (training and test datasets, respectively) and 1.000 and 0.984 in the two prospective cohorts. We also trained a DL model that accurately distinguished suction loss (100%), black spots (85%), and opaque bubble layer (96%). Grad-CAM heatmaps indicated that our models recognize the scanning area and can precisely identify intraoperative complications. CONCLUSIONS: Our findings suggest that artificial intelligence (DL and ML models) can accurately predict early postoperative visual acuity and intraoperative complications after SMILE surgery using only surgical videos or images, highlighting the potential of artificial intelligence in refractive surgery.
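The pipeline above combines an image-level DL model with a video-level XGBoost classifier, which implies that per-frame outputs are summarized into a fixed-length feature vector per video. The abstract does not state which summary statistics were used; the sketch below is a minimal, assumed version (mean, max, min, standard deviation of frame probabilities) such as might feed a video-level classifier.

```python
import statistics

def video_features(frame_probs):
    """Summarize per-frame 'poor recovery' probabilities from an image-level
    model into a fixed-length feature vector for a video-level classifier
    (e.g. a gradient-boosted tree model). The choice of statistics here is
    an illustrative assumption, not the authors' published feature set.

    frame_probs: list of floats in [0, 1], one per sampled video frame.
    Returns [mean, max, min, population std deviation].
    """
    if not frame_probs:
        raise ValueError("need at least one frame prediction")
    return [
        statistics.fmean(frame_probs),   # average risk across the video
        max(frame_probs),                # worst single frame
        min(frame_probs),                # best single frame
        statistics.pstdev(frame_probs),  # spread (e.g. a brief complication)
    ]

# Hypothetical example: five frame-level probabilities from one video.
feats = video_features([0.1, 0.2, 0.15, 0.9, 0.05])
```

Fixed-length summaries like this are a standard way to bridge frame-level predictions and a tabular model such as XGBoost, since the video length (frame count) can vary between patients.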
