1.
Cancers (Basel) ; 14(23)2022 Dec 05.
Article in English | MEDLINE | ID: mdl-36497481

ABSTRACT

We previously constructed a VGG-16-based artificial intelligence (AI) model (image classifier [IC]) to predict invasion depth in early gastric cancer (EGC) from static endoscopic images. However, static images cannot capture the spatio-temporal information available during real-time endoscopy, so the image-trained AI could not estimate invasion depth accurately and reliably. We therefore constructed a video classifier (VC) for real-time depth prediction in EGC by attaching sequential layers to the last convolutional layer of IC v2 and training on video clips. To assess consistency, we computed the standard deviation (SD) of the output probabilities over each video clip, along with frame-level sensitivities. For static images, the sensitivity, specificity, and accuracy of IC v2 were 82.5%, 82.9%, and 82.7%, respectively; for video clips, however, they were 33.6%, 85.5%, and 56.6%. The VC analyzed the videos better, with a sensitivity of 82.3%, a specificity of 85.8%, and an accuracy of 83.7%. Furthermore, the mean SD was lower for the VC than for IC v2 (0.096 vs. 0.289). An AI model trained on videos can therefore predict invasion depth in EGC more precisely and consistently than image-trained models, and is better suited to real-world situations.
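The per-clip SD used as the consistency measure above can be sketched as follows. This is a minimal illustration, assuming the classifier emits one probability per frame (e.g., the probability of submucosal invasion) and that a clip-level call is made by thresholding the mean probability; the mean-then-threshold aggregation rule is an assumption, not a detail stated in the abstract.

```python
import numpy as np

def clip_prediction(frame_probs, threshold=0.5):
    """Aggregate per-frame probabilities into one clip-level label and
    report the standard deviation (SD) of the probabilities, used here
    as the temporal-consistency measure: lower SD = steadier prediction."""
    p = np.asarray(frame_probs, dtype=float)
    return {"label": int(p.mean() >= threshold), "sd": float(p.std())}
```

A frame-by-frame classifier that flip-flops between frames yields a high SD even when its mean prediction is correct, which is the behavior the video classifier is meant to suppress.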

2.
Transl Lung Cancer Res ; 11(1): 14-23, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35242624

ABSTRACT

BACKGROUND: Thoracic lymph node (LN) evaluation is essential for the accurate diagnosis of lung cancer and for deciding the appropriate course of treatment. Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is considered a standard method for mediastinal nodal staging. This study aimed to build a deep convolutional neural network (CNN) for the automatic classification of metastatic malignancy in thoracic LNs, using EBUS-TBNA. METHODS: Patients who underwent EBUS-TBNA to assess the presence of malignancy in mediastinal LNs during a ten-month period at Severance Hospital, Seoul, Republic of Korea, were included in the study. Corresponding LN ultrasound images, pathology reports, demographic data, and clinical history were collected and analyzed. RESULTS: A total of 2,394 endobronchial ultrasound (EBUS) images were collected, comprising 1,459 benign LNs from 193 patients and 935 malignant LNs from 177 patients. We employed the visual geometry group (VGG)-16 network to classify malignant LNs, using only traditional cross-entropy as the classification loss. The sensitivity, specificity, and accuracy of predicting malignancy were 69.7%, 74.3%, and 72.0%, respectively, and the overall area under the curve (AUC) was 0.782. We then applied a new loss function to train the network and, using the modified VGG-16, the AUC improved to 0.8, while the sensitivity, specificity, and accuracy improved to 72.7%, 79.0%, and 75.8%, respectively. In addition, the proposed network can process 63 images per second on a single mainstream graphics processing unit (GPU), making it suitable for real-time analysis of EBUS images. CONCLUSIONS: Deep CNNs can effectively classify malignant LNs from EBUS images. Selecting LNs that require biopsy using real-time EBUS image analysis with deep learning is expected to shorten the EBUS-TBNA procedure time, increase lung cancer nodal staging accuracy, and improve patient safety.
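The AUC values reported above (0.782 and 0.8) can be computed without any plotting via the rank-based (Mann-Whitney) formulation: the AUC equals the probability that a randomly chosen malignant LN receives a higher score than a randomly chosen benign one. A minimal sketch, using hypothetical scores rather than the study's data:

```python
import numpy as np

def roc_auc(scores, labels):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive outscores the negative; ties count half. Equivalent to the
    area under the ROC curve."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]   # e.g., model scores for malignant LNs
    neg = scores[labels == 0]   # e.g., model scores for benign LNs
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise formulation is O(P·N) and fine at this dataset's scale; for much larger sets a rank-sum implementation (as in standard libraries) is preferable.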

3.
J Clin Med ; 8(9)2019 Aug 26.
Article in English | MEDLINE | ID: mdl-31454949

ABSTRACT

In early gastric cancer (EGC), tumor invasion depth is an important factor in determining the treatment method. However, endoscopic ultrasonography has limitations in measuring exact depth in the clinical setting, so endoscopists often depend on gross findings and personal experience. The present study aimed to develop a model optimized for EGC detection and depth prediction, and we investigated factors affecting artificial intelligence (AI) diagnosis. We employed a visual geometry group (VGG)-16 model to classify endoscopic images as EGC (T1a or T1b) or non-EGC. To induce the model to activate EGC regions during training, we proposed a novel loss function that simultaneously measures classification and localization errors. We experimented with 11,539 endoscopic images (896 T1a-EGC, 809 T1b-EGC, and 9,834 non-EGC). The areas under the receiver operating characteristic curves for EGC detection and depth prediction were 0.981 and 0.851, respectively. Among the factors affecting AI prediction of tumor depth, only histologic differentiation was significantly associated, with undifferentiated-type histology exhibiting lower AI accuracy. Thus, the lesion-based model is an appropriate training method for AI in EGC. However, further improvements and validation are required, especially for undifferentiated-type histology.
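The idea of a loss that simultaneously measures classification and localization errors can be sketched as a weighted sum of a cross-entropy term and a penalty on how little of the network's activation falls inside the annotated lesion region. Everything below is an illustrative assumption: the overlap term, the log penalty, and the weight `lam` are hypothetical stand-ins, not the paper's actual formulation.

```python
import numpy as np

def combined_loss(probs, label, cam, lesion_mask, lam=0.5):
    """Hypothetical classification + localization loss.
    probs: predicted class probabilities; label: true class index;
    cam: class activation map (non-negative); lesion_mask: binary
    mask of the annotated lesion region, same shape as cam."""
    eps = 1e-12
    ce = -np.log(probs[label] + eps)       # classification error
    cam = cam / (cam.sum() + eps)          # normalize activations to sum to 1
    overlap = (cam * lesion_mask).sum()    # activation mass inside the lesion
    loc = -np.log(overlap + eps)           # penalize activation outside the lesion
    return ce + lam * loc
```

Under this sketch, a model that classifies correctly but attends to the wrong region still pays a penalty, which is the mechanism by which such a loss encourages the network to activate EGC regions during training.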
