1.
Chinese Journal of Digestive Endoscopy ; (12): 372-378, 2023.
Article in Chinese | WPRIM | ID: wpr-995393

ABSTRACT

Objective: To construct a real-time artificial intelligence (AI)-assisted endoscopic diagnosis system based on the YOLO v3 algorithm, and to evaluate its ability to detect focal gastric lesions during gastroscopy. Methods: A total of 5 488 white-light gastroscopic images (2 733 with focal gastric lesions and 2 755 without) collected from June to November 2019, and videos of 92 cases (288 168 clear stomach frames) collected from May to June 2020, at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were retrospectively used to test the AI system. A total of 3 997 consecutive patients undergoing gastroscopy at the same center from July 6, 2020 to November 27, 2020 and from May 6, 2021 to August 2, 2021 were prospectively enrolled to assess the clinical applicability of the AI system. When the AI system recognized an abnormal lesion, it marked the lesion with a blue box as a warning. The ability to identify focal gastric lesions and the frequency and causes of false positives and false negatives were statistically analyzed. Results: In the image test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI system were 92.3% (5 064/5 488), 95.0% (2 597/2 733), 89.5% (2 467/2 755), 90.0% (2 597/2 885) and 94.8% (2 467/2 603), respectively. In the video test set, they were 95.4% (274 792/288 168), 95.2% (109 727/115 287), 95.5% (165 065/172 881), 93.4% (109 727/117 543) and 96.7% (165 065/170 625), respectively. In clinical application, the detection rate of focal gastric lesions by the AI system was 93.0% (6 830/7 344). A total of 514 focal gastric lesions were missed, mainly punctate erosions (48.8%, 251/514), diminutive xanthomas (22.8%, 117/514) and diminutive polyps (21.4%, 110/514). The median (interquartile range) number of false positives per gastroscopy was 2 (1, 4), most of which were due to normal mucosal folds (50.2%, 5 635/11 225), bubbles and mucus (35.0%, 3 928/11 225), and liquid pooled in the fundus (9.1%, 1 021/11 225). Conclusion: Application of the AI system can increase the detection rate of focal gastric lesions.
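The image-test-set metrics above follow directly from the reported fractions; a minimal Python sketch, with false negatives and false positives derived by subtraction from the stated totals:

```python
# Confusion-matrix counts from the image test set (derived from the fractions).
TP = 2597                 # lesion images correctly flagged
FN = 2733 - TP            # lesion images missed
TN = 2467                 # normal images correctly cleared
FP = 2755 - TN            # normal images wrongly flagged

accuracy    = (TP + TN) / (TP + TN + FP + FN)
sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)
ppv         = TP / (TP + FP)   # positive predictive value
npv         = TN / (TN + FN)   # negative predictive value

print(f"acc={accuracy:.1%} sens={sensitivity:.1%} spec={specificity:.1%} "
      f"ppv={ppv:.1%} npv={npv:.1%}")
# → acc=92.3% sens=95.0% spec=89.5% ppv=90.0% npv=94.8%
```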

2.
Chinese Journal of Digestive Endoscopy ; (12): 293-297, 2023.
Article in Chinese | WPRIM | ID: wpr-995384

ABSTRACT

Objective: To assess the diagnostic efficacy of an artificial intelligence (AI)-based upper gastrointestinal endoscopic image-assisted diagnosis system (ENDOANGEL-LD) for detecting gastric lesions and neoplastic lesions under white-light endoscopy. Methods: The diagnostic efficacy of ENDOANGEL-LD was tested on an image testing dataset and a video testing dataset. The image testing dataset included 300 images of gastric neoplastic lesions, 505 images of non-neoplastic lesions and 990 images of normal stomach from 191 patients in Renmin Hospital of Wuhan University from June 2019 to September 2019. The video testing dataset comprised 83 videos (38 gastric neoplastic lesions and 45 non-neoplastic lesions) of 78 patients in Renmin Hospital of Wuhan University from November 2020 to April 2021. The accuracy, sensitivity and specificity of ENDOANGEL-LD on the image testing dataset were calculated, and its accuracy, sensitivity and specificity for gastric neoplastic lesions in the video testing dataset were compared with those of four senior endoscopists. Results: In the image testing dataset, the accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric lesions were 93.9% (1 685/1 795), 98.0% (789/805) and 90.5% (896/990), respectively, while those for gastric neoplastic lesions were 88.7% (714/805), 91.0% (273/300) and 87.3% (441/505), respectively. In the video testing dataset, the sensitivity of ENDOANGEL-LD was higher than that of the four senior endoscopists [100.0% (38/38) vs. 85.5% (130/152), χ2=6.220, P=0.013]. The accuracy [81.9% (68/83) vs. 72.0% (239/332), χ2=3.408, P=0.065] and specificity [66.7% (30/45) vs. 60.6% (109/180), χ2=0.569, P=0.451] of ENDOANGEL-LD were comparable with those of the four senior endoscopists. Conclusion: ENDOANGEL-LD can accurately detect gastric lesions and further diagnose neoplastic lesions, helping endoscopists in clinical work.
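The reported sensitivity comparison (χ2 = 6.220, P = 0.013) is consistent with a Pearson chi-square test without continuity correction on the pooled 2×2 table; a minimal sketch in pure Python, using the erfc form of the one-degree-of-freedom tail probability:

```python
import math

# 2x2 table for sensitivity: ENDOANGEL-LD vs. the four pooled endoscopists.
a, b = 38, 0      # AI: neoplastic lesions detected / missed
c, d = 130, 22    # endoscopists (pooled): detected / missed
n = a + b + c + d

# Pearson chi-square statistic without continuity correction, df = 1.
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
# Tail probability of a chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2)).
p = math.erfc(math.sqrt(chi2 / 2))

print(f"chi2={chi2:.3f}, P={p:.3f}")
# → chi2=6.220, P=0.013
```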

3.
Chinese Journal of Digestive Endoscopy ; (12): 965-971, 2022.
Article in Chinese | WPRIM | ID: wpr-995348

ABSTRACT

Objective: To develop an artificial intelligence-based system for measuring the size of gastrointestinal lesions under white-light endoscopy in real time. Methods: The system consisted of three models. Model 1 identified the biopsy forceps and marked its contour in consecutive video frames; its output was passed to model 2, which classified the forceps as open or closed. Model 3 identified lesions and marked their boundaries in real time. The lesion extent was then compared with the forceps contour to calculate lesion size. Dataset 1 consisted of 4 835 images collected retrospectively from January 1, 2017 to November 30, 2019 in Renmin Hospital of Wuhan University, used for model training and validation. Dataset 2 consisted of images collected prospectively from December 1, 2019 to June 4, 2020 at the Endoscopy Center of Renmin Hospital of Wuhan University, used to test segmentation of the biopsy forceps and lesion boundaries. Dataset 3 consisted of 302 images of 151 simulated lesions, each including one image with a larger forceps tilt angle (45° from the vertical line of the lesion) and one with a smaller tilt angle (10°), to test size measurement with the forceps in different states. Dataset 4 was a video test set of prospectively collected videos taken at the Endoscopy Center of Renmin Hospital of Wuhan University from August 5, 2019 to September 4, 2020. The accuracy of model 1 in identifying the presence or absence of biopsy forceps, of model 2 in classifying forceps status (open or closed), and of model 3 in identifying the presence or absence of lesions was assessed against endoscopist review or endoscopic surgical pathology as the gold standard. Intersection over union (IoU) was used to evaluate the forceps segmentation of model 1 and the lesion segmentation of model 3, and absolute and relative errors were used to evaluate lesion size measurement. Results: (1) Dataset 2 included 1 252 images: 821 of forceps (401 open and 420 closed), 431 of non-forceps, 640 of lesions and 612 of non-lesions. Model 1 judged 433 images as non-forceps (430 correctly) and 819 as forceps (818 correctly), with an accuracy of 99.68% (1 248/1 252). On the 818 correctly judged forceps images, the mean IoU of forceps-lobe segmentation was 0.91 (95% CI: 0.90-0.92). The classification accuracy of model 2 was evaluated on the same 818 images: it judged 384 as open (382 correctly) and 434 as closed (416 correctly), for an accuracy of 97.56% (798/818). Model 3 judged 654 images as containing lesions (626 correctly) and 598 as non-lesions (584 correctly), with an accuracy of 96.65% (1 210/1 252). On the 626 correctly judged lesion images, the mean IoU was 0.86 (95% CI: 0.85-0.87). (2) In dataset 3, the mean absolute error of lesion size measurement was 0.17 mm (95% CI: 0.08-0.28 mm) and the mean relative error was 3.77% (95% CI: 0.00%-10.85%) at the small tilt angle, and 0.17 mm (95% CI: 0.09-0.26 mm) and 4.02% (95% CI: 2.90%-5.14%) at the large tilt angle. (3) Dataset 4 included 780 images from 59 endoscopic examination videos of 59 patients; the mean absolute error was 0.24 mm (95% CI: 0.00-0.67 mm) and the mean relative error was 9.74% (95% CI: 0.00%-29.83%). Conclusion: The system can measure the size of gastrointestinal lesions under endoscopy accurately and may improve the accuracy of endoscopists' size estimates.
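The evaluation metric and the measurement principle described above can be sketched as follows; `FORCEPS_SPAN_MM` is a hypothetical calibration constant standing in for the known physical size of the forceps, not a value reported in the study:

```python
# IoU between two binary masks (flattened to equal-length 0/1 sequences),
# the metric used to score forceps and lesion segmentation.
def iou(mask_a, mask_b):
    inter = sum(1 for p, q in zip(mask_a, mask_b) if p and q)
    union = sum(1 for p, q in zip(mask_a, mask_b) if p or q)
    return inter / union if union else 1.0

# Size-estimation principle: the forceps contour provides a pixel-to-mm
# scale, which converts the lesion's pixel extent into millimetres.
FORCEPS_SPAN_MM = 2.0  # hypothetical known forceps span in mm

def lesion_size_mm(lesion_px, forceps_px, forceps_span_mm=FORCEPS_SPAN_MM):
    return lesion_px * forceps_span_mm / forceps_px
```

For example, a lesion spanning twice as many pixels as the forceps reference would be estimated at twice the forceps span.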

4.
Chinese Journal of Digestion ; (12): 464-469, 2022.
Article in Chinese | WPRIM | ID: wpr-958335

ABSTRACT

Objective: To construct a deep learning-based diagnostic system for gastrointestinal submucosal tumors (SMT) under endoscopic ultrasonography (EUS), so as to help endoscopists diagnose SMT. Methods: From January 1, 2019 to December 15, 2021, 245 patients with pathologically confirmed SMT who underwent EUS and endoscopic submucosal dissection at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were enrolled, and 3 400 EUS images were collected. Of these, 2 722 EUS images were used to train the lesion segmentation model and 2 209 to train the stromal tumor versus leiomyoma classification model; 283 and 191 images were selected as independent test sets to evaluate the segmentation and classification models, respectively. Thirty images were selected as an independent dataset for a human-machine competition comparing the lesion classification accuracy of the model with that of six endoscopists. The performance of the segmentation model was evaluated by indexes such as intersection over union (IoU) and the Dice coefficient; the performance of the classification model was evaluated by accuracy. The chi-square test was used for statistical analysis. Results: The mean IoU and Dice coefficient of the lesion segmentation model were 0.754 and 0.835, respectively, and its accuracy, recall and F1 score were 95.2%, 98.9% and 97.0%, respectively. With lesion segmentation, the accuracy of the classification model increased from 70.2% to 92.1%. In the human-machine competition, the accuracy of the classification model in the differential diagnosis of stromal tumor and leiomyoma was 86.7% (26/30), which was superior to that of 4 of the 6 endoscopists (56.7%, 17/30; 56.7%, 17/30; 53.3%, 16/30; 60.0%, 18/30), and the differences were statistically significant (χ2=7.11, 7.36, 8.10, 6.13; all P<0.05). There was no significant difference between the accuracy of the other two endoscopists (76.7%, 23/30; 73.3%, 22/30) and that of the model (both P>0.05). Conclusion: This system could be used for the auxiliary diagnosis of SMT under EUS and provide strong evidence for subsequent treatment decisions.
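The two segmentation scores above are related but not interchangeable: for a single mask pair, Dice = 2·IoU/(1 + IoU), though means over a test set (0.754 mean IoU vs. 0.835 mean Dice here) need not satisfy that identity. A minimal sketch of both scores:

```python
# Dice coefficient and IoU for binary masks (equal-length 0/1 sequences).
def overlap_scores(mask_a, mask_b):
    inter = sum(1 for p, q in zip(mask_a, mask_b) if p and q)
    union = sum(1 for p, q in zip(mask_a, mask_b) if p or q)
    total = sum(mask_a) + sum(mask_b)
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

iou, dice = overlap_scores([1, 1, 1, 0], [1, 1, 0, 1])
# For one pair, the identity dice == 2 * iou / (1 + iou) holds exactly.
```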

5.
Chinese Journal of Health Management ; (6): 153-157, 2022.
Article in Chinese | WPRIM | ID: wpr-932957

ABSTRACT

Objective: To explore the effects of different blood glucose management modes, based on the WeChat platform, on self-management ability and glucose and lipid metabolism in patients with type 2 diabetes mellitus (T2DM). Methods: A total of 240 patients with T2DM were selected in Taiyuan Central Hospital from January to June 2020. Using a random number table, they were randomly divided into a general management group, a medical care management group, a peer management group, and a medical care and peer co-management group, with 60 cases in each. The general management group received routine outpatient follow-up; the other three groups were each managed through dedicated WeChat groups. Self-management ability and glucose and lipid metabolism indexes were compared in each group before and after six months of intervention, using t-tests or nonparametric tests. Results: After the intervention, self-management abilities such as diet, exercise, blood glucose monitoring, medication compliance, foot care and smoking, as well as fasting blood glucose (FBG) and glycosylated hemoglobin (HbA1c), improved in all four groups (all P<0.05). The medical care management, peer management, and medical care and peer co-management groups improved further than the general group (all P<0.05). Except for smoking, the indicators of the medical care and peer co-management group differed significantly from those of the separate medical care management and peer management groups (all P<0.05). Triacylglycerol (TG) in the four groups improved compared with pre-intervention values [1.9 (1.2, 2.7) vs. 2.3 (1.6, 3.5) mmol/L; 1.4 (1.2, 2.1) vs. 2.2 (1.6, 3.2) mmol/L; 1.6 (1.1, 2.0) vs. 2.2 (1.4, 3.2) mmol/L; 1.5 (1.0, 2.1) vs. 2.4 (1.3, 3.1) mmol/L] (all P<0.05), as did total cholesterol (TC) [(4.7±0.9) vs. (5.1±1.2) mmol/L; (4.2±1.1) vs. (5.2±1.2) mmol/L; (4.3±1.1) vs. (5.4±1.3) mmol/L; (4.2±1.1) vs. (5.0±1.4) mmol/L] (all P<0.05), and TG and TC of the medical care management, peer management, and medical care and peer co-management groups were lower than those of the general group (all P<0.05). Conclusion: Based on the WeChat platform, the medical care and peer co-management mode is conducive to better self-management and blood glucose control in T2DM patients.
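The pre/post comparisons above rely on paired tests; a minimal sketch of the paired t statistic computed from within-patient differences (the input values are illustrative only, not the study's raw data):

```python
import math

# Paired t statistic: mean of within-patient differences divided by its
# standard error; compared against a t distribution with n - 1 df.
def paired_t(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative TC-like values (mmol/L) before and after an intervention;
# a negative statistic indicates a decrease after the intervention.
t_stat = paired_t([5.1, 5.3, 4.9, 5.6], [4.6, 4.8, 4.7, 5.0])
```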
