1.
Chinese Journal of Digestive Endoscopy ; (12): 372-378, 2023.
Article in Chinese | WPRIM | ID: wpr-995393

ABSTRACT

Objective: To construct a real-time artificial intelligence (AI)-assisted endoscopic diagnosis system based on the YOLO v3 algorithm, and to evaluate its ability to detect focal gastric lesions during gastroscopy. Methods: A total of 5 488 white-light gastroscopic images (2 733 with focal gastric lesions and 2 755 without) from June to November 2019 and videos of 92 cases (288 168 clear stomach frames) from May to June 2020 at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were retrospectively collected to test the AI system. A total of 3 997 prospective consecutive patients undergoing gastroscopy at the same center from July 6, 2020 to November 27, 2020 and from May 6, 2021 to August 2, 2021 were enrolled to assess the clinical applicability of the AI system. When the AI system recognized an abnormal lesion, it marked the lesion with a blue box as a warning. The ability to identify focal gastric lesions and the frequency and causes of false positives and false negatives were statistically analyzed. Results: In the image test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI system were 92.3% (5 064/5 488), 95.0% (2 597/2 733), 89.5% (2 467/2 755), 90.0% (2 597/2 885) and 94.8% (2 467/2 603), respectively. In the video test set, they were 95.4% (274 792/288 168), 95.2% (109 727/115 287), 95.5% (165 065/172 881), 93.4% (109 727/117 543) and 96.7% (165 065/170 625), respectively. In clinical application, the detection rate of focal gastric lesions by the AI system was 93.0% (6 830/7 344). A total of 514 focal gastric lesions were missed, mainly punctate erosions (48.8%, 251/514), diminutive xanthomas (22.8%, 117/514) and diminutive polyps (21.4%, 110/514). The number of false positives per gastroscopy was 2 (1, 4), most of which were due to normal mucosal folds (50.2%, 5 635/11 225), bubbles and mucus (35.0%, 3 928/11 225), and liquid deposited in the fundus (9.1%, 1 021/11 225). Conclusion: The application of the AI system can increase the detection rate of focal gastric lesions.
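The abstract describes the detection workflow only at a high level. As a rough illustration of the kind of real-time overlay it describes, a detector scoring each frame and flagging suspected lesions with a blue box, here is a minimal Python sketch; the detect_lesions stub, the video path and the confidence threshold are placeholders, not details from the study.

```python
import cv2  # OpenCV for video I/O and drawing


def detect_lesions(frame):
    """Placeholder for a YOLO v3-style detector.

    In the study this would be the trained network; here it simply
    returns an empty list so the sketch stays runnable. Each detection
    is (x, y, w, h, confidence) in pixel coordinates.
    """
    return []


def run_realtime_overlay(video_path, conf_threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h, conf) in detect_lesions(frame):
            if conf < conf_threshold:
                continue
            # Blue warning box (OpenCV uses BGR colour order).
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            cv2.putText(frame, f"lesion {conf:.2f}", (x, max(y - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 0, 0), 2)
        cv2.imshow("AI-assisted gastroscopy (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run_realtime_overlay("gastroscopy_sample.mp4")  # hypothetical file name
```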

2.
Chinese Journal of Digestive Endoscopy ; (12): 293-297, 2023.
Article in Chinese | WPRIM | ID: wpr-995384

ABSTRACT

Objective: To assess the diagnostic efficacy of an artificial intelligence (AI)-based upper gastrointestinal endoscopic image-assisted diagnosis system (ENDOANGEL-LD) for detecting gastric lesions and neoplastic lesions under white-light endoscopy. Methods: The diagnostic efficacy of ENDOANGEL-LD was tested using an image testing dataset and a video testing dataset, respectively. The image testing dataset included 300 images of gastric neoplastic lesions, 505 images of non-neoplastic lesions and 990 images of normal stomach from 191 patients in Renmin Hospital of Wuhan University from June 2019 to September 2019. The video testing dataset comprised 83 videos (38 gastric neoplastic lesions and 45 non-neoplastic lesions) of 78 patients in Renmin Hospital of Wuhan University from November 2020 to April 2021. The accuracy, sensitivity and specificity of ENDOANGEL-LD on the image testing dataset were calculated. The accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric neoplastic lesions in the video testing dataset were compared with those of four senior endoscopists. Results: In the image testing dataset, the accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric lesions were 93.9% (1 685/1 795), 98.0% (789/805) and 90.5% (896/990) respectively, while those for gastric neoplastic lesions were 88.7% (714/805), 91.0% (273/300) and 87.3% (441/505) respectively. In the video testing dataset, the sensitivity [100.0% (38/38) VS 85.5% (130/152), χ2=6.220, P=0.013] of ENDOANGEL-LD was higher than that of the four senior endoscopists. The accuracy [81.9% (68/83) VS 72.0% (239/332), χ2=3.408, P=0.065] and specificity [66.7% (30/45) VS 60.6% (109/180), χ2=0.569, P=0.451] of ENDOANGEL-LD were comparable with those of the four senior endoscopists. Conclusion: ENDOANGEL-LD can accurately detect gastric lesions and further diagnose neoplastic lesions to assist endoscopists in clinical work.
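For reference, the accuracy, sensitivity, specificity, PPV and NPV reported throughout these abstracts all follow from a 2x2 confusion matrix. The short helper below recomputes the gastric-lesion figures from the image testing dataset, with counts reconstructed from the fractions given above (TP=789, FN=16, TN=896, FP=94); the function itself is generic.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }


# Worked check against the image testing dataset for gastric lesions:
# 789/805 lesion images detected (TP=789, FN=16) and 896/990 normal
# images passed (TN=896, FP=94), matching the reported 93.9%/98.0%/90.5%.
print(diagnostic_metrics(tp=789, fp=94, tn=896, fn=16))
```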

3.
Chinese Journal of Digestive Endoscopy ; (12): 206-211, 2023.
Article in Chinese | WPRIM | ID: wpr-995376

ABSTRACT

Objective: To analyze the cost-effectiveness of a relatively mature artificial intelligence (AI)-assisted diagnosis and treatment system (ENDOANGEL) for gastrointestinal endoscopy in China, and to provide objective and effective data support for hospital acquisition decisions. Methods: The number of gastrointestinal endoscopy procedures at the Endoscopy Center of Renmin Hospital of Wuhan University from January 2017 to December 2019 was collected to predict the number of procedures over the expected service life (10 years) of ENDOANGEL. The net present value, payback period and average rate of return were used to analyze the cost-effectiveness of ENDOANGEL. Results: The net present value of an ENDOANGEL over its expected service life (10 years) was 6 724 100 yuan, the payback period was 1.10 years, and the average rate of return reached 147.84%. Conclusion: ENDOANGEL shows significant economic benefits, and it is reasonable for hospitals to acquire a mature AI-assisted diagnosis and treatment system for gastrointestinal endoscopy.
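The three indicators used in this analysis (net present value, payback period and average rate of return) are standard capital-budgeting formulas. The sketch below shows one plausible way to compute them; the cash flows and the 5% discount rate are purely hypothetical, since the abstract reports only the resulting figures.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial (negative) outlay at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))


def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns non-negative,
    interpolating within the break-even year."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        if cumulative + cf >= 0:
            return t - 1 + (-cumulative) / cf if t > 0 else 0.0
        cumulative += cf
    return float("inf")


def average_rate_of_return(cash_flows):
    """Mean annual net inflow divided by the initial investment."""
    initial = -cash_flows[0]
    annual = cash_flows[1:]
    return (sum(annual) / len(annual)) / initial


# Hypothetical example: an outlay of 1,000,000 yuan followed by ten years
# of 900,000 yuan net annual benefit, discounted at 5% (illustrative only;
# the abstract reports the results, not the underlying cash flows).
flows = [-1_000_000] + [900_000] * 10
print(round(npv(0.05, flows)),
      round(payback_period(flows), 2),
      f"{average_rate_of_return(flows):.2%}")
```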

4.
Chinese Journal of Digestive Endoscopy ; (12): 109-114, 2023.
Article in Chinese | WPRIM | ID: wpr-995366

ABSTRACT

Objective: To construct an artificial intelligence-assisted diagnosis system to recognize the endoscopic features of Helicobacter pylori (HP) infection, and to evaluate its performance in real clinical cases. Methods: A total of 1 033 cases who underwent 13C-urea breath test and gastroscopy in the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to March 2021 were collected retrospectively. Patients with positive results of the 13C-urea breath test (defined as HP infection) were assigned to the case group (n=485), and those with negative results to the control group (n=548). Gastroscopic images of the various mucosal features indicating HP-positive and HP-negative status, as well as the gastroscopic images of HP-positive and HP-negative cases, were randomly assigned to the training set, validation set and test set at a ratio of 8∶1∶1. An artificial intelligence-assisted diagnosis system for identifying HP infection was developed based on a convolutional neural network (CNN) and a long short-term memory network (LSTM). In the system, the CNN identifies and extracts mucosal features from each patient's endoscopic images and generates feature vectors, and the LSTM then receives the feature vectors to comprehensively judge the HP infection status. The diagnostic performance of the system was evaluated by sensitivity, specificity, accuracy and area under the receiver operating characteristic curve (AUC). Results: The diagnostic accuracy of the system for nodularity, atrophy, intestinal metaplasia, xanthoma, diffuse redness + spotty redness, mucosal swelling + enlarged folds + sticky mucus and HP-negative features was 87.5% (14/16), 74.1% (83/112), 90.0% (45/50), 88.0% (22/25), 63.3% (38/60), 80.1% (238/297) and 85.7% (36/42), respectively. The sensitivity, specificity, accuracy and AUC of the system for predicting HP infection were 89.6% (43/48), 61.8% (34/55), 74.8% (77/103), and 0.757, respectively. The diagnostic accuracy of the system was equivalent to that of endoscopists diagnosing HP infection under white light (74.8% VS 72.1%, χ2=0.246, P=0.620). Conclusion: The system developed in this study shows noteworthy ability in evaluating HP status, and can be used to assist endoscopists in diagnosing HP infection.
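The abstract outlines the architecture (a CNN producing per-image feature vectors that an LSTM aggregates into a per-patient prediction) without implementation details. The following PyTorch sketch shows one plausible arrangement of that idea; the ResNet-18 backbone, feature and hidden dimensions, and input sizes are illustrative assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models


class HPInfectionClassifier(nn.Module):
    """Sketch of the CNN + LSTM idea described above: a CNN embeds each
    endoscopic image, the per-image feature vectors are fed as a sequence
    to an LSTM, and the final hidden state predicts HP status."""

    def __init__(self, feat_dim=512, hidden_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone would do
        backbone.fc = nn.Identity()                # keep the 512-d pooled features
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)       # HP-positive vs HP-negative

    def forward(self, images):                     # images: (batch, n_imgs, 3, H, W)
        b, n, c, h, w = images.shape
        feats = self.cnn(images.view(b * n, c, h, w)).view(b, n, -1)
        _, (hn, _) = self.lstm(feats)              # hn: (1, batch, hidden_dim)
        return self.head(hn[-1])                   # one logit pair per patient


# One simulated patient with 8 gastroscopic frames of 224x224 pixels.
logits = HPInfectionClassifier()(torch.randn(1, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```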

5.
Chinese Journal of Digestive Endoscopy ; (12): 965-971, 2022.
Article in Chinese | WPRIM | ID: wpr-995348

ABSTRACT

Objective: To develop an artificial intelligence-based system for measuring the size of gastrointestinal lesions under white-light endoscopy in real time. Methods: The system consisted of 3 models. Model 1 was used to identify the biopsy forceps and mark the contour of the forceps in consecutive video frames. The results of model 1 were passed to model 2, which classified the forceps as open or closed. Model 3 was used to identify lesions and mark their boundaries in real time. The length of each lesion was then compared with the forceps contour to calculate the lesion size. Dataset 1 consisted of 4 835 images collected retrospectively from January 1, 2017 to November 30, 2019 in Renmin Hospital of Wuhan University, which were used for model training and validation. Dataset 2 consisted of images collected prospectively from December 1, 2019 to June 4, 2020 at the Endoscopy Center of Renmin Hospital of Wuhan University, which were used to test the ability of the models to segment the boundaries of the biopsy forceps and lesions. Dataset 3 consisted of 302 images of 151 simulated lesions, each of which included one image at a larger tilt angle (45° from the vertical line of the lesion) and one image at a smaller tilt angle (10° from the vertical line of the lesion), to test the ability of the model to measure lesion size with the biopsy forceps in different states. Dataset 4 was a video test set consisting of prospectively collected videos taken at the Endoscopy Center of Renmin Hospital of Wuhan University from August 5, 2019 to September 4, 2020. The accuracy of model 1 in identifying the presence or absence of biopsy forceps, of model 2 in classifying the status of the biopsy forceps (open or closed), and of model 3 in identifying the presence or absence of lesions was assessed, with endoscopist review or endoscopic surgical pathology as the gold standard. Intersection over union (IoU) was used to evaluate the forceps segmentation of model 1 and the lesion segmentation of model 3, and the absolute error and relative error were used to evaluate the ability of the system to measure lesion size. Results: (1) A total of 1 252 images were included in dataset 2, including 821 images of forceps (401 open and 420 closed), 431 images without forceps, 640 images of lesions and 612 images without lesions. Model 1 judged 433 images as without forceps (430 correctly) and 819 images as with forceps (818 correctly), for an accuracy of 99.68% (1 248/1 252). Based on the 818 correctly judged forceps images, the mean IoU of model 1 for segmenting the biopsy forceps lobes was 0.91 (95% CI: 0.90-0.92). The classification accuracy of model 2 was evaluated on the 818 forceps images correctly judged by model 1: model 2 judged 384 images as open forceps (382 correctly) and 434 as closed forceps (416 correctly), for a classification accuracy of 97.56% (798/818). Model 3 judged 654 images as containing lesions (626 correctly) and 598 as without lesions (584 correctly), for an accuracy of 96.65% (1 210/1 252). Based on the 626 lesion images correctly judged by model 3, the mean IoU was 0.86 (95% CI: 0.85-0.87).
(2) In dataset 3, when the tilt angle of the biopsy forceps was small, the mean absolute error of the system's lesion size measurement was 0.17 mm (95% CI: 0.08-0.28 mm) and the mean relative error was 3.77% (95% CI: 0.00%-10.85%). When the biopsy forceps were tilted at a large angle, the mean absolute error was 0.17 mm (95% CI: 0.09-0.26 mm) and the mean relative error was 4.02% (95% CI: 2.90%-5.14%). (3) In dataset 4, a total of 780 images from 59 endoscopy videos of 59 patients were included. The mean absolute error of the system's lesion size measurement was 0.24 mm (95% CI: 0.00-0.67 mm), and the mean relative error was 9.74% (95% CI: 0.00%-29.83%). Conclusion: The system can measure the size of gastrointestinal lesions under endoscopy accurately and may improve the measurement accuracy of endoscopists.
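Two pieces of this pipeline lend themselves to a short illustration: the IoU metric used to score segmentation, and the conversion from lesion pixels to millimetres using the biopsy forceps as an in-frame reference of known physical size. The sketch below assumes a nominal forceps dimension of 2.4 mm purely for illustration; the study does not state the reference value it used.

```python
import numpy as np


def iou(mask_a, mask_b):
    """Intersection over union of two boolean masks (the segmentation
    metric reported in the abstract)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0


def lesion_size_mm(lesion_len_px, forceps_len_px, forceps_len_mm=2.4):
    """Convert a lesion's pixel length to millimetres using the biopsy
    forceps as an in-frame reference of known size. The 2.4 mm default
    is an illustrative value, not a figure from the study."""
    return lesion_len_px * forceps_len_mm / forceps_len_px


# Toy check: two overlapping 10x10 squares shifted by 3 pixels.
a = np.zeros((20, 20), bool); a[0:10, 0:10] = True
b = np.zeros((20, 20), bool); b[3:13, 0:10] = True
print(round(iou(a, b), 3))                       # 0.538
print(round(lesion_size_mm(150, 60), 1), "mm")   # 6.0 mm with the assumed reference
```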

6.
Chinese Journal of Digestion ; (12): 433-438, 2022.
Article in Chinese | WPRIM | ID: wpr-958330

ABSTRACT

Objective: To compare the ability of deep convolutional neural network-crop (DCNN-C) and deep convolutional neural network-whole (DCNN-W), two artificial intelligence systems based on different training methods, to diagnose early gastric cancer (EGC) under magnifying image-enhanced endoscopy (M-IEE). Methods: Images and video clips of EGC and non-cancerous lesions under M-IEE in narrow-band imaging or blue laser imaging mode were retrospectively collected at the Endoscopy Center of Renmin Hospital of Wuhan University to form the training and test sets for DCNN-C and DCNN-W. The abilities of DCNN-C and DCNN-W to identify EGC in the image test set were compared. The abilities of DCNN-C, DCNN-W and 3 senior endoscopists (average performance) to identify EGC in the video test set were also compared. Paired chi-squared test and chi-squared test were used for statistical analysis. Inter-observer agreement was expressed as Cohen's kappa coefficient (Kappa value). Results: In the image test set, the accuracy, sensitivity, specificity and positive predictive value of DCNN-C in EGC diagnosis were 94.97% (1 133/1 193), 97.12% (202/208), 94.52% (931/985) and 78.91% (202/256), respectively, which were higher than those of DCNN-W (86.84%, 1 036/1 193; 92.79%, 193/208; 85.58%, 843/985 and 57.61%, 193/335), and the differences were statistically significant (χ2=4.82, 4.63, 61.04 and 29.69; P=0.028, =0.035, <0.001 and <0.001). In the video test set, the accuracy, specificity and positive predictive value of the senior endoscopists in EGC diagnosis were 67.67%, 60.42% and 53.37%, respectively, which were lower than those of DCNN-C (93.00%, 92.19% and 87.18%), and the differences were statistically significant (χ2=20.83, 16.41 and 11.61; P<0.001, <0.001 and =0.001). The accuracy, specificity and positive predictive value of DCNN-C in EGC diagnosis were higher than those of DCNN-W (79.00%, 70.31% and 64.15%, respectively), and the differences were statistically significant (χ2=7.04, 8.45 and 6.18; P=0.007, 0.003 and 0.013). There were no significant differences in accuracy, specificity or positive predictive value between the senior endoscopists and DCNN-W in EGC diagnosis (all P>0.05). The sensitivity of the senior endoscopists, DCNN-W and DCNN-C in EGC diagnosis was 80.56%, 94.44% and 94.44%, respectively, and the differences were not statistically significant (all P>0.05). The agreement analysis showed that the agreement between the senior endoscopists and the gold standard was fair to moderate (Kappa=0.259, 0.532, 0.329), the agreement between DCNN-W and the gold standard was moderate (Kappa=0.587), and the agreement between DCNN-C and the gold standard was very high (Kappa=0.851). Conclusion: With the same training set, DCNN-C diagnoses EGC better than DCNN-W and senior endoscopists, and the diagnostic level of DCNN-W is equivalent to that of senior endoscopists.
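Cohen's kappa, used above to express agreement with the gold standard, can be computed directly from a 2x2 agreement table. The helper below shows the standard formula; the counts in the example are hypothetical, since the abstract reports only the kappa values.

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both raters positive, d = both negative,
    b and c = the two kinds of disagreement."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)


# Hypothetical agreement table between a model and the gold standard
# (the abstract reports kappa values, not the underlying tables).
print(round(cohens_kappa(a=85, b=10, c=5, d=100), 3))
```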

7.
Chinese Journal of Digestive Endoscopy ; (12): 707-713, 2022.
Article in Chinese | WPRIM | ID: wpr-958309

ABSTRACT

Objective: To evaluate the Kyoto gastritis score for diagnosing Helicobacter pylori (HP) infection in the Chinese population. Methods: A total of 902 cases who underwent 13C-urea breath test and gastroscopy at the same time at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to December 2020 were studied retrospectively, including 345 HP-positive and 557 HP-negative cases. The differences in mucosal features and Kyoto gastritis score between HP-positive and HP-negative patients were analyzed. A receiver operating characteristic (ROC) curve was plotted for predicting HP infection by the Kyoto gastritis score. Results: Compared with HP-negative patients, nodules [8.1% (28/345) VS 0.2% (1/557), χ2=86.29, P<0.001], diffuse redness [47.8% (165/345) VS 6.6% (37/557), χ2=413.63, P<0.001], atrophy [27.8% (96/345) VS 13.8% (77/557), χ2=52.90, P<0.001] and fold enlargement [69.0% (238/345) VS 36.6% (204/557), χ2=175.38, P<0.001] occurred more frequently in HP-positive patients. For predicting HP infection, nodules showed the highest specificity [99.8% (556/557)] and positive predictive value [96.6% (28/29)], diffuse redness showed the largest area under the ROC curve (AUC, 0.707), and fold enlargement showed the highest sensitivity [69.0% (238/345)] and negative predictive value [76.7% (353/460)]. The Kyoto gastritis score of HP-positive patients was higher than that of HP-negative patients [2 (1, 2) VS 0 (0, 1), Z=20.82, P<0.001]. At the optimal threshold of 2, the AUC of the Kyoto gastritis score for predicting HP infection was 0.779. Conclusion: Nodules, diffuse redness, atrophy and fold enlargement under gastroscopy suggest HP infection, and a Kyoto gastritis score ≥2 is a sufficient reference for diagnosing HP infection.
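The ROC analysis described above (AUC of the Kyoto gastritis score and an optimal cut-off of 2) can be reproduced for any integer score with scikit-learn, as in the sketch below; the simulated scores are illustrative stand-ins, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Simulated Kyoto gastritis scores (0-8): HP-positive patients skewed higher,
# HP-negative skewed lower. Purely illustrative, not the study data.
pos = rng.binomial(8, 0.35, size=345)
neg = rng.binomial(8, 0.10, size=557)
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(345), np.zeros(557)])

auc = roc_auc_score(labels, scores)
fpr, tpr, thresholds = roc_curve(labels, scores)
youden = thresholds[np.argmax(tpr - fpr)]   # cut-off maximising Youden's J
print(f"AUC={auc:.3f}, optimal score cut-off={youden:.0f}")
```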

8.
Chinese Journal of Digestive Endoscopy ; (12): 538-541, 2022.
Article in Chinese | WPRIM | ID: wpr-958290

ABSTRACT

Objective: To evaluate the impact of an artificial intelligence (AI) system on the diagnosis rate of precancerous conditions of gastric cancer. Methods: A single-center self-controlled study was conducted with factors such as the endoscope mainframe and model, operating doctor, season and climate controlled, and with pathology as the gold standard. The diagnosis rates of precancerous conditions of gastric cancer, including atrophic gastritis (AG) and intestinal metaplasia (IM), in traditional gastroscopy (from September 1, 2019 to November 30, 2019) and AI-assisted endoscopy (from September 1, 2020 to November 15, 2020) in the Eighth Hospital of Wuhan were statistically analyzed and compared, and subgroup analysis was conducted according to the seniority of doctors. Results: Compared with traditional gastroscopy, the AI system significantly improved the diagnosis rate of AG [13.3% (38/286) VS 7.4% (24/323), χ2=5.689, P=0.017] and IM [33.9% (97/286) VS 26.0% (84/323), χ2=4.544, P=0.033]. For junior doctors (less than 5 years of endoscopic experience), the AI system had a more significant effect on the diagnosis rate of AG [11.9% (22/185) VS 5.8% (11/189), χ2=4.284, P=0.038] and IM [30.3% (56/185) VS 20.6% (39/189), χ2=4.580, P=0.032]. For senior doctors (more than 10 years of endoscopic experience), although the diagnosis rates of AG and IM increased slightly, the differences were not statistically significant. Conclusion: The AI system shows the potential to improve the diagnosis rate of precancerous conditions of gastric cancer, especially for junior endoscopists, and to reduce missed diagnosis of early gastric cancer.
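The chi-squared comparisons reported above are plain 2x2 tests. As a worked check, the snippet below rebuilds the atrophic gastritis table from the counts in this abstract and recovers approximately the reported χ2=5.689 and P=0.017 (Pearson's test without continuity correction).

```python
from scipy.stats import chi2_contingency

# 2x2 table for atrophic gastritis detection, taken from the abstract:
# rows = AI-assisted vs traditional gastroscopy, columns = AG diagnosed vs not.
table = [[38, 286 - 38],
         [24, 323 - 24]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.3f}, p={p:.3f}")   # ~5.69 and ~0.017, matching the report
```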

9.
Chinese Journal of Radiological Medicine and Protection ; (12): 823-829, 2022.
Article in Chinese | WPRIM | ID: wpr-956867

ABSTRACT

Objective: To investigate the effects of Bifidobacterium animalis subsp. lactis BB-12 on hippocampal neuroinflammation and cognitive function in mice after whole-brain radiotherapy. Methods: A total of sixty male C57BL/6J mice aged 7-8 weeks were randomly divided into 5 groups with 12 mice in each group: control group (Con group), probiotic group (BB-12 group), irradiation group (IR group), irradiation and memantine group (IR+Memantine group), and irradiation and probiotic group (IR+BB-12 group). The model of radiation-induced brain injury was established by 10 Gy whole-brain radiotherapy with a medical linear accelerator. The Y-maze test was used to evaluate cognitive function. The activation of microglia and astrocytes was observed by immunofluorescence staining. The expression of the inflammatory cytokines interleukin-1β (IL-1β), IL-6 and tumor necrosis factor-α (TNF-α) was detected by quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) and Western blot. Results: The Y-maze test showed that, compared with the Con group, the percentage of novel-arm entries out of total entries into the three arms decreased significantly in the IR group (t=5.04, P<0.05). BB-12 mitigated the radiation-induced cognitive dysfunction (t=4.72, P<0.05). Compared with the Con group, the number (t=3.05, 7.18, P<0.05) and circularity index (t=6.23, 2.52, P<0.05) of Iba1- and GFAP-positive cells were increased and the microglia and astrocytes were activated in the hippocampus of the IR group, and these alterations were eliminated by BB-12. After whole-brain irradiation, the mRNA and protein expression levels of the inflammatory cytokines IL-1β, IL-6 and TNF-α in the hippocampus were significantly increased compared with the Con group (tmRNA=4.10, 3.04, 4.18, P<0.05; tprotein=11.49, 7.04, 8.42, P<0.05), and were significantly reduced by BB-12 compared with the IR group (tmRNA=4.20, 3.40, 2.84, P<0.05; tprotein=6.36, 4.03, 3.75, P<0.05). Conclusions: Bifidobacterium animalis BB-12 can suppress microglia- and astrocyte-mediated neuroinflammation in the hippocampus of mice after radiotherapy and alleviate radiation-induced cognitive dysfunction. Therefore, BB-12 has potential application in alleviating radiation-induced brain injury.

10.
Chinese Journal of Digestion ; (12): 606-612, 2021.
Article in Chinese | WPRIM | ID: wpr-912216

ABSTRACT

Objective: To develop early gastric cancer (EGC) detection models for magnifying blue laser imaging (ME-BLI) and magnifying narrow-band imaging (ME-NBI) based on deep convolutional neural networks, to compare the performance of the two models, and to explore the effect of training methods on accuracy. Methods: Images of benign gastric lesions and EGC under ME-BLI and ME-NBI were collected. A total of five data sets and three test sets were compiled. Data set 1 included 2 024 noncancerous lesion and 452 EGC images under ME-BLI. Data set 2 included 2 024 noncancerous lesion and 452 EGC images under ME-NBI. Data set 3 was the combination of data sets 1 and 2 (a total of 4 048 noncancerous lesion and 904 EGC images under ME-BLI and ME-NBI). Data set 4: on the basis of data set 2, another 62 noncancerous lesion and 2 305 EGC images under ME-NBI were added (2 086 noncancerous lesion and 2 757 EGC images under ME-NBI). Data set 5: on the basis of data set 3, another 62 noncancerous lesion and 2 305 EGC images under ME-NBI were added (4 110 noncancerous lesion and 3 209 EGC images under ME-NBI and ME-BLI). Test set A included 422 noncancerous lesion and 197 EGC images under ME-BLI. Test set B included 422 noncancerous lesion and 197 EGC images under ME-NBI. Test set C was the combination of test sets A and B (844 noncancerous and 394 EGC images under ME-BLI and ME-NBI). Five models were constructed from these five data sets and their performance was evaluated on the three test sets. Per-lesion videos were collected and used to compare the performance of the deep convolutional neural network models under ME-BLI and ME-NBI for detecting EGC in a clinical environment, and the models were also compared with four senior endoscopists. The primary endpoints were the diagnostic accuracy, sensitivity and specificity for EGC. Chi-square test was used for statistical analysis. Results: The performance of model 1 was the best in test set A, with accuracy, sensitivity and specificity of 76.90% (476/619), 63.96% (126/197) and 82.94% (350/422), respectively. The performance of model 2 was the best in test set B, with accuracy, sensitivity and specificity of 86.75% (537/619), 92.89% (183/197) and 83.89% (354/422), respectively. The performance of model 3 was the best in test set B, with accuracy, sensitivity and specificity of 86.91% (538/619), 84.26% (166/197) and 88.15% (372/422), respectively. The performance of model 4 was the best in test set B, with accuracy, sensitivity and specificity of 85.46% (529/619), 95.43% (188/197) and 80.81% (341/422), respectively. The performance of model 5 was the best in test set B, with accuracy, sensitivity and specificity of 83.52% (517/619), 96.95% (191/197) and 77.25% (326/422), respectively. In terms of image recognition of EGC, the accuracy of models 2 to 5 was higher than that of model 1, and the differences were statistically significant (χ2=147.90, 149.67, 134.20 and 115.30, all P<0.01). The sensitivity and specificity of models 2 and 3 were higher than those of model 1, the specificity of model 2 was lower than that of model 3, and the differences were statistically significant (χ2=131.65, 64.15, 207.60, 262.03 and 96.73, all P<0.01).
The sensitivity of models 4 and 5 was higher than that of models 1 to 3, the specificity of models 4 and 5 was lower than that of models 1 to 3, and the differences were statistically significant (χ2=151.16, 165.49, 71.35, 112.47, 132.62, 153.14, 176.93, 74.62, 14.09, 15.47, 6.02 and 5.80, all P<0.05). The per-lesion video test showed that the average accuracy of doctors 1 to 4 was 68.16%, and the accuracy of models 1 to 5 was 69.47% (66/95), 69.47% (66/95), 70.53% (67/95), 76.84% (73/95) and 80.00% (76/95), respectively. There were no significant differences in accuracy among models 1 to 5, or between models 1 to 5 and doctors 1 to 4 (all P>0.05). Conclusions: The deep learning-based ME-BLI EGC recognition model has good accuracy, but its diagnostic efficacy is slightly worse than that of the ME-NBI model. An EGC recognition model combining ME-NBI and ME-BLI performs better than a single model. A more sensitive ME-NBI model can be obtained by increasing the number of ME-NBI images, especially EGC images, but at the cost of specificity.

11.
Chinese Journal of Digestive Endoscopy ; (12): 801-805, 2021.
Article in Chinese | WPRIM | ID: wpr-912176

ABSTRACT

Objective: To evaluate deep learning in improving the diagnostic rate of adenomatous and non-adenomatous polyps. Methods: Non-magnifying narrow-band imaging (NBI) polyp images obtained from the Endoscopy Center of Renmin Hospital of Wuhan University were divided into three datasets. Dataset 1 (2 699 adenomatous and 1 846 non-adenomatous non-magnifying NBI polyp images from January 2018 to October 2020) was used for model training and validation of the diagnosis system. Dataset 2 (288 adenomatous and 210 non-adenomatous non-magnifying NBI polyp images from January 2018 to October 2020) was used to compare the accuracy of polyp classification between the system and endoscopists. At the same time, the accuracy of 4 trainees in polyp classification with and without the assistance of the system was compared. Dataset 3 (203 adenomatous and 141 non-adenomatous non-magnifying NBI polyp images from November 2020 to January 2021) was used to test the system prospectively. Results: The accuracy of the system in polyp classification was 90.16% (449/498) in dataset 2, superior to that of the endoscopists. With the assistance of the system, the trainees' accuracy in colorectal polyp diagnosis was significantly improved. In the prospective test, the accuracy of the system was 89.53% (308/344). Conclusion: The colorectal polyp classification system based on deep learning can significantly improve the accuracy of trainees in polyp classification.

12.
Chinese Journal of Digestive Endoscopy ; (12): 783-788, 2021.
Article in Chinese | WPRIM | ID: wpr-912173

ABSTRACT

Objective: To assess the influence of an artificial intelligence (AI)-assisted diagnosis system on the performance of endoscopists in diagnosing gastric cancer by magnifying narrow-band imaging (M-NBI). Methods: M-NBI images of early gastric cancer (EGC) and non-gastric cancer from Renmin Hospital of Wuhan University from March 2017 to January 2020 and from public datasets were collected, among which 4 667 images (1 950 of EGC and 2 717 of non-gastric cancer) formed the training set and 1 539 images (483 of EGC and 1 056 of non-gastric cancer) composed the test set. The model was trained using deep learning techniques. One hundred M-NBI videos from Beijing Cancer Hospital and Renmin Hospital of Wuhan University between June 9, 2020 and November 17, 2020 were prospectively collected as a video test set, including 38 of gastric cancer and 62 of non-gastric cancer. Four endoscopists from four other hospitals participated in the study, diagnosing the video test set twice, with and without AI assistance. The influence of the system on the endoscopists' performance was assessed. Results: Without AI assistance, the accuracy, sensitivity and specificity of the endoscopists' diagnosis of gastric cancer were 81.00%±4.30%, 71.05%±9.67% and 87.10%±10.88%, respectively. With AI assistance, they were 86.50%±2.06%, 84.87%±11.07% and 87.50%±4.47%, respectively. The diagnostic accuracy (P=0.302) and sensitivity (P=0.180) of the endoscopists were numerically improved with AI assistance compared with those without. The accuracy, sensitivity and specificity of AI in identifying gastric cancer in the video test set were 88.00% (88/100), 97.37% (37/38) and 82.26% (51/62), respectively. The sensitivity of AI was higher than the endoscopists' average (P=0.002). Conclusion: The AI-assisted diagnosis system is an effective tool for assisting the diagnosis of gastric cancer under M-NBI and can improve the diagnostic ability of endoscopists. It can also alert endoscopists to high-risk areas in real time to reduce the probability of missed diagnosis.

13.
Chinese Journal of Digestive Endoscopy ; (12): 584-590, 2020.
Article in Chinese | WPRIM | ID: wpr-871425

ABSTRACT

Objective: To establish a deep convolutional neural network (DCNN) model based on the YOLO and ResNet algorithms for automatic detection of colorectal polyps and to test its performance. Methods: Colonoscopy images and videos collected from the database of the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2018 to March 2019 were divided into three databases (databases 1, 3 and 4). The public database CVC-ClinicDB (composed of 612 polyp images extracted from 29 colonoscopy videos provided by Barcelona Hospital, Spain) was used as database 2. Database 1 (4 700 colonoscopy images from January 2018 to November 2018, including 3 700 intestinal polyp images and 1 000 non-polyp images) was used for training and validating the DCNN model. Database 2 (CVC-ClinicDB) and database 3 (720 colonoscopy images from January 2019 to March 2019, including 320 intestinal polyp images and 400 non-polyp images) were used to test the DCNN model on image detection. Database 4 (15 colonoscopy videos from December 2019, containing 33 polyps) was used to test the DCNN model on video detection. The sensitivity, specificity, accuracy and false positive rate of the DCNN model for detecting intestinal polyps were calculated. Results: The sensitivity of the DCNN model for detecting intestinal polyps in database 2 was 93.19% (602/646). In database 3, the DCNN model showed an accuracy of 95.00% (684/720), sensitivity of 98.13% (314/320), specificity of 92.50% (370/400), and false positive rate of 7.50% (30/400) for detecting intestinal polyps. In database 4, the DCNN model achieved a per-polyp sensitivity of 100.00% (33/33), a per-image accuracy of 96.29% (133 840/138 998), a per-image sensitivity of 90.24% (4 066/4 506), a per-image specificity of 96.49% (129 774/134 492), and a per-image false positive rate of 3.51% (4 718/134 492). Conclusion: The DCNN model constructed in this study has high sensitivity and specificity for automatic detection of colorectal polyps in both colonoscopy images and videos, has a low false positive rate in videos, and has the potential to assist endoscopists in the diagnosis of colorectal polyps.

14.
Chinese Journal of Digestive Endoscopy ; (12): 476-480, 2020.
Article in Chinese | WPRIM | ID: wpr-871422

ABSTRACT

Objective: To construct an artificial intelligence-assisted diagnosis system to detect gastric ulcer lesions and to identify benign and malignant gastric ulcers automatically. Methods: A total of 1 885 endoscopy images were collected from November 2016 to April 2019 in the Digestive Endoscopy Center of Renmin Hospital of Wuhan University. Among them, 636 were normal images, 630 showed benign gastric ulcers, and 619 showed malignant gastric ulcers. A total of 1 735 images were used as the training set and 150 images for validation. These images were input into the ResNet-50 model based on the fastai framework, the ResNet-50 model based on the Keras framework, and the VGG-16 model based on the Keras framework, respectively. Three separate binary classification models were constructed: normal gastric mucosa vs benign ulcers, normal gastric mucosa vs malignant ulcers, and benign vs malignant ulcers. Results: The VGG-16 model showed the best classification performance. Its accuracy on the validation set was 98.0%, 98.0% and 85.0%, respectively, for distinguishing normal gastric mucosa from benign ulcers, normal gastric mucosa from malignant ulcers, and benign from malignant ulcers. Conclusion: The artificial intelligence-assisted diagnosis system obtained in this study shows noteworthy ability to detect ulcerative lesions, and is expected to be used clinically to assist doctors in detecting ulcers and distinguishing benign from malignant ulcers.
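The abstract names the frameworks and backbones but not the layer configuration. The sketch below shows what a Keras VGG-16 transfer-learning binary classifier of this kind typically looks like; the pooling layer, dense head, dropout rate and optimizer are illustrative choices, not the study's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_vgg16_binary_classifier(input_shape=(224, 224, 3)):
    """A VGG-16 transfer-learning binary classifier of the kind described
    above (e.g. benign vs malignant ulcer). Hyperparameters are illustrative."""
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=input_shape)
    base.trainable = False                      # freeze the convolutional base
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # benign (0) vs malignant (1)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


model = build_vgg16_binary_classifier()
model.summary()
```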

15.
Chinese Journal of Digestive Endoscopy ; (12): 125-130, 2020.
Article in Chinese | WPRIM | ID: wpr-871385

ABSTRACT

Objective: To construct a real-time monitoring system based on computer vision for monitoring the withdrawal speed of colonoscopy and to validate its feasibility and performance. Methods: A total of 35 938 images and 63 videos of colonoscopy were collected from the endoscopic database of Renmin Hospital of Wuhan University from May to October 2018. The images were divided into two datasets: one included in vitro, in vivo and unqualified colonoscopy images, and the other included ileocecal and non-cecal area images. Then 3 594 and 2 000 images were selected from the two datasets, respectively, for testing the deep learning model, and the remaining images were used to train the model. Three colonoscopy videos were selected to evaluate the feasibility of the real-time monitoring system, and 60 colonoscopy videos were used to evaluate its performance. Results: The accuracy of the deep learning model in classifying in vitro, in vivo and unqualified colonoscopy images was 90.79% (897/988), 99.92% (1 300/1 301) and 99.08% (1 293/1 305), respectively, and the overall accuracy was 97.11% (3 490/3 594). The accuracy in identifying ileocecal and non-cecal areas was 96.70% (967/1 000) and 94.90% (949/1 000), respectively, and the overall accuracy was 95.80% (1 916/2 000). In terms of feasibility, data from the 3 colonoscopy videos showed a linear relationship between withdrawal speed and image processing interval, indicating that the real-time monitoring system automatically monitored the withdrawal speed during colonoscope withdrawal. In terms of performance, the real-time monitoring system correctly predicted the entry time and withdrawal time of all 60 examinations, and withdrawal speed and withdrawal time were significantly negatively correlated (R=-0.661, P<0.001). The 95% confidence intervals of withdrawal speed for colonoscopies with withdrawal times of less than 5 min, 5-6 min, and more than 6 min were 43.90-49.74, 40.19-45.43, and 34.89-39.11, respectively. Therefore, 39.11 was set as the safe withdrawal speed and 45.43 as the alarm withdrawal speed. Conclusion: The real-time monitoring system can monitor the withdrawal speed of colonoscopy in real time and improve the quality of endoscopy.
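Two statistics in this abstract, the Pearson correlation between withdrawal speed and withdrawal time and the 95% confidence intervals of mean speed per time subgroup, are easy to reproduce with SciPy. The data below are simulated solely to make the sketch runnable; only the analysis steps mirror the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated per-procedure data: withdrawal time (min) and mean withdrawal
# speed (arbitrary units), negatively related as the abstract reports.
withdrawal_time = rng.uniform(3, 9, size=60)
withdrawal_speed = 60 / withdrawal_time + rng.normal(0, 1, size=60)

r, p = stats.pearsonr(withdrawal_speed, withdrawal_time)
print(f"Pearson r={r:.3f}, p={p:.3g}")          # expect a clear negative r

# 95% confidence interval of the mean speed for one subgroup
# (e.g. examinations with withdrawal time > 6 min).
subgroup = withdrawal_speed[withdrawal_time > 6]
ci = stats.t.interval(0.95, len(subgroup) - 1,
                      loc=subgroup.mean(), scale=stats.sem(subgroup))
print(f"95% CI of mean speed: {ci[0]:.2f}-{ci[1]:.2f}")
```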

16.
Chinese Journal of Digestive Endoscopy ; (12): 240-245, 2019.
Article in Chinese | WPRIM | ID: wpr-756250

ABSTRACT

Objective: To analyze the blind area monitoring and independent image acquisition function of gastroscopic elves (a real-time gastroscopic monitoring system) in gastroscopy. Methods: A total of 38 522 gastroscopic images from the database of the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were collected to train and validate the gastroscopic elves. Using computer-generated random numbers, 91 gastroscopic videos were selected to assess the position recognition accuracy of the gastroscopic elves, and 45 gastroscopic videos with matching gastroscopic images collected by endoscopists were selected to compare the coverage number and rate of gastroscopy sites between gastroscopic-elf and endoscopist image acquisition. Two endoscopists participated in the study, performing gastroscopies with or without the gastroscopic elves. Forty-five gastroscopies each performed by endoscopist A before and after adoption of the gastroscopic elves were collected, and 42 gastroscopies (20 and 22, respectively, in the same periods) performed by endoscopist B without the gastroscopic elves were also collected. The coverage rate of gastroscopy sites was compared between the two endoscopists. Results: The overall position recognition accuracy of the gastroscopic elves was 85.125% (1 156/1 358). The coverage rate of gastroscopy sites for endoscopist A was (76.790±8.848)% and (87.325±7.065)%, respectively, before and after using the gastroscopic elves, and the coverage rate in the same periods for endoscopist B was (75.926±11.565)% and (75.253±14.662)%, respectively. The coverage rate before using the gastroscopic elves showed no statistical difference between the two endoscopists (t=0.324, P=0.747). The coverage rate for endoscopist A after using the gastroscopic elves was higher than that before (t=6.222, P=0.001) and higher than that of endoscopist B in the same period (t'=3.588, P=0.002). The coverage number and rate of gastroscopy sites were 20.956±3.406 and (77.613±12.613)% for gastroscopic-elf image acquisition versus 15.467±2.296 and (57.284±8.503)% for endoscopist image acquisition, with statistically significant differences (t=11.523, P<0.001; t=11.523, P<0.001). Conclusion: Gastroscopic elves can improve the coverage number and rate of gastroscopy sites, and are worthy of promotion in clinical practice.

17.
Chinese Journal of Digestive Endoscopy ; (12): 611-614, 2018.
Article in Chinese | WPRIM | ID: wpr-711546

ABSTRACT

Objective: To investigate the safety and efficacy of endoscopic submucosal dissection (ESD) for early-stage colorectal cancer and precancerous lesions. Methods: Clinical data of 108 patients who underwent ESD for early-stage colorectal cancer and precancerous lesions from December 2016 to June 2017 in Renmin Hospital of Wuhan University were analyzed. Lesion characteristics, postoperative pathological features, intraoperative and postoperative complications, and postoperative follow-up outcomes were analyzed. Results: All 108 patients underwent ESD successfully, with a median operation time of 45 min. The rates of intraoperative perforation and postoperative delayed bleeding were 2.8% (3/108) and 2.8% (3/108), respectively. No postoperative delayed perforation occurred. Postoperative pathology showed 41 cases (38.0%) of tubular adenoma, 4 (3.7%) of villous adenoma, 39 (36.1%) of villous tubular adenoma [including 41 (38.0%) of low-grade intraepithelial neoplasia and 16 (14.8%) of high-grade intraepithelial neoplasia], 19 (17.6%) of adenocarcinoma, and 5 (4.6%) of other types. Among the 19 cases of adenocarcinoma, 11 were well differentiated, 5 moderately differentiated and 3 poorly differentiated. The complete resection rate was 100.0% and the en bloc resection rate was 92.3% (100/108). The mean follow-up time was 8.1 months, and no recurrence was found during this period. Conclusion: ESD is safe and effective in the treatment of early-stage colorectal lesions. It is important to improve preoperative assessment, strengthen surgical skills, analyze postoperative pathological features and follow up regularly to guarantee the treatment quality of ESD.
