Results 1 - 17 of 17
1.
BMC Gastroenterol ; 24(1): 10, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38166722

ABSTRACT

BACKGROUND: Double-balloon enteroscopy (DBE) is a standard method for diagnosing and treating small bowel disease. However, DBE may yield false-negative results due to oversight or inexperience. We aimed to develop a computer-aided diagnostic (CAD) system for the automatic detection and classification of small bowel abnormalities in DBE. DESIGN AND METHODS: A total of 5201 images were collected from Renmin Hospital of Wuhan University to construct a detection model for localizing lesions during DBE, and 3021 images were collected to construct a classification model for assigning lesions to four classes: protruding lesion, diverticulum, erosion & ulcer, and angioectasia. The performance of the two models was evaluated on 1318 normal images, 915 abnormal images, and 65 videos from independent patients and compared with that of 8 endoscopists. Expert consensus served as the reference standard. RESULTS: On the image test set, the detection model achieved a sensitivity of 92% (843/915) and an area under the curve (AUC) of 0.947, and the classification model achieved an accuracy of 86%. On the video test set, the accuracy of the system was significantly better than that of the endoscopists (85% vs. 77 ± 6%, p < 0.01); the system was superior to novices and comparable to experts. CONCLUSIONS: We established a real-time CAD system for detecting and classifying small bowel lesions in DBE with favourable performance. ENDOANGEL-DBE has the potential to help endoscopists, especially novices, in clinical practice and may reduce the miss rate of small bowel lesions.
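As an editorial illustration of the evaluation reported above, the sketch below (not the authors' code; the function names, the 0.5 decision threshold, and the data layout are assumptions) shows how per-image sensitivity and AUC for a binary detection model and accuracy for a four-class lesion classifier could be computed.

```python
# Hedged sketch: evaluation metrics for a lesion-detection model (sensitivity, AUC)
# and a four-class classification model (accuracy). Not the authors' code; the
# 0.5 threshold and variable names are assumptions for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

def evaluate_detection(scores, labels, threshold=0.5):
    """scores: predicted lesion probabilities; labels: 1 = abnormal image, 0 = normal."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    preds = (scores >= threshold).astype(int)
    tp = np.sum((preds == 1) & (labels == 1))
    fn = np.sum((preds == 0) & (labels == 1))
    sensitivity = tp / (tp + fn)          # e.g. 843/915 = 92% in the abstract
    auc = roc_auc_score(labels, scores)   # threshold-free ranking quality
    return sensitivity, auc

def evaluate_classification(true_classes, pred_classes):
    """Accuracy over the four lesion classes (protruding lesion, diverticulum,
    erosion & ulcer, angioectasia)."""
    return accuracy_score(true_classes, pred_classes)
```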


Subject(s)
Deep Learning , Intestinal Diseases , Humans , Double-Balloon Enteroscopy/methods , Intestine, Small/diagnostic imaging , Intestine, Small/pathology , Intestinal Diseases/diagnostic imaging , Abdomen/pathology , Endoscopy, Gastrointestinal/methods , Retrospective Studies
2.
BMC Med Educ ; 23(1): 525, 2023 Jul 22.
Article in English | MEDLINE | ID: mdl-37479971

ABSTRACT

BACKGROUND: In international medical student (IMS) programs in China, language barriers between IMSs and Chinese patients greatly reduce learning in clinical practice and pose major challenges to IMSs in their transition from preclinical to clinical training. This study aimed to investigate the role of bilingual simulated patients (B-SPs) in IMSs' learning of medical history collection in China. METHODS: Forty-eight fourth-year IMSs were enrolled in this study between October 2020 and January 2021. During training in medical history collection, students were randomly assigned to two groups trained with either B-SPs (B-SP group) or English-speaking SPs (E-SP group). All SPs in the Objective Structured Clinical Examination (OSCE) stations were trained at the Affiliated Hospital of Wuhan University. Clinical skills in medical history collection were assessed by instructors during the preclinical OSCE, the post-clinical OSCE, and clinical rotations. RESULTS: The scores of IMSs in each group were analyzed in terms of medical history collection, including the ability to effectively elicit information and key communication skills related to patient care. Our results indicated that IMSs in the B-SP group obtained scores similar to those of the E-SP group in preclinical training for history collection (67.3 ± 8.46 vs 67.69 ± 8.86, P < 0.05), while obtaining significantly greater score improvements between the pre- and post-OSCE (17.22 (95% CI 12.74 to 21.70) vs 10.84 (95% CI 3.53 to 18.15), P = 0.0007). CONCLUSION: B-SPs are more conducive to doctor-patient communication and improve IMSs' learning of medical history collection in China.


Subject(s)
Students, Medical , Humans , Educational Measurement/methods , Patient Simulation , Communication , Clinical Competence , China
3.
Clin Transl Gastroenterol ; 14(3): e00566, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36735539

ABSTRACT

INTRODUCTION: Constructing quality indicators that reflect deficiencies in colonoscopy operation is important for quality audit and feedback. We previously established a real-time withdrawal speed monitoring system to keep withdrawal speed below a safe limit. We aimed to explore the relationship between the proportion of overspeed frames (POF) during withdrawal and the adenoma detection rate (ADR) and to jointly analyze the influence of POF and withdrawal time on ADR, to evaluate the feasibility of POF combined with withdrawal time as a quality control indicator. METHODS: The POF was defined as the proportion of frames with instantaneous speed ≥44 in the whole colonoscopy video. First, we developed a system to measure the POF of withdrawal based on a perceptual hashing algorithm. Next, we retrospectively collected 1,804 colonoscopy videos to explore the relationship between POF and ADR. According to withdrawal time and POF cutoffs, we conducted a complementary analysis of the effects of POF and withdrawal time on ADR. RESULTS: There was an inverse correlation between the POF and ADR (Pearson correlation coefficient -0.836). When withdrawal time was >6 minutes, the ADR for POF ≤10% was significantly higher than that for POF >10% (25.30% vs 16.50%; odds ratio 0.463, 95% confidence interval 0.296-0.724, P < 0.01). When the POF was ≤10%, the ADR for withdrawal time >6 minutes was higher than that for withdrawal time ≤6 minutes (25.30% vs 21.14%; odds ratio 0.877, 95% confidence interval 0.667-1.153, P = 0.35). DISCUSSION: The POF was strongly correlated with ADR. The combined assessment of the POF and withdrawal time has profound significance for colonoscopy quality control.
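To make the POF metric concrete, here is a minimal sketch (an editorial illustration, not the authors' implementation). It assumes that per-frame "instantaneous speed" can be approximated by the perceptual-hash distance between consecutive frames and that the threshold of 44 is applied to that distance; the hash size and the exact speed definition used in the paper may differ.

```python
# Hedged sketch of the POF metric: the fraction of withdrawal frames whose
# instantaneous speed is >= a threshold (44 in the abstract). Speed is
# approximated here as the Hamming distance between perceptual hashes of
# consecutive frames; this mapping and the hash size are assumptions.
import imagehash  # pip install ImageHash

SPEED_THRESHOLD = 44  # threshold quoted in the abstract; units as defined by the authors

def proportion_of_overspeed_frames(frames):
    """frames: sequence of PIL.Image frames from the withdrawal phase."""
    hashes = [imagehash.phash(frame) for frame in frames]
    # Larger hash differences between consecutive frames imply faster camera movement.
    speeds = [hashes[i] - hashes[i - 1] for i in range(1, len(hashes))]
    overspeed = sum(1 for s in speeds if s >= SPEED_THRESHOLD)
    return overspeed / len(speeds) if speeds else 0.0
```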


Subject(s)
Adenoma , Colorectal Neoplasms , Humans , Colorectal Neoplasms/diagnosis , Retrospective Studies , Colonoscopy , Adenoma/diagnosis , Time Factors
4.
Surg Endosc ; 37(4): 2897-2907, 2023 04.
Article in English | MEDLINE | ID: mdl-36508008

ABSTRACT

BACKGROUND: Although histopathological evaluation after endoscopic submucosal dissection (ESD) is critical to assess the accuracy of endoscopic diagnosis, precise endoscopic-to-pathological evaluation remains challenging. We evaluated the effect of tissue marking dye (TMD)-targeted marking of post-ESD specimens, guided by magnifying endoscopy, on histopathological accuracy and endoscopic-to-histopathological reconstruction. STUDY DESIGN: A total of 81 specimens resected by ESD [43 without TMD marking (N-TMD group) and 38 with TMD-targeted marking of cancerous areas guided by post-procedural magnifying endoscopy on the resected specimens (TMD group)] between January 31, 2019, and January 31, 2022 at the Renmin Hospital of Wuhan University were included in the study. The baseline characteristics of patients, discrepancies between endoscopic and histopathological diagnosis, and the impact of TMD on histopathological diagnosis and reconstruction were analyzed. RESULTS: Discrepancies between endoscopic (pre-ESD) and histopathological (post-ESD) diagnosis were identified significantly more often in the TMD group (68.4% (26/38) for tumor areas, 26.3% (10/38) for tumor margins, and 26.3% (10/38) for tumor differentiation) than in the N-TMD group (p < 0.0001). Deeper sections were obtained for all TMD-marked resected lesions and for 27.9% (12/43) of lesions in the N-TMD group (p < 0.001). More pathological evaluations in the TMD group were changed from curative resection to non-curative resection than in the N-TMD group [6/38 (15.8%) vs 1/43 (2.3%), p < 0.0001]. TMD-targeted marking also improved the efficiency of histopathological reconstruction on pre-procedural endoscopic images and benefited endoscopist training. CONCLUSION: TMD-targeted labeling of resected specimens can improve the precision of endoscopic-to-pathological diagnosis and reconstruction through point-to-point marking and benefit endoscopist training.


Subject(s)
Endoscopic Mucosal Resection , Stomach Neoplasms , Humans , Endoscopic Mucosal Resection/methods , Stomach Neoplasms/surgery , Stomach Neoplasms/pathology , Controlled Before-After Studies , Endoscopy, Gastrointestinal/methods , Dissection/methods
5.
NPJ Digit Med ; 5(1): 183, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36536039

ABSTRACT

Bleeding risk factors for gastroesophageal varices (GEV) detected at endoscopy in cirrhotic patients determine the prophylactic treatment patients will undergo in the following 2 years. We propose a methodology for measuring these risk factors. We create an artificial intelligence system (ENDOANGEL-GEV) containing six models to segment GEV and to classify variceal grades (grades 1-3) and red color signs (RC, RC0-RC3). It also summarizes changes in these results by region in real time. ENDOANGEL-GEV is trained using 6034 images from 1156 cirrhotic patients across three hospitals (dataset 1) and validated on multicenter datasets with 11009 images from 141 videos (dataset 2) and in a prospective study recruiting 161 cirrhotic patients from Renmin Hospital of Wuhan University (dataset 3). In dataset 1, ENDOANGEL-GEV achieves intersection over union values of 0.8087 for segmenting esophageal varices and 0.8141 for gastric varices. In dataset 2, the system maintains fairly consistent accuracy across images from the three hospitals. In dataset 3, ENDOANGEL-GEV surpasses attending endoscopists in detecting RC of GEV and classifying grades (p < 0.001). When ranking patient risk in combination with the Child‒Pugh score, ENDOANGEL-GEV outperforms endoscopists for esophageal varices (p < 0.001) and shows comparable performance for gastric varices (p = 0.152). Compared with endoscopists, ENDOANGEL-GEV may help 12.31% (16/130) more patients receive the right intervention. We establish an interpretable system for the endoscopic diagnosis and risk stratification of GEV. It will assist in accurately detecting first-bleeding risk factors and expand the scope of quantitative measurement of diseases.
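For reference, intersection over union (IoU), the segmentation metric quoted above, can be computed as in the following sketch (an editorial illustration with assumed function and variable names, not ENDOANGEL-GEV code).

```python
# Hedged sketch: intersection over union (IoU) between a predicted variceal
# segmentation mask and the ground-truth mask. Names are illustrative.
import numpy as np

def intersection_over_union(pred_mask, gt_mask):
    """pred_mask, gt_mask: boolean arrays of the same shape (True = varix pixel)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # convention: two empty masks count as a perfect match
    return np.logical_and(pred, gt).sum() / union
```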

6.
Gastrointest Endosc ; 95(2): 269-280.e6, 2022 02.
Article in English | MEDLINE | ID: mdl-34547254

ABSTRACT

BACKGROUND AND AIMS: White-light endoscopy (WLE) is the pivotal tool for detecting gastric cancer at an early stage. However, skill among endoscopists varies greatly. Here, we aimed to develop a deep learning-based system named ENDOANGEL-LD (lesion detection) to assist in detecting all focal gastric lesions and predicting neoplasms by WLE. METHODS: Endoscopic images were retrospectively obtained from Renmin Hospital of Wuhan University (RHWU) for the development, validation, and internal testing of the system. Additional external tests were conducted in 5 other hospitals to evaluate robustness. Stored videos from RHWU were used to assess and compare the performance of ENDOANGEL-LD with that of experts. Prospective consecutive patients undergoing upper endoscopy were enrolled from May 6, 2021 to August 2, 2021 in RHWU to assess applicability in clinical practice. RESULTS: Over 10,000 patients undergoing upper endoscopy were enrolled in this study. The sensitivities were 96.9% and 95.6% for detecting gastric lesions and 92.9% and 91.7% for diagnosing neoplasms in internal and external patients, respectively. In 100 videos, ENDOANGEL-LD achieved sensitivity and negative predictive value for detecting gastric neoplasms superior to those of experts (100% vs 85.5% ± 3.4% [P = .003] and 100% vs 86.4% ± 2.8% [P = .002], respectively). In 2010 prospective consecutive patients, ENDOANGEL-LD achieved a sensitivity of 92.8% for detecting gastric lesions with 3.04 ± 3.04 false positives per gastroscopy and a sensitivity of 91.8% and specificity of 92.4% for diagnosing neoplasms. CONCLUSIONS: Our results show that ENDOANGEL-LD has great potential for assisting endoscopists in screening gastric lesions and suspicious neoplasms in clinical work. (Clinical trial registration number: ChiCTR2100045963.)


Subject(s)
Artificial Intelligence , Stomach Neoplasms , Gastroscopy/methods , Humans , Prospective Studies , Retrospective Studies , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology
7.
Gastrointest Endosc ; 95(1): 92-104.e3, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34245752

ABSTRACT

BACKGROUND AND AIMS: We aimed to develop and validate a deep learning-based system that covers various aspects of early gastric cancer (EGC) diagnosis, including detecting gastric neoplasms, identifying EGC, and predicting EGC invasion depth and differentiation status. Herein, we provide a state-of-the-art comparison of the system with endoscopists using real-time videos in a nationwide human-machine competition. METHODS: This multicenter, prospective, real-time, competitive comparative, diagnostic study enrolled consecutive patients who received magnifying narrow-band imaging endoscopy at the Peking University Cancer Hospital from June 9, 2020 to November 17, 2020. The offline competition was conducted in Wuhan, China, where the endoscopists and the system simultaneously read patients' videos and made diagnoses. The primary outcomes were sensitivity in detecting neoplasms and diagnosing EGCs. RESULTS: One hundred videos, including 37 EGCs and 63 noncancerous lesions, were included; 46 endoscopists from 44 hospitals in 19 provinces of China participated in the competition. The sensitivity rates of the system for detecting neoplasms and diagnosing EGCs were 87.81% and 100%, respectively, significantly higher than those of endoscopists (83.51% [95% confidence interval [CI], 81.23-85.79] and 87.13% [95% CI, 83.75-90.51], respectively). Accuracy rates of the system for predicting EGC invasion depth and differentiation status were 78.57% and 71.43%, respectively, slightly higher than those of endoscopists (63.75% [95% CI, 61.12-66.39] and 64.41% [95% CI, 60.65-68.16], respectively). CONCLUSIONS: The system outperformed endoscopists in identifying EGCs and was comparable with endoscopists in predicting EGC invasion depth and differentiation status in videos. This deep learning-based system could be a powerful tool to assist endoscopists in EGC diagnosis in clinical practice.


Subject(s)
Deep Learning , Stomach Neoplasms , Endoscopy, Gastrointestinal , Humans , Narrow Band Imaging , Prospective Studies , Stomach Neoplasms/diagnostic imaging
8.
Gastrointest Endosc ; 95(4): 671-678.e4, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34896101

ABSTRACT

BACKGROUND AND AIMS: Endoscopy is a pivotal method for detecting early gastric cancer (EGC). However, skill among endoscopists varies greatly. Here, we proposed a deep learning-based system named ENDOANGEL-ME to diagnose EGC in magnifying image-enhanced endoscopy (M-IEE). METHODS: M-IEE images were retrospectively obtained from 6 hospitals in China, including 4667 images for training and validation, 1324 images for internal tests, and 4702 images for external tests. One hundred eighty-seven stored videos from 2 hospitals were used to evaluate the performance of ENDOANGEL-ME and endoscopists and to assess the effect of ENDOANGEL-ME on improving the performance of endoscopists. Prospective consecutive patients undergoing M-IEE were enrolled from August 17, 2020 to August 2, 2021 in Renmin Hospital of Wuhan University to assess the applicability of ENDOANGEL-ME in clinical practice. RESULTS: A total of 3099 patients undergoing M-IEE were enrolled in this study. The diagnostic accuracy of ENDOANGEL-ME for diagnosing EGC was 88.44% and 90.49% in internal and external images, respectively. In 93 internal videos, ENDOANGEL-ME achieved an accuracy of 90.32% for diagnosing EGC, significantly superior to that of senior endoscopists (70.16% ± 8.78%). In 94 external videos, with the assistance of ENDOANGEL-ME, endoscopists showed improved accuracy and sensitivity (85.64% vs 80.32% and 82.03% vs 67.19%, respectively). In 194 prospective consecutive patients with 251 lesions, ENDOANGEL-ME achieved a sensitivity of 92.59% (25/27) and an accuracy of 83.67% (210/251) in real clinical practice. CONCLUSIONS: This multicenter diagnostic study showed that ENDOANGEL-ME can be well applied in the clinical setting. (Clinical trial registration number: ChiCTR2000035116.).


Subject(s)
Stomach Neoplasms , Artificial Intelligence , Endoscopy, Gastrointestinal , Humans , Narrow Band Imaging/methods , Prospective Studies , Retrospective Studies , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology
9.
Endoscopy ; 53(12): 1199-1207, 2021 12.
Article in English | MEDLINE | ID: mdl-33429441

ABSTRACT

BACKGROUND: Esophagogastroduodenoscopy (EGD) is a prerequisite for detecting upper gastrointestinal lesions, especially early gastric cancer (EGC). An artificial intelligence system has been shown to monitor blind spots during EGD. In this study, we updated the system (ENDOANGEL), verified its effectiveness in improving endoscopy quality, and pretested its performance in detecting EGC in a multicenter randomized controlled trial. METHODS: ENDOANGEL was developed using deep convolutional neural networks and deep reinforcement learning. Patients undergoing EGD in five hospitals were randomly assigned to the ENDOANGEL-assisted group or to a control group without use of ENDOANGEL. The primary outcome was the number of blind spots. Secondary outcomes included the performance of ENDOANGEL in predicting EGC in a clinical setting. RESULTS: 1050 patients were randomized, and 498 and 504 patients in the ENDOANGEL and control groups, respectively, were analyzed. Compared with the control group, the ENDOANGEL group had fewer blind spots (mean 5.38 [standard deviation (SD) 4.32] vs. 9.82 [SD 4.98]; P < 0.001) and longer inspection time (5.40 [SD 3.82] vs. 4.38 [SD 3.91] minutes; P < 0.001). In the ENDOANGEL group, 196 gastric lesions with pathological results were identified. ENDOANGEL correctly predicted all three EGCs (one mucosal carcinoma and two high-grade neoplasias) and two advanced gastric cancers, with a per-lesion accuracy of 84.7%, sensitivity of 100%, and specificity of 84.3% for detecting gastric cancer. CONCLUSIONS: In this multicenter study, ENDOANGEL was an effective and robust system to improve the quality of EGD and has the potential to detect EGC in real time.


Subject(s)
Stomach Neoplasms , Artificial Intelligence , Early Detection of Cancer , Endoscopy, Gastrointestinal , Humans , Neural Networks, Computer
10.
Dig Dis Sci ; 66(10): 3578-3587, 2021 10.
Article in English | MEDLINE | ID: mdl-33180244

ABSTRACT

BACKGROUND: Early detection is critical in limiting the spread of the 2019 novel coronavirus disease (COVID-19). Although previous data revealed the characteristics of GI symptoms in COVID-19, for patients presenting with only GI symptoms at onset, the diagnostic process and potential transmission risk are still unclear. METHODS: We retrospectively reviewed 205 COVID-19 cases from January 16 to March 30, 2020, in Renmin Hospital of Wuhan University. All patients were confirmed by viral nucleic acid tests. The clinical features and laboratory and chest computed tomography (CT) data were recorded and analyzed. RESULTS: A total of 171 patients with classic symptoms (group A) and 34 patients with only GI symptoms (group B) were included. In patients with classical COVID-19 symptoms, GI symptoms occurred more frequently in severe cases compared to non-severe cases (20/43 vs. 91/128, respectively, p < 0.05). In group B, 91.2% (31/34) of patients were non-severe, while 73.5% (25/34) had obvious infiltrates on their first CT scans. Compared to group A, group B patients had a longer time to presentation at clinic services (5.0 days vs. 2.6 days, p < 0.01) and a longer time to a positive viral swab normalized to the time of admission (6.9 days vs. 3.3 days, respectively, p < 0.01). Two patients in group B had family clusters of SARS-CoV-2 infection. CONCLUSION: Patients with only GI symptoms of COVID-19 may take longer to present to healthcare services and receive a confirmed diagnosis. In areas where infection is rampant, physicians must remain vigilant for patients presenting with acute gastrointestinal symptoms and should use appropriate personal protective equipment.


Subject(s)
COVID-19/epidemiology , Gastrointestinal Diseases/epidemiology , Adult , Aged , COVID-19/diagnosis , COVID-19/virology , China/epidemiology , Female , Gastrointestinal Diseases/diagnosis , Gastrointestinal Diseases/virology , Humans , Male , Middle Aged , Retrospective Studies , Young Adult
12.
Lancet Gastroenterol Hepatol ; 5(4): 352-361, 2020 04.
Article in English | MEDLINE | ID: mdl-31981518

ABSTRACT

BACKGROUND: Colonoscopy performance varies among endoscopists, impairing the discovery of colorectal cancers and precursor lesions. We aimed to construct a real-time quality improvement system (ENDOANGEL) to monitor real-time withdrawal speed and colonoscopy withdrawal time and to remind endoscopists of blind spots caused by endoscope slipping. We also aimed to evaluate the effectiveness of this system for improving adenoma yield of everyday colonoscopy. METHODS: The ENDOANGEL system was developed using deep neural networks and perceptual hash algorithms. We recruited consecutive patients aged 18-75 years from Renmin Hospital of Wuhan University in China who provided written informed consent. We randomly assigned patients (1:1) using computer-generated random numbers and block randomisation (block size of four) to either colonoscopy with the ENDOANGEL system or unassisted colonoscopy (control). Endoscopists were not masked to the random assignment but analysts and patients were unaware of random assignments. The primary endpoint was the adenoma detection rate (ADR), which is the proportion of patients having one or more adenomas detected at colonoscopy. The primary analysis was done per protocol (ie, in all patients having colonoscopy done in accordance with the assigned intervention) and by intention to treat (ie, in all randomised patients). This trial is registered with http://www.chictr.org.cn, ChiCTR1900021984. FINDINGS: Between June 18, 2019, and Sept 6, 2019, 704 patients were randomly allocated colonoscopy with the ENDOANGEL system (n=355) or unassisted (control) colonoscopy (n=349). In the intention-to-treat population, ADR was significantly greater in the ENDOANGEL group than in the control group, with 58 (16%) of 355 patients allocated ENDOANGEL-assisted colonoscopy having one or more adenomas detected, compared with 27 (8%) of 349 allocated control colonoscopy (odds ratio [OR] 2·30, 95% CI 1·40-3·77; p=0·0010). In the per-protocol analysis, findings were similar, with 54 (17%) of 324 patients assigned ENDOANGEL-assisted colonoscopy and 26 (8%) of 318 patients assigned control colonoscopy having one or more adenomas detected (OR 2·18, 95% CI 1·31-3·62; p=0·0026). No adverse events were reported. INTERPRETATION: The ENDOANGEL system significantly improved the adenoma yield during colonoscopy and seems to be effective and safe for use during routine colonoscopy. FUNDING: Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Hubei Province Major Science and Technology Innovation Project, and the National Natural Science Foundation of China.
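As an illustration of the allocation scheme described in the methods (1:1 assignment using computer-generated random numbers with a block size of four), here is a minimal sketch; it is an editorial example under those stated assumptions, not the trial's actual randomisation code.

```python
# Hedged sketch: 1:1 block randomisation with a block size of four, as described
# in the abstract. Group labels, seeding, and the interface are illustrative.
import random

def block_randomise(n_patients, block_size=4, seed=None):
    """Return one group label ('ENDOANGEL' or 'control') per patient."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_patients:
        block = ['ENDOANGEL'] * (block_size // 2) + ['control'] * (block_size // 2)
        rng.shuffle(block)            # each block holds two of each arm, in random order
        allocation.extend(block)
    return allocation[:n_patients]

# e.g. allocating the 704 randomised patients mentioned in the abstract:
groups = block_randomise(704, seed=2019)
```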


Subject(s)
Adenoma/diagnostic imaging , Colonic Polyps/pathology , Colonoscopy/instrumentation , Diagnosis, Computer-Assisted/methods , Adult , Algorithms , Case-Control Studies , China/epidemiology , Colonoscopy/methods , Early Diagnosis , Female , Humans , Intention to Treat Analysis/statistics & numerical data , Male , Middle Aged , Neural Networks, Computer , Optic Disk , Single-Blind Method
13.
Gastrointest Endosc ; 91(2): 332-339.e3, 2020 02.
Article in English | MEDLINE | ID: mdl-31541626

ABSTRACT

BACKGROUND AND AIMS: EGD is the most vital procedure for the diagnosis of upper GI lesions. We aimed to compare the performance of unsedated ultrathin transoral endoscopy (U-TOE), unsedated conventional EGD (C-EGD), and sedated C-EGD with or without the use of an artificial intelligence (AI) system. METHODS: In this prospective, single-blind, 3-parallel-group, randomized, single-center trial, 437 patients scheduled to undergo outpatient EGD were randomized to unsedated U-TOE, unsedated C-EGD, or sedated C-EGD, and each group was then divided into 2 subgroups: with or without the assistance of an AI system to monitor blind spots during EGD. The primary outcome was the blind spot rate of these 3 groups with the assistance of AI. The secondary outcomes were the blind spot rates of unsedated U-TOE, unsedated C-EGD, and sedated C-EGD with or without AI assistance, and the concordance between the AI system and the endoscopists' review. RESULTS: The blind spot rate with AI-assisted sedated C-EGD was significantly lower than that of unsedated U-TOE and unsedated C-EGD (3.42% vs 21.77% vs 31.23%, respectively; P < .05). The blind spot rate of the AI subgroup was lower than that of the control subgroup in all 3 groups (sedated C-EGD: 3.42% vs 22.46%, P < .001; unsedated U-TOE: 21.77% vs 29.92%, P < .001; unsedated C-EGD: 31.23% vs 42.46%, P < .001). CONCLUSIONS: The blind spot rate of sedated C-EGD was the lowest among the 3 types of EGD, and the addition of AI had the greatest effect on sedated C-EGD. (Clinical trial registration number: ChiCTR1900020920.)


Subject(s)
Artificial Intelligence , Conscious Sedation/methods , Gastroscopes , Gastroscopy/methods , Image Processing, Computer-Assisted , Adult , Aged , Anxiety , Endoscopy, Digestive System/methods , Female , Humans , Male , Middle Aged , Pain, Procedural , Prospective Studies , Single-Blind Method
14.
Gastrointest Endosc ; 91(2): 428-435.e2, 2020 02.
Article in English | MEDLINE | ID: mdl-31783029

ABSTRACT

BACKGROUND AND AIMS: The quality of bowel preparation is an important factor that can affect the effectiveness of a colonoscopy. Several tools, such as the Boston Bowel Preparation Scale (BBPS) and the Ottawa Bowel Preparation Scale, have been developed to evaluate bowel preparation. However, understanding the differences between evaluation methods and applying them consistently can be challenging for endoscopists, and there are subjective biases and differences among endoscopists. Therefore, this study aimed to develop a novel, objective, and stable method for the assessment of bowel preparation through artificial intelligence. METHODS: We used a deep convolutional neural network to develop this novel system. First, we retrospectively collected colonoscopy images to train the system and then compared its performance with endoscopists via a human-machine contest. We then applied the model to colonoscopy videos and developed a system named ENDOANGEL to provide bowel preparation scores every 30 seconds and to show the cumulative ratio of frames for each score during the withdrawal phase of the colonoscopy. RESULTS: ENDOANGEL achieved 93.33% accuracy in the human-machine contest with 120 images, better than that of all endoscopists. Moreover, ENDOANGEL achieved 80.00% accuracy on 100 images with bubbles. In 20 colonoscopy videos, accuracy was 89.04%, and ENDOANGEL continuously displayed the cumulative percentage of frames for each BBPS score during the withdrawal phase and provided bowel preparation scores every 30 seconds. CONCLUSIONS: We provide a novel and more accurate evaluation method for bowel preparation and an objective and stable system, ENDOANGEL, that can be applied reliably and steadily in clinical settings.
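The per-interval scoring and cumulative frame ratios described above could be summarized as in the sketch below (an editorial illustration: the frame rate, the per-window aggregation rule, and all names are assumptions, and the frame-level BBPS classifier itself is not shown).

```python
# Hedged sketch: given frame-level BBPS scores (0-3) predicted during withdrawal,
# report a score every 30 seconds and the cumulative ratio of frames per score.
# Frame rate, window aggregation (majority vote), and names are assumptions.
from collections import Counter

def summarise_bowel_prep(frame_scores, fps=25, interval_s=30):
    """frame_scores: list of per-frame BBPS scores (0-3) from the withdrawal phase."""
    window = fps * interval_s
    interval_scores = [
        Counter(frame_scores[i:i + window]).most_common(1)[0][0]  # majority score per window
        for i in range(0, len(frame_scores), window)
    ]
    counts = Counter(frame_scores)
    cumulative_ratio = {s: counts.get(s, 0) / len(frame_scores) for s in range(4)}
    return interval_scores, cumulative_ratio
```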


Subject(s)
Colon/pathology , Colonoscopy/methods , Deep Learning , Image Processing, Computer-Assisted/methods , Preoperative Care , Rectum/pathology , Artificial Intelligence , Humans , Neural Networks, Computer , Reproducibility of Results
15.
Gut ; 68(12): 2161-2169, 2019 12.
Article in English | MEDLINE | ID: mdl-30858305

ABSTRACT

OBJECTIVE: Esophagogastroduodenoscopy (EGD) is the pivotal procedure in the diagnosis of upper gastrointestinal lesions. However, there are significant variations in EGD performance among endoscopists, impairing the discovery rate of gastric cancers and precursor lesions. The aim of this study was to construct a real-time quality-improving system, WISENSE, to monitor blind spots, time the procedure, and automatically generate photodocumentation during EGD and thus raise the quality of everyday endoscopy. DESIGN: The WISENSE system was developed using deep convolutional neural networks and deep reinforcement learning. Patients referred for health examination, symptoms, or surveillance were recruited from Renmin Hospital of Wuhan University. Enrolled patients were randomly assigned to groups that underwent EGD with or without the assistance of WISENSE. The primary end point was to ascertain whether there was a difference in the rate of blind spots between the WISENSE-assisted group and the control group. RESULTS: WISENSE monitored blind spots with an accuracy of 90.40% in real EGD videos. A total of 324 patients were recruited and randomised; 153 and 150 patients were analysed in the WISENSE and control groups, respectively. The blind spot rate was lower in the WISENSE group than in the control group (5.86% vs 22.46%, p<0.001), with a mean difference of -15.39% (95% CI -19.23 to -11.54). There were no significant adverse events. CONCLUSIONS: WISENSE significantly reduced the blind spot rate of the EGD procedure and could be used to improve the quality of everyday endoscopy. TRIAL REGISTRATION NUMBER: ChiCTR1800014809; Results.


Subject(s)
Endoscopy, Digestive System/standards , Gastrointestinal Diseases/diagnosis , Monitoring, Physiologic/standards , Quality Improvement , Upper Gastrointestinal Tract/diagnostic imaging , Female , Humans , Male , Middle Aged , Monitoring, Physiologic/instrumentation , Prospective Studies , Single-Blind Method , Time Factors
16.
Endoscopy ; 51(6): 522-531, 2019 06.
Article in English | MEDLINE | ID: mdl-30861533

ABSTRACT

BACKGROUND: Gastric cancer is the third most lethal malignancy worldwide. Deep convolutional neural networks (DCNNs), which perform visual tasks, have recently been developed. The aim of this study was to build a DCNN-based system to detect early gastric cancer (EGC) without blind spots during esophagogastroduodenoscopy (EGD). METHODS: 3170 gastric cancer images and 5981 benign images were collected to train the DCNN to detect EGC. A total of 24549 images from different parts of the stomach were collected to train the DCNN to monitor blind spots. Class activation maps were developed to automatically highlight suspicious cancerous regions. A grid model of the stomach was used to indicate the existence of blind spots in unprocessed EGD videos. RESULTS: The DCNN distinguished EGC from non-malignancy with an accuracy of 92.5%, a sensitivity of 94.0%, a specificity of 91.0%, a positive predictive value of 91.3%, and a negative predictive value of 93.8%, outperforming all levels of endoscopists. In the task of classifying gastric locations into 10 or 26 parts, the DCNN achieved accuracies of 90% and 65.9%, respectively, on a par with the performance of experts. In real-time unprocessed EGD videos, the DCNN performed automated detection of EGC and blind spot monitoring. CONCLUSIONS: We developed a DCNN-based system that detects EGC and recognizes gastric locations more accurately than endoscopists, proactively tracks suspicious cancerous lesions, and monitors blind spots during EGD.
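For readers unfamiliar with class activation maps, the following sketch shows the classic formulation for a CNN ending in global average pooling and a linear classifier; it is an editorial illustration with assumed tensor shapes and names, not the authors' implementation.

```python
# Hedged sketch: a classic class activation map (CAM) that highlights the image
# regions driving the "EGC" prediction. Assumes a CNN ending in global average
# pooling followed by a linear classifier; shapes and names are illustrative.
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weights, class_idx, out_size):
    """
    feature_maps: (C, H, W) activations from the last convolutional layer.
    fc_weights:   (num_classes, C) weights of the final linear layer.
    out_size:     (height, width) of the original endoscopic frame.
    """
    weights = fc_weights[class_idx]                                # (C,)
    cam = torch.einsum('c,chw->hw', weights, feature_maps)         # weighted sum of maps
    cam = torch.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # normalise to [0, 1]
    cam = F.interpolate(cam[None, None], size=out_size,
                        mode='bilinear', align_corners=False)[0, 0]
    return cam  # heatmap that can be overlaid on the frame to flag suspicious regions
```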


Subject(s)
Early Detection of Cancer , Gastroscopy , Neural Networks, Computer , Stomach Neoplasms/diagnosis , Clinical Competence , Diagnosis, Differential , Humans , Observer Variation , Sensitivity and Specificity
17.
J Transl Med ; 17(1): 92, 2019 03 18.
Article in English | MEDLINE | ID: mdl-30885234

ABSTRACT

BACKGROUND: Identifying intestinal node-negative gastric adenocarcinoma (INGA) patients at high risk of recurrence could help gauge the benefit of adjuvant therapy for INGA patients following surgical resection. This study evaluated whether computer-extracted image features of nuclear shape, texture, orientation, and tumor architecture on digital images of hematoxylin and eosin-stained tissue could help predict recurrence in INGA patients. METHODS: A tissue microarray (TMA) cohort of 160 retrospective INGA cases was digitally scanned and randomly divided into a training cohort (D1 = 60) and validation cohorts (D2 = 100 and D3 = 100; D2 and D3 are different tumor TMA spots from the same patients), accompanied by an immunohistochemistry data cohort (D3' = 100, a duplicate cohort of D3) and a negative control cohort (D5 = 100, normal adjacent tissues). After nuclear segmentation with a watershed-based method, 189 local nuclear features were captured from each TMA core, and the top 5 features were selected by the Wilcoxon rank sum test within D1. A morphometric image classifier (NGAHIC) was constructed from the discriminative features and used to predict recurrence in INGA on D2. Intra-tumor heterogeneity was assessed on D3. Manual nuclear atypia grading was conducted on D1 and D2 by two pathologists. The expression of HER2 and Ki67 was detected by immunohistochemistry on D3 and D3', respectively. The association between manual grading and INGA outcome was analyzed. RESULTS: Independent validation showed that the NGAHIC achieved an AUC of 0.76 for recurrence prediction. NGAHIC-positive patients had poorer overall survival (P = 0.017) by univariate survival analysis. Multivariate survival analysis controlling for T-stage, histology stage, and invasion depth demonstrated that NGAHIC positivity was a reproducible prognostic factor for poorer disease-specific survival (HR = 17.24, 95% CI 3.93-75.60, P < 0.001). In contrast, human grading was prognostic for only one reader on D2. Moreover, significant correlations were observed between NGAHIC positivity and both HER2 positivity and the Ki67 labeling index. CONCLUSIONS: The NGAHIC could support precision oncology and personalized cancer management.
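As an illustration of the feature-selection step described above (choosing the top 5 of 189 nuclear features by Wilcoxon rank sum test in D1), here is a minimal sketch; the data layout, names, and the absence of multiple-testing correction are assumptions, and it is not the authors' code.

```python
# Hedged sketch: rank 189 nuclear morphometric features by Wilcoxon rank-sum
# p-value between recurrence and non-recurrence cases, and keep the top five.
# Data layout and names are illustrative; no multiple-testing correction shown.
import numpy as np
from scipy.stats import ranksums

def select_top_features(features, recurrence, k=5):
    """
    features:   (n_cases, n_features) array, e.g. (60, 189) for training cohort D1.
    recurrence: (n_cases,) binary array, 1 = recurrence observed.
    """
    features = np.asarray(features, dtype=float)
    recurred = np.asarray(recurrence).astype(bool)
    pvalues = [ranksums(features[recurred, j], features[~recurred, j]).pvalue
               for j in range(features.shape[1])]
    return np.argsort(pvalues)[:k]  # column indices of the k most discriminative features
```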


Subject(s)
Cell Nucleus Shape , Image Processing, Computer-Assisted , Lymph Nodes/pathology , Neoplasm Recurrence, Local/pathology , Stomach Neoplasms/diagnostic imaging , Stomach Neoplasms/pathology , Algorithms , Cell Nucleus/pathology , Female , Humans , Male , Middle Aged , Multivariate Analysis , Prognosis , Reproducibility of Results , Survival Analysis