Results 1 - 20 of 59
1.
Article in Chinese | WPRIM | ID: wpr-1029603

ABSTRACT

Objective:To evaluate the effect of an automated flexible endoscope channel brushing system (AFECBS) on endoscope reprocessing.Methods:A prospective randomized controlled study was conducted. Used endoscopes were divided into an automatic group and a manual group by the random number table method, with 200 in each group. In the automatic group, the AFECBS brushed each channel 3 times during endoscope cleaning; in the manual group, reprocessing personnel manually brushed each channel 3 times as routine. The primary endpoint was the qualified rate of endoscope cleaning quality in the two groups, and the secondary endpoint was the time the reprocessing personnel spent on each group.Results:The overall qualified rate of cleaning was 90.0% (180/200) in the automatic group and 81.0% (162/200) in the manual group, and the rate in the automatic group was significantly higher (χ2=6.534, P=0.011). The qualified rate of gastroscope cleaning in the automatic group was also higher than that in the manual group [92.0% (127/138) VS 81.6% (120/147), χ2=6.658, P=0.010], while there was no significant difference in the qualified rate of colonoscope cleaning between the two groups [85.5% (53/62) VS 79.2% (42/53), χ2=0.774, P=0.379]. When cleaning 5 endoscopes in each group, the automatic group took less time (5.17±0.42 min) than the manual group (9.60±0.53 min) (t=92.644, P<0.001).Conclusion:Compared with manual brushing, the AFECBS can improve both the qualified rate of endoscope cleaning and the work efficiency of reprocessing personnel, and is worthy of clinical application.

2.
Article in Chinese | WPRIM | ID: wpr-995366

ABSTRACT

Objective:To construct an artificial intelligence-assisted diagnosis system to recognize the endoscopic features of Helicobacter pylori (HP) infection, and to evaluate its performance in real clinical cases.Methods:A total of 1 033 cases who underwent 13C-urea breath test and gastroscopy at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to March 2021 were collected retrospectively. Patients with positive 13C-urea breath test results (defined as HP infection) were assigned to the case group (n=485), and those with negative results to the control group (n=548). Gastroscopic images of various mucosal features indicating HP-positive and HP-negative status, as well as gastroscopic images of HP-positive and HP-negative cases, were randomly assigned to the training set, validation set and test set at a ratio of 8∶1∶1. An artificial intelligence-assisted diagnosis system for identifying HP infection was developed based on a convolutional neural network (CNN) and a long short-term memory network (LSTM). In the system, the CNN identifies and extracts mucosal features from each patient's endoscopic images and generates feature vectors, and the LSTM then receives these feature vectors to comprehensively judge HP infection status. The diagnostic performance of the system was evaluated by sensitivity, specificity, accuracy and area under the receiver operating characteristic curve (AUC).Results:The diagnostic accuracy of the system for nodularity, atrophy, intestinal metaplasia, xanthoma, diffuse redness + spotty redness, mucosal swelling + enlarged folds + sticky mucus, and HP-negative features was 87.5% (14/16), 74.1% (83/112), 90.0% (45/50), 88.0% (22/25), 63.3% (38/60), 80.1% (238/297) and 85.7% (36/42), respectively. The sensitivity, specificity, accuracy and AUC of the system for predicting HP infection were 89.6% (43/48), 61.8% (34/55), 74.8% (77/103) and 0.757, respectively. The diagnostic accuracy of the system was equivalent to that of endoscopists diagnosing HP infection under white light (74.8% VS 72.1%, χ2=0.246, P=0.620).Conclusion:The system developed in this study shows noteworthy ability in evaluating HP status, and can assist endoscopists in diagnosing HP infection.
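The sensitivity, specificity and accuracy figures reported in abstracts like the one above follow the standard confusion-matrix definitions. A minimal sketch, using the counts published in this abstract (43/48 positives and 34/55 negatives correctly classified), reproduces the reported percentages:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard confusion-matrix metrics, returned as percentages."""
    sensitivity = tp / (tp + fn) * 100           # true-positive rate
    specificity = tn / (tn + fp) * 100           # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp) * 100
    return sensitivity, specificity, accuracy

# Counts from the abstract: 43 of 48 HP-positive and 34 of 55 HP-negative
# test cases were classified correctly.
sens, spec, acc = diagnostic_metrics(tp=43, fn=5, tn=34, fp=21)
print(round(sens, 1), round(spec, 1), round(acc, 1))  # 89.6 61.8 74.8
```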

3.
Article in Chinese | WPRIM | ID: wpr-995376

ABSTRACT

Objective:To analyze the cost-effectiveness of a relatively mature artificial intelligence (AI)-assisted diagnosis and treatment system for gastrointestinal endoscopy (ENDOANGEL) in China, and to provide objective and effective data support for hospital acquisition decisions.Methods:The numbers of gastrointestinal endoscopy procedures performed at the Endoscopy Center of Renmin Hospital of Wuhan University from January 2017 to December 2019 were collected to predict the number of procedures over the expected service life (10 years) of ENDOANGEL. The net present value, payback period and average rate of return were used to analyze the cost-effectiveness of ENDOANGEL.Results:The net present value of an ENDOANGEL over its expected service life (10 years) was 6 724 100 yuan, the payback period was 1.10 years, and the average rate of return reached 147.84%.Conclusion:ENDOANGEL shows significant economic benefits, and it is reasonable for hospitals to acquire mature AI-assisted diagnosis and treatment systems for gastrointestinal endoscopy.
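The three indicators above are standard capital-budgeting measures. The abstract does not publish the underlying cash flows, so the figures in this sketch are purely hypothetical; only the formulas (discounted inflows minus outlay, years to recover the outlay, and mean annual inflow over the investment) reflect the stated methods:

```python
def npv(initial_cost, cash_flows, rate):
    """Net present value: discounted yearly inflows minus the initial outlay."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1)) - initial_cost

def payback_period(initial_cost, cash_flows):
    """Years until cumulative (undiscounted) inflows recover the outlay."""
    remaining = initial_cost
    for year, cf in enumerate(cash_flows, start=1):
        if remaining <= cf:
            return year - 1 + remaining / cf
        remaining -= cf
    return None  # not recovered within the horizon

def average_rate_of_return(initial_cost, cash_flows):
    """Mean annual net inflow as a fraction of the initial investment."""
    return sum(cash_flows) / len(cash_flows) / initial_cost

# Hypothetical figures, not the study's data: outlay 100, inflow 60/year.
flows = [60.0, 60.0, 60.0]
print(round(npv(100.0, flows, 0.10), 2))       # 49.21
print(round(payback_period(100.0, flows), 2))  # 1.67
```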

4.
Article in Chinese | WPRIM | ID: wpr-995384

ABSTRACT

Objective:To assess the diagnostic efficacy of an artificial intelligence (AI)-based upper gastrointestinal endoscopic image-assisted diagnosis system (ENDOANGEL-LD) for detecting gastric lesions and neoplastic lesions under white light endoscopy.Methods:The diagnostic efficacy of ENDOANGEL-LD was tested on an image test dataset and a video test dataset. The image test dataset included 300 images of gastric neoplastic lesions, 505 images of non-neoplastic lesions and 990 images of normal stomach from 191 patients in Renmin Hospital of Wuhan University from June 2019 to September 2019. The video test dataset comprised 83 videos (38 gastric neoplastic lesions and 45 non-neoplastic lesions) of 78 patients in Renmin Hospital of Wuhan University from November 2020 to April 2021. The accuracy, sensitivity and specificity of ENDOANGEL-LD on the image test dataset were calculated, and its accuracy, sensitivity and specificity for gastric neoplastic lesions on the video test dataset were compared with those of four senior endoscopists.Results:On the image test dataset, the accuracy, sensitivity and specificity of ENDOANGEL-LD for gastric lesions were 93.9% (1 685/1 795), 98.0% (789/805) and 90.5% (896/990), respectively, while for gastric neoplastic lesions they were 88.7% (714/805), 91.0% (273/300) and 87.3% (441/505), respectively. On the video test dataset, the sensitivity of ENDOANGEL-LD was higher than that of the four senior endoscopists [100.0% (38/38) VS 85.5% (130/152), χ2=6.220, P=0.013], while its accuracy [81.9% (68/83) VS 72.0% (239/332), χ2=3.408, P=0.065] and specificity [66.7% (30/45) VS 60.6% (109/180), χ2=0.569, P=0.451] were comparable with those of the endoscopists.Conclusion:ENDOANGEL-LD can accurately detect gastric lesions and further diagnose neoplastic lesions, helping endoscopists in clinical work.
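The between-group comparisons in these abstracts use Pearson's chi-square test on a 2×2 table. A minimal sketch (without continuity correction, which is what reproduces the published statistic) recovers the sensitivity comparison above, 38/38 VS 130/152:

```python
def pearson_chi2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Sensitivity comparison from the abstract: AI 38 hits / 0 misses,
# pooled endoscopists 130 hits / 22 misses.
print(round(pearson_chi2(38, 0, 130, 22), 3))  # 6.22
```

This matches the reported χ2=6.220, which confirms the test variant used.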

5.
Article in Chinese | WPRIM | ID: wpr-995393

ABSTRACT

Objective:To construct a real-time artificial intelligence (AI)-assisted endoscopic diagnosis system based on the YOLO v3 algorithm, and to evaluate its ability to detect focal gastric lesions in gastroscopy.Methods:A total of 5 488 white light gastroscopic images (2 733 with focal gastric lesions and 2 755 without) from June to November 2019 and videos of 92 cases (288 168 clear stomach frames) from May to June 2020 at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were retrospectively collected to test the AI system. A total of 3 997 consecutive patients undergoing gastroscopy at the same center from July 6, 2020 to November 27, 2020 and from May 6, 2021 to August 2, 2021 were prospectively enrolled to assess the clinical applicability of the AI system. When the AI system recognized an abnormal lesion, it marked the lesion with a blue box as a warning. Its ability to identify focal gastric lesions and the frequency and causes of false positives and false negatives were statistically analyzed.Results:On the image test set, the accuracy, sensitivity, specificity, positive predictive value and negative predictive value of the AI system were 92.3% (5 064/5 488), 95.0% (2 597/2 733), 89.5% (2 467/2 755), 90.0% (2 597/2 885) and 94.8% (2 467/2 603), respectively. On the video test set, they were 95.4% (274 792/288 168), 95.2% (109 727/115 287), 95.5% (165 065/172 881), 93.4% (109 727/117 543) and 96.7% (165 065/170 625), respectively. In clinical application, the detection rate of focal gastric lesions by the AI system was 93.0% (6 830/7 344). A total of 514 focal gastric lesions were missed, mainly punctate erosions (48.8%, 251/514), diminutive xanthomas (22.8%, 117/514) and diminutive polyps (21.4%, 110/514). The median number of false positives per gastroscopy was 2 (1, 4), mostly due to normal mucosal folds (50.2%, 5 635/11 225), bubbles and mucus (35.0%, 3 928/11 225), and liquid deposited in the fundus (9.1%, 1 021/11 225).Conclusion:The application of the AI system can increase the detection rate of focal gastric lesions.

6.
Article in Chinese | WPRIM | ID: wpr-995410

ABSTRACT

Objective:To evaluate deep learning for differentiating the invasion depth of colorectal adenomas under image-enhanced endoscopy (IEE).Methods:A total of 13 246 IEE images of 3 714 lesions acquired from November 2016 to June 2021 were retrospectively collected from Renmin Hospital of Wuhan University, Shenzhen Hospital of Southern Medical University and the First Hospital of Yichang to construct a deep learning model differentiating submucosal deep invasion from non-submucosal deep invasion in colorectal adenomas. The performance of the deep learning model was validated on an independent test set and an external test set. The full test set was used to compare the diagnostic performance of 5 endoscopists and the deep learning model. A total of 35 videos were collected from January to June 2021 in Renmin Hospital of Wuhan University to validate the diagnostic performance of the endoscopists with the assistance of the deep learning model.Results:The accuracy and Youden index of the deep learning model on the image test set were 93.08% (821/882) and 0.86, better than those of the endoscopists [the highest being 91.72% (809/882) and 0.78]. On the video test set, the accuracy and Youden index of the model were 97.14% (34/35) and 0.94. With the assistance of the model, the accuracy of the endoscopists improved significantly [the highest being 97.14% (34/35)].Conclusion:The deep learning model obtained in this study could accurately identify deep submucosal invasion in colorectal adenomas, and could improve the diagnostic accuracy of endoscopists.
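The Youden index reported above is simply sensitivity + specificity − 1, ranging from 0 (no better than chance) to 1 (perfect). The abstract does not list the underlying sensitivity and specificity, so the 0.95/0.91 pair below is a hypothetical input consistent with the reported J = 0.86:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: sensitivity + specificity - 1."""
    return sensitivity + specificity - 1

# Hypothetical sensitivity/specificity pair yielding the reported J = 0.86.
print(round(youden_index(0.95, 0.91), 2))  # 0.86
```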

7.
Article in Chinese | WPRIM | ID: wpr-1021100

ABSTRACT

Background:Slight mucosal lesions in the early stage of gastric cancer (GC) are difficult to recognize, and the miss rate of early GC under conventional endoscopy is high. Artificial intelligence (AI) systems can assist in identifying gastric neoplastic lesions and reduce the miss rate, but whether AI-assisted endoscopic screening is cost-effective remains unclear.Aims:To evaluate the cost-effectiveness of population-based endoscopic screening programs for GC in high-incidence countries (China, Japan and South Korea), and to explore the applicability of a domestic AI system, the intelligent real-time endoscopy analytical device (IREAD), for assisted-endoscopy GC screening in these three countries.Methods:Based on the natural history of GC, a Markov model with a cycle length of 1 year was constructed to compare the cost-effectiveness of three strategies for GC screening in the recommended age group: no screening (the control strategy), conventional endoscopic screening, and IREAD-assisted endoscopic screening. Data such as transition probabilities between states and treatment costs were obtained from previously published studies. The cost-effectiveness analysis was conducted from the societal perspective by calculating cost, quality-adjusted life years (QALY) and the incremental cost-effectiveness ratio (ICER).Results:The cohort results showed that 15.87% and 24.52% of GC-related deaths could be averted by conventional endoscopic screening and IREAD-assisted endoscopic screening in China, respectively, with similar screening effects in Japan. In South Korea, conventional endoscopic screening and IREAD-assisted endoscopic screening averted 41.34% and 53.15% of GC-related deaths, respectively. Of the two screening strategies, IREAD-assisted endoscopic screening was more economical, with ICERs of $34 827.61/QALY, $87 978.71/QALY and $10 574.30/QALY in China, Japan and South Korea, respectively, all below the willingness-to-pay (WTP) threshold.Conclusions:When the WTP threshold is 3 times the per-capita gross domestic product, AI-assisted endoscopy for GC screening in age-specific populations in high-incidence countries may be more cost-effective. This study also provides important evidence for promoting domestic IREAD-assisted endoscopy in GC screening in China, Japan and South Korea.
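A Markov cohort model of the kind described above advances a population distribution over health states one cycle (here, one year) at a time using a transition matrix. The transition probabilities below are hypothetical illustrations, not the values used in the study:

```python
# Hypothetical annual transition probabilities for a three-state model
# (healthy, gastric cancer, dead); each row sums to 1.
TRANSITIONS = {
    "healthy": {"healthy": 0.97, "cancer": 0.02, "dead": 0.01},
    "cancer":  {"healthy": 0.00, "cancer": 0.85, "dead": 0.15},
    "dead":    {"healthy": 0.00, "cancer": 0.00, "dead": 1.00},
}

def cycle(distribution):
    """Advance the cohort distribution over states by one cycle (one year)."""
    new = {state: 0.0 for state in distribution}
    for src, share in distribution.items():
        for dst, p in TRANSITIONS[src].items():
            new[dst] += share * p
    return new

cohort = {"healthy": 1.0, "cancer": 0.0, "dead": 0.0}
cohort = cycle(cohort)
print(cohort)  # {'healthy': 0.97, 'cancer': 0.02, 'dead': 0.01}
```

In a full analysis, per-cycle costs and QALY weights are accumulated over the state distribution at each cycle, and the ICER is the cost difference between strategies divided by the QALY difference.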

8.
Article in Chinese | WPRIM | ID: wpr-1029560

ABSTRACT

Objective:To explore the effectiveness of an artificial intelligence-endoscopic ultrasound (AI-EUS) biliary and pancreatic recognition system in assisting the recognition of EUS images.Methods:Subjects who underwent EUS for suspected biliary and pancreatic diseases from December 2019 to August 2020 were prospectively collected from the database of the Department of Gastroenterology, Renmin Hospital of Wuhan University. Pancreatic EUS images of 28 subjects were included for recognition of pancreatic standard stations, and bile duct EUS images of 29 subjects for recognition of bile duct standard stations. Eight new endoscopists from the Department of Gastroenterology of Renmin Hospital of Wuhan University read the 57 EUS videos with and without the assistance of the AI-EUS biliary and pancreatic recognition system, and their accuracy in identifying biliary and pancreatic standard stations under the two conditions was compared.Results:With the assistance of AI-EUS, the new endoscopists' accuracy of pancreatic standard station identification increased from 67.2% (903/1 344) to 78.4% (1 054/1 344), and their accuracy of bile duct standard station identification increased from 56.4% (523/928) to 73.8% (685/928).Conclusion:The AI-EUS biliary and pancreatic recognition system can improve the accuracy of EUS image recognition for the biliary and pancreatic system, and can assist diagnosis in clinical work.

9.
Article in Chinese | WPRIM | ID: wpr-1029585

ABSTRACT

Objective:To compare the cost-effectiveness before and after using an artificial intelligence gastroscopy-assisted system for early gastric cancer screening.Methods:Gastroscopy cases before (non-AI group) and after (AI group) the use of the artificial intelligence gastroscopy-assisted system were retrospectively collected in Renmin Hospital of Wuhan University from January 1, 2017 to February 28, 2022, and the proportion of early gastric cancer among all gastric cancer cases was analyzed. Costs were estimated based on the standards of Renmin Hospital of Wuhan University and the 2021 edition of the Wuhan Disease Diagnosis-Related Group Payment Standards. Cost-effectiveness analysis was conducted per 100 thousand cases with and without the system, and the incremental cost-effectiveness ratio was calculated.Results:In the non-AI group, the proportion of early gastric cancer among all gastric cancer cases was 28.81% (70/243). Per 100 thousand cases, the cost of gastroscopy screening was 54 598.0 thousand yuan and the early gastric cancer treatment cost was 221.8 thousand yuan, for a total cost of 54 819.8 thousand yuan; the direct effectiveness was 894.2 thousand yuan, the indirect effectiveness was 1 828.2 thousand yuan, and the total effectiveness was 2 722.4 thousand yuan. In the AI group, the early gastric cancer diagnostic rate was 36.56% (366/1 001); per 100 thousand cases, the gastroscopy cost was 53 440.0 thousand yuan, the early gastric cancer treatment cost was 315.8 thousand yuan, and the total cost was 53 755.8 thousand yuan; the direct effectiveness was 1 273.5 thousand yuan, the indirect effectiveness was 2 603.1 thousand yuan, and the total effectiveness was 3 876.6 thousand yuan. The use of the system reduced the cost of early gastric cancer screening by 1 064.0 thousand yuan and increased the benefit by 1 154.2 thousand yuan per 100 thousand cases. The incremental cost-effectiveness ratio was -0.92.Conclusion:The use of the artificial intelligence gastroscopy-assisted system for early gastric cancer screening can reduce medical costs and improve screening efficiency, and it is recommended for gastroscopy screening.
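The incremental cost-effectiveness ratio above is the cost difference divided by the effectiveness difference; with the abstract's per-100-thousand figures (cost down 1 064.0 thousand yuan, effectiveness up 1 154.2 thousand yuan), the reported -0.92 is reproduced:

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effectiveness (negative when the new strategy both saves money and
    gains effectiveness)."""
    return delta_cost / delta_effect

# Figures from the abstract, in thousand yuan per 100 thousand cases.
print(round(icer(-1064.0, 1154.2), 2))  # -0.92
```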

10.
Article in Chinese | WPRIM | ID: wpr-934086

ABSTRACT

Objective:To evaluate an intelligent gastrointestinal endoscopy quality control system in gastroscopy.Methods:Fourteen endoscopists from Renmin Hospital of Wuhan University were assigned to a quality-control group or a control group by the random number table. In the pre-quality-control stage (April 20, 2019 to May 31, 2019), data of gastroscopies performed by the enrolled endoscopists were collected. In the training stage (June 1 to 30, 2019), the quality-control group was trained in quality control knowledge and in the use of the intelligent gastrointestinal endoscopy quality control system, while the control group was trained only in quality control knowledge. In the post-quality-control stage (July 1, 2019 to August 20, 2019), a quality report with review and feedback was submitted weekly to the endoscopists in the quality-control group, while the control group received no quality control report; gastroscopies performed by the enrolled endoscopists during this period were also collected. Changes in the precancerous lesion detection rates of the two groups were compared.Results:Seven endoscopists were assigned to each group. A total of 3 446 gastroscopies were included in the pre-quality-control stage (n=1 651, including 753 cases in the quality-control group and 898 cases in the control group) and post-quality-control stage (n=1 795, including 892 cases in the quality-control group and 903 cases in the control group). The detection rate of precancerous lesions in the quality-control group increased by 3.6% [3.3% (29/892) VS 6.9% (52/753), χ2=11.65, P<0.01], while that of the control group increased by 0.4% [3.3% (30/903) VS 3.7% (33/898), χ2=0.17, P=0.684].Conclusion:The intelligent gastrointestinal endoscopy quality control system with review and feedback could monitor and improve the quality of gastroscopy.

11.
Article in Chinese | WPRIM | ID: wpr-934107

ABSTRACT

Objective:To construct a deep learning-based artificial intelligence endoscopic ultrasound (EUS) bile duct scanning substation system to help endoscopists learn multi-station imaging and improve their operation skills.Methods:A total of 522 EUS videos from Renmin Hospital of Wuhan University and Wuhan Union Hospital from May 2016 to October 2020 were collected, and images were captured from these videos, including 3 000 white light images and 31 003 EUS images from Renmin Hospital of Wuhan University, and 799 EUS images from Wuhan Union Hospital. The images were divided into a training set and a test set for the EUS bile duct scanning system. The system comprised a filtering model for white light gastroscopy images (model 1), a model distinguishing standard station images from non-standard station images (model 2), and a substation model for standard EUS bile duct scanning images (model 3), which classified the standard images into the liver window, stomach window, duodenal bulb window, and duodenal descending window. Then 110 images were randomly selected from the test set for a man-machine competition comparing the accuracy of multi-station imaging by experts, senior endoscopists and the artificial intelligence model.Results:The accuracies of model 1 and model 2 were 100.00% (1 200/1 200) and 93.36% (2 938/3 147), respectively. The accuracies of model 3 on the internal validation dataset were 97.23% (1 687/1 735) for the liver window, 96.89% (1 681/1 735) for the stomach window, 98.73% (1 713/1 735) for the duodenal bulb window, and 97.18% (1 686/1 735) for the duodenal descending window; those on the external validation dataset were 89.61% (716/799), 92.74% (741/799), 90.11% (720/799), and 92.24% (737/799), respectively. In the man-machine competition, the accuracy of the substation model was 89.09% (98/110), higher than that of the senior endoscopists [85.45% (94/110), 74.55% (82/110), and 85.45% (94/110)] and close to the level of the experts [92.73% (102/110) and 90.00% (99/110)].Conclusion:The deep learning-based EUS bile duct scanning system constructed in this study can assist endoscopists in performing standard multi-station scanning in real time more accurately and improve the completeness and quality of EUS.

12.
Chinese Journal of Digestion; (12): 42-49, 2022.
Article in Chinese | WPRIM | ID: wpr-934133

ABSTRACT

Objective:To analyze the expression of circular RNA circ_0008274 in cetuximab-resistant colorectal cancer cells using bioinformatics technology, and to explore its involvement in the development of cetuximab resistance.Methods:Five concentrations of cetuximab (10, 50, 100, 150 and 200 nmol/L) were set. Cetuximab-resistant cells DiFi-R and Caco-2-R were established from colorectal cancer cells DiFi and Caco-2 by stepwise concentration increase. The expression of circ_0008274 in DiFi-R and Caco-2-R cells was detected by reverse transcription-polymerase chain reaction (RT-PCR). The interaction and regulation between circ_0008274 and microRNA (miR)-140-3p were analyzed by dual-luciferase reporter assay. The cetuximab resistance-related, highly expressed gene SMARCC1 was identified by Western blotting. Circ_0008274 in DiFi-R and Caco-2-R cells was knocked down by transfection with small interfering RNA si-circ_0008274, and the resulting differences in colony formation and cell proliferation were compared. MiR-140-3p mimic and blank control miR were transfected into DiFi-R and Caco-2-R cells, and the difference in cell proliferation between the two transfections was analyzed. After circ_0008274 knockdown in Caco-2-R cells, the changes in SMARCC1 protein expression rescued by pcDNA3.1 SMARCC1 and in cell viability were analyzed. Tumor specimens of 15 colorectal cancer patients hospitalized in Renmin Hospital of Wuhan University from March 2019 to August 2020 were included; according to treatment effect, the patients were divided into a sensitive group (11 cases) and a drug-resistant group (4 cases). The relative expression levels of circ_0008274, its downstream gene SMARCC1 and miR-140-3p in the colorectal cancer tissues of the two groups were detected by RT-PCR. Independent sample t test was used for statistical analysis.
Results:The level of circ_0008274 in DiFi-R cells was (2.33±0.12) times that of DiFi cells, and the level in Caco-2-R cells was (2.92±0.42) times that of Caco-2 cells, and the differences were statistically significant (t=19.97 and 7.80, both P<0.05). The dual-luciferase reporter results showed that after miR-140-3p mimic bound wild-type circ_0008274, the relative fluorescence intensity was lower than before (0.28±0.04 vs. 1.00±0.00), and the difference was statistically significant (t=-30.71, P=0.001). The expression of SMARCC1 protein in DiFi-R and Caco-2-R cells was significantly increased compared with that in DiFi and Caco-2 cells (2.22±0.36 vs. 0.61±0.17, 0.85±0.11 vs. 0.35±0.08), and the differences were statistically significant (t=6.23 and 6.32, both P<0.01). After circ_0008274 knockdown, the numbers of colonies formed by DiFi-R and Caco-2-R cells were both lower than before knockdown (36.67±4.04 vs. 66.00±9.54, 17.35±4.04 vs. 52.33±8.02), and the relative viable cell ratios after intervention with 10, 50, 100, 150 and 200 nmol/L cetuximab were also lower than before knockdown (DiFi-R cells: (73.75±2.75)% vs. (88.10±2.48)%, (56.50±6.66)% vs. (75.15±6.03)%, (35.75±5.32)% vs. (59.63±6.67)%, (24.25±3.30)% vs. (52.40±6.71)%, (6.25±2.75)% vs. (48.60±5.38)%; Caco-2-R cells: (63.74±5.25)% vs. (85.76±4.79)%, (56.50±4.20)% vs. (83.50±3.90)%, (46.00±2.94)% vs. (80.00±6.05)%, (35.30±5.56)% vs. (68.30±4.57)%, (12.25±7.37)% vs. (62.40±7.51)%), and the differences were statistically significant (t=4.90, 6.71, -7.75, -4.16, -5.60, -7.53, -14.02, -6.19, -8.33, -10.10, -9.17 and -9.56, all P<0.01). After transfection with miR-140-3p mimic, the relative viable cell ratios of DiFi-R and Caco-2-R cells under intervention with 10, 50, 100, 150 and 200 nmol/L cetuximab were both lower than those transfected with blank control miR (DiFi-R cells: (71.55±4.97)% vs. (85.90±2.66)%, (51.58±3.91)% vs. (74.95±6.35)%, (41.23±8.84)% vs. (58.43±7.05)%, (28.60±5.26)% vs. (53.75±5.65)%, (18.90±5.13)% vs. (51.30±3.30)%; Caco-2-R cells: (61.75±2.22)% vs. (90.10±1.41)%, (53.25±4.17)% vs. (86.18±2.69)%, (46.38±4.55)% vs. (77.75±6.70)%, (36.10±8.76)% vs. (70.15±4.18)%, (24.25±2.63)% vs. (65.10±7.62)%), and the differences were statistically significant (t=-5.09, -6.47, -3.05, -6.28, -10.30, -21.48, -12.83, -8.01, -6.79 and -10.12, all P<0.01). After circ_0008274 knockdown, the SMARCC1 protein level of Caco-2-R cells rescued by pcDNA3.1 SMARCC1 was higher than before rescue (0.63±0.19 vs. 0.09±0.03), and the relative viable cell ratios after intervention with 10, 50, 100, 150 and 200 nmol/L cetuximab were also higher than before rescue ((93.10±3.56)% vs. (83.83±3.97)%, (83.28±4.26)% vs. (60.90±7.02)%, (61.83±2.12)% vs. (50.10±5.59)%, (53.20±3.74)% vs. (40.50±3.42)%, (46.20±4.08)% vs. (30.80±4.82)%), and the differences were statistically significant (t=3.55, 3.52, 5.44, 3.87, 4.64 and 4.88, all P<0.01). The relative expression levels of circ_0008274 and its downstream gene SMARCC1 in the colorectal cancer tissues of the drug-resistant group were higher than those in the sensitive group (6.45±1.32 vs. 2.26±1.39, 12.53±1.60 vs. 3.82±1.56), and the relative expression level of miR-140-3p was lower than that in the sensitive group (3.91±1.25 vs. 7.43±2.23); the differences were statistically significant (t=5.22, 9.51 and -2.93, all P<0.01).Conclusions:Circular RNA circ_0008274 is highly expressed in colorectal cancer tissues and cetuximab-resistant cells; it interacts with and inhibits miR-140-3p, up-regulates SMARCC1, and participates in the development of cetuximab resistance. Rescue with pcDNA3.1 SMARCC1 blocks the sensitization effect of si-circ_0008274 to cetuximab and significantly increases the cetuximab resistance of colorectal cancer cells.

13.
Article in Chinese | WPRIM | ID: wpr-958290

ABSTRACT

Objective:To evaluate the impact of an artificial intelligence (AI) system on the diagnosis rate of precancerous states of gastric cancer.Methods:A single-center self-controlled study was conducted, with factors such as the mainframe and model of the endoscope, operating doctor, season and climate controlled, and pathology taken as the gold standard. The diagnosis rates of precancerous states of gastric cancer, including atrophic gastritis (AG) and intestinal metaplasia (IM), under traditional gastroscopy (September 1, 2019 to November 30, 2019) and AI-assisted endoscopy (September 1, 2020 to November 15, 2020) in the Eighth Hospital of Wuhan were statistically analyzed and compared, with subgroup analysis by the seniority of doctors.Results:Compared with traditional gastroscopy, the AI system significantly improved the diagnosis rate of AG [13.3% (38/286) VS 7.4% (24/323), χ2=5.689, P=0.017] and IM [33.9% (97/286) VS 26.0% (84/323), χ2=4.544, P=0.033]. For junior doctors (less than 5 years of endoscopic experience), the AI system had a more pronounced effect on the diagnosis rate of AG [11.9% (22/185) VS 5.8% (11/189), χ2=4.284, P=0.038] and IM [30.3% (56/185) VS 20.6% (39/189), χ2=4.580, P=0.032]. For senior doctors (more than 10 years of endoscopic experience), the diagnosis rates of AG and IM increased slightly, but the differences were not statistically significant.Conclusion:The AI system shows the potential to improve the diagnosis rate of precancerous states of gastric cancer, especially for junior endoscopists, and to reduce missed diagnoses of early gastric cancer.

14.
Article in Chinese | WPRIM | ID: wpr-958309

ABSTRACT

Objective:To evaluate the Kyoto gastritis score for diagnosing Helicobacter pylori (HP) infection in the Chinese population.Methods:A total of 902 cases who underwent 13C-urea breath test and gastroscopy at the same time at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University from January 2020 to December 2020 were studied retrospectively, including 345 HP-positive and 557 HP-negative cases. The differences in mucosal features and Kyoto gastritis score between HP-positive and HP-negative patients were analyzed, and a receiver operating characteristic curve was plotted for predicting HP infection by the Kyoto gastritis score.Results:Compared with HP-negative patients, nodules [8.1% (28/345) VS 0.2% (1/557), χ2=86.29, P<0.001], diffuse redness [47.8% (165/345) VS 6.6% (37/557), χ2=413.63, P<0.001], atrophy [27.8% (96/345) VS 13.8% (77/557), χ2=52.90, P<0.001] and fold enlargement [69.0% (238/345) VS 36.6% (204/557), χ2=175.38, P<0.001] occurred more frequently in HP-positive patients. For predicting HP infection, nodules showed the highest specificity [99.8% (556/557)] and positive predictive value [96.6% (28/29)], diffuse redness the largest area under the receiver operating characteristic curve (AUC, 0.707), and fold enlargement the highest sensitivity [69.0% (238/345)] and negative predictive value [76.7% (353/460)]. The Kyoto gastritis score of HP-positive patients was higher than that of HP-negative patients [2 (1, 2) VS 0 (0, 1), Z=20.82, P<0.001]. At the optimal threshold of 2, the AUC of the Kyoto gastritis score for predicting HP infection was 0.779.Conclusion:Nodules, diffuse redness, atrophy and fold enlargement under gastroscopy suggest HP infection, and a Kyoto gastritis score ≥2 is a sufficient reference for diagnosing HP positivity.
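The Kyoto gastritis score sums points for endoscopic findings and the abstract above applies a cutoff of ≥2. A minimal sketch of such a screen follows; the component ranges (atrophy 0-2, intestinal metaplasia 0-2, enlarged folds 0-1, nodularity 0-1, diffuse redness 0-2) are taken from commonly described versions of the Kyoto classification and should be treated as assumptions to verify against the classification itself:

```python
# Assumed per-component maxima of the Kyoto gastritis score (verify
# against the Kyoto classification before relying on them).
MAX_POINTS = {"atrophy": 2, "intestinal_metaplasia": 2,
              "enlarged_folds": 1, "nodularity": 1, "diffuse_redness": 2}

def kyoto_score(findings):
    """Sum the component points, clamping each to its assumed maximum."""
    return sum(min(findings.get(name, 0), cap)
               for name, cap in MAX_POINTS.items())

def predicts_hp_infection(findings, threshold=2):
    """Apply the cutoff (>= 2) the abstract reports as optimal."""
    return kyoto_score(findings) >= threshold

print(predicts_hp_infection({"diffuse_redness": 2}))  # True
print(predicts_hp_infection({"enlarged_folds": 1}))   # False
```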

15.
Chinese Journal of Digestion; (12): 433-438, 2022.
Article in Chinese | WPRIM | ID: wpr-958330

ABSTRACT

Objective:To compare the ability of deep convolutional neural network-crop (DCNN-C) and deep convolutional neural network-whole (DCNN-W), 2 artificial intelligence systems based on different training methods to dignose early gastric cancer (EGC) diagnosis under magnifying image-enhanced endoscopy (M-IEE).Methods:The images and video clips of EGC and non-cancerous lesions under M-IEE under narrow band imaging or blue laser imaging mode were retrospectively collected in the Endoscopy Center of Renmin Hospital of Wuhan University, for the training set and test set for DCNN-C and DCNN-W. The ability of DCNN-C and DCNN-W in EGC identity in image test set were compared. The ability of DCNN-C, DCNN-W and 3 senior endoscopists (average performance) in EGC identity in video test set were also compared. Paired Chi-squared test and Chi-squared test were used for statistical analysis. Inter-observer agreement was expressed as Cohen′s Kappa statistical coefficient (Kappa value).Results:In the image test set, the accuracy, sensitivity, specificity and positive predictive value of DCNN-C in EGC diagnosis were 94.97%(1 133/1 193), 97.12% (202/208), 94.52% (931/985), and 78.91%(202/256), respectively, which were higher than those of DCNN-W(86.84%, 1 036/1 193; 92.79%, 193/208; 85.58%, 843/985 and 57.61%, 193/335), and the differences were statistically significant ( χ2=4.82, 4.63, 61.04 and 29.69, P=0.028, =0.035, <0.001 and <0.001). In the video test set, the accuracy, specificity and positive predictive value of senior endoscopists in EGC diagnosis were 67.67%, 60.42%, and 53.37%, respectively, which were lower than those of DCNN-C (93.00%, 92.19% and 87.18%), and the differences were statistically significant ( χ2=20.83, 16.41 and 11.61, P<0.001, <0.001 and =0.001). 
The accuracy, specificity and positive predictive value of DCNN-C in EGC diagnosis were higher than those of DCNN-W (79.00%, 70.31% and 64.15%, respectively), and the differences were statistically significant ( χ2=7.04, 8.45 and 6.18, P=0.007, 0.003 and 0.013). There were no significant differences in accuracy, specificity and positive predictive value between senior endoscopists and DCNN-W in EGC diagnosis (all P>0.05). The sensitivity of senior endoscopists, DCNN-W and DCNN-C in EGC diagnosis were 80.56%, 94.44%, and 94.44%, respectively, and the differences were not statistically significant (all P>0.05). The results of the agreement analysis showed that the agreement between senior endoscopists and the gold standard was fair to moderate (Kappa=0.259, 0.532, 0.329), the agreement between DCNN-W and the gold standard was moderate (Kappa=0.587), and the agreement between DCNN-C and the gold standard was very high (Kappa=0.851). Conclusion:When the training set is the same, the ability of DCNN-C in EGC diagnosis is better than that of DCNN-W and senior endoscopists, and the diagnostic level of DCNN-W is equivalent to that of senior endoscopists.
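The agreement figures reported above are Cohen's kappa values, which correct raw agreement for the agreement expected by chance. A minimal sketch of the statistic, using hypothetical binary cancer/non-cancer calls rather than the study's data:

```python
# Sketch: Cohen's kappa between one rater (a model or an endoscopist) and the
# gold standard for a binary call. Labels below are hypothetical.

def cohens_kappa(rater, gold):
    assert len(rater) == len(gold)
    n = len(gold)
    labels = set(rater) | set(gold)
    # Observed agreement: fraction of cases where the rater matches the gold standard
    p_obs = sum(r == g for r, g in zip(rater, gold)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies
    p_exp = sum((rater.count(c) / n) * (gold.count(c) / n) for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

rater = [1, 1, 0, 0, 1, 0, 1, 0]  # hypothetical cancer(1)/non-cancer(0) calls
gold  = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(rater, gold))  # 6/8 observed agreement, 0.5 expected -> 0.5
```

On the conventional interpretation scale, the reported Kappa=0.851 for DCNN-C indicates near-perfect agreement with the gold standard, while the endoscopists' 0.259-0.532 spans fair to moderate agreement.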

16.
Chinese Journal of Digestion ; (12): 464-469, 2022.
Article in Chinese | WPRIM | ID: wpr-958335

ABSTRACT

Objective:To construct a deep learning-based diagnostic system for gastrointestinal submucosal tumors (SMT) under endoscopic ultrasonography (EUS) to help endoscopists diagnose SMT.Methods:From January 1, 2019 to December 15, 2021, 245 patients with pathologically confirmed SMT who underwent EUS and endoscopic submucosal dissection at the Digestive Endoscopy Center of Renmin Hospital of Wuhan University were enrolled, and a total of 3 400 EUS images were collected. Of these, 2 722 EUS images were used to train the lesion segmentation model and 2 209 EUS images were used to train the stromal tumor versus leiomyoma classification model; 283 and 191 images were selected as independent test sets to evaluate the segmentation and classification models, respectively. Thirty images were selected as an independent dataset for a human-machine competition comparing the lesion classification accuracy of the model with that of 6 endoscopists. The performance of the segmentation model was evaluated by metrics such as Intersection-over-Union and the Dice coefficient, and that of the classification model by accuracy. The chi-square test was used for statistical analysis.Results:The mean Intersection-over-Union and Dice coefficient of the lesion segmentation model were 0.754 and 0.835, respectively, and its accuracy, recall and F1 score were 95.2%, 98.9% and 97.0%, respectively. With lesion segmentation as a first step, the accuracy of the classification model increased from 70.2% to 92.1%. In the human-machine competition, the accuracy of the classification model in differentiating stromal tumor from leiomyoma was 86.7% (26/30), superior to that of 4 of the 6 endoscopists (56.7%, 17/30; 56.7%, 17/30; 53.3%, 16/30; and 60.0%, 18/30, respectively), and the differences were statistically significant (χ2=7.11, 7.36, 8.10, 6.13; all P<0.05).
There was no significant difference in accuracy between the other 2 endoscopists (76.7%, 23/30 and 73.3%, 22/30, respectively) and the model (both P>0.05). Conclusion:This system could be used for the auxiliary diagnosis of SMT under EUS and provide strong evidence for subsequent treatment decisions.
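The segmentation metrics used above, Intersection-over-Union and the Dice coefficient, can be sketched on toy binary masks (the flattened 4×4 masks below are made up, not study data):

```python
# Sketch: IoU and Dice coefficient for a predicted vs. ground-truth binary
# segmentation mask, represented as flat lists of 0/1 pixels.

def iou_and_dice(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))      # overlapping foreground
    pred_area, truth_area = sum(pred), sum(truth)
    union = pred_area + truth_area - inter
    iou = inter / union
    dice = 2 * inter / (pred_area + truth_area)          # equals 2*IoU/(1+IoU)
    return iou, dice

pred  = [0,1,1,0, 0,1,1,0, 0,1,1,0, 0,0,0,0]  # hypothetical 4x4 predicted mask
truth = [0,1,1,0, 0,1,1,1, 0,1,1,1, 0,0,0,0]  # hypothetical ground truth
i, d = iou_and_dice(pred, truth)
print(round(i, 3), round(d, 3))
```

Dice is always at least as large as IoU for the same masks, which is why the paper's Dice (0.835) exceeds its IoU (0.754).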

17.
Article in Chinese | WPRIM | ID: wpr-995348

ABSTRACT

Objective:To develop an artificial intelligence-based system for measuring the size of gastrointestinal lesions under white light endoscopy in real time.Methods:The system consisted of 3 models. Model 1 identified the biopsy forceps and marked its contour in consecutive video frames. The output of model 1 was passed to model 2, which classified the forceps as open or closed. Model 3 identified lesions and marked their boundaries in real time. The lesion length was then compared with the forceps contour to calculate lesion size. Dataset 1 consisted of 4 835 images collected retrospectively from January 1, 2017 to November 30, 2019 at Renmin Hospital of Wuhan University, which were used for model training and validation. Dataset 2 consisted of images collected prospectively from December 1, 2019 to June 4, 2020 at the Endoscopy Center of Renmin Hospital of Wuhan University, which were used to test the ability of the models to segment the boundaries of the biopsy forceps and lesions. Dataset 3 consisted of 302 images of 151 simulated lesions, each of which included one image at a larger tilt angle (45° from the vertical line of the lesion) and one at a smaller tilt angle (10° from the vertical line of the lesion), to test the ability of the system to measure lesion size with the biopsy forceps in different positions. Dataset 4 was a video test set consisting of videos prospectively collected at the Endoscopy Center of Renmin Hospital of Wuhan University from August 5, 2019 to September 4, 2020. The accuracy of model 1 in identifying the presence or absence of biopsy forceps, of model 2 in classifying the forceps status (open or closed), and of model 3 in identifying the presence or absence of lesions was assessed with the results of endoscopist review or endoscopic surgical pathology as the gold standard.
Intersection over union (IoU) was used to evaluate the segmentation of the biopsy forceps by model 1 and of lesions by model 3, and the absolute error and relative error were used to evaluate the ability of the system to measure lesion size.Results:(1) A total of 1 252 images were included in dataset 2, including 821 images of forceps (401 open and 420 closed), 431 images without forceps, 640 images of lesions and 612 images without lesions. Model 1 judged 433 images as without forceps (430 correctly) and 819 as with forceps (818 correctly), for an accuracy of 99.68% (1 248/1 252). Based on the 818 correctly judged forceps images, the mean IoU for segmentation of the biopsy forceps lobes was 0.91 (95% CI: 0.90-0.92). The classification accuracy of model 2 was evaluated on the same 818 forceps images: model 2 judged 384 images as open forceps (382 correctly) and 434 as closed forceps (416 correctly), for a classification accuracy of 97.56% (798/818). Model 3 judged 654 images as containing lesions (626 correctly) and 598 as without lesions (584 correctly), for an accuracy of 96.65% (1 210/1 252). Based on the 626 correctly judged lesion images, the mean IoU was 0.86 (95% CI: 0.85-0.87). (2) In dataset 3, when the tilt angle of the biopsy forceps was small, the mean absolute error of lesion size measurement was 0.17 mm (95% CI: 0.08-0.28 mm) and the mean relative error was 3.77% (95% CI: 0.00%-10.85%). When the tilt angle was large, the mean absolute error was 0.17 mm (95% CI: 0.09-0.26 mm) and the mean relative error was 4.02% (95% CI: 2.90%-5.14%). (3) In dataset 4, a total of 780 images from 59 endoscopic examination videos of 59 patients were included.
The mean absolute error of lesion size measurement was 0.24 mm (95% CI: 0.00-0.67 mm), and the mean relative error was 9.74% (95% CI: 0.00%-29.83%). Conclusion:The system could accurately measure the size of gastrointestinal lesions under endoscopy and may improve the measurement accuracy of endoscopists.
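The measurement principle described above, comparing the lesion's extent against the biopsy forceps of known physical width, amounts to a pixel-to-millimeter conversion. A minimal sketch of that idea; the forceps width and pixel counts below are assumed values for illustration, not the system's actual calibration:

```python
# Sketch: a biopsy forceps of known physical width serves as an in-image ruler;
# the lesion's pixel extent is converted to mm via the derived scale.
# FORCEPS_WIDTH_MM is an assumed constant, not a value from the paper.

FORCEPS_WIDTH_MM = 2.4  # hypothetical nominal width of the forceps in the image

def lesion_size_mm(lesion_px, forceps_px, forceps_mm=FORCEPS_WIDTH_MM):
    """Convert a lesion's pixel length to mm using the forceps pixel width."""
    mm_per_px = forceps_mm / forceps_px
    return lesion_px * mm_per_px

# E.g. forceps spans 60 px, lesion spans 150 px -> (2.4 / 60) * 150 = 6.0 mm
print(lesion_size_mm(150, 60))
```

This also illustrates why forceps tilt matters, as tested in dataset 3: a tilted forceps projects a shorter pixel width, inflating mm_per_px and hence the estimated lesion size.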

18.
Article in Chinese | WPRIM | ID: wpr-912172

ABSTRACT

Objective:To develop an endoscopic ultrasonography (EUS) station recognition and pancreatic segmentation system based on deep learning and to validate its efficacy.Methods:Data of 269 EUS procedures were retrospectively collected from Renmin Hospital of Wuhan University between December 2016 and December 2019 and divided into 3 datasets: (1) dataset A of 205 procedures for model training, containing 16 305 images for classification training and 1 953 images for segmentation training; (2) dataset B of 44 procedures for model testing, containing 1 606 images for classification testing and 480 images for segmentation testing; (3) dataset C of 20 procedures with 150 images for comparing the performance of the model with that of endoscopists. EUS experts A and B (each with more than 10 years of experience) classified and labeled all images of datasets A, B and C by consensus, and the results served as the gold standard. EUS expert C and senior EUS endoscopists D and E (each with more than 5 years of experience) classified and labeled the images in dataset C, and their results were used for comparison with the model. The main outcomes included classification accuracy, segmentation Dice (F1 score) and the Cohen kappa coefficient for consistency analysis.Results:In test dataset B, the model achieved a mean classification accuracy of 94.1%. The mean Dice of pancreatic and vascular segmentation was 0.826 and 0.841, respectively. In dataset C, the classification accuracy of the model reached 90.0%, while that of expert C and senior endoscopists D and E was 89.3%, 88.7% and 87.3%, respectively. The Dice of pancreatic and vascular segmentation was 0.740 and 0.859 for the model, 0.708 and 0.778 for expert C, 0.747 and 0.875 for senior endoscopist D, and 0.774 and 0.789 for senior endoscopist E.
The performance of the model was thus comparable to the expert level. Consistency analysis showed high consistency between the model and the endoscopists (Kappa=0.823 between the model and expert C, 0.840 between the model and senior endoscopist D, and 0.799 between the model and senior endoscopist E).Conclusion:The deep learning-based EUS station classification and pancreatic segmentation system can be used for quality control of pancreatic EUS, with classification and segmentation performance comparable to that of EUS experts.

19.
Article in Chinese | WPRIM | ID: wpr-912173

ABSTRACT

Objective:To assess the influence of an artificial intelligence (AI)-assisted diagnosis system on the performance of endoscopists in diagnosing gastric cancer by magnifying narrow band imaging (M-NBI).Methods:M-NBI images of early gastric cancer (EGC) and non-gastric cancer were collected from Renmin Hospital of Wuhan University from March 2017 to January 2020 and from public datasets; 4 667 images (1 950 of EGC and 2 717 of non-gastric cancer) formed the training set and 1 539 images (483 of EGC and 1 056 of non-gastric cancer) formed the test set. The model was trained using deep learning. One hundred M-NBI videos (38 of gastric cancer and 62 of non-gastric cancer) from Beijing Cancer Hospital and Renmin Hospital of Wuhan University between June 9, 2020 and November 17, 2020 were prospectively collected as a video test set. Four endoscopists from four other hospitals diagnosed the video test set twice, with and without AI assistance, and the influence of the system on their performance was assessed.Results:Without AI assistance, the accuracy, sensitivity and specificity of the endoscopists' diagnosis of gastric cancer were 81.00%±4.30%, 71.05%±9.67% and 87.10%±10.88%, respectively. With AI assistance, they were 86.50%±2.06%, 84.87%±11.07% and 87.50%±4.47%, respectively. Diagnostic accuracy and sensitivity of the endoscopists were higher with AI assistance than without, although the differences were not statistically significant (P=0.302 and P=0.180, respectively). The accuracy, sensitivity and specificity of the AI system in identifying gastric cancer in the video test set were 88.00% (88/100), 97.37% (37/38) and 82.26% (51/62), respectively. The sensitivity of AI was higher than the average sensitivity of the endoscopists (P=0.002). Conclusion:The AI-assisted diagnosis system is an effective tool to assist the diagnosis of gastric cancer under M-NBI and can improve the diagnostic ability of endoscopists.
It can also remind endoscopists of high-risk areas in real time to reduce the probability of missed diagnosis.
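The video-test figures reported above can be recomputed from the confusion-matrix counts implied by the abstract (37 of 38 cancers detected, 51 of 62 non-cancers correctly ruled out). A minimal sketch:

```python
# Sketch: accuracy, sensitivity and specificity from confusion-matrix counts.
# Counts come from the abstract's video test set: 38 cancers (37 true positives,
# 1 false negative) and 62 non-cancers (51 true negatives, 11 false positives).

def metrics(tp, fn, tn, fp):
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,       # all correct calls / all cases
        "sensitivity": tp / (tp + fn),       # recall on cancer cases
        "specificity": tn / (tn + fp),       # recall on non-cancer cases
    }

m = metrics(tp=37, fn=1, tn=51, fp=11)
print({k: round(v, 4) for k, v in m.items()})
# accuracy 0.88, sensitivity ~0.9737, specificity ~0.8226, matching the abstract
```

The asymmetry is typical of a screening-oriented system: high sensitivity (few missed cancers) traded against lower specificity (more false alarms for endoscopists to review).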

20.
Article in Chinese | WPRIM | ID: wpr-912176

ABSTRACT

Objective:To evaluate deep learning in improving the diagnostic rate of adenomatous and non-adenomatous polyps.Methods:Non-magnifying narrow band imaging (NBI) polyp images obtained from Endoscopy Center of Renmin Hospital, Wuhan University were divided into three datasets. Dataset 1 (2 699 adenomatous and 1 846 non-adenomatous non-magnifying NBI polyp images from January 2018 to October 2020) was used for model training and validation of the diagnosis system. Dataset 2 (288 adenomatous and 210 non-adenomatous non-magnifying NBI polyp images from January 2018 to October 2020) was used to compare the accuracy of polyp classification between the system and endoscopists. At the same time, the accuracy of 4 trainees in polyp classification with and without the assistance of this system was compared. Dataset 3 (203 adenomatous and 141 non-adenomatous non-magnifying NBI polyp images from November 2020 to January 2021) was used to prospectively test the system.Results:The accuracy of the system in polyp classification was 90.16% (449/498) in dataset 2, superior to that of endoscopists. With the assistance of the system, the accuracy of colorectal polyp diagnosis was significantly improved. In the prospective study, the accuracy of the system was 89.53% (308/344).Conclusion:The colorectal polyp classification system based on deep learning can significantly improve the accuracy of trainees in polyp classification.
