Results 1 - 12 of 12
1.
Article in English | MEDLINE | ID: mdl-38744667

ABSTRACT

BACKGROUND AND AIM: False positives (FPs) pose a significant challenge in the application of artificial intelligence (AI) for polyp detection during colonoscopy. This study aimed to quantitatively evaluate the impact of the FPs of computer-aided polyp detection (CADe) systems on endoscopists. METHODS: The model's FPs were categorized into four gradients: 0-5, 5-10, 10-15, and 15-20 FPs per minute (FPPM). Fifty-six colonoscopy videos were collected for a crossover study involving 10 endoscopists. The polyp miss rate (PMR) was set as the primary outcome. Subsequently, to further verify the impact of FPPM on the assistive capability of AI in clinical settings, a secondary analysis was conducted on a prospective randomized controlled trial (RCT) performed at Renmin Hospital of Wuhan University, China, from July 1 to October 15, 2020, with the adenoma detection rate (ADR) as the primary outcome. RESULTS: Compared with the routine group, CADe reduced the PMR when FPPM was less than 5. However, as FPPM increased, the beneficial effect of CADe gradually weakened. In the secondary analysis of the RCT, a total of 956 patients were enrolled. In the AI-assisted groups, ADR was higher when FPPM ≤ 5 than when FPPM > 5 (CADe group: 27.78% vs 11.90%; P = 0.014; odds ratio [OR], 0.351; 95% confidence interval [CI], 0.152-0.812; COMBO group: 38.40% vs 23.46%; P = 0.029; OR, 0.427; 95% CI, 0.199-0.916). After AI intervention, ADR increased when FPPM ≤ 5 (27.78% vs 14.76%; P = 0.001; OR, 0.399; 95% CI, 0.231-0.690), but no statistically significant difference was found when FPPM > 5 (11.90% vs 14.76%; P = 0.788; OR, 1.111; 95% CI, 0.514-2.403). CONCLUSION: The FP level of CADe affects its effectiveness as an aid to endoscopists, with the best effect achieved when FPPM is less than 5.
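The odds ratios and confidence intervals quoted in this abstract follow the standard 2×2-table construction with a Wald interval on the log scale. A minimal sketch; the function name and the counts below are illustrative, not taken from the trial:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% confidence interval.

    a, b: events / non-events in group 1
    c, d: events / non-events in group 2
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: 10/90 adenoma detections vs 20/80
print(odds_ratio_ci(10, 90, 20, 80))
```

For the FPPM comparison above, one would tabulate ADR events and non-events within each FPPM stratum before applying the same formula.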

2.
Sci Transl Med ; 16(743): eadk5395, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38630847

ABSTRACT

Endoscopy is the primary modality for detecting asymptomatic esophageal squamous cell carcinoma (ESCC) and precancerous lesions, but improving the detection rate remains challenging. We developed a system based on deep convolutional neural networks (CNNs) for detecting esophageal cancer and precancerous lesions [high-risk esophageal lesions (HrELs)] and validated its efficacy in improving the HrEL detection rate in clinical practice (trial registration ChiCTR2100044126 at www.chictr.org.cn). Between April 2021 and March 2022, 3117 patients ≥50 years old were consecutively recruited from Taizhou Hospital, Zhejiang Province, and randomly assigned 1:1 by block randomization to an experimental group (CNN-assisted endoscopy) or a control group (unassisted endoscopy). The primary endpoint was the HrEL detection rate. In the intention-to-treat population, the HrEL detection rate was significantly higher in the experimental group [28 of 1556 (1.8%)] than in the control group [14 of 1561 (0.9%); P = 0.029], twice that of the control group. Similar findings were observed between the experimental and control groups [28 of 1524 (1.9%) versus 13 of 1534 (0.9%), respectively; P = 0.021]. The system's sensitivity, specificity, and accuracy for detecting HrELs were 89.7%, 98.5%, and 98.2%, respectively. No adverse events occurred. The proposed system thus safely improved the HrEL detection rate during endoscopy. Deep learning assistance may enhance early diagnosis and treatment of esophageal cancer and may become a useful tool for esophageal cancer screening.
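The sensitivity, specificity, and accuracy figures reported for this system reduce to confusion-matrix counts. A minimal illustration; the function and counts are hypothetical, not from the study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate on lesions
    specificity = tn / (tn + fp)                # true-negative rate on non-lesions
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical counts: 90 true positives, 10 false positives,
# 90 true negatives, 10 false negatives
print(diagnostic_metrics(90, 10, 90, 10))
```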


Subject(s)
Deep Learning , Esophageal Neoplasms , Esophageal Squamous Cell Carcinoma , Precancerous Conditions , Humans , Middle Aged , Esophageal Neoplasms/diagnosis , Esophageal Neoplasms/epidemiology , Esophageal Neoplasms/pathology , Esophageal Squamous Cell Carcinoma/pathology , Prospective Studies , Precancerous Conditions/pathology
3.
Gastrointest Endosc ; 99(1): 91-99.e9, 2024 01.
Article in English | MEDLINE | ID: mdl-37536635

ABSTRACT

BACKGROUND AND AIMS: The efficacy and safety of colonoscopy performed by artificial intelligence (AI)-assisted novices remain unknown. The aim of this study was to compare the lesion detection capability of novices, AI-assisted novices, and experts. METHODS: This multicenter, randomized, noninferiority tandem study was conducted across 3 hospitals in China from May 1, 2022, to November 11, 2022. Eligible patients were randomized into 1 of 3 groups: the CN group (control novice group, withdrawal performed by a novice independently), the AN group (AI-assisted novice group, withdrawal performed by a novice with AI assistance), or the CE group (control expert group, withdrawal performed by an expert independently). Participants then underwent a repeat colonoscopy conducted by an AI-assisted expert to evaluate the lesion miss rate and ensure lesion detection. The primary outcome was the adenoma miss rate (AMR). RESULTS: A total of 685 eligible patients were analyzed: 229 in the CN group, 227 in the AN group, and 229 in the CE group. Both the AMR and the polyp miss rate were lower in the AN group than in the CN group (18.82% vs 43.69% [P < .001] and 21.23% vs 35.38% [P < .001], respectively). The noninferiority margin was met between the AN and CE groups for both the AMR and the polyp miss rate (18.82% vs 26.97% [P = .202] and 21.23% vs 24.10% [P = .249]). CONCLUSIONS: AI-assisted colonoscopy lowered the AMR of novices, making them noninferior to experts. The withdrawal technique of new endoscopists can be enhanced by AI-assisted colonoscopy. (Clinical trial registration number: NCT05323279.)
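In tandem designs such as this one, the miss rate is conventionally computed per group as lesions found only on the second (reference) pass divided by all lesions found on either pass. A sketch under that assumption; the helper and counts are illustrative, not from the study:

```python
def miss_rate(missed_second_pass_only, found_first_pass):
    """Per-group miss rate in a tandem study: lesions detected only on the
    reference (second) pass, divided by all lesions detected on either pass."""
    total = missed_second_pass_only + found_first_pass
    return missed_second_pass_only / total

# Hypothetical: 25 adenomas seen only on the tandem pass, 75 on the first pass
print(miss_rate(25, 75))
```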


Subject(s)
Adenoma , Colonic Polyps , Colorectal Neoplasms , Polyps , Humans , Artificial Intelligence , Prospective Studies , Colonoscopy/methods , Research Design , Adenoma/diagnosis , Adenoma/pathology , Colonic Polyps/diagnostic imaging , Colorectal Neoplasms/diagnosis
4.
Clin Transl Gastroenterol ; 14(10): e00606, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37289447

ABSTRACT

INTRODUCTION: Endoscopic evaluation is crucial for predicting the invasion depth of esophageal squamous cell carcinoma (ESCC) and selecting appropriate treatment strategies. Our study aimed to develop and validate an interpretable artificial intelligence-based invasion depth prediction system (AI-IDPS) for ESCC. METHODS: We reviewed PubMed for eligible studies and collected potential visual feature indices associated with invasion depth. Multicenter data comprising 5,119 narrow-band imaging magnifying endoscopy images from 581 patients with ESCC were collected from 4 hospitals between April 2016 and November 2021. Thirteen models for feature extraction and 1 model for feature fitting were developed for the AI-IDPS. The efficiency of the AI-IDPS was evaluated on 196 images and 33 consecutively collected videos and compared with a pure deep learning model and with the performance of endoscopists. A crossover study and a questionnaire survey were conducted to investigate the system's impact on endoscopists' understanding of the AI predictions. RESULTS: For differentiating SM2-3 lesions, the AI-IDPS demonstrated sensitivity, specificity, and accuracy of 85.7%, 86.3%, and 86.2% in image validation and 87.5%, 84%, and 84.9% in consecutively collected videos, respectively. The pure deep learning model showed significantly lower sensitivity, specificity, and accuracy (83.7%, 52.1%, and 60.0%, respectively). With AI-IDPS assistance, endoscopists showed significantly improved accuracy (from 79.7% to 84.9% on average, P = 0.03) and comparable sensitivity (from 37.5% to 55.4% on average, P = 0.27) and specificity (from 93.1% to 94.3% on average, P = 0.75). DISCUSSION: Based on domain knowledge, we developed an interpretable system for predicting ESCC invasion depth. This anthropomorphic approach demonstrates the potential to outperform pure deep learning architectures in practice.


Subject(s)
Carcinoma, Squamous Cell , Esophageal Neoplasms , Esophageal Squamous Cell Carcinoma , Humans , Esophageal Squamous Cell Carcinoma/diagnosis , Esophageal Squamous Cell Carcinoma/pathology , Esophageal Neoplasms/diagnostic imaging , Esophageal Neoplasms/pathology , Carcinoma, Squamous Cell/diagnostic imaging , Carcinoma, Squamous Cell/pathology , Esophagoscopy/methods , Artificial Intelligence , Cross-Over Studies , Sensitivity and Specificity , Multicenter Studies as Topic
5.
Am J Clin Pathol ; 160(4): 394-403, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37279532

ABSTRACT

OBJECTIVES: The histopathologic diagnosis of colorectal sessile serrated lesions (SSLs) and hyperplastic polyps (HPs) shows low consistency among pathologists. This study aimed to develop and validate a deep learning (DL)-based logical anthropomorphic pathology diagnostic system (LA-SSLD) for the differential diagnosis of colorectal SSL and HP. METHODS: The diagnostic framework of the LA-SSLD system was constructed according to current guidelines and consisted of 4 DL models: deep convolutional neural network (DCNN) 1 for mucosal layer segmentation, DCNN 2 for muscularis mucosa segmentation, DCNN 3 for glandular lumen segmentation, and DCNN 4 for glandular lumen classification (aberrant or regular). A total of 175 HP and 127 SSL sections were collected from Renmin Hospital of Wuhan University between November 2016 and November 2022. The performance of the LA-SSLD system was compared with that of 11 pathologists of different qualifications in a human-machine contest. RESULTS: The Dice scores of DCNNs 1, 2, and 3 were 93.66%, 58.38%, and 74.04%, respectively. The accuracy of DCNN 4 was 92.72%. In the human-machine contest, the accuracy, sensitivity, and specificity of the LA-SSLD system were 85.71%, 86.36%, and 85.00%, respectively. In comparison with the experts (pathologist D: accuracy 83.33%, sensitivity 90.91%, specificity 75.00%; pathologist E: accuracy 85.71%, sensitivity 90.91%, specificity 80.00%), the LA-SSLD achieved expert-level accuracy and outperformed all the senior and junior pathologists. CONCLUSIONS: This study proposed a logical anthropomorphic diagnostic system for the differential diagnosis of colorectal SSL and HP. The system's diagnostic performance is comparable to that of experts, and it has the potential to become a powerful diagnostic tool for SSL. Notably, a logical anthropomorphic system can achieve expert-level accuracy with fewer samples, suggesting ideas for the development of other artificial intelligence models.
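The Dice scores quoted for DCNNs 1-3 are the standard overlap measure between a predicted segmentation mask and a reference annotation. A minimal sketch, with masks shown as flat 0/1 sequences; this is not the study's implementation:

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks given as flat 0/1 sequences:
    2 * |intersection| / (|pred| + |target|)."""
    assert len(pred) == len(target)
    intersection = sum(p and t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 2.0 * intersection / denom if denom else 1.0  # empty masks agree

# Toy masks: prediction covers two pixels, reference covers one of them
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))
```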


Subject(s)
Colonic Polyps , Colorectal Neoplasms , Deep Learning , Humans , Colonic Polyps/diagnosis , Colonic Polyps/pathology , Artificial Intelligence , Neural Networks, Computer , Colorectal Neoplasms/diagnosis , Colorectal Neoplasms/pathology
6.
Gastrointest Endosc ; 98(2): 181-190.e10, 2023 08.
Article in English | MEDLINE | ID: mdl-36849056

ABSTRACT

BACKGROUND AND AIMS: EGD is essential for diagnosing GI disorders, and reports are pivotal to facilitating postprocedure diagnosis and treatment. Manual report generation lacks sufficient quality and is labor intensive. We developed and validated an artificial intelligence-based endoscopy automatic reporting system (AI-EARS). METHODS: The AI-EARS was designed for automatic report generation, including real-time image capture, diagnosis, and textual description. It was developed using multicenter datasets from 8 hospitals in China, including 252,111 images for training and 62,706 images and 950 videos for testing. Twelve endoscopists and 44 endoscopy procedures were consecutively enrolled to evaluate the effect of the AI-EARS in a multireader, multicase, crossover study. The precision and completeness of reports were compared between endoscopists using the AI-EARS and those using conventional reporting systems. RESULTS: In video validation, the AI-EARS achieved completeness of 98.59% and 99.69% for esophageal and gastric abnormality records, respectively; accuracies of 87.99% and 88.85% for esophageal and gastric lesion location records; and accuracies of 73.14% and 85.24% for diagnosis. Compared with conventional reporting systems, the AI-EARS achieved greater completeness (79.03% vs 51.86%, P < .001) and accuracy (64.47% vs 42.81%, P < .001) of the textual descriptions and greater completeness of the landmark photo-documentation (92.23% vs 73.69%, P < .001). The mean reporting time for an individual lesion was significantly reduced with AI-EARS assistance (from 80.13 ± 16.12 to 46.47 ± 11.68 seconds, P < .001). CONCLUSIONS: The AI-EARS showed its efficacy in improving the accuracy and completeness of EGD reports. It might facilitate the generation of complete endoscopy reports and postendoscopy patient management. (Clinical trial registration number: NCT05479253.)


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Cross-Over Studies , China , Hospitals
7.
NPJ Digit Med ; 5(1): 183, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36536039

ABSTRACT

Bleeding risk factors for gastroesophageal varices (GEV) detected by endoscopy in cirrhotic patients determine the prophylactic treatment patients will undergo over the following 2 years. We propose a methodology for measuring these risk factors. We create an artificial intelligence system (ENDOANGEL-GEV) containing six models to segment GEV and to classify the grades (grades 1-3) and red color signs (RC, RC0-RC3) of varices. It also summarizes changes in the above results by region in real time. ENDOANGEL-GEV is trained using 6034 images from 1156 cirrhotic patients across three hospitals (dataset 1) and validated on multicenter datasets with 11,009 images from 141 videos (dataset 2) and in a prospective study recruiting 161 cirrhotic patients from Renmin Hospital of Wuhan University (dataset 3). In dataset 1, ENDOANGEL-GEV achieves intersection over union values of 0.8087 for segmenting esophageal varices and 0.8141 for gastric varices. In dataset 2, the system maintains consistent accuracy across images from the three hospitals. In dataset 3, ENDOANGEL-GEV surpasses attending endoscopists in detecting RC of GEV and classifying grades (p < 0.001). When ranking patient risk in combination with the Child-Pugh score, ENDOANGEL-GEV outperforms endoscopists for esophageal varices (p < 0.001) and shows comparable performance for gastric varices (p = 0.152). Compared with endoscopists, ENDOANGEL-GEV may help 12.31% (16/130) more patients receive the right intervention. We establish an interpretable system for the endoscopic diagnosis and risk stratification of GEV. It will assist in accurately detecting first-bleeding risk factors and expand the scope of quantitative measurement of diseases.
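The intersection-over-union values reported for dataset 1 are the usual overlap metric for segmentation. A small sketch, with masks as flat 0/1 sequences; illustrative only, not the system's code:

```python
def iou(pred, target):
    """Intersection over union for two binary masks (flat 0/1 sequences)."""
    assert len(pred) == len(target)
    intersection = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return intersection / union if union else 1.0  # both empty: perfect match

# Toy masks: one shared foreground pixel, three pixels in the union
print(iou([1, 1, 0, 0], [1, 0, 1, 0]))
```

IoU and the Dice coefficient are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why both appear interchangeably as segmentation metrics across these studies.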

8.
EClinicalMedicine ; 46: 101366, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35521066

ABSTRACT

Background: Prompt diagnosis of early gastric cancer (EGC) is crucial for improving patient survival. However, most previous computer-aided diagnosis (CAD) systems did not concretize or explain their diagnostic theories. We aimed to develop a logical anthropomorphic artificial intelligence (AI) diagnostic system, named ENDOANGEL-LA (logical anthropomorphic), for EGCs under magnifying image-enhanced endoscopy (M-IEE). Methods: We retrospectively collected data for 692 patients and 1897 images from Renmin Hospital of Wuhan University, Wuhan, China, between Nov 15, 2016 and May 7, 2019. The images were randomly assigned to a training set and a test set by patient at a ratio of about 4:1. ENDOANGEL-LA was developed based on feature extraction combining quantitative analysis, deep learning (DL), and machine learning (ML). Eleven diagnostic feature indexes were integrated into seven ML models, and an optimal model was selected. The performance of ENDOANGEL-LA was evaluated and compared with that of endoscopists and sole DL models. The satisfaction of endoscopists with ENDOANGEL-LA and the sole DL model was also compared. Findings: Random forest showed the best performance, and demarcation line and microstructure density were the most important feature indexes. The accuracy of ENDOANGEL-LA on images (88.76%) was significantly higher than that of the sole DL model (82.77%, p = 0.034) and the novices (71.63%, p < 0.001), and comparable to that of the experts (88.95%). The accuracy of ENDOANGEL-LA on videos (87.00%) was significantly higher than that of the sole DL model (68.00%, p < 0.001) and comparable to that of the endoscopists (89.00%). The accuracy of novices was significantly improved with the assistance of ENDOANGEL-LA (87.45%, p < 0.001). The satisfaction of endoscopists with ENDOANGEL-LA was significantly higher than with the sole DL model.
Interpretation: We established a logical anthropomorphic system (ENDOANGEL-LA) that can diagnose EGC under M-IEE with diagnostic theory concretization, high accuracy, and good explainability. It has the potential to increase interactivity between endoscopists and CADs, and improve trust and acceptability of endoscopists for CADs. Funding: This work was partly supported by a grant from the Hubei Province Major Science and Technology Innovation Project (2018-916-000-008) and the Fundamental Research Funds for the Central Universities (2042021kf0084).

9.
Endoscopy ; 54(8): 771-777, 2022 08.
Article in English | MEDLINE | ID: mdl-35272381

ABSTRACT

BACKGROUND AND STUDY AIMS: Endoscopic reports are essential for the diagnosis and follow-up of gastrointestinal diseases. This study aimed to construct an intelligent system for automatic photo documentation during esophagogastroduodenoscopy (EGD) and to test its utility in clinical practice. PATIENTS AND METHODS: Seven convolutional neural networks, trained and tested using 210,198 images, were integrated to construct the endoscopic automatic image reporting system (EAIRS). We tested its performance through man-machine comparison at three levels: internal, external, and prospective tests. Between May 2021 and June 2021, patients undergoing EGD at Renmin Hospital of Wuhan University were recruited. The primary outcomes were accuracy in capturing anatomical landmarks, completeness in capturing anatomical landmarks, and completeness in capturing detected lesions. RESULTS: The EAIRS outperformed endoscopists in the retrospective internal and external tests. A total of 161 consecutive patients were enrolled in the prospective test, in which the EAIRS achieved an accuracy of 95.2% in capturing anatomical landmarks. It also achieved higher completeness in capturing anatomical landmarks than endoscopists (93.1% vs. 88.8%) and was comparable to endoscopists in capturing detected lesions (99.0% vs. 98.0%). CONCLUSIONS: The EAIRS can generate qualified image reports and could be a powerful tool for generating endoscopic reports in clinical practice.


Subject(s)
Deep Learning , Endoscopy, Digestive System , Endoscopy/methods , Endoscopy, Digestive System/methods , Humans , Prospective Studies
11.
Clin Transl Gastroenterol ; 12(6): e00366, 2021 06 15.
Article in English | MEDLINE | ID: mdl-34128480

ABSTRACT

INTRODUCTION: Gastrointestinal endoscopic quality is operator-dependent. To ensure endoscopy quality, we constructed an endoscopic audit and feedback system named Endo.Adm and evaluated its effect in the form of a pretest-posttest trial. METHODS: The Endo.Adm system was developed using Python and deep convolutional neural network models. Sixteen endoscopists were recruited from Renmin Hospital of Wuhan University and randomly assigned to receive Endo.Adm feedback or not (8 in the feedback group and 8 in the control group). The feedback group received weekly quality report cards automatically generated by Endo.Adm. We then compared the adenoma detection rate (ADR) and the detection rate of gastric precancerous conditions between the baseline and postintervention phases for endoscopists in each group to evaluate the impact of Endo.Adm feedback. In total, 1,191 colonoscopies and 3,515 gastroscopies were included for analysis. RESULTS: ADR was increased after Endo.Adm feedback (10.8%-20.3%, P < 0.01,

Subject(s)
Adenoma/diagnostic imaging , Clinical Competence , Colonoscopy/standards , Deep Learning , Quality Indicators, Health Care/statistics & numerical data , Adenoma/epidemiology , Adult , China , Early Detection of Cancer , Feedback , Female , Humans , Male , Middle Aged , Quality Improvement , Risk Factors
12.
Front Oncol ; 11: 622827, 2021.
Article in English | MEDLINE | ID: mdl-33959495

ABSTRACT

BACKGROUND AND AIMS: Prediction of intramucosal gastric cancer (GC) is a major challenge, and it is not clear whether artificial intelligence can assist endoscopists in this diagnosis. METHODS: A deep convolutional neural network (DCNN) model was developed using 3407 endoscopic images retrospectively collected from 666 gastric cancer patients at two endoscopy centers (training dataset). The DCNN model's performance was tested with 228 images from 62 independent patients (testing dataset). Endoscopists evaluated the image and video testing datasets with and without the DCNN model's assistance, respectively. Endoscopists' diagnostic performance was compared with and without the DCNN model's assistance, and the effects of assistance were investigated using correlation and linear regression analyses. RESULTS: The DCNN model discriminated intramucosal GC from advanced GC with an AUC of 0.942 (95% CI, 0.915-0.970), a sensitivity of 90.5% (95% CI, 84.1%-95.4%), and a specificity of 85.3% (95% CI, 77.1%-90.9%) in the testing dataset. With the DCNN model's assistance, the diagnostic performance of novice endoscopists was comparable to that of expert endoscopists (accuracy: 84.6% vs. 85.5%; sensitivity: 85.7% vs. 87.4%; specificity: 83.3% vs. 83.0%). The mean pairwise kappa value of endoscopists increased significantly with the DCNN model's assistance (0.430-0.629 vs. 0.660-0.861). The diagnostic duration was reduced considerably with the DCNN model's assistance, from 4.35 s to 3.01 s. The correlation between endoscopists' perseverance of effort and diagnostic accuracy was diminished when using the DCNN model (r: 0.470 vs. 0.076). CONCLUSIONS: An AI-assisted system was established and found useful for helping novice endoscopists achieve diagnostic performance comparable to that of experts.
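The "mean pairwise kappa" reported here is typically the average of Cohen's kappa over all rater pairs. A sketch under that assumption; the labels below are illustrative, not from the study:

```python
from itertools import combinations

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary labels (0/1)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    p1, p2 = sum(r1) / n, sum(r2) / n
    pe = p1 * p2 + (1 - p1) * (1 - p2)            # agreement expected by chance
    return (po - pe) / (1 - pe)

def mean_pairwise_kappa(all_ratings):
    """Average Cohen's kappa over every pair of raters."""
    pairs = list(combinations(all_ratings, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical raters labeling four cases
print(mean_pairwise_kappa([[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 0, 0]]))
```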
