2.
Gastrointest Endosc; 97(2): 268-278.e1, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36007584

ABSTRACT

BACKGROUND AND AIMS: Accurately distinguishing benign from malignant biliary strictures (MBSs) remains challenging. It has been suggested that direct visualization and interpretation of cholangioscopy images provide greater accuracy for stricture classification than current ERCP-based sampling techniques (ie, brush cytology and forceps biopsy sampling). We aimed to develop a convolutional neural network (CNN) model capable of accurate stricture classification and real-time evaluation based solely on cholangioscopy image analysis. METHODS: Consecutive patients with cholangioscopy examinations from 2012 to 2021 were reviewed. A CNN was developed and tested using cholangioscopy images with direct expert annotations. The CNN was then applied to a multicenter, reserved test set of cholangioscopy videos, and its performance was directly compared with that of ERCP sampling techniques. Occlusion block heatmap analyses were used to evaluate and rank cholangioscopy features associated with MBSs. RESULTS: One hundred fifty-four patients with available cholangioscopy examinations were included in the study. The final image database comprised 2,388,439 still images. The CNN demonstrated good performance when tasked with mimicking expert annotations of high-quality malignant images (area under the receiver-operating characteristic curve, 0.941). Overall accuracy of CNN-based video analysis (0.906) was significantly greater than that of brush cytology (0.625, P = .04) or forceps biopsy sampling (0.609, P = .03). Occlusion block heatmap analysis demonstrated that the most frequent image feature for an MBS was the presence of frond-like mucosa/papillary projections. CONCLUSIONS: This study demonstrates that a CNN developed using cholangioscopy data alone has greater accuracy for biliary stricture classification than traditional ERCP-based sampling techniques.
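The abstract does not include implementation details, but occlusion block heatmap analysis as referenced here is a standard technique: a patch of the input frame is masked and the resulting drop in the model's predicted malignancy probability indicates how strongly the CNN relied on that region. The sketch below illustrates the general idea in PyTorch; the model interface, patch size, and stride are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of occlusion-based heatmap analysis for a binary
# benign-vs-malignant stricture classifier. All names are illustrative.
import torch
import torch.nn.functional as F

def occlusion_heatmap(model, image, patch=32, stride=16, target_class=1):
    """Slide a mean-valued occlusion block over the image and record the
    drop in the predicted malignancy probability at each position."""
    model.eval()
    _, h, w = image.shape  # image: (C, H, W) tensor
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y + patch, x:x + patch] = image.mean()  # occlusion block
            with torch.no_grad():
                p = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heat[i, j] = base - p  # importance = probability drop when region is hidden
    return heat
```

Regions with the largest probability drops (eg, areas showing frond-like mucosa or papillary projections, per the authors' findings) can then be rendered as a heatmap overlay on the original frame.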


Subject(s)
Cholestasis; Deep Learning; Humans; Constriction, Pathologic/diagnosis; Artificial Intelligence; Prospective Studies; Cholestasis/diagnostic imaging; Cholestasis/etiology
3.
Gastrointest Endosc Clin N Am; 31(2): 387-397, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33743933

ABSTRACT

Artificial intelligence (AI) research for medical applications has expanded quickly. Advancements in computer processing now allow for the development of complex neural network architectures (eg, convolutional neural networks) that are capable of extracting and learning complex features from massive data sets, including large image databases. Gastroenterology and endoscopy are well suited for AI research. Video capsule endoscopy is an ideal platform for AI model research given the large amount of data produced by each capsule examination and the annotated databases that are already available. Studies have demonstrated high performance for applications of capsule-based AI models developed for various pathologic conditions.
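As a purely illustrative companion to this review, capsule-frame classifiers of the kind described are commonly built by fine-tuning a pretrained convolutional backbone on an annotated frame database; nothing below is taken from the article itself, and the backbone choice is an assumption.

```python
# Illustrative sketch only: a pretrained ResNet-50 backbone with a new
# output head, to be fine-tuned on annotated capsule-endoscopy frames.
import torch.nn as nn
from torchvision import models

def build_frame_classifier(num_classes: int = 2) -> nn.Module:
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    # Replace the 1000-class ImageNet head with a task-specific head.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone
```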


Subject(s)
Capsule Endoscopy; Gastroenterology; Artificial Intelligence; Humans; Research
4.
Gastrointest Endosc; 93(5): 1121-1130.e1, 2021 May.
Article in English | MEDLINE | ID: mdl-32861752

ABSTRACT

BACKGROUND AND AIMS: Detection and characterization of focal liver lesions (FLLs) is key to optimizing treatment for patients who may have a primary hepatic cancer or metastatic disease to the liver. This is the first study to develop an EUS-based convolutional neural network (CNN) model for identifying and classifying FLLs. METHODS: A prospective EUS database comprising cases of FLLs visualized and sampled via EUS was reviewed. Relevant still images and videos of liver parenchyma and FLLs were extracted. Patient data were then randomly distributed for CNN model training and testing. Once a final model was created, occlusion heatmap analysis was performed to assess the ability of the EUS-CNN model to autonomously identify FLLs. The performance of the EUS-CNN for differentiating benign and malignant FLLs was also analyzed. RESULTS: A total of 210,685 unique EUS images from 256 patients were used to train, validate, and test the CNN model. Occlusion heatmap analyses demonstrated that the EUS-CNN model autonomously located FLLs in 92.0% of EUS video assets. When evaluating random still images extracted from videos or physician-captured images, the CNN model was 90% sensitive and 71% specific (area under the receiver operating characteristic curve [AUROC], 0.861) for classifying malignant FLLs. When evaluating full-length video assets, the EUS-CNN model was 100% sensitive and 80% specific (AUROC, 0.904) for classifying malignant FLLs. CONCLUSIONS: This study demonstrated the capability of an EUS-CNN model to autonomously identify FLLs and accurately classify them as either malignant or benign lesions.
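The abstract reports separate performance for still images and for full-length video assets, which implies some rule for pooling frame-level predictions into a video-level call. That rule is not specified here, so the sketch below simply averages per-frame malignancy probabilities; the function, class index, and threshold are assumptions for illustration.

```python
# Hedged sketch: aggregate per-frame CNN outputs into one video-level label.
import torch
import torch.nn.functional as F

def classify_video(model, frames, threshold=0.5):
    """frames: (N, C, H, W) tensor of EUS frames from one video asset
    (mini-batching omitted for brevity)."""
    model.eval()
    with torch.no_grad():
        # Per-frame probability of the malignant class (index 1 assumed).
        probs = F.softmax(model(frames), dim=1)[:, 1]
    video_prob = probs.mean().item()  # mean-pool over the whole video
    return ("malignant" if video_prob >= threshold else "benign"), video_prob
```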


Subject(s)
Artificial Intelligence; Liver Neoplasms; Humans; Liver Neoplasms/diagnostic imaging; Neural Networks, Computer; Prospective Studies; Sensitivity and Specificity
5.
Gut; 70(7): 1335-1344, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33028668

ABSTRACT

OBJECTIVE: The diagnosis of autoimmune pancreatitis (AIP) is challenging. Sonographic and cross-sectional imaging findings of AIP closely mimic pancreatic ductal adenocarcinoma (PDAC), and techniques for tissue sampling of AIP are suboptimal. These limitations often result in delayed or failed diagnosis, which negatively impacts patient management and outcomes. This study aimed to create an endoscopic ultrasound (EUS)-based convolutional neural network (CNN) model trained to differentiate AIP from PDAC, chronic pancreatitis (CP) and normal pancreas (NP), with sufficient performance to analyse EUS video in real time. DESIGN: A database of still image and video data obtained from EUS examinations of cases of AIP, PDAC, CP and NP was used to develop a CNN. Occlusion heatmap analysis was used to identify sonographic features the CNN valued when differentiating AIP from PDAC. RESULTS: From 583 patients (146 AIP, 292 PDAC, 72 CP and 73 NP), a total of 1,174,461 unique EUS images were extracted. For video data, the CNN processed 955 EUS frames per second and was: 99% sensitive, 98% specific for distinguishing AIP from NP; 94% sensitive, 71% specific for distinguishing AIP from CP; 90% sensitive, 93% specific for distinguishing AIP from PDAC; and 90% sensitive, 85% specific for distinguishing AIP from all studied conditions (ie, PDAC, CP and NP). CONCLUSION: The developed EUS-CNN model accurately differentiated AIP from PDAC and benign pancreatic conditions, thereby offering the capability of earlier and more accurate diagnosis. Use of this model offers the potential for more timely and appropriate patient care and improved outcomes.
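The sensitivities and specificities reported above are one-vs-rest comparisons (eg, AIP vs PDAC) derived from the CNN's class predictions. A minimal sketch of that bookkeeping follows; the class labels mirror the abstract, but the function itself is illustrative rather than the authors' evaluation code.

```python
# One-vs-rest sensitivity/specificity from multi-class predictions (illustrative).
import numpy as np

def sensitivity_specificity(y_true, y_pred, positive="AIP"):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    pos = y_true == positive
    tp = np.sum((y_pred == positive) & pos)    # AIP correctly called AIP
    fn = np.sum((y_pred != positive) & pos)    # AIP missed
    tn = np.sum((y_pred != positive) & ~pos)   # non-AIP correctly rejected
    fp = np.sum((y_pred == positive) & ~pos)   # non-AIP called AIP
    return tp / (tp + fn), tn / (tn + fp)

# Restricting y_true/y_pred to AIP and PDAC cases before calling this function
# yields the AIP-vs-PDAC comparison described in the abstract.
```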


Subject(s)
Autoimmune Pancreatitis/diagnostic imaging; Carcinoma, Pancreatic Ductal/diagnostic imaging; Endosonography; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Pancreatic Neoplasms/diagnostic imaging; Area Under Curve; Diagnosis, Differential; Humans; Machine Learning; Observer Variation; Pancreas/diagnostic imaging; ROC Curve