Results 1 - 5 of 5
1.
Lancet Digit Health; 5(11): e786-e797, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37890902

ABSTRACT

BACKGROUND: Histopathological examination is a crucial step in the diagnosis and treatment of many major diseases. Aiming to facilitate diagnostic decision making and reduce the workload of pathologists, we developed an artificial intelligence (AI)-based prescreening tool that analyses whole-slide images (WSIs) of large-bowel biopsies to identify typical, atypical non-neoplastic, and atypical neoplastic biopsies. METHODS: This retrospective cohort study was conducted with an internal development cohort of slides acquired from a hospital in the UK and three external validation cohorts of WSIs acquired from two hospitals in the UK and one clinical laboratory in Portugal. To learn the differential histological patterns from digitised WSIs of large-bowel biopsy slides, our proposed weakly supervised deep-learning model (Colorectal AI Model for Abnormality Detection [CAIMAN]) used slide-level diagnostic labels and no detailed cell-level or region-level annotations. The method was developed with an internal development cohort of 5054 biopsy slides from 2080 patients, labelled with the corresponding diagnostic categories assigned by pathologists. The three external validation cohorts, with a total of 1536 slides, were used for independent validation of CAIMAN. Each WSI was classified into one of three classes (ie, typical, atypical non-neoplastic, and atypical neoplastic). Prediction scores of image tiles were aggregated into three prediction scores for the whole slide: its likelihood of being typical, its likelihood of being atypical non-neoplastic, and its likelihood of being atypical neoplastic. The assessment of the external validation cohorts was conducted with the trained and frozen CAIMAN model. To evaluate model performance, we calculated the area under the convex hull of the receiver operating characteristic curve (AUROC), the area under the precision-recall curve, and specificity, compared with our previously published iterative draw and rank sampling (IDaRS) algorithm. We also generated heat maps and saliency maps to analyse and visualise the relationship between the WSI diagnostic labels and spatial features of the tissue microenvironment. The main outcome of this study was the ability of CAIMAN to accurately identify typical and atypical WSIs of colon biopsies, which could potentially facilitate the automatic removal of typical biopsies from the diagnostic workload in clinics. FINDINGS: A randomly selected subset of all large-bowel biopsies was obtained between Jan 1, 2012, and Dec 31, 2017. The AI training, validation, and assessments were done between Jan 1, 2021, and Sept 30, 2022. WSIs with diagnostic labels were collected between Jan 1 and Sept 30, 2022. Our analysis showed no statistically significant differences across prediction scores from CAIMAN for typical and atypical classes based on the anatomical site of the biopsy. At 0·99 sensitivity, CAIMAN (specificity 0·5592) was more accurate than an IDaRS-based weakly supervised WSI-classification pipeline (0·4629) in identifying typical and atypical biopsies on cross-validation in the internal development cohort (p<0·0001). At 0·99 sensitivity, CAIMAN was also more accurate than IDaRS for two external validation cohorts (p<0·0001), but not for a third external validation cohort (p=0·10). CAIMAN provided higher specificity than IDaRS at some high-sensitivity thresholds (0·7763 vs 0·6222 at 0·95 sensitivity, 0·7126 vs 0·5407 at 0·97 sensitivity, and 0·5615 vs 0·3970 at 0·99 sensitivity on one of the external validation cohorts) and showed high classification performance in distinguishing between neoplastic biopsies (AUROC 0·9928, 95% CI 0·9927-0·9929), inflammatory biopsies (0·9658, 0·9655-0·9661), and atypical biopsies (0·9789, 0·9786-0·9792). On the three external validation cohorts, CAIMAN had AUROC values of 0·9431 (95% CI 0·9165-0·9697), 0·9576 (0·9568-0·9584), and 0·9636 (0·9615-0·9657) for the detection of atypical biopsies. Saliency maps supported the representation of disease heterogeneity in model predictions and its association with relevant histological features. INTERPRETATION: CAIMAN, with its high sensitivity in detecting atypical large-bowel biopsies, could offer a promising improvement in clinical workflow efficiency and support diagnostic decision making by prescreening typical colorectal biopsies. FUNDING: The Pathology Image Data Lake for Analytics, Knowledge and Education Centre of Excellence; the UK Government's Industrial Strategy Challenge Fund; and Innovate UK on behalf of UK Research and Innovation.
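
The tile-to-slide aggregation and the fixed-sensitivity operating point described in this abstract can be illustrated with a minimal sketch. This is not the authors' CAIMAN implementation; the function names and the simple mean pooling are assumptions for demonstration only.

```python
# Illustrative sketch only; not the published CAIMAN code.
import numpy as np
from sklearn.metrics import roc_curve


def aggregate_tile_scores(tile_probs: np.ndarray) -> np.ndarray:
    """Average per-tile class probabilities (n_tiles x 3, ordered as typical /
    atypical non-neoplastic / atypical neoplastic) into three slide-level scores."""
    return tile_probs.mean(axis=0)


def specificity_at_sensitivity(y_true, atypical_scores, target=0.99):
    """Specificity achieved at the first threshold whose sensitivity
    (true-positive rate for the atypical class) reaches the target."""
    fpr, tpr, _ = roc_curve(y_true, atypical_scores)  # y_true: 1 = atypical
    idx = np.argmax(tpr >= target)                    # first index meeting target
    return 1.0 - fpr[idx]
```

Read this way, the reported specificity of 0·5592 at 0·99 sensitivity corresponds to the fraction of typical slides that could be flagged for removal from review while missing at most 1% of atypical slides.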


Subject(s)
Artificial Intelligence , Colorectal Neoplasms , Humans , Portugal , Retrospective Studies , Biopsy , United Kingdom , Tumor Microenvironment
2.
Mod Pathol; 36(11): 100297, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37544362

ABSTRACT

As digital pathology replaces conventional glass-slide microscopy as a means of reporting cellular pathology samples, the annotation of digital pathology whole-slide images is rapidly becoming part of a pathologist's regular practice. Currently, there is no recognized organization of these annotations, and as a result, pathologists adopt an arbitrary approach to defining regions of interest, leading to irregularity and inconsistency and limiting the efficient downstream use of this valuable effort. In this study, we propose a Standardized Annotation Reporting Style for digital whole-slide images. We formed a list of 167 commonly annotated entities (under 12 specialty subcategories) based on a review of Royal College of Pathologists and College of American Pathologists documents, feedback from reporting pathologists in our NHS department, and experience in developing annotation dictionaries for PathLAKE research projects. Each entity was assigned a suitable annotation shape, a SNOMED CT (SNOMED International) code, and a unique color. Additionally, as an example of how the approach could be expanded to specific tumor types, all lung tumors in the fifth edition of the World Health Organization classification of thoracic tumors (2021) were included. The proposed standardization of annotations increases their utility, making them identifiable at low power and searchable across and between cases. This would aid pathologists in reporting and reviewing cases and enable annotations to be used for research. This structured approach could serve as the basis for an industry standard and be easily adopted to ensure maximum functionality and efficiency in the use of annotations made during routine clinical examination of digital slides.
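
As a minimal sketch of the data structure implied by such a reporting style, one standardized annotation entry could be represented as below. All field values are hypothetical placeholders, not entries from the published list of 167 entities.

```python
# Minimal sketch of a standardized annotation entry; field values are
# hypothetical placeholders, not taken from the published reporting style.
from dataclasses import dataclass


@dataclass(frozen=True)
class AnnotationEntity:
    name: str            # commonly annotated entity, e.g. a tumor type
    subcategory: str     # one of the specialty subcategories
    shape: str           # agreed annotation shape (e.g. "polygon", "point")
    snomed_ct_code: str  # SNOMED CT concept identifier (placeholder here)
    color_hex: str       # unique display color for low-power identification


example_entry = AnnotationEntity(
    name="Example entity",
    subcategory="Example subcategory",
    shape="polygon",
    snomed_ct_code="000000000",   # placeholder, not a real concept ID
    color_hex="#FF8800",
)
```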


Subject(s)
Pathology, Clinical , Pathology, Surgical , Thoracic Neoplasms , Humans , Pathology, Clinical/methods , Pathology, Surgical/methods , Pathologists , Microscopy/methods
3.
Gut; 72(9): 1709-1721, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37173125

ABSTRACT

OBJECTIVE: To develop an interpretable artificial intelligence algorithm to rule out normal large-bowel endoscopic biopsies, saving pathologist resources and helping with early diagnosis. DESIGN: A graph neural network incorporating pathologist domain knowledge was developed to classify 6591 whole-slide images (WSIs) of endoscopic large-bowel biopsies from 3291 patients (approximately 54% female, 46% male) as normal or abnormal (non-neoplastic and neoplastic) using clinically driven interpretable features. One UK National Health Service (NHS) site was used for model training and internal validation. External validation was conducted on data from two other NHS sites and one Portuguese site. RESULTS: Model training and internal validation were performed on 5054 WSIs of 2080 patients, resulting in an area under the receiver operating characteristic curve (AUC-ROC) of 0.98 (SD=0.004) and an area under the precision-recall curve (AUC-PR) of 0.98 (SD=0.003). The performance of the model, named Interpretable Gland-Graphs using a Neural Aggregator (IGUANA), was consistent in testing on 1537 WSIs of 1211 patients from three independent external datasets, with mean AUC-ROC=0.97 (SD=0.007) and AUC-PR=0.97 (SD=0.005). At a high sensitivity threshold of 99%, the proposed model can reduce the number of normal slides to be reviewed by a pathologist by approximately 55%. IGUANA also provides an explainable output highlighting potential abnormalities in a WSI in the form of a heatmap, as well as numerical values associating the model prediction with various histological features. CONCLUSION: The model achieved consistently high accuracy, showing its potential to optimise increasingly scarce pathologist resources. Explainable predictions can guide pathologists in their diagnostic decision making and help boost their confidence in the algorithm, paving the way for its future clinical adoption.
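
The evaluation described above can be sketched as follows. This is not the IGUANA code; the function name and inputs are assumptions, shown only to make the reported metrics concrete.

```python
# Illustrative sketch of the reported evaluation metrics; not the IGUANA code.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve


def evaluate_slide_classifier(y_true, abnormal_scores, target_sensitivity=0.99):
    """y_true: 1 = abnormal slide, 0 = normal slide; abnormal_scores: model scores."""
    auc_roc = roc_auc_score(y_true, abnormal_scores)
    auc_pr = average_precision_score(y_true, abnormal_scores)

    # Fraction of normal slides that could be ruled out while keeping
    # sensitivity for abnormal slides at or above the target.
    fpr, tpr, _ = roc_curve(y_true, abnormal_scores)
    idx = np.argmax(tpr >= target_sensitivity)
    normal_slides_ruled_out = 1.0 - fpr[idx]   # i.e. the specificity
    return auc_roc, auc_pr, normal_slides_ruled_out
```

Under this reading, the reported reduction of roughly 55% is the specificity achieved at the threshold whose sensitivity for abnormal slides is at least 99%.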


Subject(s)
Artificial Intelligence , State Medicine , Humans , Male , Female , Retrospective Studies , Algorithms , Biopsy
4.
J Pathol Clin Res; 8(2): 116-128, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35014198

ABSTRACT

Recent advances in whole-slide imaging (WSI) technology have led to the development of a myriad of computer vision and artificial intelligence-based diagnostic, prognostic, and predictive algorithms. Computational Pathology (CPath) offers an integrated solution to utilise information embedded in pathology WSIs beyond what can be obtained through visual assessment. For automated analysis of WSIs and validation of machine learning (ML) models, annotations at the slide, tissue, and cellular levels are required. The annotation of important visual constructs in pathology images is therefore a key component of CPath projects. Improper annotations can result in algorithms that are hard to interpret and can potentially produce inaccurate and inconsistent results. Despite the crucial role of annotations in CPath projects, there are no well-defined guidelines or best practices on how annotations should be carried out. In this paper, we address this shortcoming by presenting the experience and best practices acquired during the execution of a large-scale annotation exercise involving a multidisciplinary team of pathologists, ML experts, and researchers as part of the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) consortium. We present a real-world case study along with examples of different types of annotations, a diagnostic algorithm, an annotation data dictionary, and annotation constructs. The analyses reported in this work highlight best-practice recommendations that can be used as annotation guidelines over the lifecycle of a CPath project.


Subject(s)
Artificial Intelligence , Semantics , Algorithms , Humans , Pathologists
5.
Article in English | MEDLINE | ID: mdl-29930989

ABSTRACT

BACKGROUND: Low- and middle-income countries (LMICs) face severe resource limitations yet bear the highest burden of disease. There is a growing evidence base on effective and cost-effective interventions for these diseases. However, questions remain about the most cost-effective method of delivering these interventions. We aimed to review the scope, quality, and findings of economic evaluations of service delivery interventions in LMICs. METHODS: We searched PubMed, MEDLINE, EconLit, and NHS EED for studies published between 1st January 2000 and 30th October 2016, with no language restrictions. We included all economic evaluations that reported incremental costs and benefits, or summary measures of the two such as an incremental cost-effectiveness ratio (ICER). Studies were grouped by both disease area and outcome measure, and permutation plots were completed for similar interventions. Quality was assessed using the Drummond checklist. RESULTS: Overall, 3818 potentially relevant abstracts were identified, of which 101 studies were selected for full-text review. Thirty-seven studies were included in the final review. Twenty-three studies reported on interventions we classed as "changing by whom and where care was provided", specifically interventions that entailed task-shifting from doctors to nurses or community health workers, or from facilities into the community. The evidence suggests this type of intervention is likely to be cost-effective or cost-saving. Nine studies reported on quality improvement initiatives, which were generally found to be cost-effective. Quality and methods differed widely, limiting the comparability of the studies and their findings. CONCLUSIONS: There is significant heterogeneity in the literature, both in methods and in quality. This renders further comparisons difficult and limits the utility of the available evidence to decision makers.
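
For readers unfamiliar with the summary measure mentioned above, the incremental cost-effectiveness ratio compares the cost and effect differences between an intervention and its comparator. The sketch below uses purely hypothetical figures to show the arithmetic; none of the numbers come from the review.

```python
# Hypothetical illustration of an incremental cost-effectiveness ratio (ICER);
# the figures are made up for demonstration and do not come from the review.
def icer(cost_new, cost_old, effect_new, effect_old):
    """ICER = (C_new - C_old) / (E_new - E_old), e.g. cost per DALY averted."""
    return (cost_new - cost_old) / (effect_new - effect_old)


# Example: a task-shifted delivery model costs $12,000 vs $15,000 for standard
# delivery, and averts 105 vs 100 DALYs (hypothetical values).
example = icer(cost_new=12_000, cost_old=15_000, effect_new=105, effect_old=100)
# A negative ICER with a greater effect means the new delivery model is both
# cheaper and more effective, i.e. cost-saving (it "dominates" the comparator).
```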
