1.
Article in English | MEDLINE | ID: mdl-38557617

ABSTRACT

Histological images are frequently impaired by local artifacts from scanner malfunctions or iatrogenic processes caused by specimen preparation, impacting the performance of deep learning models. Models often struggle with even slight out-of-distribution shifts, resulting in compromised performance. Detecting artifacts and failure modes of these models is crucial to ensure open-world applicability to whole slide images for tasks such as segmentation or diagnosis. We introduce a novel technique for out-of-distribution detection within whole slide images, compatible with any segmentation or classification model. Our approach tiles multi-layer features into sliding-window patches and leverages optimal transport to align them with recognized in-distribution samples. We average the optimal transport costs over tiles and layers to detect out-of-distribution samples. Notably, our method excels at identifying failure modes that would harm downstream performance, surpassing contemporary out-of-distribution detection techniques. We evaluate our method on both natural and synthetic artifacts, considering distribution shifts of various sizes and types. The results confirm that our technique outperforms alternative methods for artifact detection. We assess our method's components and its ability to negate the impact of artifacts on downstream tasks. Finally, we demonstrate that our method can mitigate the risk of performance drops in downstream tasks, enhancing reliability by up to 77%. In testing on 7 annotated whole slide images with natural artifacts, our method boosted the Dice score by 68%, highlighting its real open-world utility.
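The tile-and-transport scoring described in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: for equally sized patch sets with uniform weights, discrete optimal transport reduces to a linear assignment problem, and function names here are our own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_cost(patches_a, patches_b):
    """Optimal-transport cost between two equally sized feature-patch
    sets of shape (n, d), with uniform weights."""
    # Pairwise Euclidean cost matrix between the two patch sets.
    cost = np.linalg.norm(
        patches_a[:, None, :] - patches_b[None, :, :], axis=-1)
    # With uniform weights and equal set sizes, optimal transport
    # reduces to the minimum-cost linear assignment problem.
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def ood_score(tiles, reference_tiles):
    """Average the per-tile transport cost against recognized
    in-distribution reference tiles; higher means more OOD."""
    return float(np.mean(
        [ot_cost(t, r) for t, r in zip(tiles, reference_tiles)]))
```

In the abstract's scheme, the averaging would additionally run over the layers of the feature extractor; here a single list of tiles stands in for that loop.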

2.
Rofo ; 196(2): 154-162, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37582385

ABSTRACT

BACKGROUND: In recent years, AI has made significant advances in medical diagnosis and prognosis. However, the incorporation of AI into clinical practice is still challenging and under-appreciated. We aim to demonstrate a possible vertical integration approach to close the loop for AI-ready radiology. METHOD: This study highlights the importance of two-way communication for AI-assisted radiology. As a key part of the methodology, it demonstrates the integration of AI systems into clinical practice with structured reports and AI visualization, giving more insight into the AI system. By integrating cooperative lifelong learning into the AI system, we ensure its long-term effectiveness while keeping the radiologist in the loop. RESULTS: We demonstrate the use of lifelong learning for AI systems by incorporating AI visualization and structured reports. We evaluate the Memory Aware Synapses and rehearsal approaches and find that both work in practice. Furthermore, we see the advantage of lifelong learning algorithms that do not require storing or maintaining samples from previous datasets. CONCLUSION: Incorporating AI into the clinical routine of radiology requires a two-way communication approach and seamless integration of the AI system, which we achieve with structured reports and visualization of the insight gained by the model. Closing the loop for radiology leads to successful integration, enabling lifelong learning for the AI system, which is crucial for sustainable long-term performance. KEY POINTS: · Integration of AI systems into the clinical routine is achieved with structured reports and AI visualization. · Two-way communication between AI and radiologists is necessary to keep the radiologist in the loop. · Closing the loop enables lifelong learning, which is crucial for long-term, high-performing AI in radiology.
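Memory Aware Synapses, one of the two lifelong-learning approaches evaluated here, estimates how important each parameter is to the model's output and penalises changes to important parameters when training on a new task. A minimal numpy sketch for a linear model (an illustrative simplification with our own function names, not the paper's implementation):

```python
import numpy as np

def mas_importance(W, X):
    """MAS importance: the absolute gradient of the squared L2 norm of
    the model output w.r.t. each weight, averaged over unlabeled samples.
    For a linear model f(x) = W @ x, d||Wx||^2 / dW = 2 (Wx) x^T."""
    omega = np.zeros_like(W)
    for x in X:
        omega += np.abs(2.0 * np.outer(W @ x, x))
    return omega / len(X)

def mas_penalty(W, W_old, omega, lam=1.0):
    """Regulariser added to the new task's loss: weights deemed important
    for the old task are anchored to their previous values."""
    return lam * np.sum(omega * (W - W_old) ** 2)
```

The practical appeal noted in the abstract is visible here: the penalty needs only the old weights and the importance matrix, not stored samples from previous datasets.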


Subject(s)
Artificial Intelligence , Radiology , Humans , Radiology/methods , Algorithms , Radiologists , Radiography
3.
Mod Pathol ; 36(12): 100327, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37683932

ABSTRACT

Digital pathology adoption allows for applying computational algorithms to routine pathology tasks. Our study aimed to develop a clinical-grade artificial intelligence (AI) tool for precise multiclass tissue segmentation in colorectal specimens (resections and biopsies) and to clinically validate the tool for tumor detection in biopsy specimens. The training data set included 241 precisely manually annotated whole-slide images (WSIs) from multiple institutions. The algorithm was trained for semantic segmentation of 11 tissue classes, with an additional module for biopsy WSI classification. Six case cohorts from 5 pathology departments (4 countries), digitized by 4 different scanning systems, were used for formal and clinical validation. The developed algorithm showed high precision of segmentation of different tissue classes in colorectal specimens, with a composite multiclass Dice score of up to 0.895 and pixel-wise tumor detection specificity and sensitivity of up to 0.958 and 0.987, respectively. In the clinical validation study on multiple external cohorts, the AI tool reached a sensitivity of 1.0 and a specificity of up to 0.969 for tumor detection in biopsy WSIs. The AI tool analyzes most biopsy cases in less than 1 minute, allowing effective integration into the clinical routine. We developed and extensively validated a highly accurate, clinical-grade tool for assistive diagnostic processing of colorectal specimens. This tool allows for quantitative deciphering of colorectal cancer tissue for the development of prognostic and predictive biomarkers and the personalization of oncologic care. This study is a foundation for the SemiCOL computational challenge. We open-source multiple manually annotated and weakly labeled test data sets, representing a significant contribution to the colorectal cancer computational pathology field.
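A composite multiclass Dice score of the kind reported here is commonly computed as the mean of per-class Dice coefficients over the segmentation masks; a minimal sketch (the averaging scheme and function names are assumptions, since the abstract does not spell them out):

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: empty prediction and target count as perfect agreement.
    return 1.0 if denom == 0 else 2.0 * inter / denom

def multiclass_dice(pred, target, n_classes):
    """Composite multiclass Dice: mean of per-class Dice scores over
    integer-labeled segmentation maps."""
    return float(np.mean(
        [dice(pred == c, target == c) for c in range(n_classes)]))
```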


Subject(s)
Artificial Intelligence , Colorectal Neoplasms , Humans , Algorithms , Biopsy , Medical Oncology , Radiopharmaceuticals , Colorectal Neoplasms/diagnosis
4.
Int J Comput Assist Radiol Surg ; 18(7): 1217-1224, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37219806

ABSTRACT

PURPOSE: Image-to-image translation methods can address the lack of diversity in publicly available cataract surgery data. However, applying image-to-image translation to videos, which are frequently used in medical downstream applications, induces artifacts. Additional spatio-temporal constraints are needed to produce realistic translations and improve the temporal consistency of translated image sequences. METHODS: We introduce a motion-translation module that translates optical flows between domains to impose such constraints. We combine it with a shared latent space translation model to improve image quality. Evaluations are conducted regarding translated sequences' image quality and temporal consistency, for which we propose novel quantitative metrics. Finally, the downstream task of surgical phase classification is evaluated when retrained with additional synthetic translated data. RESULTS: Our proposed method produces more consistent translations than state-of-the-art baselines. Moreover, it stays competitive in terms of per-image translation quality. We further show the benefit of consistently translated cataract surgery sequences for improving the downstream task of surgical phase prediction. CONCLUSION: The proposed module increases the temporal consistency of translated sequences. Furthermore, the imposed temporal constraints increase the usability of translated data in downstream tasks. This allows overcoming some of the hurdles of surgical data acquisition and annotation and enables improving model performance by translating between existing datasets of sequential frames.
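The abstract does not define its temporal-consistency metrics, but a common formulation in this area, shown here only as a hedged illustration, measures the warping error: consecutive translated frames should agree once aligned by the optical flow between them (nearest-neighbour warping and single-channel frames are simplifications for brevity):

```python
import numpy as np

def backward_warp(next_frame, flow):
    """Warp the next frame onto the current frame's grid using the
    forward flow (dy, dx) from current to next frame; nearest-neighbour
    sampling for brevity."""
    H, W = next_frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return next_frame[src_y, src_x]

def temporal_consistency_error(frames, flows):
    """Mean absolute warping error over a sequence: frame t+1, warped
    back by the flow t -> t+1, should match frame t. Lower values mean
    a more temporally consistent translation."""
    errors = [np.abs(backward_warp(frames[t + 1], flows[t]) - frames[t]).mean()
              for t in range(len(frames) - 1)]
    return float(np.mean(errors))
```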


Subject(s)
Cataract Extraction , Cataract , Humans , Artifacts , Benchmarking , Motion , Image Processing, Computer-Assisted
5.
Lancet Digit Health ; 5(5): e265-e275, 2023 05.
Article in English | MEDLINE | ID: mdl-37100542

ABSTRACT

BACKGROUND: Oesophageal adenocarcinoma and adenocarcinoma of the oesophagogastric junction are among the most common malignant epithelial tumours. Most patients receive neoadjuvant therapy before complete tumour resection. Histological assessment after resection includes identification of residual tumour tissue and areas of regressive tumour, data which are used to calculate a clinically relevant regression score. We developed an artificial intelligence (AI) algorithm for tumour tissue detection and tumour regression grading in surgical specimens from patients with oesophageal adenocarcinoma or adenocarcinoma of the oesophagogastric junction. METHODS: We used one training cohort and four independent test cohorts to develop, train, and validate a deep learning tool. The material consisted of histological slides from surgically resected specimens from patients with oesophageal adenocarcinoma and adenocarcinoma of the oesophagogastric junction from three pathology institutes (two in Germany, one in Austria) and the oesophageal cancer cohort of The Cancer Genome Atlas (TCGA). All slides were from neoadjuvantly treated patients except for those from the TCGA cohort, who were neoadjuvant-therapy naive. Data from the training cohort and test cohort cases were extensively manually annotated for 11 tissue classes. A convolutional neural network was trained on the data using a supervised principle. First, the tool was formally validated using manually annotated test datasets. Next, tumour regression grading was assessed in a retrospective cohort of post-neoadjuvant therapy surgical specimens. The grading of the algorithm was compared with that of a group of 12 board-certified pathologists from one department. To further validate the tool, three pathologists processed whole resection cases with and without AI assistance.
FINDINGS: Of the four test cohorts, one included 22 manually annotated histological slides (n=20 patients), one included 62 slides (n=15), one included 214 slides (n=69), and the final one included 22 manually annotated histological slides (n=22). In the independent test cohorts, the AI tool had high patch-level accuracy for identifying both tumour and regression tissue. When we validated the concordance of the AI tool against analyses by a group of pathologists (n=12), agreement was 63·6% (quadratic kappa 0·749; p<0·0001) at case level. The AI-based regression grading triggered true reclassification of resected tumour slides in seven cases (including six cases with small tumour regions that were initially missed by pathologists). Use of the AI tool by three pathologists increased interobserver agreement and substantially reduced diagnostic time per case compared with working without AI assistance. INTERPRETATION: Use of our AI tool in the diagnostics of oesophageal adenocarcinoma resection specimens by pathologists increased diagnostic accuracy and interobserver concordance and significantly reduced assessment time. Prospective validation of the tool is required. FUNDING: North Rhine-Westphalia state, Federal Ministry of Education and Research of Germany, and the Wilhelm Sander Foundation.
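The "quadratic kappa 0·749" reported for case-level agreement is quadratically weighted Cohen's kappa, which penalises disagreements between ordinal grades by the squared distance between them. A self-contained sketch (function name is our own):

```python
import numpy as np

def quadratic_kappa(a, b, n_classes):
    """Quadratically weighted Cohen's kappa between two raters'
    ordinal labels (e.g. regression grades)."""
    a, b = np.asarray(a), np.asarray(b)
    # Observed agreement matrix, normalised to frequencies.
    O = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()
    # Expected agreement under independent raters (outer product of marginals).
    E = np.outer(np.bincount(a, minlength=n_classes),
                 np.bincount(b, minlength=n_classes)) / (len(a) ** 2)
    # Quadratic disagreement weights: squared grade distance, normalised.
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```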


Subject(s)
Adenocarcinoma , Esophageal Neoplasms , Humans , Artificial Intelligence , Retrospective Studies , Esophageal Neoplasms/diagnosis , Esophageal Neoplasms/pathology , Esophageal Neoplasms/surgery , Algorithms , Adenocarcinoma/diagnosis , Adenocarcinoma/pathology , Adenocarcinoma/surgery
6.
Med Image Anal ; 82: 102596, 2022 11.
Article in English | MEDLINE | ID: mdl-36084564

ABSTRACT

Automatic segmentation of ground glass opacities and consolidations in chest computed tomography (CT) scans can potentially ease the burden on radiologists during times of high resource utilisation. However, deep learning models are not trusted in the clinical routine because they fail silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that leverages the Mahalanobis distance in the feature space and seamlessly integrates into state-of-the-art segmentation pipelines. The simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications, namely segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects far- and near-OOD samples across all explored scenarios.
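Mahalanobis-distance OOD scoring in feature space can be sketched as follows: fit a Gaussian to in-distribution feature vectors, then score new samples by their distance to it. This is an illustrative, generic version operating on plain vectors, not the paper's segmentation-pipeline integration; class and parameter names are assumptions.

```python
import numpy as np

class MahalanobisOOD:
    """Fit a Gaussian to in-distribution feature vectors; score new
    samples by their Mahalanobis distance to that Gaussian."""

    def fit(self, feats):
        # feats: (n_samples, d) in-distribution feature vectors.
        self.mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False)
        # A small ridge keeps the covariance invertible when features
        # are correlated or samples are scarce.
        self.prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def score(self, x):
        """Higher score = further from the in-distribution Gaussian."""
        d = x - self.mu
        return float(np.sqrt(d @ self.prec @ d))
```

In the pipeline described by the abstract, `feats` would be pooled activations from the segmentation network's encoder, so the detector adds negligible cost on top of a pre-trained model.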


Subject(s)
COVID-19 , Lung Diseases , Humans , Male , Tomography, X-Ray Computed/methods , Magnetic Resonance Imaging , Lung/diagnostic imaging