Results 1 - 3 of 3
1.
Diagn Interv Radiol; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38682670

ABSTRACT

The rapid evolution of artificial intelligence (AI), particularly in deep learning, has significantly impacted radiology, introducing an array of AI solutions for interpretative tasks. This paper provides radiology departments with a practical guide for selecting and integrating AI solutions, focusing on interpretative tasks that require the active involvement of radiologists. Our approach is not to list available applications or review scientific evidence, as this information is readily available in previous studies; instead, we concentrate on the essential factors radiology departments must consider when choosing AI solutions. These factors include clinical relevance, performance and validation, implementation and integration, clinical usability, costs and return on investment, and regulations, security, and privacy. We illustrate each factor with hypothetical scenarios to provide a clearer understanding and practical relevance. Through our experience and literature review, we provide insights and a practical roadmap for radiologists to navigate the complex landscape of AI in radiology. We aim to assist in making informed decisions that enhance diagnostic precision, improve patient outcomes, and streamline workflows, thus contributing to the advancement of radiological practices and patient care.

2.
Eur J Radiol; 173: 111356, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38364587

ABSTRACT

BACKGROUND: Explainable Artificial Intelligence (XAI) is prominent in the diagnostics of opaque deep learning (DL) models, especially in medical imaging. Saliency methods are commonly used, yet quantitative evidence of their performance is scarce. OBJECTIVES: To quantitatively evaluate the performance of widely used saliency XAI methods in the task of breast cancer detection on mammograms. METHODS: Three radiologists drew ground-truth boxes on a balanced mammogram dataset (n = 1,496 cancer-positive and 1,496 cancer-negative scans) from three centers. A modified, pre-trained DL model was employed for breast cancer detection using MLO and CC images. Saliency XAI methods, including Gradient-weighted Class Activation Mapping (Grad-CAM), Grad-CAM++, and Eigen-CAM, were evaluated. We used the Pointing Game to assess these methods: a case counts as a hit if the maximum value of the saliency map falls within a ground-truth bounding box, and the score is the ratio of correctly localized lesions among all cancer patients, ranging from 0 to 1. RESULTS: The development sample included 2,244 women (75%), with the remaining 748 women (25%) in the testing set for unbiased XAI evaluation. The model's recall, precision, accuracy, and F1-score in identifying cancer in the testing set were 69%, 88%, 80%, and 0.77, respectively. The Pointing Game scores for Grad-CAM, Grad-CAM++, and Eigen-CAM were 0.41, 0.30, and 0.35 in women with cancer, and increased marginally to 0.41, 0.31, and 0.36 when considering only true-positive samples. CONCLUSIONS: While saliency-based methods provide some degree of explainability, in a considerable number of instances they fall short of delineating how DL models arrive at their decisions.
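The Pointing Game metric described above can be sketched as follows. This is a minimal illustration, not the authors' code; the box format (x0, y0, x1, y1) in pixel coordinates and the function names are assumptions.

```python
import numpy as np

def pointing_game_hit(saliency, boxes):
    """Return True if the saliency map's maximum falls inside any
    ground-truth bounding box (x0, y0, x1, y1); x is column, y is row."""
    row, col = np.unravel_index(np.argmax(saliency), saliency.shape)
    return any(x0 <= col <= x1 and y0 <= row <= y1
               for x0, y0, x1, y1 in boxes)

def pointing_game_score(saliency_maps, boxes_per_image):
    """Fraction of cancer cases whose saliency maximum hits a lesion box."""
    hits = sum(pointing_game_hit(s, b)
               for s, b in zip(saliency_maps, boxes_per_image))
    return hits / len(saliency_maps)
```

A score of 0.41 for Grad-CAM thus means the saliency peak landed inside a radiologist-drawn lesion box in 41% of cancer cases.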


Subject(s)
Breast Neoplasms , Deep Learning , Humans , Female , Artificial Intelligence , Mammography , Mental Recall , Breast Neoplasms/diagnostic imaging
3.
Cancers (Basel); 15(16), 2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37627043

ABSTRACT

Machine learning (ML) models have become capable of making critical decisions on our behalf. Nevertheless, due to the complexity of these models, interpreting their decisions can be challenging, and humans cannot always control them. This paper provides explanations of decisions made by ML models in diagnosing four types of posterior fossa tumors: medulloblastoma, ependymoma, pilocytic astrocytoma, and brainstem glioma. The proposed methodology involves analyzing individual MRI features with Gaussian kernel density estimates, examining the relationships between these features, and performing a comprehensive analysis of ML model behavior. This approach offers a simple yet informative and reliable means of identifying and validating distinguishable MRI features for the diagnosis of pediatric brain tumors. By presenting, in a single source, a comprehensive analysis of how the four pediatric tumor types relate to each other and to ML models, this study aims to bridge the knowledge gap in the existing literature concerning the relationship between ML outputs and medical outcomes. The results highlight that, in the absence of very large datasets, a simple approach leads to markedly more pronounced and explainable outcomes, as expected. The study also demonstrates that the pre-analysis results consistently align with the outputs of the ML models and with the clinical findings reported in the existing literature.
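The per-feature kernel density analysis described above can be sketched as below. The feature values are synthetic placeholders (the study's data are not reproduced here), and the class means are illustrative assumptions; only the technique, a Gaussian KDE per tumor class evaluated on a common grid, reflects the abstract.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical values of a single MRI feature per tumor class;
# real distributions would come from the study's dataset.
rng = np.random.default_rng(0)
feature_by_class = {
    "medulloblastoma":       rng.normal(0.6, 0.10, 40),
    "ependymoma":            rng.normal(1.0, 0.15, 40),
    "pilocytic_astrocytoma": rng.normal(1.5, 0.20, 40),
    "brainstem_glioma":      rng.normal(1.1, 0.20, 40),
}

# Evaluate each class's Gaussian KDE on a shared grid of feature values.
grid = np.linspace(0.0, 2.5, 200)
densities = {name: gaussian_kde(vals)(grid)
             for name, vals in feature_by_class.items()}

# A feature distinguishes a class well when its density peak overlaps
# little with the other classes' densities.
for name, dens in densities.items():
    print(f"{name}: density peak at feature value {grid[np.argmax(dens)]:.2f}")
```

Comparing where the per-class density curves peak and how much they overlap is one simple way to judge, before any model training, whether a feature can separate the tumor types.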
