Results 1 - 2 of 2
1.
J Med Syst ; 46(4): 20, 2022 Mar 05.
Article in English | MEDLINE | ID: mdl-35249179

ABSTRACT

Adoption of Artificial Intelligence (AI) algorithms into the clinical realm will depend on their inherent trustworthiness, which is built not only by robust validation studies but is also deeply linked to the explainability and interpretability of the algorithms. Most validation studies for medical imaging AI report the performance of algorithms on study-level labels and place little emphasis on measuring the accuracy of the explanations these algorithms generate in the form of heat maps or bounding boxes, especially in true-positive cases. We propose a new metric, the Explainability Failure Ratio (EFR), derived from Clinical Explainability Failure (CEF), to address this gap in AI evaluation. We define an Explainability Failure as a case where the classification generated by an AI algorithm matches the study-level ground truth but the explanation output generated by the algorithm is inadequate to explain the algorithm's output. We measured EFR for two algorithms that automatically detect consolidation on chest X-rays to determine the applicability of the metric, and observed a lower EFR for the model that had lower sensitivity for identifying consolidation on chest X-rays, implying that the trustworthiness of a model should be determined not only by routine statistical metrics but also by novel 'clinically oriented' metrics.
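A minimal sketch of how the EFR described above could be computed, assuming per-case records that carry a study-level prediction, a ground-truth label, and an overlap score between the model's localisation output and a radiologist-marked region; the field names and the overlap threshold are illustrative assumptions, not taken from the paper:

    # Explainability Failure Ratio (EFR): among true-positive cases, the
    # fraction whose explanation output (heat map or bounding box) fails
    # to justify the prediction.
    def explainability_failure_ratio(cases, overlap_threshold=0.5):
        # cases: iterable of dicts with boolean 'prediction' and
        # 'ground_truth' labels and a float 'explanation_overlap' in [0, 1]
        true_positives = [c for c in cases
                          if c["prediction"] and c["ground_truth"]]
        if not true_positives:
            return 0.0
        failures = [c for c in true_positives
                    if c["explanation_overlap"] < overlap_threshold]
        return len(failures) / len(true_positives)

Under this reading, a lower EFR means the model's explanations more often agree with the clinical evidence for its correct positive calls.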


Subjects
Algorithms; Artificial Intelligence; Diagnostic Imaging; Humans; Radiography
2.
Sci Rep ; 11(1): 23210, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34853342

ABSTRACT

The SARS-CoV-2 pandemic exposed the limitations of artificial intelligence-based medical imaging systems. Earlier in the pandemic, the absence of sufficient training data prevented effective deep learning (DL) solutions for the diagnosis of COVID-19 from X-ray data. Here, addressing the lacunae in the existing literature and the paucity of initial training data, we describe CovBaseAI, an explainable tool that uses an ensemble of three DL models and an expert decision system (EDS) for COVID-Pneumonia diagnosis, trained entirely on pre-COVID-19 datasets. The performance and explainability of CovBaseAI were validated on two independent datasets: first, 1401 chest X-rays (CXR) randomly selected from an Indian quarantine center, to assess effectiveness in excluding radiological COVID-Pneumonia requiring higher care; second, a curated dataset of 434 RT-PCR-positive cases and 471 non-COVID/normal historical scans, to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% on the quarantine-center data. However, sensitivity ranged from 0.66 to 0.90 depending on whether RT-PCR results or radiologist opinion were taken as ground truth. This work provides new insights into the use of an EDS with DL methods and the ability of such algorithms to confidently predict COVID-Pneumonia, while reinforcing the established finding that benchmarking against RT-PCR may not serve as reliable ground truth in radiological diagnosis. Such tools can pave the way for multi-modal, high-throughput detection of COVID-Pneumonia in screening and referral.
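A minimal sketch of the ensemble-plus-expert-decision-system pattern the abstract describes: scores from three DL models are pooled, then a rule-based EDS confirms or vetoes the pooled vote. The pooling rule, the threshold, and the rule set are illustrative assumptions; the paper's actual models and rules are not reproduced here:

    from statistics import mean

    def covbaseai_style_decision(model_scores, findings, threshold=0.5):
        # model_scores: three probabilities in [0, 1], one per DL model
        # findings: dict of expert-rule inputs, e.g.
        #   {"consolidation": True, "bilateral_involvement": False}
        pooled = mean(model_scores)  # simple averaging ensemble
        if pooled < threshold:
            return "COVID-Pneumonia unlikely"
        # EDS layer: require radiological support before a positive call
        if findings.get("consolidation") or findings.get("bilateral_involvement"):
            return "COVID-Pneumonia likely"
        return "Indeterminate; refer for review"

The design point is that the rule-based layer encodes domain knowledge that does not depend on COVID-era training data, which is how a tool trained entirely on pre-COVID-19 datasets can still arbitrate COVID-Pneumonia calls.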


Subjects
COVID-19/complications; Deep Learning; Expert Systems; Image Processing, Computer-Assisted/methods; Pneumonia/diagnosis; Radiography, Thoracic/methods; Tomography, X-Ray Computed/methods; Algorithms; COVID-19/virology; Humans; Incidence; India/epidemiology; Neural Networks, Computer; Pneumonia/diagnostic imaging; Pneumonia/epidemiology; Pneumonia/virology; Retrospective Studies; SARS-CoV-2/isolation & purification