A systematic review on the use of explainability in deep learning systems for computer aided diagnosis in radiology: Limited use of explainable AI?
Groen, Arjan M; Kraan, Rik; Amirkhan, Shahira F; Daams, Joost G; Maas, Mario.
  • Groen AM; Department of Radiology and Nuclear Medicine, Amsterdam Movement Sciences, Amsterdam UMC Location AMC, Amsterdam, Netherlands. Electronic address: a.m.groen@amsterdamumc.nl.
  • Kraan R; Department of Radiology and Nuclear Medicine, Amsterdam Movement Sciences, Amsterdam UMC Location AMC, Amsterdam, Netherlands.
  • Amirkhan SF; Department of Radiology and Nuclear Medicine, Amsterdam Movement Sciences, Amsterdam UMC Location AMC, Amsterdam, Netherlands.
  • Daams JG; Medical Library, Amsterdam UMC Location AMC, Amsterdam, Netherlands.
  • Maas M; Department of Radiology and Nuclear Medicine, Amsterdam Movement Sciences, Amsterdam UMC Location AMC, Amsterdam, Netherlands.
Eur J Radiol; 157: 110592, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2261340
ABSTRACT

OBJECTIVES:

This study aims to contribute to an understanding of the explainability of computer-aided diagnosis studies in radiology that use end-to-end deep learning, by providing a quantitative overview of their methodological choices and by discussing the implications of these choices for explainability.

METHODS:

A systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Primary diagnostic test accuracy studies using end-to-end deep learning in radiology, published between January 1, 2016 and January 20, 2021, were identified. Results were synthesized by identifying the explanation goals, evaluation measures, and explainable AI techniques used.

RESULTS:

This study identified 490 primary diagnostic test accuracy studies using end-to-end deep learning in radiology, of which 179 (37%) used explainable AI. In 147 of these 179 studies (82%), explainable AI was used for the goal of model visualization and inspection. Class activation mapping was the most common technique, used in 117 of the 179 studies (65%). Only one study used measures to evaluate the outcome of its explainable AI.
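
For readers unfamiliar with the dominant technique, the following is a minimal, illustrative sketch of gradient-weighted class activation mapping (Grad-CAM, a widely used variant of class activation mapping) in PyTorch. The model, target layer, and input are stand-in assumptions for illustration only, not drawn from any reviewed study.

# Minimal Grad-CAM sketch (PyTorch). Model choice, target layer, and input
# are illustrative assumptions; a real study would use its own trained network.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in; in practice, a trained model

feats, grads = {}, {}

def save_feats(module, inp, out):
    feats["a"] = out.detach()

def save_grads(module, grad_in, grad_out):
    grads["a"] = grad_out[0].detach()

# Hook the last convolutional block (an assumed but typical layer choice).
model.layer4.register_forward_hook(save_feats)
model.layer4.register_full_backward_hook(save_grads)

x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed radiograph
logits = model(x)
logits[0, logits.argmax()].backward()          # backpropagate from the top class score

# Weight each feature map by its spatially averaged gradient, sum, and rectify.
w = grads["a"].mean(dim=(2, 3), keepdim=True)  # (1, C, 1, 1)
cam = F.relu((w * feats["a"]).sum(dim=1))      # (1, 7, 7) for resnet18/layer4
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
# `cam` can now be overlaid on the input image as a saliency heat map.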

CONCLUSIONS:

A considerable portion of computer-aided diagnosis studies provide a form of explainability for their deep learning models for the purpose of model visualization and inspection. The techniques these studies commonly choose (class activation mapping, feature activation mapping, and t-distributed stochastic neighbor embedding) have potential limitations. Because researchers generally do not measure the quality of their explanations, we cannot assess how effective these explanations are at addressing the black-box nature of deep learning in radiology.
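
As a companion illustration of the other technique named above, here is a minimal t-SNE sketch using scikit-learn; the features and labels are randomly generated stand-ins, not data from any reviewed study.

# Minimal t-SNE sketch (scikit-learn). The features and labels are random
# stand-ins for penultimate-layer CNN activations and diagnostic labels.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))  # stand-in feature vectors
labels = rng.integers(0, 2, size=200)   # stand-in binary diagnoses

# Project the high-dimensional features to 2-D for visual inspection.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
# Plotting `embedding` colored by `labels` shows whether the learned feature
# space separates the classes -- the model-inspection use reported in the review.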

Full text: Available
Collection: International databases
Database: MEDLINE
Main subject: Radiology / Deep Learning
Type of study: Experimental Studies / Prognostic study / Reviews / Systematic review/Meta Analysis
Limits: Humans
Language: English
Journal: Eur J Radiol
Year: 2022
Document Type: Article
