Explainable Deep Learning for Medical Imaging Models Through Class Specific Semantic Dictionaries
2022 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2247150
ABSTRACT
Explainability is important in the design and deployment of neural networks. It allows engineers to design better models and can give end-users an improved understanding of the outputs. However, many explainability methods are unsuited to the domain of medical imaging. Saliency mapping methods only describe which regions of an input image contributed to the output, but do not explain the important visual features within those regions. Feature visualisation methods have not yet been useful in the domain of medical imaging because the visual complexity of the images generally results in uninterpretable features. In this work, we propose a novel explainability technique called 'Class Specific Semantic Dictionaries'. This extends saliency mapping and feature visualisation methods to enable the analysis of neural network decision-making in the context of medical image diagnosis. By utilising gradient information from the fully connected layers, our approach gives insight into the channels deemed important by the network for the diagnosis of each particular disease. The important channels for a class are contextualised by showing the highly activating examples from the training data, providing an understanding of the learned features through example. The explainability techniques are combined into a single User Interface (UI) to streamline the evaluation of neural networks. To demonstrate how our new method overcomes the explainability challenges of medical imaging models, we analyse COVID-Net, an open-source convolutional neural network for diagnosing COVID-19 from chest X-rays. We present evidence that, despite achieving 96.3% accuracy on the test data, COVID-Net uses confounding variables not indicative of underlying disease to discriminate between COVID-positive and COVID-negative patients and may not generalise well to new data. © 2022 IEEE.
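
The record does not include the authors' code. The following is a minimal sketch of the general idea described in the abstract: using gradients of a class logit to rank the convolutional channels a network relies on for that class. The model (a ResNet-18 stand-in for COVID-Net), the hooked layer, and the Grad-CAM-style channel-weighting formula are illustrative assumptions, not the authors' exact method.

# Sketch: per-class channel importance from gradient information.
# Assumptions: a two-class chest X-ray classifier, ranking channels of the
# final convolutional block by gradient x mean activation (Grad-CAM-style).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # stand-in for COVID-Net (assumption)
model.fc = nn.Linear(model.fc.in_features, 2)  # COVID-positive / COVID-negative
model.eval()

features, grads = {}, {}

def save_features(module, inputs, output):
    features["last_conv"] = output

def save_grads(module, grad_input, grad_output):
    grads["last_conv"] = grad_output[0]

# Hook the final convolutional block feeding the fully connected classifier.
model.layer4.register_forward_hook(save_features)
model.layer4.register_full_backward_hook(save_grads)

def class_channel_importance(x: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Per-channel importance score for one class on one image.

    Score = spatially averaged gradient of the class logit w.r.t. each
    channel, weighted by that channel's mean activation (a heuristic, not
    necessarily the paper's exact formulation).
    """
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()
    act = features["last_conv"][0]   # (C, H, W)
    grad = grads["last_conv"][0]     # (C, H, W)
    return grad.mean(dim=(1, 2)) * act.mean(dim=(1, 2))  # (C,)

# Example: rank the channels the network relies on for the "COVID-positive" class.
x = torch.randn(1, 3, 224, 224)      # placeholder chest X-ray tensor
importance = class_channel_importance(x, class_idx=1)
top_channels = torch.topk(importance, k=5).indices
print("Most important channels for class 1:", top_channels.tolist())

In the approach the abstract describes, the top-ranked channels would then be contextualised by retrieving the training images that activate them most strongly, giving an example-based view of the learned features.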
Full text: Available | Collection: Databases of international organizations | Database: Scopus | Language: English | Journal: 2022 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2022 | Year: 2022 | Document Type: Article
