Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis
Mach Learn Med Imaging; 12966: 396-405, 2021 Sep.
Article in English | MEDLINE | ID: covidwho-1469662
ABSTRACT
Visual explanation methods play an important role in patient prognosis when annotated data are limited or unavailable. There have been several attempts to use gradient-based attribution methods to localize pathology in medical scans without segmentation labels, but this research direction has been impeded by a lack of robustness and reliability: these methods are highly sensitive to the network parameters. In this study, we introduce a robust visual explanation method that addresses this problem for medical applications. We present a general-purpose visual explanation algorithm and, as an example application, demonstrate its effectiveness in quantifying lung lesions caused by COVID-19 with high accuracy and robustness, without using dense segmentation labels. This approach overcomes the drawbacks of the commonly used Grad-CAM and its extensions. The premise behind our strategy is that the information flow through the network is minimized while the classifier prediction is kept similar. Our findings indicate that this bottleneck condition provides a more stable severity estimation than comparable attribution methods. The source code will be made publicly available upon publication.
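The bottleneck premise described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a toy linear softmax classifier, replaces masked features with a baseline instead of injecting Gaussian noise, and uses a simple sum-of-mask penalty as a stand-in for the information term. A per-feature mask is optimized so that the target prediction is preserved while the amount of signal passed through is minimized; features that survive the bottleneck are the attribution.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ib_attribution(h, W, target, baseline=None, beta=0.1, lr=0.5, steps=500):
    """Toy information-bottleneck attribution for a linear softmax classifier.

    Learns a per-feature mask m in [0, 1]: masked-out features are replaced by
    the baseline (zero information), and the loss trades off keeping the
    target-class prediction against the amount of signal let through
    (here a simple sum-of-mask proxy for the information term).
    """
    d = h.shape[0]
    mu = np.zeros(d) if baseline is None else baseline
    a = np.zeros(d)                       # mask logits; m = sigmoid(a) starts at 0.5
    y = np.zeros(W.shape[0]); y[target] = 1.0
    for _ in range(steps):
        m = sigmoid(a)
        z = m * h + (1.0 - m) * mu        # bottlenecked features
        p = softmax(W @ z)
        # d(cross-entropy)/dm = (W^T (p - y)) * (h - mu); the information
        # proxy beta * sum(m) contributes a constant beta per feature
        grad_m = (W.T @ (p - y)) * (h - mu) + beta
        a -= lr * grad_m * m * (1.0 - m)  # chain rule through the sigmoid
    return sigmoid(a)

# Hypothetical example: class 0 depends only on feature 0, class 1 only on
# feature 1, features 2 and 3 are irrelevant to the classifier.
W = np.array([[4.0, 0.0, 0.0, 0.0],
              [0.0, 4.0, 0.0, 0.0]])
h = np.array([1.0, 0.2, 1.0, 1.0])
mask = ib_attribution(h, W, target=0)
```

Under this setup the mask on feature 0 stays near one (it carries the evidence for the target class), while the irrelevant features are driven toward zero by the information penalty, mimicking how the bottleneck suppresses regions that do not support the diagnosis.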
Full text: Available
Collection: International databases
Database: MEDLINE
Type of study: Diagnostic study / Prognostic study
Language: English
Journal: Mach Learn Med Imaging
Year: 2021
Document Type: Article
DOI: 978-3-030-87589-3_41