Improving Explainability of Image Classification in Scenarios with Class Overlap: Application to COVID-19 and Pneumonia
Proc. IEEE Int. Conf. Mach. Learn. Appl. (ICMLA): 1402-1409, 2020.
Article in English | Scopus | ID: covidwho-1142803
ABSTRACT
Trust in predictions made by machine learning models increases when the model generalizes well to previously unseen samples and when inference is accompanied by cogent explanations of the reasoning behind predictions. In the image classification domain, generalization can be assessed through accuracy, sensitivity, and specificity. Explainability can be assessed by how well the model localizes the object of interest within an image. However, both generalization and explainability through localization are degraded in scenarios with significant overlap between classes. We propose a method based on binary expert networks that enhances the explainability of image classifications through better localization by mitigating the model uncertainty induced by class overlap. Our technique performs discriminative localization on images that contain features with significant class overlap, without explicitly training for localization. Our method is particularly promising in real-world class overlap scenarios, such as COVID-19 and pneumonia, where expert-labeled data for localization is not readily available. This can be useful for early, rapid, and trustworthy screening for COVID-19. © 2020 IEEE.
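This record contains only the abstract, so the paper's actual architecture is not available here. As a rough, hypothetical sketch of the technique the abstract names, the PyTorch snippet below wires up one binary "expert" per class and reads a class activation map (CAM, a standard form of discriminative localization learned without localization labels) off each expert's global-average-pooled classifier. The network layout, class names, and input size are illustrative assumptions, not the authors' design.

# Hypothetical sketch: one binary "expert" per class; each yields a
# CAM for discriminative localization without localization labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryExpert(nn.Module):
    """Small CNN ending in global average pooling and a single logit,
    so a CAM can be read off the final classifier weights."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, 1)  # binary: this class vs. rest

    def forward(self, x):
        fmaps = self.features(x)                          # (B, 128, H, W)
        pooled = F.adaptive_avg_pool2d(fmaps, 1).flatten(1)
        logit = self.classifier(pooled).squeeze(1)        # (B,)
        # CAM: weight each feature map by its classifier weight.
        cam = torch.einsum("bchw,c->bhw", fmaps, self.classifier.weight[0])
        return logit, cam

# In practice each expert would be trained one-vs-rest on its class.
experts = {name: BinaryExpert() for name in ["covid19", "pneumonia", "normal"]}
x = torch.randn(1, 1, 224, 224)  # dummy grayscale chest X-ray
scores = {n: torch.sigmoid(e(x)[0]) for n, e in experts.items()}
best = max(scores, key=lambda n: scores[n].item())
_, cam = experts[best](x)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False).squeeze(1)

At inference, the most confident expert's upsampled CAM highlights the image regions supporting its class, which is one plausible sense in which localization could emerge without explicit localization training.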

Full text: Available | Collection: Databases of international organizations | Database: Scopus | Language: English | Journal: Proc. IEEE Int. Conf. Mach. Learn. Appl. (ICMLA) | Year: 2020 | Document Type: Article
