An Initial Study of Machine Learning Underspecification Using Feature Attribution Explainable AI Algorithms: A COVID-19 Virus Transmission Case Study
18th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2021; LNAI 13031:323-335, 2021.
Article in English | Scopus | ID: covidwho-1525495
ABSTRACT
From a single dataset, one can construct different machine learning (ML) models with different parameters and/or inductive biases. Although these models give similar prediction performance when tested on currently available data, they may not generalise equally well on unseen data. The existence of multiple equally performing models indicates underspecification of the ML pipeline used to produce them. In this work, we propose identifying underspecification using feature attribution algorithms developed in Explainable AI. Our hypothesis is that by studying the range of explanations produced by a set of ML models, one can identify underspecification. We validate this by computing explanations using the Shapley additive explainer (SHAP) and then measuring statistical correlations between them. We evaluate our approach on multiple datasets drawn from the literature and on a COVID-19 virus transmission case study. © 2021, Springer Nature Switzerland AG.
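The record does not include the authors' code, but the method the abstract describes can be illustrated with a minimal sketch: train several equally performing models that differ only in a hyperparameter or random seed, compute SHAP attributions for each on the same test instances, and correlate the resulting feature-importance profiles. Everything below is an assumption for illustration (the `shap` library, scikit-learn gradient-boosted trees, and Spearman rank correlation; the paper's exact models and correlation measure are not stated in this record).

```python
import numpy as np
import shap  # hypothetical choice: the SHAP reference implementation
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Several models that typically reach similar test accuracy,
# differing only in their random seed (one source of underspecification).
models = [
    GradientBoostingClassifier(random_state=seed).fit(X_train, y_train)
    for seed in range(5)
]

# SHAP attributions for each model on the same test instances,
# summarised as mean absolute attribution per feature.
importances = [
    np.abs(shap.TreeExplainer(m).shap_values(X_test)).mean(axis=0)
    for m in models
]

# Pairwise rank correlation between the models' importance profiles.
# Low correlation among equally accurate models would, under the paper's
# hypothesis, signal underspecification of the pipeline.
for i in range(len(importances)):
    for j in range(i + 1, len(importances)):
        rho, _ = spearmanr(importances[i], importances[j])
        print(f"models {i} vs {j}: Spearman rho = {rho:.3f}")
```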

Full text: Available | Collection: Databases of international organizations | Database: Scopus | Type of study: Case report | Language: English | Journal: 18th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2021 | Year: 2021 | Document Type: Article
