Deep learning models for COVID-19 chest x-ray classification: Preventing shortcut learning using feature disentanglement
Caleb Robinson; Anusua Trivedi; Marian Blazes; Anthony Ortiz; Jocelyn Desbiens; Sunil Gupta; Rahul Dodhia; Pavan K Bhatraju; W. Conrad Liles; Aaron Lee; Jayashree Kalpathy-Cramer; Juan M Lavista Ferres.
Affiliation
  • Caleb Robinson; Microsoft AI for Good
  • Anusua Trivedi; Microsoft AI for Good
  • Marian Blazes; University of Washington
  • Anthony Ortiz; Microsoft AI for Good
  • Jocelyn Desbiens; Intelligent Retinal Imaging Systems
  • Sunil Gupta; Intelligent Retinal Imaging Systems
  • Rahul Dodhia; Microsoft AI for Good
  • Pavan K Bhatraju; Department of Medicine and Sepsis Center of Research Excellence, University of Washington (SCORE-UW)
  • W. Conrad Liles; Department of Medicine and Sepsis Center of Research Excellence, University of Washington (SCORE-UW)
  • Aaron Lee; University of Washington
  • Jayashree Kalpathy-Cramer; Massachusetts General Hospital
  • Juan M Lavista Ferres; Microsoft AI for Good
Preprint in En | PREPRINT-MEDRXIV | ID: ppmedrxiv-20196766
ABSTRACT
In response to the COVID-19 global pandemic, recent research has proposed creating deep learning based models that use chest radiographs (CXRs) in a variety of clinical tasks to help manage the crisis. However, existing datasets of CXRs from COVID-19+ patients are relatively small, and researchers often pool CXR data from multiple sources, for example, using different x-ray machines in various patient populations under different clinical scenarios. Deep learning models trained on such datasets have been shown to overfit to erroneous features instead of learning pulmonary characteristics, a phenomenon known as shortcut learning. We propose adding feature disentanglement to the training process, forcing the models to identify pulmonary features from the images while penalizing them for learning features that can discriminate between the original datasets that the images come from. We find that models trained in this way indeed have better generalization performance on unseen data; in the best case we found that it improved AUC by 0.13 on held-out data. We further find that this outperforms masking out non-lung parts of the CXRs and performing histogram equalization, both of which are recently proposed methods for removing biases in CXR datasets.
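The disentanglement idea described in the abstract, rewarding the encoder for features that predict the clinical label while penalizing features that reveal which source dataset an image came from, is commonly implemented with an adversarial dataset-classifier head behind a gradient-reversal layer. The sketch below is an illustrative assumption, not the paper's exact architecture: the module names (`DisentangledCXRModel`, `GradientReversal`), the toy input size, and the use of gradient reversal are all hypothetical choices to make the mechanism concrete.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward,
    so minimizing the source-classification loss *maximizes* it w.r.t. the encoder."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient flowing into the shared encoder; None for lambd.
        return -ctx.lambd * grad_output, None


class DisentangledCXRModel(nn.Module):
    """A shared encoder feeds two heads: a COVID-19 classifier trained normally,
    and a dataset-source classifier trained through gradient reversal, which
    penalizes the encoder for learning source-discriminative (shortcut) features.
    Dimensions here are toy placeholders, not the paper's real CXR backbone."""

    def __init__(self, in_dim=128, feat_dim=64, n_sources=3, lambd=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.covid_head = nn.Linear(feat_dim, 2)       # COVID-19+ vs. COVID-19-
        self.source_head = nn.Linear(feat_dim, n_sources)  # which dataset?
        self.lambd = lambd

    def forward(self, x):
        z = self.encoder(x)
        covid_logits = self.covid_head(z)
        source_logits = self.source_head(GradientReversal.apply(z, self.lambd))
        return covid_logits, source_logits
```

With this setup, a single combined loss (`cross_entropy(covid_logits, y) + cross_entropy(source_logits, source_id)`) and one backward pass push the encoder toward features that are predictive of the clinical label but uninformative about the originating dataset.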
License
cc_by
Full text: 1 Collection: 09-preprints Database: PREPRINT-MEDRXIV Type of study: Prognostic_studies Language: En Year: 2021 Document type: Preprint