Interpretable sequence learning for COVID-19 forecasting
34th Conference on Neural Information Processing Systems (NeurIPS 2020), December 2020.
Article in English | Scopus | ID: covidwho-1282878
ABSTRACT
We propose a novel approach that integrates machine learning into compartmental disease modeling (e.g., SEIR) to predict the progression of COVID-19. Our model is explainable by design as it explicitly shows how different compartments evolve and it uses interpretable encoders to incorporate covariates and improve performance. Explainability is valuable to ensure that the model’s forecasts are credible to epidemiologists and to instill confidence in end-users such as policy makers and healthcare institutions. Our model can be applied at different geographic resolutions, and we demonstrate it for states and counties in the United States. We show that our model provides more accurate forecasts compared to the alternatives, and that it provides qualitatively meaningful explanatory insights. © 2020 Neural information processing systems foundation. All rights reserved.
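To make the abstract's idea concrete, below is a minimal, illustrative sketch of a discrete-time SEIR model whose transmission rate is produced by a small covariate encoder. This is not the paper's actual model: the function names (`encoder`, `seir_step`), the parameter values, and the choice of a single learned rate are assumptions made for illustration only; the published approach uses additional compartments and learned, interpretable encoders for several rates.

```python
import numpy as np

def encoder(covariates, weights, bias):
    """Toy 'interpretable encoder': maps covariates (e.g., mobility,
    interventions) to a positive, time-varying transmission rate.
    Hypothetical stand-in for the paper's learned encoders."""
    return np.exp(weights @ covariates + bias)  # exp keeps the rate positive

def seir_step(S, E, I, R, beta, sigma, gamma, N):
    """One discrete-time SEIR update; compartments stay explicit, so the
    forecast can be inspected compartment by compartment."""
    new_exposed    = beta * S * I / N
    new_infectious = sigma * E
    new_recovered  = gamma * I
    return (S - new_exposed,
            E + new_exposed - new_infectious,
            I + new_infectious - new_recovered,
            R + new_recovered)

# Illustrative run with constant covariates and assumed parameter values.
N = 1_000_000
S, E, I, R = N - 10, 0.0, 10.0, 0.0
weights, bias = np.array([0.3, -0.5]), np.log(0.4)   # hypothetical encoder weights
covariates = np.array([1.0, 0.2])                    # e.g., mobility, mask use
sigma, gamma = 1 / 5.0, 1 / 10.0                     # assumed incubation/recovery rates

for _ in range(60):                                  # 60-day forecast horizon
    beta = encoder(covariates, weights, bias)
    S, E, I, R = seir_step(S, E, I, R, beta, sigma, gamma, N)

print(f"Infectious after 60 days: {I:.0f}")
```

Because every compartment and rate remains an explicit quantity, this style of model exposes how covariates drive the forecast, which is the kind of explainability the abstract emphasizes.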
Collection: Databases of international organizations | Database: Scopus | Language: English | Journal: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 | Year: 2020 | Document Type: Article