Improving Effectiveness of Simulation-Based Inference in the Massively Parallel Regime
IEEE Transactions on Parallel and Distributed Systems
2023.
Article
in English
| Scopus | ID: covidwho-2232135
ABSTRACT
Simulation-based Inference (SBI) is a widely used family of algorithms for learning the parameters of complex scientific simulation models. While primarily run on CPUs in high-performance compute clusters, these algorithms have been shown to scale in performance when developed to run on massively parallel architectures such as GPUs. Although parallelizing existing SBI algorithms yields performance gains, this might not be the most efficient way to utilize the available parallelism. This work proposes a new parallelism-aware adaptation of an existing SBI method, namely approximate Bayesian computation with Sequential Monte Carlo (ABC-SMC). This new adaptation is designed to utilize the parallelism not only for performance gain, but also toward qualitative benefits in the learned parameters. The key idea is to replace the notion of a single ‘step-size’ hyperparameter, which governs how the state space of parameters is explored during learning, with step-sizes sampled from a tuned Beta distribution. This allows the new ABC-SMC algorithm to more efficiently explore the state space of the parameters being learned. We test the effectiveness of the proposed algorithm by learning parameters of an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we obtain results of similar quality in ∼100× fewer simulations and observe ∼80× lower run-to-run variance across 10 independent trials.
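The abstract's key idea — replacing one global step-size with per-particle step-sizes drawn from a tuned Beta distribution during the ABC-SMC perturbation step — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian perturbation kernel, the function names, and the Beta parameters `a=2.0, b=5.0` are assumptions; the abstract does not specify how the Beta distribution is tuned.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_fixed(particles, step_size):
    """Classic ABC-SMC perturbation: one global step-size for all particles."""
    return particles + rng.normal(0.0, step_size, size=particles.shape)

def perturb_beta(particles, a, b, scale=1.0):
    """Parallelism-aware variant sketched in the abstract: each particle draws
    its own step-size from a tuned Beta(a, b) distribution, so the population
    mixes many small local moves with occasional large exploratory jumps."""
    step_sizes = scale * rng.beta(a, b, size=(particles.shape[0], 1))
    return particles + rng.normal(0.0, 1.0, size=particles.shape) * step_sizes

# Toy usage: 10,000 particles over 2 parameters (in spirit, one per GPU thread).
particles = rng.normal(0.0, 1.0, size=(10_000, 2))
moved = perturb_beta(particles, a=2.0, b=5.0, scale=0.5)
print(moved.shape)  # (10000, 2)
```

Because every particle is perturbed independently, the extra Beta draw adds negligible cost on a massively parallel device while diversifying how aggressively each particle explores the parameter space.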
Approximate Bayesian computation; approximation algorithms; Bayes methods; Bayesian inference; Bayesian networks; bioinformatics; biological system modeling; clustering algorithms; compartmental models; computational modeling; COVID-19; epidemiology; GPU; graphics processing units; high-performance computing; inference algorithms; inference engines; learning algorithms; learning systems; likelihood-free inference; Monte Carlo methods; optimization; parallel algorithms; parallel architectures; parameter estimation; perturbation methods; program processors; simulation-based inference; statistical machine learning; stochastic optimization
Full text:
Available
Collection:
Databases of international organizations
Database:
Scopus
Type of study:
Experimental Studies
/
Qualitative research
Language:
English
Journal:
IEEE Transactions on Parallel and Distributed Systems
Year:
2023
Document Type:
Article