This article is a Preprint
Preprints are preliminary research reports that have not been certified by peer review. They should not be relied on to guide clinical practice or health-related behavior and should not be reported in news media as established information.
Preprints posted online allow authors to receive rapid feedback, and the entire scientific community can appraise the work for themselves and respond appropriately. Comments are posted alongside the preprint for anyone to read and serve as a post-publication assessment.
Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization (preprint)
arXiv; 2022.
Preprint, in English | PREPRINT-ARXIV | ID: ppzbmed-2205.13098v1
ABSTRACT
The computation of the Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. Approximating the Wasserstein gradient with finite samples requires solving a variational problem. We study this variational problem in the family of two-layer networks with squared-ReLU activations, for which we derive a semi-definite programming (SDP) relaxation. The SDP can be viewed as approximating the Wasserstein gradient in a broader function family that includes two-layer networks. By solving the convex SDP, we obtain the optimal approximation of the Wasserstein gradient direction in this class of functions. Numerical experiments, including PDE-constrained Bayesian inference and parameter estimation in COVID-19 modeling, demonstrate the effectiveness of the proposed method.
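To make the function class in the abstract concrete, the following is a minimal numpy sketch of a two-layer network with squared-ReLU activations, of the commonly used form f(x) = Σ_j α_j max(w_j·x, 0)²; the exact parameterization, function names, and shapes here are illustrative assumptions, not the paper's implementation (which solves an SDP relaxation rather than evaluating fixed weights).

```python
import numpy as np

def squared_relu(z):
    # Squared-ReLU activation: max(z, 0)^2 (assumed activation form).
    return np.maximum(z, 0.0) ** 2

def two_layer_net(X, W, alpha):
    """Evaluate f(x) = sum_j alpha_j * squared_relu(w_j . x)
    for each row x of X.

    X: (n, d) inputs, W: (m, d) hidden weights, alpha: (m,) outer weights.
    Returns an (n,) array of network outputs.
    """
    return squared_relu(X @ W.T) @ alpha

# Toy usage with random weights on 2-D inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
W = rng.normal(size=(3, 2))
alpha = rng.normal(size=3)
f = two_layer_net(X, W, alpha)
print(f.shape)  # (5,)
```

The squared-ReLU activation makes each hidden unit's contribution a convex quadratic on a half-space, which is what enables the convex (SDP) reformulation the abstract refers to.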
Full text:
Available
Collection:
Preprints
Database:
PREPRINT-ARXIV
Main subject:
COVID-19
Language:
English
Year:
2022
Document Type:
Preprint
Similar
MEDLINE
...
LILACS
LIS