Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization (preprint)
arXiv; 2022.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2205.13098v1
ABSTRACT
The computation of the Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. Approximating the Wasserstein gradient from finite samples requires solving a variational problem. We study this variational problem over the family of two-layer networks with squared-ReLU activations, for which we derive a semi-definite programming (SDP) relaxation. This SDP can be viewed as an approximation of the Wasserstein gradient over a broader function family that includes two-layer networks. By solving the convex SDP, we obtain the optimal approximation of the Wasserstein gradient direction in this class of functions. Numerical experiments, including PDE-constrained Bayesian inference and parameter estimation in COVID-19 modeling, demonstrate the effectiveness of the proposed method.
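The finite-sample variational problem mentioned in the abstract can be illustrated with a toy sketch. This is not the paper's SDP relaxation: it restricts the function class to the simplest possible choice, f(x) = a + b·x, rather than a two-layer squared-ReLU network, and fixes the energy to KL(ρ‖π) with a 1-D Gaussian target π = N(0, 1), all of which are assumptions for illustration. Via integration by parts, the unknown score of ρ drops out, giving the sample-based objective max_f E_ρ[∇log π(x)·f(x) + f′(x)] − ½·E_ρ[f(x)²], whose maximizer is the gradient direction f*(x) = ∇log π(x) − ∇log ρ(x).

```python
# Hedged sketch (an assumption-laden toy, not the paper's SDP method):
# approximate the Wasserstein gradient direction of KL(rho || pi) from
# finite samples, over the linear class f(x) = a + b*x, with pi = N(0, 1).
import random

def wasserstein_direction_linear(samples):
    """Closed-form maximizer of the variational objective
        mean[ score_pi(x)*f(x) + f'(x) ] - 0.5 * mean[ f(x)^2 ]
    over f(x) = a + b*x, where score_pi(x) = -x for pi = N(0, 1)."""
    n = len(samples)
    m1 = sum(samples) / n                    # empirical first moment
    m2 = sum(x * x for x in samples) / n     # empirical second moment
    var = m2 - m1 * m1                       # empirical variance
    # The objective is a concave quadratic in (a, b); its first-order
    # conditions form a 2x2 linear system whose solution is:
    b = 1.0 / var - 1.0
    a = -m1 / var
    return a, b

random.seed(0)
# Finite samples from rho = N(2, 0.5^2); the analytic direction is
# f*(x) = -x + (x - 2)/0.25, i.e. a* = -8, b* = 3.
xs = [random.gauss(2.0, 0.5) for _ in range(50_000)]
a, b = wasserstein_direction_linear(xs)
print(a, b)
```

The closed-form 2x2 solve here is the linear-class analogue of the paper's convex step: once the function class makes the objective concave (quadratic here, SDP-representable for squared-ReLU networks in the paper), the optimal direction is found globally rather than by nonconvex training.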
Full text: Available Collection: Preprints Database: PREPRINT-ARXIV Main subject: COVID-19 Language: English Year: 2022 Document Type: Preprint