Description
Inverse problems are ubiquitous because they formalize the integration of data with mathematical models. In many scientific applications the forward model is expensive to evaluate and adjoint computations are difficult to employ; in this setting, derivative-free methods that require only a small number of forward model evaluations are an attractive proposition. Ensemble Kalman-based interacting particle systems (and variants such as consensus-based and unscented Kalman approaches) have proven empirically successful in this context, but they suffer from the limitation that they cannot be systematically refined to return the true solution, except in the setting of linear forward models. In this talk, we present a new derivative-free approach to Bayesian inversion, which may be employed for posterior sampling or for maximum a posteriori (MAP) estimation, and which can be systematically refined. The method relies on a fast/slow system of stochastic differential equations (SDEs) that locally approximates the gradient of the log-likelihood appearing in a Langevin diffusion.
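
To make the general idea concrete, the following is a minimal, illustrative sketch of one way a fast/slow SDE pair can supply a derivative-free gradient surrogate inside a Langevin update; it is not necessarily the specific scheme presented in the talk. The toy forward map, the function names (`forward`, `neg_log_like`), the Ornstein-Uhlenbeck choice for the fast process, and all parameter values are assumptions chosen for illustration: the fast variable explores a small Gaussian neighbourhood of the slow variable, and differences of the negative log-likelihood along that exploration stand in for its gradient, so only forward model evaluations are required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian inverse problem: recover theta from y = G(theta) + noise.
# G is a cheap stand-in for what would, in practice, be an expensive black-box forward model.
def forward(theta):
    return np.array([theta[0] ** 2 + theta[1], theta[1]])

y_obs = np.array([0.75, 0.5])            # synthetic data, consistent with theta = (0.5, 0.5)
gamma = 0.1                              # observational noise standard deviation

def neg_log_like(theta):                 # misfit Phi(theta) = |y - G(theta)|^2 / (2 gamma^2)
    r = y_obs - forward(theta)
    return 0.5 * (r @ r) / gamma ** 2

d = 2
theta = np.zeros(d)                      # slow variable: the unknown parameter
z = theta.copy()                         # fast variable: explores a neighbourhood of theta
eps = 1e-2                               # time-scale separation parameter
sigma = 0.1                              # width of the local exploration around theta
dt = 1e-4                                # slow (Langevin) time step
samples = []

for k in range(100_000):
    # Fast variable: Ornstein-Uhlenbeck process with invariant law N(theta, sigma^2 I).
    z += -(z - theta) / eps * dt + np.sqrt(2 * sigma ** 2 / eps * dt) * rng.standard_normal(d)

    # Derivative-free surrogate for grad Phi(theta): for z ~ N(theta, sigma^2 I),
    # E[(z - theta) (Phi(z) - Phi(theta))] / sigma^2 -> grad Phi(theta) as sigma -> 0,
    # and only evaluations of Phi (i.e. of the forward model) are needed.
    grad_phi = (z - theta) * (neg_log_like(z) - neg_log_like(theta)) / sigma ** 2

    # Slow variable: Euler-Maruyama step of the Langevin diffusion for the posterior
    # (standard Gaussian prior, whose log-density gradient -theta is known in closed form).
    theta += (-grad_phi - theta) * dt + np.sqrt(2 * dt) * rng.standard_normal(d)

    if k % 100 == 0:
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior mean estimate:", samples[len(samples) // 2:].mean(axis=0))
```

In a sketch of this kind, refining the approximation amounts to driving sigma, eps, and dt to zero, which is the sense in which such fast/slow schemes can be systematically refined toward the exact Langevin dynamics.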