9–10 June 2016
Ecole Centrale Lille
Timezone Europe/Paris

Stochastic optimization and high-dimensional sampling: when Moreau inf-convolution meets Langevin diffusion

10 June 2016, 14:45
45m
Grand Amphithéâtre (Ecole Centrale Lille)

Campus Lille 1 in Villeneuve d'Ascq

Speaker

Eric Moulines (Télécom ParisTech)

Description

Recently, the problem of designing MCMC samplers adapted to high-dimensional Bayesian inference with sensible theoretical guarantees has received a lot of interest. The applications are numerous, including large-scale inference in machine learning, Bayesian nonparametrics, Bayesian inverse problems, and aggregation of experts, among others. When the density is L-smooth (the log-density is continuously differentiable and its derivative is Lipschitz), we advocate the use of a "rejection-free" algorithm based on the Euler discretization of the Langevin diffusion, with either constant or decreasing stepsizes. We will present several new results establishing convergence to stationarity under different conditions on the log-density, from the weakest (bounded oscillation on a compact set and super-exponential decay in the tails) to the strongest (strong concavity). When the log-density is not smooth (a problem which typically arises when using sparsity-inducing priors, for example), we still suggest using an Euler discretization, but applied to the Moreau envelope of the non-smooth part of the log-density. An importance sampling correction may later be applied to correct for the smoothed target. Several numerical illustrations will be presented to show that this algorithm (named MYULA) can be used in practice in high-dimensional settings. Finally, non-asymptotic convergence bounds (in total variation and Wasserstein distances) are derived.
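
As a concrete illustration of the two samplers described in the abstract, here is a minimal Python sketch, assuming the standard formulations of the unadjusted Langevin algorithm (ULA) and its Moreau–Yosida smoothed variant; the names grad_f, prox_g, lam, gamma, and the toy Laplace-prior example are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def ula(x0, grad_log_pi, gamma, n_iter, rng=None):
    """Unadjusted ("rejection-free") Langevin algorithm: Euler
    discretization of the Langevin diffusion with stepsize gamma."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    out = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x + gamma * grad_log_pi(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.size)
        out[k] = x
    return out

def myula(x0, grad_f, prox_g, lam, gamma, n_iter, rng=None):
    """MYULA sketch for pi(x) proportional to exp(-f(x) - g(x)), f smooth
    and g non-smooth: g is replaced by its Moreau envelope, whose gradient
    is (x - prox_{lam g}(x)) / lam, and ULA is run on the smoothed potential."""
    def grad_log_pi(x):
        return -(grad_f(x) + (x - prox_g(x, lam)) / lam)
    return ula(x0, grad_log_pi, gamma, n_iter, rng)

# Toy example (assumption): sparse target pi(x) ~ exp(-||x||^2/2 - alpha*||x||_1);
# the prox of t*alpha*||.||_1 is soft-thresholding at level t*alpha.
alpha = 1.0
samples = myula(
    x0=np.zeros(10),
    grad_f=lambda x: x,  # gradient of ||x||^2 / 2
    prox_g=lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t * alpha, 0.0),
    lam=0.1, gamma=0.01, n_iter=5000,
)
```

The samples drawn this way target the smoothed density; as noted above, an importance sampling correction can then be applied to account for the discrepancy with the original target.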

Presentation materials