BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:Stochastic optimization and high-dimensional sampling: when Moreau
inf-convolution meets Langevin diffusion
DTSTART;VALUE=DATE-TIME:20160610T124500Z
DTEND;VALUE=DATE-TIME:20160610T133000Z
DTSTAMP;VALUE=DATE-TIME:20200813T000238Z
UID:indico-contribution-2667@indico.math.cnrs.fr
DESCRIPTION:Speakers: Eric Moulines (Télécom ParisTech)\nRecently\, the pro
 blem of designing MCMC samplers adapted to high-dimensional Bayesian infer
 ence with sensible theoretical guarantees has received a lot of interest. T
 he applications are numerous\, including large-scale inference in machine l
 earning\, Bayesian nonparametrics\, Bayesian inverse problems\, and aggrega
 tion of experts\, among others. When the density is L-smooth (the log-densi
 ty is continuously differentiable and its derivative is Lipschitz)\, we wil
 l advocate the use of a “rejection-free” algorithm\, based on the Euler dis
 cretization of the Langevin diffusion with either constant or decreasing st
 epsizes. We will present several new results establishing convergence to st
 ationarity under different conditions on the log-density (from the weakes
 t\, bounded oscillations on a compact set and super-exponential decay in th
 e tails\, to strong concavity). When the log-density is not smooth (a probl
 em which typically appears when using sparsity-inducing priors\, for exampl
 e)\, we still suggest using an Euler discretization\, but of the Moreau env
 elope of the non-smooth part of the log-density. An importance sampling cor
 rection may later be applied to correct the target. Several numerical illus
 trations will be presented to show that this algorithm (named MYULA) can b
 e used in practice in a high-dimensional setting. Finally\, non-asymptotic c
 onvergence bounds (in total variation and Wasserstein distances) are deriv
 ed.\n\nhttps://indico.math.cnrs.fr/event/830/contributions/2667/
LOCATION:Ecole Centrale Lille Grand Amphithéâtre
URL:https://indico.math.cnrs.fr/event/830/contributions/2667/
END:VEVENT
END:VCALENDAR