7–9 Feb. 2023
Ecole polytechnique
Time zone Europe/Paris

Scientific Programme

Abstract.

Variational methods are a universal and flexible approach for solving
inverse problems, in particular in imaging sciences. Taking into
account their specific structure as a sum of several different terms,
splitting algorithms provide a canonical tool for their efficient
solution. Their strength lies in splitting the original problem into a
sequence of smaller proximal problems which are easy and fast to
compute.
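
As a concrete illustration (standard background, not specific to this talk), consider a smooth data-fidelity term f and a nonsmooth regularizer g. The variational problem and the proximal operator read

\[
\min_{x} \; f(x) + g(x), \qquad
\operatorname{prox}_{\tau g}(y) := \operatorname*{arg\,min}_{x} \; \tfrac{1}{2}\|x-y\|^2 + \tau g(x),
\]

and a forward-backward splitting iteration alternates a gradient step on f with a proximal step on g:

\[
x^{k+1} = \operatorname{prox}_{\tau g}\bigl(x^{k} - \tau \nabla f(x^{k})\bigr).
\]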

Operator splitting methods were first applied to linear, single-valued
operators for solving partial differential equations in the 1960s. More
than 20 years later, these methods were generalized in the convex
analysis community to the solution of inclusion problems, and after
another 20 years they became popular in image processing and machine
learning. Nowadays they are accompanied by so-called Plug-and-Play
(PnP) techniques, where a proximal denoising step is substituted by
another denoiser. Popular denoisers include BM3D or MMSE methods which
are based on (nonlocal) image patches.
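
The following is a minimal sketch of this Plug-and-Play idea in Python, assuming a linear forward operator A and the data term 0.5*||Ax - y||^2; the function name and the toy smoothing "denoiser" are purely illustrative and stand in for the denoisers (BM3D, MMSE, learned networks) referred to above.

import numpy as np

def pnp_forward_backward(y, A, denoiser, step, n_iter=100):
    # Plug-and-Play forward-backward splitting (sketch):
    # gradient step on the data term, then a denoiser in place of
    # the proximal step of the regularizer.
    x = A.T @ y                          # simple initialization
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # gradient of 0.5 * ||A x - y||^2
        x = denoiser(x - step * grad)    # denoiser replaces prox_{tau g}
    return x

# Toy usage with a hypothetical smoothing denoiser.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
y = rng.standard_normal(30)
smooth = lambda x: np.convolve(x, np.ones(3) / 3, mode="same")
x_hat = pnp_forward_backward(y, A, smooth, step=1.0 / np.linalg.norm(A, 2) ** 2)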

Meanwhile, certain learned neural networks do a better job. However,
convergence of the resulting PnP splitting algorithms is still an issue.
Normalizing flows are special generative neural networks which are
invertible.
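
As standard background (not part of the abstract itself), invertibility is what makes exact likelihood evaluation possible: if X = T(Z) with a simple latent distribution Z, the change-of-variables formula gives

\[
\log p_X(x) = \log p_Z\bigl(T^{-1}(x)\bigr) + \log\bigl|\det \nabla T^{-1}(x)\bigr|.
\]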

We demonstrate how they can be used as regularizers in inverse problems
for learning from few images, using e.g. image patches. Unfortunately,
normalizing flows suffer from limited expressivity. This can be
improved by applying generalized normalizing flows consisting of a
forward and a backward Markov chain. Such Markov chains may in
particular contain Langevin layers.
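
For orientation, one common form of such a Langevin layer (stated here as an assumption about the generic construction, not the specific architecture of the talk) is an unadjusted Langevin step targeting a density proportional to \exp(-V):

\[
x_{k+1} = x_k - \gamma \nabla V(x_k) + \sqrt{2\gamma}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I).
\]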

We will also consider Wasserstein-2 spaces and Wasserstein gradient
flows, where the above Langevin flow appears as a special instance. We
will discuss recent developments for estimating Wasserstein gradient
flows by neural networks.
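
As a standard reference point (not specific to the developments announced here), a Wasserstein gradient flow of a functional F on the Wasserstein-2 space can be discretized by the JKO (minimizing movement) scheme

\[
\mu_{k+1} \in \operatorname*{arg\,min}_{\mu \in \mathcal{P}_2(\mathbb{R}^d)} \; \frac{1}{2\tau} W_2^2(\mu, \mu_k) + F(\mu);
\]

for F(\mu) = \int V\,d\mu + \int \mu \log \mu\,dx this recovers the Fokker-Planck dynamics underlying the Langevin flow mentioned above.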