We present a list of counterexamples to conjectures in smooth convex coercive optimization. We will detail two extensions of the gradient descent method, both of interest in machine learning: gradient descent with exact line search, and Bregman descent (also known as mirror descent). We show that both are non-convergent in general. These examples are based on general smooth convex interpolation...
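For reference, a minimal numerical sketch of the two update rules named above (this is not the counterexample construction of the talk; the objective, step size, and domain below are illustrative assumptions, and on this simple quadratic both methods behave well):

    import numpy as np

    # Illustrative smooth convex quadratic (an assumption, not the interpolated
    # counterexample functions from the abstract).
    A = np.array([[3.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x

    # Gradient descent with exact line search: minimize t -> f(x - t*g) exactly
    # along the steepest-descent ray; for a quadratic the optimal step is closed form.
    x = np.array([1.0, 2.0])
    for _ in range(20):
        g = grad(x)
        t = (g @ g) / (g @ A @ g)
        x = x - t * g
    print("exact line search:", x, f(x))

    # Bregman (mirror) descent with the entropy mirror map, i.e. exponentiated
    # gradient on the probability simplex.
    y = np.array([0.5, 0.5])
    step = 0.5
    for _ in range(200):
        y = y * np.exp(-step * grad(y))
        y = y / y.sum()
    print("entropic mirror descent on the simplex:", y, f(y))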
We study degree bounds for the denominator-free Positivstellensätze in real algebra, based on sums of squares (SOS), or equivalently the convergence rate for the moment-sums-of-squares hierarchy in polynomial optimization, from a numerical point of view. As standard semidefinite programming (SDP) solvers do not provide reliable answers in many important instances, we use a new high-precision...
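For context, the standard level-d formulation behind this hierarchy (a textbook statement, not a detail taken from the abstract): to minimize a polynomial f over S = {x : g_1(x) >= 0, ..., g_m(x) >= 0}, level d of the SOS side of the hierarchy solves the SDP

\[
\rho_d \;=\; \sup\Big\{\, \lambda \in \mathbb{R} \;:\; f - \lambda \;=\; \sigma_0 + \sum_{i=1}^{m} \sigma_i\, g_i,\ \ \sigma_i \ \text{SOS},\ \ \deg \sigma_0 \le 2d,\ \deg(\sigma_i g_i) \le 2d \Big\},
\]

and the degree bounds in question quantify how fast \rho_d approaches the true minimum as d grows.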
Weak optimal transport is a nonlinear version of the classical mass transport of Monge and Kantorovich which has received a lot of attention since its introduction by Gozlan, Roberto, Samson and Tetali, ten years ago. In this talk, I will address weak optimal transport problems (possibly entropically penalized) incorporating both soft and hard (including the case of the martingale condition) moment...
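As a point of reference, the weak optimal transport problem (in its standard formulation, not necessarily the exact setting of the talk) reads

\[
V_C(\mu,\nu) \;=\; \inf_{\pi \in \Pi(\mu,\nu)} \int C\big(x, \pi_x\big)\, \mu(dx),
\]

where \pi_x is the disintegration (conditional law) of the coupling \pi given x and C(x,\cdot) is convex in its measure argument; the classical Monge-Kantorovich problem corresponds to C(x,p) = \int c(x,y)\, p(dy), while the martingale condition amounts to requiring the barycenter of \pi_x to equal x.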
We present two recent applications of the Moment-SOS hierarchy.
I. We first consider an optimal transport formulation for computing mixtures of Gaussians that minimize the Wasserstein distance W_2 to a given measure.
II. We next consider the problem of computing the total variation between two given measures.
For each problem we provide an associated hierarchy of semidefinite relaxations...
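As a numerical point of reference for the two quantities involved, here is a sketch of elementary special cases (the closed-form W_2 between two Gaussians and the total variation between two discrete measures), not the semidefinite relaxations of the talk; all parameters below are illustrative assumptions:

    import numpy as np
    from scipy.linalg import sqrtm

    def w2_gaussians(m1, S1, m2, S2):
        """Closed-form W_2 distance between N(m1, S1) and N(m2, S2) (Bures formula)."""
        S1h = sqrtm(S1).real
        cross = sqrtm(S1h @ S2 @ S1h).real
        return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))

    def total_variation(p, q):
        """Total variation between two discrete measures supported on the same atoms."""
        return 0.5 * np.sum(np.abs(p - q))

    # Illustrative parameters (assumptions, not data from the talk).
    m1, S1 = np.zeros(2), np.eye(2)
    m2, S2 = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
    print("W_2 between two Gaussians:", w2_gaussians(m1, S1, m2, S2))
    print("TV between two discrete measures:",
          total_variation(np.array([0.3, 0.7]), np.array([0.5, 0.5])))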