Première rencontre nationale du RT Optimisation
from
Wednesday 26 November 2025 (09:40)
to
Friday 28 November 2025 (18:10)
Wednesday 26 November 2025
14:00
Welcome
14:00 - 14:05
14:05
Approximation of Blaschke–Santaló diagrams
-
Edouard Oudet
(
Université Grenoble Alpes
)
14:05 - 14:45
Identifying Blaschke–Santaló diagrams is an important topic that essentially consists in determining the image Y = F(X) of a map F : X → R^d, where the dimension of the source space X is much larger than that of the target space. In some cases, arising for instance in shape optimization problems, X can even be a subset of an infinite-dimensional space. The usual Monte Carlo method, which randomly chooses N points x_1, …, x_N in X and plots them in the target space R^d, often produces areas of very high and very low concentration in Y, leading to a rather rough numerical identification of the image set. Our goal, on the contrary, is to choose the points x_i so as to produce a uniform distribution in the target space. In this way we obtain a good representation of the image set Y from a relatively small number N of samples, which is very useful when the dimension of the source space X is large (or even infinite) and the evaluation of F(x_i) is costly. Joint work with Beniamin Bogosel and Giuseppe Buttazzo.
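The concentration issue described above can be illustrated with a toy computation (a hypothetical stand-in map `F`, not the shape functionals of the talk): naive Monte Carlo sampling of a high-dimensional source produces a very uneven distribution in the image.

```python
import numpy as np

rng = np.random.default_rng(0)
d_source, n_samples = 8, 5000

# Hypothetical toy map F : [0,1]^8 -> R^2 standing in for a costly
# shape functional; its image is sampled by plain Monte Carlo.
def F(x):
    return np.array([x.mean(), x.max()])

samples = np.array([F(rng.random(d_source)) for _ in range(n_samples)])

# The first coordinate of F concentrates near 0.5 (central limit effect),
# so the extremes of the image set are barely explored -- precisely the
# non-uniformity that a well-chosen sampling of the x_i is meant to avoid.
frac_central = (np.abs(samples[:, 0] - 0.5) < 0.1).mean()
```

Most of the sampling budget lands in a thin central band of the image, which is why a uniform-in-target strategy pays off when each evaluation of F is expensive.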
14:50
A phase field approximation for the Reifenberg Plateau problem: existence and regularity of solutions
-
Eve Machefert
(
Institut Camille Jordan
)
14:50 - 15:30
The goal of this work is to use a phase field method to approximate the notorious Plateau problem, which consists of finding a surface of minimum area that spans a given curve. To this aim, we want to generalise to Plateau’s problem, using a Reifenberg formulation, the functional introduced by M. Bonnivard, A. Lemenant, and F. Santambrogio for Steiner’s problem (searching for the shortest path connecting a given set of points). The novelty of this approach is to deal with the topological constraint by penalising a geodesic distance. This geodesic distance is defined by minimizing the surface created by a path in a space of closed curves connecting two curves. The approach is based on the study of a decoupled functional depending on both the phase and a given closed curve. I will present, in particular, the existence result for this decoupled functional. Then, following an approach proposed by M. Bonnivard, E. Bretin, and A. Lemenant in the case of Steiner's problem, I will present numerical simulations for Plateau's problem that illustrate the method's applicability in practice.
15:35
Branched Optimal Transport and Fractal Measures in Type-I Superconductors
-
Alessandro Cosenza
(
Université Paris Cité
)
15:35 - 16:15
In this talk I will introduce a branched transport problem with weakly imposed boundary conditions. This problem was first derived as a reduced model for pattern formation in type-I superconductors in [1]. For minima of the reduced model with weak boundary conditions, it is conjectured in [2] that the dimension of the boundary measure is non-integer. The conjecture was linked to local scaling laws in [5]. I will present some recent advances in solving this conjecture. This talk is based on works with Michael Goldman, Melanie Koser and Felix Otto [3, 4].
[1] S. Conti, M. Goldman, F. Otto, and S. Serfaty. "A branched transport limit of the Ginzburg–Landau functional". In: J. Éc. polytech. Math. 5 (2018), pp. 317–375.
[2] S. Conti, F. Otto, and S. Serfaty. Personal communication.
[3] A. Cosenza, M. Goldman, and M. Koser. New dimensional bounds for a branched transport problem. 2024. arXiv: 2411.14547 [math.AP].
[4] A. Cosenza, M. Goldman, and F. Otto. Concentration phenomena in a branched transport problem in the half space. In preparation.
[5] G. De Philippis, M. Goldman, and B. Ruffini. From energy bounds to dimensional estimates in a branched transport model for type-I superconductors. 2023. arXiv: 2304.12715.
16:15
Coffee break
16:15 - 16:45
16:45
Deterministic Mean Field Games with jumps
-
Annette Dumas
(
Université de Limoges
)
16:45 - 17:25
The Mean Field Game we consider is motivated by the modeling of housing dynamics in which each inhabitant can move from one place to another. In particular, the trajectories of the agents are piecewise constant, and they minimize a cost consisting of the number of jumps (or relocations) plus two terms depending on the density: one variational and one non-variational. A Nash equilibrium for this mean field game is a measure over curves minimizing a problem in Lagrangian form which depends on the measure itself. To prove the existence of a Nash equilibrium, we reformulate the problem, thanks to an optimal transport result, in an Eulerian form for which we prove regularity results. The Eulerian formulation also allows us to perform numerical simulations using a proximal dual gradient method.
17:30
A tentative phase-field approach for the curvature flow of non-oriented boundaries
-
Antonin Chambolle
(
CEREMADE, CNRS & Université Paris-Dauphine
)
17:30 - 18:10
In this talk, we start from a neural-network-based approach developed by Bretin, Denis, Masnou, and Terii (2022), which extends the classical phase-field method for the mean curvature flow of boundaries to the case of non-oriented interfaces. We introduce an analytical, energy-based approach which yields an evolution PDE reproducing the numerical experiments obtained with the neural networks. A preliminary analysis partly explains the behaviour of the PDE. Joint work with Élie Bretin and Simon Masnou.
Thursday 27 November 2025
09:40
Dynamic Splitting Games
-
Jérôme Renault
(
TSE (Université Toulouse 1 Capitole)
)
09:40 - 10:20
We consider zero-sum dynamic games of information revelation. Each player controls a martingale of beliefs by choosing in each period a particular splitting (balayée) of the current belief, and the payoffs are determined by the limit beliefs. We introduce constraints on the set of available splittings, and characterize the value of the game as the unique solution of an extended Mertens–Zamir system: v = cav min(u, v) = vex max(u, v). In particular, we define the Mertens–Zamir transform of a real-valued matrix. Based on "Long Information Design" (Theoretical Economics 22) and "Splitting Games over finite sets" (Maths Prog 24), joint works with F. Koessler, M. Laclau and T. Tomala.
10:25
Coordination mechanisms with Partially Specified Probabilities
-
Francesco Giordano
(
HEC
)
10:25 - 11:05
Decision makers act based on the data they observe. However, the nature of the true data-generating process is often only partially known: we model such partial knowledge as a set of moment conditions. Given the partial information available, we consider a heuristic model of belief formation derived from the maximization of Shannon entropy. This paper characterizes the outcomes that can be implemented through partial information disclosure, under both canonical and generic information structures.
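A minimal sketch of entropy-maximizing belief formation under a moment condition (an invented helper for illustration, not the paper's model): on a finite support, the maximum-entropy distribution matching a mean constraint is exponential-family, and the tilt parameter can be found by bisection.

```python
import numpy as np

def maxent_belief(support, target_mean, tol=1e-10):
    """Maximum-entropy distribution on a finite support matching
    E[X] = target_mean.  The optimum is exponential-family,
    p_i proportional to exp(lam * x_i), and lam -> E_p[X] is monotone,
    so plain bisection recovers it.  (Illustrative, not from the talk.)"""
    support = np.asarray(support, dtype=float)
    def distribution(lam):
        logits = lam * support
        w = np.exp(logits - logits.max())   # stabilized softmax weights
        return w / w.sum()
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if distribution(mid) @ support < target_mean:
            lo = mid
        else:
            hi = mid
    return distribution(0.5 * (lo + hi))

p = maxent_belief([0, 1, 2, 3], 1.2)   # belief tilted toward low values
```

With no binding constraint (target mean at the center of the support) the maximum-entropy belief is uniform, as expected.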
11:10
Poster session (and coffee break)
11:10 - 12:30
Jules Berry: Mean Field Games on networks with sticky transition conditions
Adrien Cances: Lagrangian discretization for the numerical solution of a multi-marginal transport problem
Giuseppe Carrino: Numerical frugality in optimization: mixed-precision Newton's method
Benjamin Capdeville: Discrete-to-continuum convergence of gradient flow structures for the Moran process and the Kimura equation
Sofiane Cherf: A convergence rate for the entropic JKO scheme
Guillaume Dalle: Linear and combinatorial optimization on GPUs
Edgar Desainte-Mareville: How to handle high frequencies in multilevel algorithms for image restoration? A comparison with block-coordinate descent methods
Alejandro Fernández-Jiménez: An optimal control problem to get an Aronson-Bénilan estimate for Keller-Segel
William Ford: Uniqueness of Kantorovich potentials
Quentin Giton: Score learning using optimal transport
Richard Joly: Coupled optimization of a lithium-ion battery cell
James Larrouy: Collinear optimization on Geoffroy spaces
Hung Le: High-resolution inertial dynamics with time-rescaled gradients for nonsmooth convex optimization
Iskander Sabri Legheraba: Complexity of Newton-type methods with quadratic regularization for nonlinear least squares
Stéphane Le Menec: Approximation of mean field game equations by generative adversarial networks
Paul Mangold: Refined analysis of constant-step-size Federated Averaging and federated Richardson-Romberg extrapolation
Nicolas Masson: Weighted projection problems for a new crowd-motion model
Ivan Novikov: Stochastic games with vanishing stage duration and public signals
Khadidja Sabri: Evolution and trends in mathematical modeling tools
Yana Teplitskaya: Open problems on Steiner trees and maximal distance minimizers
Kunlai Wu: Entire torus-like minimizers for the Landau–de Gennes model with $\mathbb{S}^4$ constraint
12:30
Last Iterate Convergence for Uncoupled Learning in Zero-Sum Games with Bandit Feedback
-
Vianney Perchet
(
ENSAE & Criteo AI Lab
)
12:30 - 13:10
In this talk, I will introduce the problem of learning in zero-sum games, focusing on "last-iterate" convergence rather than the average convergence studied in the traditional literature (we argue that last-iterate convergence makes more sense). The interesting property is that the optimal rate is T^{-1/4}, which is quite unusual (and unexpected) in this literature.
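A quick experiment shows why last-iterate convergence is a different question from average convergence (a sketch with full-information exponential weights, a simpler setting than the bandit feedback of the talk): in matching pennies, the time-averaged strategies approach the equilibrium while the current iterates cycle away from it.

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies, value 0
eta, T = 0.1, 2000
x, y = np.array([0.6, 0.4]), np.array([0.4, 0.6])
x_sum, y_sum = np.zeros(2), np.zeros(2)
for _ in range(T):
    x_sum += x
    y_sum += y
    gx, gy = A @ y, -A.T @ x               # payoff vectors of pure actions
    x = x * np.exp(eta * gx); x /= x.sum() # exponential weights update
    y = y * np.exp(eta * gy); y /= y.sum()
x_avg, y_avg = x_sum / T, y_sum / T

def exploitability(p, q):
    # total gain available to the two players from best responses
    return (A @ q).max() - (p @ A).min()
```

The averaged strategies are nearly unexploitable, while the last iterates spiral toward the boundary of the simplex; uncoupled algorithms with last-iterate guarantees must behave quite differently.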
13:30
Lunch
13:30 - 14:30
14:50
Optimal control of systems described by measures: motivation, challenge and perspectives
-
Charles Bertucci
(
Université Paris-Dauphine & CNRS
)
14:50 - 15:30
I will motivate the study of optimal control problems of systems described by positive measures, namely for optimization and large deviations of mean field systems. I will explain why the associated Hamilton-Jacobi equation plays a crucial role in these problems, as well as the main mathematical challenges it raises. I will focus in particular on the difficulties arising when trying to prove a comparison principle for such PDEs, which is the crucial step in their mathematical analysis. I will then present some results on such equations, as well as some tools that have been developed in their study that might be of independent interest.
15:35
Propagation of a weak notion of log-concavity along generalized heat flows
-
Katharina Eichinger
(
Université Paris-Saclay & INRIA
)
15:35 - 16:15
A well-known consequence of the Prékopa–Leindler inequality is the preservation of log-concavity by the heat semigroup. This property, however, does not extend to more general semigroups. We therefore study a weaker notion of log-concavity that can be propagated along generalized heat semigroups, and derive several properties from it. Here I will mention the consequences concerning Lipschitz continuity of the Kim–Milman map, exponential convergence of Sinkhorn's algorithm, and log-semi-concavity of the ground state of Schrödinger operators with non-convex potentials. These properties are obtained as consequences of new weak-convexity propagation results for Hamilton–Jacobi–Bellman (HJB) equations. The proofs rely on a stochastic control interpretation and a second-order analysis with reflection coupling along the HJB characteristics. This is joint work with Louis-Pierre Chaintron and Giovanni Conforti.
16:15
Coffee break
16:15 - 16:45
16:45
The competitive spectral radius of families of nonexpansive mappings
-
Marianne Akian
(
Inria and CMAP
)
16:45 - 17:25
We introduce a new class of perfect-information repeated zero-sum games in which the payoff of one player is the escape rate of a switched dynamical system which evolves according to a nonexpansive nonlinear operator depending on the actions of both players. This is motivated by applications to population dynamics (growth maximization and minimization). Considering order-preserving finite-dimensional linear operators over the positive cone endowed with Hilbert's projective (semi-)metric or the Funk hemi-metric, we recover subclasses of matrix multiplication games, a 2-player extension of the joint spectral radius of sets of nonnegative matrices introduced by Asarin and coauthors (2016). We prove that escape rate games have a uniform value, which is characterized by a nonlinear eigenproblem. Then, we discuss the continuity and approximability of the value of the game with respect to the parameters. We show in particular that the competitive spectral radius of positive matrices can be approximated up to any accuracy. This is joint work with Stéphane Gaubert, Loïc Marchesini, and Ian Morris.
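As a one-player point of reference for these games (an elementary sketch with an invented helper, not the method of the talk), the joint spectral radius of a finite set of nonnegative matrices can be bracketed by examining finite products:

```python
import numpy as np
from itertools import product
from functools import reduce

def jsr_bounds(mats, max_len=6):
    """Elementary bounds on the joint spectral radius of a finite set of
    matrices: over all products P of length k, max rho(P)**(1/k) is a
    lower bound and max ||P||**(1/k) an upper bound (brute force in k)."""
    lo, hi = 0.0, np.inf
    for k in range(1, max_len + 1):
        best_rho = best_norm = 0.0
        for word in product(mats, repeat=k):
            P = reduce(np.matmul, word)
            best_rho = max(best_rho, np.abs(np.linalg.eigvals(P)).max())
            best_norm = max(best_norm, np.linalg.norm(P, 2))
        lo = max(lo, best_rho ** (1.0 / k))
        hi = min(hi, best_norm ** (1.0 / k))
    return lo, hi

# Classical pair whose joint spectral radius is the golden ratio,
# attained by the periodic word AB.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
lo, hi = jsr_bounds([A, B])
```

The brute-force enumeration is exponential in the word length; the talk's escape-rate characterization via a nonlinear eigenproblem is what makes efficient approximation possible in the two-player setting.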
17:30
A gradient flow for every c(x,y) cost: EVI-inspired convexity
-
Pierre-Cyril Aubin
(
ENPC
)
17:30 - 18:10
How can we go beyond the squared distance d^2 in optimization algorithms and flows in metric spaces? Replacing it with a general cost function c(x,y) and using a majorize-minimize framework, I will detail a generic class of algorithms encompassing Newton/mirror/natural/Riemannian gradient descent, Sinkhorn, and EM by reframing each of them as an alternating minimization for a different cost c(x,y). Rooted in cross-differences, the convergence theory, both to the infimum and to the continuous flow, is based on a (discrete) evolution variational inequality (EVI) which enjoys properties similar to those of the EVI with d^2 regularizer. This provides a theoretical framework for studying splitting schemes beyond the usual implicit Euler in gradient flows. This talk is based on the works https://arxiv.org/abs/2305.04917 with Flavien Léger (INRIA Paris), and https://arxiv.org/abs/2505.00559 with Giacomo Sodini and Ulisse Stefanelli (Uni Vienna).
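A minimal instance of the majorize-minimize viewpoint (toy quadratic with hand-chosen constants, not taken from the cited papers): with the cost c(x, y) = (L/2)||x - y||^2, minimizing the majorizer f(y) + <grad f(y), x - y> + c(x, y) at each step recovers gradient descent with step 1/L, and the objective decreases monotonically.

```python
import numpy as np

def f(x):
    return x[0] ** 2 + 4.0 * x[1] ** 2          # smooth convex toy objective

def grad_f(x):
    return np.array([2.0 * x[0], 8.0 * x[1]])

L = 8.0                                          # smoothness constant of f

def mm_step(y):
    # argmin_x <grad_f(y), x - y> + (L/2)||x - y||^2  ==  y - grad_f(y)/L,
    # i.e. one majorize-minimize step with cost c(x, y) = (L/2)||x - y||^2
    return y - grad_f(y) / L

x = np.array([3.0, -2.0])
values = [f(x)]
for _ in range(40):
    x = mm_step(x)
    values.append(f(x))
```

Swapping in another cost c(x, y) in place of the quadratic is exactly what yields mirror, natural, or Riemannian variants in the framework of the talk.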
Friday 28 November 2025
09:40
New advances in the complexity analysis of SGD
-
Guillaume Garrigos
(
LPSM
)
09:40 - 10:20
This talk will showcase new results about the Stochastic Gradient Descent (SGD) algorithm for solving convex and smooth stochastic problems. After introducing the algorithm and its main features (convergence rates vs complexity, the notion of interpolation), we will present two new results. In the first part, we ask ourselves: what are the best possible complexity bounds one can hope for with SGD? This question was, until now, completely open, and we will provide a first partial answer in the form of new bounds which are provably optimal, in a certain sense. We will not inflict a proof on the audience, but will discuss how we were led to this result with two main tools: a new Lyapunov analysis and the use of computer assistance (namely, the PEP framework). In the second part, we will show that SGD enjoys "last iterate" complexity guarantees, a property which was thought to be a distinctive feature of the SGD+momentum algorithm.
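A small experiment of the kind these complexity results describe (noiseless least squares with hand-chosen constants, i.e. the interpolation regime mentioned above): constant-step SGD on a smooth convex problem, where the last iterate itself converges.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star               # zero noise: the interpolation regime

def full_loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

w = np.zeros(d)
step = 0.05                  # constant step, small enough for these rows
for _ in range(4000):
    i = rng.integers(n)                      # sample one data point
    w -= step * (X[i] @ w - y[i]) * X[i]     # stochastic gradient step
last_iterate_loss = full_loss(w)
```

Under interpolation the stochastic gradients vanish at the optimum, so even the raw last iterate converges; outside this regime, quantifying last-iterate complexity is exactly the delicate question of the talk.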
10:25
Restarted Truncated unrolled networks to bridge the gap between Plug-and-Play and Unfolded Neural Networks for image restoration
-
Nelly Pustelnik
10:25 - 11:05
In recent years, deep learning methods have transformed the field of image restoration, achieving significantly higher reconstruction quality than traditional variational approaches. Many current strategies combine principles from both variational models and neural networks, forming what are known as model-based neural networks. Among these, two main frameworks stand out: unfolded neural networks and Plug-and-Play (PnP) methods. Unfolded networks emulate the iterations of proximal algorithms (also referred to as unrolled algorithms, e.g., unrolled forward-backward) to create task-specific end-to-end architectures, while PnP methods rely on pretrained denoisers to solve reconstruction problems without additional training. However, automatic differentiation in unfolded networks requires relatively shallow architectures (a small number of unrolled iterations) due to computational constraints, whereas PnP methods, though theoretically convergent, must be iterated until convergence and often underperform unfolded approaches. This work focuses on a specific training procedure for unfolded neural networks that still uses automatic differentiation while maintaining theoretical convergence guarantees. More precisely, our analysis shows that an unfolded neural network can be restarted to infer solutions of a variational problem with fixed parameters. Furthermore, we introduce the ReTune (Restarted Truncated unrolled networks) procedure to estimate the underlying parameters in a simple fashion, building on theoretical developments inspired by the bi-level literature (Deep Equilibrium, Jacobian-free propagation). A deeper analysis is provided in the context of unrolled forward-backward iterations to highlight the interplay between the network depth and the number of restarts in ReTune, based on the Lipschitz properties of the algorithmic scheme. In particular, the depth controls the approximation error of a Jacobian-free step, whereas restarting the unrolled neural network allows one to reach the equilibrium point. This theoretical analysis is supported by numerical experiments showing that the proposed ReTune procedure goes beyond traditional learning schemes, improves performance compared to PnP strategies, and provides stronger guarantees than standard unfolded approaches.
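A schematic of the restart idea (plain ISTA with hand-fixed rather than learned parameters; ReTune itself learns them): composing R restarts of a depth-K truncated unrolled network reproduces a single deep run of R*K forward-backward iterations, so restarting lets a shallow network reach the fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = rng.normal(size=m)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the data-fit term

def soft(v, t):                            # prox of lam * l1-norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_fb(x, depth):
    """`depth` forward-backward (ISTA) iterations: a truncated unrolled
    network whose weights are hand-fixed instead of learned."""
    for _ in range(depth):
        x = soft(x - step * A.T @ (A @ x - b), step * lam)
    return x

deep = unrolled_fb(np.zeros(n), 500)       # one deep run
x = np.zeros(n)
for _ in range(50):                        # 50 restarts of a depth-10 net
    x = unrolled_fb(x, 10)
gap = np.linalg.norm(x - deep)             # identical composition: gap = 0
```

With fixed weights the equivalence is exact; the interesting trade-off studied in the talk appears once the per-layer parameters are learned and the depth must balance approximation error against the number of restarts.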
11:05
Coffee break
11:05 - 11:45
11:45
Comparison principles for variational problems and applications to optimal transport
-
Maxime Sylvestre
(
Université Paris Dauphine PSL CEREMADE
)
11:45 - 12:25
We study variational problems in Banach spaces involving submodular energies. We extend the notion of substitutability to this infinite-dimensional setting and show that it is dual to submodularity. These two notions allow us to derive comparison principles in an abstract way. We then apply our results to optimal transport, entropic optimal transport, and unbalanced optimal transport. We deduce comparison principles for Kantorovich, Schrödinger, and unbalanced potentials, as well as for the associated JKO schemes.
12:30
The Entropic JKO Scheme
-
Aymeric Baradat
12:30 - 13:10
The JKO scheme (named after Jordan, Kinderlehrer, and Otto, 1996) is an implicit Euler-type scheme that provides a variational way to construct weak solutions to nonlinear diffusion equations, relying on their gradient flow structure in the Wasserstein space. In 2015, Peyré proposed an entropic version which, although it only yields approximate solutions to the original problem, leads to remarkably efficient numerical computations based on the Sinkhorn algorithm. But what exactly does this approximate scheme compute? Can we estimate the error it introduces? Does this approach also have interesting theoretical implications? These are the questions I will address in my presentation. This talk is based on joint work with Sofiane Cherf, Anastasiia Hraivoronska, and Filippo Santambrogio.
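The computational kernel the entropic scheme relies on is the Sinkhorn algorithm; here is a minimal version for entropic optimal transport between two histograms (the discretization and parameters are chosen purely for illustration):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, iters=2000):
    """Sinkhorn scaling iterations for entropic OT between histograms
    mu and nu with cost matrix C: the building block behind Peyre's
    entropic version of the JKO scheme."""
    K = np.exp(-C / eps)
    a = np.ones_like(mu)
    for _ in range(iters):
        b = nu / (K.T @ a)       # match the column marginal
        a = mu / (K @ b)         # match the row marginal
    return a[:, None] * K * b[None, :]   # entropic transport plan

x = np.linspace(0.0, 1.0, 40)
C = (x[:, None] - x[None, :]) ** 2       # quadratic cost on a 1D grid
mu = np.exp(-((x - 0.3) ** 2) / 0.05); mu /= mu.sum()
nu = np.exp(-((x - 0.7) ** 2) / 0.05); nu /= nu.sum()
P = sinkhorn(mu, nu, C)
```

In the entropic JKO scheme, one of the two marginal projections is replaced by a proximal step for the driving energy, which is what makes each time step of the diffusion equation computable by the same kind of scaling iterations.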
13:30
Lunch
13:30 - 14:30
14:50
On a quantitative isoperimetric inequality in the plane
-
Gisella Croce
(
Université Paris 1
)
14:50 - 15:30
This talk will present recent advances on a quantitative isoperimetric inequality in the plane, with a geometric constraint. More precisely, if $\Omega$ is a bounded open set, we study an inequality of the type $$ \delta(\Omega)\geq C \lambda_0^2(\Omega,B)\,, $$ where $\delta(\Omega)$ is the isoperimetric deficit (the difference between the perimeter of $\Omega$ and the perimeter of the ball with the same measure), and $\lambda_0$ is the barycentric asymmetry (the area of the symmetric difference between $\Omega$ and the ball of the same area centered at the barycenter of $\Omega$). We show the existence of an optimal set among all sets contained in a ball of fixed radius. We also give qualitative properties of the minimizer, in particular the fact that its boundary is not made up of circular arcs. These results are joint work with Antoine Henrot.
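For concreteness, the isoperimetric deficit appearing in the inequality can be computed directly for simple sets (a worked example, not from the talk): the disc has zero deficit, while the unit square has deficit 4 - 2*sqrt(pi), roughly 0.455.

```python
import numpy as np

def isoperimetric_deficit(perimeter, area):
    """delta(Omega) = P(Omega) - P(B), where B is the disc with the same
    area as Omega; nonnegative by the isoperimetric inequality, and zero
    exactly when Omega is a disc."""
    r = np.sqrt(area / np.pi)          # radius of the disc of equal area
    return perimeter - 2.0 * np.pi * r

square_deficit = isoperimetric_deficit(4.0, 1.0)          # unit square
disc_deficit = isoperimetric_deficit(2.0 * np.pi, np.pi)  # unit disc
```

The quantitative inequality of the talk bounds this deficit from below by the squared barycentric asymmetry, which measures how far the set is from a centered disc.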
15:35
Training dynamics of ReLU Networks: a Path-lifting Perspective
-
Rémi Gribonval
(
Inria
)
15:35 - 16:15
Can we hope to decipher the role of the well-known rescaling symmetries of ReLU network parameterizations in their training dynamics? The talk will explore recent advances in this direction that exploit the path-lifting, a rescaling-invariant polynomial representation of the parameters of general ReLU networks. Despite its combinatorial dimension, the path-lifting turns out to be not only a convenient mathematical analysis tool: it also gives rise to a computational toolbox revealing useful properties of the function realized by a ReLU network, from Lipschitz regularity to convexity.