Speaker
Description
In this talk, we explore the convergence properties of inexact Jordan-Kinderlehrer-Otto (JKO) schemes and proximal-gradient algorithms in Wasserstein spaces. While the classical JKO scheme assumes that each minimization step is solved exactly, practical implementations rely on approximate solutions due to computational constraints. We analyze two types of inexactness: errors in the Wasserstein distance term and errors in the evaluation of the functional. We establish rigorous convergence guarantees under controlled error conditions.
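For orientation, the exact JKO update for a functional F over probability measures with stepsize τ > 0, together with one natural way to formalize an ε_k-inexact step (a generic sketch of an error model; the precise error conditions on the distance and functional terms are those discussed in the talk), reads

\[
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \left\{ F(\rho) + \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \right\},
\qquad
F(\rho_{k+1}) + \frac{1}{2\tau}\, W_2^2(\rho_{k+1}, \rho_k) \le \inf_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \left\{ F(\rho) + \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \right\} + \varepsilon_k.
\]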
Beyond the inexact setting, we also generalize the convergence results to varying stepsizes. This generalization allows for a broader class of stepsize choices, addressing scenarios where the stepsize evolves dynamically rather than remaining fixed. Our analysis extends previous approaches, providing new insights into discrete Wasserstein gradient flows. These findings contribute to the theoretical understanding of approximate optimization methods in Wasserstein spaces, with possible implications for applications in sampling, PDEs, and machine learning.
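In the varying-stepsize setting, the same scheme is run with a sequence τ_k > 0 in place of a fixed τ (a sketch; the admissible conditions on the stepsize sequence are part of the presented analysis):

\[
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho \in \mathcal{P}_2(\mathbb{R}^d)} \left\{ F(\rho) + \frac{1}{2\tau_k}\, W_2^2(\rho, \rho_k) \right\}.
\]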