Speaker
Description
Many problems in machine learning can be framed as variational problems that minimize the relative entropy between two probability measures. Numerous recent works have exploited the connection between the (Otto-)Wasserstein gradient flow of the Kullback-Leibler (KL) divergence and various sampling, Bayesian inference, and generative modeling algorithms. In this talk, I will first contrast the Wasserstein flow with the Fisher-Rao flows of these divergences, and showcase their distinct analytical properties under different relative-entropy driving energies, including the reverse and forward KL divergences. Building upon recent advances in the mathematical foundations of Hellinger-Kantorovich (HK, a.k.a. Wasserstein-Fisher-Rao) gradient flows, I will then present an analysis of the HK flows and discuss their implications for computational algorithms in machine learning and optimization.
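For orientation, here is a minimal sketch of the three flows mentioned above, written for the reverse KL driving energy with target density proportional to e^{-V}; signs and normalization conventions vary across the literature and may differ from the speaker's, and the Fisher-Rao part is given in its mass-preserving (spherical) form.

% Reverse KL driving energy with target density \pi \propto e^{-V}
\[
\mathcal{F}(\rho) = \mathrm{KL}(\rho \,\|\, \pi) = \int \rho \log\frac{\rho}{\pi} \,\mathrm{d}x .
\]

% Wasserstein (Otto) gradient flow of \mathcal{F}: the Fokker--Planck equation
\[
\partial_t \rho = \nabla \cdot \Big( \rho \, \nabla \log\frac{\rho}{\pi} \Big)
               = \Delta \rho + \nabla \cdot \big( \rho \, \nabla V \big).
\]

% Fisher--Rao gradient flow of \mathcal{F}: mass-preserving reaction (birth--death) dynamics
\[
\partial_t \rho = - \rho \Big( \log\frac{\rho}{\pi} - \mathbb{E}_{\rho}\Big[ \log\frac{\rho}{\pi} \Big] \Big).
\]

% Hellinger--Kantorovich (Wasserstein--Fisher--Rao) gradient flow: transport plus reaction
\[
\partial_t \rho = \nabla \cdot \Big( \rho \, \nabla \log\frac{\rho}{\pi} \Big)
               - \rho \Big( \log\frac{\rho}{\pi} - \mathbb{E}_{\rho}\Big[ \log\frac{\rho}{\pi} \Big] \Big).
\]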