Speaker
Description
The Polyak-Łojasiewicz (PL) constant of a given function exactly characterizes the exponential rate of convergence of gradient flow, uniformly over initializations, and has attracted major recent interest in optimization and machine learning because it is strictly weaker than strong convexity yet implies many of the same guarantees. In the world of sampling, the log-Sobolev inequality plays an analogous role, governing the convergence of Langevin dynamics from arbitrary initialization in Kullback-Leibler divergence. In this talk, we present a new connection between optimization and sampling by showing that, under mild assumptions, the PL constant is exactly the low-temperature limit of the rescaled log-Sobolev constant. Based on joint work with Sinho Chewi.
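As a rough guide to the objects involved, the following LaTeX sketch records one standard way to write the two inequalities and the limit statement; the specific normalizations, the Gibbs measure notation, and the exact form of the limit are assumptions chosen for illustration and are not taken from the talk.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Polyak-Lojasiewicz (PL) inequality for a smooth f with minimum value f^*:
% the PL constant mu is the largest constant such that
\[
  \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr)
  \qquad \text{for all } x.
\]
% Under this inequality, gradient flow \dot{x}_t = -\nabla f(x_t) converges
% exponentially, uniformly over the initialization x_0:
\[
  f(x_t) - f^* \;\le\; e^{-2\mu t}\,\bigl(f(x_0) - f^*\bigr).
\]

% Log-Sobolev inequality (LSI) for the Gibbs measure at inverse temperature beta,
% \pi_\beta \propto e^{-\beta f}, with constant \rho_\beta: if \mu_t is the law of
% the Langevin diffusion dX_t = -\beta \nabla f(X_t)\,dt + \sqrt{2}\,dB_t, then
% Kullback-Leibler divergence contracts from arbitrary initialization,
\[
  \mathrm{KL}(\mu_t \,\|\, \pi_\beta) \;\le\;
  e^{-2\rho_\beta t}\,\mathrm{KL}(\mu_0 \,\|\, \pi_\beta).
\]

% In this notation, the statement in the abstract reads: under mild assumptions,
% the rescaled log-Sobolev constant converges to the PL constant in the
% low-temperature limit,
\[
  \lim_{\beta \to \infty} \frac{\rho_\beta}{\beta} \;=\; \mu.
\]

\end{document}

For a sanity check under these conventions, take the strongly convex quadratic f(x) = (alpha/2)||x||^2: its PL constant is mu = alpha, while pi_beta is Gaussian with log-Sobolev constant rho_beta = alpha*beta, so rho_beta/beta = alpha at every temperature.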