This talk explores the connection between classical inverse problems in heat equations and recent developments in generative AI.
In the first part, we consider the initial source identification problem for the heat equation, a prototypical ill-posed inverse problem. Traditional Tikhonov-type regularization methods fail to eliminate the intrinsic ill-posedness, particularly over long time horizons. By analyzing the moment dynamics of heat flow and leveraging representer theorems, we introduce a moment-based method that achieves strong numerical performance. Furthermore, we establish convergence rates with respect to the moment order, offering a rigorous understanding of its stability and accuracy.
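For concreteness, the inverse problem and the source of its ill-posedness can be sketched as follows (the notation is illustrative and ours, not quoted from the talk):

```latex
% Forward model: heat equation with unknown initial source u_0
\[
\partial_t u - \Delta u = 0 \quad \text{in } \mathbb{R}^d \times (0,T),
\qquad u(\cdot,0) = u_0 .
\]
% Inverse problem: given the terminal observation u(\cdot,T), recover u_0.
% In Fourier variables, \hat{u}(\xi,T) = e^{-T|\xi|^2}\,\hat{u}_0(\xi),
% so inverting the heat semigroup amplifies the frequency \xi by e^{T|\xi|^2};
% the amplification worsens exponentially with the horizon T, which is why
% long time horizons are the hardest regime for regularization.
```

One plausible reading of why moments help: integrating the equation against a monomial $x^\alpha$ gives $\frac{d}{dt}\int x^\alpha u\,dx = \int \Delta(x^\alpha)\,u\,dx$, so the spatial moments satisfy a closed triangular ODE system in time (for instance, total mass is conserved) and can be transported back to $t=0$ stably, unlike the full solution.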
In the second part, we turn to diffusion models in modern generative AI, which are deeply connected to the preceding inverse problem. The generative process can be interpreted through a Fokker–Planck representation of the backward heat flow. Within this framework, the Li–Yau estimate provides a natural energy stability bound. Building on the entropy stability of the generative process, we show that the generated samples remain confined to the data manifold, ensuring the fidelity and theoretical consistency of the generative model.
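To make the connection explicit, one standard way to write the forward/backward picture for diffusion models is the following (our notation, with an Ornstein–Uhlenbeck forward process as an illustrative choice):

```latex
% Forward noising (Ornstein--Uhlenbeck) and its Fokker--Planck equation:
\[
\mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t ,
\qquad
\partial_t p_t = \nabla\!\cdot\!\left(x\,p_t\right) + \Delta p_t .
\]
% Reverse-time generation driven by the score \nabla \log p_t:
\[
\mathrm{d}Y_s = \bigl[\,Y_s + 2\,\nabla \log p_{T-s}(Y_s)\,\bigr]\mathrm{d}s
+ \sqrt{2}\,\mathrm{d}\bar{W}_s .
\]
% Undoing the injected noise amounts to running a heat-type flow backward
% in time -- the same ill-posed mechanism as in the source identification
% problem of Part 1.
```

Here the score term $\nabla \log p_{T-s}$ is what tames the backward heat flow; stability estimates for it (such as Li–Yau-type bounds) are a natural way to control the generative dynamics.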
This talk is based on joint work with Enrique Zuazua (FAU Erlangen–Nürnberg).