On the Closed-Form of Flow Matching: Generalization Does Not Arise from Target Stochasticity
Salle K. Johnson, 1R3, first floor
The first part of the talk will be devoted to an introduction to Flow Matching. In the remaining part, we will try to understand why recent generative methods -- namely diffusion and flow matching techniques -- generalize so effectively. Among the proposed explanations are the inductive biases of deep learning architectures and the stochastic nature of the conditional flow matching loss. In this talk, we rule out the latter -- the noisy nature of the loss -- as a primary contributor to generalization in flow matching. First, we show empirically that in high-dimensional settings, the stochastic and closed-form versions of the flow matching loss take nearly identical values. Then, using state-of-the-art flow matching models on standard image datasets, we demonstrate that both variants achieve comparable statistical performance, with the surprising observation that training on the closed-form loss can even improve performance.
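To make the two loss variants concrete, here is a minimal NumPy sketch (not the authors' code). It assumes the linear interpolation path x_t = (1 - t) x_0 + t x_1 with a standard Gaussian source x_0 and an empirical (finite-sample) data distribution, in which case the stochastic conditional target is the random vector x_1 - x_0, while the closed-form target is the exact marginal velocity: a softmax-weighted average of the conditional velocities (x_1^i - x) / (1 - t) over the training points. All function names below are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def sample_interpolants(x1, t, rng):
    # Linear (rectified/OT) path: x_t = (1 - t) x0 + t x1, with x0 ~ N(0, I).
    x0 = rng.standard_normal(x1.shape)
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    return x0, xt

def stochastic_target(x0, x1):
    # Conditional flow matching target: the noisy regression target x1 - x0.
    return x1 - x0

def closed_form_velocity(xt, t, data):
    # Exact marginal velocity for an empirical data distribution.
    # For this path, p_t(x | x1_i) = N(t * x1_i, (1 - t)^2 I), so the marginal
    # velocity is a softmax-weighted average of the conditional velocities
    # (x1_i - x) / (1 - t), with weights given by the Gaussian likelihoods.
    diff = xt[:, None, :] - t[:, None, None] * data[None, :, :]   # (B, N, d)
    log_w = -0.5 * (diff ** 2).sum(-1) / (1.0 - t)[:, None] ** 2  # (B, N)
    w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    cond_v = (data[None, :, :] - xt[:, None, :]) / (1.0 - t)[:, None, None]
    return (w[:, :, None] * cond_v).sum(axis=1)

# Toy run: draw interpolants and compare the noisy target with the exact one.
d, n, batch = 1000, 64, 8
data = rng.standard_normal((n, d))
idx = rng.integers(n, size=batch)
t = rng.uniform(0.0, 0.95, size=batch)  # keep t away from 1 (division by 1 - t)
x0, xt = sample_interpolants(data[idx], t, rng)
gap = stochastic_target(x0, data[idx]) - closed_form_velocity(xt, t, data)
print(np.mean(gap ** 2))

Training against closed_form_velocity replaces the noisy per-sample target with its exact conditional expectation given (x_t, t); the talk's claim is that removing this source of stochasticity does not degrade performance, and can even improve it.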
Talk based on https://arxiv.org/abs/2506.03719