Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of their optimization initialization. This "implicit bias" is believed to be responsible for some favorable properties of the trained models and could explain their good generalization. In this work, we present the definition and properties of "conservation laws": quantities conserved during gradient flows of a given model (e.g. of a ReLU network with a given architecture), for any training data and any loss. We then explain how to find the exact number of independent conservation laws via Lie algebra computations. This procedure recovers the conservation laws already known for linear and ReLU neural networks under Euclidean gradient flows, and proves that there are no others. We also identify new laws for certain flows with momentum and/or non-Euclidean geometries.
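As a concrete illustration (not taken from the abstract, and using made-up data), the classic "balancedness" conservation law for a two-layer linear model f(x) = v·u·x can be checked numerically: under the Euclidean gradient flow of any loss depending only on the product u·v, the quantity u² − v² is exactly conserved, and small-step gradient descent preserves it up to O(step size) error. A minimal sketch:

```python
# Sketch (illustrative names and data): balancedness law u^2 - v^2 for
# the two-layer linear model f(x) = v * u * x trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 3.0 * x + 0.1 * rng.normal(size=50)  # synthetic regression data

u, v = 0.5, 1.5           # parameters of f(x) = v * u * x
law0 = u**2 - v**2        # conserved quantity at initialization

lr = 1e-3
for _ in range(5000):
    r = v * u * x - y                  # residuals of the squared loss
    grad_u = np.mean(2 * r * v * x)    # dL/du
    grad_v = np.mean(2 * r * u * x)    # dL/dv
    u -= lr * grad_u
    v -= lr * grad_v

print("product u*v:", u * v)                         # fitted slope, near 3
print("drift of u^2 - v^2:", (u**2 - v**2) - law0)   # small: approximately conserved
```

Note that u and v individually move substantially during training, yet u² − v² barely drifts; this is the kind of quantity, here for a toy model, that the Lie algebra computations characterize exhaustively for general architectures.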
Joint work with Gabriel Peyré and Rémi Gribonval. Associated papers: https://arxiv.org/abs/2307.00144 https://arxiv.org/abs/2405.12888