Séminaire de Statistique et Optimisation

Sparsity: a value for the future of frugal learning?

par Rémi Gribonval (INRIA, ENS Lyon)

Salle K. Johnson, 1er étage (1R3)

Description

Promoting sparse connections in neural networks is a natural way to control their complexity and allow flexible tradeoffs between performance and "frugal" resource consumption. In inverse problems and variable selection, sparsity has indeed led to well-mastered algorithms with provably good performance and bounded complexity. To what extent can we leverage this body of knowledge in a deep learning context? Through an overview of recent explorations around the theme of deep sparsity, I will compare and contrast classical sparse regularization for inverse problems with multilayer sparse regularization. Along the way, I will notably highlight the role of rescaling invariances in deep ReLU parameterizations.
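To make the rescaling invariance mentioned above concrete, here is a minimal numpy sketch (an illustration, not code from the talk): because the ReLU is positively homogeneous, relu(c·z) = c·relu(z) for any c > 0, multiplying the rows of one layer by positive factors and dividing the corresponding columns of the next layer by the same factors leaves the network's function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def f(x, W1, W2):
    # A simple two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
    return W2 @ relu(W1 @ x)

W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(2, 5))

# Rescaling invariance: for any diagonal D with positive entries,
# relu(D @ z) = D @ relu(z), so the parameterizations
# (W1, W2) and (D @ W1, W2 @ D^{-1}) define the same function.
d = rng.uniform(0.5, 2.0, size=5)
W1_scaled = np.diag(d) @ W1
W2_scaled = W2 @ np.diag(1.0 / d)

x = rng.normal(size=3)
assert np.allclose(f(x, W1, W2), f(x, W1_scaled, W2_scaled))
```

This invariance means that the "size" of the weights in any single layer is not well defined on its own, which is one reason naive layerwise sparsity-promoting penalties behave differently from their classical single-layer counterparts.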