5 September 2022 to 9 December 2022
IHP
Europe/Paris timezone
Financial support for participation in the quarter is now closed

Mikhail Belkin - Neural networks, wide and deep, singular kernels and Bayes optimality

3 Oct 2022, 16:30
1h
Amphitheater Hermite, IHP

Description

Wide and deep neural networks are used in many important practical settings.
In this talk I will discuss some aspects of width and depth related to optimization and generalization.
I will first discuss what happens when neural networks become infinitely wide,
giving a general result for the transition to linearity (i.e., showing that neural networks become linear functions of parameters) for a broad class of wide neural networks corresponding to directed graphs.
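A minimal numerical sketch of this transition to linearity, assuming a toy two-layer tanh network with 1/sqrt(m) output scaling (an illustrative assumption; the talk concerns a much broader class of architectures): as the width m grows, the network output stays closer to its first-order Taylor expansion in the parameters around initialization.

import numpy as np

rng = np.random.default_rng(0)
d, R = 10, 1.0                       # input dimension and perturbation radius (assumed)
x = rng.normal(size=d) / np.sqrt(d)  # a single input with roughly unit norm

def f_and_grads(W, v, x):
    # two-layer network f = v^T tanh(W x) / sqrt(m) and its parameter gradients
    h = np.tanh(W @ x)
    m = v.size
    f = v @ h / np.sqrt(m)
    dW = np.outer(v * (1.0 - h**2), x) / np.sqrt(m)
    dv = h / np.sqrt(m)
    return f, dW, dv

for m in (10, 100, 1000, 10000):
    W, v = rng.normal(size=(m, d)), rng.normal(size=m)
    f0, dW, dv = f_and_grads(W, v, x)
    # random parameter perturbation with fixed Euclidean norm R
    GW, gv = rng.normal(size=(m, d)), rng.normal(size=m)
    scale = R / np.sqrt((GW**2).sum() + (gv**2).sum())
    GW, gv = scale * GW, scale * gv
    f1, _, _ = f_and_grads(W + GW, v + gv, x)
    linearized = f0 + (dW * GW).sum() + dv @ gv
    print(f"m = {m:6d}   |f(w0 + delta) - linearization| = {abs(f1 - linearized):.2e}")

The printed deviation shrinks as m increases, which is the sense in which the network becomes a linear function of its parameters in a neighborhood of initialization.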
I will then proceed to the question of depth, showing an equivalence between infinitely wide and deep fully connected networks trained with gradient descent and Nadaraya-Watson predictors based on certain singular kernels.
Using this connection we show that for certain activation functions these wide and deep networks are (asymptotically) optimal for classification but, interestingly, never for regression.
Based on joint work with Chaoyue Liu, Adit Radhakrishnan, Caroline Uhler and Libin Zhu.
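A minimal sketch of a Nadaraya-Watson predictor with a singular kernel, here K(x, z) = ||x - z||^(-alpha) with alpha = 2 (an illustrative choice, not the specific kernels from the talk): because the weights diverge as x approaches a training point, the predictor interpolates the training labels while remaining a weighted average of them elsewhere.

import numpy as np

def nadaraya_watson_singular(x, X_train, y_train, alpha=2.0):
    # weighted average of training labels with singular weights ||x - x_i||^(-alpha)
    dists = np.linalg.norm(X_train - x, axis=1)
    hit = dists == 0.0
    if hit.any():                       # exactly at a training point the weight diverges,
        return float(y_train[hit][0])   # so the predictor returns that point's label
    w = dists ** (-alpha)
    return float(w @ y_train / w.sum())

# toy usage on random data (hypothetical)
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=20).astype(float)
print(nadaraya_watson_singular(X[0], X, y))                 # interpolates: equals y[0]
print(nadaraya_watson_singular(rng.normal(size=3), X, y))   # a value between 0 and 1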

Presentation materials

No documents.