16–17 Dec 2021
Online
Timezone: Europe/Paris

Talk by Franck Gabriel

17 Dec 2021, 09:00
1h 30m
Online

Via Zoom

Description

In recent years, randomness has come to play an increasingly important role in machine learning. Through two examples, we will see that it can be used to [1] "select" well-behaved regions of parameter space and [2] provide an easier optimization problem.

[1] In deep learning, I will present the NTK (Neural Tangent Kernel) regime: under a suitable "wide" random initialization, neural networks of large width can be shown to converge during training, and the dynamics of the output function can be described explicitly.
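
As a minimal illustration (my own toy example, not part of the talk materials), the sketch below computes the empirical neural tangent kernel K(x, x') = <grad_theta f(x), grad_theta f(x')> of a one-hidden-layer ReLU network at a random initialization, and shows that it concentrates around a fixed value as the width m grows; the specific network and inputs are illustrative assumptions.

```python
import numpy as np

def empirical_ntk(x1, x2, W, a, m):
    # One-hidden-layer network with NTK scaling, scalar input x:
    #   f(x) = (1 / sqrt(m)) * sum_j a_j * relu(W_j * x)
    def grads(x):
        pre = W * x                              # pre-activations, shape (m,)
        dW = a * (pre > 0) * x / np.sqrt(m)      # df/dW_j
        da = np.maximum(pre, 0.0) / np.sqrt(m)   # df/da_j
        return np.concatenate([dW, da])
    # Empirical NTK: inner product of parameter gradients at x1 and x2
    return grads(x1) @ grads(x2)

rng = np.random.default_rng(0)
x1, x2 = 0.7, 0.3
for m in (100, 10_000, 1_000_000):
    W = rng.standard_normal(m)   # i.i.d. N(0, 1) random initialization
    a = rng.standard_normal(m)
    print(f"width {m:>9}: K(x1, x2) = {empirical_ntk(x1, x2, W, a, m):.4f}")
```

As the width grows, the printed kernel values fluctuate less and less around a deterministic limit; in the NTK regime, this fixed kernel governs the training dynamics of the output function.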

[2] In kernel methods, instead of looking for an optimal function in the RKHS (reproducing kernel Hilbert space), one can look for an optimal function in a random vector space: this is the random feature method. After explaining why it provides an approximation to kernel methods, I will present the implicit bias that finite sampling induces on the output function.
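
As a hedged sketch (a toy example of mine, not the speaker's code), the snippet below implements one standard instance of the random feature method, random Fourier features for the Gaussian kernel, and solves ridge regression in the resulting random feature space; the data, dimensions, and regularization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, D, lam = 200, 5, 500, 1e-3

X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])                          # toy regression target

# Random Fourier features: phi(x) = sqrt(2/D) * cos(W x + b) approximates
# the Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2) in expectation
W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
Phi = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Ridge regression in the random feature space: a finite-dimensional
# surrogate for kernel ridge regression in the RKHS
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)
print("train MSE:", np.mean((Phi @ theta - y) ** 2))

# Sanity check: phi(x) . phi(y) against the exact Gaussian kernel
approx = Phi[0] @ Phi[1]
exact = np.exp(-np.linalg.norm(X[0] - X[1]) ** 2 / 2.0)
print(f"kernel approximation: {approx:.4f} vs exact {exact:.4f}")
```

The number of features D controls how closely the random space approximates the kernel method; keeping D finite is precisely the setting in which the implicit bias mentioned above appears.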

Presentation materials

No documents.