Maths-Info Seminar

Maths-Info seminar session with Bruno Grenet and Georg Loho

by Bruno Grenet (Université Grenoble-Alpes) and Georg Loho (University of Twente)

Salle du Conseil (LAAS)


Directions to the Salle du Conseil: from the main entrance of LAAS, go straight ahead to the hall, then take the stairs up to the first floor. The Salle du Conseil is in the corridor to the right of the stairs.
Description

The seminar takes place in the Salle du Conseil (LAAS-CNRS, 7 avenue du Colonel Roche); directions to the room are given above. More information on how to reach LAAS is available here: https://www.laas.fr/public/fr/comment-se-rendre-au-laas .
Before coming to LAAS, please fill in a visitor declaration: https://www.laas.fr/visiteurs/ (visit coordinator: Mateusz Skomra, POP team).

15:00: Georg Loho (University of Twente, https://lohomath.github.io/)

Title: Lower bounds on neural network depth via lattice polytopes

Abstract: We study the set of functions representable by ReLU neural networks, a standard model in the machine learning community. It is an open question whether this set strictly increases with the number of layers used. We prove that this is indeed the case if one considers neural networks with only integer weights.
We show that at least log(n) layers are required to compute the maximum of n numbers, matching known upper bounds. To obtain our result, we first use previously discovered connections between neural networks and tropical geometry to translate the problem into the language of Newton polytopes. These Newton polytopes are lattice polytopes arising from alternately taking convex hulls and Minkowski sums. Our depth lower bounds then follow from a parity argument for the volume of faces of such polytopes, which might be of independent interest. This is joint work with Christian Haase and Christoph Hertrich.
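
As a concrete illustration of the matching upper bounds mentioned in the abstract, here is a minimal Python sketch (ours, not from the talk; all names are hypothetical) computing the maximum of n numbers in ceil(log2(n)) rounds of pairwise maxima, each pairwise maximum realizable by a ReLU gadget since max(a, b) = b + ReLU(a - b).

def relu(x):
    # ReLU activation: max(x, 0).
    return max(x, 0.0)

def max2(a, b):
    # One ReLU gadget: max(a, b) = b + ReLU(a - b).
    return b + relu(a - b)

def relu_max(values):
    # Halve the list of candidates by pairwise maxima;
    # ceil(log2(n)) rounds suffice, matching the lower bound of the talk.
    vals, rounds = list(values), 0
    while len(vals) > 1:
        nxt = [max2(vals[i], vals[i + 1]) for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])  # odd element passes through unchanged
        vals, rounds = nxt, rounds + 1
    return vals[0], rounds

print(relu_max([3.0, -1.0, 7.5, 2.0, 5.0]))  # (7.5, 3); note ceil(log2(5)) = 3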

14:00: POSTPONED to January! Bruno Grenet (Université Grenoble-Alpes, https://membres-ljk.imag.fr/Bruno.Grenet/home.html)

Title: Quasi-linear sparse interpolation over the integers

Abstract: Sparse interpolation is the task of learning the sparse representation of a polynomial, given a way to evaluate it at chosen points. Compared to standard (dense) interpolation, the goal is to use a number of evaluation points that is linear in the number of nonzero monomials rather than in the degree. This problem has a rich history due to its relevance for polynomial factorization, system solving, and more recently sparse polynomial arithmetic.
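
To make the gap concrete, consider a hypothetical example (ours, not from the talk): a polynomial of degree 10^6 with only three nonzero terms. Dense interpolation needs deg + 1 = 1000001 evaluation points, while sparse interpolation aims for a number of points proportional to the three terms. A minimal Python sketch of the sparse representation:

# Sparse representation: a map from exponent to nonzero coefficient.
f = {1000000: 3, 3: -5, 0: 2}   # f(x) = 3*x^1000000 - 5*x^3 + 2

def evaluate(poly, x):
    # Term-by-term evaluation: the cost grows with the number of terms,
    # not with the degree.
    return sum(c * x**e for e, c in poly.items())

assert evaluate(f, 1) == 3 - 5 + 2        # sanity check at x = 1
print(len(f), "terms vs degree", max(f))  # 3 terms vs degree 1000000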

Many algorithms have been designed, depending on the input representation (black-box or arithmetic circuit, for instance) and the ring of coefficients. Black-box algorithms à la Prony reconstruct the polynomial from its evaluations on a geometric progression, and usually run in exponential time. Arithmetic-circuit algorithms à la Garg and Schost reconstruct the polynomial from its reductions modulo binomials, and run in polynomial time.
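
To give a flavour of the Garg-Schost side, here is a toy Python sketch (our illustration, not the authors' code): reducing f modulo the binomial x^p - 1 folds every exponent e to e mod p, so reductions modulo several small primes constrain the true exponents, provided not too many terms collide.

def reduce_mod_binomial(poly, p):
    # Compute poly mod (x^p - 1): each exponent e becomes e mod p,
    # and coefficients of colliding terms are added together.
    out = {}
    for e, c in poly.items():
        out[e % p] = out.get(e % p, 0) + c
    return {e: c for e, c in out.items() if c != 0}

f = {1000000: 3, 3: -5, 0: 2}        # f(x) = 3*x^1000000 - 5*x^3 + 2
print(reduce_mod_binomial(f, 7))     # {1: 3, 3: -5, 0: 2}: 10^6 folds to 1 mod 7
print(reduce_mod_binomial(f, 17))    # {9: 3, 3: -5, 0: 2}: 10^6 folds to 9 mod 17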

In joint work with Pascal Giorgi, Armelle Perret du Cray and Daniel S. Roche, we combine both techniques to obtain the first quasi-linear-time algorithm, which works for polynomials over the integers. In my talk, I will give an overview of the two main techniques, and then explain how we combined them to obtain our new algorithm.