Solenne Gaucher (École polytechnique), 01/04/2025 10:00
Artificial intelligence (AI) is increasingly shaping the decisions that affect our lives—from hiring and education to healthcare and access to social services. While AI promises efficiency and objectivity, it also carries the risk of perpetuating and even amplifying societal biases embedded in the data used to train these systems. Many real-world examples highlight the dangers of relying on...
Nicolas Vayatis (ENS Paris-Saclay), 01/04/2025 11:20
In this talk, we present a practical solution to the lack of prediction diversity recently observed when deep learning approaches are used out-of-distribution. Since this issue is mainly related to a lack of weight diversity, we introduce the maximum entropy principle for the weight distribution, coupled with the standard, task-dependent, in-distribution data-fitting term. We prove...
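To make the idea concrete, here is a minimal sketch, not the authors' method: a diagonal-Gaussian distribution over the weights of a linear model is trained on a data-fitting term minus an entropy bonus, so the entropy term keeps the weight distribution from collapsing to a point. The model, the entropy weight `lam`, and all step sizes are illustrative assumptions.

```python
import numpy as np

# Sketch: maximum-entropy weight distribution for a linear model.
# Learn a diagonal Gaussian w ~ N(mu, diag(exp(2 * rho))) by minimizing
#     E[||X w - y||^2] - lam * H(w),
# where the entropy H(w) = sum(rho) + const for this Gaussian family.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(n)

mu, rho = np.zeros(d), np.full(d, -2.0)  # mean and log-std of the weight law
lam, lr_mu, lr_rho = 0.05, 1e-3, 0.05    # entropy weight, step sizes (assumed)
col_sq = (X ** 2).sum(axis=0)            # ||X_{:, j}||^2

for _ in range(3000):
    sigma2 = np.exp(2 * rho)
    # Closed form: E[||X w - y||^2] = ||X mu - y||^2 + sum_j sigma2_j ||X_{:, j}||^2
    mu -= lr_mu * 2 * X.T @ (X @ mu - y)
    rho -= lr_rho * (2 * sigma2 * col_sq - lam)  # data fit vs. entropy bonus

# The entropy term keeps every weight variance strictly positive: at the
# optimum, sigma2_j = lam / (2 * ||X_{:, j}||^2) instead of collapsing to 0.
```

Without the entropy term the variances would shrink to zero and all sampled weights would coincide; the bonus enforces exactly the weight diversity the abstract refers to.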
Anne Auger (Inria Saclay), 01/04/2025 12:10
Many probability-based approaches to derivative-free optimization are variants of stochastic approximation, such as the well-known Kiefer-Wolfowitz method, a finite-difference stochastic approximation (FDSA) algorithm that estimates gradients using finite differences. Such methods are known to converge slowly: in many cases the best possible convergence rate is governed by the...
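The Kiefer-Wolfowitz scheme mentioned above can be sketched in a few lines: at iteration t, each gradient coordinate is estimated by a central finite difference of two noisy function evaluations with perturbation width c_t, and a decaying step a_t is taken. The schedules a_t = a/t and c_t = c/t^{1/4} are the classical choices; the test function and constants below are illustrative.

```python
import numpy as np

def kiefer_wolfowitz(f, x0, n_iter=2000, a=0.5, c=0.5, seed=0):
    """Minimize E[f(x, rng)] using only noisy evaluations of f (FDSA)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for t in range(1, n_iter + 1):
        a_t, c_t = a / t, c / t ** 0.25  # classical KW schedules
        grad = np.empty(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = c_t
            # central finite difference from two noisy evaluations
            grad[i] = (f(x + e, rng) - f(x - e, rng)) / (2 * c_t)
        x -= a_t * grad
    return x

def noisy_quadratic(x, rng):
    # f(x) = ||x - 1||^2 observed with additive evaluation noise
    return np.sum((x - 1.0) ** 2) + 0.01 * rng.standard_normal()

x_hat = kiefer_wolfowitz(noisy_quadratic, np.zeros(3))
```

Note the cost of the scheme: 2d function evaluations per iteration, and a bias-variance trade-off in c_t (small c_t reduces finite-difference bias but amplifies the evaluation noise), which is one source of the slow rates the abstract alludes to.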
Charlotte Laclau (Télécom Paris), 01/04/2025 14:30
Group fairness is a central research topic in text classification, where achieving fair treatment of sensitive groups (e.g., women and men) remains an open challenge. In this talk, I will present an approach that extends the use of the Wasserstein Independence measure to learning unbiased neural text classifiers. Given the challenge of distinguishing fair from unfair information in a text...
Arshak Minasyan (CentraleSupélec), 01/04/2025 15:20
In this talk, we consider the problem of estimating the matching map between two sequences of $d$-dimensional noisy observations of feature vectors, possibly of different sizes ($n \neq m$). We begin with the simplest case of permutation estimation and then extend it to the more general setting of estimating a matching map of unknown size $k^* < \min(n, m)$. Our main result shows that, in the...
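The simplest case described above, permutation estimation with $n = m$, can be sketched as a minimum-cost bipartite matching on pairwise squared distances, solved here with SciPy's LSAP solver. Sizes, dimension, and noise level are illustrative; the talk's estimators and guarantees go beyond this sketch, in particular to $k^* < \min(n, m)$.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Observe two noisy copies of the same feature vectors, one of them
# shuffled, and recover the shuffle as a minimum-cost matching.
rng = np.random.default_rng(0)
n, d, sigma = 20, 5, 0.05
features = rng.standard_normal((n, d))
perm = rng.permutation(n)
X = features + sigma * rng.standard_normal((n, d))
Y = features[perm] + sigma * rng.standard_normal((n, d))

# cost[i, j] = ||X_i - Y_j||^2; LSAP minimizes total cost over permutations
cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
_, match = linear_sum_assignment(cost)   # X_i is matched to Y_{match[i]}

# Correct recovery means Y_{match[i]} came from feature i: perm[match[i]] == i
recovered = bool((perm[match] == np.arange(n)).all())
```

Exact recovery is possible here because the noise level is small relative to the separation between feature vectors; how large the noise may be while still permitting recovery is exactly the kind of question the talk's results address.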
Etienne Boursier (Inria, Université Paris-Saclay), 01/04/2025 16:40
The training of neural networks with first-order methods is still poorly understood in theory, despite compelling empirical evidence. Not only are neural networks believed to converge towards global minimizers, but the implicit bias of optimization algorithms makes them converge towards specific minimizers with nice generalization properties. This talk focuses on the early alignment phase...