Artificial intelligence (AI) is increasingly shaping the decisions that affect our lives—from hiring and education to healthcare and access to social services. While AI promises efficiency and objectivity, it also carries the risk of perpetuating and even amplifying societal biases embedded in the data used to train these systems. Many real-world examples highlight the dangers of relying on...
In this talk, we present a practical solution to the lack of prediction diversity recently observed in deep learning approaches when they are used out of distribution. Since this issue is mainly due to a lack of weight diversity, we introduce a maximum entropy principle for the weight distribution, coupled with the standard, task-dependent, in-distribution data-fitting term. We prove...
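The entropy-regularised objective described above can be sketched on a toy regression problem. The Gaussian form of the weight distribution, the single-weight model, and all parameter values below are illustrative assumptions, not the talk's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy in-distribution data: y = 2x + noise (illustrative assumption)
X = rng.normal(size=100)
y = 2.0 * X + 0.1 * rng.normal(size=100)

def objective(mu, sigma, lam=0.1, n_samples=20, seed=1):
    """In-distribution data-fitting term minus lam times the entropy
    of an assumed Gaussian weight distribution N(mu, sigma^2):
    minimising the sum maximises weight entropy alongside the fit."""
    # Monte Carlo estimate of the expected squared error over weights
    ws = mu + sigma * np.random.default_rng(seed).normal(size=n_samples)
    fit = np.mean([np.mean((y - w * X) ** 2) for w in ws])
    # Closed-form entropy of a 1D Gaussian: 0.5 * log(2*pi*e*sigma^2)
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)
    return fit - lam * entropy
```

For a fixed weight distribution, raising `lam` lowers the objective by exactly `lam` times the entropy, so wider (more diverse) weight distributions become preferable relative to the pure data-fitting term.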
Group fairness is a central research topic in text classification, where achieving fair treatment of sensitive groups (e.g., women and men) remains an open challenge. In this talk, I will present an approach that extends the Wasserstein Independence measure to learning unbiased neural text classifiers. Given the challenge of distinguishing fair from unfair information in a text...
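One common way to operationalise such an independence measure is to penalise the Wasserstein distance between the classifier's score distributions across the two sensitive groups. The sketch below uses the quantile closed form of the 1D Wasserstein-1 distance for equal-size samples; the function names and the equal-group-size assumption are mine, not the talk's:

```python
import numpy as np

def wasserstein_1d(u, v):
    """Wasserstein-1 distance between two equal-size 1D samples:
    the mean absolute difference of their sorted values (quantiles)."""
    u, v = np.sort(np.asarray(u)), np.sort(np.asarray(v))
    assert u.shape == v.shape, "sketch assumes equal group sizes"
    return float(np.mean(np.abs(u - v)))

def fairness_penalty(scores, group):
    """Distance between the score distributions of the two sensitive
    groups; a value near 0 means scores carry little group information."""
    scores, group = np.asarray(scores), np.asarray(group)
    return wasserstein_1d(scores[group == 0], scores[group == 1])
```

Added to the classification loss, this penalty pushes the model towards scores that are statistically indistinguishable across groups.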
In this talk, we consider the problem of estimating the matching map between two sequences of
Our main result shows that, in the...
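A matching map between two finite sequences of feature vectors is a permutation pairing each element of one sequence with an element of the other. The least-squares estimator below, with its brute-force search, is an illustrative sketch only (the Hungarian algorithm does the same in O(n^3) for longer sequences), not the estimator analysed in the talk:

```python
import itertools

import numpy as np

def estimate_matching(X, Y):
    """Least-squares matching map: the permutation pi minimising
    sum_i ||X_i - Y_pi(i)||^2, found by brute force (small n only)."""
    n = len(X)
    # cost[i, j] = squared Euclidean distance between X_i and Y_j
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    best = min(itertools.permutations(range(n)),
               key=lambda p: cost[np.arange(n), list(p)].sum())
    return np.array(best)
```

When Y is a mildly noisy shuffle of X, this estimator recovers the shuffle exactly, which is the regime in which such estimation results are typically stated.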
The training of neural networks with first-order methods remains poorly understood in theory, despite compelling empirical evidence. Not only are neural networks believed to converge towards global minimisers, but the implicit bias of optimisation algorithms also makes them converge towards specific minimisers with good generalisation properties. This talk focuses on the early alignment phase...