List of Contributions

6 of 6 displayed
  1. Richard Combes (CentraleSupélec)
    03/04/2026 10:00

    We consider linear stochastic bandits where the set of actions is an ellipsoid. We provide the first known minimax optimal algorithm for this problem. We first derive a novel information-theoretic lower bound on the regret of any algorithm, which must be at least $\Omega(\min(d \sigma \sqrt{T} + d \|\theta\|_{A}, \|\theta\|_{A} T))$ where $d$ is the dimension, $T$ the time horizon, $\sigma^2$...

  2. Tabea Rebafka (AgroParisTech)
    03/04/2026 11:20

    This talk provides a brief introduction to statistical network analysis and random graph models. We then focus on the problem of estimating the graphon function, which characterizes nonparametric exchangeable random graph models. Our main emphasis is on the setting where multiple networks are observed, which introduces additional challenges compared to the classical single-network framework....
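The multiple-network graphon setting can be illustrated with a toy simulation. The product graphon `W(u, v) = u * v`, the degree-sorting step, and the block sizes below are assumptions made for this sketch, not the estimator discussed in the talk:

```python
import numpy as np

def sample_graph(W, n, rng):
    """Sample one exchangeable random graph: latent u_i ~ Unif(0, 1),
    edge i~j present with probability W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)                     # keep upper triangle, no self-loops
    return A + A.T                        # symmetrise

def graphon_blocks(A, k):
    """Crude graphon estimate: sort nodes by degree, then average
    edge densities over a k x k grid of node blocks."""
    order = np.argsort(A.sum(axis=1))     # degree acts as a proxy for u_i
    A = A[np.ix_(order, order)]
    idx = np.array_split(np.arange(A.shape[0]), k)
    return np.array([[A[np.ix_(I, J)].mean() for J in idx] for I in idx])

rng = np.random.default_rng(0)
W = lambda u, v: u * v                    # a simple product graphon
# several networks observed from the SAME graphon: average the estimates
graphs = [sample_graph(W, 200, rng) for _ in range(5)]
H = np.mean([graphon_blocks(A, 4) for A in graphs], axis=0)
```

Averaging block estimates across the five graphs is the simplest way to exploit repeated observations; the block density for high-degree nodes should far exceed that for low-degree nodes under this increasing graphon.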

  3. Anna Korba (ENSAE)
    03/04/2026 12:10

    Several emerging post-Bayesian methods target a probability distribution for which an entropy-regularised variational objective is minimised. This increased flexibility introduces a computational challenge, as one loses access to an explicit unnormalised density for the target. To mitigate this difficulty, we introduce a novel measure of suboptimality called 'gradient discrepancy', and in...

  4. Hugo Cui (CNRS, Paris-Saclay)
    03/04/2026 14:30

    We study a class of iterated empirical risk minimization (ERM) procedures in which two successive ERMs are performed on the same dataset, and the predictions of the first estimator enter as an argument in the loss function of the second. This setting, which arises naturally in active learning and reweighting schemes, introduces intricate statistical dependencies across samples and...
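A minimal instance of such a two-stage procedure, assuming a least-squares loss and a residual-based reweighting (both invented for illustration; the dependence arises because the weights in the second ERM are functions of the first fit on the same samples):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 3
theta = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta + rng.normal(size=n)

# first ERM: ordinary least squares on the full sample
theta1 = np.linalg.lstsq(X, y, rcond=None)[0]

# second ERM on the SAME data: residuals of the first fit define
# per-sample weights, coupling the two stages statistically
resid = y - X @ theta1
w = 1.0 / (1.0 + resid**2)            # downweight poorly fit samples
sw = np.sqrt(w)
theta2 = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
```

Because `w` depends on every sample through `theta1`, the rows of the second regression are no longer independent, which is the kind of cross-sample dependence the abstract refers to.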

  5. Paul Mangold (École polytechnique)
    03/04/2026 15:20

    In federated learning, multiple users collaboratively train a machine learning model without sharing local data. To reduce communication, users perform multiple local stochastic gradient steps that are then aggregated by a central server. However, due to data heterogeneity, local training introduces bias. In this talk, I will present a novel interpretation of the Federated Averaging algorithm,...
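The local-steps-then-average loop described above can be sketched on a toy least-squares problem. The quadratic client losses, the heterogeneity via shifted inputs, and the step sizes below are illustrative assumptions, not the algorithm analysed in the talk:

```python
import numpy as np

def fedavg(client_data, rounds=50, local_steps=10, lr=0.05):
    """Federated Averaging: each client runs several local gradient
    steps on its own least-squares loss; the server averages the
    resulting local models into the next global model."""
    d = client_data[0][0].shape[1]
    w = np.zeros(d)                       # global model
    for _ in range(rounds):
        local_models = []
        for X, y in client_data:          # each client starts from w
            w_local = w.copy()
            for _ in range(local_steps):  # local (full-batch) gradient steps
                grad = X.T @ (X @ w_local - y) / len(y)
                w_local -= lr * grad
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)   # server aggregation
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
# heterogeneous clients: different input distributions, same true model
clients = []
for shift in (0.0, 3.0):
    X = rng.normal(shift, 1.0, size=(200, 2))
    y = X @ w_true + 0.01 * rng.normal(size=200)
    clients.append((X, y))
w_hat = fedavg(clients)
```

With several local steps and heterogeneous inputs, the averaged iterate no longer follows the global gradient exactly; this drift is the bias the abstract mentions (small here because both clients share the same minimiser).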

6. Ekhine Irurozki (Télécom Paris)
    03/04/2026 16:40

    Summarising a distribution over rankings by a single Kemeny median fails whenever the distribution is multimodal or heterogeneous. Drawing on the histogram analogy, we introduce Consensus Ranking Distributions (CRD): sparse mixtures of local Kemeny medians indexed by a partition of the space of rankings, interpolating between a single consensus ranking and the raw empirical distribution. We...
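A Kemeny median minimises the total number of pairwise disagreements (Kendall tau distance) with the sampled rankings. A brute-force sketch for a tiny item set follows; the bimodal sample is invented for illustration, and this is the single-median baseline, not the CRD procedure:

```python
from itertools import combinations, permutations

def kendall_tau(a, b):
    """Number of item pairs ranked in opposite order by the two
    rankings, given as tuples listing items best-first."""
    pos_a = {item: i for i, item in enumerate(a)}
    pos_b = {item: i for i, item in enumerate(b)}
    return sum(
        1
        for x, y in combinations(a, 2)
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

def kemeny_median(sample):
    """Brute-force Kemeny median: the permutation minimising the
    total Kendall tau distance to the sample (feasible only for
    small item sets, since the search is factorial)."""
    return min(
        permutations(sample[0]),
        key=lambda pi: sum(kendall_tau(pi, r) for r in sample),
    )

# a bimodal sample: one mode around (1, 2, 3), one reversed ranking
sample = [(1, 2, 3), (1, 2, 3), (3, 2, 1), (1, 3, 2)]
median = kemeny_median(sample)
```

The single median sits in the dominant mode and says nothing about the reversed ranking; a mixture of local medians, as in CRD, is designed to expose exactly this kind of heterogeneity.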
