28 July 2025 to 1 August 2025
Time zone: Europe/Paris

Convergence and Linear Speed-Up in Stochastic Federated Learning

28 Jul 2025, 15:10
25m
Navier

Invited talk in the mini-symposium "Communication-Efficient methods for distributed optimization and federated learning"

Speaker

Paul Mangold (Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France)

Description

  • Abstract: In federated learning, multiple users collaboratively train a machine learning model without sharing local data. To reduce communication, users perform multiple local stochastic gradient steps, and the resulting local updates are then aggregated by a central server. However, due to data heterogeneity, local training introduces bias. In this talk, I will present a novel interpretation of the Federated Averaging algorithm, establishing its convergence to a stationary distribution. By analyzing this distribution, we show that the bias consists of two components: one due to heterogeneity and another due to gradient stochasticity. I will then extend this analysis to the Scaffold algorithm, demonstrating that it effectively mitigates heterogeneity bias but not stochasticity bias. Finally, we show that both algorithms achieve linear speed-up in the number of agents, a key property in federated stochastic optimization.
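The abstract describes the Federated Averaging (FedAvg) and Scaffold update rules only in words. The Python sketch below is a toy illustration under assumed conditions, not the speaker's implementation or analysis: quadratic client objectives f_i(x) = 0.5·||x - b_i||^2 with Gaussian gradient noise, and all names and constants below are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K, lr, noise = 10, 5, 20, 0.1, 0.5   # clients, dimension, local steps, step size, noise level
b = rng.normal(size=(N, d))                # distinct local optima -> data heterogeneity

def stoch_grad(x, i):
    """Stochastic gradient of f_i(x) = 0.5*||x - b_i||^2: exact gradient plus Gaussian noise."""
    return (x - b[i]) + noise * rng.normal(size=d)

def fedavg_round(x):
    """One FedAvg round: each client runs K local SGD steps; the server averages the results."""
    local_models = []
    for i in range(N):
        y = x.copy()
        for _ in range(K):
            y -= lr * stoch_grad(y, i)
        local_models.append(y)
    return np.mean(local_models, axis=0)

def scaffold_round(x, c, ci):
    """One Scaffold round: local steps corrected by control variates (c_i, c).

    Subtracting (c_i - c) from each local gradient removes the client drift,
    which is how Scaffold mitigates the heterogeneity bias; the bias due to
    gradient stochasticity discussed in the talk is not removed.
    """
    local_models, ci_new = [], ci.copy()
    for i in range(N):
        y = x.copy()
        for _ in range(K):
            y -= lr * (stoch_grad(y, i) - ci[i] + c)
        ci_new[i] = ci[i] - c + (x - y) / (K * lr)   # standard control-variate update
        local_models.append(y)
    x_new = np.mean(local_models, axis=0)
    c_new = c + np.mean(ci_new - ci, axis=0)
    return x_new, c_new, ci_new

# With a constant step size, the iterates fluctuate around a point near mean(b),
# the optimum of the average objective, rather than converging exactly: this is
# consistent with the stationary-distribution viewpoint the abstract mentions.
x = np.zeros(d)
for _ in range(200):
    x = fedavg_round(x)
xs, c, ci = np.zeros(d), np.zeros(d), np.zeros((N, d))
for _ in range(200):
    xs, c, ci = scaffold_round(xs, c, ci)
print("FedAvg distance to optimum:  ", np.linalg.norm(x - b.mean(axis=0)))
print("Scaffold distance to optimum:", np.linalg.norm(xs - b.mean(axis=0)))
```

The residual distances printed at the end stay bounded away from zero at a constant step size, which gives a rough feel for the stationary distribution and its bias components that the talk analyzes formally.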

Author

Paul Mangold (Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 - CRIStAL, F-59000 Lille, France)

Presentation materials

No documents.