Speaker
Description
In distributed optimization and machine learning, a large number of machines perform computations in parallel and communicate back and forth with a server. In particular, in federated learning, the distributed training process is run on personal devices such as mobile phones. In this context, communication, which can be slow, costly, and unreliable, forms the main bottleneck. To reduce communication, two strategies are popular: 1) local training, which consists in communicating less frequently; 2) compression of the communicated messages. In addition, a robust algorithm should allow for partial participation of the clients. I will present several randomized algorithms we developed recently, with proven convergence guarantees and accelerated complexity bounds. Our most recent paper, “LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression”, was presented at ICLR 2025 as a Spotlight.
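To make the two communication-reduction strategies mentioned in the abstract concrete, here is a minimal, hypothetical sketch (not the LoCoDL algorithm presented in the talk) of a toy distributed gradient-descent loop that combines local training (several local steps between communication rounds) with compression (unbiased rand-k sparsification of the transmitted updates). All names, data, and constants are illustrative assumptions.

```python
# Illustrative sketch only: toy distributed least-squares training with
# local steps and rand-k compression of client updates. Hypothetical setup.
import numpy as np

rng = np.random.default_rng(0)

def rand_k(v: np.ndarray, k: int) -> np.ndarray:
    """Unbiased rand-k compressor: keep k random coordinates, rescale by d/k."""
    d = v.size
    mask = np.zeros(d)
    mask[rng.choice(d, size=k, replace=False)] = 1.0
    return (d / k) * mask * v

def local_gradient(x: np.ndarray, data) -> np.ndarray:
    """Gradient of a least-squares loss 0.5 * ||A x - b||^2 on one client's data."""
    A, b = data
    return A.T @ (A @ x - b)

# Synthetic data: n clients, each holding its own (A_i, b_i).
n, d, k = 5, 20, 4
clients = [(rng.standard_normal((30, d)), rng.standard_normal(30)) for _ in range(n)]

x = np.zeros(d)                 # server model
lr, local_steps = 1e-3, 10

for rnd in range(200):          # communication rounds
    updates = []
    for data in clients:
        y = x.copy()
        for _ in range(local_steps):          # local training: cheap local steps
            y -= lr * local_gradient(y, data)
        updates.append(rand_k(y - x, k))      # compression: send a sparsified update
    x += np.mean(updates, axis=0)             # server averages compressed updates

print("final average training loss:",
      np.mean([0.5 * np.linalg.norm(A @ x - b) ** 2 for A, b in clients]))
```

In this sketch, each client communicates only once every `local_steps` gradient steps and sends only `k` of the `d` coordinates, so the per-round communication cost is reduced along both axes; the algorithms discussed in the talk achieve this with provable convergence guarantees, which this toy loop does not claim to have.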