Colloquium de l'Institut

Swarm-Based Gradient Descent Method for Non-Convex Optimization

by Eitan Tadmor

Amphi Schwartz
Institut de Mathématiques de Toulouse, bât. 1R3
118, route de Narbonne, 31062 Toulouse
Description

We discuss a new swarm-based gradient descent (SBGD) method for non-convex optimization. The swarm consists of agents, each identified with a position $x$ and a mass $m$. There are three key aspects to the SBGD dynamics: (i) a persistent transition of mass from higher to lower ground; (ii) a random marching direction, aligned with the steepest gradient descent; and (iii) a time-stepping protocol, $h(x,m)$, which decreases with $m$. The interplay between positions and masses leads to a dynamic distinction between 'heavier leaders' near local minima and 'lighter explorers', which search for improved positions with larger time steps. Convergence analysis and numerical simulations demonstrate the effectiveness of the SBGD method as a global optimizer.
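To illustrate the three ingredients above, here is a minimal Python sketch of a swarm-based gradient descent loop. It is not the authors' exact protocol: the mass-transfer rule (each agent sheds a fixed fraction of its mass to the current best agent), the noise model for the marching direction, and the step-size formula `h_min + (h_max - h_min) * (1 - m/m_max)` are placeholder assumptions chosen only to mimic the qualitative behavior described in the abstract.

```python
import numpy as np

def sbgd_sketch(f, grad_f, x0, n_iter=200, h_max=1.0, h_min=1e-3,
                mass_rate=0.1, noise_scale=0.1, rng=None):
    """Illustrative SBGD-style loop (a sketch, not the paper's exact scheme).

    f, grad_f : objective and its gradient
    x0        : initial agent positions, shape (n_agents, dim)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    m = np.full(len(x), 1.0 / len(x))        # agent masses, summing to 1
    for _ in range(n_iter):
        fx = np.array([f(xi) for xi in x])
        best = np.argmin(fx)
        # (i) persistent transfer of mass from higher to lower ground:
        # here, every agent sheds a fraction of its mass to the current best agent.
        shed = mass_rate * m
        shed[best] = 0.0
        m -= shed
        m[best] += shed.sum()
        # (ii) a randomly perturbed marching direction, kept aligned with steepest descent,
        # (iii) with a step size h(x, m) that decreases with mass m:
        # heavy "leaders" take small steps, light "explorers" take large ones.
        for i in range(len(x)):
            g = grad_f(x[i])
            d = -g + noise_scale * rng.standard_normal(g.shape)
            if np.dot(d, -g) < 0:            # keep the perturbed direction descent-aligned
                d = -g
            h = h_min + (h_max - h_min) * (1.0 - m[i] / m.max())
            x[i] = x[i] + h * d
    best = np.argmin([f(xi) for xi in x])
    return x[best], f(x[best])

# Example usage on the 2-D Rastrigin function, a standard non-convex test problem.
f = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
grad_f = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)
x0 = np.random.default_rng(1).uniform(-5, 5, size=(20, 2))
x_best, f_best = sbgd_sketch(f, grad_f, x0)
```

In this toy version, agents that lose mass automatically receive larger steps and keep exploring, while the heaviest agent refines its local minimum with small steps, which is the leader/explorer interplay the abstract describes.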

Organized by

Dan Popovici, Mark Spivakovsky