Description
Iteratively reweighted least squares (IRLS) is a popular approach to solving sparsity-enforcing regression problems in machine learning. State-of-the-art variants are more efficient than plain IRLS but typically rely on problem-specific coordinate pruning schemes. In this work, we show how a surprisingly simple reparametrization of IRLS, coupled with a bilevel resolution (instead of an alternating scheme), achieves top performance across a wide range of sparsity-inducing penalties (such as the Lasso, group Lasso and trace norm regularizations), regularization strengths (including hard constraints), and design matrices (ranging from correlated designs to differential operators). Like IRLS, our method only involves solving linear systems, but in sharp contrast it amounts to the minimization of a smooth function. Despite being non-convex, we show that this function has no spurious minima and that its saddle points are “ridable”, so a descent direction always exists. We thus advocate the use of a BFGS quasi-Newton solver, which makes our approach simple, robust and efficient. At the end of the talk, I will discuss the associated gradient flows as well as the connection with Hessian geometry and mirror descent.

This is joint work with Clarice Poon (University of Bath). The corresponding article is available at https://arxiv.org/abs/2106.01429, and a Python notebook introducing the method is available at https://nbviewer.org/github/gpeyre/numerical-tours/blob/master/python/optim_7_noncvx_pro.ipynb
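
For concreteness, here is a minimal sketch of the approach in the Lasso case, in the spirit of the notebook above (the variable names and numerical details are my own illustration, not the authors' reference implementation). It relies on the classical variational identity ||x||_1 = min { (||u||^2 + ||v||^2)/2 : x = u * v } (with * the entrywise product), so that min_x 1/2 ||Ax - y||^2 + lam ||x||_1 becomes a smooth problem in v once the inner ridge regression in u is solved in closed form:

    # Minimal sketch (assumed details): bilevel reparametrization of the
    # Lasso, with the smooth outer problem minimized by L-BFGS.
    import numpy as np
    from scipy.optimize import minimize

    def make_objective(A, y, lam):
        n = A.shape[1]
        def F(v):
            # Inner problem: min_u 1/2 ||A(v*u) - y||^2 + lam/2 ||u||^2,
            # a ridge regression in u, solved exactly by one linear system.
            Av = A * v                     # A @ diag(v)
            u = np.linalg.solve(Av.T @ Av + lam * np.eye(n), Av.T @ y)
            r = Av @ u - y
            val = 0.5 * r @ r + 0.5 * lam * (u @ u + v @ v)
            # Envelope theorem: the gradient in v only needs the optimal u.
            grad = (A.T @ r) * u + lam * v
            return val, grad
        return F

    # Synthetic sparse regression problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100); x_true[:5] = 1.0
    y = A @ x_true
    lam = 0.1

    F = make_objective(A, y, lam)
    res = minimize(F, np.ones(100), jac=True, method='L-BFGS-B')
    v = res.x
    Av = A * v
    u = np.linalg.solve(Av.T @ Av + lam * np.eye(100), Av.T @ y)
    x = v * u                              # recovered Lasso solution

Each evaluation of F costs one linear solve; when the number of samples m is smaller than the dimension n, the Woodbury identity lets one solve an equivalent m-by-m system instead.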