Description
This talk addresses the problem of saving computational resources in statistics when an estimator is not available in closed form.
In this common situation, an optimization algorithm (Gradient Descent) has to be used to approximate the unknown value of the estimator.
Contrary to the usual intuition, a striking observation is that iterating the optimization algorithm too many times can turn out to be suboptimal in terms of statistical performance.
The purpose of the talk is first to explain the reasons for this suboptimality, and second to provide a theoretical analysis of the behaviour of Gradient Descent along the iterations.
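A standard way to formalize this trade-off (a common framing in the early-stopping literature, assumed here for illustration rather than taken from the abstract) decomposes the risk of the iterate \beta_t after t gradient steps into a squared bias term, which decreases with t, and a variance term, which increases with t:

\mathbb{E}\,\|\beta_t - \beta^\star\|^2 \;=\; \underbrace{\|\mathbb{E}\,\beta_t - \beta^\star\|^2}_{\text{bias}^2(t),\ \text{decreasing in } t} \;+\; \underbrace{\mathbb{E}\,\|\beta_t - \mathbb{E}\,\beta_t\|^2}_{\text{variance}(t),\ \text{increasing in } t}.

Beyond the iteration t^\star that balances the two terms, further gradient steps mainly fit the noise and the variance term dominates, which is why stopping early can improve statistical performance.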
All of this results in a new data-driven stopping rule that outputs the optimal number of iterations to be performed by Gradient Descent (and other related algorithms).
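As a concrete illustration only (the abstract does not describe the specific rule presented in the talk), the following Python sketch runs Gradient Descent on a least-squares problem and stops via the classical discrepancy principle, one well-known data-driven stopping rule. The function name, the assumption that the noise level sigma is known, and the threshold n * sigma**2 are all illustrative choices, not the talk's method.

import numpy as np

def gd_with_discrepancy_stop(X, y, sigma, max_iter=10_000):
    """Gradient Descent on the least-squares objective 0.5 * ||y - X @ beta||^2,
    stopped by the discrepancy principle: return the first iterate whose squared
    residual norm falls below n * sigma**2, i.e. below the noise level.
    (Illustrative sketch; assumes the noise standard deviation sigma is known.)"""
    n, d = X.shape
    beta = np.zeros(d)
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, with L the Lipschitz constant of the gradient
    for t in range(1, max_iter + 1):
        residual = y - X @ beta
        if residual @ residual <= n * sigma ** 2:  # data-driven stopping time
            return beta, t
        beta = beta + step * (X.T @ residual)      # one gradient step
    return beta, max_iter

# Hypothetical usage on simulated data:
rng = np.random.default_rng(0)
n, d, sigma = 200, 50, 1.0
X = rng.standard_normal((n, d))
beta_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta_star + sigma * rng.standard_normal(n)
beta_hat, t_stop = gd_with_discrepancy_stop(X, y, sigma)

Stopping once the residual reaches the noise level prevents the later iterations from fitting the noise, which is precisely the source of the suboptimality described above.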