Rencontres Statistiques Lyonnaises

Multi-armed bandits, from the original theory to recent advances

by Vianney Perchet

Friday, 13 October 2017 (Europe/Paris)
at Bât. Braconnier (Fokko du Cloux)
In the first part of this talk, I will present the sequential allocation problem known as the "multi-armed bandit". Given several i.i.d. processes (the "arms"), the objective is to sample them sequentially (and thus collect a sequence of random rewards) so as to maximize the expected cumulative reward. This framework simultaneously encompasses issues of estimation and optimization (the so-called "exploration vs. exploitation" dilemma). A recent successful application is ad placement on websites.
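To make the exploration-exploitation trade-off concrete, here is a minimal sketch (illustrative, not taken from the talk) of the classical UCB1 index policy; the two Bernoulli arms and their success probabilities are assumptions chosen for the example.

```python
import math
import random

def ucb1(reward_fns, horizon):
    """Pull arms sequentially to maximise cumulative reward (UCB1 index policy).

    reward_fns: list of zero-argument callables returning i.i.d. rewards in [0, 1].
    Returns the total collected reward and the per-arm pull counts.
    """
    k = len(reward_fns)
    counts = [0] * k   # number of pulls per arm
    means = [0.0] * k  # empirical mean reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialisation: pull each arm once
        else:
            # Index = empirical mean (exploitation) + confidence bonus (exploration).
            arm = max(range(k),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        r = reward_fns[arm]()
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running-mean update
        total += r
    return total, counts

# Illustrative example: two Bernoulli arms with success probabilities 0.3 and 0.7.
arm_rng = random.Random(42)
arms = [lambda: float(arm_rng.random() < 0.3),
        lambda: float(arm_rng.random() < 0.7)]
total, counts = ucb1(arms, horizon=2000)
# The better arm (index 1) ends up pulled far more often than the other.
```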

In the second part, I will focus on how to handle the real-world constraints of some practical exploration-exploitation problems, namely randomized clinical trials and repeated auctions. The former usually consist of a small number of phases (typically three or four). The first phase is a pilot study, in which treatments can be allocated at random. In the following phases, treatments are re-allocated depending on the results of the pilot study (and of the subsequent phases). We will show how to choose the sizes of these phases theoretically, and we shall examine whether having more phases leads to significant improvements.
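As a toy illustration of the phased allocation just described, here is a two-phase "pilot then commit" sketch; the Bernoulli treatment models and the phase sizes are illustrative assumptions, and the sketch does not perform the optimal phase-sizing analysis that the talk addresses.

```python
import random

def two_phase_trial(treatments, pilot_size, total_size, seed=0):
    """Phase 1 (pilot): allocate treatments uniformly at random.
    Phase 2: commit to the empirically best treatment for the remaining patients.

    treatments: list of zero-argument callables returning rewards in [0, 1].
    Returns the index of the chosen treatment and the total reward.
    """
    alloc_rng = random.Random(seed)
    k = len(treatments)
    sums = [0.0] * k   # cumulative reward per treatment in the pilot
    pulls = [0] * k    # pilot allocations per treatment
    for _ in range(pilot_size):
        a = alloc_rng.randrange(k)  # uniform random allocation
        sums[a] += treatments[a]()
        pulls[a] += 1
    # Re-allocation: keep only the treatment with the best empirical mean.
    best = max(range(k), key=lambda a: sums[a] / pulls[a] if pulls[a] else 0.0)
    total_reward = sum(sums)
    for _ in range(total_size - pilot_size):
        total_reward += treatments[best]()
    return best, total_reward

# Illustrative example: two Bernoulli treatments with success rates 0.2 and 0.8.
patient_rng = random.Random(7)
treatments = [lambda: float(patient_rng.random() < 0.2),
              lambda: float(patient_rng.random() < 0.8)]
best, total_reward = two_phase_trial(treatments, pilot_size=200, total_size=1000)
```

The pilot wastes roughly half of its allocations on the worse treatment, which is exactly the cost that choosing the phase sizes well is meant to control.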
In repeated auctions, the optimal strategy is more or less clear when the valuations of the goods sold are known. Unfortunately, this is not the case in most internet-driven problems. The objectives are to construct non-trivial bidding and learning strategies, the crucial difficulty being that the bidding space is continuous and the reward functions are typically discontinuous.
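One standard workaround for the continuous bidding space, sketched below under illustrative assumptions (a second-price rule, a known valuation, and i.i.d. competing prices; not necessarily the strategy from the talk), is to discretise the bids on a finite grid and run a bandit index over the grid points. The discontinuity of the reward at the win/lose threshold is visible in the code.

```python
import math
import random

def grid_bid_ucb(valuation, competing_prices, grid_size=20):
    """Learn a bid on a discretised grid with a UCB-style index.

    Second-price rule (assumed for the example): bidding b against a
    competing price p wins iff b > p and then pays p, for reward
    valuation - p; otherwise the reward is 0. The reward is therefore
    discontinuous in b at b = p.
    Returns the grid bid with the best empirical mean reward.
    """
    bids = [(i + 1) / grid_size for i in range(grid_size)]
    pulls = [0] * grid_size
    means = [0.0] * grid_size
    for t, p in enumerate(competing_prices, start=1):
        if t <= grid_size:
            a = t - 1  # try each grid bid once
        else:
            # Heuristic UCB index (rewards here lie in [-1, 1], not [0, 1]).
            a = max(range(grid_size),
                    key=lambda i: means[i] + math.sqrt(2 * math.log(t) / pulls[i]))
        r = (valuation - p) if bids[a] > p else 0.0  # discontinuous reward
        pulls[a] += 1
        means[a] += (r - means[a]) / pulls[a]
    return bids[max(range(grid_size), key=lambda i: means[i])]

# Illustrative run: valuation 0.6, i.i.d. uniform competing prices.
price_rng = random.Random(3)
prices = [price_rng.random() for _ in range(20000)]
learned_bid = grid_bid_ucb(0.6, prices)
```

In this toy second-price setting the truthful bid equals the valuation, so with enough rounds the learned grid bid should concentrate near 0.6; the grid size and price distribution are arbitrary choices for the example.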