CMStatistics 2018 - Submission B1206
Title: On the optimality of the standard Hedge algorithm in the stochastic setting
Authors: Jaouad Mourtada - École polytechnique (France) [presenting]
         Stéphane Gaïffas - Université Paris-Diderot (France)
Abstract: Consider the standard problem of prediction with expert advice: at each step, a learner chooses a probability distribution over a set of $M$ experts, the experts' losses are then revealed, and the learner suffers the expected loss of the experts under its chosen distribution. The goal is to control the learner's regret, defined as the difference between its cumulative loss and that of the best expert. Arguably the best-known strategy is the exponential weights or Hedge algorithm, which achieves the optimal $\sqrt{T \log M}$ regret in the worst case, when the experts' losses may be arbitrary and set by an adversary. Departing from this pessimistic worst-case analysis, recent work has sought to design algorithms that combine minimax regret with improved guarantees on easier instances. We show that, surprisingly, the standard variant of Hedge with decreasing learning rate already exhibits such adaptivity. Indeed, we obtain a constant regret bound for Hedge in the stochastic case, when loss vectors are i.i.d. Further, this bound has the optimal dependence on the number of experts and on the suboptimality gap between the leading expert and the rest, meaning that Hedge automatically adapts to this hardness parameter. Finally, we contrast this version of Hedge with the fixed learning rate and ``doubling trick'' variants, both of which fail to adapt to the stochastic case.
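To make the setting concrete, the following is a minimal Python sketch of Hedge with a decreasing learning rate, playing the exponential-weights distribution $p_{t,i} \propto \exp(-\eta_t L_{t-1,i})$ on the experts' cumulative losses $L_{t-1,i}$ and measuring regret against the best expert in hindsight. The specific schedule $\eta_t = \sqrt{(\log M)/t}$, the function name, and the interface are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hedge_decreasing_lr(loss_matrix):
    """Hedge (exponential weights) with a decreasing learning rate.

    loss_matrix: array of shape (T, M); row t holds the losses of the
    M experts at round t, assumed to lie in [0, 1].
    Returns the learner's regret against the best expert in hindsight.
    """
    T, M = loss_matrix.shape
    cum_losses = np.zeros(M)   # cumulative loss of each expert so far
    learner_loss = 0.0
    for t in range(1, T + 1):
        eta_t = np.sqrt(np.log(M) / t)  # decreasing rate (illustrative constant)
        # Exponential weights on past cumulative losses; subtracting the min
        # is for numerical stability only (the shift cancels on normalization).
        w = np.exp(-eta_t * (cum_losses - cum_losses.min()))
        p = w / w.sum()                  # distribution over experts at round t
        losses = loss_matrix[t - 1]
        learner_loss += p @ losses       # expected loss under the chosen distribution
        cum_losses += losses
    return learner_loss - cum_losses.min()  # regret vs. best expert

# Example of a stochastic instance: i.i.d. Bernoulli losses with a
# suboptimality gap favoring expert 0 (parameters chosen for illustration).
rng = np.random.default_rng(0)
T, M, gap = 10_000, 10, 0.1
means = np.full(M, 0.5)
means[0] -= gap
print(hedge_decreasing_lr(rng.binomial(1, means, size=(T, M)).astype(float)))
```

On an i.i.d. instance with a gap such as the one above, the abstract's result says this decreasing-rate variant keeps the regret bounded by a constant as $T$ grows, whereas the fixed-rate and doubling-trick variants do not enjoy such a guarantee.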