Title: Parameter-free machine learning through coin betting
Authors: Francesco Orabona - Boston University (United States) [presenting]
Abstract: Machine Learning (ML) has been described as the fuel of the next industrial revolution. Yet, despite their name, the majority of ML algorithms still rely heavily on humans in the loop to set their ``parameters''. For example, when using regularized empirical risk minimization, the choice of the weight of the regularizer is critical to obtaining optimal performance, both in theory and in practice. Moreover, the minimization itself, usually done through stochastic gradient descent (SGD), requires setting ``learning rates'' to get good performance. Are these parameters strictly necessary? Is it possible to have ``parameter-free'' ML algorithms? It will be shown that many ML problems can be reduced to a game of betting on a non-stochastic coin. Betting on a non-stochastic coin is a well-known problem whose optimal strategy turns out to be a simple generalization of the Kelly betting criterion. Moreover, the optimal coin-betting algorithm is parameter-free, giving rise to parameter-free ML and stochastic optimization algorithms. For example, this approach gives: 1) optimal rates of convergence in RKHS in the capacity-independent setting, without any parameter to tune; 2) a differentially private SGD without learning rates; 3) a new way to obtain finite-time iterated-logarithm martingale concentration bounds in Banach spaces.
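To make the coin-betting reduction concrete, below is a minimal illustrative sketch (not part of the abstract; all function names and the test objective are my own choices) of parameter-free one-dimensional optimization via the Krichevsky-Trofimov bettor, a standard instance of the generalized Kelly strategy mentioned above. The iterate is the bet x_t = beta_t * W_{t-1}, the ``coin'' outcome is the negative subgradient c_t = -g_t (assumed to lie in [-1, 1]), and no learning rate appears anywhere.

```python
def kt_parameter_free_opt(subgrad, num_rounds, init_wealth=1.0):
    """Sketch of 1D parameter-free optimization via coin betting.

    At round t the Krichevsky-Trofimov bettor wagers the fraction
    beta_t = (c_1 + ... + c_{t-1}) / t of its current wealth; the bet
    itself serves as the iterate w_t, and the coin outcome is the
    negative subgradient c_t = -subgrad(w_t), assumed in [-1, 1].
    Since |beta_t| < 1, wealth stays positive. No learning rate is set.
    """
    wealth = init_wealth
    coin_sum = 0.0
    iterate_sum = 0.0
    for t in range(1, num_rounds + 1):
        beta = coin_sum / t          # KT betting fraction
        w = beta * wealth            # the bet is the current iterate
        iterate_sum += w
        c = -subgrad(w)              # coin outcome from the gradient
        wealth += c * w              # wealth after the bet resolves
        coin_sum += c
    return iterate_sum / num_rounds  # averaged iterate

# Illustrative objective f(w) = |w - 3|: its subgradients lie in
# [-1, 1] as the reduction requires, and the minimizer is w* = 3.
def subgrad(w):
    return 1.0 if w > 3.0 else (-1.0 if w < 3.0 else 0.0)

w_avg = kt_parameter_free_opt(subgrad, num_rounds=10000)
```

The averaged iterate approaches the minimizer at the optimal rate up to logarithmic factors, without the step-size tuning that plain SGD would require for this problem.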