CFE-CMStatistics 2020 Submission
Title: Interpreting a penalty as the influence of a Bayesian prior
Authors: Pierre Wolinski - Inria Grenoble (France) [presenting]
Guillaume Charpiat - Inria Saclay (France)
Yann Ollivier - Facebook (France)
Abstract: In machine learning, it is common to optimize the parameters of a probabilistic model together with a somewhat ad hoc regularization term that penalizes some values of the parameters. Regularization terms appear naturally in Variational Inference (VI), a tractable way to approximate Bayesian posteriors: the loss to optimize contains a Kullback--Leibler divergence term between the approximate posterior and a Bayesian prior. We fully characterize which regularizers can arise this way, and provide a systematic way to compute the corresponding prior. This viewpoint also yields a prediction for useful values of the regularization factor in neural networks. We apply this framework to common regularizers such as the L1 and group-Lasso penalties.
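The penalty-as-prior correspondence described above can be illustrated in its simplest (MAP, non-variational) form: an L1 penalty with factor lambda matches the negative log-density of a Laplace prior, up to an additive constant independent of the parameters. This is a minimal sketch of that classic correspondence, not the authors' full VI construction; the function names and the example weights are illustrative.

```python
import numpy as np

def l1_penalty(w, lam):
    """The ad hoc regularization term: lam * ||w||_1."""
    return lam * np.sum(np.abs(w))

def neg_log_laplace_prior(w, lam):
    """-log p(w) for the factorized Laplace prior
    p(w) = (lam/2)^d * exp(-lam * ||w||_1), with d = dim(w)."""
    d = w.size
    return lam * np.sum(np.abs(w)) - d * np.log(lam / 2.0)

# Example weights (hypothetical values for illustration)
w = np.array([0.5, -1.0, 2.0])
lam = 0.1

# The penalty and the prior's negative log-density differ only by a
# normalization constant, so minimizing "loss + penalty" is maximizing
# the posterior under this prior.
diff = neg_log_laplace_prior(w, lam) - l1_penalty(w, lam)
```

Here `diff` equals `-d * log(lam / 2)` regardless of `w`, which is the constant that the optimization over `w` ignores; the abstract's contribution is to make the analogous correspondence precise in the variational setting, where the KL term to the prior replaces the plain negative log-prior.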