Title: Generative neural networks via scoring rule minimization for probabilistic forecasting and likelihood-free inference
Authors: Lorenzo Pacchiardi - University of Oxford (United Kingdom) [presenting]
Ritabrata Dutta - Warwick University (United Kingdom)
Abstract: Generative neural networks represent probability distributions by transforming samples from a simple base measure via a flexible transformation parametrized by a neural network. Unfortunately, they allow sampling from the represented distribution but not evaluating its probability density, which makes training by maximum likelihood infeasible. Usually, therefore, such neural networks are fit to a set of samples using adversarial training, which involves iteratively optimizing a min-max objective. This procedure is unstable and often leads to a learned distribution that underestimates the uncertainty, in extreme cases collapsing to a single point. We discuss training generative networks via scoring rule minimization, an overlooked adversarial-free method which allows smooth training and leads to better uncertainty quantification. We show applications of this method to probabilistic forecasting and Bayesian likelihood-free inference; in both cases, the scoring rule approach leads to better performance in shorter training time.
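To illustrate the kind of objective the abstract refers to, the sketch below implements an unbiased Monte Carlo estimator of the energy score, a strictly proper scoring rule commonly minimized in this setting. This is a minimal illustration in numpy, not the authors' implementation; the variable names and the shifted-Gaussian example are hypothetical. Being strictly proper means the score is minimized in expectation when the generative model's samples come from the true data distribution, which is what makes it a valid adversarial-free training target.

```python
import numpy as np

def energy_score(samples, y, beta=1.0):
    """Unbiased estimate of the energy score ES(P, y) from m model samples.

    ES(P, y) = E||X - y||^beta - 0.5 * E||X - X'||^beta,
    a strictly proper scoring rule for beta in (0, 2).
    """
    m = samples.shape[0]
    # First term: mean distance from the model samples to the observation y.
    term1 = np.mean(np.linalg.norm(samples - y, axis=1) ** beta)
    # Second term: mean pairwise distance among samples, excluding i == j
    # so the estimator is unbiased.
    diffs = samples[:, None, :] - samples[None, :, :]
    dists = np.linalg.norm(diffs, axis=2) ** beta
    term2 = (dists.sum() - np.trace(dists)) / (m * (m - 1))
    return term1 - 0.5 * term2

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=2)            # one "observation" from N(0, I)
good = rng.normal(0.0, 1.0, size=(500, 2))  # samples from a well-matched model
bad = rng.normal(5.0, 1.0, size=(500, 2))   # samples from a badly shifted model

# The well-matched model attains a lower (better) energy score.
print(energy_score(good, y) < energy_score(bad, y))
```

In training, `samples` would be the output of the generative network on a batch of base-measure draws, and the score (averaged over observations) would be minimized by ordinary gradient descent on the network parameters, with no discriminator and no min-max loop.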