A0371
Title: Sequentially valid tests for forecast calibration
Authors: Alexander Henzi - ETH Zurich (Switzerland) [presenting]
Abstract: Forecasting and forecast evaluation are sequential tasks. Most predictions are issued on a regular basis, such as every hour, day, or quarter, and their accuracy can be monitored continuously. However, standard statistical tools for forecast evaluation are static, in the sense that they require the evaluation period to be fixed in advance, independent of available observations at the time of evaluation. We propose to apply sequential testing methods instead, and develop sequentially valid tests for the calibration of probabilistic forecasts for a real-valued outcome. Our methods are based on e-values, a recently introduced tool for assessing statistical significance, which generalize Wald's sequential probability ratio test. An e-value is a non-negative random variable with expected value at most one under a null hypothesis. Large e-values give evidence against the null hypothesis, and the multiplicative inverse of an e-value is a conservative p-value. It is demonstrated that the proposed tests can yield useful insights when testing the calibration of probabilistic forecasts in practical applications.
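The following is a minimal sketch, not the calibration test developed in the paper: it illustrates the generic e-value mechanism described in the abstract with a simple likelihood-ratio e-process for a Gaussian mean, in the spirit of Wald's sequential probability ratio test. The null and alternative distributions, the significance level, and the function name e_value are illustrative assumptions.

```python
import numpy as np

# Sketch (assumed toy setting, not the paper's method): sequentially test the
# simple null H0: X_t ~ N(0, 1) against H1: X_t ~ N(0.5, 1) using
# likelihood-ratio e-values. Each factor has expected value 1 under H0, so the
# running product E_t is an e-process; by Ville's inequality,
# P(sup_t E_t >= 1/alpha) <= alpha under H0, which makes rejection at any
# data-dependent stopping time valid.

rng = np.random.default_rng(0)
alpha = 0.05      # significance level (illustrative choice)
mu_alt = 0.5      # alternative mean (illustrative choice)

def e_value(x, mu=mu_alt):
    """Likelihood ratio N(mu, 1) / N(0, 1): a valid e-value under H0."""
    return np.exp(mu * x - 0.5 * mu ** 2)

# Simulate outcomes from the alternative and monitor the e-process as data arrive.
e_process = 1.0
for t in range(1, 1001):
    x = rng.normal(loc=mu_alt, scale=1.0)    # outcome observed at time t
    e_process *= e_value(x)
    if e_process >= 1.0 / alpha:             # stop and reject as soon as evidence suffices
        print(f"Rejected H0 at time t={t}, e-process = {e_process:.1f}")
        break
```

Note that 1 / e_process at any stopping time is a conservative p-value, matching the abstract's remark that the multiplicative inverse of an e-value can be interpreted as a p-value.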