CMStatistics 2016: CFE Submission
Title: Evaluating the performance of risk models: A quantile score approach
Authors: Christos Argyropoulos - Lancaster University (United Kingdom) [presenting]
Ekaterini Panopoulou - University of Essex (United Kingdom)
Abstract: Evaluating a risk model can be futile: there is no observable benchmark against which to verify it, and imposing specific assumptions to circumvent this problem reduces the power of the tests and hence their reliability. Moreover, when all candidate models are rejected (or none are), the tests give no indication of which model would produce the best-performing forecast. Risk model evaluation should therefore be accompanied by a measure of forecast performance. The underlying idea is that even if a model is misspecified, its implicit structure may approximate part of the true, but unknown, dynamics and thus add value to the forecast. Among simple scoring functions, the literature proposes likelihood-based scoring rules for evaluating risk forecasts. We argue that this is not optimal, since one must consider not only the likelihood of a density but also the tail probability. To this end, we focus on quantile scoring rules, which target the tail of the density forecast. The proposed method tests the null hypothesis of equal predictive performance by comparing average scores via a simple Diebold-Mariano test. Initial results for six common risk models suggest unequal performance: using the "least bad" model can yield significant gains over the alternatives. Finally, the proposed method can serve as a performance index for the Expected Shortfall risk measure.
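The comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses the standard tick (pinball) quantile score at level alpha and a plain Diebold-Mariano statistic with a simple (non-HAC) variance estimate; the simulated returns and the two stylised VaR models are purely hypothetical.

```python
import numpy as np
from scipy import stats

def quantile_score(r, q, alpha=0.01):
    # Tick (pinball) loss for a quantile forecast q of return r at level alpha.
    # Nonnegative by construction; lower average score = better tail forecast.
    hit = (r < q).astype(float)
    return (alpha - hit) * (r - q)

def diebold_mariano(loss_a, loss_b):
    # DM test of H0: equal expected loss. Uses a plain sample variance of the
    # loss differential for simplicity (a HAC variance would be used in practice).
    d = np.asarray(loss_a) - np.asarray(loss_b)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    pval = 2 * stats.norm.sf(abs(dm))  # two-sided p-value
    return dm, pval

# Hypothetical example: two crude 1% VaR forecasts for simulated returns.
rng = np.random.default_rng(0)
r = rng.standard_normal(1000)                        # simulated returns
q_normal = np.full(r.size, stats.norm.ppf(0.01))     # parametric normal VaR
q_hist = np.full(r.size, np.quantile(r[:500], 0.01)) # in-sample historical VaR
dm, p = diebold_mariano(quantile_score(r, q_normal),
                        quantile_score(r, q_hist))
```

A small p-value would indicate unequal performance, pointing to the "least bad" of the two models in the sense of the average quantile score.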