CMStatistics 2022: View Submission - CFE
Title: Quantifying model uncertainty of machine learning methods for loss given default estimation
Authors: Matthias Nagl - University of Regensburg (Germany)
Maximilian Nagl - University of Regensburg (Germany) [presenting]
Daniel Roesch - University of Regensburg (Germany)
Abstract: The use of machine learning methods has increasingly found its way into the credit risk literature. These studies focus mainly on more accurate prediction of the main credit risk parameters and show machine learning methods to be superior to standard statistical models. However, the quantification of the accompanying (model) uncertainty has been neglected so far. This type of uncertainty measures how certain the model is about each prediction. It is therefore essential for risk managers and regulators, and its quantification increases the transparency and stability of machine learning methods in risk management tasks. We fill this gap by using a novel approach called evidential learning. We evaluate the model uncertainty of loss given default estimation techniques and apply explainable artificial intelligence (XAI) methods to identify its drivers. We find that model uncertainty increases out of time and with extreme realizations of macroeconomic drivers.
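The abstract does not spell out the evidential learning formulation. A common choice for regression targets such as loss given default is deep evidential regression (Amini et al., 2020), in which the network outputs the four parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution per observation, and aleatoric and epistemic uncertainty fall out of those parameters in closed form. The sketch below is an illustrative, stdlib-only implementation of that loss and the implied uncertainties; the parameter names and the assumption that this specific formulation is used are ours, not the authors'.

```python
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of observation y under a Normal-Inverse-Gamma
    (NIG) evidential output, as in deep evidential regression (assumed form).
    gamma: predicted mean; nu, alpha, beta: evidence parameters (nu > 0, alpha > 1, beta > 0)."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

def uncertainties(nu, alpha, beta):
    """Closed-form uncertainties implied by the NIG parameters:
    aleatoric = E[sigma^2] (noise in the data),
    epistemic = Var[mu] (model uncertainty, the quantity studied in the paper)."""
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic
```

In this formulation, low evidence (small nu) inflates the epistemic term, so predictions on unfamiliar inputs, such as out-of-time samples or extreme macroeconomic conditions, carry visibly larger model uncertainty.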