B1194
Title: Interpretable machine learning for imbalanced credit scoring datasets
Authors: Yujia Chen - University of Edinburgh Business School (United Kingdom) [presenting]
Raffaella Calabrese - University of Edinburgh (United Kingdom)
Belen Martin-Barragan - The University of Edinburgh (United Kingdom)
Abstract: The class imbalance problem is common in the credit scoring domain, as the number of defaulters is usually much smaller than the number of non-defaulters. To date, research on the class imbalance problem has mainly focused on identifying and mitigating its adverse effect on the predictive accuracy of machine learning techniques, while its impact on machine learning interpretability has not been studied in the literature. This paper fills this gap by analysing how the stability of Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), two popular interpretation methods, is affected by class imbalance. Our experiments use 2016-2020 UK residential mortgage data collected from European Datawarehouse. The results show that the interpretations generated by LIME and SHAP become less stable as the class imbalance increases, indicating that class imbalance does have an adverse effect on machine learning interpretability.
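The following is a minimal illustrative sketch of the kind of stability experiment described above, not the authors' code: it trains a classifier on synthetic data at several imbalance levels and measures how consistent repeated LIME explanations are for a single instance. The synthetic data, the top-k Jaccard similarity used as the stability metric, and all parameter values are assumptions for illustration; the paper itself uses proprietary European Datawarehouse mortgage data and may use a different stability measure.

```python
"""Illustrative sketch: LIME explanation stability under class imbalance."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer


def top_k_jaccard(runs, k=5):
    """Average pairwise Jaccard similarity of the top-k feature sets."""
    sims = []
    for i in range(len(runs)):
        for j in range(i + 1, len(runs)):
            a, b = set(runs[i][:k]), set(runs[j][:k])
            sims.append(len(a & b) / len(a | b))
    return float(np.mean(sims))


def lime_stability(minority_share, n_repeats=10, k=5, seed=0):
    """Train a classifier at a given imbalance level and return the average
    stability of repeated LIME explanations for one minority-class instance."""
    X, y = make_classification(
        n_samples=5000, n_features=10, n_informative=5,
        weights=[1 - minority_share, minority_share], random_state=seed)
    model = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)

    explainer = LimeTabularExplainer(X, mode="classification")
    instance = X[np.where(y == 1)[0][0]]  # explain one "defaulter-like" case

    runs = []
    for _ in range(n_repeats):
        exp = explainer.explain_instance(instance, model.predict_proba,
                                         num_features=X.shape[1])
        # as_map()[1] gives (feature index, weight) pairs for the positive class
        ranked = [f for f, _ in sorted(exp.as_map()[1],
                                       key=lambda t: abs(t[1]), reverse=True)]
        runs.append(ranked)
    return top_k_jaccard(runs, k=k)


if __name__ == "__main__":
    for share in (0.50, 0.20, 0.05, 0.01):  # increasing class imbalance
        print(f"minority share {share:.2f}: "
              f"top-5 stability = {lime_stability(share):.3f}")
```

Lower Jaccard values at smaller minority shares would correspond to the abstract's finding that explanations become less stable as imbalance increases; an analogous loop could be run with SHAP values in place of LIME weights.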