A0900
Title: Detecting local bias and error in credit risk models
Authors: Anthony Bellotti - University of Nottingham Ningbo China (China) [presenting]
Abstract: Financial institutions have used predictive analytics for credit risk estimation for many decades and have more recently begun deploying complex machine learning algorithms. These models operate as black boxes: although they give an uplift in predictive performance, it is difficult to explain how they make decisions, and they may hide undesirable behaviour. In particular, the models may give rise to undetected bias in certain population subgroups. If such a subgroup is a protected group, such as one defined by ethnicity or gender, deployment of the model may infringe equality laws. Although manual checking for bias in prescribed subgroups is possible, bias in other subgroups may still remain undetected, and bias in intersectional subgroups may be missed. We use meta-modelling to detect local bias and error in the population. The methodology is tested on both traditional and machine learning credit scoring models, revealing bias in intersectional subgroups in both cases.
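The abstract does not specify the meta-modelling procedure, but one common way to realise the idea is to fit an interpretable model to the residuals of a black-box scorer, so that regions of systematic error (including intersectional subgroups) surface as readable rules. The sketch below, on synthetic data, is an illustration of that general approach, not the authors' method; all variable names (e.g. group_a, group_b as stand-ins for protected attributes) and model choices are assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): fit a black-box
# credit scoring model, then fit a shallow decision tree as a meta-model on
# the scorer's signed residuals. Leaves whose mean residual is far from zero
# flag subgroups -- including intersectional ones -- with local bias.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "age": rng.integers(18, 70, n),
    "group_a": rng.integers(0, 2, n),  # hypothetical protected attributes
    "group_b": rng.integers(0, 2, n),
})
# Synthetic default outcome with an intersectional effect (group_a * group_b)
# that a base scorer may mis-fit, creating a locally biased subgroup.
logit = -2 + 0.03 * (50 - X["income"]) + 0.8 * (X["group_a"] * X["group_b"])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Base (black-box) credit scoring model.
scorer = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
p_hat = scorer.predict_proba(X_te)[:, 1]

# Meta-model on signed residuals: interpretable splits expose where the
# scorer systematically over- or under-predicts default risk.
residual = y_te - p_hat
meta = DecisionTreeRegressor(max_depth=3, min_samples_leaf=200, random_state=0)
meta.fit(X_te, residual)
print(export_text(meta, feature_names=list(X.columns)))
```

In this sketch, a leaf reached via splits on both group_a and group_b with a large mean residual would indicate an intersectional subgroup where the base model is locally biased, which is the kind of pattern the abstract reports detecting.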