Title: From unbiased MDI feature importance to explainable AI for trees
Authors: Markus Loecher - Berlin School of Economics and Law (Germany) [presenting]
Abstract: We unify various recent attempts to (i) improve the interpretability of tree-based models and (ii) debias the default variable-importance measure, mean decrease in impurity (MDI), in random forests. In particular, we demonstrate a common thread among the out-of-bag-based bias-correction methods and their connection to local explanations for trees. In addition, we point out a bias caused by the inclusion of in-bag data in the newly developed SHAP values. Empirical and simulation studies indicate substantial improvements in the discriminative power of SHAP values when out-of-sample data are used instead.
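The MDI bias the abstract refers to can be illustrated with a minimal sketch (not part of the talk's own code): scikit-learn's `feature_importances_` is the in-bag MDI, and here permutation importance on held-out data stands in for the out-of-bag-style corrections discussed above. MDI assigns nonzero importance to a pure-noise continuous feature (many candidate split points), while an out-of-sample measure correctly ranks the informative feature first.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
n = 1000
# Feature 0: informative binary predictor; feature 1: pure continuous noise.
# High-cardinality noise features are a classic source of MDI bias.
x_info = rng.binomial(1, 0.5, n)
x_noise = rng.uniform(size=n)
y = x_info ^ rng.binomial(1, 0.1, n)  # label depends only on x_info (10% flips)
X = np.column_stack([x_info, x_noise])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

mdi = rf.feature_importances_  # in-bag MDI: noise feature still gets credit
perm = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
# Out-of-sample measure ranks the informative feature clearly first.
```

This is only a proxy for the out-of-bag corrections and the in-bag vs. out-of-sample SHAP comparison in the talk; the variable names and simulation setup here are illustrative assumptions, not the authors' experiments.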