B0165
Title: Stop making excuses for black-box models
Authors: Cynthia Rudin - Duke University (United States) [presenting]
Abstract: With the widespread use of machine learning, black-box models used for high-stakes decisions have had serious societal consequences, including flawed bail and parole decisions in criminal justice. Post-hoc explanations for black-box models are not reliable and can be misleading. Interpretable machine learning models, by contrast, come with their own explanations, which are faithful to what the model actually computes. Several reasons will be given for why we should use interpretable models, the most compelling being that for high-stakes decisions, interpretable models do not seem to lose accuracy relative to black boxes. In fact, the opposite is true: when we understand what a model is doing, we can troubleshoot it and ultimately gain accuracy.
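A minimal sketch of the distinction the abstract draws, not a method from the talk itself: an inherently interpretable model's "explanation" is the model, so it is faithful by construction, whereas a post-hoc explanation of a black box is only an approximation. The dataset and the choice of a shallow decision tree here are illustrative assumptions.

```python
# Illustrative sketch: an interpretable model explains itself.
# Assumes scikit-learn; the toy dataset and max_depth=3 are arbitrary choices.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# A shallow decision tree is inherently interpretable: its decision
# rules are exactly what it computes at prediction time.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Printing the fitted tree exposes the complete decision logic; nothing
# is approximated after the fact, unlike a post-hoc explanation fitted
# to mimic a black box.
print(export_text(model, feature_names=list(data.feature_names)))
```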