B1750
Title: Friends do not let friends deploy black-box models: The importance of intelligibility in machine learning in healthcare
Authors: Rich Caruana - Microsoft Research (United States) [presenting]
Abstract: In machine learning, a tradeoff must sometimes be made between accuracy and intelligibility: the most accurate models are often black-box models that are not very intelligible, and the most intelligible models are usually less accurate. This can limit the accuracy of models that can safely be deployed in mission-critical applications such as healthcare, where being able to understand, validate, edit, and ultimately trust a model is important. We have developed a learning method based on generalized additive models (GAMs) that is as accurate as full-complexity models such as neural nets, boosted trees, and random forests, but more intelligible than linear models. This makes it easy to understand what models have learned and to edit models when they learn inappropriate things. Making it possible for medical experts to understand and repair a model is critical because most clinical data is complex and has unanticipated problems. We will present several healthcare case studies where these high-accuracy GAMs discover surprising patterns in the data that would have made deploying black-box models risky. The case studies include surprising findings in pregnancy, pneumonia, ICU, and COVID-19 risk prediction.
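The abstract does not specify the training algorithm, but a GAM predicts via a sum of per-feature shape functions, y ≈ b + Σ_j f_j(x_j), and one common way to fit such shape functions is cyclic gradient boosting of one-feature stumps. The sketch below is a toy, pure-Python illustration of that general idea under those assumptions; it is not the authors' implementation, and the function names and parameters are hypothetical.

```python
def fit_gam(X, y, n_rounds=200, lr=0.1):
    """Toy additive model: cyclically boost one-feature threshold stumps.
    The stumps accumulated for feature j form its shape function f_j,
    which can be plotted and inspected on its own -- the intelligibility
    property the abstract emphasizes. Illustrative sketch only."""
    n, d = len(X), len(X[0])
    base = sum(y) / n                    # intercept: mean response
    pred = [base] * n
    shapes = [[] for _ in range(d)]      # per-feature list of (threshold, left_val, right_val)
    for r in range(n_rounds):
        j = r % d                        # round-robin over features
        resid = [y[i] - pred[i] for i in range(n)]
        # find the threshold stump on feature j that best fits the residuals
        best = None
        thresholds = sorted(set(X[i][j] for i in range(n)))[:-1]
        for t in thresholds:
            left = [resid[i] for i in range(n) if X[i][j] <= t]
            right = [resid[i] for i in range(n) if X[i][j] > t]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - lm) ** 2 for v in left)
                   + sum((v - rm) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, t, lm, rm)
        if best is None:                 # constant feature: nothing to split on
            continue
        _, t, lm, rm = best
        shapes[j].append((t, lr * lm, lr * rm))   # shrunken update to f_j
        for i in range(n):
            pred[i] += lr * lm if X[i][j] <= t else lr * rm
    return base, shapes

def predict(base, shapes, x):
    """Sum the intercept and each feature's accumulated shape function."""
    s = base
    for j, stumps in enumerate(shapes):
        for t, lv, rv in stumps:
            s += lv if x[j] <= t else rv
    return s
```

Because each f_j depends on a single feature, a domain expert can examine (or zero out) one feature's contribution without retraining, which is what makes editing out inappropriate learned patterns practical.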