SEPTEMBER 13, 2018

Friends don’t let friends deploy black-box models: The importance of intelligibility in machine learning

In machine learning, there is often a trade-off between accuracy and intelligibility. The most accurate models usually are not very intelligible (e.g., deep nets), and the most intelligible models usually are less accurate (e.g., linear regression). This trade-off frequently limits the accuracy of models that can safely be deployed in mission-critical applications such as health care, where being able to understand, validate, edit, and ultimately trust a learned model is important.

In this talk, Rich Caruana, Principal Researcher at Microsoft, discusses a learning method based on generalized additive models (GAMs) that is often as accurate as full-complexity models but even more intelligible than linear models. Because a GAM's prediction is a sum of per-feature terms, it is easy to understand what the model has learned, and also easier to edit the model when it learns inappropriate things because of unanticipated problems with the data. Making it possible for experts to understand a model and repair it is critical, because most data contains unanticipated land mines.
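To make the idea concrete, here is a minimal sketch of fitting this style of model with Microsoft's open-source InterpretML library, whose Explainable Boosting Machine is a tree-based GAM. The library, dataset, and parameters below are illustrative assumptions, not details taken from the talk:

```python
# A minimal sketch, assuming the open-source InterpretML package
# (pip install interpret scikit-learn); not the exact code or data
# from the talk.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative dataset; any tabular classification task would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An Explainable Boosting Machine learns one shape function per
# feature; the prediction is their sum, so the model remains a GAM.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)
print("held-out accuracy:", ebm.score(X_test, y_test))

# Each learned term can be inspected (and, if it reflects a data
# artifact, edited or dropped) before the model is deployed.
explanation = ebm.explain_global()
# interpret.show(explanation) renders interactive per-feature plots.
```

The key design point is additivity: because the prediction decomposes into independent per-feature terms, each term can be plotted, audited, and corrected on its own, which is what makes the repair step described above practical.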

Rich presents a case study in which these high-accuracy GAMs discover surprising patterns in data that would have made deploying a black-box model risky. He also briefly shows how these models are used to detect bias in domains where fairness and transparency are paramount.
