Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Cited by: 12
Author
Cynthia Rudin
Affiliation
[1] Duke University
Source
Nature Machine Intelligence | 2019 / Volume 1
DOI
Not available
Abstract
Black box machine learning models are currently being used for high-stakes decision making throughout society, causing problems in healthcare, criminal justice and other domains. Some people hope that creating methods for explaining these black box models will alleviate some of the problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practice and can potentially cause great harm to society. The way forward is to design models that are inherently interpretable. This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare and computer vision.
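The abstract's distinction between explaining a black box and using an inherently interpretable model can be made concrete with a small additive scoring system, one of the model classes the paper advocates for high-stakes settings such as criminal justice. This is a minimal sketch only: the feature names, point values, and threshold below are illustrative assumptions, not the paper's actual model.

```python
# An inherently interpretable model: a tiny additive scoring system.
# Every point assignment and the final threshold are visible, so a
# domain expert can audit the entire decision rule line by line --
# no post-hoc explanation of a black box is needed.

def recidivism_score(age, prior_offenses):
    """Return an integer risk score built from auditable rules.

    The features and point values here are hypothetical, chosen only
    to illustrate the structure of a scoring system.
    """
    score = 0
    if age < 23:             # +2 points for young age
        score += 2
    if prior_offenses >= 3:  # +3 points for three or more priors
        score += 3
    return score

def predict_high_risk(age, prior_offenses, threshold=3):
    """The complete decision rule: flag high risk iff score >= threshold."""
    return recidivism_score(age, prior_offenses) >= threshold
```

For example, `predict_high_risk(20, 4)` returns `True` (score 2 + 3 = 5 meets the threshold), while `predict_high_risk(40, 0)` returns `False` (score 0). Because the model *is* its own explanation, there is no gap between what the model computes and what an "explainer" claims it computes.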
Pages: 206–215
Number of pages: 9