Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission

Cited by: 953
Authors
Caruana, Rich [1 ]
Lou, Yin [2 ]
Gehrke, Johannes [3 ]
Koch, Paul [1 ]
Sturm, Marc [4 ]
Elhadad, Noemie [5 ]
Affiliations
[1] Microsoft Res, Redmond, WA 98052 USA
[2] LinkedIn Corp, Sunnyvale, CA USA
[3] Microsoft, Redmond, WA USA
[4] NewYork Presbyterian Hosp, New York, NY USA
[5] Columbia Univ, New York, NY 10027 USA
Source
KDD'15: PROCEEDINGS OF THE 21ST ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING | 2015
Keywords
intelligibility; classification; interaction detection; additive models; logistic regression; healthcare; risk prediction
DOI
10.1145/2783258.2788613
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In machine learning, a tradeoff must often be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets are usually not intelligible, while more intelligible models such as logistic regression, naive Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare, where being able to understand, validate, edit, and trust a learned model is important. We present two case studies where high-performance generalized additive models with pairwise interactions (GA(2)Ms) are applied to real healthcare problems, yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that had previously prevented complex learned models from being fielded in this domain; because the model is intelligible and modular, these patterns can be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.
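The shape-function idea behind the additive models the abstract describes can be sketched in a few lines. The following is an illustrative toy only, not the paper's implementation: a binned additive logistic model trained by cyclic gradient updates (no pairwise-interaction terms), with invented feature names and synthetic data.

```python
import numpy as np

# Minimal sketch of an intelligible additive model: each feature gets a
# binned "shape function" fit by cyclic gradient steps on the logistic
# loss. Features, data, and constants are invented for this example.
rng = np.random.default_rng(0)
n, n_bins, n_rounds, lr = 2000, 16, 150, 0.1

# Synthetic "patients": risk nonlinear in age, linear in blood pressure.
age = rng.uniform(20, 90, n)
bp = rng.uniform(90, 180, n)
true_logit = 0.004 * (age - 55) ** 2 + 0.03 * (bp - 120) - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

X = np.column_stack([age, bp])
d = X.shape[1]
# Equal-width bin index of each sample, per feature.
bins = np.stack(
    [np.clip(((X[:, j] - X[:, j].min()) / (np.ptp(X[:, j]) + 1e-12)
              * n_bins).astype(int), 0, n_bins - 1) for j in range(d)],
    axis=1)

shape = np.zeros((d, n_bins))               # one shape function per feature
bias = np.log(y.mean() / (1.0 - y.mean()))  # intercept at the base rate

for _ in range(n_rounds):
    for j in range(d):                      # cycle over features
        score = bias + sum(shape[k][bins[:, k]] for k in range(d))
        resid = y - 1.0 / (1.0 + np.exp(-score))  # logistic-loss gradient
        for b in range(n_bins):             # mean residual per bin
            mask = bins[:, j] == b
            if mask.any():
                shape[j, b] += lr * resid[mask].mean()

# Each shape[j] can be plotted bin by bin, inspected, and hand-edited,
# which is the sense in which such additive models are intelligible.
score = bias + sum(shape[k][bins[:, k]] for k in range(d))
acc = float(((score > 0) == (y > 0.5)).mean())
print(f"training accuracy: {acc:.3f}")
```

A full GA(2)M additionally learns two-dimensional shape functions for selected feature pairs; the one-dimensional case above conveys the core mechanism.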
Pages: 1721-1730
Page count: 10
Related Papers
7 records
[1] Ambrosino R., 1995, P ANN S COMP APPL ME.
[2] Cooper GF, Abraham V, Aliferis CF, Aronis JM, Buchanan BG, Caruana R, Fine MJ, Janosky JE, Livingston G, Mitchell T, Monti S, Spirtes P. Predicting dire outcomes of patients with community acquired pneumonia. Journal of Biomedical Informatics, 2005, 38(5): 347-366.
[3] Cooper GF, Aliferis CF, Ambrosino R, Aronis J, Buchanan BG, Caruana R, Fine MJ, Glymour C, Gordon G, Hanusa BH, Janosky JE, Meek C, Mitchell T, Richardson T, Spirtes P. An evaluation of machine-learning methods for predicting pneumonia mortality. Artificial Intelligence in Medicine, 1997, 9(2): 107-138.
[4] Hastie T., 1986, Statistical Science, 1: 297. DOI: 10.1214/ss/1177013604.
[5] Lou Y., 2012, KDD.
[6] Lou Y, Caruana R, Gehrke J, Hooker G. Accurate Intelligible Models with Pairwise Interactions. Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'13), 2013: 623-631.
[7] Wood S. N., 2017, Generalized Additive Models. DOI: 10.1201/9781315370279.