Significance tests or confidence intervals: which are preferable for the comparison of classifiers?

Cited by: 20
Authors
Berrar, Daniel [1 ]
Lozano, Jose A. [2 ]
Affiliations
[1] Tokyo Inst Technol, Interdisciplinary Grad Sch Sci & Engn, Midori Ku, Yokohama, Kanagawa 2268502, Japan
[2] Univ Basque Country UPV EHU, Dept Comp Sci & Artificial Intelligence, Intelligent Syst Grp, Donostia San Sebastian 20018, Gipuzkoa, Spain
Keywords
null hypothesis significance testing; p-value; confidence interval; classification; reasoning; statistical comparisons; model selection; p-values; inference; illusion; bias
DOI
10.1080/0952813X.2012.680252
Chinese Library Classification (CLC) code
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Null hypothesis significance tests and their p-values currently dominate the statistical evaluation of classifiers in machine learning. Here, we discuss fundamental problems of this research practice. We focus on the problem of comparing multiple fully specified classifiers on a small-sample test set. On the basis of the method by Quesenberry and Hurst, we derive confidence intervals for the effect size, i.e. the difference in true classification performance. These confidence intervals disentangle the effect size from its uncertainty and thereby provide information beyond the p-value. This additional information can drastically change the way in which classification results are currently interpreted, published and acted upon. We illustrate how our reasoning can change, depending on whether we focus on p-values or confidence intervals. We argue that the conclusions from comparative classification studies should be based primarily on effect size estimation with confidence intervals, and not on significance tests and p-values.
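As a rough illustration of the abstract's central recommendation (reporting the estimated effect size, i.e. the difference in true classification performance, together with a confidence interval rather than a bare p-value), the sketch below computes a simple Wald-type confidence interval for the difference in accuracy of two fully specified classifiers evaluated on the same test set. This is a minimal sketch for a paired design, not the Quesenberry and Hurst construction used in the paper; the function name and the paired-proportion variance formula are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def paired_accuracy_diff_ci(y_true, pred_a, pred_b, alpha=0.05):
    """Wald-type CI for the accuracy difference of two classifiers
    evaluated on the same (paired) test set. Illustrative sketch only."""
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    n = len(y_true)
    correct_a = pred_a == y_true
    correct_b = pred_b == y_true
    # Discordant counts: only classifier A correct (n10), only B correct (n01)
    n10 = np.sum(correct_a & ~correct_b)
    n01 = np.sum(~correct_a & correct_b)
    diff = (n10 - n01) / n                      # estimated accuracy difference
    # Estimated variance of the paired difference of proportions
    var = (n10 + n01) / n**2 - (n10 - n01)**2 / n**3
    half_width = norm.ppf(1 - alpha / 2) * np.sqrt(var)
    return diff, (diff - half_width, diff + half_width)

# Example: report the effect size with its interval, not only a p-value
# diff, (lo, hi) = paired_accuracy_diff_ci(y_test, clf_a.predict(X_test), clf_b.predict(X_test))
```

Note that Wald-type intervals are known to have poor coverage on the small test sets the paper focuses on, which is one motivation for the more careful interval construction the authors derive; the point illustrated here is only the reporting practice of estimating the difference and its uncertainty separately.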
Pages: 189-206
Number of pages: 18