Prediction games and arcing algorithms

Cited by: 324
Author
Breiman, L [1]
Affiliation
[1] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94720 USA
Keywords
DOI
10.1162/089976699300016106
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The theory behind the success of adaptive reweighting and combining algorithms (arcing) such as Adaboost (Freund & Schapire, 1996a, 1997) and others in reducing generalization error has not been well understood. By formulating prediction as a game where one player makes a selection from instances in the training set and the other a convex linear combination of predictors from a finite set, existing arcing algorithms are shown to be algorithms for finding good game strategies. The minimax theorem is an essential ingredient of the convergence proofs. An arcing algorithm is described that converges to the optimal strategy. A bound on the generalization error for the combined predictors in terms of their maximum error is proven that is sharper than bounds to date. Schapire, Freund, Bartlett, and Lee (1997) offered an explanation of why Adaboost works in terms of its ability to produce generally high margins. The empirical comparison of Adaboost to the optimal arcing algorithm shows that their explanation is not complete.
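The adaptive reweighting-and-combining (arcing) idea the abstract describes can be illustrated with a minimal AdaBoost-style sketch: one "player" maintains a distribution over training instances, the other builds a convex-weighted combination of weak predictors. This is an assumption-laden illustration using decision stumps, not Breiman's optimal arcing algorithm from the paper.

```python
import numpy as np

def adaboost(X, y, n_rounds=20):
    """Arcing sketch (AdaBoost with decision stumps), for illustration only.
    Labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # distribution over training instances
    stumps, alphas = [], []
    for _ in range(n_rounds):
        best = None
        # exhaustively search axis-aligned threshold stumps
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] <= thr, 1, -1)
                    err = np.sum(w * (pred != y))
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = max(err, 1e-10)
        if err >= 0.5:               # no weak predictor beats random guessing
            break
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this predictor
        w *= np.exp(-alpha * y * pred)          # upweight misclassified points
        w /= w.sum()
        stumps.append((j, thr, sign))
        alphas.append(alpha)

    def predict(Xq):
        # convex-weighted vote of the collected weak predictors
        score = np.zeros(len(Xq))
        for (j, thr, sign), a in zip(stumps, alphas):
            score += a * sign * np.where(Xq[:, j] <= thr, 1, -1)
        return np.sign(score)
    return predict
```

On a tiny AND-style dataset that no single stump can classify, a few rounds of reweighting suffice to reach zero training error, which is the behavior the game-theoretic formulation analyzes.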
Pages: 1493-1517
Page count: 25
Related papers
18 in total
[1] Bauer E, 1998, Machine Learning, p. 1
[2] Blackwell D, 1954, Theory of Games and Statistical Decisions
[3] Breiman L. Bagging predictors. Machine Learning, 1996, 24(2): 123-140
[4] Breiman L, 1996, Technical Report 460
[5] Breiman L, 1997, Arcing the edge
[6] Drucker H, 1995, Advances in Neural Information Processing Systems, Vol. 8, p. 479
[7] Freund Y, 1996, Machine Learning: Proceedings of the Thirteenth International Conference (ICML '96), p. 148
[8] Freund Y, 1997, Journal of Computer and System Sciences, Vol. 55, p. 1219
[9] Freund Y, 1998, Self-bounding learning algorithms
[10] Freund Y, 1996, Proceedings of the 9th Annual Conference on Computational Learning Theory