Boosting the margin: A new explanation for the effectiveness of voting methods

Cited: 9
Authors
Schapire, RE
Freund, Y
Bartlett, P
Lee, WS
Affiliations
[1] AT&T Bell Labs, Florham Pk, NJ 07932 USA
[2] Australian Natl Univ, Dept Syst Engn, RSISE, Canberra, ACT 0200, Australia
[3] Australian Def Force Acad, Univ Coll UNSW, Sch Elect Engn, Canberra, ACT 2600, Australia
Keywords
ensemble methods; decision trees; neural networks; bagging; boosting; error-correcting output coding; Markov chain Monte Carlo
DOI
Not available
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208 ; 070103 ; 0714 ;
Abstract
One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
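The abstract defines the margin of a training example as the difference between the number of votes for the correct label and the maximum number of votes received by any incorrect label. A minimal sketch of that definition (hypothetical helper, normalized by the number of base classifiers so the margin lies in [-1, 1]):

```python
from collections import Counter

def margin(votes, correct_label):
    """Margin of one example under a voting classifier.

    votes: list of predicted labels, one vote per base classifier.
    Returns (correct votes - max votes for any incorrect label) / total votes,
    a value in [-1, 1] that is positive iff the combined vote is correct.
    """
    counts = Counter(votes)
    correct = counts.get(correct_label, 0)
    # Largest vote count among the incorrect labels (0 if none received votes).
    wrong = max((c for lbl, c in counts.items() if lbl != correct_label), default=0)
    return (correct - wrong) / len(votes)

# 3 votes for the correct label "a", at most 1 for any wrong label: (3 - 1) / 5
print(margin(["a", "a", "b", "a", "c"], "a"))  # -> 0.4
```

The paper's analysis relates the distribution of these margins over the training set to the test error; boosting tends to push this quantity up even after the training error reaches zero.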
Pages: 1651-1686
Page count: 36
References
42 items in total
[1]  
[Anonymous], 1998, UCI Repository of Machine Learning Databases
[2]  
[Anonymous], Proceedings of the Fifth Annual Workshop on Computational Learning Theory
[3]  
[Anonymous], Proceedings of the Twelfth International Conference on Machine Learning
[4]   Universal approximation bounds for superpositions of a sigmoidal function [J].
Barron, AR.
IEEE TRANSACTIONS ON INFORMATION THEORY, 1993, 39 (03) :930-945
[5]   The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network [J].
Bartlett, PL .
IEEE TRANSACTIONS ON INFORMATION THEORY, 1998, 44 (02) :525-536
[6]  
Bauer E, 1997, unpublished empirical comparison
[7]   What Size Net Gives Valid Generalization? [J].
Baum, Eric B. ;
Haussler, David .
NEURAL COMPUTATION, 1989, 1 (01) :151-160
[8]  
Breiman L, 1998, ANN STAT, V26, P801
[9]   Bagging predictors [J].
Breiman, L .
MACHINE LEARNING, 1996, 24 (02) :123-140
[10]  
Breiman L, 1997, Technical Report 504, Department of Statistics, University of California