An empirical comparison of voting classification algorithms: Bagging, boosting, and variants

Cited: 1593
Authors
Bauer, E [1]
Kohavi, R [2]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] Blue Martini Software, San Mateo, CA 94403 USA
Keywords
classification; boosting; Bagging; decision trees; Naive-Bayes; mean-squared error
DOI
10.1023/A:1007515423169
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced the variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently from AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates are used in conjunction with no pruning, as well as when the data are backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only "hard" areas but also outliers and noise.
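The abstract describes the experimental design in prose only. The minimal sketch below is not the authors' MLC++/MC4 setup; it merely illustrates the kind of comparison the study runs: a single decision-tree inducer versus Bagging and AdaBoost ensembles, and a Naive-Bayes inducer with and without Bagging, all scored by test error. The scikit-learn estimators, the synthetic dataset, and the choice of 25 component classifiers per ensemble are assumptions standing in for the paper's inducers and UCI benchmarks.

# A minimal sketch, assuming scikit-learn: compare a single decision tree,
# bagged trees, boosted trees, and Naive-Bayes with and without Bagging.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the paper's real and artificial benchmark datasets.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    # Unpruned tree: the low-bias, high-variance kind of inducer Bagging helps.
    "single tree": DecisionTreeClassifier(random_state=0),
    # 25 voted replicates; `estimator=` is `base_estimator=` in scikit-learn < 1.2.
    "bagged trees": BaggingClassifier(
        estimator=DecisionTreeClassifier(random_state=0),
        n_estimators=25, random_state=0),
    # Depth-limited base learner so boosting does not terminate after one round
    # on a perfectly fit training set (a scikit-learn implementation detail,
    # not something taken from the paper).
    "boosted trees": AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=3, random_state=0),
        n_estimators=25, random_state=0),
    # Naive-Bayes is a stable inducer; Bagging it should change little.
    "Naive-Bayes": GaussianNB(),
    "bagged Naive-Bayes": BaggingClassifier(
        estimator=GaussianNB(), n_estimators=25, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    error = 1.0 - model.score(X_test, y_test)
    print(f"{name:20s} test error = {error:.3f}")

On a run like this, the gap between the single tree and the tree ensembles is usually visible, while bagged Naive-Bayes tracks plain Naive-Bayes closely, which matches the qualitative pattern the abstract reports; the bias and variance terms themselves require the repeated-sampling decomposition used in the paper.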
Pages: 105 - 139
Number of pages: 35