Ensemble methods in machine learning

Cited by: 4944
Authors
Dietterich, TG [1]
Affiliation
[1] Oregon State Univ, Corvallis, OR 97331 USA
Source
MULTIPLE CLASSIFIER SYSTEMS | 2000 / Vol. 1857
DOI
10.1007/3-540-45014-9_1
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that AdaBoost does not overfit rapidly.
Pages: 1-15 (15 pages)
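
To make the voting scheme described in the abstract concrete, here is a minimal Bagging-style sketch in Python: several decision trees are trained on bootstrap samples of the training set and combined by an unweighted majority vote. The dataset, base learner, and ensemble size are illustrative assumptions (scikit-learn and NumPy), not details taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative binary classification task (assumption, not from the paper).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_members = 25  # ensemble size chosen for illustration
members = []
for _ in range(n_members):
    # Bootstrap sample: draw n training points with replacement (Bagging).
    idx = rng.integers(0, len(X_train), size=len(X_train))
    tree = DecisionTreeClassifier(random_state=0)
    members.append(tree.fit(X_train[idx], y_train[idx]))

# Unweighted majority vote over the members' 0/1 predictions.
votes = np.stack([m.predict(X_test) for m in members])  # (n_members, n_test)
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)

single_acc = members[0].score(X_test, y_test)
ensemble_acc = (ensemble_pred == y_test).mean()
print(f"single tree: {single_acc:.3f}  ensemble: {ensemble_acc:.3f}")
```

On a task like this the vote typically matches or beats any single member, which is the effect the paper sets out to explain; a weighted vote (as in AdaBoost) would replace the plain mean with per-member weights.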