Controlling the diversity in classifier ensembles through a measure of agreement

Cited by: 22
Authors
Zouari, H [1 ]
Heutte, L [1 ]
Lecourtier, Y [1 ]
Affiliations
[1] Univ Rouen, Lab PSI, FRE 2645, CNRS, F-76821 Mont St Aignan, France
Keywords
diversity measure; classifier ensemble; output generation algorithm; classifier agreement; kappa measure; dependency; classifier simulation;
DOI
10.1016/j.patcog.2005.02.012
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, a simulation method is proposed to generate a set of classifier outputs with specified individual accuracies and fixed pairwise agreement. A diversity measure (kappa) is used to control the agreement among classifiers for building the classifier teams. The generated team outputs can be used to study the behaviour of class-type combination methods such as voting rules over multiple dependent classifiers. (c) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
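The abstract's core idea — simulated classifier outputs with a target individual accuracy and a controllable pairwise agreement measured by kappa — can be illustrated with a minimal sketch. This is not the paper's actual output-generation algorithm; the function names (`simulate_outputs`, `decide`, `cohen_kappa`) and the `copy_prob` mechanism for raising agreement are illustrative assumptions.

```python
import random

def decide(rng, y, n_classes, accuracy):
    # Correct label with probability `accuracy`, otherwise a uniformly
    # random wrong label.
    if rng.random() < accuracy:
        return y
    return rng.choice([c for c in range(n_classes) if c != y])

def simulate_outputs(n_samples, n_classes, accuracy, copy_prob, seed=0):
    # Generate true labels and two classifier output streams.
    # Classifier 1 decides independently at the target accuracy;
    # classifier 2 copies classifier 1's decision with probability
    # `copy_prob` (raising pairwise agreement), else decides on its own.
    rng = random.Random(seed)
    truth, out1, out2 = [], [], []
    for _ in range(n_samples):
        y = rng.randrange(n_classes)
        truth.append(y)
        out1.append(decide(rng, y, n_classes, accuracy))
        if rng.random() < copy_prob:
            out2.append(out1[-1])
        else:
            out2.append(decide(rng, y, n_classes, accuracy))
    return truth, out1, out2

def cohen_kappa(a, b, n_classes):
    # Pairwise kappa: observed agreement corrected for the chance
    # agreement implied by each classifier's label marginals.
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_chance = sum((a.count(c) / n) * (b.count(c) / n)
                   for c in range(n_classes))
    return (p_obs - p_chance) / (1 - p_chance)
```

Sweeping `copy_prob` from 0 to 1 moves the measured kappa from near-independence up to 1, which is the kind of controlled-agreement team that the paper's method constructs exactly rather than by rejection or copying.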
Pages: 2195-2199
Number of pages: 5
References (5 in total)
[1] Fleiss J. L. Statistical Methods for Rates and Proportions, 2nd ed. 1981.
[2] Kuncheva L. I., Kountchev R. K. Generating classifier outputs of fixed accuracy and diversity. Pattern Recognition Letters, 2002, 23(5): 593-600.
[3] Kuncheva L. I., Whitaker C. J. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning, 2003, 51(2): 181-207.
[4] Lecce V. D. 2000, 7 INT WORKSH FRONT H, p. 143.
[5] Zouari H. Lecture Notes in Computer Science, 2003, vol. 2709, p. 296.