Selective multiple kernel learning for classification with ensemble strategy

Cited by: 49
Authors
Sun, Tao [1 ]
Jiao, Licheng [1 ]
Liu, Fang [2 ]
Wang, Shuang [1 ]
Feng, Jie [1 ]
Affiliations
[1] Xidian Univ, Minist Educ China, Key Lab Intelligent Percept & Image Understanding, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Ensemble learning; Kernel evaluation; Multiple kernel learning; Selective multiple kernel learning; Fast selective multiple kernel learning;
DOI
10.1016/j.patcog.2013.04.003
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification code
140502 [Artificial Intelligence];
Abstract
Multiple Kernel Learning (MKL) aims to achieve better results than single kernel learning by combining a compact set of sub-kernels. However, MKL with the L1-norm easily discards sub-kernels carrying complementary information, while MKL with the Lp-norm (p >= 2) often yields a redundant solution. To address these problems, a Selective Multiple Kernel Learning (SMKL) method, inspired by Ensemble Learning (EL), is proposed. Compared with Lp-norm (p >= 2) MKL, SMKL obtains a sparse solution through a pre-selection procedure. Compared with L1-norm MKL, SMKL preserves the sub-kernels with complementary information by guaranteeing high discrimination and large diversity among the pre-selected sub-kernels. To quantify the discrimination and diversity of sub-kernels, a new kernel evaluation measure is designed. SMKL reduces the scale of the MKL optimization and saves the memory needed to store the sub-kernels, which extends the scale of problems that MKL can solve. In particular, a fast SMKL method using an L-infinity-norm constraint is presented, which requires no MKL optimization process; this means memory is hardly a limitation for MKL on large-scale problems. Experiments show that our method is effective for classification. (C) 2013 Elsevier Ltd. All rights reserved.
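The pre-selection idea in the abstract can be illustrated with a minimal sketch: score each candidate sub-kernel by a discrimination measure, keep the top-scoring ones, and combine them with uniform weights (in the spirit of the fast L-infinity-norm variant, which avoids the MKL optimization). This is not the paper's exact kernel evaluation — it omits the diversity term and substitutes kernel-target alignment as the discrimination score; the RBF bandwidths and data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gram matrix of the Gaussian (RBF) kernel exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def alignment(K, y):
    # kernel-target alignment: <K, y y^T>_F / (||K||_F * ||y y^T||_F),
    # used here as a stand-in for the paper's discrimination measure
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# toy two-class data (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

# candidate sub-kernels: RBF kernels at several bandwidths
gammas = [0.01, 0.1, 1.0, 10.0]
kernels = [rbf_kernel(X, g) for g in gammas]
scores = [alignment(K, y) for K in kernels]

# pre-select the two most discriminative sub-kernels, then combine them
# with uniform weights -- no further MKL optimization is run
top = np.argsort(scores)[::-1][:2]
K_comb = sum(kernels[i] for i in top) / len(top)
print(K_comb.shape)  # (40, 40)
```

Only the selected Gram matrices ever need to be held in memory, which mirrors the abstract's point that pre-selection shrinks both the optimization and the storage cost.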
Pages: 3081-3090
Page count: 10