Ensemble Manifold Regularization

Cited by: 202
Authors
Geng, Bo [1 ]
Tao, Dacheng [2 ]
Xu, Chao [1 ]
Yang, Linjun [3 ]
Hua, Xian-Sheng [4 ]
Affiliations
[1] Peking Univ, Minist Educ, Key Lab Machine Percept, Beijing 100871, Peoples R China
[2] Univ Technol, Fac Engn & Informat Technol, Ctr Quantum Computat & Intelligent Syst, Broadway, NSW 2007, Australia
[3] Microsoft Res Asia, Beijing 100190, Peoples R China
[4] Microsoft Corp, Redmond, WA 98052 USA
Keywords
Manifold learning; semi-supervised learning; ensemble manifold regularization; classification
DOI
10.1109/TPAMI.2012.57
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function that yields optimal manifold hyperparameters. Cross validation is usually applied, but it does not necessarily scale well; further problems arise from the suboptimality incurred by discrete grid search and from overfitting. We therefore develop an ensemble manifold regularization (EMR) framework that approximates the intrinsic manifold by combining several initial guesses. Algorithmically, EMR is carefully designed so that it 1) jointly learns both the composite manifold and the semi-supervised learner, 2) is fully automatic, learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable, in both time and space, to a large number of candidate manifold hyperparameters. Furthermore, we prove that EMR converges to the deterministic matrix at a root-n rate. Extensive experiments on both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
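The idea in the abstract can be illustrated with a minimal sketch: build several candidate graph Laplacians (one per guess of the graph hyperparameters), then alternate between solving a Laplacian-regularized least-squares learner under the composite Laplacian and updating the convex combination weights on the probability simplex. This is not the authors' implementation; the alternating scheme, the linear kernel, all function names, and all hyperparameter values below are illustrative assumptions.

```python
import numpy as np

def knn_laplacian(X, k, sigma):
    """Gaussian-weighted kNN graph Laplacian for one hyperparameter guess."""
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:k + 1]                  # skip the point itself
        W[i, idx] = np.exp(-D2[i, idx] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(1)) - W                          # unnormalized Laplacian

def simplex_projection(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0)

def emr_laprls(X, y, labeled, laplacians, gA=1e-2, gI=1e-1, lam=1.0, iters=20):
    """Toy EMR-style learner: alternate between (a) Laplacian-regularized
    least squares with the composite Laplacian sum_j mu_j * L_j and
    (b) the weight update argmin_{mu in simplex} gI*mu.s + lam*||mu||^2,
    where s_j = f' L_j f.  y holds labels for `labeled` rows, 0 elsewhere."""
    n = X.shape[0]
    J = np.zeros((n, n)); J[labeled, labeled] = 1.0       # labeled-point selector
    K = X @ X.T                                           # linear kernel for simplicity
    mu = np.full(len(laplacians), 1.0 / len(laplacians))  # uniform initial weights
    for _ in range(iters):
        L = sum(m * Lj for m, Lj in zip(mu, laplacians))
        alpha = np.linalg.solve(J @ K + gA * np.eye(n) + gI * L @ K, y)
        f = K @ alpha                                     # current predictions
        s = np.array([f @ Lj @ f for Lj in laplacians])   # smoothness per manifold
        mu = simplex_projection(-gI * s / (2 * lam))      # closed-form weight step
    return f, mu
```

The weight step exploits that minimizing `gI*mu.s + lam*||mu||^2` over the simplex equals projecting the unconstrained minimizer `-gI*s/(2*lam)` onto the simplex; the `lam` penalty keeps the weights from collapsing onto a single candidate manifold.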
Pages: 1227-1233 (7 pages)