Combining Committee-Based Semi-Supervised Learning and Active Learning

Cited by: 5
Authors
Mohamed Farouk Abdel Hady
Friedhelm Schwenker
Institution
[1] Institute of Neural Information Processing, University of Ulm
Keywords
data mining; classification; active learning; co-training; semi-supervised learning; ensemble learning; random subspace method; decision tree; nearest neighbor classifier;
DOI
Not available
CLC Classification Number
TP181 [Automated reasoning, machine learning]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many data mining applications have large amounts of data, but labeling data is usually difficult, expensive, or time consuming, as it requires human experts for annotation. Semi-supervised learning addresses this problem by using unlabeled data together with labeled data in the training process. Co-Training is a popular semi-supervised learning algorithm that assumes each example is represented by multiple sets of features (views) and that these views are sufficient for learning and independent given the class. However, these assumptions are strong and are not satisfied in many real-world domains. In this paper, a single-view variant of Co-Training, called Co-Training by Committee (CoBC), is proposed, in which an ensemble of diverse classifiers is used instead of redundant and independent views. We introduce a new labeling confidence measure for unlabeled examples based on estimating the local accuracy of the committee members on the example's neighborhood. We then introduce two new learning algorithms, QBC-then-CoBC and QBC-with-CoBC, which combine the merits of committee-based semi-supervised learning and active learning. The random subspace method is applied to both C4.5 decision trees and 1-nearest-neighbor classifiers to construct the diverse ensembles used for semi-supervised learning and active learning. Experiments show that these two combinations can outperform other non-committee-based ones.
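The committee-based self-labeling loop summarized in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it builds a random-subspace ensemble of 1-nearest-neighbor classifiers and, in place of the paper's local-accuracy confidence measure, uses a simple committee-vote agreement score to pick which unlabeled examples to pseudo-label. All function names and parameters (`cobc`, `n_members`, `per_iter`, etc.) are hypothetical.

```python
import random

def dist(a, b):
    # Squared Euclidean distance between two equal-length vectors.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def one_nn(train_x, train_y, x, features):
    # 1-nearest-neighbor prediction restricted to a feature subset
    # (the random subspace this committee member was trained on).
    proj = lambda v: [v[i] for i in features]
    best = min(range(len(train_x)),
               key=lambda i: dist(proj(train_x[i]), proj(x)))
    return train_y[best]

def cobc(labeled_x, labeled_y, unlabeled, n_members=3, n_features=1,
         n_iter=5, per_iter=2, seed=0):
    """CoBC-style sketch: a random-subspace committee of 1-NN classifiers
    votes on the unlabeled pool; the examples with the strongest vote
    agreement are pseudo-labeled and moved into the labeled set."""
    rng = random.Random(seed)
    dims = len(labeled_x[0])
    # Each committee member sees a random subset of the features.
    subspaces = [sorted(rng.sample(range(dims), n_features))
                 for _ in range(n_members)]
    lx, ly = list(labeled_x), list(labeled_y)
    pool = list(unlabeled)
    for _ in range(n_iter):
        if not pool:
            break
        votes = []
        for x in pool:
            preds = [one_nn(lx, ly, x, fs) for fs in subspaces]
            top = max(set(preds), key=preds.count)
            # Agreement fraction stands in for the paper's
            # local-accuracy confidence estimate.
            votes.append((preds.count(top) / n_members, top))
        order = sorted(range(len(pool)),
                       key=lambda i: -votes[i][0])[:per_iter]
        # Pop highest-confidence examples (largest index first, so
        # earlier pops don't shift later indices).
        for i in sorted(order, reverse=True):
            lx.append(pool.pop(i))
            ly.append(votes[i][1])
    return lx, ly, subspaces
```

On a toy two-cluster problem (one labeled seed per cluster), the loop absorbs the unlabeled points with their majority-vote labels; QBC-then-CoBC and QBC-with-CoBC would additionally query a human oracle for the examples the committee *disagrees* on most, which this sketch omits.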
Pages: 681-698
Page count: 18