A combinational incremental ensemble of classifiers as a technique for predicting students' performance in distance education

Citations: 114
Authors
Kotsiantis, S. [1 ]
Patriarcheas, K. [1 ]
Xenos, M. [1 ]
Affiliations
[1] Hellenic Open Univ, Software Qual Lab, Sch Sci & Technol, Patras 26222, Greece
Keywords
Educational data mining; Online learning algorithms; Classifiers; Voting methods; SUCCESS;
DOI
10.1016/j.knosys.2010.03.010
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The ability to predict a student's performance could be useful in a great number of ways associated with university-level distance learning. Students' marks in a few written assignments can constitute the training set for a supervised machine learning algorithm. With the explosive increase of data and information, incremental learning ability has become more and more important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Combining classifiers has been proposed as a new direction for improving classification accuracy; however, most ensemble algorithms operate in batch mode. A better proposal is therefore an online ensemble of classifiers that combines an incremental version of Naive Bayes with the 1-NN and WINNOW algorithms using the voting methodology. Among other significant conclusions, it was found that the proposed algorithm is the most appropriate for the construction of a software support tool. (C) 2010 Elsevier B.V. All rights reserved.
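The abstract describes the ensemble only at a high level. As a minimal sketch of how such an online voting ensemble could be wired together, the Python code below updates an incremental Naive Bayes, a 1-NN, and a WINNOW learner one example at a time and combines their predictions by majority voting. All class names, the binary feature/label encoding, the Laplace smoothing, the Hamming distance, and WINNOW's parameters (alpha = 2, threshold = number of features) are illustrative assumptions, not details taken from the paper.

import math
from collections import Counter, defaultdict

class IncrementalNaiveBayes:
    """Naive Bayes over binary features, updated one labelled example at a time."""
    def __init__(self):
        self.class_counts = Counter()               # label -> number of examples seen
        self.feature_counts = defaultdict(Counter)  # label -> {feature index: times x_i == 1}

    def update(self, x, y):
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            self.feature_counts[y][i] += v

    def predict(self, x):
        if not self.class_counts:
            return 0
        total = sum(self.class_counts.values())
        best, best_score = 0, float("-inf")
        for y, n in self.class_counts.items():
            score = math.log(n / total)
            for i, v in enumerate(x):
                p1 = (self.feature_counts[y][i] + 1) / (n + 2)  # Laplace-smoothed P(x_i=1 | y)
                score += math.log(p1 if v else 1.0 - p1)
            if score > best_score:
                best, best_score = y, score
        return best

class OneNN:
    """1-nearest-neighbour with Hamming distance; 'learning' is just storing examples."""
    def __init__(self):
        self.memory = []

    def update(self, x, y):
        self.memory.append((x, y))

    def predict(self, x):
        if not self.memory:
            return 0
        _, label = min(self.memory,
                       key=lambda xy: sum(a != b for a, b in zip(xy[0], x)))
        return label

class Winnow:
    """Littlestone's WINNOW: multiplicative weight updates, made only on mistakes."""
    def __init__(self, n_features, alpha=2.0):
        self.w = [1.0] * n_features
        self.theta = float(n_features)  # classic threshold choice
        self.alpha = alpha

    def predict(self, x):
        return 1 if sum(w * v for w, v in zip(self.w, x)) >= self.theta else 0

    def update(self, x, y):
        if self.predict(x) == y:
            return                      # no update when the prediction is correct
        factor = self.alpha if y == 1 else 1.0 / self.alpha
        for i, v in enumerate(x):
            if v:                       # only weights of active features change
                self.w[i] *= factor

class VotingEnsemble:
    """Majority vote over the three incremental members."""
    def __init__(self, n_features):
        self.members = [IncrementalNaiveBayes(), OneNN(), Winnow(n_features)]

    def predict(self, x):
        return Counter(m.predict(x) for m in self.members).most_common(1)[0][0]

    def update(self, x, y):
        for m in self.members:
            m.update(x, y)

# Hypothetical usage: each example encodes early assignment marks as binary
# indicators (e.g. "mark above threshold"), labelled pass (1) / fail (0).
ensemble = VotingEnsemble(n_features=4)
for x, y in [([1, 0, 1, 1], 1), ([0, 0, 1, 0], 0), ([1, 1, 0, 1], 1)]:
    prediction = ensemble.predict(x)  # predict before the label is revealed
    ensemble.update(x, y)             # then learn incrementally from the true label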
Pages: 529-535 (7 pages)