A fast pruned-extreme learning machine for classification problem

Cited by: 291
Authors
Rong, Hai-Jun [1 ]
Ong, Yew-Soon [1 ]
Tan, Ah-Hwee [1 ]
Zhu, Zexuan [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Engn, Div Informat Syst, Singapore 639798, Singapore
Keywords
Feedforward networks; Extreme learning machine (ELM); Pattern classification
DOI
10.1016/j.neucom.2008.01.005
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The extreme learning machine (ELM) represents one of the recent successful approaches in machine learning, particularly for pattern classification. One key strength of ELM is its significantly low computational cost for training new classifiers, since the hidden-node weights are chosen randomly and the output-node weights are determined analytically. In this paper, we address the architectural design of the ELM classifier network, since employing too few or too many hidden nodes leads to underfitting or overfitting in pattern classification. In particular, we describe the proposed pruned-ELM (P-ELM) algorithm as a systematic and automated approach for designing the ELM classifier network. P-ELM uses statistical methods to measure the relevance of hidden nodes. Beginning with an initially large number of hidden nodes, irrelevant nodes are pruned by considering their relevance to the class labels. As a result, the architectural design of the ELM network classifier can be automated. An empirical study of P-ELM on several commonly used classification benchmark problems, with diverse forms of hidden-node functions, shows that the proposed approach yields compact network classifiers with fast response and robust prediction accuracy on unseen data, compared with the traditional ELM and other popular machine learning approaches. (C) 2008 Elsevier B.V. All rights reserved.
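The abstract's two ingredients, random hidden-node weights with analytically determined output weights, followed by relevance-based pruning of hidden nodes, can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes a binary ±1 label vector, and it uses the absolute correlation between each hidden node's activation and the labels as a stand-in relevance score (the paper uses statistical relevance measures over the class labels). The helper names `train_elm`, `prune_elm`, and `predict` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, Y, n_hidden):
    """Basic ELM: random hidden-layer weights, output weights solved analytically."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                 # Moore-Penrose least-squares solution
    return W, b, beta

def prune_elm(X, Y, n_hidden=64, keep=16):
    """Start from a large hidden layer, keep only the nodes most relevant to
    the labels, then re-solve the output weights on the pruned network.
    Relevance here is |correlation| with the labels, a stand-in for the
    paper's statistical relevance tests."""
    W, b, _ = train_elm(X, Y, n_hidden)
    H = np.tanh(X @ W + b)
    scores = np.abs([np.corrcoef(H[:, j], Y)[0, 1] for j in range(n_hidden)])
    keep_idx = np.argsort(scores)[-keep:]        # indices of the most relevant nodes
    W, b = W[:, keep_idx], b[keep_idx]
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ Y                 # retrain output weights only
    return W, b, beta

def predict(X, W, b, beta):
    """Network output; take the sign for a binary class decision."""
    return np.tanh(X @ W + b) @ beta
```

Because only the output weights are recomputed after pruning (a single pseudoinverse), the whole design loop stays nearly as cheap as training the original ELM once.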
Pages: 359-366
Page count: 8