Extreme learning machine: Theory and applications

Cited by: 9723
Authors
Huang, Guang-Bin [1 ]
Zhu, Qin-Yu [1 ]
Siew, Chee-Kheong [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Keywords
feedforward neural networks; back-propagation algorithm; extreme learning machine; support vector machine; real-time learning; random node
DOI
10.1016/j.neucom.2005.12.126
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications for the past decades. Two key reasons behind this may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. Experimental results on a few artificial and real benchmark function-approximation and classification problems, including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks. (c) 2006 Elsevier B.V. All rights reserved.
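The record contains no code, but the abstract describes the algorithm precisely enough to sketch it. Below is a minimal NumPy illustration of ELM training under the usual formulation: the hidden-layer input weights and biases are drawn at random and never tuned, and the output weights are obtained analytically via the Moore-Penrose pseudoinverse of the hidden-layer output matrix. The function names (elm_fit, elm_predict), the sigmoid activation, and the Gaussian initialization are illustrative assumptions, not details taken from this record.

    import numpy as np

    def elm_fit(X, T, n_hidden, seed=None):
        # X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        # Randomly assign hidden-node input weights and biases (never tuned).
        W = rng.standard_normal((n_features, n_hidden))
        b = rng.standard_normal(n_hidden)
        # Hidden-layer output matrix H (sigmoid activation assumed here).
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        # Analytically determine output weights via the Moore-Penrose pseudoinverse.
        beta = np.linalg.pinv(H) @ T
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

Training thus reduces to one random initialization, one matrix product, and one pseudoinverse, with no iterative tuning of any parameter, which is what makes the learning speed so fast compared with gradient-based training.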
Pages: 489-501
Number of pages: 13