A fast and accurate online sequential learning algorithm for feedforward networks

Cited by: 1628
Authors
Liang, Nan-Ying [1 ]
Huang, Guang-Bin [1 ]
Saratchandran, P. [1 ]
Sundararajan, N. [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2006, Vol. 17, No. 6
Keywords
extreme learning machine (ELM); growing and pruning RBF network (GAP-RBF); GGAP-RBF; minimal resource allocation network (MRAN); online sequential ELM (OS-ELM); resource allocation network (RAN); resource allocation network via extended Kalman filter (RANEKF); stochastic gradient descent back-propagation (SGBP)
DOI
10.1109/TNN.2006.880583
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm builds on the ideas of the batch-learning ELM of Huang et al., which has been shown to be extremely fast with better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from regression, classification, and time series prediction. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
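The abstract's recipe (random hidden-node parameters, output weights solved analytically, then updated recursively as chunks arrive) can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: it assumes sigmoid additive hidden nodes, and the small ridge term added for numerical stability is our own tweak, not part of the algorithm as described.

```python
import numpy as np

rng = np.random.default_rng(0)

class OSELM:
    """Minimal OS-ELM sketch with sigmoid additive hidden nodes."""

    def __init__(self, n_inputs, n_hidden, n_outputs):
        # Hidden-node parameters are drawn randomly once and never updated.
        self.W = rng.uniform(-1, 1, (n_inputs, n_hidden))  # input weights
        self.b = rng.uniform(-1, 1, n_hidden)              # biases
        self.beta = np.zeros((n_hidden, n_outputs))        # output weights
        self.P = None                                      # running (H^T H)^{-1}

    def _hidden(self, X):
        # Sigmoid activation of the random hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def init_fit(self, X, T):
        # Initialization phase: batch least-squares on an initial chunk
        # (chunk should cover the input domain and have >= n_hidden samples).
        # The tiny ridge term is a numerical-stability tweak of ours.
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, T):
        # Sequential phase: recursive least-squares update per arriving chunk;
        # chunk size may vary, down to one sample at a time.
        H = self._hidden(X)
        K = np.eye(H.shape[0]) + H @ self.P @ H.T
        self.P = self.P - self.P @ H.T @ np.linalg.solve(K, H @ self.P)
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

With exact arithmetic, the sequential updates reproduce the batch least-squares solution on all data seen so far, which is why only the number of hidden nodes needs to be chosen by hand.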
Pages: 1411-1423 (13 pages)