Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning

Cited by: 616
Authors
Feng, Guorui [1 ]
Huang, Guang-Bin [2 ]
Lin, Qingping [2 ]
Gay, Robert [2 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200072, Peoples R China
[2] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2009, Vol. 20, No. 8
Keywords
Echo state network (ESN); extreme learning machine (ELM); feedforward neural networks (FNNs); growing algorithm; incremental learning; minimizing error; sequential learning; FEEDFORWARD NETWORKS; MULTILAYER PERCEPTRONS; FUNCTION APPROXIMATION; NEURAL-NETWORKS; CAPABILITY
DOI
10.1109/TNN.2009.2024147
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification code
140502 [Artificial Intelligence]
Abstract
One of the open problems in neural network research is how to determine network architectures automatically for given applications. In this brief, we propose a simple and efficient approach that automatically determines the number of hidden nodes in generalized single-hidden-layer feedforward networks (SLFNs), whose hidden nodes need not be neuron-like. This approach, referred to as the error-minimized extreme learning machine (EM-ELM), can add random hidden nodes to SLFNs one by one or group by group (with varying group size). As the network grows, the output weights are updated incrementally. The convergence of this approach is also proved in this brief. Simulation results verify that the new approach is much faster than other sequential/incremental/growing algorithms while achieving good generalization performance.
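The growth procedure described in the abstract can be illustrated with a minimal sketch: hidden nodes with random input weights are added in groups, the output weights are refit by least squares after each growth step, and growth stops once the training error falls below a target. Note this sketch recomputes the Moore-Penrose pseudoinverse at every step for clarity, whereas EM-ELM's contribution is an incremental update of the output weights; the function name, the sigmoid activation, and all parameter values here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def grow_elm_sketch(X, T, max_hidden, epsilon, group_size=1, seed=None):
    """Grow random hidden nodes until the training RMSE drops below epsilon.

    Minimal sketch of the EM-ELM idea: hidden nodes are added group by
    group and output weights are refit by least squares. For clarity the
    pseudoinverse is recomputed each step; the paper updates it
    incrementally instead.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.empty((d, 0))          # input weights, one column per hidden node
    b = np.empty((0,))            # hidden-node biases
    beta = np.empty((0, T.shape[1]))
    while W.shape[1] < max_hidden:
        # add a group of random hidden nodes
        W = np.hstack([W, rng.uniform(-1, 1, (d, group_size))])
        b = np.concatenate([b, rng.uniform(-1, 1, group_size)])
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden-layer output
        beta = np.linalg.pinv(H) @ T              # least-squares output weights
        rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
        if rmse <= epsilon:
            break
    return W, b, beta

# toy usage: approximate sin(x) on [0, 2*pi]
X = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
T = np.sin(X)
W, b, beta = grow_elm_sketch(X, T, max_hidden=50, epsilon=0.01,
                             group_size=5, seed=0)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
print(W.shape[1], float(np.sqrt(np.mean((H @ beta - T) ** 2))))
```

Because the new hidden-node parameters are drawn at random and never retrained, each growth step only requires solving a linear least-squares problem, which is what makes this family of algorithms fast compared with gradient-based growing networks.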
Pages: 1352-1357
Number of pages: 6