Universal approximation using incremental constructive feedforward networks with random hidden nodes

Cited by: 2029
Authors
Huang, Guang-Bin [1 ]
Chen, Lei [1 ]
Siew, Chee-Kheong [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2006 / Vol. 17 / Iss. 4
Keywords
ensemble; feedforward network; incremental extreme learning machine; radial basis function; random hidden nodes; support vector machine; threshold network; universal approximation;
DOI
10.1109/TNN.2006.875977
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjustable. However, as observed in most neural network implementations, tuning all the parameters of the networks may make learning complicated and inefficient, and it may be difficult to train networks with nondifferentiable activation functions such as threshold networks. Unlike conventional neural network theories, this paper proves by an incremental constructive method that, in order to let SLFNs work as universal approximators, one may simply randomly choose hidden nodes and then only needs to adjust the output weights linking the hidden layer and the output layer. In such SLFN implementations, the activation functions for additive nodes can be any bounded nonconstant piecewise continuous functions g: R -> R, and the activation functions for RBF nodes can be any integrable piecewise continuous functions g: R -> R with integral_R g(x)dx != 0. The proposed incremental method is efficient not only for SLFNs with continuous (including nondifferentiable) activation functions but also for SLFNs with piecewise continuous (such as threshold) activation functions. Compared to other popular methods, such a new network is fully automatic and users need not intervene in the learning process by manually tuning control parameters.
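The incremental construction the abstract describes — randomly generate one hidden node at a time and analytically fix only its output weight against the current residual — can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the tanh activation, the uniform sampling ranges, and the 1-D regression example are all assumptions chosen for brevity.

```python
import numpy as np

def i_elm(X, y, n_hidden=200, rng=None):
    """Incremental-ELM-style sketch: additive hidden nodes with random
    parameters are added one by one; each new node's output weight is the
    least-squares fit of the node's output to the current residual error."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    e = y.astype(float).copy()            # residual error, starts at the target
    nodes, betas = [], []
    for _ in range(n_hidden):
        w = rng.uniform(-1.0, 1.0, d)     # random input weights (assumed range)
        b = rng.uniform(-1.0, 1.0)        # random bias (assumed range)
        h = np.tanh(X @ w + b)            # hidden-node output on all samples
        beta = (e @ h) / (h @ h)          # output weight minimizing ||e - beta*h||
        e = e - beta * h                  # residual can only shrink at each step
        nodes.append((w, b))
        betas.append(beta)
    def predict(Xq):
        return sum(bt * np.tanh(Xq @ w + b) for (w, b), bt in zip(nodes, betas))
    return predict

# usage: fit a 1-D curve with randomly generated hidden nodes
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * X[:, 0])
f = i_elm(X, y, n_hidden=200, rng=0)
```

Because each output weight is a one-dimensional least-squares projection of the residual onto the new node's output, the training residual is nonincreasing in the number of nodes; no input weight or bias is ever tuned, which is the point of the result.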
Pages: 879-892
Page count: 14
References
53 in total
[1] [Anonymous], LIBSVM LIB SUPPORT V
[2] [Anonymous], IEEE T NEURAL NETWOR
[3] [Anonymous], 1981, ADV CALCULUS INTRO M
[4] [Anonymous], NC2TR1998030
[5] Barron, A.R., Universal approximation bounds for superpositions of a sigmoidal function, IEEE Transactions on Information Theory, 1993, 39(3): 930-945
[6] Baum, E.B., Journal of Complexity, 1988, 4: 193, DOI 10.1016/0885-064X(88)90020-9
[7] Blake, C.L., 1998, UCI repository of machine learning databases
[8] Chen, T.P., IEEE Transactions on Neural Networks, 1995, 6: 25
[9] Choi, C.H.; Choi, J.Y., Constructive neural networks with piecewise interpolation capabilities for function approximations, IEEE Transactions on Neural Networks, 1994, 5(6): 936-944
[10] Cybenko, G., Mathematics of Control, Signals, and Systems, 1989, 2: 303, DOI 10.1007/BF02551274