Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions

Cited by: 394
Authors
Huang, GB [1]
Babri, HA [1]
Affiliation
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1998, Vol. 9, No. 1
Keywords
activation functions; feedforward networks; hidden neurons; upper bounds
DOI
10.1109/72.655045
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
It is well known that standard single-hidden-layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and that the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case when the activation function for the hidden neurons is the signum function. This paper rigorously proves that standard SLFNs with at most N hidden neurons and with any bounded nonlinear activation function which has a limit at one infinity can learn N distinct samples (x_i, t_i) with zero error. The previous method of arbitrarily choosing weights is not feasible for any such SLFN. The proof of our result is constructive and thus gives a method to directly find the weights of the standard SLFNs with any such bounded nonlinear activation function, as opposed to the iterative training algorithms in the literature.
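The core idea behind such exact-fitting results can be stated linearly: with N hidden neurons, the N x N hidden-layer output matrix H, with entries H_ij = g(w_j · x_i + b_j), can be made invertible, and the output weights beta then solve H beta = T. The sketch below is an illustration of this principle only, not the authors' published construction: it draws the hidden weights at random with a tanh activation (bounded, nonlinear, with limits at infinity), under the assumption that this makes H invertible with probability one, and solves the linear system to fit N distinct samples exactly.

```python
# Minimal sketch: fit N distinct samples with zero error using an SLFN
# with N hidden neurons, by solving H @ beta = T for the output weights.
# Random hidden weights are an illustrative choice, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

N, d = 10, 3                     # N distinct samples of input dimension d
X = rng.normal(size=(N, d))      # inputs x_i
T = rng.normal(size=(N, 1))      # targets t_i

W = rng.normal(size=(d, N))      # input-to-hidden weights, one column per neuron
b = rng.normal(size=(1, N))      # hidden biases

H = np.tanh(X @ W + b)           # N x N hidden-layer output matrix
beta = np.linalg.solve(H, T)     # output weights giving zero training error

print(np.max(np.abs(H @ beta - T)))   # on the order of 1e-12: exact fit
```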
Pages: 224-229
Number of pages: 6