Learning capability and storage capacity of two-hidden-layer feedforward networks

Cited by: 622
Author
Huang, GB [1]
Affiliation
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2003 / Vol. 14 / Issue 02
Keywords
learning capability; neural-network modularity; storage capacity; two-hidden-layer feedforward networks (TLFNs);
DOI
10.1109/TNN.2003.809401
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The problem of the necessary complexity of neural networks is of interest in applications. In this paper, the learning capability and storage capacity of feedforward neural networks are considered. We markedly improve recent results by logically introducing neural-network modularity. This paper rigorously proves, by a constructive method, that two-hidden-layer feedforward networks (TLFNs) with $2\sqrt{(m+2)N}$ ($\ll N$) hidden neurons can learn any $N$ distinct samples $(x_i, t_i)$ with any arbitrarily small error, where $m$ is the required number of output neurons. This implies that the number of hidden neurons required in feedforward networks can be decreased significantly compared with previous results. Conversely, a TLFN with $Q$ hidden neurons can store at least $Q^2/(4(m+2))$ arbitrary distinct data pairs $(x_i, t_i)$ with any desired precision.
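As a quick numerical illustration of the two bounds in the abstract (a minimal Python sketch with assumed example values of N and m; it is not code from the paper):

import math

def tlfn_hidden_neurons(n_samples, n_outputs):
    # Hidden neurons sufficient for a TLFN to learn n_samples distinct
    # samples, per the abstract's bound 2*sqrt((m+2)*N).
    return math.ceil(2 * math.sqrt((n_outputs + 2) * n_samples))

def tlfn_storage_lower_bound(q_hidden, n_outputs):
    # Lower bound on the number of distinct samples a TLFN with
    # q_hidden hidden neurons can store: Q^2 / (4*(m+2)).
    return q_hidden ** 2 // (4 * (n_outputs + 2))

N, m = 10000, 1                           # example values (assumed)
Q = tlfn_hidden_neurons(N, m)             # 347, far fewer than N = 10000
print(Q, tlfn_storage_lower_bound(Q, m))  # 347 10034 (>= N, consistent)

For N = 10000 samples and m = 1 output neuron, about 347 hidden neurons suffice, and plugging Q = 347 back into the storage bound recovers at least 10034 samples, consistent with the two results.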
Pages: 274-281
Number of pages: 8