Comparing neural network approximations for different functional forms

Cited: 8
Authors
Morgan, P [1]
Curry, B [1]
Beynon, M [1]
Affiliations
[1] Univ Wales, Cardiff Business Sch, Cardiff CF1 3EU, S Glam, Wales
Keywords
multilayer perceptron; hidden layers; universal approximation; generalization; peak functions
DOI
10.1111/1468-0394.00096
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper examines the capacity of feedforward neural networks (NNs) to approximate certain functional forms. Its purpose is to show that the theoretical property of 'universal approximation', which provides the basic rationale behind the NN approach, should not be interpreted too literally. The most important issue considered involves the number of hidden layers in the network. We show that for a number of interesting functional forms, better generalization is possible with more than one hidden layer, despite theoretical results to the contrary. Our experiments constitute a useful set of counter-examples.
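The comparison the abstract describes — a one-hidden-layer network versus a two-hidden-layer network approximating a peaked target — can be illustrated with a minimal sketch. This is not the authors' experimental setup: the Gaussian peak target, network sizes, learning rate, and epoch count below are all illustrative assumptions, and the tiny pure-Python MLP stands in for whatever software the paper used.

```python
import math
import random

def init_net(sizes, seed=0):
    """Fully connected net; sizes like [1, 10, 1]. Small random weights, zero biases."""
    rng = random.Random(seed)
    return [([[rng.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)],
             [0.0] * n)
            for m, n in zip(sizes, sizes[1:])]

def forward(net, x):
    """Return the list of layer activations; tanh hidden units, linear output."""
    acts = [[x]]
    for i, (W, b) in enumerate(net):
        pre = [sum(w * a for w, a in zip(row, acts[-1])) + bi
               for row, bi in zip(W, b)]
        acts.append(pre if i == len(net) - 1 else [math.tanh(p) for p in pre])
    return acts

def train(net, data, lr=0.05, epochs=2000):
    """Full-batch gradient descent on mean squared error via backpropagation."""
    for _ in range(epochs):
        grads = [([[0.0] * len(row) for row in W], [0.0] * len(b)) for W, b in net]
        for x, y in data:
            acts = forward(net, x)
            delta = [acts[-1][0] - y]          # error signal at the linear output
            for li in range(len(net) - 1, -1, -1):
                gW, gb = grads[li]
                for j, d in enumerate(delta):
                    gb[j] += d
                    for k in range(len(acts[li])):
                        gW[j][k] += d * acts[li][k]
                if li > 0:                      # push delta through tanh' = 1 - a^2
                    W_cur = net[li][0]
                    delta = [(1.0 - acts[li][k] ** 2) *
                             sum(W_cur[j][k] * delta[j] for j in range(len(delta)))
                             for k in range(len(acts[li]))]
        n = len(data)
        for (W, b), (gW, gb) in zip(net, grads):
            for j in range(len(W)):
                b[j] -= lr * gb[j] / n
                for k in range(len(W[j])):
                    W[j][k] -= lr * gW[j][k] / n
    return net

def mse(net, data):
    return sum((forward(net, x)[-1][0] - y) ** 2 for x, y in data) / len(data)

# An illustrative 'peak' target (a Gaussian bump), sampled on [-1, 1].
peak = lambda x: math.exp(-25.0 * x * x)
train_set = [(k / 20.0, peak(k / 20.0)) for k in range(-20, 21)]

shallow = train(init_net([1, 10, 1]), train_set)          # one hidden layer
deep = train(init_net([1, 6, 6, 1], seed=1), train_set)   # two hidden layers
print(round(mse(shallow, train_set), 4), round(mse(deep, train_set), 4))
```

Comparing the two printed training errors (and, in a fuller experiment, errors on a held-out test grid) is the shape of the comparison the paper reports; which architecture wins on a given peak function is exactly what their experiments probe.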
Pages: 60-71 (12 pages)
References (11 items)
[1] CARLEY AF, 1989, COMPUTATIONAL METHOD
[2] Cichocki A, 1992, Neural Networks for Optimization and Signal Processing
[3] Ciuca I, 1997, Lect Notes Comput Sci, V1226, P411
[4] CURRY B, 1996, 3 INT C NEUR METH MA
[5] Cybenko G, 1989, Mathematics of Control, Signals, and Systems, V2, P303, DOI 10.1007/BF02551274
[6] Hornik K, Stinchcombe M, White H, 1989, Multilayer feedforward networks are universal approximators, Neural Networks, V2(5), P359-366
[8] Kurkova V, 1992, Kolmogorov theorem and multilayer neural networks, Neural Networks, V5(3), P501-506
[9] Moon YJ, Oh SY, 1996, A constructive design method for two-layer perceptrons and its application to the design of modular neural networks, Expert Systems, V13(3), P183-194
[10] MULIER F, 1995, NEURAL COMPUT, V8, P164