A LEARNING ALGORITHM FOR MULTILAYERED NEURAL NETWORKS BASED ON LINEAR LEAST-SQUARES PROBLEMS

Cited by: 59
Authors
Biegler-König, F. [1]
Bärmann, F. [1]
Affiliation
[1] Bayer AG, W-5090 Leverkusen, Germany
Keywords
neural networks; multilayer; learning algorithm; back propagation; least-squares problem; QR decomposition
DOI
10.1016/S0893-6080(05)80077-2
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
An algorithm for the training of multilayered neural networks based solely on linear algebraic methods is presented. Up to a certain limit of learning accuracy, its convergence speed is orders of magnitude better than that of classical back propagation. Furthermore, its learning aptitude increases with the number of internal nodes in the network (contrary to back propagation). In particular, if the network includes a hidden layer with more nodes than the number of examples to be learned, and if the number of nodes in succeeding layers decreases monotonically, the presented algorithm in general finds an exact solution.
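The core idea, replacing gradient descent with linear least-squares solves, can be illustrated with a minimal NumPy sketch. This is not the paper's exact layerwise algorithm: as a simplification, the hidden weights here are fixed at random and only the output layer is fitted by least squares (the paper fits each layer in turn via QR decomposition). The sketch does reproduce the abstract's claim that a hidden layer wider than the number of training examples admits an exact solution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 examples of 3 inputs, one target each.
X = rng.standard_normal((20, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))

# Hidden layer with 25 nodes -- more nodes than examples, as in the
# abstract's exact-solution condition. Weights are random here
# (a simplification; the paper trains them by least squares too).
W_hidden = rng.standard_normal((3, 25))
H = np.tanh(X @ W_hidden)            # hidden activations, shape (20, 25)

# Output weights from a linear least-squares problem; lstsq solves it
# by orthogonal factorization, analogous to the paper's QR approach.
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)

# With 20 examples and 25 hidden nodes, H has full row rank almost
# surely, so the fit is exact up to rounding error.
residual = np.max(np.abs(H @ W_out - Y))
print(residual)   # near machine precision: an exact solution
```

The underdetermined system has infinitely many solutions; `lstsq` returns the minimum-norm one, which is one reason wider hidden layers make the linear subproblems easier rather than harder, in contrast to back propagation.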
Pages: 127-131 (5 pages)
References (4 items)
[1] Bärmann, F.; Biegler-König, F. On a class of efficient learning algorithms for neural networks [J]. Neural Networks, 1992, 5(1): 139-144.
[2] Bunse, W. (1985). Numerische Lineare A
[3] McClelland, J. L. (1986). Parallel Distributed, Vols. 1-2
[4] Stoer, J. (1983). Einführung Numerische, Vol. 1