ON A CLASS OF EFFICIENT LEARNING ALGORITHMS FOR NEURAL NETWORKS

Cited by: 27
Authors
BÄRMANN, F
BIEGLER-KÖNIG, F
Institution
[1] Bayer AG, Bayerwerk, Germany
Keywords
FEEDFORWARD NETWORKS; CYCLIC BACK PROPAGATION; BATCH SUPERVISED LEARNING; SOLVABILITY CONDITION; GENERALIZED DELTA-RULE; HIDDEN LAYER; GRADIENT METHOD; ACCURATE LEARNING;
DOI
10.1016/S0893-6080(05)80012-7
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The ability of a neural network with one hidden layer to learn a given training set accurately increases with the number of nodes in the hidden layer; a network with exactly as many internal nodes as there are examples to be learnt is, in theory, able to learn these examples exactly. If, however, the generalized delta rule (back propagation) is used as the learning algorithm in numerical experiments, a network's learning aptitude generally declines as the number of internal nodes grows. Iterating the solvability condition for accurate learning, instead of minimizing the total error, yields learning algorithms whose learning aptitude increases with the number of internal nodes. At the same time, these methods allow further nodes to be added dynamically in a particularly simple manner. In a numerical implementation, whenever the solvability condition held, the algorithm learnt the training set to the limits of computer accuracy in all cases tested and, in particular, did not become trapped in local minima of the error function. Furthermore, its convergence speed is considerably higher than that of back propagation.
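
A short sketch makes the central observation concrete: when the hidden layer has exactly as many nodes as there are training examples, the output weights satisfy a square linear system (the solvability condition), which can be solved directly instead of being minimized by gradient descent on the total error. The NumPy code below is a minimal illustration of this exact-learning property only, not the authors' iterative algorithm; the network shape, random data, seed, and tanh activation are assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

N, d = 8, 3                       # 8 training examples, 3 input features (assumed)
X = rng.standard_normal((N, d))   # inputs, one example per row (invented data)
t = rng.standard_normal(N)        # scalar targets (invented data)

# Fixed random hidden layer with exactly N nodes, so the matrix of
# hidden activations H is square (N x N).
W_hidden = rng.standard_normal((d, N))
b_hidden = rng.standard_normal(N)
H = np.tanh(X @ W_hidden + b_hidden)

# Solvability condition: H @ w_out = t. For generic random hidden
# weights H is nonsingular, so the system has an exact solution and
# the network reproduces the learning set to machine precision.
w_out = np.linalg.solve(H, t)

print("max training error:", np.max(np.abs(H @ w_out - t)))

Running the sketch prints a maximal training error on the order of floating-point round-off, which is the sense in which a network with one internal node per example can learn the set "exactly".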
Pages: 139-144
Page count: 6