A successive overrelaxation backpropagation algorithm for neural-network training

Cited by: 9
Authors
De Leone, R [1 ]
Capparuccia, R
Merelli, E
Affiliations
[1] Univ Camerino, Dipartimento Matemat & Fis, I-62032 Camerino, Italy
[2] Univ Camerino, Comp Sch, I-62032 Camerino, Italy
[3] Univ Ancona, Inst Informat, I-60128 Ancona, Italy
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1998, Vol. 9, No. 3
Keywords
backpropagation; successive overrelaxation;
DOI
10.1109/72.668881
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification
081104 [Pattern Recognition and Intelligent Systems]; 0812 [Computer Science and Technology]; 0835 [Software Engineering]; 1405 [Intelligent Science and Technology];
Abstract
A variation of the classical backpropagation algorithm for neural-network training is proposed, and its convergence is established using the perturbation results of Mangasarian and Solodov. The algorithm resembles the successive overrelaxation (SOR) algorithm for systems of linear equations and linear complementarity problems in that it uses the most recently computed values of the weights when updating the weights on the remaining arcs.
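The SOR analogy can be illustrated in its original setting, a linear system: each component is updated in place, so later updates immediately use the freshly computed earlier values. This is a minimal sketch of classical SOR, not the paper's training algorithm; the relaxation factor and the example system are illustrative assumptions.

```python
import numpy as np

def sor_solve(A, b, omega=1.25, tol=1e-10, max_iter=1000):
    """Solve A x = b by successive overrelaxation (SOR).

    x[i] is overwritten as soon as it is computed, so components
    i+1, i+2, ... use the fresh value -- the same "most recently
    computed values" idea the paper carries over to weight updates.
    """
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # Sum over off-diagonal terms, mixing updated (x[:i])
            # and not-yet-updated (x[i+1:]) components.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            # Relaxed Gauss-Seidel step; omega = 1 recovers Gauss-Seidel.
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Small diagonally dominant system, for which SOR converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])
x = sor_solve(A, b)
```

With `omega = 1.25` the iteration converges quickly here; the admissible range for convergence on symmetric positive-definite systems is `0 < omega < 2`.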
Pages: 381-388
Number of pages: 8
References
24 in total
[1] Anderson J. A., 1988, Neurocomputing: Foundations of Research
[2] [Anonymous], 1990, Report No
[3] [Anonymous], 1943, Bulletin of Mathematical Biophysics
[4] [Anonymous], 1990, SIAM News
[5] Denker J., 1987, Complex Systems, V1, P877
[6] Fahlman S. E., 1989, Advances in Neural Information Processing Systems, V2, P524
[7] Fukushima K., Miyake S., Ito T. Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, 1983, 13(5): 826-834.
[8] Grippo L., 1984, Optimization Methods, V4, P135
[9] Herz A. V., Sulzer B., Kuhn R., van Hemmen J. L. Hebbian learning reconsidered: representation of static and dynamic objects in associative neural nets. Biological Cybernetics, 1989, 60(6): 457-467.
[10] Jacobs R. A. Increased rates of convergence through learning rate adaptation. Neural Networks, 1988, 1(4): 295-307.