AN EFFICIENT CONSTRAINED LEARNING ALGORITHM WITH MOMENTUM ACCELERATION

Cited by: 53
Authors
PERANTONIS, SJ
KARRAS, DA
Affiliation
[1] Institute of Informatics and Telecommunications, National Research Center Demokritos
Keywords
FEEDFORWARD NEURAL NETWORKS; SUPERVISED LEARNING; MOMENTUM ACCELERATION; NONLINEAR PROGRAMMING; CONSTRAINTS; LAGRANGE MULTIPLIERS;
DOI
10.1016/0893-6080(94)00067-V
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
An algorithm for efficient learning in feedforward networks is presented. Momentum acceleration is achieved by solving a constrained optimization problem using nonlinear programming techniques. In particular, minimization of the usual mean square error cost function is attempted under an additional condition whose purpose is to optimize the alignment of the weight update vectors in successive epochs. The algorithm is applied to several benchmark training tasks (exclusive-or, encoder, multiplexer, and counter problems). Its performance, in terms of learning speed and scalability, is evaluated and found superior on these benchmarks to that of reputedly fast variants of the back-propagation algorithm.
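To make the idea concrete, the sketch below trains a small feedforward network on the exclusive-or benchmark mentioned in the abstract, using plain back-propagation with a momentum term. This is an illustrative assumption, not the paper's algorithm: the paper derives its momentum-like update by solving a constrained optimization problem with Lagrange multipliers, whereas here the momentum coefficient `MU` is simply a fixed hyperparameter that reuses the previous update direction.

```python
import numpy as np

# Hedged sketch: back-propagation with a heuristic momentum term on XOR.
# The paper instead obtains the momentum-like term from a constrained
# optimization (aligning successive weight update vectors); the fixed
# coefficients LR and MU below are assumptions for illustration only.

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A 2-2-1 network with sigmoid units
W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)

LR, MU = 0.5, 0.9                       # learning rate, momentum coefficient
params = [W1, b1, W2, b2]
vel = [np.zeros_like(p) for p in params]

losses = []
for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    err = y - T
    losses.append(float(np.mean(err ** 2)))
    # back-propagate the mean square error
    d2 = err * y * (1 - y)
    d1 = (d2 @ W2.T) * h * (1 - h)
    grads = [X.T @ d1, d1.sum(0), h.T @ d2, d2.sum(0)]
    for i, (p, g) in enumerate(zip(params, grads)):
        vel[i] = MU * vel[i] - LR * g   # blend in the previous update direction
        p += vel[i]                     # in-place weight update

print(f"initial MSE {losses[0]:.3f}, final MSE {losses[-1]:.4f}")
```

The momentum term keeps successive weight updates roughly aligned, which is the same geometric effect the paper enforces explicitly as a constraint; the constrained formulation adapts the blend at each epoch instead of fixing `MU` in advance.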
Pages: 237 - 249
Page count: 13
References
45 in total
[1]  
ABUMOSTAFA YS, 1986, AIP C P SNOWBIRD UT, V151, P1
[2]  
[Anonymous], 1990, ADV NEURAL INF PROCE
[3]  
BECKER S, 1988, SUM P CONN MOD SCH S, P29
[4]  
BEIGHTLER CS, 1979, F OPTIMIZATION
[5]  
Bryson A. P., 1962, J APPL MECH, V29, P247, DOI 10.1115/1.3640537
[6]  
FAHLMAN SE, 1988, 1988 P CONN MOD SUMM, P38
[7]  
FELDMAN JA, 1985, COGNITIVE SCI, V9, P1
[8]  
Fletcher R., 1980, PRACTICAL METHODS OP, V1
[9]   ON THE APPROXIMATE REALIZATION OF CONTINUOUS-MAPPINGS BY NEURAL NETWORKS [J].
FUNAHASHI, K.
NEURAL NETWORKS, 1989, 2 (03): 183-192
[10]  
GATOS B, 1993, P WORKSHOP NEURAL NE, P65