Convergent on-line algorithms for supervised learning in neural networks

Cited by: 27
Authors
Grippo, L [1 ]
Affiliation
[1] Univ Roma La Sapienza, Dipartimento Informat & Sistemist, Rome, Italy
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2000 / Vol. 11 / No. 6
Keywords
neural networks; on-line algorithms; supervised learning; training algorithms; unconstrained optimization;
DOI
10.1109/72.883426
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we define on-line algorithms for neural-network training, based on the construction of multiple copies of the network, which are trained by employing different data blocks. It is shown that suitable training algorithms can be defined in such a way that the disagreement between the different copies of the network is asymptotically reduced and convergence toward stationary points of the global error function can be guaranteed. Relevant features of the proposed approach are that the learning rate need not necessarily be forced to zero and that real-time learning is permitted.
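The multi-copy idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the model (a one-parameter linear fit), the constant learning rate, the averaging "consensus" step, and all names (`train_multi_copy`, `lr`, `mix`) are illustrative assumptions. The sketch only shows the general mechanism the abstract describes: one parameter copy per data block, incremental gradient steps on each block, and a step that pulls the copies together so their disagreement shrinks over time even though the step size is never driven to zero.

```python
def train_multi_copy(data_blocks, epochs=200, lr=0.05, mix=0.5):
    """Block-incremental gradient sketch for fitting y = w * x.

    Keeps one parameter copy per data block; after every pass, each
    copy is pulled toward the mean of all copies, which reduces their
    disagreement while the learning rate stays constant.
    """
    w = [0.0 for _ in data_blocks]  # one parameter copy per block
    for _ in range(epochs):
        # each copy takes incremental gradient steps on its own block
        for i, block in enumerate(data_blocks):
            for x, y in block:
                grad = 2.0 * (w[i] * x - y) * x  # d/dw of (w*x - y)^2
                w[i] -= lr * grad
        # consensus step: move every copy toward the current mean
        mean = sum(w) / len(w)
        w = [wi + mix * (mean - wi) for wi in w]
    return w
```

With data generated from y = 3x split into two blocks, e.g. `train_multi_copy([[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]])`, both copies approach w = 3 and their disagreement vanishes, loosely mirroring the convergence behavior the abstract claims for the full method.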
Pages: 1284-1299
Page count: 16