SUPERSAB - FAST ADAPTIVE BACK PROPAGATION WITH GOOD SCALING PROPERTIES

Cited by: 209
Authors
TOLLENAERE, T [1]
Affiliation
[1] UNIV EDINBURGH, EDINBURGH EH8 9YL, MIDLOTHIAN, SCOTLAND
Keywords
Accelerated learning; Adaptive learning; Error back propagation; Local algorithms; Parallel computers
DOI
10.1016/0893-6080(90)90006-7
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
This paper presents SuperSAB: an adaptive acceleration strategy for error back propagation learning. The strategy is compared with the original back propagation algorithm, as well as with previously proposed acceleration techniques. It is shown that SuperSAB may converge orders of magnitude faster than the original back propagation algorithm, while being only slightly unstable. In addition, the algorithm is largely insensitive to the choice of parameter values, and has excellent scaling properties. All simulations were carried out on the Edinburgh Concurrent Supercomputer, a large parallel computer system based on transputers. The power of this machine made it possible to test both SuperSAB and the original back propagation algorithm very extensively. As a result, this paper also presents some interesting phenomenological results on the influence of parameter values in the original back propagation algorithm. © 1990.
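The abstract only summarizes the method, so the following is a minimal sketch of the per-weight learning-rate rule that SuperSAB is generally described as using (building on Jacobs' learning-rate adaptation): each weight keeps its own step size, which grows by a factor while successive gradient components keep the same sign, and shrinks (with the last weight change undone) when the sign flips. The function name, the constants, and the quadratic toy objective are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def supersab_step(w, grad, prev_grad, lr, prev_dw,
                  eta_up=1.05, eta_down=0.5, momentum=0.9):
    """One SuperSAB-style update (sketch; constants are illustrative)."""
    same_sign = grad * prev_grad > 0
    flipped = grad * prev_grad < 0

    # Grow the per-weight rate while the gradient sign persists;
    # shrink it when the sign flips.
    lr = np.where(same_sign, lr * eta_up,
                  np.where(flipped, lr * eta_down, lr))

    # Normal step: gradient descent with momentum, per-weight rates.
    dw = -lr * grad + momentum * prev_dw
    # On a sign flip, undo the previous weight change instead.
    dw = np.where(flipped, -prev_dw, dw)

    # Zero the stored gradient on flipped weights so the rate is
    # not decreased a second time on the next step (assumption).
    grad_mem = np.where(flipped, 0.0, grad)
    return w + dw, lr, dw, grad_mem

# Toy usage on an ill-conditioned quadratic, where per-weight
# rates help (illustrative only).
A = np.diag([1.0, 25.0])
w = np.array([3.0, 3.0])
lr = np.full_like(w, 0.01)
prev_grad = np.zeros_like(w)
prev_dw = np.zeros_like(w)
for _ in range(200):
    grad = A @ w
    w, lr, dw, prev_grad = supersab_step(w, grad, prev_grad, lr, prev_dw)
    prev_dw = dw
print(w)  # should approach the minimum at the origin
```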
Pages: 561-573
Page count: 13