A Subgrouping Strategy that Reduces Complexity and Speeds Up Learning in Recurrent Networks

Cited by: 21
Authors
Zipser, David [1]
Affiliations
[1] Univ Calif San Diego, Dept Cognit Sci, La Jolla, CA 92093 USA
DOI
10.1162/neco.1989.1.4.552
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
An algorithm, called RTRL, for training fully recurrent neural networks has recently been studied by Williams and Zipser (1989a, b). While RTRL has been shown to have great power and generality, it has the disadvantage of requiring a great deal of computation time. A technique is described here for reducing the amount of computation required by RTRL without changing the connectivity of the networks. This is accomplished by dividing the original network into subnets for the purpose of error propagation while leaving it undivided for activity propagation. An example is given of a 12-unit network that learns to be the finite-state part of a Turing machine and runs 10 times faster with the subgrouping strategy than with the original algorithm.
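To make the subgrouping idea concrete, the following is a minimal NumPy sketch of RTRL with a Zipser-style subgrouping of units: the forward pass uses the full recurrent weight matrix, while each unit's sensitivities ∂y_k/∂w_ij are maintained only for weights of units in its own subgroup, with the other subgroups' activities treated as external inputs for gradient purposes. All names, shapes, and the choice of a logistic nonlinearity are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class SubgroupedRTRL:
    """Sketch of RTRL where gradient information is propagated only within
    subgroups of units, while activity propagates through the full network."""

    def __init__(self, n_units, n_inputs, n_groups, lr=0.1, seed=0):
        assert n_units % n_groups == 0, "units must divide evenly into subgroups"
        rng = np.random.default_rng(seed)
        self.n, self.g = n_units, n_groups
        self.s = n_units // n_groups                  # units per subgroup
        self.cols = n_units + n_inputs + 1            # recurrent + external + bias inputs
        self.W = rng.normal(0.0, 0.5, (self.n, self.cols))
        self.y = np.zeros(self.n)                     # current unit activities
        # One sensitivity tensor per subgroup: p[k, i, j] = dy_k/dw_ij,
        # stored only for k and i inside the same subgroup.
        self.p = [np.zeros((self.s, self.s, self.cols)) for _ in range(self.g)]
        self.lr = lr

    def step(self, x, targets=None, mask=None):
        """Run one time step; if targets are given, update W with subgrouped RTRL."""
        z = np.concatenate([self.y, np.asarray(x, dtype=float), [1.0]])  # z(t)
        net = self.W @ z                              # full connectivity for activity
        y_new = 1.0 / (1.0 + np.exp(-net))            # logistic units
        fprime = y_new * (1.0 - y_new)

        if targets is not None:
            if mask is None:
                mask = np.ones(self.n, dtype=bool)
            e = np.where(mask, np.asarray(targets, dtype=float) - self.y, 0.0)
            dW = np.zeros_like(self.W)
            for gi in range(self.g):
                rows = slice(gi * self.s, (gi + 1) * self.s)
                Wg = self.W[rows, rows]               # within-group recurrent weights
                pg = self.p[gi]
                # Weight change uses only this subgroup's errors and sensitivities:
                # dw_ij = lr * sum_k e_k * p_ij^k
                dW[rows] += np.einsum('k,kij->ij', e[rows], pg)
                # Sensitivity update restricted to the subgroup:
                # p_ij^k <- f'(net_k) [ sum_{l in group} W_kl p_ij^l + delta_ik z_j ]
                new_p = np.einsum('kl,lij->kij', Wg, pg)
                new_p[np.arange(self.s), np.arange(self.s), :] += z
                new_p *= fprime[rows, None, None]
                self.p[gi] = new_p
            self.W += self.lr * dW

        self.y = y_new
        return self.y
```

As a usage illustration, a 12-unit network like the one in the abstract could be created with, e.g., SubgroupedRTRL(n_units=12, n_inputs=2, n_groups=4) and driven one input vector per call to step(); the input size and subgroup count here are arbitrary, not values from the paper. With g subgroups of n/g units each, the dominant sensitivity-update cost drops from roughly O(n^4) per time step for full RTRL to roughly O(n^4/g^2), which is consistent in order of magnitude with the tenfold speedup reported above.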
Pages: 552-558
Page count: 7
References (4 items)
[1] Servan-Schreiber, D., 1988, Tech. Rep. CMU-CS-88-183.
[2] Smith, A. W., 1989, IJCNN: International Joint Conference on Neural Networks, p. 645. DOI 10.1109/IJCNN.1989.118646
[3] Williams, R. J., 1989, Connection Science, Vol. 1, p. 87. DOI 10.1080/09540098908915631
[4] Williams, R. J., 1989, Neural Computation, Vol. 1, p. 268.