Learning State Space Trajectories in Recurrent Neural Networks

Cited by: 386
Authors
Pearlmutter, Barak A. [1 ]
Affiliations
[1] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
Funding
U.S. National Science Foundation
Keywords
DOI
10.1162/neco.1989.1.2.263
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 [Pattern Recognition and Intelligent Systems]; 0812 [Computer Science and Technology]; 0835 [Software Engineering]; 1405 [Intelligent Science and Technology]
Abstract
Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding $\partial E / \partial w_{ij}$, where $E$ is an error functional of the temporal trajectory of the states of a continuous recurrent network and $w_{ij}$ are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize $E$. Simulations in which networks are taught to move through limit cycles are shown. This type of recurrent network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech.
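As a rough illustration of the procedure the abstract describes, here is a minimal NumPy sketch. The continuous dynamics $dy/dt = -y + \sigma(Wy)$ are discretized with forward Euler, the error functional $E = \int \frac{1}{2}\|y_{\text{out}} - d\|^2\, dt$ is approximated as a sum over time steps, and $\partial E / \partial w_{ij}$ is obtained by sweeping an adjoint variable backward through the Euler steps, a discrete-time stand-in for the paper's backward-integrated adjoint equations. The network size, unit time constants (set to 1), learning rate, and circular target trajectory are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_forward(W, y0, h, T):
    """Euler-integrate dy/dt = -y + sigmoid(W @ y); return states y_0..y_T."""
    ys = np.zeros((T + 1, y0.size))
    ys[0] = y0
    for t in range(T):
        ys[t + 1] = ys[t] + h * (-ys[t] + sigmoid(W @ ys[t]))
    return ys

def loss_and_grad(W, y0, target, h):
    """E = (h/2) * sum_t ||y_out(t) - d(t)||^2 on the output units, with
    dE/dW computed by backpropagating an adjoint through the Euler steps."""
    T, n_out = target.shape
    ys = run_forward(W, y0, h, T)
    err = ys[1:, :n_out] - target                   # output error at each step
    E = 0.5 * h * np.sum(err ** 2)

    grad_W = np.zeros_like(W)
    lam = np.zeros(y0.size)                         # adjoint, swept backward in time
    for t in range(T - 1, -1, -1):
        lam[:n_out] += h * err[t]                   # inject instantaneous error at y_{t+1}
        s = sigmoid(W @ ys[t])
        g = s * (1.0 - s)                           # sigmoid derivative
        grad_W += h * np.outer(lam * g, ys[t])      # this step's contribution to dE/dW
        lam = (1.0 - h) * lam + h * (W.T @ (lam * g))  # propagate adjoint to y_t
    return E, grad_W

# Teach the first two of four units to trace a circular limit cycle.
rng = np.random.default_rng(0)
n, n_out, T, h = 4, 2, 200, 0.05
ts = np.arange(1, T + 1) * h
target = 0.5 + 0.3 * np.column_stack([np.cos(ts), np.sin(ts)])
W = 0.1 * rng.standard_normal((n, n))
y0 = np.full(n, 0.5)
for step in range(3000):
    E, gW = loss_and_grad(W, y0, target, h)
    W -= 0.5 * gW                                   # plain gradient descent on E
    if step % 500 == 0:
        print(f"step {step:4d}  E = {E:.4f}")
```

Note that the backward sweep costs about the same as one forward pass regardless of the number of weights, which is the practical appeal of trajectory-space gradient methods of this kind; whether a self-sustaining cycle emerges in this toy setup depends on the initialization and rates chosen above.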
Pages: 263-269
Page count: 7
References (8 items)
[1] Bryson, A. E. 1962. J. Appl. Mech. 29:247. DOI: 10.1115/1.3640537.
[2] Furst, M. 1988. Communication.
[3] Jordan, M. I. 1986. Proc. Conf. Cognitive Science Society, p. 531.
[4] Pearlmutter, B. 1988. Tech. Rep. CMU-CS-88-191, School of Computer Science, Carnegie Mellon University.
[5] Pineda, F. J. 1987. Generalization of back-propagation to recurrent neural networks. Physical Review Letters 59(19):2229-2232.
[6] Rumelhart, D. 1986. Foundations, Vol. 1, p. 318.
[7] Werbos, P. J. 1988. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks 1(4):339-356.
[8] Williams, R. J., and Zipser, D. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation 1(2):270-280.