LEARNING IN THE RECURRENT RANDOM NEURAL NETWORK

Cited by: 263
Author
Gelenbe, E.
DOI
10.1162/neco.1993.5.1.154
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The capacity to learn from examples is one of the most desirable features of neural network models. We present a learning algorithm for the recurrent random network model (Gelenbe 1989, 1990) using gradient descent of a quadratic error function. The analytical properties of the model lead to a "backpropagation"-type algorithm that requires the solution of a system of n linear and n nonlinear equations each time the n-neuron network "learns" a new input-output pair.
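The abstract describes the procedure only at a high level, so the sketch below (plain Python/NumPy, not taken from the paper) illustrates the two computations it mentions: a fixed-point solution of the n nonlinear signal-flow equations of the random neural network, and a gradient-descent step on the quadratic error for a single input-output pair. All names here (steady_state, learn_pair, W_plus, W_minus, Lam, lam) are illustrative assumptions, and a finite-difference gradient stands in for the paper's analytic solution of the accompanying n linear equations.

import numpy as np

def steady_state(W_plus, W_minus, Lam, lam, iters=500, tol=1e-10):
    # Fixed-point iteration for the n nonlinear equations
    #   q_i = lambda+_i / (r_i + lambda-_i), with
    #   lambda+_i = Lam_i + sum_j q_j * W_plus[j, i]   (excitation arriving at i)
    #   lambda-_i = lam_i + sum_j q_j * W_minus[j, i]  (inhibition arriving at i)
    #   r_i       = sum_j (W_plus[i, j] + W_minus[i, j])  (total firing rate of i)
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)
    q = np.zeros_like(Lam, dtype=float)
    for _ in range(iters):
        q_new = np.minimum((Lam + q @ W_plus) / (r + lam + q @ W_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

def pair_error(q, out_idx, y):
    # Quadratic error 0.5 * sum_i (q_i - y_i)^2 over the output neurons.
    return 0.5 * float(np.sum((q[out_idx] - y) ** 2))

def learn_pair(W_plus, W_minus, Lam, lam, out_idx, y, eta=0.5, eps=1e-6):
    # One gradient-descent step for a single input-output pair.  The paper
    # obtains dq/dw by solving an n x n linear system; a finite-difference
    # estimate stands in for that analytic step in this sketch.
    e0 = pair_error(steady_state(W_plus, W_minus, Lam, lam), out_idx, y)
    grads = []
    for W in (W_plus, W_minus):
        G = np.zeros_like(W)
        for u in range(W.shape[0]):
            for v in range(W.shape[1]):
                old = W[u, v]
                W[u, v] = old + eps
                q = steady_state(W_plus, W_minus, Lam, lam)
                G[u, v] = (pair_error(q, out_idx, y) - e0) / eps
                W[u, v] = old
        grads.append(G)
    for W, G in zip((W_plus, W_minus), grads):
        np.maximum(W - eta * G, 0.0, out=W)  # signal rates stay nonnegative
    return e0

rng = np.random.default_rng(0)
n = 4
W_plus = rng.uniform(0.1, 0.5, (n, n))  # excitatory rates, neuron i -> j
W_minus = rng.uniform(0.1, 0.5, (n, n)) # inhibitory rates, neuron i -> j
Lam = np.array([1.0, 1.0, 0.0, 0.0])    # exogenous excitation encodes the input
lam = np.full(n, 0.1)                   # exogenous inhibition
out_idx = np.array([2, 3])              # neurons read out as the output
y = np.array([0.2, 0.8])                # desired steady-state probabilities
for _ in range(100):
    learn_pair(W_plus, W_minus, Lam, lam, out_idx, y)
print(steady_state(W_plus, W_minus, Lam, lam)[out_idx])  # should approach y

Here q approximates each neuron's steady-state excitation probability, and driving q at the output neurons toward the targets y is the learning goal the abstract describes; the per-pair structure of learn_pair mirrors the abstract's statement that the equations are re-solved each time the network learns a new input-output pair.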
Pages: 154-164
Page count: 11
References
18 entries in total
[11] Kemeny, J. G., 1965, Finite Markov Chains.
[12] Le Cun, Y., 1985, Proceedings of Cognitiva 85, Vol. 85, p. 599.
[13] McClelland, J. L., 1986, Parallel Distributed Processing.
[14] Mokhtari, M., 1992, in press, Int. J. Artif.
[15] Pearlmutter, B. A., 1989, "Learning State Space Trajectories in Recurrent Neural Networks," Neural Computation, 1(2):263-269.
[16] Pineda, F. J., 1987, Neural Information Processing Systems, p. 602.
[17] Pineda, F. J., 1989, "Recurrent Backpropagation and the Dynamical Approach to Adaptive Neural Computation," Neural Computation, 1(2):161-172.
[18] Rumelhart, D. E., 1986, Parallel Distributed Processing, DOI 10.7551/mitpress/5236.001.0001.