LEARNING IN THE RECURRENT RANDOM NEURAL NETWORK

Cited by: 262
Author
GELENBE, E
Institution
Keywords
DOI
10.1162/neco.1993.5.1.154
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The capacity to learn from examples is one of the most desirable features of neural network models. We present a learning algorithm for the recurrent random network model (Gelenbe 1989, 1990) using gradient descent of a quadratic error function. The analytical properties of the model lead to a "backpropagation"-type algorithm that requires the solution of a system of n linear and n nonlinear equations each time the n-neuron network learns a new input-output pair.
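The n nonlinear equations referred to in the abstract are the steady-state equations of the random neural network: each neuron's activation probability satisfies q_i = λ⁺(i) / (r(i) + λ⁻(i)), where λ⁺(i) and λ⁻(i) are the total excitatory and inhibitory arrival rates, themselves functions of the other q_j. A minimal sketch of solving this system by fixed-point iteration is given below; the function and parameter names, and the choice of plain fixed-point iteration as the solver, are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, tol=1e-10, max_iter=1000):
    """Illustrative fixed-point solver for the steady-state activation
    probabilities q of a random neural network (RNN).

    W_plus[j, i]  : excitatory weight w+(j, i) = r_j * p+(j, i)   (assumed layout)
    W_minus[j, i] : inhibitory weight w-(j, i) = r_j * p-(j, i)
    Lambda, lam   : exogenous excitatory / inhibitory arrival rates
    r             : firing rates of the n neurons
    """
    n = len(r)
    q = np.zeros(n)
    for _ in range(max_iter):
        # Total excitatory and inhibitory signal arrival rates at each neuron.
        lam_plus = Lambda + W_plus.T @ q
        lam_minus = lam + W_minus.T @ q
        # Steady-state equation q_i = lam_plus_i / (r_i + lam_minus_i),
        # clipped at 1 since q_i is a probability.
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

For a stable network (all q_i < 1) the iteration converges to the unique fixed point; the learning algorithm then uses these q values, plus a linear system for their gradients, at each presentation of an input-output pair.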
Pages: 154-164 (11 pages)
References
18 items in total
[1] ACKLEY DH, 1985, COGNITIVE SCI, V9, P147
[2] ALMEIDA LB, 1987, 1ST P IEEE INT C NEU, V2, P609
[3] ATALAY V, 1991, ARTIFICIAL NEURAL NE, V1, P111
[4] BAUM EB, 1991, COMMUNICATION 0511
[5] BEHRENS H, 1991, ARTIFICIAL NEURAL NE, V2, P1511
[6] GELENBE E, 1992, JAN ORSA TC COMP SCI
[7] GELENBE E, 1991, ARTIFICIAL NEURAL NE, V1, P307
[8] Gelenbe E. Stability of the Random Neural Network Model [J]. Neural Computation, 1990, 2(2): 239-247
[10] KANDEL E, 1985, VOLUNTARY MOVEMENT, Vsecond, P666