APPROXIMATION OF DISCRETE-TIME STATE-SPACE TRAJECTORIES USING DYNAMIC RECURRENT NEURAL NETWORKS

Cited by: 82
Authors:
JIN, L
NIKIFORUK, PN
GUPTA, MM
Affiliation:
[1] Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Saskatchewan
Keywords:
DOI:
10.1109/9.400480
Chinese Library Classification (CLC):
TP [Automation Technology, Computer Technology]
Discipline Code:
0812
Abstract:
In this note, the approximation capability of a class of discrete-time dynamic recurrent neural networks (DRNNs) is studied. Analytical results show that some of the states of such a DRNN, described by a set of difference equations, can uniformly approximate a state-space trajectory produced by either a discrete-time nonlinear system or a continuous function on a closed discrete-time interval. The approximation must, however, be realized through an adaptive learning process. This capability makes such networks potentially useful for applications such as system identification and adaptive control.
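The abstract states the approximation result but not a specific learning algorithm. As a rough, self-contained illustration of the idea only, the Python/NumPy sketch below assumes a DRNN of the common form x[k+1] = tanh(W x[k] + B u[k]) and adapts its weights so that one of its states tracks a trajectory generated by a simple nonlinear difference equation. The network form, the target system, and the finite-difference training loop are all illustrative assumptions and are not taken from the paper.

import numpy as np

# Toy discrete-time dynamic recurrent network (an assumed standard form, not the
# paper's exact model):  x[k+1] = tanh(W x[k] + B u[k]).
rng = np.random.default_rng(0)
n_x, n_u, T = 8, 1, 30                      # state size, input size, horizon

W = 0.1 * rng.standard_normal((n_x, n_x))   # recurrent weights
B = 0.1 * rng.standard_normal((n_x, n_u))   # input weights

# Reference trajectory from a simple (hypothetical) nonlinear difference equation.
u = np.sin(0.3 * np.arange(T)).reshape(T, n_u)
y_ref = np.zeros(T + 1)
for k in range(T):
    y_ref[k + 1] = 0.6 * y_ref[k] / (1.0 + y_ref[k] ** 2) + 0.5 * u[k, 0]

def rollout(W, B):
    # Iterate the network's difference equation and return the state sequence.
    xs = [np.zeros(n_x)]
    for k in range(T):
        xs.append(np.tanh(W @ xs[-1] + B @ u[k]))
    return np.array(xs)

def loss(W, B):
    # Mean squared error between the first network state and the reference trajectory.
    xs = rollout(W, B)
    return np.mean((xs[1:, 0] - y_ref[1:]) ** 2)

# Adaptive learning step: plain gradient descent with finite-difference gradients
# (backpropagation through time would be the usual choice; this keeps the sketch short).
lr, eps = 0.2, 1e-5
for step in range(300):
    base = loss(W, B)
    gW = np.zeros_like(W)
    for i in range(n_x):
        for j in range(n_x):
            Wp = W.copy(); Wp[i, j] += eps
            gW[i, j] = (loss(Wp, B) - base) / eps
    gB = np.zeros_like(B)
    for i in range(n_x):
        for j in range(n_u):
            Bp = B.copy(); Bp[i, j] += eps
            gB[i, j] = (loss(W, Bp) - base) / eps
    W -= lr * gW
    B -= lr * gB

print(f"final trajectory MSE: {loss(W, B):.6f}")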
Pages: 1266-1270 (5 pages)