Approximation of Dynamical Systems by Continuous-Time Recurrent Neural Networks

Cited by: 651
Authors
Funahashi, K.
Nakamura, Y.
Institutions
Keywords
Approximation; Continuous-time recurrent neural network; Dynamical system; Autonomous system; Trajectory; Internal state; Hidden unit; Continuous curve
DOI
10.1016/S0893-6080(05)80125-X
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we prove that any finite time trajectory of a given n-dimensional dynamical system can be approximately realized by the internal state of the output units of a continuous time recurrent neural network with n output units, some hidden units, and an appropriate initial condition. The essential idea of the proof is to embed the n-dimensional dynamical system into a higher dimensional one which defines a recurrent neural network. As a corollary, we also show that any continuous curve can be approximated by the output of a recurrent neural network.
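To make the object of the theorem concrete, the sketch below integrates a continuous-time recurrent neural network of the standard form dx_i/dt = -x_i/tau + sum_j w_ij * sigma(x_j) + b_i with forward Euler and reads out the internal states of the first n units as the approximating trajectory. This is a minimal illustration only: the network sizes, random weights, biases, and initial condition are placeholder assumptions of ours, not the embedding constructed in the paper's proof, and the function names are hypothetical.

```python
# Minimal CTRNN simulation sketch (illustrative, not the paper's construction).
# Dynamics: dx_i/dt = -x_i / tau + sum_j w_ij * sigma(x_j) + b_i
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))          # logistic activation

def simulate_ctrnn(W, b, tau, x0, T, dt=1e-3):
    """Integrate dx/dt = -x/tau + W @ sigma(x) + b from x0 over [0, T] by forward Euler."""
    steps = int(T / dt)
    x = np.array(x0, dtype=float)
    traj = np.empty((steps + 1, x.size))
    traj[0] = x
    for k in range(steps):
        x = x + dt * (-x / tau + W @ sigma(x) + b)
        traj[k + 1] = x
    return traj

if __name__ == "__main__":
    n, h = 2, 6                               # n output units, h hidden units (illustrative sizes)
    N = n + h
    rng = np.random.default_rng(0)
    W = rng.normal(scale=1.5, size=(N, N))    # placeholder weights, not the proof's embedding
    b = rng.normal(scale=0.5, size=N)
    tau = 1.0
    x0 = rng.normal(scale=0.1, size=N)        # plays the role of the "appropriate initial condition"
    traj = simulate_ctrnn(W, b, tau, x0, T=5.0)
    print(traj[:, :n].shape)                  # internal states of the n output units over time
```

In the paper's result, the hidden units realize a higher-dimensional system whose projection onto the output units stays uniformly close to the target trajectory over the finite time interval; here the random network only illustrates the kind of dynamics being simulated and read out.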
Pages: 801-806
Page count: 6