BACKPROPAGATION THROUGH TIME - WHAT IT DOES AND HOW TO DO IT

Cited: 2815
Authors
WERBOS, PJ
Affiliation
[1] National Science Foundation, Washington, DC 20550
Keywords
DOI
10.1109/5.58337
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Code
0808; 0809
Abstract
Backpropagation is now the most widely used tool in the field of artificial neural networks. At the core of backpropagation is a method for calculating derivatives exactly and efficiently in any large system made up of elementary subsystems or calculations which are represented by known, differentiable functions; thus, backpropagation has many applications which do not involve neural networks as such. This paper first reviews basic backpropagation, a simple method which is now being widely used in areas like pattern recognition and fault diagnosis. Next, it presents the basic equations for backpropagation through time, and discusses applications to areas like pattern recognition involving dynamic systems, system identification, and control. Finally, it describes further extensions of this method, to deal with systems other than neural networks, systems involving simultaneous equations or true recurrent networks, and other practical issues which arise with this method. Pseudocode is provided to clarify the algorithms. The chain rule for ordered derivatives, the theorem which underlies backpropagation, is briefly discussed. © 1990, IEEE
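The abstract's core idea, sweeping backward through an unrolled computation to accumulate exact derivatives, can be illustrated with a minimal sketch. This is not the paper's own pseudocode; it is an illustrative single-layer tanh recurrent network with a squared-error loss, and the names (`bptt_grads`, `W`, `U`) are assumptions for the example.

```python
import numpy as np

def bptt_grads(W, U, xs, targets):
    """Backpropagation through time for h_t = tanh(W h_{t-1} + U x_t).

    Returns the summed squared-error loss and the exact gradients
    dLoss/dW and dLoss/dU, accumulated over all time steps.
    """
    H = W.shape[0]
    hs = [np.zeros(H)]                      # hidden states, h_0 = 0
    # ---- forward pass: unroll the network over the sequence ----
    for x in xs:
        hs.append(np.tanh(W @ hs[-1] + U @ x))
    # loss summed over time: L = 0.5 * sum_t ||h_t - y_t||^2
    loss = 0.5 * sum(np.sum((h - y) ** 2) for h, y in zip(hs[1:], targets))
    # ---- backward pass through time ----
    dW, dU = np.zeros_like(W), np.zeros_like(U)
    dh_next = np.zeros(H)                   # gradient arriving from step t+1
    for t in reversed(range(len(xs))):
        dh = (hs[t + 1] - targets[t]) + dh_next   # local error + future error
        da = dh * (1.0 - hs[t + 1] ** 2)          # through tanh'
        dW += np.outer(da, hs[t])                 # hs[t] is h_{t-1}
        dU += np.outer(da, xs[t])
        dh_next = W.T @ da                        # pass gradient to step t-1
    return loss, dW, dU
```

Because the derivatives are exact, they can be checked against a one-sided finite difference on any single weight, which is a standard sanity test for a hand-written backward pass.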
Pages: 1550-1560
Page count: 11