WEIGHT PERTURBATION - AN OPTIMAL ARCHITECTURE AND LEARNING TECHNIQUE FOR ANALOG VLSI FEEDFORWARD AND RECURRENT MULTILAYER NETWORKS

Cited by: 130
Authors
JABRI, M
FLOWER, B
Affiliation
[1] School of Electrical Engineering, University of Sydney, Sydney
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1992 / Vol. 3 / No. 1
DOI
10.1109/72.105429
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Previous work on analog VLSI implementations of multilayer perceptrons with on-chip learning has mainly targeted algorithms such as back-propagation. Although back-propagation is computationally efficient, implementing it in analog VLSI requires excessive hardware. In this paper we show that gradient descent using a direct approximation of the gradient, rather than back-propagation, is more economical for parallel analog implementations. We also show that this technique, which we call "weight perturbation," is suitable for multilayer recurrent networks. A discrete-level analog implementation demonstrating the training of an XOR network is presented as an example.
Pages: 154-157
Page count: 4
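
The abstract describes the learning rule only in prose, so a minimal sketch may help. Weight perturbation estimates each weight's gradient with a forward difference: perturb one weight at a time, remeasure the network error, and divide the error change by the perturbation, so training needs only forward passes and no back-propagated error signals. The 2-2-1 tanh network, learning rate, and perturbation size below are illustrative assumptions for the paper's XOR example, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
y = np.array([0., 1., 1., 0.])                          # XOR targets

# Flat parameter vector for a hypothetical 2-2-1 tanh network:
# 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias.
theta = rng.normal(scale=0.5, size=9)

def error(theta):
    """Mean squared error of the 2-2-1 tanh network over all XOR patterns."""
    W1 = theta[:4].reshape(2, 2)
    b1 = theta[4:6]
    W2 = theta[6:8]
    b2 = theta[8]
    h = np.tanh(X @ W1 + b1)
    out = np.tanh(h @ W2 + b2)
    return np.mean((out - y) ** 2)

lr, pert = 0.5, 1e-3          # assumed learning rate and perturbation size
for epoch in range(5000):
    E = error(theta)
    grad = np.empty_like(theta)
    for i in range(theta.size):
        theta[i] += pert                       # perturb one weight
        grad[i] = (error(theta) - E) / pert    # forward-difference gradient estimate
        theta[i] -= pert                       # restore the weight
    theta -= lr * grad                         # ordinary gradient-descent update

print(f"final MSE: {error(theta):.4f}")

Because only the scalar error needs to be observed, the same loop applies unchanged to a recurrent network: the error function simply runs the recurrent dynamics, which is what makes the technique attractive for analog hardware where exact backward passes are costly.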