DIGITAL VLSI BACKPROPAGATION NETWORKS

Cited by: 4
Author
CARD, H
Institution
[1] Univ of Manitoba, Winnipeg, Manitoba
Source
CANADIAN JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING-REVUE CANADIENNE DE GENIE ELECTRIQUE ET INFORMATIQUE | 1995, Vol. 20, No. 1
DOI
10.1109/CJECE.1995.7102060
CLC classification
TP3 [Computing technology; computer technology]
Subject classification code
0812
Abstract
An overview is presented of digital VLSI implementations of artificial neural networks (ANNs) configured as multilayer perceptrons employing the backpropagation learning algorithm. Several other network architectures and learning algorithms are also mentioned for comparison. We focus on those implementations which employ parallel hardware in the learning computations, not simply in the retrieval or classification process. The treatment extends from serial and parallel general-purpose simulators, which are simply programmed to implement these learning algorithms, to full custom CMOS chips or neurocomputers dedicated to one version of the learning model. Among the themes of this paper are topologies, bit-serial communications, arithmetic systems, and trade-offs between flexibility and performance.
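The backpropagation learning computation that these VLSI designs implement can be sketched in software. The following minimal Python version is illustrative only (not from the paper): it trains a small multilayer perceptron on XOR by online gradient descent; the hidden-layer size, learning rate, and epoch count are arbitrary choices. The surveyed hardware realizes the same multiply-accumulate and sigmoid operations with fixed-point or bit-serial arithmetic rather than floating point.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
N_HID = 4  # hidden units (illustrative choice)
# w_hid[j]: weights of hidden unit j -- two inputs plus a bias term
w_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID + 1)]  # + output bias

def forward(x):
    """Forward pass: hidden activations h and network output y."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(N_HID)) + w_out[N_HID])
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR
eta = 0.5  # learning rate (arbitrary)

for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        # Output delta: error times the sigmoid derivative y(1 - y).
        d_out = (t - y) * y * (1.0 - y)
        # Hidden deltas: backpropagate d_out through the output weights.
        d_hid = [d_out * w_out[j] * h[j] * (1.0 - h[j]) for j in range(N_HID)]
        # Gradient-descent weight updates.
        for j in range(N_HID):
            w_out[j] += eta * d_out * h[j]
        w_out[N_HID] += eta * d_out
        for j in range(N_HID):
            w_hid[j][0] += eta * d_hid[j] * x[0]
            w_hid[j][1] += eta * d_hid[j] * x[1]
            w_hid[j][2] += eta * d_hid[j]
```

Each weight update is local to one connection, which is what makes the algorithm attractive for the parallel hardware mappings the paper surveys.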
Pages: 15-23
Number of pages: 9