ANALOG VLSI NEURAL NETWORKS - IMPLEMENTATION ISSUES AND EXAMPLES IN OPTIMIZATION AND SUPERVISED LEARNING

Cited by: 20
Authors:
EBERHARDT, SP
TAWEL, R
BROWN, TX
DAUD, T
THAKOOR, AP
Affiliations:
[1] Department of Engineering, Swarthmore College, Swarthmore
[2] Center for Space Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, Pasadena.
Funding: National Aeronautics and Space Administration (NASA)
DOI: 10.1109/41.170975
Chinese Library Classification: TP [automation technology; computer technology]
Discipline code: 0812
Abstract:
Many time-critical neural network applications require fully parallel hardware implementations for maximal throughput. We first survey the rich array of technologies that are being pursued, then focus on the analog CMOS VLSI medium. Although analog VLSI holds great promise for implementing dense neural networks in a fully parallel manner, the medium is "messy" in that limited dynamic range, offset voltages, and noise sources all conspire to reduce precision. Many traditional neural models may be difficult to implement in analog technology. In this paper, we examine how neural networks may be directly implemented in analog VLSI, giving examples of approaches that have been pursued to date. Two important application areas are highlighted: optimization, because neural hardware may offer a speed advantage of orders of magnitude over other methods, and supervised learning, because of the widespread use and generality of gradient-descent learning algorithms as applied to feedforward networks.
Pages: 552-564 (13 pages)
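
The abstract notes that limited dynamic range, offset voltages, and noise reduce the effective precision of analog VLSI synapses, and that gradient-descent learning in feedforward networks is a key application area. The following Python sketch is not from the paper; it is a minimal software emulation, under assumed illustrative values (network size, number of weight levels, offset magnitude), of how limited synaptic resolution and additive offsets might be modeled during gradient-descent training.

```python
# Illustrative sketch (not from the paper): train a tiny feedforward network by
# gradient descent while emulating two analog VLSI nonidealities mentioned in the
# abstract -- limited weight precision (quantized synapses) and additive offsets
# on the summing nodes. All parameter values below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, levels=64, w_max=1.0):
    """Clip weights to [-w_max, w_max] and snap them to a fixed number of levels."""
    step = 2.0 * w_max / (levels - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

def train_xor(levels=None, offset_sigma=0.0, epochs=2000, lr=0.5):
    """Train a 2-2-1 sigmoid network on XOR; optionally quantize weights and add offsets."""
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    W1, b1 = rng.normal(0, 0.5, (2, 2)), np.zeros(2)
    W2, b2 = rng.normal(0, 0.5, (2, 1)), np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        # Forward pass with optional additive offsets on the summing nodes.
        h = sig(X @ W1 + b1 + offset_sigma * rng.normal(size=2))
        out = sig(h @ W2 + b2 + offset_sigma * rng.normal(size=1))
        # Backpropagation: plain gradient descent on squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(0)
        if levels is not None:  # emulate limited synaptic resolution
            W1, W2 = quantize(W1, levels), quantize(W2, levels)
    # Report squared error of the trained (noise-free) network.
    return float(np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2))

print("ideal weights   :", train_xor())
print("6-bit weights   :", train_xor(levels=64))
print("6-bit + offsets :", train_xor(levels=64, offset_sigma=0.05))
```

Comparing the three printed errors gives a rough feel for how quantization and offsets degrade training, which is the kind of precision limitation the paper discusses for analog hardware; the paper itself addresses these issues at the circuit level rather than through software emulation.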