1ST-ORDER VERSUS 2ND-ORDER SINGLE-LAYER RECURRENT NEURAL NETWORKS

Cited: 61
Authors
GOUDREAU, MW
GILES, CL
CHAKRADHAR, ST
CHEN, D
Affiliations
[1] NEC RES INST INC,PRINCETON,NJ 08540
[2] NEC USA INC,C&CRL,PRINCETON,NJ 08540
[3] PRINCETON UNIV,PRINCETON,NJ 08544
[4] UNIV MARYLAND,DEPT PHYS & ASTRON,INST ADV COMP STUDIES,COLL PK,MD 20742
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1994 / Vol. 5 / No. 3
DOI
10.1109/72.286928
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.
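The second-order update the abstract refers to forms products of state and input activations, s'_i = H(Σ_{j,k} W_{ijk} s_j x_k) with a hard-limiting activation H, which lets the weight tensor encode a finite-state transition table directly. The following sketch is illustrative only, not the paper's construction: the two-state parity recognizer, the one-hot weight layout, and all function names are assumptions chosen for this example.

```python
import numpy as np

def hard_limit(v, theta=0.5):
    # Hard-limiting (threshold) activation: 1 if v > theta, else 0.
    return (v > theta).astype(int)

def second_order_step(W, s, x):
    # Second-order SLRNN update: s'_i = H( sum_{j,k} W[i,j,k] * s[j] * x[k] )
    return hard_limit(np.einsum('ijk,j,k->i', W, s, x))

# Encode the parity automaton (odd/even number of 1s) in the weights.
# States (one-hot): s = [1,0] is 'even', s = [0,1] is 'odd'.
# Inputs (one-hot): x = [1,0] is symbol '0', x = [0,1] is symbol '1'.
# W[i,j,k] = 1 iff reading symbol k in state j moves the machine to state i.
W = np.zeros((2, 2, 2))
W[0, 0, 0] = 1  # even --0--> even
W[1, 1, 0] = 1  # odd  --0--> odd
W[1, 0, 1] = 1  # even --1--> odd
W[0, 1, 1] = 1  # odd  --1--> even

def recognizes_odd_parity(bits):
    s = np.array([1, 0])               # start in the 'even' state
    for b in bits:
        x = np.eye(2, dtype=int)[b]    # one-hot encode the input symbol
        s = second_order_step(W, s, x)
    return bool(s[1])                  # accept iff we end in the 'odd' state

print(recognizes_odd_parity([1, 0, 1, 1]))  # three 1s -> True
print(recognizes_odd_parity([1, 1]))        # two 1s   -> False
```

A first-order SLRNN would instead compute H(A s + B x), summing state and input contributions; the paper's result is that with hard-limiting neurons this additive form cannot, by itself, realize every transition table that the multiplicative form encodes so directly.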
Pages: 511-513
Number of pages: 3
Related Papers
11 items in total
[1] Alon, N.; Dewdney, A. K.; Ott, T. J. Efficient simulation of finite automata by neural nets. Journal of the ACM, 1991, 38(2): 495-514.
[2] Atlas, L. E.; Suzuki, Y. Digital systems for artificial neural networks. IEEE Circuits & Devices, 1989, 5(6): 20-24.
[3] Elman, J. L. Finding structure in time. Cognitive Science, 1990, 14(2): 179-211.
[4] Fahlman, S. E. Advances in Neural Information Processing Systems, 1991: 190.
[5] Giles, C. L.; Miller, C. B.; Chen, D.; Chen, H. H.; Sun, G. Z.; Lee, Y. C. Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 1992, 4(3): 393-405.
[6] Kohavi, Z. Switching and Finite Automata Theory, 1978.
[7] Minsky, M. L. Computation: Finite and Infinite Machines, 1967: 32.
[8] Pollack, J. B. The induction of dynamic recognizers. Machine Learning, 1991, 7(2-3): 227-252.
[9] Seidl, D. Proceedings of the International Joint Conference on Neural Networks, 1991, Vol. 2: 709.
[10] Siegelmann, H. T. Proceedings of the 5th ACM Workshop on Computational Learning Theory, 1992.