Simple strategies to encode tree automata in sigmoid recursive neural networks

Cited by: 11
Authors
Carrasco, RC [1]
Forcada, ML [1]
Affiliation
[1] Univ Alacant, Dept Llenguatges & Sistemes Informat, E-03071 Alacant, Spain
Keywords
tree automata; recursive neural networks; neural computation; analog neural networks;
DOI
10.1109/69.917555
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, a number of authors have explored the use of recursive neural nets (RNN) for the adaptive processing of trees or tree-like structures. One of the most important language-theoretical formalizations of the processing of tree-structured data is that of deterministic finite-state tree automata (DFSTA). DFSTA may easily be realized as RNN using discrete-state units, such as the threshold linear unit. A recent result by Šíma (Neural Network World 7 (1997), pp. 679-686) shows that any threshold linear unit operating on binary inputs can be implemented in an analog unit using a continuous activation function and bounded real inputs. The constructive proof finds a scaling factor for the weights and reestimates the bias accordingly. In this paper, we explore the application of this result to simulate DFSTA in sigmoid RNN (that is, analog RNN using monotonically growing activation functions) and also present an alternative scheme for one-hot encoding of the input that yields smaller weight values and, therefore, works at a lower saturation level.
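The weight-scaling idea the abstract attributes to Šíma can be illustrated with a minimal sketch (the weights, margin, and scaling factor below are hypothetical choices for illustration, not the paper's actual construction): scaling a threshold unit's weights by a large factor H, after shifting the bias by a small margin so no binary input lands exactly on the decision boundary, makes a sigmoid unit's output agree with the threshold unit's binary output to within any desired tolerance.

```python
import itertools
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical threshold linear unit on 3 binary inputs.
w = [2.0, -1.0, 1.5]
b = -1.0

def threshold(x):
    """Discrete-state unit: fires iff the net input is nonnegative."""
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if net >= 0 else 0

# Analog simulation: shift the bias by a margin delta chosen smaller than
# the smallest nonzero |net input| over binary inputs (0.5 for these
# weights), so every net input has magnitude at least delta, then scale
# everything by a large factor H to drive the sigmoid into saturation.
delta = 0.25
H = 50.0
eps = 1e-3  # tolerance on the analog output

def analog(x):
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(H * (net + delta))

# The analog unit tracks the discrete one on every binary input pattern.
for x in itertools.product([0, 1], repeat=3):
    assert abs(analog(x) - threshold(x)) < eps
```

With these choices the scaled net input has magnitude at least H * delta = 12.5 for every binary pattern, so the sigmoid output differs from 0 or 1 by at most e^(-12.5), comfortably inside the tolerance; larger H yields a lower saturation level still.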
Pages: 148-156
Page count: 9
Related Papers
19 records in total
  • [1] AN ALGEBRAIC FRAMEWORK TO REPRESENT FINITE-STATE MACHINES IN SINGLE-LAYER RECURRENT NEURAL NETWORKS
    ALQUEZAR, R
    SANFELIU, A
    [J]. NEURAL COMPUTATION, 1995, 7 (05) : 931 - 949
  • [2] [Anonymous], CURRENTS THEORY COMP
  • [3] Stable encoding of finite-state machines in discrete-time recurrent neural nets with sigmoid units
    Carrasco, RC
    Forcada, ML
    Valdés-Muñoz, MA
    Ñeco, RP
    [J]. NEURAL COMPUTATION, 2000, 12 (09) : 2129 - 2174
  • [4] FINDING STRUCTURE IN TIME
    ELMAN, JL
    [J]. COGNITIVE SCIENCE, 1990, 14 (02) : 179 - 211
  • [5] LEARNING THE INITIAL-STATE OF A 2ND-ORDER RECURRENT NEURAL-NETWORK DURING REGULAR-LANGUAGE INFERENCE
    FORCADA, ML
    CARRASCO, RC
    [J]. NEURAL COMPUTATION, 1995, 7 (05) : 923 - 930
  • [6] A general framework for adaptive processing of data structures
    Frasconi, P
    Gori, M
    Sperduti, A
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 1998, 9 (05) : 768 - 786
  • [7] LEARNING AND EXTRACTING FINITE STATE AUTOMATA WITH 2ND-ORDER RECURRENT NEURAL NETWORKS
    GILES, CL
    MILLER, CB
    CHEN, D
    CHEN, HH
    SUN, GZ
    LEE, YC
    [J]. NEURAL COMPUTATION, 1992, 4 (03) : 393 - 405
  • [8] 1ST-ORDER VERSUS 2ND-ORDER SINGLE-LAYER RECURRENT NEURAL NETWORKS
    GOUDREAU, MW
    GILES, CL
    CHAKRADHAR, ST
    CHEN, D
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 1994, 5 (03) : 511 - 513
  • [9] Hopcroft J. E., 2007, Introduction to Automata Theory, Languages and Computation
  • [10] KREMER S, 1998, P 8 INT C ART NEUR N, V2, P529