Stable encoding of finite-state machines in discrete-time recurrent neural nets with sigmoid units

Cited by: 19
Authors
Carrasco, RC [1]
Forcada, ML
Valdés-Muñoz, MA
Ñeco, RP
Affiliations
[1] Univ Alacant, Dept Llenguatges & Sistemes Informat, E-03071 Alacant, Spain
[2] Univ Miguel Hernandez, Dept Ciencias Expt & Tecnol, E-03202 Elx, Spain
DOI
10.1162/089976600300015097
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
There has been considerable interest in using discrete-time recurrent neural networks (DTRNN) to learn finite-state tasks, with interesting results on the induction of simple finite-state machines from input-output strings. Parallel work has studied the computational power of DTRNN in connection with finite-state computation. This article describes a simple strategy for devising stable encodings of finite-state machines in computationally capable discrete-time recurrent neural architectures with sigmoid units, and gives a detailed presentation of how this strategy may be applied to encode a general class of finite-state machines in a variety of commonly used first- and second-order recurrent neural networks. Unlike previous work, which either imposed restrictions on state values or relied on a detailed analysis of fixed-point attractors, our approach applies to any positive, bounded, strictly growing, continuous activation function, and uses simple bounding criteria derived from a study of the conditions under which the proposed encoding scheme guarantees that the DTRNN actually behaves as a finite-state machine.
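The kind of encoding the abstract refers to can be sketched with a minimal example. This is an illustration, not the paper's exact construction: it assumes a second-order network x_i[t] = g(Σ_jk W_ijk x_j[t-1] u_k[t] + b_i) with logistic sigmoid g, a hypothetical two-state DFA (parity of 'a's), weight +H on each automaton transition, bias -H/2, and H taken large enough that the state units stay saturated near 0 and 1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example DFA (not from the paper): parity of 'a's over {a, b}.
# delta[(state, symbol)] = next state
delta = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
n_states, symbols = 2, ['a', 'b']

# Second-order weights: W[i, j, k] = H iff the DFA maps state j on
# symbol k to state i; all other weights are 0, and every bias is -H/2.
H = 10.0  # must exceed some bound (the kind of condition the paper studies)
W = np.zeros((n_states, n_states, len(symbols)))
for (j, s), i in delta.items():
    W[i, j, symbols.index(s)] = H
b = np.full(n_states, -H / 2)

def run(string, start=0):
    """Drive the network with one-hot inputs; return the winning state unit."""
    x = np.zeros(n_states)
    x[start] = 1.0  # start in a saturated one-hot state
    for s in string:
        u = np.zeros(len(symbols))
        u[symbols.index(s)] = 1.0
        # x_i[t] = g( sum_{j,k} W[i,j,k] * x_j[t-1] * u_k[t] + b_i )
        x = sigmoid(np.einsum('ijk,j,k->i', W, x, u) + b)
    return int(np.argmax(x))

print(run("aab"))   # even number of 'a's -> state 0
print(run("abb"))   # odd number of 'a's  -> state 1
```

With this choice of weights, the target state unit receives net input near +H/2 and all others near -H/2 at every step, so the units never drift away from their saturated values and the network's state trajectory mirrors the DFA's; the paper's bounding criteria make this stability argument precise for a general class of activation functions.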
Pages: 2129-2174
Page count: 46