Nonrecurrent Neural Structure for Long-Term Dependence

Cited by: 31
Authors
Zhang, Shiliang [1 ]
Liu, Cong [2 ]
Jiang, Hui [3 ]
Wei, Si [2 ]
Dai, Lirong [1 ]
Hu, Yu [2 ]
Affiliations
[1] Univ Sci & Technol China, Natl Engn Lab Speech & Language Informat Proc, Hefei 230027, Peoples R China
[2] IFLYTEK Res, Hefei 230088, Peoples R China
[3] York Univ, Lassonde Sch Engn, Dept Elect Engn & Comp Sci, Toronto, ON M3J 1P3, Canada
Keywords
cFSMN; deep neural networks; feedforward sequential memory networks; language modeling; speech recognition
DOI
10.1109/TASLP.2017.2672398
Chinese Library Classification (CLC) Number
O42 [Acoustics];
Discipline Classification Code
070206; 082403;
Abstract
In this paper, we propose a novel neural network structure, the feedforward sequential memory network (FSMN), to model long-term dependence in time series without recurrent feedback. The proposed FSMN is a standard fully connected feedforward neural network equipped with learnable memory blocks in its hidden layers. Each memory block uses a tapped-delay line structure to encode long context information into a fixed-size representation, acting as a short-term memory mechanism somewhat similar to the layers of time-delay neural networks. We have evaluated FSMNs on several standard benchmark tasks, including speech recognition and language modeling. Experimental results show that FSMNs outperform conventional recurrent neural networks (RNNs) while being trained much more reliably and faster on sequential signals such as speech and language. Moreover, we propose a compact feedforward sequential memory network (cFSMN) that combines the FSMN with low-rank matrix factorization and slightly modifies the encoding method used in FSMNs to further simplify the network architecture. On the Switchboard speech recognition task, the proposed cFSMN structures reduce the model size by 60% and speed up learning by more than seven times, while still significantly outperforming the popular bidirectional LSTMs under both frame-level cross-entropy training and MMI-based sequence training.
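The tapped-delay line idea from the abstract can be sketched in a few lines: the memory block forms, at every frame, a weighted sum of the current and previous hidden activations of a layer, producing a fixed-size context representation. The sketch below is an illustration only, not the authors' code; the choice of NumPy, the function name fsmn_memory_block, the vector-valued (element-wise) tap coefficients, and the 20-frame look-back are all assumptions made for this example.

```python
# Minimal sketch of an FSMN-style memory block (assumed unidirectional,
# vector tap coefficients); not the implementation from the paper.
import numpy as np

def fsmn_memory_block(h, a):
    """Encode past context of hidden activations into a fixed-size memory.

    h : (T, D) hidden activations of one layer over a T-frame sequence
    a : (N + 1, D) learnable tap coefficients for the current frame and the
        previous N frames, applied element-wise

    Returns h_tilde of shape (T, D), where
        h_tilde[t] = sum_{i=0..N} a[i] * h[t - i]   (zero-padded for t - i < 0)
    """
    T, D = h.shape
    N = a.shape[0] - 1
    h_tilde = np.zeros_like(h)
    for i in range(N + 1):
        # tapped-delay line: shift h down by i frames, pad the start with zeros
        shifted = np.vstack([np.zeros((i, D)), h[:T - i]])
        h_tilde += a[i] * shifted
    return h_tilde

# Toy usage: 100 frames, 4-dimensional hidden layer, 20-frame look-back.
rng = np.random.default_rng(0)
h = rng.standard_normal((100, 4))
a = rng.standard_normal((21, 4)) * 0.1
h_tilde = fsmn_memory_block(h, a)  # passed to the next layer alongside h
print(h_tilde.shape)               # (100, 4)
```

Because the memory is a finite weighted sum rather than a recurrent state, the whole sequence can be processed with purely feedforward computation, which is what allows the training speed and stability gains reported in the abstract.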
Pages: 871-884
Page count: 14