Framewise phoneme classification with bidirectional LSTM and other neural network architectures

Cited by: 3455
Authors
Graves, A
Schmidhuber, J
Affiliations
[1] IDSIA, CH-6928 Manno Lugano, Switzerland
[2] Tech Univ Munich, D-85748 Munich, Germany
Keywords
DOI
10.1016/j.neunet.2005.06.042
Chinese Library Classification (CLC) number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it. (c) 2005 Elsevier Ltd. All rights reserved.
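The framewise setup described in the abstract, a bidirectional LSTM that emits one phoneme prediction per input frame, can be illustrated with a short sketch. The following is a minimal, hypothetical example and not the authors' original implementation: it assumes PyTorch, and the sizes (26 input features, 100 hidden units per direction, 61 phoneme classes) are illustrative placeholders rather than values taken from the paper.

```python
# Minimal sketch of framewise phoneme classification with a bidirectional LSTM.
# All dimensions below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class FramewiseBLSTM(nn.Module):
    def __init__(self, num_features=26, hidden_size=100, num_classes=61):
        super().__init__()
        # bidirectional=True runs one LSTM forward and one backward in time,
        # so each frame's output sees both past and future context.
        self.blstm = nn.LSTM(num_features, hidden_size,
                             batch_first=True, bidirectional=True)
        # Forward and backward hidden states are concatenated per frame.
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, frames):
        # frames: (batch, time, num_features); one class prediction per frame.
        outputs, _ = self.blstm(frames)
        return self.classifier(outputs)  # (batch, time, num_classes)

# Usage: classify a batch of 4 utterances, 200 frames each.
model = FramewiseBLSTM()
logits = model(torch.randn(4, 200, 26))
print(logits.shape)  # torch.Size([4, 200, 61])
```

Training such a model against per-frame phoneme labels (e.g. with a cross-entropy loss over the class dimension) corresponds to the framewise classification task the abstract refers to; the concatenated forward and backward states are what give each frame access to both past and future acoustic context.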
Pages: 602-610
Number of pages: 9