Fast Adaptation of Deep Neural Network Based on Discriminant Codes for Speech Recognition

Cited by: 119
Authors
Xue, Shaofei [1 ]
Abdel-Hamid, Ossama [2 ]
Jiang, Hui [2 ]
Dai, Lirong [1 ]
Liu, Qingfeng [1 ]
Affiliations
[1] Univ Sci & Technol China, Natl Engn Lab Speech & Language Informat Proc, Hefei 230026, Peoples R China
[2] York Univ, Dept Elect Engn & Comp Sci, Toronto, ON M3J 1P3, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Condition code; cross entropy (CE); deep neural network (DNN); fast adaptation; maximum mutual information (MMI); speaker code; SPEAKER ADAPTATION; FEATURES;
DOI
10.1109/TASLP.2014.2346313
Chinese Library Classification
O42 [Acoustics];
Discipline Classification Codes
070206; 082403;
Abstract
Fast adaptation of deep neural networks (DNNs) is an important research topic in deep learning. In this paper, we propose a general adaptation scheme for DNNs based on discriminant condition codes, which are fed directly into various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn these connection weights from training data, as well as corresponding adaptation methods to learn a new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition, based on either the frame-level cross-entropy or the sequence-level maximum mutual information training criterion. We propose three different ways to apply this adaptation scheme based on the so-called speaker codes: i) nonlinear feature normalization in feature space; ii) direct model adaptation of the DNN based on speaker codes; iii) joint speaker adaptive training with speaker codes. We evaluate the proposed adaptation methods on two standard speech recognition tasks, namely TIMIT phone recognition and large-vocabulary speech recognition on the Switchboard task. Experimental results show that all three methods are effective in adapting large DNN models using only a small amount of adaptation data. For example, the Switchboard results show that the proposed speaker-code-based adaptation methods achieve up to 8-10% relative error reduction using only a few dozen adaptation utterances per speaker. Finally, we achieve very good performance on Switchboard (12.1% WER) after speaker adaptation with the sequence training criterion, which is very close to the best performance reported on this task ("Deep convolutional neural networks for LVCSR," T. N. Sainath et al., Proc. IEEE Acoust., Speech, Signal Process., 2013).
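The core idea in the abstract — feeding a low-dimensional condition (speaker) code into a hidden layer of a pre-trained DNN through a new set of connection weights — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions, the sigmoid nonlinearity, and the additive injection point are assumptions chosen for clarity. In the paper's scheme, the code connection weights are learned from training data across speakers, while only the small code vector is estimated from each new speaker's adaptation data.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
feat_dim, hid_dim, code_dim = 40, 64, 8  # assumed sizes for illustration

# Pre-trained, speaker-independent layer parameters (kept fixed at adaptation time).
W = rng.standard_normal((hid_dim, feat_dim)) * 0.1
b = np.zeros(hid_dim)

# New connection weights A: learned once, from training data of many speakers.
A = rng.standard_normal((hid_dim, code_dim)) * 0.1

def adapted_layer(x, speaker_code):
    # The speaker code enters the layer through A, shifting the
    # pre-activation before the nonlinearity; a zero code recovers
    # the original speaker-independent layer exactly.
    return sigmoid(W @ x + A @ speaker_code + b)

x = rng.standard_normal(feat_dim)
h_si = adapted_layer(x, np.zeros(code_dim))          # speaker-independent output
s = rng.standard_normal(code_dim) * 0.5              # code estimated per speaker
h_sd = adapted_layer(x, s)                           # speaker-adapted output
```

Because only the `code_dim`-sized vector is estimated per speaker (the paper reports gains with a few dozen utterances), adaptation touches far fewer parameters than retraining any full weight matrix.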
Pages: 1713-1725
Page count: 13