ALPHA-NETS - A RECURRENT NEURAL NETWORK ARCHITECTURE WITH A HIDDEN MARKOV MODEL INTERPRETATION

Cited by: 50
Author: BRIDLE, JS
DOI: 10.1016/0167-6393(90)90049-F
Chinese Library Classification: O42 [Acoustics]
Discipline Codes: 070206; 082403
Abstract
A hidden Markov model isolated word recogniser using full likelihood scoring for each word model can be treated as a recurrent 'neural' network. The units in the recurrent loop are linear, but the observations enter the loop via a multiplication. Training can use back-propagation of partial derivatives to hill-climb on a measure of discriminability between words. The back-propagation has exactly the same form as the backward pass of the Baum-Welch (EM) algorithm for maximum-likelihood HMM training. A particular error criterion based on relative entropy (equivalent to the so-called Mutual Information criterion used for discriminative training of HMMs) has derivatives that are closely related to the Baum-Welch re-estimates and to Corrective Training.
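
A minimal sketch of the construction the abstract describes, not the paper's own code: the function names and the NumPy formulation below are illustrative assumptions. The forward (alpha) recursion is written as a recurrent loop whose units are linear in the previous alphas, with the observation likelihoods entering the loop multiplicatively; back-propagating the full-likelihood score through that loop reproduces the Baum-Welch backward (beta) recursion.

import numpy as np

def alpha_net_forward(A, B, pi):
    """Forward (alpha) recursion of an HMM, viewed as a recurrent net.
    A:  (N, N) transition matrix, A[i, j] = P(state j | state i)
    B:  (T, N) observation likelihoods, B[t, j] = P(x_t | state j)
    pi: (N,)   initial state distribution
    Returns alphas (T, N) and the full-likelihood score P(x_1..x_T)."""
    T, N = B.shape
    alpha = np.zeros((T, N))
    # The recurrent loop is linear (a matrix product); the observation
    # likelihoods enter via an elementwise multiplication.
    alpha[0] = pi * B[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
    return alpha, alpha[-1].sum()

def alpha_net_backward(A, B):
    """Back-propagate d(score)/d(alpha_t) through the recursion above.
    The chain rule through alpha[t+1] = (alpha[t] @ A) * B[t+1] gives
    exactly the Baum-Welch backward recursion: grad[t, i] = beta_t(i)."""
    T, N = B.shape
    beta = np.zeros((T, N))
    beta[-1] = 1.0  # d score / d alpha_T(i) = 1 for every state i
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, T = 3, 5
    A = rng.random((N, N)); A /= A.sum(axis=1, keepdims=True)
    pi = np.full(N, 1.0 / N)
    B = rng.random((T, N))  # stand-in observation likelihoods
    alpha, score = alpha_net_forward(A, B, pi)
    beta = alpha_net_backward(A, B)
    # HMM identity: sum_i alpha_t(i) * beta_t(i) == full likelihood, all t
    assert np.allclose((alpha * beta).sum(axis=1), score)

Under these assumptions, the assertion checks the standard identity that the alpha-beta inner product at every time step equals the full-likelihood word score, confirming that the back-propagated gradients coincide with the Baum-Welch betas.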
Pages: 83-92 (10 pages)