Audio-Visual Speech Modeling for Continuous Speech Recognition

Cited by: 350
Authors
Dupont, Stephane [1 ]
Luettin, Juergen [2 ]
Affiliations
[1] Mons Polytech Inst FPMs, TCTS Lab, Mons, Belgium
[2] IDIAP, Martigny, Switzerland
Keywords
Joint audio-video sensor integration; multistream hidden Markov models; speech recognition; visual feature extraction;
DOI
10.1109/6046.865479
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
This paper describes a speech recognition system that uses both acoustic and visual speech information to improve recognition performance in noisy environments. The system consists of three components: 1) a visual module; 2) an acoustic module; and 3) a sensor fusion module. The visual module locates and tracks the lip movements of a given speaker and extracts relevant speech features. This task is performed with an appearance-based lip model that is learned from example images. Visual speech features are represented by contour information of the lips and grey-level information of the mouth area. The acoustic module extracts noise-robust features from the audio signal. Finally, the sensor fusion module is responsible for the joint temporal modeling of the acoustic and visual feature streams and is realized using multistream hidden Markov models (HMMs). The multistream method allows the definition of different temporal topologies and levels of stream integration and hence enables the modeling of temporal dependencies more accurately than traditional approaches. We present two different methods to learn the asynchrony between the two modalities and show how to incorporate them in the multistream models. The superior performance of the proposed system is demonstrated on a large multispeaker database of continuously spoken digits. On a recognition task at 15 dB acoustic signal-to-noise ratio (SNR), acoustic perceptual linear prediction (PLP) features lead to a 56% error rate, noise-robust RASTA-PLP (Relative Spectra) acoustic features to a 7.2% error rate, and combined noise-robust acoustic and visual features to a 2.5% error rate.
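The core of the multistream HMM fusion described in the abstract is that each HMM state scores the acoustic and visual observations with separate emission models, and the per-stream log-likelihoods are combined as a weighted sum (a weighted product in the probability domain). A minimal sketch of that combination step follows; the univariate Gaussian emissions, the `state` dictionary layout, and the default stream weights are illustrative assumptions, not the paper's actual models, and in practice the weights would be tuned to the acoustic SNR.

```python
import math

def gaussian_log_pdf(x, mean, var):
    # Log-density of a univariate Gaussian; stands in for a per-stream
    # HMM state emission model (real systems use mixtures over feature vectors).
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def multistream_log_emission(x_audio, x_video, state, w_audio=0.7, w_video=0.3):
    # Weighted sum of per-stream log-likelihoods for one HMM state,
    # i.e. a weighted product of the stream likelihoods. The stream
    # weights are illustrative: in noisy conditions more weight would
    # be shifted onto the visual stream.
    log_b_audio = gaussian_log_pdf(x_audio, state["audio_mean"], state["audio_var"])
    log_b_video = gaussian_log_pdf(x_video, state["video_mean"], state["video_var"])
    return w_audio * log_b_audio + w_video * log_b_video

# Hypothetical single-state example: score one (audio, video) observation pair.
state = {"audio_mean": 0.0, "audio_var": 1.0, "video_mean": 0.0, "video_var": 1.0}
score = multistream_log_emission(0.1, -0.2, state)
```

Setting one weight to zero recovers single-modality decoding, which is how the audio-only PLP and RASTA-PLP baselines in the abstract relate to the fused system.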
Pages: 141-151
Number of pages: 11