Speech fragment decoding techniques for simultaneous speaker identification and speech recognition

Cited by: 24
Authors
Barker, Jon [1 ]
Ma, Ning [1 ]
Coy, Andre [1 ]
Cooke, Martin [2 ]
Affiliations
[1] Univ Sheffield, Dept Comp Sci, Sheffield S1 4DP, S Yorkshire, England
[2] Univ Basque Country, Fac Ciencias & Tecnol, Dept Elect & Elect, Leioa 48940, Spain
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Speech recognition; Speech separation; Speaker identification; Simultaneous speech; Auditory scene analysis; Noise robustness; CONCURRENT VOWELS; PERCEPTION; MODEL;
DOI
10.1016/j.csl.2008.05.003
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper addresses the problem of recognising speech in the presence of a competing speaker. We review a speech fragment decoding technique that treats segregation and recognition as coupled problems. Data-driven techniques are used to segment a spectro-temporal representation into a set of fragments, such that each fragment is dominated by one or other of the speech sources. A speech fragment decoder is used which employs missing data techniques and clean speech models to simultaneously search for the set of fragments and the word sequence that best matches the target speaker model. The paper investigates the performance of the system on a recognition task employing artificially mixed target and masker speech utterances. The fragment decoder produces significantly lower error rates than a conventional recogniser, and mimics the pattern of human performance that is produced by the interplay between energetic and informational masking. However, at around 0 dB the performance is generally quite poor. An analysis of the errors shows that a large number of target/masker confusions are being made. The paper presents a novel fragment-based speaker identification approach that allows the target speaker to be reliably identified across a wide range of SNRs. This component is combined with the recognition system to produce significant improvements. When the target and masker utterances have the same gender, the recognition system has a performance at 0 dB equal to that of humans; in other conditions the error rate is roughly twice the human error rate. (C) 2008 Elsevier Ltd. All rights reserved.
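The "missing data techniques" the abstract refers to can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the authors' implementation): under a diagonal-Gaussian clean-speech state model, spectral channels judged to be dominated by the target (reliable) contribute an ordinary Gaussian log-density, while masked channels are scored by bounded marginalisation, integrating the density up to the observed mixture energy, since the clean-speech energy can only lie at or below the mixture. All names below are illustrative.

```python
from math import erf, log, pi, sqrt

def missing_data_loglik(x, mask, mean, var):
    """Log-likelihood of one spectral frame under a diagonal-Gaussian
    clean-speech state (illustrative sketch of missing-data decoding).

    x    -- observed log-energies per channel
    mask -- True where the channel is target-dominated (reliable)
    mean, var -- per-channel Gaussian parameters of the clean-speech state
    """
    ll = 0.0
    for xi, reliable, mu, v in zip(x, mask, mean, var):
        if reliable:
            # Reliable channel: ordinary Gaussian log-density.
            ll += -0.5 * log(2 * pi * v) - 0.5 * (xi - mu) ** 2 / v
        else:
            # Masked channel: bounded marginalisation -- integrate the
            # Gaussian from -inf up to the observed energy xi, i.e. the
            # log of the Gaussian CDF at the bound.
            cdf = 0.5 * (1.0 + erf((xi - mu) / sqrt(2 * v)))
            ll += log(max(cdf, 1e-300))  # floor to avoid log(0)
    return ll
```

In a fragment decoder along these lines, each hypothesised assignment of fragments to target/masker induces a different reliability mask, and the search keeps the mask-plus-word-sequence combination with the highest accumulated score.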
Pages: 94-111
Number of pages: 18