Automatic discrimination between laughter and speech

Cited by: 101
Authors
Truong, Khiet P. [1 ]
van Leeuwen, David A. [1 ]
Affiliation
[1] TNO Human Factors, Dept Human Interfaces, NL-3769 ZG Soesterberg, Netherlands
Keywords
automatic laughter detection; automatic emotion detection;
DOI
10.1016/j.specom.2007.01.001
CLC number
O42 [Acoustics];
Subject classification codes
070206; 082403;
Abstract
Emotions can be recognized from audible paralinguistic cues in speech. By detecting these paralinguistic cues, which can consist of laughter, a trembling voice, coughs, changes in the intonation contour, etc., information about the speaker's state and emotion can be revealed. This paper describes the development of a gender-independent laugh detector with the aim of enabling automatic emotion recognition. Different types of features (spectral, prosodic) for laughter detection were investigated using different classification techniques (Gaussian Mixture Models, Support Vector Machines, Multi-Layer Perceptron) often used in language and speaker recognition. Classification experiments were carried out with short pre-segmented speech and laughter segments extracted from the ICSI Meeting Recorder Corpus (with a mean duration of approximately 2 s). Equal error rates of around 3% were obtained when tested on speaker-independent speech data. We found that a fusion between classifiers based on Gaussian Mixture Models and classifiers based on Support Vector Machines increases discriminative power. We also found that a fusion between classifiers that use spectral features and classifiers that use prosodic information usually increases the performance for discrimination between laughter and speech. Our acoustic measurements showed differences between laughter and speech in mean pitch and in the ratio of the durations of unvoiced to voiced portions, which indicates that these prosodic features are indeed useful for discrimination between laughter and speech. (C) 2007 Published by Elsevier B.V.
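The evaluation pipeline the abstract describes (class-conditional generative models scored as a likelihood ratio, evaluated by equal error rate) can be sketched as follows. This is a toy illustration on synthetic features, not the authors' implementation: it uses a single diagonal-covariance Gaussian per class as a one-component stand-in for a GMM, and all names (`fit_diag_gaussian`, `loglik`, `equal_error_rate`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_diag_gaussian(X):
    """Fit one diagonal-covariance Gaussian (a single-component stand-in for a GMM)."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def loglik(X, mu, var):
    """Per-frame log-likelihood under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)

def equal_error_rate(scores, labels):
    """EER from detection scores (higher = more laughter-like); labels: 1 = laughter."""
    order = np.argsort(scores)
    y = labels[order].astype(float)
    fr = np.cumsum(y) / y.sum()                          # miss rate as threshold rises
    fa = 1.0 - np.cumsum(1.0 - y) / (len(y) - y.sum())   # false-alarm rate
    i = np.argmin(np.abs(fr - fa))                       # crossing point of the two curves
    return 0.5 * (fr[i] + fa[i])

# Synthetic 4-dimensional "spectral" features for two well-separated classes (illustration only).
laugh = rng.normal(1.0, 1.0, size=(200, 4))
speech = rng.normal(-1.0, 1.0, size=(200, 4))

# Train class-conditional models on the first half, test on the second half.
mu_l, var_l = fit_diag_gaussian(laugh[:100])
mu_s, var_s = fit_diag_gaussian(speech[:100])
test = np.vstack([laugh[100:], speech[100:]])
labels = np.array([1] * 100 + [0] * 100)

# Log-likelihood-ratio score: laughter model vs. speech model.
scores = loglik(test, mu_l, var_l) - loglik(test, mu_s, var_s)
print(f"EER: {equal_error_rate(scores, labels):.3f}")
```

In the paper's fusion experiments, scores from separate systems (e.g. spectral-feature and prosodic-feature classifiers) are combined at the score level; in this sketch that would amount to a weighted sum of two such `scores` arrays before computing the EER.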
Pages: 144-158
Page count: 15
Related papers
38 items in total
  • [1] ADAMI AG, 2003, P 8 EUR C SPEECH COM, P841
  • [2] [Anonymous], 2005, Data Mining: Practical Machine Learning Tools and Techniques
  • [3] [Anonymous], 1997, Proceedings of the European Conference on Speech Communication and Technology
  • [4] [Anonymous], 2005, INTERSPEECH
  • [5] Score normalization for text-independent speaker verification systems
    Auckenthaler, R
    Carey, M
    Lloyd-Thomas, H
    [J]. DIGITAL SIGNAL PROCESSING, 2000, 10 (1-3) : 42 - 54
  • [6] The acoustic features of human laughter
    Bachorowski, JA
    Smoski, MJ
    Owren, MJ
    [J]. JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 2001, 110 (03) : 1581 - 1597
  • [7] BETT M, 2000, P RIAO 2000 PAR FRAN
  • [8] Bickley C.A., 1992, P INT C SPOK LANG PR, P927
  • [9] Boersma P., 2020, Praat: Doing phonetics by computer, version 6.1.27
  • [10] Cai R, 2003, 2003 INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, VOL III, PROCEEDINGS, P37