Recognizing action units for facial expression analysis

Cited by: 968
Authors
Tian, YL [1]
Kanade, T
Cohn, JF
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[2] Univ Pittsburgh, Dept Psychol, Pittsburgh, PA 15260 USA
Keywords
computer vision; multistate face and facial component models; facial expression analysis; facial action coding system; action units; AU combinations; neural network
DOI
10.1109/34.908962
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression as action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as inputs, a group of action units (neutral expression, six upper-face AUs, and 10 lower-face AUs) are recognized, whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper-face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower-face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams.
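The abstract outlines a two-stage architecture: parametric descriptions of the tracked facial features serve as inputs to a neural network whose outputs correspond to individual AUs, which is what allows single AUs and AU combinations to be recognized by the same classifier. The sketch below illustrates only that multi-label classification idea; it is not the authors' implementation, and all feature names, dimensions, and data are placeholder assumptions.

```python
# A minimal sketch (not the authors' AFA implementation) of the multi-label
# idea described in the abstract: geometric parameters extracted by tracking
# the facial components are fed to a neural network with one output per AU,
# so AUs are recognized whether they occur alone or in combination.
# All feature names, the network size, and the data below are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical upper-face parameters per frame, e.g.
# [brow_height_l, brow_height_r, eye_opening_l, eye_opening_r, furrow_depth]
X_train = rng.random((200, 5))

# Binary indicator matrix, one column per upper-face AU
# (e.g. AU1, AU2, AU4, AU5, AU6, AU7); a frame may activate several at once.
y_train = (rng.random((200, 6)) > 0.7).astype(int)

# One hidden layer; MLPClassifier treats a 2-D binary target as multi-label.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# Predict the AU combination for a new frame's tracked parameters.
new_frame = rng.random((1, 5))
print(clf.predict(new_frame))   # e.g. [[1 0 1 0 0 0]] -> AU1 + AU4 together
```

In a real system the training inputs would come from the feature tracker rather than random placeholders; the multi-label output layer is what permits recognizing AU combinations without enumerating every combination as a separate class.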
Pages: 97-115
Number of pages: 19