Automatic sign language analysis: A survey and the future beyond lexical meaning

Cited by: 326
Authors
Ong, SCW [1 ]
Ranganath, S [1 ]
Affiliation
[1] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117576, Singapore
Keywords
sign language recognition; hand tracking; hand gesture recognition; gesture analysis; head tracking; head gesture recognition; face tracking; facial expression recognition; review
DOI
10.1109/TPAMI.2005.112
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures as they appear in continuous signing, and on developing algorithms that scale well to large vocabularies. However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication. Nonmanual signals and grammatical processes that result in systematic variations in sign appearance are integral aspects of this communication but have received comparatively little attention in the literature. In this survey, we examine data acquisition, feature extraction, and classification methods employed for the analysis of sign language gestures. These are discussed with respect to issues such as modeling transitions between signs in continuous signing, modeling inflectional processes, signer independence, and adaptation. We further examine works that attempt to analyze nonmanual signals and discuss issues related to integrating these with (hand) sign gestures. We also discuss the overall progress toward a true test of sign recognition systems: dealing with natural signing by native signers. We suggest some future directions for this research and also point to contributions it can make to other fields of research. Web-based supplemental materials (appendices), which contain several illustrative examples and videos of signing, can be found at www.computer.org/publications/dlib.
Pages: 873-891 (19 pages)