Real-time American sign language recognition using desk and wearable computer based video

Cited by: 686
Authors
Starner, T [1 ]
Weaver, J [1 ]
Pentland, A [1 ]
Affiliations
[1] MIT, Media Lab, Cambridge, MA 02139 USA
Keywords
gesture recognition; hidden Markov models; wearable computers; sign language; motion and pattern analysis
DOI
10.1109/34.735811
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk-mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.
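As background for the abstract above: HMM-based sign recognition of the kind described models each vocabulary word with its own hidden Markov model and classifies by maximum likelihood. Below is a minimal, generic sketch of that idea for the isolated-word, discrete-observation case. The word names, binary feature alphabet, and all parameters are invented for illustration; the paper's actual systems use continuous hand-shape and motion features with sentence-level decoding, which this sketch does not reproduce.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence `obs` under an
    HMM, computed with the forward algorithm.
    pi: initial state probabilities; A[p][s]: transition probabilities;
    B[s][o]: probability of emitting symbol o in state s."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in range(n)) * B[s][o]
                 for s in range(n)]
    total = sum(alpha)
    return math.log(total) if total > 0 else float("-inf")

# Two hypothetical 2-state left-to-right word models over a binary
# feature alphabet {0, 1} (purely illustrative parameters).
MODELS = {
    "HELLO": ([1.0, 0.0],
              [[0.7, 0.3], [0.0, 1.0]],
              [[0.9, 0.1], [0.8, 0.2]]),   # mostly emits symbol 0
    "WORLD": ([1.0, 0.0],
              [[0.7, 0.3], [0.0, 1.0]],
              [[0.1, 0.9], [0.2, 0.8]]),   # mostly emits symbol 1
}

def classify(obs, models=MODELS):
    """Return the word whose HMM assigns `obs` the highest likelihood."""
    return max(models, key=lambda w: forward_loglik(obs, *models[w]))
```

A sequence dominated by symbol 0 is classified as the word whose model favors symbol 0, and vice versa; continuous recognition additionally chains word models together and searches over word boundaries, typically with Viterbi decoding.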
Pages: 1371-1375
Page count: 5