Telerobotic Pointing Gestures Shape Human Spatial Cognition

Cited by: 24
Authors
Cabibihan, John-John [1 ,2 ]
So, Wing-Chee [3 ]
Saj, Sujin [1 ,2 ]
Zhang, Zhengchen [1 ,2 ]
Affiliations
[1] Natl Univ Singapore, Social Robot Lab, Interact & Digital Media Inst, Singapore 117576, Singapore
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117576, Singapore
[3] Chinese Univ Hong Kong, Dept Educ Psychol, Hong Kong, Hong Kong, Peoples R China
Keywords
Pointing gesture; Spatial memory; Telepresence robots; Human-robot interaction; Social robotics; ROBOTIC TELEPRESENCE; REMOTE-PRESENCE; SPEECH; EXPERIENCE; SURGERY; MAPS; CARE
DOI
10.1007/s12369-012-0148-9
CLC number
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
This paper aimed to explore whether human beings can understand gestures produced by telepresence robots. If so, they could derive the meaning conveyed by telerobotic gestures when processing spatial information. We conducted two experiments over Skype. Participants were presented with a robotic interface with arms that were teleoperated by an experimenter; the robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech-only condition (SO, in which verbal descriptions clearly indicated the spatial layout) and a speech-and-robotic-gesture condition (SR, in which verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures compensated for ambiguous speech during the processing of spatial information. In Experiment 2, the experimenter described spatial locations non-sequentially in the SR and SO conditions. Surprisingly, the number of spatial locations recalled in the SR condition was even higher than that in the SO condition, suggesting that telerobotic pointing gestures were more powerful than speech in conveying spatial information when it was presented in an unpredictable order. These findings provide evidence that human beings are able to comprehend telerobotic gestures and, importantly, to integrate these gestures with co-occurring speech. This work promotes engaging remote collaboration among humans through a robot intermediary.
Pages: 263-272
Number of pages: 10
References
54 items in total
  • [41] Distance matters
    Olson, GM
    Olson, JS
    [J]. HUMAN-COMPUTER INTERACTION, 2000, 15 (2-3): 139-178
  • [42] Paivio A., 1986, Mental representations: A dual coding approach
  • [43] Social tele-embodiment: Understanding presence
    Paulos, E
    Canny, J
    [J]. AUTONOMOUS ROBOTS, 2001, 11 (1): 87-95
  • [44] Deployment and early experience with remote-presence patient care in a community hospital
    Petelin, J. B.
    Nelson, M. E.
    Goodman, J.
    [J]. SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES, 2007, 21 (1): 53-56
  • [45] Rothenberg SS, 2009, J LAPAROENDOSC ADV S, V19, pS219, DOI [10.1089/lap.2008.0133.supp, 10.1089/lap.2008.0133]
  • [46] Sampsel D, 2010, CLIN SIMUL NURS
  • [47] The role of telementoring and telerobotic assistance in the provision of laparoscopic colorectal surgery in rural areas
    Sebajang, H.
    Trudeau, P.
    Dougall, A.
    Hegge, S.
    McKinley, C.
    Anvari, M.
    [J]. SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES, 2006, 20 (9): 1389-1393
  • [48] Telementoring for minimally invasive surgical training by wireless robot
    Sereno, S.
    Mutter, D.
    Dallemagne, B.
    Smith, C. D.
    Marescaux, Jacques
    [J]. SURGICAL INNOVATION, 2007, 14 (3): 184-191
  • [49] Smith C Daniel, 2005, Surg Innov, V12, P139, DOI 10.1177/155335060501200212
  • [50] Using the Hands to Identify Who Does What to Whom: Gesture and Speech Go Hand-in-Hand
    So, Wing Chee
    Kita, Sotaro
    Goldin-Meadow, Susan
    [J]. COGNITIVE SCIENCE, 2009, 33 (1): 115-125