NAVIG: augmented reality guidance system for the visually impaired: Combining object localization, GNSS, and spatial audio

Cited by: 102
Authors
Katz, Brian F. G. [1 ]
Kammoun, Slim [2 ,3 ]
Parseihian, Gaetan [1 ]
Gutierrez, Olivier [2 ,3 ]
Brilhault, Adrien [2 ,3 ,4 ]
Auvray, Malika [1 ]
Truillet, Philippe [2 ,3 ]
Denis, Michel [1 ]
Thorpe, Simon [3 ,4 ]
Jouffrais, Christophe [2 ,3 ]
Affiliations
[1] Univ Paris 11, CNRS, LIMSI, F-91403 Orsay, France
[2] CNRS, IRIT, Toulouse, France
[3] Univ Toulouse 3, F-31062 Toulouse, France
[4] CNRS, CerCo, Toulouse, France
Keywords
Assisted navigation; Guidance; Spatial audio; Visually impaired assistive device; Need analysis; Auditory display; Route; Blind; Substitution; Exploration; Pedestrians; Perception; People; Sound; Aid
DOI
10.1007/s10055-012-0213-6
Chinese Library Classification
TP39 [Computer applications]
Discipline classification codes
081203; 0835
Abstract
Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired. The project NAVIG (Navigation Assisted by artificial VIsion and GNSS) is directed toward increasing personal autonomy via a virtual augmented reality system. The system integrates an adapted geographic information system with different classes of objects useful for improving route selection and guidance. The database also includes models of important geolocated objects that may be detected by real-time embedded vision algorithms. Object localization (relative to the user) may serve both global positioning and sensorimotor actions such as heading, grasping, or piloting. The user is guided to his desired destination through spatialized semantic audio rendering, always maintained in the head-centered reference frame. This paper presents the overall project design and architecture of the NAVIG system. In addition, details of a new type of detection and localization device are presented. This approach combines a bio-inspired vision system that can recognize and locate objects very quickly and a 3D sound rendering system that is able to perceptually position a sound at the location of the recognized object. This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.
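For illustration only (not taken from the paper): the abstract describes guidance cues that are always rendered in a head-centered reference frame, which implies converting a detected object's position into azimuth, elevation, and distance relative to the user's head before spatial audio rendering. The following minimal Python sketch shows one way such a conversion could look; the function name, the local east/north/up frame, and the yaw-only head model are assumptions made for this example, not the NAVIG implementation.

# Illustrative sketch: converting an object's position into a head-centered
# azimuth/elevation/distance cue, as might be handed to a spatial (binaural)
# audio renderer. Frame conventions and the yaw-only head model are assumed.
import math

def head_centered_cue(object_xyz, head_xyz, head_yaw_deg):
    """Return (azimuth_deg, elevation_deg, distance_m) of an object relative
    to the user's head. Positions are in a local frame (x = east, y = north,
    z = up); head_yaw_deg is 0 when facing north, positive clockwise."""
    dx = object_xyz[0] - head_xyz[0]
    dy = object_xyz[1] - head_xyz[1]
    dz = object_xyz[2] - head_xyz[2]

    # Rotate the horizontal offset into the head frame (yaw only).
    yaw = math.radians(head_yaw_deg)
    forward = dy * math.cos(yaw) + dx * math.sin(yaw)  # along gaze direction
    right = dx * math.cos(yaw) - dy * math.sin(yaw)    # toward the user's right

    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(right, forward))            # positive = right
    elevation = math.degrees(math.atan2(dz, math.hypot(forward, right)))
    return azimuth, elevation, distance

if __name__ == "__main__":
    # Example: object 3 m north and 1 m east of the user, at head height,
    # while the user faces north-east (yaw = 45 degrees).
    az, el, dist = head_centered_cue((1.0, 3.0, 0.0), (0.0, 0.0, 0.0), 45.0)
    print(f"azimuth {az:.1f} deg, elevation {el:.1f} deg, distance {dist:.2f} m")

In this example the object lies about 26.6 degrees to the user's left, so a renderer following this scheme would place the semantic audio cue slightly left of the gaze direction; the actual NAVIG rendering pipeline is described in the paper itself.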
Pages: 253-269
Page count: 17