Humans integrate visual and haptic information in a statistically optimal fashion

Cited by: 3179
Authors
Ernst, MO [1 ]
Banks, MS [1 ]
Affiliation
[1] Univ Calif Berkeley, Sch Optometry, Vis Sci Program, Berkeley, CA 94720 USA
Funding
US National Institutes of Health; US National Science Foundation
DOI
10.1038/415429a
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
When a person looks at an object while exploring it with their hand, vision and touch both provide information for estimating the properties of the object. Vision frequently dominates the integrated visual-haptic percept, for example when judging size, shape or position [1-3], but in some circumstances the percept is clearly affected by haptics [4-7]. Here we propose that a general principle, which minimizes variance in the final estimate, determines the degree to which vision or haptics dominates. This principle is realized by using maximum-likelihood estimation [8-15] to combine the inputs. To investigate cue combination quantitatively, we first measured the variances associated with visual and haptic estimation of height. We then used these measurements to construct a maximum-likelihood integrator. This model behaved very similarly to humans in a visual-haptic task. Thus, the nervous system seems to combine visual and haptic information in a fashion that is similar to a maximum-likelihood integrator. Visual dominance occurs when the variance associated with visual estimation is lower than that associated with haptic estimation.
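For two unbiased cues corrupted by independent Gaussian noise, the maximum-likelihood integration rule described in the abstract reduces to an inverse-variance weighted average whose variance is lower than that of either cue alone. The Python sketch below is only an illustration of that rule; the function name, the example heights and the variances are assumptions made for the demo, not values from the paper.

def mle_combine(s_vis, var_vis, s_hap, var_hap):
    """Combine a visual and a haptic estimate of the same property,
    each modeled as an unbiased Gaussian estimate with the given variance."""
    # Inverse-variance weights: the more reliable cue receives the larger weight.
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_hap)
    w_hap = 1.0 - w_vis
    s_combined = w_vis * s_vis + w_hap * s_hap
    # Variance of the combined estimate; never larger than the smaller single-cue variance.
    var_combined = (var_vis * var_hap) / (var_vis + var_hap)
    return s_combined, var_combined

# Illustrative example: a low-noise visual estimate and a noisier haptic
# estimate of object height (the millimetre values are made up for the demo).
s, v = mle_combine(s_vis=55.0, var_vis=1.0, s_hap=53.0, var_hap=4.0)
print(f"combined estimate = {s:.2f} mm, combined variance = {v:.2f} mm^2")

With var_vis = 1.0 and var_hap = 4.0 the visual weight is 0.8, so the combined estimate (54.6 mm) lies close to the visual estimate; this is the sense in which lower visual variance produces visual dominance, while the combined variance (0.8) is smaller than either single-cue variance.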
Pages: 429-433
Page count: 6