Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration
Cited by: 136
Authors:
Navarra, J
Vatakis, A
Zampini, M
Soto-Faraco, S
Humphreys, W
Spence, C
Affiliations:
[1] Grp Rec Neurociencia Cognit, Barcelona, Spain
[2] Univ Oxford, Dept Expt Psychol, Oxford OX1 3UD, England
Source:
COGNITIVE BRAIN RESEARCH | 2005 / Vol. 25 / Issue 02
Keywords:
multisensory integration;
asynchrony;
temporal recalibration;
speech;
music;
temporal order judgment;
DOI:
10.1016/j.cogbrainres.2005.07.009
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory];
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
We examined whether monitoring asynchronous audiovisual speech induces a general temporal recalibration of auditory and visual sensory processing. Participants monitored a videotape featuring a speaker pronouncing a list of words (Experiments 1 and 3) or a hand playing a musical pattern on a piano (Experiment 2). The auditory and visual channels were either presented in synchrony, or else asynchronously (with the visual signal leading the auditory signal by 300 ms; Experiments 1 and 2). While performing the monitoring task, participants were asked to judge the temporal order of pairs of auditory (white noise bursts) and visual stimuli (flashes) that were presented at varying stimulus onset asynchronies (SOAs) during the session. The results showed that, while monitoring desynchronized speech or music, participants required a longer interval between the auditory and visual stimuli in order to perceive their temporal order correctly, suggesting a widening of the temporal window for audiovisual integration. The fact that no such recalibration occurred when we used a longer asynchrony (1000 ms) that exceeded the temporal window for audiovisual integration (Experiment 3) supports this conclusion. (c) 2005 Elsevier B.V. All rights reserved.
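The temporal order judgment (TOJ) measure described in the abstract is conventionally summarized by fitting a psychometric function to the proportion of "visual first" responses across SOAs; the just-noticeable difference (JND) derived from that fit indexes how wide the audiovisual temporal window is. The following is a minimal illustrative sketch of such an analysis, not the authors' own code; the SOA values and response proportions are hypothetical.

```python
# Illustrative sketch: estimating PSS and JND from TOJ data (hypothetical values).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Stimulus onset asynchronies in ms (negative = auditory leads, positive = visual leads)
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
# Hypothetical proportion of "visual first" responses at each SOA
p_visual_first = np.array([0.04, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.93, 0.97])

def cum_gauss(soa, pss, sigma):
    """Cumulative-Gaussian psychometric function.
    pss   : point of subjective simultaneity (ms)
    sigma : spread of the function; JND ~ 0.675 * sigma (75% correct threshold)."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 100.0])
jnd = 0.675 * sigma  # interval needed to report temporal order correctly ~75% of the time

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
# A larger JND after exposure to asynchronous speech would correspond to the
# "widening of the temporal window" described in the abstract.
```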
Pages: 499 - 507
Number of pages: 9