When hearing the bark helps to identify the dog: Semantically-congruent sounds modulate the identification of masked pictures

Cited by: 149
Authors
Chen, Yi-Chuan [1 ]
Spence, Charles [1 ]
Affiliations
[1] Univ Oxford, Dept Expt Psychol, Crossmodal Res Lab, Oxford OX1 3UD, England
Keywords
Semantic congruency; Object identification; Audiovisual; Multisensory; SELECTIVE ATTENTION; REACTION-TIME; AUDIOVISUAL INTEGRATION; UNCONSCIOUS PERCEPTION; OBJECT RECOGNITION; VISUAL INFORMATION; CONCEPTUAL MASKING; UNITY ASSUMPTION; LOW-LEVEL; WORD;
DOI
10.1016/j.cognition.2009.10.012
Chinese Library Classification (CLC): B84 [Psychology]
Discipline classification code: 04; 0402
Abstract
We report a series of experiments designed to assess the effect of audiovisual semantic congruency on the identification of visually-presented pictures. Participants made unspeeded identification responses concerning a series of briefly-presented, and then rapidly-masked, pictures. A naturalistic sound was sometimes presented together with the picture at a stimulus onset asynchrony (SOA) that varied between 0 and 533 ms (auditory lagging). The sound could be semantically congruent, semantically incongruent, or else neutral (white noise) with respect to the target picture. The results showed that when the onset of the picture and sound occurred simultaneously, a semantically-congruent sound improved, whereas a semantically-incongruent sound impaired, participants' picture identification performance, as compared to performance in the white-noise control condition. A significant facilitatory effect was also observed at SOAs of around 300 ms, whereas no such semantic congruency effects were observed at the longest interval (533 ms). These results therefore suggest that the neural representations associated with visual and auditory stimuli can interact in a shared semantic system. Furthermore, this crossmodal semantic interaction is not constrained by the need for the strict temporal coincidence of the constituent auditory and visual stimuli. We therefore suggest that audiovisual semantic interactions likely occur in a short-term buffer which rapidly accesses, and temporarily retains, the semantic representations of multisensory stimuli in order to form a coherent multisensory object representation. These results are explained in terms of Potter's (1993) notion of conceptual short-term memory. (c) 2009 Elsevier B.V. All rights reserved.
Pages: 389-404
Number of pages: 16