Polysensory interactions along lateral temporal regions evoked by audiovisual speech
Cited: 220
Authors:
Wright, TM
Pelphrey, KA
Allison, T
McKeown, MJ
McCarthy, G
Affiliations:
[1] Duke Univ, Med Ctr, Brain Imaging & Anal Ctr, Durham, NC 27710 USA
[2] Univ N Carolina, Sch Med, Dept Psychiat, Neurodev Disorders Res Ctr, Chapel Hill, NC USA
[3] Yale Univ, Sch Med, Dept Neurol, New Haven, CT 06510 USA
[4] Dept Vet Affairs Med Ctr, Durham, NC USA
Keywords:
DOI:
10.1093/cercor/13.10.1034
Chinese Library Classification (CLC):
Q189 [Neuroscience];
Subject classification code:
071006;
Abstract:
Many socially significant biological stimuli are polymodal, and information processing is enhanced for polymodal over unimodal stimuli. The human superior temporal sulcus (STS) region has been implicated in processing socially relevant stimuli, particularly those derived from biological motion such as mouth movements. Single-unit studies in monkeys have demonstrated that regions of the STS are polysensory, responding to visual, auditory, and somatosensory stimuli, and human neuroimaging studies have shown that lip-reading activates auditory regions of the lateral temporal lobe. We evaluated whether concurrent speech sounds and mouth movements were more potent activators of the STS than either speech sounds or mouth movements alone. In an event-related fMRI study, subjects observed an animated character that produced audiovisual speech and the audio and visual components of speech alone. Strong activation of the STS region was evoked in all three conditions, with the greatest levels of activity elicited by audiovisual speech. Subsets of activated voxels within the STS region demonstrated overadditivity (audiovisual > audio + visual) and underadditivity (audiovisual < audio + visual). These results confirm the polysensory nature of the STS region and demonstrate for the first time that polymodal interactions may both potentiate and inhibit activation.
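As a minimal formalization of the additivity criterion mentioned in the abstract (an illustrative sketch, not taken from this record; the per-voxel response amplitudes β_A, β_V, and β_AV are assumed here, e.g. condition-wise GLM estimates):

\[ \beta_{AV} > \beta_{A} + \beta_{V} \ \text{(overadditive)}, \qquad \beta_{AV} < \beta_{A} + \beta_{V} \ \text{(underadditive)} \]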
Pages: 1034-1043
Number of pages: 10