Hearing Faces: How the Infant Brain Matches the Face It Sees with the Speech It Hears
Cited by: 106
Authors:
Bristow, Davina [2]
Dehaene-Lambertz, Ghislaine [1,3,4]
Mattout, Jeremie [4]
Soares, Catherine [4]
Gliga, Teodora [4]
Baillet, Sylvain [4,5]
Mangin, Jean-Francois [4,6]
Affiliations:
[1] CEA, INSERM, U562, SAC, DSV, DRM, NeuroSpin, F-91191 Gif Sur Yvette, France
[2] UCL, London WC1E 6BT, England
[3] AP HP, Le Kremlin Bicetre, France
[4] Neurospin, IFR49, Gif Sur Yvette, France
[5] LENA, CNRS, Paris, France
[6] Neurospin, CEA, UNAF, Gif Sur Yvette, France
Abstract: Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age, at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus with the facial cues (McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory-visual integration occurs during the early stages of perception, as in adults. The mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In a second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domains. We observed a mismatch response for both types of change at similar latencies. Their topographies, however, were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and the right hemisphere, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore the complexity and structure of the human cortical organization that sustains communication from the first weeks of life.
Pages: 905-921 (17 pages)