Cross-modal binding and activated attentional networks during audio-visual speech integration: a functional MRI study
Authors: Saito Daisuke N., Yoshimura Kumiko, Kochiyama Takanori, Okada Tomohisa, Honda Manabu, Sadato Norihiro
Affiliation: National Institute for Physiological Sciences, Okazaki, Japan.
Abstract: We evaluated the neural substrates of cross-modal binding and divided attention during audio-visual speech integration using functional magnetic resonance imaging. The subjects (n = 17) were exposed to phonemically concordant or discordant auditory and visual speech stimuli. Three different matching tasks were performed: auditory-auditory (AA), visual-visual (VV) and auditory-visual (AV). Subjects were asked whether the prompted pair was congruent. We defined the neural substrates of the within-modal matching tasks by the contrasts AA - VV and VV - AA, and the cross-modal area as the intersection of the loci defined by AV - AA and AV - VV. The auditory task activated the bilateral anterior superior temporal gyrus and superior temporal sulcus, the left planum temporale and the left lingual gyrus. The visual task activated the bilateral middle and inferior frontal gyri, the right occipito-temporal junction, the intraparietal sulcus and the left cerebellum. The bilateral dorsal premotor cortex, the posterior parietal cortex (including the bilateral superior parietal lobule and the left intraparietal sulcus) and the right cerebellum were more prominently activated during AV than during AA or VV. Within these areas, the posterior parietal cortex was more active for concordant than for discordant stimuli, and hence was related to cross-modal binding. Our results indicate a close relationship between cross-modal attentional control and cross-modal binding during speech reading.
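
Note: the cross-modal localization described above amounts to keeping only voxels that survive both the AV - AA and the AV - VV contrasts (a conjunction). The following is a minimal Python/NumPy sketch of that intersection step, assuming hypothetical voxelwise t-statistic maps; the array shapes, variable names and threshold are illustrative and are not taken from the study.

import numpy as np

# Hypothetical voxelwise t-statistic maps for the two contrasts,
# e.g. from a standard GLM fit; random values stand in for real data.
rng = np.random.default_rng(0)
t_av_minus_aa = rng.normal(size=(64, 64, 40))  # AV - AA contrast map
t_av_minus_vv = rng.normal(size=(64, 64, 40))  # AV - VV contrast map

t_threshold = 3.1  # illustrative height threshold for significance

# Cross-modal area: voxels exceeding the threshold in BOTH contrasts,
# i.e. the intersection of the two thresholded maps.
cross_modal_mask = (t_av_minus_aa > t_threshold) & (t_av_minus_vv > t_threshold)
print(f"{int(cross_modal_mask.sum())} voxels in the cross-modal conjunction")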
Keywords: cross-modal matching; human; voice; integration; intraparietal sulcus; visual
|