An ALE meta‐analysis on the audiovisual integration of speech signals
Authors: Laura C. Erickson, Elizabeth Heeg, Josef P. Rauschecker, Peter E. Turkeltaub
Affiliations: 1. Department of Neurology, Georgetown University Medical Center, Washington, District of Columbia; 2. Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia; 3. Research Division, MedStar National Rehabilitation Hospital, Washington, District of Columbia
Abstract: The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals. Hum Brain Mapp 35:5587–5605, 2014. © 2014 Wiley Periodicals, Inc.
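The core idea behind activation likelihood estimation (ALE), as used in the meta-analysis above, is that each experiment's reported peak coordinates (foci) are blurred with a Gaussian to form a per-experiment "modeled activation" (MA) map, and the maps are then combined as a probabilistic union across experiments. The sketch below is a minimal, illustrative NumPy version on an arbitrary voxel grid; the function names, the fixed `sigma`, and the toy grid are assumptions for illustration and do not reproduce the authors' actual pipeline (which uses sample-size-dependent kernels and permutation-based thresholding).

```python
import numpy as np

def modeled_activation(shape, foci, sigma):
    """Per-experiment modeled-activation (MA) map: each focus is blurred
    with an isotropic Gaussian, and voxel values take the maximum across
    foci so they stay interpretable as activation probabilities."""
    grid = np.indices(shape)  # voxel coordinate arrays, shape (3, X, Y, Z)
    ma = np.zeros(shape)
    for focus in foci:
        d2 = sum((g - c) ** 2 for g, c in zip(grid, focus))  # squared distance
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma

def ale_map(shape, experiments, sigma=2.0):
    """Combine MA maps across experiments with the ALE union:
    ALE = 1 - prod_i (1 - MA_i), the probability that at least one
    experiment 'activates' a given voxel."""
    ale = np.zeros(shape)
    for foci in experiments:
        ale = 1.0 - (1.0 - ale) * (1.0 - modeled_activation(shape, foci, sigma))
    return ale
```

Because of the union formula, voxels where multiple experiments report nearby foci score higher than voxels driven by a single experiment, which is what lets the meta-analysis separate consistent convergence (e.g., bilateral posterior superior/middle temporal cortex) from isolated findings.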
Keywords: cross-modal; language; superior temporal sulcus; activation likelihood estimation; multisensory; auditory dorsal stream; inferior frontal gyrus; asynchronous; incongruent