Dynamic faces speed up the onset of auditory cortical spiking responses during vocal detection
Authors: Chandramouli Chandrasekaran, Luis Lemus, Asif A. Ghazanfar
Affiliation: Princeton Neuroscience Institute and Departments of Psychology and Ecology and Evolutionary Biology, Princeton University, Princeton, NJ 08540
Abstract: How low-level sensory areas help mediate the detection and discrimination advantages of integrating faces and voices is the subject of intense debate. To gain insights, we investigated the role of the auditory cortex in face/voice integration in macaque monkeys performing a vocal-detection task. Behaviorally, subjects were slower to detect vocalizations as the signal-to-noise ratio decreased, but seeing mouth movements associated with vocalizations sped up detection. Paralleling this behavioral relationship, as the signal-to-noise ratio decreased, the onset of spiking responses was delayed and response magnitudes decreased. However, when mouth motion accompanied the vocalization, these responses were uniformly faster. Conversely, and at odds with previous assumptions regarding the neural basis of face/voice integration, changes in the magnitude of neural responses were not related consistently to audiovisual behavior. Taken together, our data reveal that facilitation of spike latency is a means by which the auditory cortex partially mediates the reaction-time benefits of combining faces and voices.

In noisy environments, the audiovisual nature of speech is a tremendous benefit to sensory processing. While holding a conversation in a large social setting, your brain must deftly detect when a person is saying something, identify who is saying it, and discriminate what she is saying. To make the task easier, our brains do not rely entirely on the person’s voice but also take advantage of the speaker’s mouth movements. This visual motion provides spatial and temporal cues (1, 2) that readily integrate with the voice, enhancing both detection (3–10) and discrimination (11–15). How the brain mediates the behavioral benefits achieved by integrating signals from different modalities is the subject of intense debate and investigation (16).
For face/voice integration, traditional models emphasize the role of association areas embedded in the temporal, frontal, and parietal lobes (17). Although these regions certainly play important roles, numerous recent studies demonstrate that they are not the sole sites of multisensory convergence (18, 19). The auditory cortex, in particular, has many sources of visual input, and an increasing number of studies in both humans and nonhuman primates demonstrate that dynamic faces influence auditory cortical activity (20).

However, the relationship between multisensory behavioral performance and neural activity in the auditory cortex remains unknown for two reasons. First, the methodologies typically used to study the auditory cortex in humans cannot resolve neural activity at the level of action potentials. Second, regardless of the areas explored, none of the face/voice neurophysiological studies in monkeys to date, including auditory cortical studies (21–24) and studies of association areas (25–27), have required monkeys to perform a multisensory task. All of these physiological studies demonstrated that neural activity in response to faces combined with voices is integrative, exhibiting both enhanced and suppressed changes in response magnitude when multisensory conditions are compared with unisensory ones. It is presumed that such changes in firing rate mediate the behavioral benefits (e.g., faster reaction times, better accuracy) of multisensory signals, but it is possible that integrative neural responses, particularly in the auditory cortex, are epiphenomenal.

In this study, we combined an audiovisual vocal-detection task with auditory cortical physiology in macaque monkeys. When monkeys detected voices alone, our data show that the signal-to-noise ratio (SNR) systematically influences behavioral performance; the same systematic effects are observed in the magnitude and latency of spiking activity.
The addition of a dynamic face leads to audiovisual neural responses that are faster than auditory-only responses—dynamic faces speed up the latency of auditory cortical spiking activity. Surprisingly, the addition of dynamic faces does not systematically change the magnitude or variability of the firing rate. These data suggest that visual influences have a role in facilitating response latency in the auditory cortex during audiovisual vocal detection. Facial motion speeds up the spiking responses of the auditory cortex but has no systematic influence on firing rate magnitudes.
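The central measurement here is response-onset latency rather than firing-rate magnitude. The paper does not specify its latency algorithm in this excerpt, but a common approach is to mark onset as the first time the trial-averaged peristimulus time histogram (PSTH) exceeds a baseline-derived threshold. The sketch below is illustrative only: the bin size, threshold (baseline mean + 2 SD), consecutive-bin criterion, and the simulated spike trains are all assumptions, not the authors' parameters or data.

```python
import numpy as np

def onset_latency(spike_counts, bin_ms, baseline_bins, n_sd=2.0, consec=3):
    """Onset = first bin where the trial-averaged PSTH exceeds
    baseline mean + n_sd * baseline SD for `consec` consecutive bins.
    Thresholds and bin sizes here are illustrative, not the paper's values."""
    psth = spike_counts.mean(axis=0)          # average across trials
    base = psth[:baseline_bins]               # pre-stimulus baseline bins
    thresh = base.mean() + n_sd * base.std()
    above = psth > thresh
    for i in range(baseline_bins, len(psth) - consec + 1):
        if above[i:i + consec].all():
            return i * bin_ms                 # latency in ms from trial start
    return None                               # no detectable response

# Toy data: audiovisual (AV) trials respond earlier than auditory-only (A),
# mimicking the latency facilitation reported above (simulated, not real data).
rng = np.random.default_rng(0)

def fake_trials(onset_bin, n_trials=50, n_bins=100, base_rate=0.2, evoked=2.0):
    counts = rng.poisson(base_rate, size=(n_trials, n_bins)).astype(float)
    counts[:, onset_bin:] += rng.poisson(evoked, size=(n_trials, n_bins - onset_bin))
    return counts

aud = fake_trials(onset_bin=50)   # auditory-only: later evoked response
av  = fake_trials(onset_bin=40)   # audiovisual: earlier evoked response
lat_aud = onset_latency(aud, bin_ms=5, baseline_bins=30)
lat_av  = onset_latency(av,  bin_ms=5, baseline_bins=30)
```

On the simulated trials, `lat_av` comes out smaller than `lat_aud`, the pattern the paper reports for real neurons; the analogous magnitude comparison (mean evoked rate) would be the quantity the authors found *not* to track behavior.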
Keywords: multisensory integration; crossmodal; face processing; monkey vocalization