Similar Articles
20 similar articles retrieved.
1.
2.
An important step in perceptual processing is the integration of information from different sensory modalities into a coherent percept. It has been suggested that such crossmodal binding might be achieved by transient synchronization of neurons from different modalities in the gamma-frequency range (> 30 Hz). Here we employed a crossmodal priming paradigm, modulating the semantic congruency between visual–auditory natural object stimulus pairs, during the recording of the high-density electroencephalogram (EEG). Subjects performed a semantic categorization task. Analysis of the behavioral data showed a crossmodal priming effect (facilitated auditory object recognition) in response to semantically congruent stimuli. Differences in event-related potentials (ERPs) were found between 250 and 350 ms, which were localized to the left middle temporal gyrus (BA 21) using a distributed linear source model. Early gamma-band activity (40–50 Hz) was increased between 120 ms and 180 ms following auditory stimulus onset for semantically congruent stimulus pairs. Source reconstruction for this gamma-band response revealed a maximal increase in the left middle temporal gyrus (BA 21), an area known to be related to the processing of both complex auditory stimuli and multisensory processing. The data support the hypothesis that oscillatory activity in the gamma band reflects crossmodal semantic-matching processes in multisensory convergence sites.
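As background for the method, induced (non-phase-locked) gamma-band power of the kind reported here is commonly computed by removing the evoked mean before wavelet decomposition. A minimal sketch, assuming MNE-Python; the frequency range, wavelet settings, and data shapes are illustrative, not the authors' exact pipeline:

    import numpy as np
    from mne.time_frequency import tfr_array_morlet

    def induced_gamma_power(epochs, sfreq, freqs=np.arange(40.0, 51.0, 2.0)):
        # Induced activity: subtract the phase-locked (evoked) mean first,
        # then average Morlet wavelet power across single trials.
        # epochs: array of shape (n_trials, n_channels, n_times).
        induced = epochs - epochs.mean(axis=0, keepdims=True)
        return tfr_array_morlet(induced, sfreq=sfreq, freqs=freqs,
                                n_cycles=7.0, output="avg_power")

    # Synthetic stand-in for a 40-50 Hz analysis of epoched EEG.
    rng = np.random.default_rng(0)
    power = induced_gamma_power(rng.standard_normal((60, 128, 512)), sfreq=500.0)
    print(power.shape)  # (n_channels, n_freqs, n_times)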

3.
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., the superior temporal sulcus [STS]), as well as in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at the same location vs. on opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and the superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated by synchronous bimodal stimulation at the same external location; (iii) the right inferior parietal lobule was activated by synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (a shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

4.
Kanayama N, Tamè L, Ohira H, Pavani F. NeuroImage. 2012;59(4):3406-3417.
Multisensory integration involves bottom-up as well as top-down processes. We investigated the influence of top-down control on neural responses to multisensory stimulation using EEG recording and time-frequency analyses. Participants were stimulated on the index finger or thumb of the left hand, using tactile vibrators mounted on a foam cube. Simultaneously they received a visual distractor from a light-emitting diode adjacent to the active vibrator (spatially congruent trial) or adjacent to the inactive vibrator (spatially incongruent trial). The task was to respond to the elevation of the tactile stimulus (upper or lower) while ignoring the simultaneous visual distractor. To manipulate top-down control over this multisensory stimulation, the proportion of spatially congruent (vs. incongruent) trials was varied across blocks. Our results reveal that the behavioral cost of responding to incongruent rather than congruent trials (i.e., the crossmodal congruency effect) was modulated by the proportion of congruent trials. Most importantly, the EEG gamma-band response and the gamma-theta coupling were also affected by this modulation of top-down control, whereas the late theta-band response related to the congruency effect was not. These findings suggest that the gamma-band response is more than a marker of multisensory binding, being also sensitive to the correspondence between expected and actual multisensory stimulation. By contrast, the theta-band response was affected by congruency but appears to be largely immune to stimulation expectancy.
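The gamma-theta coupling measure reported here can be illustrated with a standard mean-vector-length modulation index (Canolty-style); the frequency bands and filter settings below are generic assumptions, not the authors' exact analysis:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    def modulation_index(x, fs, phase_band=(4, 8), amp_band=(40, 80)):
        # Mean vector length of gamma amplitude distributed over theta phase;
        # larger values indicate stronger phase-amplitude coupling.
        phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
        amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
        return np.abs(np.mean(amp * np.exp(1j * phase)))

    # Synthetic single-channel EEG: a 6 Hz theta rhythm plus noise.
    fs = 500.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
    print(modulation_index(eeg, fs))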

5.
During object manipulation the brain integrates the visual, auditory, and haptic experience of an object into a unified percept. Previous brain imaging studies have implicated, for instance, the dorsal part of the lateral occipital complex in visuo-tactile integration and the posterior superior temporal sulcus in audio-visual integration of object-related inputs (Amedi et al., 2005). Yet it is still unclear which brain regions represent object-specific information from all three sensory modalities. To address this question, we performed two complementary functional magnetic resonance imaging experiments. In the first experiment, we identified brain regions that were consistently activated by unimodal visual, auditory, and haptic processing of manipulable objects relative to non-object control stimuli presented in the same modality. In the second experiment, we assessed regional brain activations when participants had to match object-related information that was presented simultaneously in two or all three modalities. Only a well-defined region in the left fusiform gyrus (FG) showed object-specific activation during unisensory processing in the visual, auditory, and tactile modalities. The same region was also consistently activated during multisensory matching of object-related information across all three senses. Taken together, our results suggest that this region is central to the recognition of manipulable objects. A putative role of this FG region is to unify object-specific information provided by the visual, auditory, and tactile modalities into trisensory object representations.
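The logic of the first experiment, finding regions consistently activated across all three unimodal contrasts, amounts to a conjunction over thresholded maps. A toy sketch with random binary maps standing in for real statistical contrasts:

    import numpy as np

    # Hypothetical thresholded maps (True where object > control survives
    # correction), one per modality; random stand-ins for real contrasts.
    rng = np.random.default_rng(0)
    visual, auditory, haptic = (rng.random((64, 64, 40)) > 0.8 for _ in range(3))

    # Conjunction: voxels object-selective in all three modalities.
    trisensory = visual & auditory & haptic
    print(f"{trisensory.sum()} voxels survive the three-way conjunction")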

6.
The processing streams of the various sensory modalities are known to interact within the central nervous system. These interactions differ depending on the level of stimulus representation and attention. The current study focused on cross-sensory influences on stimulus change detection during unattended auditory processing. We employed an oddball paradigm to assess cortical processing using whole-head magnetoencephalography (MEG) in 20 volunteers. While subjects performed distraction tasks of varying difficulty, auditory duration deviants were applied randomly to the left or the right ear, preceded (by 200-400 ms) by oculomotor, static visual, or flow-field co-stimulation on either side. Mismatch fields were recorded over both hemispheres. Changes in gaze direction and static visual stimuli elicited the most reliable enhancement of deviance detection on the same side (most prominent at the right auditory cortex). Under both conditions, the lateralized, unattended, and unpredictive pre-cues acted analogously to shifts in selective attention, but their effects were not reduced by attentional load. Thus, the early cognitive representation of sounds seems to reflect automatic cross-modal interference. Preattentive multisensory integration may provide the neuronal basis for orienting reactions to objects in space and thus for voluntary control of selective attention.

7.
In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment.

8.
For attentional control of behavior, the brain permanently resolves a competition between the impressions supplied by different senses. Here, using a dual-modality temporal order detection task, we studied attentional modulation of oscillatory neuromagnetic activity in the human cerebral cortex. On each trial, after simultaneous exposure to visual and auditory noise, subjects were presented with an asynchronous pair of a visual and an auditory stimulus. Either of the two stimuli could occur first equally often; their order was not cued. Subjects had to determine the leading stimulus in a pair and attentively monitor it in order to respond upon its offset. With the attended visual or auditory stimuli, spectral power analysis revealed marked enhancements of induced gamma activity within 250 ms post-stimulus onset over the modality-specific cortices (occipital at 64 Hz, right temporal at 53 Hz). When unattended, however, the stimuli led to a significantly decreased (below baseline) gamma response in these cortical regions. The gamma decreases occurred at lower frequencies (approximately 30 Hz) than did the gamma increases. An increase in gamma power and frequency for the attended modality and their decrease for the unattended modality suggest that attentional regulation of multisensory processing involves reciprocal changes in the synchronization of the respective cortical networks. We assume that the gamma decrease reflects an active suppression of the task-irrelevant sensory input. This suppression occurs at lower frequencies, suggesting the involvement of larger-scale cell assemblies.

9.
The quick detection of dynamic changes in multisensory environments is essential for surviving dangerous events and orienting attention to informative events. Previous studies have identified multimodal cortical areas activated by changes of visual, auditory, and tactile stimuli. In the present study, we used magnetoencephalography (MEG) to examine time-varying cortical processes responsive to unexpected unimodal changes during continuous multisensory stimulation. The results showed change-driven cortical responses in multimodal areas, such as the temporo-parietal junction and the middle and inferior frontal gyri, regardless of the sensory modality in which the change occurred. These multimodal activations accompanied unimodal activations, both of which generally peaked within 300 ms after the changes. Thus, neural processes responsive to unimodal changes in the multisensory environment are distributed across these cortical areas with different timing.

10.
Electrophysiological studies in nonhuman primates and other mammals have shown that sensory cues from different modalities that appear at the same time and in the same location can increase the firing rate of multisensory cells in the superior colliculus to a level exceeding that predicted by summing the responses to the unimodal inputs. In contrast, spatially disparate multisensory cues can induce a profound response depression. We have previously demonstrated using functional magnetic resonance imaging (fMRI) that similar indices of crossmodal facilitation and inhibition are detectable in human cortex when subjects listen to speech while viewing visually congruent and incongruent lip and mouth movements. Here, we have used fMRI to investigate whether similar BOLD signal changes are observable during the crossmodal integration of nonspeech auditory and visual stimuli, matched or mismatched solely on the basis of their temporal synchrony, and if so, whether these crossmodal effects occur in brain areas similar to those identified during the integration of audio-visual speech. Subjects were exposed to synchronous and asynchronous auditory (white noise bursts) and visual (black/white alternating checkerboard) stimuli and to each modality in isolation. Synchronous and asynchronous bimodal inputs produced superadditive BOLD response enhancement and response depression across a large network of polysensory areas. The most highly significant of these crossmodal gains and decrements were observed in the superior colliculi. Other regions exhibiting these crossmodal interactions included cortex within the superior temporal sulcus, intraparietal sulcus, insula, and several foci in the frontal lobe, including within the superior and ventromedial frontal gyri. These data demonstrate the efficacy of using an analytic approach informed by electrophysiology to identify multisensory integration sites in humans and suggest that the particular network of brain areas implicated in these crossmodal integrative processes is dependent on the nature of the correspondence between the different sensory inputs (e.g., space, time, and/or form).
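The superadditive criterion invoked here, a bimodal response exceeding the sum of the two unimodal responses, reduces to a simple voxelwise contrast. A minimal sketch, with synthetic values standing in for per-voxel GLM beta estimates:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for per-voxel GLM betas in each condition.
    beta_av = rng.standard_normal(10000) + 0.2  # synchronous audiovisual
    beta_a = rng.standard_normal(10000) * 0.5   # auditory alone
    beta_v = rng.standard_normal(10000) * 0.5   # visual alone

    # Superadditive enhancement: AV > A + V; response depression: AV < A + V.
    interaction = beta_av - (beta_a + beta_v)
    print(f"{(interaction > 0).mean():.1%} enhanced, "
          f"{(interaction < 0).mean():.1%} depressed")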

11.
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but in the right STS more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response to McGurk stimuli in the right STS was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech.

12.
The purpose of this study was to reveal functional areas of the brain modulating processing of selective auditory or visual attention toward utterances. Regional cerebral blood flow was measured in six normal volunteers using positron emission tomography during two selective attention tasks and a control condition. The auditory task activated the auditory, inferior parietal, prefrontal, and anterior cingulate cortices. The visual task activated the visual association, inferior parietal, and prefrontal cortices. Both conditions activated the same area in the superior temporal sulcus. During the visual task, deactivation was observed in the auditory cortex. These results indicate that there exists a modality-dependent selective attention mechanism which activates or deactivates cortical areas in different ways.

13.
Jessen S, Kotz SA. NeuroImage. 2011;58(2):665-674.
Face-to-face communication works multimodally. Not only do we employ vocal and facial expressions, but body language provides valuable information as well. Here we focused on multimodal perception of emotion expressions, monitoring the temporal unfolding of the interaction of different modalities in the electroencephalogram (EEG). In the auditory condition, participants listened to emotional interjections such as "ah", while in the visual condition they saw mute video clips containing emotional body language. In the audiovisual condition participants saw video clips with matching interjections. In all three conditions, the emotions "anger" and "fear", as well as non-emotional stimuli, were used. The N100 amplitude was strongly reduced in the audiovisual compared to the auditory condition, suggesting a significant impact of visual information on early auditory processing. Furthermore, anger and fear expressions were distinct in the auditory but not the audiovisual condition. Complementing these event-related potential (ERP) findings, we report strong similarities in the alpha- and beta-band between the visual and the audiovisual conditions, suggesting a strong visual processing component in the perception of audiovisual stimuli. Overall, our results show an early interaction of modalities in emotional face-to-face communication using complex and highly natural stimuli.

14.
Deshpande G, Hu X, Stilla R, Sathian K. NeuroImage. 2008;40(4):1807-1814.
Although it is accepted that visual cortical areas are recruited during touch, it remains uncertain whether this depends on top-down inputs mediating visual imagery or engagement of modality-independent representations by bottom-up somatosensory inputs. Here we addressed this by examining effective connectivity in humans during haptic perception of shape and texture with the right hand. Multivariate Granger causality analysis of functional magnetic resonance imaging (fMRI) data was conducted on a network of regions that were shape- or texture-selective. A novel network reduction procedure was employed to eliminate connections that did not contribute significantly to overall connectivity. Effective connectivity during haptic perception was found to involve a variety of interactions between areas generally regarded as somatosensory, multisensory, visual and motor, emphasizing flexible cooperation between different brain regions rather than rigid functional separation. The left postcentral sulcus (PCS), left precentral gyrus and right posterior insula were important sources of connections in the network. Bottom-up somatosensory inputs from the left PCS and right posterior insula fed into visual cortical areas, both the shape-selective right lateral occipital complex (LOC) and the texture-selective right medial occipital cortex (probable V2). In addition, top-down inputs from left postero-supero-medial parietal cortex influenced the right LOC. Thus, there is strong evidence for the bottom-up somatosensory inputs predicted by models of visual cortical areas as multisensory processors and suggestive evidence for top-down parietal (but not prefrontal) inputs that could mediate visual imagery. This is consistent with modality-independent representations accessible through both bottom-up sensory inputs and top-down processes such as visual imagery.
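The directed-influence logic behind Granger causality can be illustrated in a pairwise toy version; note the study itself used a multivariate formulation plus a network-reduction procedure, so the lag order and variable names below are purely illustrative:

    import numpy as np

    def granger_index(x, y, order=2):
        # Does the past of y improve least-squares prediction of x beyond
        # x's own past? Returns a log variance ratio (>0 suggests "y -> x").
        n = len(x)
        past = lambda s: np.column_stack(
            [s[order - k - 1 : n - k - 1] for k in range(order)])
        Xr, Xf = past(x), np.column_stack([past(x), past(y)])
        t = x[order:]
        rss = lambda X: np.var(t - X @ np.linalg.lstsq(X, t, rcond=None)[0])
        return np.log(rss(Xr) / rss(Xf))

    # x drives y with one sample of lag: expect granger_index(y, x) >> 0.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(2000)
    print(granger_index(y, x), granger_index(x, y))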

15.
Human brain activity associated with audiovisual perception and attention
Coherent perception of objects in our environment often requires perceptual integration of auditory and visual information. Recent behavioral data suggest that audiovisual integration depends on attention. The current study investigated the neural basis of audiovisual integration using 3-Tesla functional magnetic resonance imaging (fMRI) in 12 healthy volunteers during attention to auditory or visual features, or audiovisual feature combinations of abstract stimuli (simultaneous harmonic sounds and colored circles). Audiovisual attention was found to modulate activity in the same frontal, temporal, parietal and occipital cortical regions as auditory and visual attention. In addition, attention to audiovisual feature combinations produced stronger activity in the superior temporal cortices than attention to only auditory or visual features. These modality-specific areas might be involved in attention-dependent perceptual binding of synchronous auditory and visual events into coherent audiovisual objects. Furthermore, the modality-specific temporal auditory and occipital visual cortical areas showed attention-related modulations during both auditory and visual attention tasks. This result supports the proposal that attention to stimuli in one modality can spread to encompass synchronously presented stimuli in another modality.

16.
Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

17.
A major determinant of multisensory integration, derived from single-neuron studies in animals, is the principle of inverse effectiveness (IE), which describes the phenomenon whereby maximal multisensory response enhancements occur when the constituent unisensory stimuli are minimally effective in evoking responses. Human behavioral studies, which have shown that multisensory interactions are strongest when stimuli are low in intensity, are in agreement with the IE principle, but the neurophysiologic basis for this finding is unknown. In this high-density electroencephalography (EEG) study, we examined effects of stimulus intensity on multisensory audiovisual processing in event-related potentials (ERPs) and response time (RT) facilitation in the bisensory redundant target effect (RTE). The RTE describes the finding that RTs are faster for bisensory redundant targets than for the respective unisensory targets. Participants were presented with semantically meaningless unisensory auditory, unisensory visual and bisensory audiovisual stimuli of low, middle and high intensity, and were instructed to make a speeded button response when a stimulus in either modality was presented. Behavioral data showed that the RTE exceeded predictions on the basis of probability summation of unisensory RTs, indicative of integrative multisensory processing, but only for low intensity stimuli. Paralleling this finding, multisensory interactions in short latency (40-60 ms) ERPs with a left posterior and right anterior topography were found particularly for stimuli with low intensity. Our findings demonstrate that the IE principle is applicable to early multisensory processing in humans.
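The probability-summation benchmark mentioned here is commonly tested with Miller's race-model inequality; a minimal sketch over synthetic response times (the cutoffs and distributions are assumptions, not the study's data):

    import numpy as np

    def race_violation(rt_av, rt_a, rt_v, t_grid):
        # Miller's inequality: under a race (probability summation),
        # F_AV(t) <= F_A(t) + F_V(t) at every t. Positive values of the
        # returned curve indicate genuine multisensory integration.
        cdf = lambda rts: np.mean(rts[:, None] <= t_grid[None, :], axis=0)
        return cdf(rt_av) - np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)

    # Synthetic RTs (ms): redundant targets faster than either unisensory.
    rng = np.random.default_rng(0)
    rt_a, rt_v = rng.normal(420, 60, 500), rng.normal(440, 60, 500)
    rt_av = rng.normal(350, 50, 500)
    violation = race_violation(rt_av, rt_a, rt_v, np.arange(200, 700, 10))
    print(f"max violation: {violation.max():.3f}")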

18.
Meylan RV, Murray MM. NeuroImage. 2007;35(1):244-254.
Effects of multisensory interactions on how subsequent sensory inputs are processed remain poorly understood. We investigated whether multisensory interactions between rudimentary visual and auditory stimuli (flashes and beeps) affect later visual processing. A 2 × 3 design varied the number of flashes (1 or 2) with the number of beeps (0, 1, or 2) presented on each trial, such that '2F1B' refers to the presentation of 2 flashes with 1 beep. Beeps, when present, were synchronous with the first flash, and pairs of stimuli within a trial were separated by a 52 ms ISI. Subjects indicated the number of flashes presented. Electrical neuroimaging of 128-channel event-related potentials assessed both the electric field strength and topography. Responses to a visual stimulus that was preceded by a multisensory event were isolated by calculating the difference between the 2F1B and 1F1B conditions, and responses to a visual stimulus preceded by a unisensory event were isolated by calculating the difference between the 2F0B and 1F0B conditions (MUL and VIS, respectively). Comparison of MUL and VIS revealed that the processing of visual information was significantly attenuated approximately 160 ms after the onset of the second flash when it was preceded by a multisensory event. Source estimations further indicated that this attenuation occurred within low-level visual cortices. Multisensory interactions are thus ongoing in low-level visual cortices and affect incoming sensory processing. These data provide evidence that multisensory interactions are not restricted in time and can dramatically influence the processing of subsequent stimuli, opening new lines of multisensory research.
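The subtraction logic of the design (2F1B minus 1F1B versus 2F0B minus 1F0B) isolates responses to the second flash; below is a minimal sketch with synthetic trial-averaged ERPs, using global field power as a generic strength measure rather than the study's full electrical neuroimaging pipeline:

    import numpy as np

    # Synthetic trial-averaged ERPs, channels x time, one per condition
    # (random stand-ins for real 128-channel recordings).
    rng = np.random.default_rng(0)
    n_ch, n_t = 128, 500
    erp = {c: rng.standard_normal((n_ch, n_t))
           for c in ("2F1B", "1F1B", "2F0B", "1F0B")}

    # Second flash preceded by a multisensory vs. a unisensory event.
    mul = erp["2F1B"] - erp["1F1B"]
    vis = erp["2F0B"] - erp["1F0B"]

    # Global field power (SD across electrodes) indexes field strength;
    # the study reports MUL attenuated ~160 ms after the second flash.
    gfp = lambda d: d.std(axis=0)
    attenuation = gfp(vis) - gfp(mul)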

19.
The auditory cortex is anatomically segregated into a central core and a peripheral belt region, which exhibit differences in preference for bandpassed noise and in temporal patterns of response to acoustic stimuli. While it has been shown that visual stimuli can modify response magnitude in auditory cortex, little is known about differential patterns of multisensory interactions in core and belt. Here, we used functional magnetic resonance imaging and examined the influence of a short visual stimulus presented prior to acoustic stimulation on the spatial pattern of the blood oxygen level-dependent signal response in auditory cortex. Consistent with crossmodal inhibition, the light produced a suppression of the signal response in a cortical region corresponding to the core. In the surrounding areas corresponding to the belt regions, however, we found an inverse modulation, with the signal increasing in the centrifugal direction. Our data suggest that crossmodal effects are differentially modulated according to the hierarchical core-belt organization of auditory cortex.

20.
Watkins S, Shams L, Tanaka S, Haynes JD, Rees G. NeuroImage. 2006;31(3):1247-1256.
When a single brief visual flash is accompanied by two auditory bleeps, it is frequently perceived incorrectly as two flashes. Here, we used high-field functional MRI in humans to examine the neural basis of this multisensory perceptual illusion. We show that activity in retinotopic visual cortex is increased by the presence of concurrent auditory stimulation, irrespective of any illusory perception. However, when concurrent auditory stimulation gave rise to illusory visual perception, activity in V1 was enhanced, despite auditory and visual stimulation being unchanged. These findings confirm that responses in human V1 can be altered by sound and show that they reflect subjective perception rather than the physically present visual stimulus. Moreover, as the right superior temporal sulcus and superior colliculus were also activated by illusory visual perception, together with V1 they provide a potential neural substrate for the generation of this multisensory illusion.
