Similar Articles
20 similar articles found (search time: 31 ms)
1.
The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the ventriloquism effect. These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification, and measures of participants' uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly lower when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.

2.
Neurophysiological studies in animals have shown that a sudden sound enhances perceptual processing of subsequent visual stimuli. In the present study, we explored the possibility that such enhancement also exists in humans and can be explained through crossmodal integration effects, whereby the interaction occurs at the level of bimodal neurons. Subjects were required to detect visual stimuli in a unimodal visual condition or in crossmodal audio-visual conditions. The spatial and temporal proximity of the multisensory stimuli were systematically varied. An enhancement of perceptual sensitivity (d′) for luminance detection was found when the audiovisual stimuli followed a rather clear spatial and temporal rule governing multisensory integration at the neuronal level.
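For readers unfamiliar with signal detection theory, the d′ measure used above separates detectability from response bias. The following is a minimal Python sketch (not the study's own analysis) of computing d′ from hit and false-alarm counts in a yes/no detection task; all counts are invented for illustration.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps rates away from 0 and 1, which would
    # otherwise give infinite z-scores (Hautus, 1995).
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Made-up counts: detection is more sensitive with the audiovisual cue.
print(d_prime(hits=70, misses=30, false_alarms=20, correct_rejections=80))  # visual only
print(d_prime(hits=85, misses=15, false_alarms=20, correct_rejections=80))  # audiovisual
```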

3.
Multisensory integration of information from different sensory modalities is an essential component of perception. Neurophysiological studies have revealed that audiovisual interactions occur early in time, even within sensory cortical areas believed to be modality-specific. Here we investigated the effect of auditory stimuli on the visual perception of phosphenes induced by transcranial magnetic stimulation (TMS) delivered to the occipital visual cortex. TMS applied at subthreshold intensity led to the perception of phosphenes when coupled with an auditory stimulus presented in close spatiotemporal congruency with the expected retinotopic location of the phosphene percept. The effect was maximal when the auditory stimulus preceded the occipital TMS pulse by 40 ms. Follow-up experiments confirmed a high degree of temporal and spatial specificity of this facilitatory effect. Furthermore, audiovisual facilitation was present only at subthreshold TMS intensity for the phosphenes, suggesting that suboptimal levels of excitability within unisensory cortices may be better suited for enhanced crossmodal interactions. Overall, our findings reveal early auditory–visual interactions due to the enhancement of visual cortical excitability by auditory stimuli. These interactions may reflect underlying anatomical connectivity between unisensory cortices.

4.
Semantic congruency and the Colavita visual dominance effect
Participants presented with auditory, visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than to the visual component, a phenomenon known as the Colavita visual dominance effect. Given that spatial and temporal factors have recently been shown to modulate the Colavita effect, the aim of the present study was to investigate whether semantic congruency also modulates the effect. In the three experiments reported here, participants were presented with a version of the Colavita task in which the stimulus congruency between the auditory and visual components of the bimodal targets was manipulated. That is, the auditory and visual stimuli could refer to the same or different objects (Experiments 1 and 2) or audiovisual speech events (Experiment 3). Surprisingly, semantic/stimulus congruency had no effect on the magnitude of the Colavita effect in any of the experiments, although it exerted a significant effect on certain other aspects of participants’ performance. This finding contrasts with the results of other recent studies showing that semantic/stimulus congruency can affect certain multisensory interactions.
Corresponding author: Camille Koppen

5.
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal-processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared among each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when the auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies, an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
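The abstract does not spell out the extraction technique. One common approach for isolating unisensory contributions from multisensory ERPs is the additive model, sketched below with purely synthetic arrays standing in for grand-average waveforms; the array shapes and the 0.1 interaction scale are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 500   # illustrative epoch: 32 channels, 500 samples

# Hypothetical grand-average waveforms on a common time base.
erp_a = rng.normal(size=(n_channels, n_samples))    # unisensory auditory ERP
erp_v = rng.normal(size=(n_channels, n_samples))    # unisensory visual ERP
erp_av = erp_a + erp_v + 0.1 * rng.normal(size=(n_channels, n_samples))

# Under the additive model, ERP(AV) = ERP(A) + ERP(V) + interaction, so
# subtracting the unisensory auditory waveform from the multisensory one
# leaves an estimate of the visual contribution (plus any interaction).
erp_v_extracted = erp_av - erp_a
```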

6.
In two experiments, we examined the extent to which audiovisual temporal order judgments (TOJs) were affected by spatial factors and by the dimension along which TOJs were made. Pairs of auditory and visual stimuli were presented from either the left and/or right of fixation at varying stimulus onset asynchronies (SOAs), and participants made unspeeded TOJs regarding either "Which modality was presented first?" (Experiment 1) or "Which side was presented first?" (Experiment 2). Modality TOJs were more accurate (i.e. just-noticeable differences, JNDs, were smaller) when the auditory and visual stimuli were presented from different spatial positions rather than from the same position, highlighting an important potential confound inherent in previous research. By contrast, spatial TOJs were unaffected by whether or not the two stimuli were presented in different modalities. A between-experiments comparison revealed more accurate performance (i.e. smaller JNDs) when people reported which modality came first than when they reported which side came first for identical bimodal stimulus pairs. These results demonstrate that multisensory TOJs are critically dependent both on the relative spatial position from which stimuli are presented and on the particular dimension being judged.
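As background on the JND and the related point of subjective simultaneity (PSS): in a TOJ task using the method of constant stimuli, a psychometric function is fitted to the response proportions across SOAs, and both measures are read off the fit. Below is a generic Python recipe, not the paper's own analysis, using made-up data and a cumulative-Gaussian fit.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOA in ms (negative = auditory first) and the
# proportion of "visual first" responses at each SOA.
soa = np.array([-90, -60, -30, 0, 30, 60, 90], dtype=float)
p_visual_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_visual_first, p0=(0.0, 50.0))

jnd = sigma * norm.ppf(0.75)   # SOA shift from 50% to 75% "visual first"
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```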

7.
Objective: Brain-computer interface (BCI) technology can provide a new means of communication for people with physical disabilities and has promising applications in medical rehabilitation. Current BCI systems based on a single visual stimulus are difficult to apply in real-world settings where multiple sensory inputs arrive at once, so a better understanding of how auditory stimuli affect visually evoked potentials is needed to inform BCI research under combined audiovisual stimulation. Methods: With flash stimulation at 12 and 42 Hz, auditory stimuli at 12 and 42 Hz, respectively, were added, and we examined how the added auditory stimulation affected EEG power at five scalp locations (frontal, occipital, central, parietal, and temporal). Results: Under simultaneous audiovisual pulse stimulation, EEG power was largest over the occipital region, and power at the remaining locations decreased as the distance from the measurement point to the occipital region increased. Compared with the EEG power at a given location under visual stimulation alone, whether the added auditory stimulus enhanced or suppressed the power at that location depended mainly on its spatial position. Conclusion: These results provide meaningful experimental evidence for the integration of auditory and visual stimuli in BCI and for research on multimodal brain-computer interfaces.
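As a rough illustration of the kind of measure reported here, steady-state EEG power at the stimulation frequency can be estimated with Welch's method. The snippet below uses a synthetic 12 Hz signal and an assumed 256 Hz sampling rate rather than the study's actual recordings.

```python
import numpy as np
from scipy.signal import welch

fs = 256                              # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)          # 10 s of data

# Hypothetical single-channel EEG: a 12 Hz steady-state response in noise.
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + np.random.default_rng(1).normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 0.5 Hz frequency resolution

def band_power(target_hz, half_width=0.5):
    """Sum PSD values in a narrow band around the stimulation frequency."""
    band = (freqs >= target_hz - half_width) & (freqs <= target_hz + half_width)
    return psd[band].sum()

# Compare power at the two stimulation frequencies used in the study.
print("12 Hz power:", band_power(12.0))
print("42 Hz power:", band_power(42.0))
```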

8.
Many studies now suggest that optimal multisensory integration sometimes occurs under conditions where auditory and visual stimuli are presented asynchronously (i.e. at asynchronies of 100 ms or more). Such observations lead to the suggestion that participants’ speeded orienting responses might be enhanced following the presentation of asynchronous (as compared to synchronous) peripheral audiovisual spatial cues. Here, we report a series of three experiments designed to investigate this issue. Upon establishing the effectiveness of bimodal cuing over the best of its unimodal components (Experiment 1), participants had to make speeded head-turning or steering (wheel-turning) responses toward the cued direction (Experiment 2), or an incompatible response away from the cue (Experiment 3), in response to random peripheral audiovisual stimuli presented at stimulus onset asynchronies ranging from −100 to 100 ms. Race model inequality analysis of the results (Experiment 1) revealed different mechanisms underlying the observed multisensory facilitation of participants’ head-turning versus steering responses. In Experiments 2 and 3, the synchronous presentation of the component auditory and visual cues gave rise to the largest facilitation of participants’ response latencies. Intriguingly, when the participants had to subjectively judge the simultaneity of the audiovisual stimuli, the point of subjective simultaneity occurred when the auditory stimulus lagged behind the visual stimulus by 22 ms. Taken together, these results suggest that the maximally beneficial behavioural (head and manual) orienting responses resulting from peripherally presented audiovisual stimuli occur when the component signals are presented in synchrony. These findings suggest that while the brain uses precise temporal synchrony to control its orienting responses, the system that the human brain uses to consciously judge synchrony appears to be less finely tuned.
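Race-model-inequality analysis (Miller, 1982) asks whether bimodal reaction times are faster than any parallel, independent-channels account allows: violations occur wherever F_AV(t) > F_A(t) + F_V(t). A minimal sketch of the test on simulated reaction times (not the study's data):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

# Hypothetical RT samples (ms) for unimodal and bimodal targets.
rng = np.random.default_rng(2)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(300, 40, 200)
rt_av = rng.normal(255, 35, 200)   # faster than either unimodal condition

t = np.linspace(150, 450, 61)
race_bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # Miller's bound

# Violations (F_AV(t) > F_A(t) + F_V(t)) indicate integration beyond
# what a race between independent channels can produce.
violation = ecdf(rt_av, t) > race_bound
print("Race model violated at t =", t[violation])
```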

9.
Cross-modal binding in auditory-visual speech perception was investigated using the McGurk effect, a phenomenon in which hearing is altered by incongruent visual mouth movements. We used functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). In each experiment, the subjects were asked to identify spoken syllables ('ba', 'da', 'ga') presented auditorily, visually, or audiovisually (incongruent stimuli). For the auditory component of the stimuli, there were two conditions of intelligibility (High versus Low) as determined by the signal-to-noise (SN) ratio. The control task was visual talker identification of still faces. In the Low intelligibility condition, in which the auditory component of the speech was harder to hear, the visual influence was much stronger. Brain imaging data showed bilateral activations specific to the unimodal auditory stimuli (in the temporal cortex) and visual stimuli (in MT/V5). For the bimodal audiovisual stimuli, activation in the left temporal cortex extended more posteriorly toward the visual-specific area in the Low intelligibility condition. A direct comparison between the Low and High audiovisual conditions showed increased activations in the posterior part of the left superior temporal sulcus (STS), indicating its relationship with the stronger visual influence. This region is therefore likely to be involved in the cross-modal binding of auditory-visual speech.

10.
Assessing the intentions, direction, and velocity of others is necessary for most daily tasks, and such information is often made available by both visual and auditory motion cues. It is therefore not surprising that we are highly adept at perceiving human motion. Here, we explore the multisensory integration of cues to the walking speed of biological motion. After testing for audiovisual asynchronies (visual signals led auditory ones by 30 ms, within a simultaneity temporal window of 76.4 ms), in the main experiment visual, auditory, and bimodal stimuli were compared to a standard audiovisual walker in a velocity discrimination task. The variance reduction in the results conformed to optimal integration of congruent bimodal stimuli across all subjects. Interestingly, the perceptual judgements were still close to optimal for stimuli at the smallest level of incongruence. A comparison of slopes allows us to estimate an integration window of about 60 ms, which is smaller than that reported for audiovisual speech.
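"Optimal integration" here refers to the maximum-likelihood cue-combination rule, under which the bimodal variance is predicted from the unimodal variances and each cue is weighted by its reliability. A small sketch of the standard prediction, with illustrative threshold values:

```python
def mle_prediction(sigma_a, sigma_v):
    """Maximum-likelihood ('optimal') cue combination.

    Returns the predicted bimodal discrimination threshold and the
    weight given to the auditory cue.
    """
    var_a, var_v = sigma_a ** 2, sigma_v ** 2
    var_av = var_a * var_v / (var_a + var_v)   # always <= min(var_a, var_v)
    w_a = var_v / (var_a + var_v)              # inverse-variance weighting
    return var_av ** 0.5, w_a

# Illustrative unimodal thresholds for walking-speed discrimination.
sigma_av, w_a = mle_prediction(sigma_a=0.9, sigma_v=0.6)
print(f"predicted bimodal sigma = {sigma_av:.2f}, auditory weight = {w_a:.2f}")
```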

11.
Multisensory integration affects ERP components elicited by exogenous cues
Previous studies have shown that the amplitude of event-related brain potentials (ERPs) elicited by a combined audiovisual stimulus is larger than the sum of the ERPs elicited by a single auditory and a single visual stimulus. This enlargement is thought to reflect multisensory integration. Based on these data, it may be hypothesized that the speeding up of responses, due to exogenous orienting effects induced by bimodal cues, exceeds that induced by single unimodal cues. Behavioral data, however, typically reveal no increased orienting effect following bimodal as compared to unimodal cues, which could be due to a failure of multisensory integration of the cues. To examine this possibility, we computed ERPs elicited by both bimodal (audiovisual) and unimodal (either auditory or visual) cues, and determined their exogenous orienting effects on responses to a to-be-discriminated visual target. Interestingly, the posterior P1 component elicited by bimodal cues was larger than the sum of the P1 components elicited by a single auditory and visual cue (i.e., a superadditive effect), but no enhanced orienting effect was found on response speed. The latter result suggests that the multisensory integration elicited by our bimodal cues plays no special role in spatial orienting, at least in the present setting.
Corresponding author: Valerio Santangelo
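The superadditivity criterion used here compares the bimodal ERP with the algebraic sum of the unimodal ERPs. A minimal sketch of that comparison on invented P1 amplitudes (not the study's data):

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical posterior P1 peak amplitudes (microvolts), one value per
# participant and condition.
p1_av = np.array([4.1, 3.8, 4.6, 4.0, 4.4])   # bimodal cue
p1_a = np.array([1.2, 1.0, 1.5, 1.1, 1.3])    # auditory cue
p1_v = np.array([2.1, 2.0, 2.3, 2.2, 2.4])    # visual cue

# Superadditivity: the bimodal response exceeds the sum of the unimodal ones.
effect = p1_av - (p1_a + p1_v)
print(f"mean AV - (A + V) difference: {effect.mean():.2f} uV")
print(ttest_1samp(effect, 0.0))   # is the difference reliably above zero?
```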

12.
Research has shown that people fail to report the presence of the auditory component of suprathreshold audiovisual targets significantly more often than they fail to detect the visual component in speeded response tasks. Here, we investigated whether this phenomenon, known as the “Colavita effect”, also affects people’s perception of visuotactile stimuli. In Experiments 1 and 2, participants made speeded detection/discrimination responses to unimodal visual, unimodal tactile, and bimodal (visual and tactile) stimuli. A significant Colavita visual dominance effect was observed (i.e., participants failed to respond to touch far more often than they failed to respond to vision on the bimodal trials). This dominance of vision over touch was significantly larger when the stimuli were presented from the same position than when they were presented from different positions (Experiment 3), and it still occurred even when the subjective intensities of the visual and tactile stimuli had been matched (Experiment 4), thus ruling out a simple intensity-based account of the results. These results suggest that the Colavita visual dominance effect (over touch) may result from a competition between the neural representations of the two stimuli for access to consciousness and/or the recruitment of attentional resources.
Corresponding author: Alberto Gallace

13.
Here, we examined sensitivity to visual, auditory, and audiovisual temporal order in five age groups (20 to 70 years old). We also measured multisensory integration (MSI) using a phenomenon known as “temporal ventriloquism,” in which click sounds improve sensitivity to visual temporal order. Results showed that sensitivity to visual, auditory, and audiovisual temporal order declined from 50 years of age onward. However, there was no corresponding decline in MSI, as the click sounds actually compensated for the loss of sensitivity to visual temporal order in the elderly. Sensitivity to audiovisual temporal order did not correlate with MSI, suggesting that well-preserved explicit judgments about cross-modal temporal order are not required for MSI to occur.

14.
We investigated whether the presence of unimodal or bimodal (synchronous) distractors would affect temporal order judgments (TOJs) for pairs of asynchronous audiovisual target stimuli. Participants made unspeeded TOJs regarding which of a pair of auditory and visual stimuli, presented at different stimulus onset asynchronies using the method of constant stimuli, occurred first. These asynchronous target stimuli were presented in a fixed position amongst a stream of three (auditory, visual, or audiovisual) distractors in each block of trials. The largest just noticeable differences (JNDs) were reported when the target stimuli were presented in the middle (position 3) of the distractor stream. Importantly, audiovisual distractors were shown to interfere with TOJ performance far more than unimodal (auditory or visual) distractors. The point of subjective simultaneity (PSS) was also influenced by the modality of the distractors, and by the position of the target within the distractor stream. These results confirm the existence of a specifically bimodal crowding effect, with audiovisual TOJs being impaired far more by the presence of audiovisual distractors than by unimodal auditory or visual distractors.

15.
Saccades to combined audiovisual stimuli often have reduced saccadic reaction times (SRTs) compared with those to unimodal stimuli. Neurons in the intermediate/deep layers of the superior colliculus (dSC) are capable of integrating converging sensory inputs to influence the time to saccade initiation. To identify how neural processing in the dSC contributes to reducing SRTs to audiovisual stimuli, we recorded activity from dSC neurons while monkeys generated saccades to visual or audiovisual stimuli. To evoke crossmodal interactions of varying strength, we used auditory and visual stimuli of different intensities, presented either in spatial alignment or to opposite hemifields. Spatially aligned audiovisual stimuli evoked the shortest SRTs. In the case of low-intensity stimuli, the response to the auditory component of the aligned audiovisual target increased the activity preceding the response to the visual component, accelerating the onset of the visual response and facilitating the generation of shorter-latency saccades. In the case of high-intensity stimuli, the auditory and visual responses occurred much closer together in time, and so there was little opportunity for the auditory stimulus to influence previsual activity. Instead, the reduction in SRT for high-intensity, aligned audiovisual stimuli was correlated with increased premotor activity (activity after the visual burst but preceding the saccade-aligned burst). These data provide a link between changes in neural activity related to stimulus modality and changes in behavior. They further demonstrate how crossmodal interactions are not limited to the initial sensory activity but can also influence premotor activity in the SC.

16.
Previous research has shown that people with one eye have enhanced spatial vision, implying intra-modal compensation for their loss of binocularity. The current experiments investigate whether monocular blindness from unilateral eye enucleation may lead to cross-modal sensory compensation for the loss of one eye. We measured speeded detection and discrimination of audiovisual targets, presented as a stream of paired objects and familiar sounds, in a group of individuals with monocular enucleation compared to controls viewing binocularly or monocularly. In Experiment 1, participants detected the presence of auditory, visual, or audiovisual targets. All participant groups were equally able to detect the targets. In Experiment 2, participants discriminated between the visual, auditory, or bimodal (audiovisual) targets. Both control groups showed the Colavita effect, that is, preferential processing of visual over auditory information for the bimodal stimuli. The monocular enucleation group, however, showed no Colavita effect and, further, demonstrated equal processing of visual and auditory stimuli. This finding suggests a lack of visual dominance and equivalent auditory and visual processing in people with one eye. This may be an adaptive form of sensory compensation for the loss of one eye and could result from the recruitment of deafferented visual cortical areas by inputs from other senses.

17.
In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as the spatial disparity between the stimuli increases. Here, a visual target LED (500 ms duration) was presented above or below the fixation point, and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. To further probe the audiovisual integration mechanism, an auditory masker (200 ms duration) was presented before, simultaneously with, or after the accessory stimulus, in addition to the auditory non-target. In all interstimulus interval (ISI) conditions, SRT enhancement decreased in both the coincident and the disparate configurations, but this decrement was fairly stable across the ISI values. If multisensory integration relied solely on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward-masking condition. It is therefore conceivable that the relatively high-energy masker causes a broad excitatory response in SC neurons. During this state, the spatial audiovisual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be reached in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audiovisual information needs more time to reach this threshold, prolonging the SRT to an audiovisual object.

18.
To investigate the neural substrates of the perception of audiovisual speech, we conducted a functional magnetic resonance imaging study with 28 normal volunteers. We hypothesized that the constraint provided by visually presented articulatory speech (mouth movements) would lessen the workload for speech identification if the two were concordant, but would increase the workload if the two were discordant. In auditory attention sessions, subjects were required to identify vowels based on auditory speech. Auditory vowel stimuli were presented with concordant or discordant visible articulation movements, unrelated lip movements, and without visual input. In visual attention sessions, subjects were required to identify vowels based on the visually presented vowel articulation movements. The movements were presented with concordant or discordant uttered vowels and noise, and without sound. Irrespective of the attended modality, concordant conditions significantly shortened the reaction time, whereas discordant conditions lengthened the reaction time. Within the neural substrates that were commonly activated by auditory and visual tasks, the mid superior temporal sulcus showed greater activity for discordant stimuli than concordant stimuli. These findings suggest that the mid superior temporal sulcus plays an important role in the auditory–visual integration process underlying vowel identification.

19.
We recorded single-neuron responses in the cat's lateral geniculate nucleus (LGN) and visual cortex to compound stimuli composed of two sinusoidal gratings in a 2:1 frequency ratio. To probe visual receptive-field symmetry, we varied the relative spatial phase of the two components and measured the effect on neuronal responses. We expected that on-center LGN neurons would respond best to gratings combined in positive cosine (bright-bar) phase, while off-center LGN neurons would respond best to gratings combined in negative cosine (dark-bar) phase. When drifting stimuli were used, cells' phase preferences were roughly 90 deg away from the expected values; when stationary, contrast-modulated stimuli were used, phase preferences were as originally predicted. Computer simulations showed that this discrepancy could be explained by taking into account the cells' temporal properties. Thus, tests using drifting stimuli confound the spatial structure of visual neural receptive fields with their temporal response characteristics. A small sample of data from cortical neurons reveals the same confound.
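For concreteness, such an f + 2f compound stimulus can be written as cos(2πfx) + cos(4πfx + φ), where the relative phase φ of the harmonic is the manipulated variable. A small sketch constructing two phase conditions; the component amplitudes, spatial frequency, and units are illustrative, not the paper's parameters.

```python
import numpy as np

def compound_grating(x, f=1.0, rel_phase=0.0):
    """Sum of two cosine gratings in a 2:1 spatial-frequency ratio.

    rel_phase shifts the second harmonic relative to the fundamental,
    the manipulation used to probe receptive-field symmetry.
    """
    return np.cos(2 * np.pi * f * x) + np.cos(2 * np.pi * 2 * f * x + rel_phase)

x = np.linspace(0.0, 2.0, 512)                     # spatial position (cycles of f)
bright_bar = compound_grating(x, rel_phase=0.0)    # positive cosine phase
shifted = compound_grating(x, rel_phase=np.pi)     # harmonic shifted half a cycle
```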

20.
Recognizing a natural object requires one to pool information from various sensory modalities and to ignore information from competing objects. That the same semantic knowledge can be accessed through different modalities makes it possible to explore the retrieval of supramodal object concepts. Here, object-recognition processes were investigated by manipulating the relationships between sensory modalities, specifically the semantic content and the spatial alignment of auditory and visual information. Experiments were run in a realistic virtual environment. Participants were asked to react as fast as possible to a target object presented in the visual and/or the auditory modality and to inhibit a distractor object (go/no-go task). Spatial alignment had no effect on object-recognition time; the only spatial effect observed was a stimulus–response compatibility between the auditory stimulus and the hand position. Reaction times were significantly shorter for semantically congruent bimodal stimuli than would be predicted by independent processing of information about the auditory and visual targets. Interestingly, this bimodal facilitation effect was twice as large as that found in previous studies that also used information-rich stimuli. An interference effect (i.e. longer reaction times to semantically incongruent stimuli than to the corresponding unimodal stimulus) was observed only when the distractor was auditory. When the distractor was visual, the semantic incongruence did not interfere with object recognition. Our results show that immersive displays with large visual stimuli may produce large multimodal integration effects, and they reveal a possible asymmetry in the attentional filtering of irrelevant auditory and visual information.
Corresponding author: Clara Suied
