Similar articles
20 similar articles found (search time: 421 ms)
1.
The Colavita visual dominance effect refers to the phenomenon whereby participants presented with unimodal auditory, unimodal visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than they fail to respond to the visual component. The Colavita effect was demonstrated in this study when participants were presented with unimodal auditory, unimodal visual, or bimodal stimuli (in the ratios 40:40:20 in Experiment 1, or 33:33:33 in Experiment 2), to which they had to respond by pressing an auditory response key, a visual response key, or both response keys. The Colavita effect was also demonstrated when participants had to respond to the bimodal targets using a dedicated third (bimodal) response key (Experiment 3). These results therefore suggest that stimulus probability and the response demands of the task do not contribute significantly to the Colavita effect. In Experiment 4, we investigated what role exogenous attention toward a sensory modality plays in the Colavita effect. A significantly larger Colavita effect was observed when a visual cue preceded the bimodal target than when an auditory cue preceded it. This result suggests that the Colavita visual dominance effect can be partially explained in terms of the greater exogenous attention-capturing qualities of visual versus auditory stimuli.
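The dependent measure in these tasks can be sketched in a few lines: the Colavita effect is the excess of auditory misses over visual misses on bimodal trials. A minimal illustration with hypothetical response data (not taken from the study above):

```python
# Quantifying the Colavita effect: compare miss rates for the auditory vs
# the visual component on bimodal trials. Hypothetical data for illustration.
bimodal_trials = [
    {"auditory_key": True,  "visual_key": True},   # correct: both keys pressed
    {"auditory_key": False, "visual_key": True},   # auditory miss (vision "wins")
    {"auditory_key": False, "visual_key": True},   # auditory miss
    {"auditory_key": True,  "visual_key": False},  # visual miss
    {"auditory_key": True,  "visual_key": True},   # correct
]

n = len(bimodal_trials)
auditory_miss_rate = sum(not t["auditory_key"] for t in bimodal_trials) / n
visual_miss_rate = sum(not t["visual_key"] for t in bimodal_trials) / n

# A positive difference indicates visual dominance (more auditory misses).
colavita_effect = auditory_miss_rate - visual_miss_rate
print(f"auditory misses: {auditory_miss_rate:.2f}")
print(f"visual misses:   {visual_miss_rate:.2f}")
print(f"Colavita effect: {colavita_effect:+.2f}")
```

In the studies reviewed here, the statistical test is then whether the auditory miss rate significantly exceeds the visual miss rate across participants.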

2.
The Colavita visual dominance effect refers to the phenomenon whereby participants presented with auditory, visual, or audiovisual stimuli in a speeded response task sometimes fail to respond to the auditory component of the bimodal targets. We conducted an experiment on the Colavita effect in which the auditory and visual components of the bimodal targets were presented from either the same or different positions (sides) at one of two eccentricities (13° or 26°). Participants were presented with auditory, visual, and bimodal stimuli to which they had to respond by pressing an auditory response key, a visual response key, or both response keys, respectively. On bimodal trials, participants failed to respond to the auditory stimulus significantly more often than they failed to respond to the visual stimulus, resulting in a significant Colavita visual dominance effect. The Colavita effect was significantly larger when the stimuli were presented from the same position than when they were presented from different positions. These results provide the first empirical evidence that the Colavita effect is modulated by the spatial coincidence of the auditory and visual stimuli.

3.
Participants presented with unimodal auditory (A), unimodal visual (V), or bimodal audiovisual stimuli (AV) in a task in which they have to identify the modality of the targets as rapidly as possible fail to respond to the auditory component of bimodal targets significantly more often than they fail to respond to the visual component. In the majority of published studies on this phenomenon, known as the Colavita effect, the auditory, visual, and bimodal stimuli have been presented in the ratio 40A:40V:20AV. In the present study, we investigated whether the relatively low frequency with which the bimodal targets in previous studies have been presented may have contributed to participants’ difficulty in responding to such targets correctly. We manipulated the bimodal target probability by presenting the stimuli in the ratio 20A:20V:60AV in Experiment 1, and in the ratios 5A:5V:90AV, 25A:25V:50AV, and 45A:45V:10AV in Experiment 2. A significant Colavita visual dominance effect was observed when the bimodal targets were presented on 60% of the trials or less. We suggest that increasing the frequency of bimodal targets may have provided an exogenous cue to performance that reduced the necessity for endogenous attention when selecting the appropriate response to make to bimodal targets.
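Such ratio manipulations are straightforward to operationalize when building the trial list. A hypothetical sketch (the function name and parameters are our own, not from the paper):

```python
import random

def make_trial_sequence(n_trials, proportions, seed=0):
    """Build a shuffled trial list whose condition counts match `proportions`.

    `proportions` maps condition labels (e.g. 'A', 'V', 'AV') to fractions
    summing to 1. Illustrative sketch only; not the original experiment code.
    """
    trials = []
    for label, p in proportions.items():
        trials.extend([label] * round(n_trials * p))
    random.Random(seed).shuffle(trials)  # fixed seed for a reproducible order
    return trials

# The classic 40A:40V:20AV design over 200 trials:
seq = make_trial_sequence(200, {"A": 0.40, "V": 0.40, "AV": 0.20})
print(seq.count("A"), seq.count("V"), seq.count("AV"))  # 80 80 40
```

Changing the proportions dictionary (e.g. to `{"A": 0.05, "V": 0.05, "AV": 0.90}`) reproduces the probability manipulation described above.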

4.
People often fail to respond to an auditory target if they have to respond to a visual target presented at the same time, a phenomenon known as the Colavita visual dominance effect. To date, the Colavita effect has only ever been demonstrated in detection tasks in which participants respond to pre-defined visual, auditory, or bimodal audiovisual target stimuli. Here, we tested the Colavita effect when the target was defined by a rule, namely the repetition of any event (a picture, a sound, or both) in simultaneously presented streams of pictures and sounds. Given previous findings that people are better at detecting auditory repetitions than visual repetitions, we expected that the Colavita visual dominance effect might disappear (or even reverse). Contrary to this prediction, however, visual dominance (i.e., the typical Colavita effect) was observed, with participants still neglecting significantly more auditory events than visual events in response to bimodal targets. The visual dominance for bimodal repetitions was observed despite the fact that participants missed significantly more unimodal visual repetitions than unimodal auditory repetitions. These results therefore extend the Colavita visual dominance effect to a domain where auditory dominance has traditionally been observed. In addition, our results reveal that the Colavita effect occurs at a more abstract, rule-based level of representation than tested in previous research.

5.
Semantic congruency and the Colavita visual dominance effect
Participants presented with auditory, visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than to the visual component, a phenomenon known as the Colavita visual dominance effect. Given that spatial and temporal factors have recently been shown to modulate the Colavita effect, the aim of the present study was to investigate whether semantic congruency also modulates the effect. In the three experiments reported here, participants were presented with a version of the Colavita task in which the stimulus congruency between the auditory and visual components of the bimodal targets was manipulated. That is, the auditory and visual stimuli could refer to the same or different object (in Experiments 1 and 2) or audiovisual speech event (Experiment 3). Surprisingly, semantic/stimulus congruency had no effect on the magnitude of the Colavita effect in any of the experiments, although it exerted a significant effect on certain other aspects of participants’ performance. This finding contrasts with the results of other recent studies showing that semantic/stimulus congruency can affect certain multisensory interactions.
Camille Koppen

6.
Research has shown that people fail to report the presence of the auditory component of suprathreshold audiovisual targets significantly more often than they fail to detect the visual component in speeded response tasks. Here, we investigated whether this phenomenon, known as the “Colavita effect”, also affects people’s perception of visuotactile stimuli. In Experiments 1 and 2, participants made speeded detection/discrimination responses to unimodal visual, unimodal tactile, and bimodal (visual and tactile) stimuli. A significant Colavita visual dominance effect was observed (i.e., participants failed to respond to touch far more often than they failed to respond to vision on the bimodal trials). This dominance of vision over touch was significantly larger when the stimuli were presented from the same position than when they were presented from different positions (Experiment 3), and still occurred even when the subjective intensities of the visual and tactile stimuli had been matched (Experiment 4), thus ruling out a simple intensity-based account of the results. These results suggest that the Colavita visual dominance effect (over touch) may result from a competition between the neural representations of the two stimuli for access to consciousness and/or the recruitment of attentional resources.
Alberto Gallace

7.
Many researchers have taken the Colavita effect to represent a paradigm case of visual dominance. Broadly defined, the effect occurs when people fail to respond to an auditory target if they also have to respond to a visual target presented at the same time. Previous studies have revealed the remarkable resilience of this effect to various manipulations. In fact, a reversal of the Colavita visual dominance effect (i.e., auditory dominance) has never been reported. Here, we present a series of experiments designed to investigate whether it is possible to reverse the Colavita effect when the target stimuli consist of repetitions embedded in simultaneously presented auditory and visual streams of stimuli. In line with previous findings, the Colavita effect was still observed for an immediate repetition task, but when an n-1 repetition detection task was used, a reversal of visual dominance was demonstrated. These results suggest that masking from intervening stimuli between n-1 repetition targets was responsible for the elimination and reversal of the Colavita visual dominance effect. They further suggest that varying the presence of a mask (pattern, conceptual, or absent) in the repetition detection task gives rise to different patterns of sensory dominance (i.e., visual dominance, an elimination of the Colavita effect, or even auditory dominance).

8.
Sensory dominance in combinations of audio, visual and haptic stimuli
Participants presented with auditory, visual, or bi-sensory audio–visual stimuli in a speeded discrimination task fail to respond to the auditory component of the bi-sensory trials significantly more often than they fail to respond to the visual component—a ‘visual dominance’ effect. The current study investigated further the sensory dominance phenomenon in all combinations of auditory, visual and haptic stimuli. We found a similar visual dominance effect in bi-sensory trials of combined haptic–visual stimuli, but no bias towards either sensory modality in bi-sensory trials of haptic–auditory stimuli. When presented with tri-sensory trials of combined auditory–visual–haptic stimuli, participants made more errors of responding only to two corresponding sensory signals than errors of responding only to a single sensory modality; however, there were no biases towards either sensory modality (or sensory pairs) in the distribution of either type of error (i.e. responding only to a single stimulus or to pairs of stimuli). These results suggest that while vision can dominate both the auditory and the haptic sensory modalities, it is limited to bi-sensory combinations in which the visual signal is combined with another single stimulus. However, in a tri-sensory combination, when a visual signal is presented simultaneously with both the auditory and the haptic signals, the probability of missing two signals is much smaller than that of missing only one signal, and therefore the visual dominance disappears.

9.
The Colavita effect occurs when participants performing a speeded detection/discrimination task preferentially report the visual component of pairs of audiovisual or visuotactile stimuli. To date, however, researchers have failed to demonstrate an analogous effect for audiotactile stimuli (Hecht and Reiner in Exp Brain Res 193:307–314, 2009). Here, we investigate whether an audiotactile Colavita effect can be demonstrated by manipulating either the physical features of the auditory stimuli presented in frontal (Experiment 1) or rear space (Experiment 3), or the relative and absolute position of auditory and tactile stimuli in frontal (Experiment 2) or rear space (Experiment 3). The participants showed no evidence of responding preferentially to one of the sensory components of the bimodal stimuli when they were presented from a single location in frontal space (Experiment 1). However, a significant audiotactile Colavita effect was demonstrated in Experiments 2 and 3, with participants preferentially reporting the auditory (rather than tactile) stimulus on the bimodal target trials. In Experiment 3, an audiotactile Colavita effect was reported for auditory white noise bursts but not for pure tones and selectively for those stimuli presented from the same (rather than from the opposite) side. Taken together, these results therefore suggest that when a tactile and an auditory stimulus are presented from a single frontal location, participants do not preferentially report one of the two sensory components (Experiment 1). In contrast, when the stimuli are presented from different locations, people preferentially report the auditory component, especially when they are spatially coincident (Experiments 2 and 3). 
Moreover, for stimuli presented from rear space, the Colavita effect was only observed for auditory stimuli consisting of white noise bursts (but not for pure tones), suggesting that such stimuli are more likely to be bound together with somatosensory stimuli in rear space.

10.
Previous research has shown that people with one eye have enhanced spatial vision, implying intra-modal compensation for their loss of binocularity. The current experiments investigate whether monocular blindness from unilateral eye enucleation may lead to cross-modal sensory compensation for the loss of one eye. We measured speeded detection and discrimination of audiovisual targets presented as a stream of paired objects and familiar sounds in a group of individuals with monocular enucleation compared to controls viewing binocularly or monocularly. In Experiment 1, participants detected the presence of auditory, visual or audiovisual targets. All participant groups were equally able to detect the targets. In Experiment 2, participants discriminated between the visual, auditory or bimodal (audiovisual) targets. Both control groups showed the Colavita effect, that is, preferential processing of visual over auditory information for the bimodal stimuli. The monocular enucleation group, however, showed no Colavita effect, and further, they demonstrated equal processing of visual and auditory stimuli. This finding suggests a lack of visual dominance and equivalent auditory and visual processing in people with one eye. This may be an adaptive form of sensory compensation for the loss of one eye and could result from recruitment of deafferented visual cortical areas by inputs from other senses.

11.
The simultaneous presentation of a visual and an auditory stimulus can lead to a decrease in people’s ability to perceive or respond to the auditory stimulus. In this study, we investigate the effect that threat has upon this phenomenon, known as the Colavita visual dominance effect. Participants performed two blocks of trials containing 40% visual, 40% auditory, and 20% bimodal trials. The first block of trials was identical for all participants, while in the second block, either the visual stimulus (visual threat condition), auditory stimulus (auditory threat condition), or neither stimulus (control condition) was fear-conditioned using aversive electrocutaneous stimuli. We predicted that, when compared with the control condition, this visual dominance effect would increase in the visual threat condition and decrease in the auditory threat condition. This hypothesis was partially supported by the data. In particular, the results showed that the fear-conditioning of the visual stimulus significantly increased the visual dominance effect relative to the control condition. However, the fear-conditioning of the auditory stimulus did not reduce the visual dominance effect but instead increased it slightly. These findings are discussed in terms of the role that attention and arousal play in the dominance of vision over audition.
Stefaan Van Damme

12.
We report a study designed to investigate the effectiveness of task-irrelevant unimodal and bimodal audiotactile stimuli in capturing a person’s spatial attention away from a highly perceptually demanding central rapid serial visual presentation (RSVP) task. In Experiment 1, participants made speeded elevation discrimination responses to peripheral visual targets following the presentation of auditory stimuli that were either presented alone or else were paired with centrally presented tactile stimuli. The results showed that the unimodal auditory stimuli only captured spatial attention when participants were not performing the RSVP task, while the bimodal audiotactile stimuli did not result in any performance change in any of the conditions. In Experiment 2, spatial auditory stimuli were either presented alone or else were paired with a tactile stimulus presented from the same direction. In contrast to the results of Experiment 1, the bimodal audiotactile stimuli were especially effective in capturing participants’ spatial attention from the concurrent RSVP task. These results therefore provide support for the claim that auditory and tactile stimuli should be presented from the same direction if they are to capture attention effectively. Implications for multisensory warning signal design are discussed.

13.
Adaptation to visual motion can induce marked distortions of the perceived spatial location of subsequently viewed stationary objects. These positional shifts are direction specific and exhibit tuning for the speed of the adapting stimulus. In this study, we sought to establish whether comparable motion-induced distortions of space can be induced in the auditory domain. Using individually measured head-related transfer functions (HRTFs), we created auditory stimuli that moved either leftward or rightward in the horizontal plane. Participants adapted to unidirectional auditory motion presented at a range of speeds and then judged the spatial location of a brief stationary test stimulus. All participants displayed direction-dependent and speed-tuned shifts in perceived auditory position relative to a ‘no adaptation’ baseline measure. To permit direct comparison between effects in different sensory domains, measurements of visual motion-induced distortions of perceived position were also made using stimuli equated in positional sensitivity for each participant. Both the overall magnitude of the observed positional shifts and the nature of their tuning with respect to adaptor speed were similar in each case. A third experiment was carried out in which participants adapted to visual motion prior to making auditory position judgements. As in the previous experiments, shifts in the direction opposite to that of the adapting motion were observed. These results add to a growing body of evidence suggesting that the neural mechanisms that encode visual and auditory motion are more similar than previously thought.

14.
Temporally synchronous auditory cues can facilitate participants’ performance on dynamic visual search tasks. Making auditory cues spatially informative with regard to the target location can reduce search latencies still further. In the present study, we investigated how multisensory integration, and temporal and spatial attention, might conjointly influence participants’ performance on an elevation discrimination task for a masked visual target presented in a rapidly changing sequence of masked visual distractors. Participants were presented with either spatially uninformative (centrally presented), spatially valid (on the target side), or spatially invalid tones that were synchronous with the presentation of the visual target. Participants responded significantly more accurately following the presentation of the spatially valid as compared to the uninformative or invalid auditory cues. Participants endogenously shifted their attention to the likely location of the target indicated by the valid spatial auditory cue (reflecting top-down, cognitive processing mechanisms), which facilitated their processing of the visual target over and above any bottom-up benefits associated solely with the synchronous presentation of the auditory and visual stimuli. The results of the present study therefore suggest that crossmodal attention (both spatial and temporal) and multisensory integration can work in parallel to facilitate people’s ability to respond most efficiently to multisensory information.

15.
BACKGROUND: It has recently been suggested that auditory hallucinations are the result of a criterion shift when deciding whether or not a meaningful signal has emerged. The approach proposes that a liberal criterion may result in increased false-positive identifications, without additional perceptual deficit. To test this hypothesis, we devised a speech discrimination task and used signal detection theory (SDT) to investigate the underlying cognitive mechanisms. METHOD: Schizophrenia patients with and without auditory hallucinations and a healthy control group completed a speech discrimination task. They had to decide whether a particular spoken word was identical to a previously presented speech stimulus, embedded in noise. SDT was used on the accuracy data to calculate a measure of perceptual sensitivity (Az) and a measure of response bias (beta). Thresholds for the perception of simple tones were determined. RESULTS: Compared to healthy controls, perceptual thresholds were higher and perceptual sensitivity in the speech task was lower in both patient groups. However, hallucinating patients showed increased sensitivity to speech stimuli compared to non-hallucinating patients. In addition, we found some evidence of a positive response bias in hallucinating patients, indicating a tendency to readily accept that a certain stimulus had been presented. CONCLUSIONS: Within the context of schizophrenia, patients with auditory hallucinations show enhanced sensitivity to speech stimuli, combined with a liberal criterion for deciding that a perceived event is an actual stimulus.
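The SDT analysis described here rests on two standard quantities derived from hit and false-alarm rates: a sensitivity index and a bias (criterion) measure. A minimal sketch using the common d′/c parameterization (the study itself reported the area measure Az and the bias measure beta; the counts and the correction constant below are illustrative):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from yes/no response counts.

    Applies a log-linear correction (add 0.5 to each cell) so that hit or
    false-alarm rates of exactly 0 or 1 stay finite. Illustrative sketch;
    not the analysis code from the study above.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # c < 0: liberal bias
    return d_prime, criterion

# Hypothetical liberal responder: many hits, but also many false alarms.
d, c = sdt_measures(hits=45, misses=5, false_alarms=20, correct_rejections=30)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

A negative criterion here corresponds to the "liberal criterion" interpretation of hallucinating patients: a tendency to say "yes, the stimulus was there" regardless of the evidence.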

16.
Summary: We have investigated the responses of neurones in the guinea-pig superior colliculus to combinations of visual and auditory stimuli. When these stimuli were presented separately, some of these neurones responded only to one modality, others to both, and a few reliably to neither. To bimodal stimulation, many of these neurones exhibited some form of cross-modality interaction, the degree and nature of which depended on the relative timing and location of the two stimuli. Facilitatory and inhibitory interactions were observed and, occasionally, both effects were found in the same neurone at different inter-stimulus intervals. Neurones whose responses to visual stimuli were enhanced by an auditory stimulus were found in the superficial layers. Although visual-enhanced and visual-depressed auditory neurones were found throughout the deep layers, the majority of them were recorded in the stratum griseum profundum. Neurones that responded to both visual and auditory stimuli presented separately and gave enhanced or depressed responses to bimodal stimulation were found throughout the deep layers, but were concentrated in the stratum griseum intermediale and extended into the stratum opticum.

17.
Neurophysiological studies have recently documented multisensory properties in ‘unimodal’ visual neurons of the cat posterolateral lateral suprasylvian (PLLS) cortex, a retinotopically organized area involved in visual motion processing. In this extrastriate visual area, a region has been identified where both visual and auditory stimuli were independently effective in activating neurons (bimodal zone), as well as a second region where visually-evoked activity was significantly facilitated by concurrent auditory stimulation but was unaffected by auditory stimulation alone (subthreshold multisensory region). Given their different distributions, the possible corticocortical connectivity underlying these distinct forms of crossmodal convergence was examined using biotinylated dextran amine (BDA) tracer methods in 21 adult cats. The auditory cortical areas examined included the anterior auditory field (AAF), primary auditory cortex (AI), dorsal zone (DZ), secondary auditory cortex (AII), field of the rostral suprasylvian sulcus (FRS), field of the anterior ectosylvian sulcus (FAES) and the posterior auditory field (PAF). Of these regions, the DZ, AI, AII, and FAES were found to project to both the bimodal zone and the subthreshold region of the PLLS. This convergence of crossmodal inputs to the PLLS suggests not only that complex auditory information has access to this region but also that these connections provide the substrate for the different forms (bimodal versus subthreshold) of multisensory processing that may facilitate its functional role in visual motion processing.

18.
The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the ventriloquism effect. These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification and measures of a participant’s uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report on whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly less when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.
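The trial-by-trial comparison described above amounts to splitting the localization responses by the unity judgement and comparing bias and variability within each subset. A toy sketch with hypothetical data (visual stimulus at +10°, auditory stimulus at 0°; values are ours, not the study's):

```python
from statistics import mean, pstdev

# Hypothetical trials: `judged_deg` is the reported sound location (deg),
# `unified` is the participant's unity judgement on that trial.
trials = [
    {"unified": True,  "judged_deg": 9.0},   # unified: judged near the light
    {"unified": True,  "judged_deg": 10.5},
    {"unified": True,  "judged_deg": 9.5},
    {"unified": False, "judged_deg": 2.0},   # non-unified: nearer the sound
    {"unified": False, "judged_deg": 6.0},
]

summary = {}
for label in (True, False):
    judged = [t["judged_deg"] for t in trials if t["unified"] == label]
    summary[label] = (mean(judged), pstdev(judged))
    print(f"unified={label}: mean = {summary[label][0]:.1f} deg, "
          f"sd = {summary[label][1]:.2f} deg")
```

The pattern the study reports would show up here as a higher mean (pulled toward the light) and a lower standard deviation on the unified trials.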

19.
Neurophysiological studies have shown in animals that a sudden sound enhances perceptual processing of subsequent visual stimuli. In the present study, we explored the possibility that such enhancement also exists in humans and can be explained through crossmodal integration effects, whereby the interaction occurs at the level of bimodal neurons. Subjects were required to detect visual stimuli in a unimodal visual condition or in crossmodal audio-visual conditions. The spatial and the temporal proximity of multisensory stimuli were systematically varied. An enhancement of perceptual sensitivity (d′) for luminance detection was found when the audiovisual stimuli followed a rather clear spatial and temporal rule governing multisensory integration at the neuronal level.

20.
Historically, the study of multisensory processing has examined the function of the definitive neuron type, the bimodal neuron. These neurons are excited by inputs from more than one sensory modality, and when multisensory stimuli are present, they can integrate their responses in a predictable manner. However, recent studies have revealed that multisensory processing in the cortex is not restricted to bimodal neurons. The present investigation sought to examine the potential for multisensory processing in nonbimodal (unimodal) neurons in the retinotopically organized posterolateral lateral suprasylvian (PLLS) area of the cat. Standard extracellular recordings were used to measure responses of all neurons encountered to both separate- and combined-modality stimulation. Whereas bimodal neurons behaved as predicted, the surprising result was that 16% of unimodal visual neurons encountered were significantly facilitated by auditory stimuli. Because these unimodal visual neurons did not respond to an auditory stimulus presented alone but had their visual responses modulated by concurrent auditory stimulation, they represent a new form of multisensory neuron: the subthreshold multisensory neuron. These data also demonstrate that bimodal neurons can no longer be regarded as the exclusive basis for multisensory processing.
