Similar Articles
20 similar articles found.
1.
The aim of this study was to establish whether spatial attention triggered by bimodal exogenous cues acts differently from attention triggered by unimodal or crossmodal exogenous cues, owing to crossmodal integration. To investigate this issue, we examined cuing effects in discrimination tasks and compared a condition in which a visual target was preceded by visual and auditory exogenous cues delivered simultaneously on the same side (bimodal cue) with conditions in which the visual target was preceded by either a visual cue (unimodal cue) or an auditory cue (crossmodal cue). The results of two experiments revealed that cuing effects on RTs in these three conditions were of comparable magnitude at an SOA of 200 ms. Differences at a longer SOA of 600 ms (inhibition of return for bimodal cues, Experiment 1) disappeared when catch trials were included (Experiment 2). The current data do not support an additional influence of crossmodal integration on exogenous orienting, but are consistent with the existence of a supramodal spatial attention module that allocates attentional resources toward stimulated locations across sensory modalities.

2.
There is debate in the crossmodal cueing literature as to whether the capture of visual attention by sound is a fully automatic process. Recent studies show that sound still captures attention even when visual attention is endogenously focused. The current study investigated whether exogenous auditory and visual capture interact. Participants performed an orthogonal cueing task in which the visual target was preceded by both a peripheral visual cue and an auditory cue. When both cues were valid at chance level, both visual and auditory capture were observed. However, when the validity of the visual cue was increased to 80%, only visual capture, and no auditory capture, was observed. Furthermore, a highly predictive (80% valid) auditory cue was not able to prevent visual capture. These results demonstrate that crossmodal auditory capture does not occur when a competing predictive visual event is presented, and is therefore not a fully automatic process.

3.
The endogenous orienting of spatial attention has been studied with both informative central cues and informative peripheral cues. Central cue studies are difficult to compare with studies that have used uninformative peripheral cues because of differences in stimulus presentation. Moreover, informative peripheral cues attract both endogenous and exogenous attention, making it difficult to disentangle the contribution of each process to any behavioural results observed. In the present study, we used an informative peripheral cue (either tactile or visual) that predicted that the target would appear (in different blocks of trials) on either the same or the opposite side as the cue. With this manipulation, both expected and unexpected trials could be either exogenously cued or uncued, making it possible to isolate expectancy effects from cuing effects. Our aim was to compare the endogenous orienting of spatial attention to tactile (Experiment 1) and to visual targets (Experiment 2) under conditions of intramodal and crossmodal spatial cuing. The results suggested that the endogenous orienting of spatial attention should not be considered a purely supramodal phenomenon, given that significantly larger expectancy effects were observed in the intramodal than in the crossmodal cuing conditions in both experiments.

4.
Multisensory integration affects ERP components elicited by exogenous cues
Previous studies have shown that the amplitude of event-related brain potentials (ERPs) elicited by a combined audiovisual stimulus is larger than the sum of those elicited by a single auditory and a single visual stimulus. This enlargement is thought to reflect multisensory integration. Based on these data, it may be hypothesized that the speeding up of responses, due to exogenous orienting effects induced by bimodal cues, exceeds the sum produced by single unimodal cues. Behavioral data, however, have typically revealed no increased orienting effect following bimodal as compared to unimodal cues, which could be due to a failure of multisensory integration of the cues. To examine this possibility, we computed ERPs elicited by bimodal (audiovisual) and unimodal (auditory or visual) cues, and determined their exogenous orienting effects on responses to a to-be-discriminated visual target. Interestingly, the posterior P1 component elicited by bimodal cues was larger than the sum of the P1 components elicited by a single auditory and visual cue (i.e., a superadditive effect), but no enhanced orienting effect was found on response speed. The latter result suggests that the multisensory integration elicited by our bimodal cues plays no special role in spatial orienting, at least in the present setting.

5.
Two monkeys were trained on both visual and auditory association tasks. Single-unit activity of the frontal (prefrontal and post-arcuate premotor) cortex was recorded in these monkeys to investigate the convergence of visual and auditory inputs and to examine whether frontal units are involved in coding the meaning (associative significance) of the stimulus, independent of its modality. A total of 289 units showed changes in firing rate after cue presentation on the visual and/or auditory tasks and were examined on both task modalities; 175 of them showed differential activity related to the associative significance and/or physical properties of the visual and/or auditory cues. Of the 289 units, 136 (47.0%) were responsive only to the visual cue (76 of them showing cue-related differential activity), 13 (4.5%) only to the auditory cue (6 showing cue-related differential activity), and the remaining 140 (48.5%) to both cue modalities (18 showing visual, 7 auditory, and 68 both modalities of cue-related differential activity). Fifty of the 68 bimodal differential units showed changes in firing related to the associative significance of both cue modalities independent of the cues' physical properties, and are considered to be involved in the crossmodal coding of the associative significance of the stimulus. The proportion of bimodal differential units was higher in the pre- and post-arcuate areas than in the principalis and inferior convexity areas of the frontal cortex. The results indicate that some frontal units participate in the crossmodal coding of the associative significance of the stimulus independent of its physical properties, and that most frontal units play different roles depending on the modality of the stimulus.

6.
The Colavita visual dominance effect refers to the phenomenon whereby participants presented with unimodal auditory, unimodal visual, or bimodal audiovisual stimuli in a speeded discrimination task fail to respond to the auditory component of bimodal targets significantly more often than they fail to respond to the visual component. The Colavita effect was demonstrated in this study when participants were presented with unimodal auditory, unimodal visual, or bimodal stimuli (in the ratios 40:40:20, Experiment 1; or 33:33:33, Experiment 2), to which they had to respond by pressing an auditory response key, a visual response key, or both response keys. The Colavita effect was also demonstrated when participants had to respond to the bimodal targets using a dedicated third (bimodal) response key (Experiment 3). These results therefore suggest that stimulus probability and the response demands of the task do not contribute significantly to the Colavita effect. In Experiment 4, we investigated what role exogenous attention toward a sensory modality plays in the Colavita effect. A significantly larger Colavita effect was observed when a visual cue preceded the bimodal target than when an auditory cue preceded it. This result suggests that the Colavita visual dominance effect can be partially explained in terms of the greater exogenous attention-capturing qualities of visual versus auditory stimuli.

7.
Fixational eye movements occur involuntarily during visual fixation of stationary scenes. The fastest components of these miniature eye movements are microsaccades, which can be observed about once per second. Recent studies demonstrated that microsaccades are linked to covert shifts of visual attention. Here, we generalized this finding in two ways. First, we used peripheral cues, rather than the centrally presented cues of earlier studies. Second, we spatially cued attention in vision and audition to visual and auditory targets. An analysis of microsaccade responses revealed an equivalent impact of visual and auditory cues on the microsaccade-rate signature (i.e., an initial inhibition followed by an overshoot and a final return to the pre-cue baseline rate). With visual cues or visual targets, microsaccades were briefly aligned with the cue direction and then oriented opposite to the cue direction during the overshoot epoch, probably as a result of the inhibition of an automatic saccade to the peripheral cue. With left auditory cues and auditory targets, microsaccades oriented in the cue direction. We argue that microsaccades can be used to study the crossmodal integration of sensory information and to map the time course of saccade preparation during covert shifts of visual and auditory attention.

8.
Four experiments investigated the effects of crossmodal attention between vision and touch in temporal order judgment tasks combined with a spatial cueing paradigm. In Experiment 1, two vibrotactile stimuli with simultaneous or successive onsets were presented bimanually to the left and right index fingers, and participants were asked to judge the temporal order of the two stimuli. The tactile stimuli were preceded by a spatially uninformative visual cue. Results indicated that the shift of spatial attention produced by visual cueing modulated the accuracy of the subsequent tactile temporal order judgment. However, this cueing effect disappeared when participants judged the simultaneity of the two stimuli instead of their temporal order (Experiment 2), or when the cue lead time between the visual cue and the stimuli was relatively long (Experiment 3). Experiment 4 replicated an effect of crossmodal attention on the direction of visual illusory line motion induced by a somatosensory cue (Shimojo, Miyauchi, & Hikosaka, 1997). These results demonstrate that substantial crossmodal links exist between vision and touch for the covert exogenous orienting of attention.

9.
In this study we investigated the effect of the directional congruency of tactile, visual, or bimodal visuotactile apparent motion distractors on the perception of auditory apparent motion. Participants had to judge the direction in which an auditory apparent motion stream moved (left-to-right or right-to-left) while trying to ignore one of a range of distractor stimuli, including unimodal tactile or visual, bimodal visuotactile, and crossmodal (i.e., composed of one visual and one tactile stimulus) distractors. Significant crossmodal dynamic capture effects (i.e., better performance when the target and distractor stimuli moved in the same direction rather than in opposite directions) were demonstrated in all conditions. Bimodal distractors elicited more crossmodal dynamic capture than unimodal distractors, thus providing the first empirical demonstration of the effect of information presented simultaneously in two irrelevant sensory modalities on the perception of motion in a third (target) sensory modality. The results of a second experiment demonstrated that the capture effect reported in the crossmodal distractor condition was most probably attributable to the combined effect of the individual static distractors (i.e., to ventriloquism) rather than to any emergent property of crossmodal apparent motion.

10.
When presented with auditory, visual, or bimodal audiovisual stimuli in a speeded detection/discrimination task, participants fail to respond to the auditory component of the bimodal targets significantly more often than they fail to respond to the visual component. Signal detection theory (SDT) was used to explore the contributions of perceptual (sensitivity shifts) and decisional (shifts in response criteria) factors to this effect, known as the Colavita visual dominance effect. Participants performed a version of the Colavita task that had been modified to allow for SDT analyses. The participants had to detect auditory and visual targets (presented unimodally or bimodally) at their individually determined 75% detection thresholds. The results showed a significant decrease in participants' sensitivity to auditory stimuli when presented concurrently with visual stimuli (in the absence of any significant change in their response criterion), suggesting that Colavita visual dominance does not simply reflect a decisional effect, but can be explained, at least in part, as a truly perceptual phenomenon. The decrease in sensitivity (to auditory stimuli) may be attributable to the exogenous capture of participants' attention by the visual component of the bimodal target, thus leaving fewer attentional resources for the processing of the auditory stimulus. The reduction in auditory sensitivity reported here may be considered an example of crossmodal masking.
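The SDT analysis described above separates sensitivity (d′) from response criterion (c). As a generic illustration of these standard measures (not the authors' actual analysis code), both can be computed from a participant's hit and false-alarm rates:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Standard equal-variance SDT measures.

    d' = z(H) - z(F)        : sensitivity
    c  = -(z(H) + z(F)) / 2 : response criterion (0 = unbiased)
    Rates must lie strictly between 0 and 1 (apply a correction
    such as the log-linear rule before calling, if needed).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion
```

For example, a hit rate of .75 with a false-alarm rate of .25 gives d′ ≈ 1.35 and c = 0 (above-chance sensitivity, no response bias); a drop in auditory d′ on bimodal trials with an unchanged c is the pattern the abstract interprets as a perceptual, rather than decisional, effect.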

11.
To investigate the temporal dynamics of lateralized event-related brain potential (ERP) components elicited during covert shifts of spatial attention, ERPs were recorded in a task where central visual symbolic cues instructed participants to direct attention to their left or right hand in order to detect infrequent tactile targets presented to that hand, and to ignore tactile stimuli presented to the other hand, as well as all randomly intermingled peripheral visual stimuli. In different blocks, the stimulus onset asynchrony (SOA) between cue and target was 300 ms, 700 ms, or 1,100 ms. Anterior and posterior ERP modulations sensitive to the direction of an attentional shift were time-locked to the attentional cue, rather than to the anticipated arrival of a task-relevant stimulus. These components thus appear to reflect central attentional control rather than the anticipatory preparation of sensory areas. In addition, attentional modulations of ERPs to task-irrelevant visual stimuli were found, providing further evidence for crossmodal links in spatial attention between touch and vision.

12.
To test whether the attentional selection of targets defined by a combination of visual and auditory features is guided in a modality-specific fashion or by control processes that are integrated across modalities, we measured attentional capture by visual stimuli during unimodal visual and audiovisual search. Search arrays were preceded by spatially uninformative visual singleton cues that matched the current target-defining visual feature. Participants searched for targets defined by a visual feature, or by a combination of visual and auditory features (e.g., red targets accompanied by high-pitch tones). Spatial cueing effects indicative of attentional capture were reduced during audiovisual search, and cue-triggered N2pc components were attenuated and delayed. This reduction of cue-induced attentional capture effects during audiovisual search provides new evidence for the multimodal control of selective attention.

13.
We studied the effects of eccentric auditory cues to clarify the conditions that evoke inhibition of return (IOR). We found that auditory cues positioned 12° to the left or right of midline failed to produce IOR, whereas visual cues produced IOR under the same experimental conditions. The eccentric auditory cues elicited automatic orienting, as evidenced by more rapid detection of cued than uncued visual targets at short stimulus onset asynchronies. Yet these same cues did not produce IOR unless observers were required to saccade to the cue and back to center before generating a manual detection response. Thus, under the conditions examined herein, automatic orienting was not sufficient to evoke IOR, but oculomotor activation appeared to be essential. The functional significance of IOR and the question of modality-specific orienting processes are considered.

14.
The common view on the interplay between exogenous and endogenous orienting holds that abrupt onsets are not capable of attracting attention when they occur outside the current focus of attention. Does this also apply to sudden irrelevant auditory onsets, and to irrelevant visual onsets occurring far in the periphery? In addition, does focused attention also reduce the alerting effect of auditory onsets, or, vice versa, do highly alerting stimuli disrupt the attentional state? Crossmodal and unimodal variants of the Posner paradigm were examined in two experiments, with targets and irrelevant onsets occurring at 28.3° and 19.3° from fixation. Either centrally presented arrows indicated the forthcoming position of visual targets to be discriminated, or warning cues signaled the likely moment of target occurrence. The targets could be preceded by peripheral auditory or visual onsets that were to be ignored. Crossmodal and unimodal exogenous orienting effects of these irrelevant onsets were observed while participants focused on the relevant side. In addition, no evidence was found that the alerting effect of auditory onsets was dependent on focused attention. Our findings indicate that, at least under the current conditions, neither crossmodal nor unimodal orienting effects of peripheral events dissipate when attention is in a focused state.

15.
Predicted motion (PM) tasks test the accuracy of predicting the future position of a moving target. Previous PM studies using audiovisual stimuli have suggested that observers rely primarily on visual motion cues. To clarify the role of auditory signals in predicting the future positions of bimodal targets, we designed a novel PM task in which the spatial coincidence of auditory and visual motion signals was varied across three conditions: the auditory and visual motion stimuli were spatially correlated (congruent condition), the auditory motion stimulus moved behind the visual motion stimulus (sound-trailing condition), or the auditory motion stimulus moved ahead of the visual motion stimulus (sound-leading condition). We manipulated target speed (5.5 or 11 cm/s), the time for which the moving audiovisual stimulus was presented (500 or 750 ms viewing time), and the time for which the visual stimulus was absent while the auditory stimulus continued to move on its own, before subjects were prompted to estimate the position the visual stimulus would have reached had it continued along with the auditory stimulus (750, 1,000, or 1,500 ms prediction time). We also included two unimodal control conditions: visual-only and auditory-only. Subjects (n = 12) typically overestimated the target position of congruent bimodal targets. In the sound-trailing and sound-leading conditions, pointing responses were biased in the direction of the auditory stimulus, showing that PM performance does not rely solely upon visual motion cues. We conclude that putative cognitive extrapolation mechanisms assume spatial coherence of bimodal motion signals and may perform some averaging of these motion signals when they do not spatially coincide.

16.
We conducted two audiovisual experiments to determine whether event-related potential (ERP) components elicited by attention-directing cues reflect supramodal attentional control. Symbolic visual cues were used to direct attention prior to auditory targets in Experiment 1, and symbolic auditory cues were used to direct attention prior to visual targets in Experiment 2. Different patterns of cue ERPs were found in the two experiments. A frontal negativity called the ADAN was absent in Experiment 2, which indicates that this component does not reflect supramodal attentional control. A posterior positivity called the LDAP was observed in both experiments but was focused more posteriorly, over the occipital scalp, in Experiment 2. This component appears to reflect multiple processes, including visual processes involved in location marking and target preparation as well as supramodal processes involved in attentional control.

17.
Participants presented with unimodal auditory (A), unimodal visual (V), or bimodal audiovisual (AV) stimuli in a task in which they have to identify the modality of the targets as rapidly as possible fail to respond to the auditory component of bimodal targets significantly more often than they fail to respond to the visual component. In the majority of published studies on this phenomenon, known as the Colavita effect, the auditory, visual, and bimodal stimuli have been presented in the ratio 40A:40V:20AV. In the present study, we investigated whether the relatively low frequency with which the bimodal targets have been presented in previous studies may have contributed to participants' difficulty in responding to such targets correctly. We manipulated the bimodal target probability by presenting the stimuli in the ratio 20A:20V:60AV in Experiment 1, and in the ratios 5A:5V:90AV, 25A:25V:50AV, and 45A:45V:10AV in Experiment 2. A significant Colavita visual dominance effect was observed when the bimodal targets were presented on 60% of the trials or less. We suggest that increasing the frequency of bimodal targets may have provided an exogenous cue to performance that reduced the need for endogenous attention when selecting the appropriate response to bimodal targets.

18.
We report an experiment designed to investigate the temporal dynamics of the visuotactile crossmodal congruency effect. Vibrotactile targets were presented randomly to the index finger (top side of a hand-held cube) or thumb (bottom side) of either hand while visual distractors were presented randomly from one of the same four possible locations. The stimulus onset asynchrony (SOA) between the vibrotactile target and the visual distractor was varied on a trial-by-trial basis. Participants made speeded discrimination responses regarding the elevation of the vibrotactile targets (i.e., upper versus lower) while trying to ignore the visual distractors. The largest crossmodal congruency effects (defined as the difference in performance between incongruent and congruent elevation distractor trials) were obtained when the visual distractor preceded the vibrotactile target by 50-100 ms, although significant effects were also reported when the distractor followed the target by as much as 100 ms. These results are discussed in terms of the conjoint influence of response competition, crossmodal perceptual interactions (i.e., the ventriloquism effect), and exogenous spatial attention on the crossmodal congruency effect. The distinct temporal signatures of each of these effects are also highlighted.

19.
Temporally synchronous auditory cues can facilitate participants' performance on dynamic visual search tasks. Making auditory cues spatially informative with regard to the target location can reduce search latencies still further. In the present study, we investigated how multisensory integration, and temporal and spatial attention, might conjointly influence participants' performance on an elevation discrimination task for a masked visual target presented in a rapidly changing sequence of masked visual distractors. Participants were presented with spatially uninformative (centrally presented), spatially valid (on the target side), or spatially invalid tones that were synchronous with the presentation of the visual target. Participants responded significantly more accurately following the presentation of the spatially valid as compared to the uninformative or invalid auditory cues. Participants endogenously shifted their attention to the likely location of the target indicated by the valid spatial auditory cue (reflecting top-down, cognitive processing mechanisms), which facilitated their processing of the visual target over and above any bottom-up benefits associated solely with the synchronous presentation of the auditory and visual stimuli. The results of the present study therefore suggest that crossmodal attention (both spatial and temporal) and multisensory integration can work in parallel to facilitate people's ability to respond most efficiently to multisensory information.

20.
Saccadic eye movements to visual, auditory, and bimodal targets were measured in four adult cats. Bimodal targets were visual and auditory stimuli presented simultaneously at the same location. Three behavioral tasks were used: a fixation task and two saccadic tracking tasks (gap and overlap task). In the fixation task, a sensory stimulus was presented at a randomly selected location, and the saccade to fixate that stimulus was measured. In the gap and overlap tasks, a second target (hereafter called the saccade target) was presented after the cat had fixated the first target. In the gap task, the fixation target was switched off before the saccade target was turned on; in the overlap task, the saccade target was presented before the fixation target was switched off. All tasks required the cats to redirect their gaze toward the target (within a specified degree of accuracy) within 500 ms of target onset, and in all tasks target positions were varied randomly over five possible locations along the horizontal meridian within the cat's oculomotor range. In the gap task, a significantly greater proportion of saccadic reaction times (SRTs) were less than 125 ms, and mean SRTs were significantly shorter than in the fixation task. With visual targets, saccade latencies were significantly shorter in the gap task than in the overlap task, while, with bimodal targets, saccade latencies were similar in the gap and overlap tasks. On the fixation task, SRTs to auditory targets were longer than those to either visual or bimodal targets, but on the gap task, SRTs to auditory targets were shorter than those to visual or bimodal targets. Thus, SRTs reflected an interaction between target modality and task. Because target locations were unpredictable, these results demonstrate that cats, as well as primates, can produce very short latency goal-directed saccades.

