Similar Literature
20 similar records found (search time: 15 ms)
1.
Three experiments on healthy humans investigated the degree of automaticity of crossmodal spatial attention shifts by assessing the intentionality criterion. We used the orthogonal cuing paradigm, in which a lateralized cue, either visual or auditory, was followed by a unimodal or crossmodal target that occurred at the same or opposite side. In all experiments, the cue was always uninformative as to target location. In Experiment 1, where both side and modality of targets were unpredictable, we found faster discriminations for visual targets following uninformative auditory cues on the same side. This result was replicated in Experiment 2, where target side was blocked and participants could orient attention in advance toward the appropriate side, and in Experiment 3, where they were additionally informed about target modality. Our results suggest that this sort of crossmodal orienting is automatic because it occurred even when participants were provided with detailed information about the target to prevent uninformative auditory cues from orienting attention. This is consistent with the notion that peripheral auditory stimuli are very powerful in capturing visual attention.

2.
Sounds provide important information about the spatial environment, including the location of approaching objects. Attention to sounds can be directed through automatic or more controlled processes, which have been well studied in the visual modality. However, little is known about the neural underpinnings of attentional control mechanisms for auditory signals. We studied healthy adults who underwent event-related FMRI while performing a task that manipulated automatic and more controlled auditory orienting by varying the probability that cues correctly predicted target location. Specifically, we examined the effects of uninformative (50% validity ratio) and informative (75% validity ratio) auditory cues on reaction time (RT) and neuronal functioning. The stimulus-onset asynchrony (SOA) between the cue and the target was either 100 or 800 ms. At the 100 ms SOA, RT was faster for valid than invalid trials for both cue types, and frontoparietal activation was greater for invalid than valid trials. At the 800 ms SOA, RT and functional activation depended on whether cues were informative or uninformative, and whether cues correctly or incorrectly predicted the target location. Contrary to our prediction, activation in a frontoparietal network was greater for uninformative than informative cues across several different comparisons and at both SOAs. This finding contrasts with similar research on visual orienting, and suggests that the auditory modality may be more biased toward automatic shifts of attention following uninformative cues.
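The validity manipulation described in these cuing studies boils down to a simple reaction-time contrast: mean RT on invalid trials minus mean RT on valid trials at a given SOA. A minimal sketch of that computation, using hypothetical RT values rather than either study's data:

```python
def validity_effect(trials):
    """Mean RT difference (invalid minus valid), in ms.

    A positive value means responses were faster on validly cued trials,
    i.e., the facilitation pattern reported at the 100 ms SOA.
    Each trial is a (cue_valid: bool, reaction_time_ms: float) pair.
    """
    valid = [rt for cue_valid, rt in trials if cue_valid]
    invalid = [rt for cue_valid, rt in trials if not cue_valid]
    return sum(invalid) / len(invalid) - sum(valid) / len(valid)

# Hypothetical reaction times (ms), not data from the studies above
trials = [(True, 310), (True, 320), (False, 345), (False, 355)]
print(validity_effect(trials))  # 35.0
```

At a long SOA, inhibition of return would show up as a negative value from the same function (valid trials slower than invalid ones).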

3.
Our attention to a sensory cue of a given modality interferes with attention to a sensory cue of another modality. However, an object emitting various sensory cues attracts attention more effectively. The thalamic reticular nucleus (TRN) could play a pivotal role in such cross-modal modulation of attention: cross-modal sensory interaction takes place in the TRN, which occupies a highly strategic position for controlling the gain and/or gating of sensory processing in the thalamocortical loop. In the present study, cross-modal interactions between visual and auditory inputs were examined in single TRN cells of anesthetised rats using juxta-cellular recording and labeling techniques. Visual or auditory responses were modulated by subthreshold sound or light stimuli, respectively, in the majority of recordings (46 of 54 visual and 60 of 73 auditory cells). However, few bimodal sensory cells were found. Cells showing modulation of the sensory response were distributed in the whole visual and auditory sectors of the TRN. Modulated cells sent axonal projections to first-order or higher-order thalamic nuclei. Suppression predominated in modulation that took place not only in primary responses but also in late responses repeatedly evoked after sensory stimulation. Combined sensory stimulation also evoked de-novo responses, and modulated response latency and burst spiking. These results indicate that the TRN incorporates sensory inputs of different modalities into single cell activity to function in sensory processing in the lemniscal and non-lemniscal systems. This raises the possibility that the TRN constitutes neural pathways involved in cross-modal attentional gating.

4.
Reflexive spatial attention is critical for controlling perception and action. An established body of evidence suggests that mechanisms of spatial attention operate both within and between sensory modalities; however, the attentional mechanisms that link modalities in the human brain are unclear. Here we used transcranial magnetic stimulation (TMS) to explore the role of the parietal cortex in coordinating reflexive shifts of spatial attention between vision and touch. In two experiments, healthy participants localised visual and somatosensory targets that were preceded by non-informative visual or somatosensory spatial cues. To determine the role of parietal cortex in spatial orienting, TMS was delivered synchronously with cue onset for 100 ms. Results revealed a critical role of the right angular gyrus and supramarginal gyrus in reflexive orienting to visual and somatosensory targets that followed a somatosensory cue. In contrast, the same TMS protocol was ineffective in modulating reflexive orienting based on visual cues. This dependence on cue modality may reflect subcortical redundancy of visual orienting mechanisms. Overall, our results indicate a critical role of the inferior parietal cortex in mediating reflexive shifts of attention within and between sensory modalities.

5.
Garg A, Schwartz D, Stevens AA. Neuropsychologia. 2007;45(10):2307–2321
What happens in vision-related cortical areas when congenitally blind (CB) individuals orient attention to spatial locations? Previous neuroimaging of sighted individuals has found overlapping activation in a network of frontoparietal areas including frontal eye fields (FEF), during both overt (with eye movement) and covert (without eye movement) shifts of spatial attention. Since voluntary eye movement planning seems irrelevant in CB, their FEF neurons should be recruited for alternative functions if their attentional role in sighted individuals is only due to eye movement planning. Recent neuroimaging of the blind has also reported activation in medial occipital areas, normally associated with visual processing, during a diverse set of non-visual tasks, but their response to attentional shifts remains poorly understood. Here, we used event-related fMRI to explore FEF and medial occipital areas in CB individuals and sighted controls with eyes closed (SC) performing a covert attention orienting task with endogenous verbal cues and spatialized auditory targets. We found robust stimulus-locked FEF activation of all CB subjects, similar to and stronger than in SC, suggesting that FEF plays a role in endogenous orienting of covert spatial attention even in individuals in whom voluntary eye movements are irrelevant. We also found robust activation in bilateral medial occipital cortex in CB but not in SC subjects. The response decreased below baseline following endogenous verbal cues but increased following auditory targets, suggesting that the medial occipital area in CB does not directly engage during cued orienting of attention but may be recruited for processing of spatialized auditory targets.

6.
Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time.

7.
Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "breakthrough of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to withdraw attention from the other modality. Sequences of auditory and visual stimuli were presented with different amounts of temporal offset to determine the presence, strength, and time-course of attentional orienting and reorienting as well as their impact on task-related processing. One of the streams was task-relevant, while crossmodal distraction caused by unexpected events in the other stream was measured by impairments of behavioral task performance and by the N2, P3a, and reorienting negativity (RON) components of the event-related potential (ERP). Unexpected events in the visual modality proved to be somewhat more salient than those in the auditory modality, yet crossmodal interference caused by auditory stimuli was more pronounced. The visual modality was relatively constrained in terms of a critical time-range within which distraction effects could be elicited, while the impact of auditory stimuli on task-related processing extended over a longer time-range. These results are discussed in terms of functional differences between the auditory and visual modalities. Further applications of the new crossmodal protocol are deemed promising in view of the considerable size of the obtained distraction effects.

8.
The Attention Network Test (ANT) uses visual stimuli to separately assess the attentional skills of alerting (improved performance following a warning cue), spatial orienting (an additional benefit when the warning cue also cues target location), and executive control (impaired performance when a target stimulus contains conflicting information). This study contrasted performance on auditory and visual versions of the ANT to determine whether the measures it obtains are influenced by presentation modality. Forty healthy volunteers completed both auditory and visual tests. Reaction-time measures of executive control were of a similar magnitude and significantly correlated, suggesting that executive control might be a supramodal resource. Measures of alerting were also comparable across tasks. In contrast, spatial-orienting benefits were obtained only in the visual task. Auditory spatial cues did not improve response times to auditory targets presented at the cued location. The different spatial-orienting measures could reflect either separate orienting resources for each perceptual modality, or an interaction between a supramodal orienting resource and modality-specific perceptual processing.

9.
Although a fronto-parietal network has consistently been implicated in the control of visual spatial attention, the network that guides spatial attention in the auditory domain is not yet clearly understood. To investigate this issue, we measured brain activity using functional magnetic resonance imaging while participants performed a cued auditory spatial attention task. We found that cued orienting of auditory spatial attention activated a medial-superior distributed fronto-parietal network. In addition, we found cue-triggered increases of activity in the auditory sensory cortex prior to the occurrence of an auditory target, suggesting that auditory attentional control operates in part by biasing processing in sensory cortex in favor of expected target stimuli. Finally, an exploratory cross-study comparison further indicated several common frontal and parietal regions as being involved in the control of both visual and auditory spatial attention. Thus, the present findings not only reveal the network of brain areas underlying endogenous spatial orienting in the auditory modality, but also suggest that the control of spatial attention in different sensory modalities is enabled in part by some common, supramodal neural mechanisms.

10.
Several studies on cross-modal attention showed that remapping processes between sensory modalities occur in the spatial orienting of attention. One hypothesis that accounts for such links is that spatial attention operates upon representations of common locations in the external space. However, convincing evidence only exists for cross-modal links in spatial orienting, leaving the dynamics of these effects unexplored. Four experiments addressed this issue by having participants perform an endogenous orienting task with visual and auditory target stimuli. Targets on invalid trials were embedded into two different kinds of sequences of stimuli: (1) long sequences, wherein three valid trials in one modality preceded the invalid trial in the other modality; (2) short sequences, wherein only one valid trial in one modality preceded the invalid trial in the other modality. Results revealed modality-specific meridian effects in the short sequences, and a significant decrement of the modality-specific meridian effect in the long sequences. The results of these experiments indicate that cross-modal links in visual and auditory spatial attention are based on representations of common locations in the external space across sensory modalities. Moreover, the results strongly support the hypothesis that representations of space are dynamically built and updated according to task demands.

11.
The orienting of attention to different locations in space is fundamental to most organisms and occurs in all sensory modalities. Orienting has been extensively studied in vision, but to date, few studies have investigated neuronal networks underlying automatic orienting of attention and inhibition of return to auditory signals. In the current experiment, functional magnetic resonance imaging and behavioral data were collected while healthy volunteers performed an auditory orienting task in which a monaurally presented tone pip (cue) correctly or incorrectly cued the location of a target tone pip. The stimulus onset asynchrony (SOA) between the cue and target was 100 or 800 msec. Behavioral results were consistent with previous studies showing that valid auditory cues produced facilitation at the short SOA and inhibition of return at the long SOA. Functional results indicated that the reorienting of attention (100 msec SOA) and inhibition of return (800 msec SOA) were mediated by both common and distinct neuronal structures. Both attention mechanisms commonly activated a network consisting of fronto-oculomotor areas, the left postcentral gyrus, right premotor area, and bilateral tonsil of the cerebellum. Several distinct areas of frontal and parietal activation were identified for the reorienting condition, whereas the right inferior parietal lobule was the only structure uniquely associated with inhibition of return.

12.
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales. Hum Brain Mapp 37:273–288, 2016. © 2015 Wiley Periodicals, Inc.

13.
Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality. Hum Brain Mapp 35:1597–1614, 2014. © 2013 Wiley Periodicals, Inc.

14.
Recently, experimental and theoretical research has focused on the brain's abilities to extract information from a noisy sensory environment and how cross-modal inputs are processed to solve the causal inference problem to provide the best estimate of external events. Despite the empirical evidence suggesting that the nervous system uses a statistically optimal and probabilistic approach in addressing these problems, little is known about the brain's architecture needed to implement these computations. The aim of this work was to develop a mathematical model, based on physiologically plausible hypotheses, to analyze the neural mechanisms underlying multisensory perception and causal inference. The model consists of three layers topologically organized: two encode auditory and visual stimuli, separately, and are reciprocally connected via excitatory synapses and send excitatory connections to the third downstream layer. This synaptic organization realizes two mechanisms of cross-modal interactions: the first is responsible for the sensory representation of the external stimuli, while the second solves the causal inference problem. We tested the network by comparing its results to behavioral data reported in the literature. Among others, the network can account for the ventriloquism illusion, the pattern of sensory bias and the percept of unity as a function of the spatial auditory–visual distance, and the dependence of the auditory error on the causal inference. Finally, simulations results are consistent with probability matching as the perceptual strategy used in auditory–visual spatial localization tasks, agreeing with the behavioral data. The model makes untested predictions that can be investigated in future behavioral experiments.
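The causal-inference problem this model addresses can be illustrated with the standard normative Gaussian formulation (not the authors' three-layer network): given noisy auditory and visual location estimates, compute the posterior probability that they arose from a single source. The noise and prior parameters below are illustrative assumptions.

```python
import math

def p_common(x_a, x_v, sigma_a=4.0, sigma_v=1.0, sigma_p=10.0, prior_c=0.5):
    """Posterior probability that auditory (x_a) and visual (x_v) location
    estimates (in degrees) came from one common source.

    Standard Gaussian causal-inference formulation with a zero-centered
    spatial prior of width sigma_p; a sketch of the computation, not the
    paper's specific neural implementation.
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Marginal likelihood under a common cause (source position integrated out)
    det1 = va * vv + va * vp + vv * vp
    like1 = math.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / det1) \
            / (2 * math.pi * math.sqrt(det1))
    # Marginal likelihood under two independent causes
    like2 = math.exp(-0.5 * x_a**2 / (va + vp)) / math.sqrt(2 * math.pi * (va + vp)) \
          * math.exp(-0.5 * x_v**2 / (vv + vp)) / math.sqrt(2 * math.pi * (vv + vp))
    return prior_c * like1 / (prior_c * like1 + (1 - prior_c) * like2)

# Nearby cues are judged to share a cause far more often than distant ones,
# matching the "percept of unity vs. audio-visual distance" pattern
print(p_common(1.0, 0.0) > p_common(15.0, -15.0))  # True
```

Under probability matching, as discussed in the abstract, the observer would report a common source on each trial with this probability rather than always choosing the more likely hypothesis.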

15.
Recent behavioral and event-related brain potential (ERP) studies have revealed cross-modal interactions in endogenous spatial attention between vision and audition, plus vision and touch. The present ERP study investigated whether these interactions reflect supramodal attentional control mechanisms, and whether similar cross-modal interactions also exist between audition and touch. Participants directed attention to the side indicated by a cue to detect infrequent auditory or tactile targets at the cued side. The relevant modality (audition or touch) was blocked. Attentional control processes were reflected in systematic ERP modulations elicited during cued shifts of attention. An anterior negativity contralateral to the cued side was followed by a contralateral positivity at posterior sites. These effects were similar whether the cue signaled which side was relevant for audition or for touch. They also resembled previously observed ERP modulations for shifts of visual attention, thus implicating supramodal mechanisms in the control of spatial attention. Following each cue, single auditory, tactile, or visual stimuli were presented at the cued or uncued side. Although stimuli in task-irrelevant modalities could be completely ignored, visual and auditory ERPs were nevertheless affected by spatial attention when touch was relevant, revealing cross-modal interactions. When audition was relevant, visual ERPs, but not tactile ERPs, were affected by spatial attention, indicating that touch can be decoupled from cross-modal attention when task-irrelevant.

16.
Spatial attention mediates the selection of information from different parts of space. When a brief cue is presented shortly before a target [cue to target onset asynchrony (CTOA)] in the same location, behavioral responses are facilitated, a process called attention capture. At longer CTOAs, responses to targets presented in the same location are inhibited; this is called inhibition of return (IOR). In the visual modality, these processes have been demonstrated in both humans and non-human primates, the latter allowing for the study of the underlying neural mechanisms. In audition, the effects of attention have only been shown in humans when the experimental task requires sound localization. Studies in monkeys with the use of similar cues but without a sound localization requirement have produced negative results. We have studied the effects of predictive acoustic cues on the latency of gaze shifts to visual and auditory targets in monkeys experienced in localizing sound sources in the laboratory with the head unrestrained. Both attention capture and IOR were demonstrated with acoustic cues, although with a faster time course than with visual cues. Additionally, the effect was observed across sensory modalities (acoustic cue to visual target), suggesting that the underlying neural mechanisms of these effects may be mediated within the superior colliculus, a center where inputs from both vision and audition converge.

17.
Object recognition benefits maximally from multimodal sensory input when stimulus presentation is noisy, or degraded. Whether this advantage can be attributed specifically to the extent of overlap in object-related information, or rather, to object-unspecific enhancement due to the mere presence of additional sensory stimulation, remains unclear. Further, the cortical processing differences driving increased multisensory integration (MSI) for degraded compared with clear information remain poorly understood. Here, two consecutive studies first compared behavioral benefits of audio-visual overlap of object-related information, relative to conditions where one channel carried information and the other carried noise. A hierarchical drift diffusion model indicated performance enhancement when auditory and visual object-related information was simultaneously present for degraded stimuli. A subsequent fMRI study revealed visual dominance on a behavioral and neural level for clear stimuli, while degraded stimulus processing was mainly characterized by activation of a frontoparietal multisensory network, including IPS. Connectivity analyses indicated that integration of degraded object-related information relied on IPS input, whereas clear stimuli were integrated through direct information exchange between visual and auditory sensory cortices. These results indicate that the inverse effectiveness observed for identification of degraded relative to clear objects in behavior and brain activation might be facilitated by selective recruitment of an executive cortical network which uses IPS as a relay mediating crossmodal sensory information exchange.
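The model class fitted here treats a decision as noisy evidence accumulating toward a response boundary. A plain (non-hierarchical) single-trial simulator conveys the intuition; the study fitted a hierarchical variant to real data, and all parameter values below are illustrative.

```python
import math
import random

def ddm_trial(drift, boundary=1.0, noise=1.0, dt=0.001, rng=None):
    """Simulate one drift-diffusion trial via Euler-Maruyama integration.

    Evidence starts at 0 and accumulates with the given drift plus Gaussian
    noise until it crosses +boundary (choice 1) or -boundary (choice 0).
    Returns (choice, decision_time_s). Illustrative sketch only.
    """
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0), t

# Stronger evidence (e.g., redundant audio-visual input for degraded stimuli)
# yields faster decisions on average than weak evidence
rng = random.Random(0)
fast = [ddm_trial(3.0, rng=rng)[1] for _ in range(200)]
slow = [ddm_trial(0.5, rng=rng)[1] for _ in range(200)]
print(sum(fast) / 200 < sum(slow) / 200)  # True
```

In the hierarchical version, drift and boundary are estimated per participant with group-level priors, which is what lets a single model compare conditions such as audio-visual versus unimodal input.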

18.
Temporal orienting of attention is the ability to focus resources at a particular moment in time in order to optimise behaviour, and is associated with activation of left parietal and premotor cortex [Coull, J. T., Nobre, A. C. Where and when to pay attention: the neural systems for directing attention to spatial locations and to time intervals as revealed by both PET and fMRI. Journal of Neuroscience, 1998, 18, 7426-7435]. In the present experiment, we explored the behavioural and anatomical correlates of temporal orienting to foveal visual stimuli, in order to eliminate any spatial attention confounds. We implemented a two-way factorial design in an event-related fMRI study to examine the factors of trial validity (predictability of target by cue), length of delay (cue-target interval), and their interaction. There were two distinct types of invalid trial: those where attention was automatically drawn to a premature target and those where attention was voluntarily shifted to a delayed time-point. Reaction times for valid trials were shorter than those for invalid trials, demonstrating appropriate allocation of attention to temporal cues. All trial-types activated a shared system, including frontoparietal areas bilaterally, showing that this network is consistently associated with attentional orienting and is not specific to spatial tasks. Distinct brain areas were sensitive to cue-target delays and to trial validity. Long cue-target intervals activated areas involved in motor preparation: supplementary motor cortex, basal ganglia and thalamus. Invalid trials, where temporal expectancies were breached, showed enhanced activation of left parietal and frontal areas, and engagement of orbitofrontal cortex bilaterally. Finally, trial validity interacted with length of delay. Appearance of targets prematurely selectively activated visual extrastriate cortex; while postponement of target appearance selectively activated right prefrontal cortex. These findings suggest that distinct brain areas are involved in redirecting attention based upon sensory events (bottom-up, exogenous shifts) and based upon cognitive expectations (top-down, endogenous shifts).

19.
Cross-modal reorganization following the loss of input from a sensory modality can recruit sensory-deprived cortical areas to process information from the remaining senses. Specifically, in early-deaf cats, the anterior auditory field (AAF) is unresponsive to auditory stimuli but can be activated by somatosensory and visual stimuli. Similarly, AAF neurons respond to tactile input in adult-deafened animals. To examine anatomical changes that may underlie this functional adaptation following early or late deafness, afferent projections to AAF were examined in hearing cats, and cats with early- or adult-onset deafness. Unilateral deposits of biotinylated dextran amine were made in AAF to retrogradely label cortical and thalamic afferents to AAF. In early-deaf cats, ipsilateral neuronal labeling in visual and somatosensory cortices increased by 329% and 101%, respectively. The largest increases arose from the anterior ectosylvian visual area and the anterolateral lateral suprasylvian visual area, as well as somatosensory areas S2 and S4. Consequently, labeling in auditory areas was reduced by 36%. The age of deafness onset appeared to influence afferent connectivity, with less marked differences observed in late-deaf cats. Profound changes to visual and somatosensory afferent connectivity following deafness may reflect corticocortical rewiring affording acoustically deprived AAF with cross-modal functionality. J. Comp. Neurol. 523:1925–1947, 2015 © 2015 Wiley Periodicals, Inc.

20.
Knowledge about the sensory modality in which a forthcoming event might occur permits anticipatory intersensory attention. Information as to when exactly an event occurs enables temporal orienting. Intersensory and temporal attention mechanisms are often deployed simultaneously, but as yet it is unknown whether these processes operate interactively or in parallel. In this human electroencephalography study, we manipulated intersensory attention and temporal orienting in the same paradigm. A continuous stream of bisensory visuo-tactile inputs was presented, and a preceding auditory cue indicated to which modality participants should attend (visual or tactile). Temporal orienting was manipulated blockwise by presenting stimuli either at regular or irregular intervals. Using linear beamforming, we examined neural oscillations at virtual channels in sensory and motor cortices. Both attentional processes simultaneously modulated the power of anticipatory delta- and beta-band oscillations, as well as delta-band phase coherence. Modulations in sensory cortices reflected intersensory attention, indicative of modality-specific gating mechanisms. Modulations in motor and partly in somatosensory cortex reflected temporal orienting, indicative of a supramodal preparatory mechanism. We found no evidence for interactions between intersensory attention and temporal orienting, suggesting that these two mechanisms act in parallel and largely independent of each other in sensory and motor cortices. Hum Brain Mapp 36:3246–3259, 2015. © 2015 Wiley Periodicals, Inc.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号