Similar Documents
1.
Responses of neurons that integrate multiple sensory inputs are traditionally characterized in terms of a set of empirical principles. However, a simple computational framework that accounts for these empirical features of multisensory integration has not been established. We propose that divisive normalization, acting at the stage of multisensory integration, can account for many of the empirical principles of multisensory integration shown by single neurons, such as the principle of inverse effectiveness and the spatial principle. This model, which uses a simple functional operation (normalization) for which there is considerable experimental support, also accounts for the recent observation that the mathematical rule by which multisensory neurons combine their inputs changes with cue reliability. The normalization model, which makes a strong testable prediction regarding cross-modal suppression, may therefore provide a simple unifying computational account of the important features of multisensory integration by neurons.
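For readers unfamiliar with this model class, the following is a minimal sketch of how divisive normalization can yield inverse effectiveness and cross-modal suppression in a single model neuron; the function, parameter values, and the simplified normalization pool are illustrative assumptions, not the published model or its fitted parameters.

def normalized_response(e_vis, e_aud, d_vis=1.0, d_aud=1.0, alpha=1.0, n=2.0):
    # Linear multisensory drive, weighted by the neuron's modality dominance.
    drive = d_vis * e_vis + d_aud * e_aud
    # Normalization pool: net population activity, assumed here to weight
    # both modalities equally (a simplifying assumption for illustration).
    pool = e_vis + e_aud
    return drive**n / (alpha**n + pool**n)

# Inverse effectiveness: the relative multisensory enhancement shrinks
# as the unisensory inputs become more effective.
for e in (0.2, 2.0):
    uni = normalized_response(e, 0.0)
    multi = normalized_response(e, e)
    print(f"input drive {e}: enhancement = {100 * (multi - uni) / uni:.0f}%")

# Cross-modal suppression: a non-dominant input adds more to the pool than
# to the drive, so it can suppress the response to the dominant modality.
print(normalized_response(2.0, 0.0, d_aud=0.1),
      normalized_response(2.0, 2.0, d_aud=0.1))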

2.
Although early sensory cortex is organized along dimensions encoded by receptor organs, little is known about the organization of higher areas in which different modalities are integrated. We investigated multisensory integration in human superior temporal sulcus using recent advances in parallel imaging to perform functional magnetic resonance imaging (fMRI) at very high resolution. These studies suggest a functional architecture in which information from different modalities is brought into close proximity via a patchy distribution of inputs, followed by integration in the intervening cortex.

3.
Understanding how we synchronize our actions with stimuli from different sensory modalities plays a central role in helping to establish how we interact with our multisensory environment. Recent research has shown better performance with multisensory over unisensory stimuli; however, the stimuli used have mainly been auditory and tactile. The aim of this article was to expand our understanding of sensorimotor synchronization with multisensory audio-visual stimuli and to compare these findings with their individual unisensory counterparts. It also aimed to assess the role of spatio-temporal structure for each sensory modality. The visual and/or auditory stimuli had either temporal or spatio-temporal information available and were presented to the participants in unimodal and bimodal conditions. Overall, performance was significantly better for the bimodal than for the unimodal conditions; however, this benefit was limited to only one of the bimodal conditions. In the unimodal conditions, synchronization with visual stimuli was better than with auditory stimuli, and while the spatio-temporal visual stimulus yielded a benefit over the purely temporal one, this was not replicated with the auditory stimulus.

4.
Information from the different senses is seamlessly integrated by the brain in order to modify our behaviors and enrich our perceptions. It is only through the appropriate binding and integration of information from the different senses that a meaningful and accurate perceptual gestalt can be generated. Although a great deal is known about how such cross-modal interactions influence behavior and perception in the adult, there is little knowledge as to the impact of aging on these multisensory processes. In the current study, we examined the speed of discrimination responses of aged and young individuals to the presentation of visual, auditory or combined visual-auditory stimuli. Although the presentation of multisensory stimuli speeded response times in both groups, the performance gain was significantly greater in the aged. Most strikingly, multisensory stimuli restored response times in the aged to those seen in young subjects responding to the faster of the two unisensory stimuli (i.e., visual). The current results suggest that despite the decline in sensory processing that accompanies aging, the use of multiple sensory channels may represent an effective compensatory strategy to overcome these unisensory deficits.

5.
Stimuli occurring in multiple sensory modalities that are temporally synchronous or spatially coincident can be integrated together to enhance perception. Additionally, the semantic content or meaning of a stimulus can influence cross-modal interactions, improving task performance when these stimuli convey semantically congruent or matching information, but impairing performance when they contain non-matching or distracting information. Attention is one mechanism that is known to alter processing of sensory stimuli by enhancing perception of task-relevant information and suppressing perception of task-irrelevant stimuli. It is not known, however, to what extent attention to a single sensory modality can minimize the impact of stimuli in the unattended sensory modality and reduce the integration of stimuli across multiple sensory modalities. Our hypothesis was that modality-specific selective attention would limit processing of stimuli in the unattended sensory modality, resulting in a reduction of performance enhancements produced by semantically matching multisensory stimuli, and a reduction in performance decrements produced by semantically non-matching multisensory stimuli. The results from two experiments utilizing a cued discrimination task demonstrate that selective attention to a single sensory modality prevents the integration of matching multisensory stimuli that is normally observed when attention is divided between sensory modalities. Attention did not reliably alter the amount of distraction caused by non-matching multisensory stimuli on this task; however, these findings highlight a critical role for modality-specific selective attention in modulating multisensory integration.

6.
It is well known that the detection thresholds for stationary auditory and visual signals are lower if the signals are presented bimodally rather than unimodally, provided the signals coincide in time and space. Recent work on auditory–visual motion detection suggests that the facilitation seen for stationary signals is not seen for motion signals. We investigate the conditions under which motion perception also benefits from the integration of auditory and visual signals. We show that the integration of cross-modal local motion signals that are matched in position and speed is consistent with thresholds predicted by a neural summation model. If the signals are presented in different hemi-fields, move in different directions, or both, then behavioural thresholds are predicted by a probability-summation model. We conclude that cross-modal signals have to be co-localised and coincident for effective motion integration. We also argue that facilitation is only seen if the signals contain all localisation cues that would be produced by physical objects.
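The two benchmarks named above are often formalised along the following lines; this is a generic signal-detection sketch, not the authors' exact models, and the unimodal sensitivities are invented for illustration.

from math import erf

def pc_2afc(d_prime):
    # Percent correct in 2AFC under the standard Gaussian model: Phi(d'/sqrt(2)).
    return 0.5 * (1.0 + erf(d_prime / 2.0))

d_a, d_v = 1.0, 0.8   # illustrative unimodal motion sensitivities (d')

# Neural (linear) summation: signals pooled at a common site before a single
# late noise source, so sensitivities add.
d_sum = d_a + d_v

# Independent-channel combination, often used as a probability-summation
# benchmark under Gaussian assumptions: quadratic summation.
d_ind = (d_a**2 + d_v**2) ** 0.5

print(f"unimodal: {pc_2afc(d_a):.2f}, {pc_2afc(d_v):.2f}")
print(f"neural summation: {pc_2afc(d_sum):.2f}  vs independent channels: {pc_2afc(d_ind):.2f}")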

7.
The superior colliculus (SC) plays an important role in integrating visual, auditory and somatosensory information, and in guiding the orientation of the eyes, ears and head. Previously we have shown that cats with unilateral SC lesions showed a preferential loss of multisensory orientation behaviors for stimuli contralateral to the lesion. Surprisingly, this behavioral loss was seen even under circumstances where the SC lesion was far from complete. To assess the physiological changes induced by these lesions, we employed single-unit electrophysiological methods to record from individual neurons in both the intact and damaged SC following behavioral testing in two animals. In the damaged SC of these animals, multisensory neurons were preferentially reduced in incidence, comprising less than 25% of the sensory-responsive population (as compared with 49% on the control side). In those multisensory neurons that remained following the lesion, receptive fields were nearly twofold larger, and less than 25% showed normal patterns of multisensory integration; those that did were located in areas outside of the lesion. These results strongly suggest that the multisensory behavioral deficits seen following SC lesions are the combined result of a loss of multisensory neurons and a loss of multisensory integration in those neurons that remain.

8.
The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the ventriloquism effect. These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification and measures of a participant's uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report on whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly less when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.

9.
Neurobiology of Aging, 2014, 35(12): 2761-2769
Entorhinal grid cells and hippocampal place cells are key systems for mammalian navigation. By combining information from different sensory modalities, they provide abstract representations of space. Given that both structures are among the earliest to undergo age-related neurodegenerative changes, we asked whether age-related navigational impairments are related to deficient integration of navigational cues. Younger and older adults performed a homing task that required using visual landmarks, self-motion information, or a combination of both. Further, a cue-conflict condition assessed the influence of each sensory domain. Our findings revealed performance impairments in the older adults, suggestive of higher noise in the underlying spatial representations. In addition, even though both groups integrated visual and self-motion information to become more accurate and precise, older adults did not place as much influence on visual information as would have been optimal. As these findings were unrelated to potential changes in balance or spatial working memory, this study provides the first evidence that increasing noise and a suboptimal weighting of navigational cues might contribute to the common problems with spatial representations experienced by many older adults. These findings are discussed in the context of the known age-related changes in the entorhinal-hippocampal network.
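The "optimal" weighting referred to above is standardly formalised as maximum-likelihood (reliability-weighted) cue combination; below is a brief sketch under the usual Gaussian assumptions, with illustrative noise variances rather than values from the study.

def mle_combine(mu_vis, var_vis, mu_self, var_self):
    # Reliability-weighted (maximum-likelihood) combination of two Gaussian cues.
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_self)
    mu_comb = w_vis * mu_vis + (1 - w_vis) * mu_self
    var_comb = 1 / (1 / var_vis + 1 / var_self)   # never worse than the better cue
    return mu_comb, var_comb, w_vis

# Cue-conflict probe: landmark and self-motion estimates disagree by 10 deg.
# An observer who under-weights vision (as the older group did) will land
# closer to the self-motion estimate than the optimal w_vis predicts.
print(mle_combine(mu_vis=0.0, var_vis=4.0, mu_self=10.0, var_self=16.0))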

10.
Multisensory integration enables rapid and accurate behavior. To orient in space, sensory information registered initially in different reference frames has to be integrated with the current postural information to produce an appropriate motor response. In some postures, multisensory integration requires convergence of sensory evidence across hemispheres, which would presumably lessen or hinder integration. Here, we examined orienting gaze shifts in humans to visual, tactile, or visuotactile stimuli when the hands were either in a default uncrossed posture or a crossed posture requiring convergence across hemispheres. Surprisingly, we observed the greatest benefits of multisensory integration in the crossed posture, as indexed by reaction time (RT) decreases. Moreover, such shortening of RTs to multisensory stimuli did not come at the cost of increased error propensity. To explain these results, we propose that two accepted principles of multisensory integration, the spatial principle and inverse effectiveness, dynamically interact to aid the rapid and accurate resolution of complex sensorimotor transformations. First, early mutual inhibition of initial visual and tactile responses registered in different hemispheres reduces error propensity. Second, inverse effectiveness in the integration of the weakened visual response with the remapped tactile representation expedites the generation of the correct motor response. Our results imply that the concept of inverse effectiveness, which is usually associated with external stimulus properties, might extend to internal spatial representations that are more complex given certain body postures.
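Multisensory RT gains of this kind are often benchmarked against a parallel race model using Miller's inequality; the abstract does not state which analysis was used here, so the sketch below is generic and the RT samples are invented for illustration.

import numpy as np

def ecdf(samples, t):
    # Empirical cumulative distribution P(RT <= t) evaluated on a time grid.
    samples = np.asarray(samples)
    return np.mean(samples[:, None] <= t, axis=0)

def race_model_violation(rt_v, rt_t, rt_vt, t_grid):
    # Miller's inequality: P(RT<=t | VT) <= P(RT<=t | V) + P(RT<=t | T).
    # Positive values indicate integration beyond a parallel race of channels.
    bound = np.clip(ecdf(rt_v, t_grid) + ecdf(rt_t, t_grid), 0, 1)
    return ecdf(rt_vt, t_grid) - bound

# Illustrative data only (ms); a real analysis uses each participant's RTs.
rng = np.random.default_rng(0)
rt_v  = rng.normal(320, 40, 200)
rt_t  = rng.normal(340, 40, 200)
rt_vt = rng.normal(280, 35, 200)
t = np.linspace(200, 450, 26)
print(race_model_violation(rt_v, rt_t, rt_vt, t).max())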

11.
It has recently been demonstrated that the maturation of normal multisensory circuits in the cortex of the cat takes place over an extended period of postnatal life. Such a finding suggests that the sensory experiences received during this time may play an important role in this developmental process. To test the necessity of sensory experience for normal cortical multisensory development, cats were raised in the absence of visual experience from birth until adulthood, effectively precluding all visual and visual-nonvisual multisensory experiences. As adults, semichronic single-unit recording experiments targeting the anterior ectosylvian sulcus (AES), a well-defined multisensory cortical area in the cat, were initiated and continued at weekly intervals in anesthetized animals. Despite having very little impact on the overall sensory representations in AES, dark-rearing had a substantial impact on the integrative capabilities of multisensory AES neurons. A significant increase was seen in the proportion of multisensory neurons that were modulated by, rather than driven by, a second sensory modality. More important, perhaps, there was a dramatic shift in the percentage of these modulated neurons in which the pairing of weakly effective and spatially and temporally coincident stimuli resulted in response depressions. In normally reared animals such combinations typically give rise to robust response enhancements. These results illustrate the important role sensory experience plays in shaping the development of mature multisensory cortical circuits and suggest that dark-rearing shifts the relative balance of excitation and inhibition in these circuits.

12.
Multisensory neurons in the superior colliculus (SC) typically respond to combinations of stimuli from multiple modalities with enhancements and/or depressions in their activity. Although such changes in response have been shown to follow a predictive set of integrative principles, these principles fail to completely account for the full range of interactions seen throughout the SC population. In an effort to better define this variability, we sought to determine if there were additional features of the neuronal response profile that were predictive of the magnitude of the multisensory interaction. To do this, we recorded from 109 visual-auditory SC neurons while systematically manipulating stimulus intensity. Along with the previously described roles of space, time, and stimulus effectiveness, two features of a neuron's response profile were found to offer predictive value as to the magnitude of the multisensory interaction: spontaneous activity and the level of sensory responsiveness. Multisensory neurons with little or no spontaneous activity and weak sensory responses had the capacity to exhibit large response enhancements. Conversely, neurons with modest spontaneous activity and robust sensory responses exhibited relatively small response enhancements. Together, these results provide a better view into multisensory integration, and suggest substantial heterogeneity in the integrative characteristics of the multisensory SC population.
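The magnitude of a multisensory interaction in SC work is conventionally expressed as percent enhancement over the best unisensory response; the small sketch below uses invented firing rates to illustrate why weakly responsive neurons have more headroom for large proportional enhancements.

def multisensory_enhancement(resp_multi, resp_vis, resp_aud):
    # Percent enhancement relative to the best unisensory response:
    # ME = 100 * (CM - SMmax) / SMmax.
    sm_max = max(resp_vis, resp_aud)
    return 100.0 * (resp_multi - sm_max) / sm_max

# A weakly responsive neuron (low spontaneous rate, weak unisensory drive)
# can show a much larger proportional enhancement than a robust one.
print(multisensory_enhancement(resp_multi=6.0, resp_vis=2.0, resp_aud=1.5))    # 200%
print(multisensory_enhancement(resp_multi=30.0, resp_vis=25.0, resp_aud=20.0)) # 20%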

13.
Human observers combine multiple sensory cues synergistically to achieve greater perceptual sensitivity, but little is known about the underlying neuronal mechanisms. We recorded the activity of neurons in the dorsal medial superior temporal (MSTd) area during a task in which trained monkeys combined visual and vestibular cues near-optimally to discriminate heading. During bimodal stimulation, MSTd neurons combined visual and vestibular inputs linearly with subadditive weights. Neurons with congruent heading preferences for visual and vestibular stimuli showed improvements in sensitivity that parallel behavioral effects. In contrast, neurons with opposite preferences showed diminished sensitivity under cue combination. Responses of congruent cells were more strongly correlated with monkeys' perceptual decisions than were responses of opposite cells, suggesting that the monkey monitored the activity of congruent cells to a greater extent during cue integration. These findings show that perceptual cue integration occurs in nonhuman primates and identify a population of neurons that may form its neural basis.
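One way to read "combined visual and vestibular inputs linearly with subadditive weights" is as a weighted-sum model fit to bimodal firing rates; the sketch below fits such a model to synthetic tuning curves. The Gaussian tuning shapes, noise level, and weight values are assumptions for illustration, not the recorded data or the published fitting procedure.

import numpy as np

rng = np.random.default_rng(1)
headings = np.linspace(-90, 90, 19)                        # heading (deg)
r_ves = 20 * np.exp(-((headings - 10) / 40) ** 2) + 5      # unimodal tuning (synthetic)
r_vis = 25 * np.exp(-((headings - 15) / 35) ** 2) + 5

# Synthetic bimodal response generated with subadditive weights (0.7, 0.6).
r_bi = 0.7 * r_ves + 0.6 * r_vis + rng.normal(0, 1, headings.size)

# Linear model r_bi ~ w_ves * r_ves + w_vis * r_vis + c, fit by least squares.
X = np.column_stack([r_ves, r_vis, np.ones_like(headings)])
w_ves, w_vis, c = np.linalg.lstsq(X, r_bi, rcond=None)[0]
print(f"w_ves = {w_ves:.2f}, w_vis = {w_vis:.2f} (subadditive: both below 1)")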

14.
Integration of multiple sensory cues is essential for precise and accurate perception and behavioral performance, yet the reliability of sensory signals can vary across modalities and viewing conditions. Human observers typically employ the optimal strategy of weighting each cue in proportion to its reliability, but the neural basis of this computation remains poorly understood. We trained monkeys to perform a heading discrimination task from visual and vestibular cues, varying cue reliability randomly. The monkeys appropriately placed greater weight on the more reliable cue, and population decoding of neural responses in the dorsal medial superior temporal area closely predicted behavioral cue weighting, including modest deviations from optimality. We found that the mathematical combination of visual and vestibular inputs by single neurons is generally consistent with recent theories of optimal probabilistic computation in neural circuits. These results provide direct evidence for a neural mechanism mediating a simple and widespread form of statistical inference.
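As a toy illustration of what "population decoding" of heading involves, the sketch below applies maximum-likelihood decoding to a synthetic population of Poisson neurons with Gaussian tuning; the tuning functions, parameters, and decoder details are assumptions and do not reproduce the study's analysis.

import numpy as np

rng = np.random.default_rng(2)
prefs = np.linspace(-90, 90, 37)                  # preferred headings (deg)

def rates(heading, gain=20.0, width=40.0, base=2.0):
    # Population tuning curves (Gaussian, synthetic) for a given heading.
    return base + gain * np.exp(-((heading - prefs) / width) ** 2)

# Spike counts on one simulated trial at a true heading of 5 deg.
counts = rng.poisson(rates(5.0))

# Maximum-likelihood decoding: pick the heading whose predicted rates make
# the observed Poisson counts most probable (constant terms dropped).
grid = np.linspace(-30, 30, 121)
loglik = [np.sum(counts * np.log(rates(h)) - rates(h)) for h in grid]
print("decoded heading:", grid[int(np.argmax(loglik))])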

15.
16.
Previously, it has been shown that synchronising actions with periodic pacing stimuli is unaffected by ageing. However, synchronisation often requires combining evidence across multiple sources of timing information. We have previously shown that the brain integrates multisensory cues to achieve a best estimate of the events in time and subsequently reduces variability in synchronised movements (Elliott et al., Eur J Neurosci 31(10):1828-1835, 2010). Yet, it is unclear if sensory integration of temporal cues in older adults is degraded and whether this leads to reduced synchronisation performance. Here, we test for age-related changes when synchronising actions to multisensory temporal cues. We compared synchronisation performance between young (N = 15, aged 18-37 years) and older adults (N = 15, aged 63-80 years) using a finger-tapping task with auditory and tactile metronomes presented unimodally and bimodally. We added temporal jitter to the auditory metronome to determine whether participants would integrate auditory and tactile signals, with reduced weighting of the auditory metronome as its reliability decreased under bimodal conditions. We found that older adults matched the performance of young adults when synchronising to an isochronous auditory or tactile metronome. When the temporal regularity of the auditory metronome was reduced, older adults' performance was degraded to a greater extent than the young adults' in both unimodal and bimodal conditions. However, proportionally both groups showed similar improvements in synchronisation performance in bimodal conditions compared with the equivalent, auditory-only conditions. We conclude that while older adults become more variable in synchronising to less regular beats, they do not show any deficit in the integration of multisensory temporal cues, suggesting that using multisensory information may help mitigate any deficits in coordinating actions to complex timing cues.

17.
In environments containing sensory events at competing locations, selecting a target for orienting requires prioritization of stimulus values. Although the superior colliculus (SC) is causally linked to the stimulus selection process, the manner in which SC multisensory integration operates in a competitive stimulus environment is unknown. Here we examined how the activity of visual-auditory SC neurons is affected by placement of a competing target in the opposite hemifield, a stimulus configuration that would, in principle, promote interhemispheric competition for access to downstream motor circuitry. Competitive interactions between the targets were evident in how they altered unisensory and multisensory responses of individual neurons. Responses elicited by a cross-modal stimulus (multisensory responses) proved to be substantially more resistant to competitor-induced depression than were unisensory responses (evoked by the component modality-specific stimuli). Similarly, when a cross-modal stimulus served as the competitor, it exerted considerably more depression than did its individual component stimuli, in some cases producing more depression than predicted by their linear sum. These findings suggest that multisensory integration can help resolve competition among multiple targets by enhancing orientation to the location of cross-modal events while simultaneously suppressing orientation to events at alternate locations.

18.
19.
The present study investigated the effects of force feedback in relation to tool use on the multisensory integration of visuo-tactile information. Participants learned to control a robotic tool through a surgical robotic interface. Following tool-use training, participants performed a crossmodal congruency task, by responding to tactile vibrations applied to their hands, while ignoring visual distractors superimposed on the robotic tools. In the first experiment it was found that tool-use training with force feedback facilitates multisensory integration of signals from the tool, as reflected in a stronger crossmodal congruency effect with the force feedback training compared to training without force feedback and to no training. The second experiment extends these findings by showing that training with realistic online force feedback resulted in a stronger crossmodal congruency effect compared to training in which force feedback was delayed. The present study highlights the importance of haptic information for multisensory integration and extends findings from classical tool-use studies to the domain of robotic tools. We argue that such crossmodal congruency effects are an objective measure of robotic tool integration and propose some potential applications in surgical robotics, robotic tools, and human–tool interaction.

20.
We have investigated how visual motion signals are integrated for smooth pursuit eye movements by measuring the initiation of pursuit in monkeys for pairs of moving stimuli of the same or differing luminance. The initiation of pursuit for pairs of stimuli of the same luminance could be accounted for as a vector average of the responses to the two stimuli singly. When stimuli comprised two superimposed patches of moving dot textures, the brighter stimulus suppressed the inputs from the dimmer stimulus, so that the initiation of pursuit became winner-take-all when the luminance ratio of the two stimuli was 8 or greater. The dominance of the brighter stimulus could not be attributed to either the latency difference or the ratio of the eye accelerations for the bright and dim stimuli presented singly. When stimuli comprised either spot targets or two patches of dots moving across separate locations in the visual field, the brighter stimulus had a much weaker suppressive influence; the initiation of pursuit could be accounted for by nearly equal vector averaging of the responses to the two stimuli singly. The suppressive effects of the brighter stimulus also appeared in human perceptual judgments, but again only for superimposed stimuli. We conclude that one locus of the interaction of two moving visual stimuli is shared by perception and action and resides in local inhibitory connections in the visual cortex. A second locus resides deeper in sensory-motor processing and may be more closely related to action selection than to stimulus selection.
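A small sketch of the contrast between vector averaging and winner-take-all read-out, using a luminance-dependent weight as a stand-in for the measured suppression; the weighting function and its exponent are illustrative assumptions, not the authors' fitted model.

import numpy as np

def pursuit_initiation(v1, v2, lum_ratio, k=2.0):
    # Weighted combination of two target velocity vectors.
    # lum_ratio is the luminance of stimulus 1 relative to stimulus 2;
    # k near 0 gives pure vector averaging, large k approaches winner-take-all.
    w1 = lum_ratio**k / (lum_ratio**k + 1.0)
    return w1 * np.asarray(v1) + (1.0 - w1) * np.asarray(v2)

v_bright, v_dim = np.array([10.0, 0.0]), np.array([0.0, 10.0])  # deg/s
for ratio in (1, 8):
    print(ratio, pursuit_initiation(v_bright, v_dim, lum_ratio=ratio))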
