Similar Literature
20 similar documents found (search time: 15 ms)
1.
2.
This study describes a possible mechanism of coding of multisensory information in the anterior ectosylvian visual area of the feline cortex. Extracellular microelectrode recordings from 168 cells were carried out in the anterior ectosylvian sulcal region of halothane-anaesthetized, immobilized, artificially ventilated cats. Ninety-five neurons were found to respond to visual stimuli, 96 responded to auditory stimuli and 45 were bimodal, reacting to both visual and auditory modalities. A large proportion of the neurons exhibited significantly different responses to stimuli appearing in different regions of their extremely large receptive fields. These neurons can therefore provide information via their discharge rate on the site of the stimulus within their receptive field, suggesting that they may serve as panoramic localizers. The bimodal neurons localize bimodal stimulus sources more accurately than either unimodal localizing function alone. Further, the sites of maximal responsivity of the visual, auditory and bimodal neurons are distributed over the whole extent of the large receptive fields. Thus, a large population of such panoramic visual, auditory and multisensory neurons could accurately code the locations of sensory stimuli. Our findings support the notion that there is a distributed population code of multisensory information in the feline associative cortex.

3.
The ability to process motion is crucial for coherent perception and action. While the majority of studies have focused on the unimodal factors that influence motion perception (see, for example, the other chapters in this Special Issue), some researchers have also investigated the extent to which information presented in one sensory modality can affect the perception of motion for stimuli presented in another modality. Although early studies often gave rise to mixed results, the development of increasingly sophisticated psychophysical paradigms is now enabling researchers to determine the spatiotemporal constraints on multisensory interactions in the perception of motion. Recent findings indicate that these interactions stand over and above the multisensory interactions documented previously for static stimuli, such as the oft-cited 'ventriloquism' effect. Neuroimaging and neuropsychological studies are also beginning to elucidate the network of neural structures responsible for the processing of motion information in the different sensory modalities, an important first step toward determining the neural substrates underlying these multisensory contributions to motion perception.

4.
Is audiovisual integration subserved by the superior colliculus in humans?   (Times cited: 1; self-citations: 0; citations by others: 1)
The brain effectively integrates multisensory information to enhance perception. For example, audiovisual stimuli typically yield faster responses than isolated unimodal ones (the redundant signal effect, RSE). Here, we show that the audiovisual RSE is likely subserved by a neural site of integration (neural coactivation), rather than by an independent-channels mechanism such as race models. This neural site is probably the superior colliculus (SC), because an RSE explainable by neural coactivation does not occur with purple or blue stimuli, which are invisible to the SC; such an RSE occurs only for spatially and temporally coincident audiovisual stimuli, in strict adherence to the multisensory responses in the SC of the cat. These data suggest that audiovisual integration in humans occurs very early during sensory processing, in the SC.
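The race-model logic this abstract rules out can be made concrete. A minimal sketch, assuming simulated reaction times rather than the study's data (all distributions and parameters below are illustrative): the independent-channels bound is Miller's race-model inequality, P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V), and empirical violations of it are taken as evidence for neural coactivation.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Test Miller's race-model inequality:
    P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V).
    Returns the bimodal CDF minus the (capped) race-model bound;
    positive values indicate a violation, i.e. coactivation."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return cdf(rt_av, t_grid) - bound

# Toy reaction times (ms); the bimodal condition is strongly facilitated
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 500)   # auditory-only
rt_v = rng.normal(300, 40, 500)   # visual-only
rt_av = rng.normal(240, 30, 500)  # audiovisual
t = np.arange(150, 450, 10)
violation = race_model_violation(rt_a, rt_v, rt_av, t)
print(violation.max() > 0)  # True: the race-model bound is exceeded
```

With this much bimodal facilitation the fast tail of the AV distribution exceeds the summed unimodal CDFs, which is the empirical signature the study attributes to a neural site of integration.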

5.
Effects of spatial congruity on audio-visual multimodal integration   (Times cited: 3; self-citations: 0; citations by others: 3)
Spatial constraints on multisensory integration of auditory (A) and visual (V) stimuli were investigated in humans using behavioral and electrophysiological measures. The aim was to find out whether cross-modal interactions between A and V stimuli depend on their spatial congruity, as has been found for multisensory neurons in animal studies (Stein & Meredith, 1993). Randomized sequences of unimodal (A or V) and simultaneous bimodal (AV) stimuli were presented to right- or left-field locations while subjects made speeded responses to infrequent targets of greater intensity that occurred in either or both modalities. Behavioral responses to the bimodal stimuli were faster and more accurate than to the unimodal stimuli for both same-location and different-location AV pairings. The neural basis of this cross-modal facilitation was studied by comparing event-related potentials (ERPs) to the bimodal AV stimuli with the summed ERPs to the unimodal A and V stimuli. These comparisons revealed neural interactions localized to the ventral occipito-temporal cortex (at 190 msec) and to the superior temporal cortical areas (at 260 msec) for both same- and different-location AV pairings. In contrast, ERP interactions that differed according to spatial congruity included a phase and amplitude modulation of visual-evoked activity localized to the ventral occipito-temporal cortex at 100-400 msec and an amplitude modulation of activity localized to the superior temporal region at 260-280 msec. These results demonstrate overlapping but distinctive patterns of multisensory integration for spatially congruent and incongruent AV stimuli.
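The additive-model comparison used in this study (bimodal ERP versus the sum of the unimodal ERPs) can be sketched as follows. The waveforms here are synthetic Gaussian peaks, not data from the study; only the arithmetic of the comparison is faithful to the method described:

```python
import numpy as np

def multisensory_interaction(erp_av, erp_a, erp_v):
    """Additive-model test: any nonzero difference
    ERP(AV) - [ERP(A) + ERP(V)] indicates a cross-modal neural
    interaction beyond independent unimodal processing.
    Inputs: arrays of shape (n_timepoints,) or (n_channels, n_timepoints)."""
    return np.asarray(erp_av) - (np.asarray(erp_a) + np.asarray(erp_v))

# Toy single-channel example (arbitrary units, one sample per ms)
t = np.arange(0, 400)
erp_a = np.exp(-0.5 * ((t - 100) / 20.0) ** 2)        # auditory peak near 100 ms
erp_v = 0.8 * np.exp(-0.5 * ((t - 150) / 25.0) ** 2)  # visual peak near 150 ms
# Bimodal response = sum of unimodal responses plus a superadditive component
erp_av = erp_a + erp_v + 0.3 * np.exp(-0.5 * ((t - 190) / 15.0) ** 2)
inter = multisensory_interaction(erp_av, erp_a, erp_v)
print(int(t[np.argmax(inter)]))  # 190: latency of the superadditive interaction
```

In the study this difference wave, localized in time and cortical space, is what distinguishes the spatially congruent from the incongruent AV pairings.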

6.
We studied the responses to sensory stimulation in two diencephalic areas, the central posterior nucleus of the dorsal thalamus (CP) and the anterior tuberal nucleus of the hypothalamus (TA). In both the CP and the TA, units sensitive to acoustic (500-Hz sound), hydrodynamic (25-Hz dipole stimulus), and visual (640-nm light flash) stimuli were found. In the CP, most units were unimodal and responded exclusively to visual stimulation. In contrast, in the TA, most units responded to more than one modality. The data suggest that the CP is primarily involved in the unimodal processing of sensory information, whereas the TA may be involved in multisensory integration.

7.
Neurons in the superior colliculus (SC) integrate stimuli of different modalities. In this work, a mathematical model of the integrative response of SC neurons is presented, to gain deeper insight into the possible mechanisms involved and into individual differences in integrative abilities. The model includes two unimodal areas (auditory and visual), which communicate via feedforward and feedback synapses with a third multisensory area. Each neuron is represented via a sigmoidal relationship and first-order dynamics. Neurons in the same area interact via lateral synapses. Simulations show that the model, with a basal parameter set, can mimic various responses described in the literature: (i) multimodal enhancement in response to two cross-modal stimuli within the receptive field, according to the inverse-effectiveness principle; (ii) within-modality suppression and cross-modality suppression by a stimulus (of the same or the other modality) placed outside the receptive field. Sensitivity analysis on the model parameters demonstrates that different classes of neurons observed in the literature (such as neurons that exhibit within-modality suppression without cross-modality suppression, or neurons with asymmetrical cross-modality suppression) can be reproduced by simply modifying synaptic strengths in the multimodal area. Finally, examples of the possible role of feedback mechanisms in ambiguous conditions (such as reinforcement of a poor percept by a second cross-modal stimulus, or ventriloquism) are shown and critically discussed. The model may be of value in assessing the different mechanisms responsible for multisensory integration in the SC and, in the future, in studying neural plasticity in multisensory systems during development or rehabilitation.
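A heavily simplified sketch of the model's core ingredients: sigmoidal activation, first-order dynamics, and feedforward convergence of two unimodal units onto a multisensory unit. All parameter values below are illustrative choices, not those of the published model, and the lateral and feedback synapses it describes are omitted; even so, the sketch reproduces the inverse-effectiveness behavior listed under (i):

```python
import numpy as np

def sigmoid(x, slope, theta):
    """Sigmoidal input-output relationship of a model neuron."""
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

def steady_state(drive, slope, theta, dt=0.1, tau=1.0, steps=200):
    """First-order dynamics tau*dz/dt = -z + sigmoid(drive),
    iterated (Euler) to an approximate steady state."""
    z = 0.0
    for _ in range(steps):
        z += (dt / tau) * (sigmoid(drive, slope, theta) - z)
    return z

def responses(aud, vis):
    """Two unimodal units feed a multisensory unit via feedforward
    synapses (unit weights here, for illustration)."""
    r_a = steady_state(aud, slope=0.3, theta=20.0)
    r_v = steady_state(vis, slope=0.3, theta=20.0)
    r_m = steady_state(r_a + r_v, slope=8.0, theta=0.5)
    return r_a, r_v, r_m

def enhancement(aud, vis):
    """Percent multisensory enhancement relative to the best unimodal response."""
    cm = responses(aud, vis)[2]
    sm = max(responses(aud, 0.0)[2], responses(0.0, vis)[2])
    return 100.0 * (cm - sm) / sm

weak, strong = enhancement(18.0, 18.0), enhancement(35.0, 35.0)
print(weak > strong)  # True: weaker stimuli yield a larger percentage gain
```

The nonlinearity of the multisensory unit is what produces inverse effectiveness: near-threshold unimodal drives sum into the steep part of the sigmoid, while strong drives are already near saturation.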

8.
Schulz M, Ross B, Pantev C. Neuroreport 2003; 14(1): 157-161
The aim of this study was to compare multimodal information processing in the somatosensory and auditory cortices and related multimodal areas in musicians (trumpet players) and non-musicians. Magnetoencephalographic (MEG) activity was recorded in response to five stimulus conditions from 10 professional trumpet players and nine musically untrained control subjects. Somatosensory and auditory stimuli were presented alone or in combination. Our data suggest that musicians, in general, process multisensory stimuli differently from non-musicians. When the lip was stimulated in professional trumpet players, a multimodal interaction (expressed as the difference between the multimodal response and the sum of the unimodal responses) in the corresponding somatosensory cortex showed a positive peak at 33 ms, which was not found in the control group. Conversely, the control group showed a significant interaction of opposite polarity around 60-80 ms. We suggest that training-induced reorganization in musicians leads to a qualitatively different way of processing multisensory information, favoring an early stage of cortical processing that is modified by the connections between multimodal and auditory neurons from the thalamus to the primary somatosensory area.

9.
Units were recorded extracellularly from the caudate nucleus (CN) of cats during movement. The majority of CN units fired during sensory-triggered movements rather than movements in general. However, sensory stimulation was a necessary but not a sufficient condition for CN unit responding; stimuli caused unit responses only when movements were evoked. Additionally, only movements triggered by particular stimuli were associated with unit responding. These unit responses were not sensory because neural activity changes were associated with movement onset rather than stimulus presentation. These data are in accord with recent suggestions of a sensory-based motor function for the basal ganglia.

10.
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.

11.
While multisensory integration is thought to occur in higher hierarchical cortical areas, recent studies in man and monkey have revealed plurisensory modulations of activity in areas previously thought to be unimodal. To determine the cortical network involved in multisensory interactions, we performed multiple injections of different retrograde tracers in unimodal auditory (core), somatosensory (1/3b) and visual (V2 and MT) cortical areas of the marmoset. We found three types of heteromodal connections linking unimodal sensory areas. Visuo-somatosensory projections were observed originating from visual areas [probably the ventral and dorsal fundus of the superior temporal area (FSTv and FSTd), and middle temporal crescent (MTc)] toward areas 1/3b. Somatosensory projections to the auditory cortex were present from S2 and the anterior bank of the lateral sulcus. Finally, a visuo-auditory projection arises from an area anterior to the superior temporal sulcus (STS) toward the auditory core. Injections in different sensory regions allow us to define the frontal convexity and the temporal opercular caudal cortex as putative polysensory areas. A quantitative analysis of the laminar distribution of projecting neurons showed that heteromodal connections could be either feedback or feedforward. Taken together, our results provide the anatomical pathway for multisensory integration at low levels of information processing in the primate and argue against a strict hierarchical model.

12.
The integration of multiple sensory modalities is a key aspect of brain function, allowing animals to take advantage of concurrent sources of information to make more accurate perceptual judgments. For many years, multisensory integration in the cerebral cortex was deemed to occur only in high-level "polysensory" association areas. However, more recent studies have suggested that cross-modal stimulation can also influence neural activity in areas traditionally considered to be unimodal. In particular, several human neuroimaging studies have reported that extrastriate areas involved in visual motion perception are also activated by auditory motion, and may integrate audiovisual motion cues. However, the exact nature and extent of the effects of auditory motion on the visual cortex have not been studied at the single neuron level. We recorded the spiking activity of neurons in the middle temporal (MT) and medial superior temporal (MST) areas of anesthetized marmoset monkeys upon presentation of unimodal stimuli (moving auditory or visual patterns), as well as bimodal stimuli (concurrent audiovisual motion). Despite robust, direction selective responses to visual motion, none of the sampled neurons responded to auditory motion stimuli. Moreover, concurrent moving auditory stimuli had no significant effect on the ability of single MT and MST neurons, or populations of simultaneously recorded neurons, to discriminate the direction of motion of visual stimuli (moving random dot patterns with varying levels of motion noise). Our findings do not support the hypothesis that direct interactions between MT, MST and areas low in the auditory hierarchy underlie audiovisual motion integration.

13.
Neurons in the intermediate and deep layers of the superior colliculus (SC) often exhibit sensory-related activity in addition to discharging for saccadic eye movements. These two patterns of activity can combine so that modifications of the sensory response can lead to changes in orienting behaviour. Can behavioural factors, however, influence sensory activity? In this study of rhesus monkeys, we isolate one behavioural factor, the state of visual fixation, and examine its influences on sensory processing and multisensory integration in the primate SC. Two interleaved fixation conditions were used: a FIX condition requiring exogenous fixation of a visible fixation point; and a FIX-BLINK condition, requiring endogenous fixation in the absence of a visible fixation point. Neurons of the SC were influenced by fixation state, exhibiting both lower levels of sensory activity and reduced multisensory interactions when fixation was exogenously engaged on a visible fixation point. These results are consistent with active visual fixation suppressing responses to extraneous stimuli, and thus demonstrate that sensory processing and multisensory responses in the SC are not dependent solely on the physical properties of the sensory environment, but are also dynamically influenced by the behavioural state of the animal.

14.
The role of the caudate nucleus (CN) in motor control has been widely studied. Less attention has been paid to the dynamics of visual feedback in motor actions, which is a relevant function of the basal ganglia during the control of eye and body movements. We therefore set out to analyse the visual information processing of neurons in the feline CN. Extracellular single-unit recordings were performed in the CN, where the neuronal responses to drifting gratings of various spatial and temporal frequencies were recorded. The responses of the CN neurons were modulated by the temporal frequency of the grating. The CN units responded optimally to gratings of low spatial frequencies and exhibited low spatial resolution and fine spatial frequency tuning. By contrast, the CN neurons preferred high temporal frequencies, and exhibited high temporal resolution and fine temporal frequency tuning. The spatial and temporal visual properties of the CN neurons enable them to act as spatiotemporal filters. These properties are similar to those observed in certain feline extrageniculate visual structures, i.e. in the superior colliculus, the suprageniculate nucleus and the anterior ectosylvian cortex, but differ strongly from those of the primary visual cortex and the lateral geniculate nucleus. Accordingly, our results suggest a functional relationship of the CN to the extrageniculate tecto-thalamo-cortical system. This system of the mammalian brain may be involved in motion detection, especially in velocity analysis of moving objects, facilitating the detection of changes during the animal's movement.

15.
Navigation in space requires the brain to combine information arising from different sensory modalities with the appropriate motor commands. Sensory information about self-motion in particular is provided by the visual and the vestibular system. The macaque ventral intraparietal area (VIP) has recently been shown to be involved in the processing of self-motion information provided by optical flow, to contain multimodal neurons and to receive input from areas involved in the analysis of vestibular information. By studying responses to linear vestibular, visual and bimodal stimulation we aimed at gaining more insight into the mechanisms involved in multimodal integration and self-motion processing. A large proportion of cells (77%) revealed a significant response to passive linear translation of the monkey. Of these cells, 59% encoded information about the direction of self-motion. The phase relationship between vestibular stimulation and neuronal responses covered a broad spectrum, demonstrating the complexity of the spatio-temporal pattern of vestibular information encoded by neurons in area VIP. For 53% of the direction-selective neurons the preferred directions for stimuli of both modalities were the same; they were opposite for the remaining 47% of the neurons. During bimodal stimulation the responses of neurons with opposite direction selectivity in the two modalities were determined either by the visual (53%) or the vestibular (47%) modality. These heterogeneous responses to unimodal and bimodal stimulation might be used to prevent misjudgements about self- and/or object-motion, which could be caused by relying on information of one sensory modality alone.

16.
The deep superior colliculus (DSC) integrates multisensory input and triggers an orienting movement toward the source of stimulation (target). It would seem reasonable to suppose that input of an additional modality should always increase the amount of information received by a DSC neuron concerning a target. However, of all DSC neurons studied, only about one half in the cat and one-quarter in the monkey were multimodal. The rest received only unimodal input. Multimodal DSC neurons show the properties of multisensory enhancement, in which the neural response to an input of one modality is augmented by input of another modality, and of inverse effectiveness, in which weaker unimodal responses produce a higher percentage enhancement. Previously, we demonstrated that these properties are consistent with the hypothesis that DSC neurons use Bayes' rule to compute the posterior probability that a target is present given their stochastic sensory inputs. Here we use an information theoretic analysis of our Bayesian model to show that input of an additional modality may indeed increase target information, but only if input received from the initial modality does not completely reduce uncertainty concerning the presence of a target. Unimodal DSC neurons may be those whose unimodal input fully reduces target uncertainty and therefore have no need for input of another modality.
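The Bayesian computation described above can be illustrated with a toy example. Assuming, for illustration only, independent Poisson spike counts per modality with "target present" and "target absent" firing rates and a flat prior (none of these numbers come from the study), Bayes' rule shows the paper's central point: a second modality raises the posterior substantially only when the first modality leaves residual uncertainty.

```python
import math

def posterior_target(counts, rates_on, rates_off, prior=0.5):
    """Bayes' rule for P(target present | spike counts), assuming each
    modality delivers an independent Poisson count with mean rates_on[i]
    if the target is present and rates_off[i] if it is absent."""
    def pois(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    like_on = math.prod(pois(k, lam) for k, lam in zip(counts, rates_on))
    like_off = math.prod(pois(k, lam) for k, lam in zip(counts, rates_off))
    p_on = like_on * prior
    return p_on / (p_on + like_off * (1.0 - prior))

# Ambiguous unimodal input: a second modality sharply raises the posterior.
p_uni = posterior_target([4], [6.0], [2.0])
p_bi = posterior_target([4, 5], [6.0, 6.0], [2.0, 2.0])

# Decisive unimodal input: little uncertainty is left for modality 2 to remove.
p_uni_strong = posterior_target([15], [6.0], [2.0])
p_bi_strong = posterior_target([15, 5], [6.0, 6.0], [2.0, 2.0])
print(p_bi - p_uni > p_bi_strong - p_uni_strong)  # True
```

This mirrors the information-theoretic argument: a hypothetical unimodal DSC neuron whose single input already drives the posterior to near 1 gains essentially nothing from a second modality.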

17.
On the neuronal basis for multisensory convergence: a brief overview   (Times cited: 4; self-citations: 0; citations by others: 4)
For multisensory stimulation to effect perceptual and behavioral responses, information from the different sensory systems must converge on individual neurons. A great deal is already known about processing within the separate sensory systems, as well as about many of the integrative and perceptual/behavioral effects of multisensory processing. However, virtually nothing is known about the functional architecture that underlies multisensory convergence, even though it is an integral step in this processing sequence. This paper seeks to summarize the findings pertinent to multisensory convergence and to initiate the identification of specific convergence patterns that may underlie different multisensory perceptual and behavioral effects.

18.
Response amplification in sensory-specific cortices during crossmodal binding   (Times cited: 12; self-citations: 0; citations by others: 12)
Integrating information across the senses can enhance our ability to detect and classify stimuli in the environment. For example, auditory speech perception is substantially improved when the speaker's face is visible. In an fMRI study designed to investigate the neural mechanisms underlying these crossmodal behavioural gains, bimodal (audio-visual) speech was contrasted against both unimodal (auditory and visual) components. Significant response enhancements in auditory (BA 41/42) and visual (V5) cortices were detected during bimodal stimulation. This effect was found to be specific to semantically congruent crossmodal inputs. These data suggest that the perceptual improvements effected by synthesizing matched multisensory inputs are realised by reciprocal amplification of the signal intensity in participating unimodal cortices.

19.
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. We therefore measured the event-related potential (ERP) responses of 16 participants performing multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this aspect also selectively modulates cortical activity in response to multisensory stimuli.

20.
Neurons and behavior: the same rules of multisensory integration apply   (Times cited: 6; self-citations: 0; citations by others: 6)
Combinations of different sensory cues (e.g. auditory and visual) that are coincident in space enhance the responses of multisensory superior colliculus neurons, while the responses of these same neurons are depressed if the stimuli are separated in space. Using a behavioral paradigm modeled after that used in physiological studies, the present experiments demonstrate that the rules governing multisensory integration at the level of the single neuron also predict the responses to these stimuli in the intact behaving animal.
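The single-neuron rule referred to here is usually quantified with the standard enhancement index of Stein & Meredith: the percent change of the bimodal response relative to the best unimodal response. A minimal sketch with made-up spike counts:

```python
def multisensory_enhancement(bimodal, best_unimodal):
    """Standard enhancement index: percent change of the bimodal
    response relative to the best unimodal response.
    Positive values = enhancement (spatially coincident cues);
    negative values = depression (spatially disparate cues)."""
    return 100.0 * (bimodal - best_unimodal) / best_unimodal

# Coincident cues: 12 spikes bimodal vs 8 spikes best unimodal
print(multisensory_enhancement(12, 8))  # 50.0 (% enhancement)
# Disparate cues: response depressed below the best unimodal level
print(multisensory_enhancement(5, 8))   # -37.5 (% depression)
```

The behavioral paradigm in this paper applies the same index to orienting performance, which is what lets the neuronal and behavioral results be compared on a common scale.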


Copyright © Beijing Qinyun Technology Development Co., Ltd.  Beijing ICP License No. 09084417