Similar Articles
1.
Although a number of studies have demonstrated the effects of altered prenatal experience on subsequent behavioral development, how these effects are achieved remains a topic of enduring interest. The present study examined the immediate effects of unimodal and multimodal prenatal sensory stimulation on physiological and behavioral arousal in bobwhite quail embryos. Embryos were videotaped and their heart rate was monitored during a 4-min exposure period to (a) no supplemental sensory stimulation, (b) unimodal auditory stimulation, (c) unimodal visual stimulation, (d) two sources of concurrent auditory stimulation, or (e) concurrent auditory/visual stimulation. Results indicated that quail embryos' overall activity levels and heart rate can be significantly affected by the type of prenatal sensory stimulation provided during the period prior to hatching. In particular, multimodal stimulation increased both behavioral activity levels and heart rate compared to controls. Across the unimodal and intramodal groups, however, behavioral and physiological measures revealed different patterns of activity in response to supplemental sensory stimulation, highlighting the value of using multiple levels of analysis in exploring arousal mechanisms involved in prenatal perceptual responsiveness.

2.
The ventral intraparietal area (VIP) receives converging inputs from visual, somatosensory, auditory and vestibular systems that use diverse reference frames to encode sensory information. A key issue is how VIP combines those inputs. We mapped the visual and tactile receptive fields of multimodal VIP neurons in macaque monkeys trained to gaze at three different stationary targets. Tactile receptive fields were found to be encoded in a single somatotopic, or head-centered, reference frame, whereas visual receptive fields were widely distributed between eye- and head-centered coordinates. These findings are inconsistent with a remapping of all sensory modalities into a common frame of reference. Instead, they support an alternative model of multisensory integration based on multidirectional sensory predictions (such as predicting the location of a visual stimulus given where it is felt on the skin and vice versa). This approach can also explain related findings in other multimodal areas.
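The eye- versus head-centered distinction above has a simple operational signature: a head-centered receptive field keeps the same spatial location across gaze directions, whereas an eye-centered field shifts with the eyes. A minimal Python sketch of that diagnostic, with hypothetical gaze angles and field positions (none taken from the study):

# Receptive-field (RF) center in screen coordinates, measured at three
# gaze directions (degrees). All numbers are invented for illustration.
gaze_directions = [-10.0, 0.0, 10.0]

def rf_centers(head_centered, base_position=5.0):
    # A head-centered RF stays fixed in space; an eye-centered RF moves
    # with the eyes, so its screen position is base + gaze.
    return [base_position if head_centered else base_position + g
            for g in gaze_directions]

print(rf_centers(True))   # [5.0, 5.0, 5.0]   -> head-centered (tactile-like)
print(rf_centers(False))  # [-5.0, 5.0, 15.0] -> eye-centered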

3.
Sensory dominance in combinations of audio, visual and haptic stimuli
Participants presented with auditory, visual, or bi-sensory audio–visual stimuli in a speeded discrimination task fail to respond to the auditory component of the bi-sensory trials significantly more often than they fail to respond to the visual component—a 'visual dominance' effect. The current study investigated the sensory dominance phenomenon further in all combinations of auditory, visual and haptic stimuli. We found a similar visual dominance effect in bi-sensory trials of combined haptic–visual stimuli, but no bias towards either sensory modality in bi-sensory trials of haptic–auditory stimuli. When presented with tri-sensory trials of combined auditory–visual–haptic stimuli, participants made more errors of responding only to two corresponding sensory signals than errors of responding only to a single sensory modality; however, there were no biases towards either sensory modality (or sensory pairs) in the distribution of either type of error (i.e. responding only to a single stimulus or to pairs of stimuli). These results suggest that while vision can dominate both the auditory and the haptic sensory modalities, this dominance is limited to bi-sensory combinations in which the visual signal is combined with a single other stimulus. In a tri-sensory combination, when a visual signal is presented simultaneously with both the auditory and the haptic signals, the probability of missing two signals is much smaller than that of missing only one signal, and the visual dominance therefore disappears.
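The tri-sensory error analysis above amounts to classifying each error trial by how many of the three presented signals went unreported. A minimal sketch with hypothetical response records (trial contents are invented for illustration):

from collections import Counter

# Each trial records which of the three presented signals the participant
# responded to (hypothetical data).
trials = [
    {"visual", "auditory"},           # missed the haptic signal only
    {"visual", "haptic"},             # missed the auditory signal only
    {"visual"},                       # missed auditory and haptic
    {"visual", "auditory", "haptic"}  # correct trial, no miss
]

# Tally error trials by number of signals missed (1 or 2).
error_sizes = Counter(3 - len(responded)
                      for responded in trials if len(responded) < 3)
print(error_sizes)  # Counter({1: 2, 2: 1})

In this toy tally, missing a single signal (key 1) is more frequent than missing two (key 2), the same direction as the result reported above.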

4.
We used event-related functional magnetic resonance imaging to study the neural correlates of endogenous spatial attention for vision and touch. We examined activity associated with attention-directing cues (central auditory pure tones), symbolically instructing subjects to attend to one hemifield or the other prior to upcoming stimuli, for a visual or tactile task. In different sessions, subjects discriminated either visual or tactile stimuli at the covertly attended side, during bilateral visuotactile stimulation. To distinguish cue-related preparatory activity from any modulation of stimulus processing, on some trials only the auditory cue was unpredictably presented. The use of attend-vision and attend-touch blocks revealed whether preparatory attentional effects were modality-specific or multimodal. Unimodal effects of spatial attention were found in somatosensory cortex for attention to touch, and in occipital areas for attention to vision, both contralateral to the attended side. Multimodal spatial effects (i.e. effects of attended side irrespective of task-relevant modality) were detected in the contralateral intraparietal sulcus, traditionally considered a multimodal brain region, and also in the middle occipital gyrus, an area traditionally considered purely visual. Critically, all these activations were observed even on cue-only trials, when no visual or tactile stimuli were subsequently presented. Endogenous shifts of spatial attention result in changes of brain activity prior to the presentation of target stimulation (baseline shifts). Here, we show for the first time the separable multimodal and unimodal components of such preparatory activations. Additionally, irrespective of the attended side and modality, attention-directing auditory cues activated a network of superior frontal and parietal association areas that may play a role in the voluntary control of spatial attention for both vision and touch.

5.
Convergence of inputs from different sensory modalities onto individual neurons is a phenomenon that occurs widely throughout the brain at many phyletic levels and appears to represent a basic neural mechanism by which an organism integrates complex environmental stimuli. In the present study, neurons in the superior colliculus (SC) were used as a model to examine how single neurons deal with simultaneous cues from different sensory modalities (e.g., visual, auditory, somatosensory). The functional result of multisensory convergence on an individual cell was determined by comparing the responses evoked from it by a combined-modality (multimodal) stimulus with those elicited by each (unimodal) component of that stimulus presented alone. Superior colliculus cells exhibited profound changes in their activity when individual sensory stimuli were combined. These "multisensory interactions" were found to be widespread among deep laminae cells and fell into one of two functional categories: response enhancement, characterized by a significant increase in the number of discharges evoked, and response depression, characterized by a significant decrease in the discharges elicited. Multisensory response interactions most often reflected a multiplicative, rather than summative, change in activity. Their absolute magnitude varied from cell to cell and, when stimulus conditions were altered, within the same cell. However, the percentage change of enhanced interactions was generally inversely related to the vigor of the responses that could be evoked by presenting each unimodal stimulus alone, suggesting that the potential for response amplification was greatest when responses evoked by individual stimuli were weakest. The majority of cells exhibiting multisensory characteristics were demonstrated to have descending efferent projections and thus had access to premotor and motor areas of the brain stem and spinal cord involved in SC-mediated attentive and orientation behaviors. These data show that multisensory convergence provides the descending efferent cells of the SC with a dynamic response character. The responses of these cells and the SC-mediated behaviors that they underlie need not be immutably tied to the presence of any single stimulus, but can vary in response to the particular complex of stimuli present in the environment at any given moment.
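The multiplicative interactions described here are conventionally quantified with a multisensory enhancement index: the percent change of the multimodal response relative to the most effective unimodal response. A sketch under that standard convention (the spike counts are hypothetical; the abstract itself does not give the formula):

def enhancement_index(multimodal_response, unimodal_responses):
    # Percent change of the combined-stimulus response relative to the
    # best single-modality response.
    best_unimodal = max(unimodal_responses)
    return 100.0 * (multimodal_response - best_unimodal) / best_unimodal

# Hypothetical mean spike counts per trial. Note the inverse relationship:
# the weaker the unimodal responses, the larger the proportionate gain.
print(enhancement_index(9.0, [3.0, 2.0]))  # 200.0 (% enhancement)
print(enhancement_index(4.0, [1.0, 0.5]))  # 300.0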

6.
This study examined the relationship between unimodal and multimodal sensory stimulation and their effects on prenatal auditory learning in bobwhite quail embryos. Embryos exposed to a maternal call in the 24 hr prior to hatching (unimodal condition) significantly preferred this familiar call over an unfamiliar call in postnatal testing, but failed to demonstrate this preference when the maternal call was presented concurrently with non-synchronized patterned light (multimodal condition). To further explore this interference effect, we provided one group of embryos concurrent exposure to a maternal call and patterned light for 12 hr followed by 12 hr exposure to the call alone (multimodal-->unimodal call). This group failed to prefer the familiar call during postnatal testing. In contrast, reversing the order of presentation during prenatal exposure (unimodal call-->multimodal) led a second group of subjects to significantly prefer the familiar call, suggesting that the order-dependent timing of sensory stimulation can significantly impact prenatal auditory learning. Experiment 3 examined the influence of modality versus timing of sensory stimulation on prenatal auditory learning by providing three groups of embryos with exposure to a maternal call during the 12 hr prior to hatching and by varying the duration of visual stimulation. Results indicate that 12 hr unimodal exposure to patterned light does not support prenatal auditory learning when it is followed by 12 hr exposure to multimodal stimulation (light-->multimodal), but can facilitate prenatal auditory learning when it is followed by unimodal exposure to the call alone (light-->call). Results are discussed in terms of intersensory relationships during perinatal development.

7.
Event-related brain potentials were recorded from the scalp while subjects detected visual, auditory, and somatosensory stimuli presented near threshold. The waveforms were characterized by large, late-positive (P3) waves to signal detections in all three modalities. The scalp distributions of these P3s revealed no substantial differences among the three modalities. There were, however, reliable latency and amplitude differences, with the P3 to visual signals occurring later than to somatic (Study 1) or to auditory (Study 2) signals. Further, the P3s to detected visual signals were substantially larger than those to auditory or somatosensory signals. Taken together, the data suggest that P3 waves to all modalities arise from a common neural generating system but that visual signals access this system in a different fashion from the other modalities.

8.
The thalamus has been described as a "relay station" for sensory information from most sensory modalities projecting to cortical areas. Injury to the thalamus may therefore result in multimodal sensory and motor deficits. In the present study, a 61-year-old woman suffered a right thalamic cerebral vascular accident (CVA; as evidenced by a computerised tomography [CT] scan). Secondary to this incident, she complained of altered sensations across multiple sensory modalities, including olfactory, visual, auditory, tactile, temperature, and pain sensation. Interestingly, during recovery from the thalamic CVA, the patient reported hallucinations in all the modalities cited above. Multimodal dysaesthesias (odd sensations) and hallucinations showed reliable laterality in affective valence across modalities, with positive associations within right hemispace and negative associations within left hemispace. Overall, the results support a multimodal role of the thalamus and provide evidence for the lateralisation of positive and negative affect within the right and left hemispheres, respectively.

9.
Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.
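The "simple parameterization" is not spelled out in this summary; one plausible reading is an additive model in which the response to an ignored stimulus is reduced by within-modality spatial attention or by cross-modal attention, with a partial recovery when the attended stimulus shares the ignored stimulus's side of space. All names and weights below are assumptions for illustration, not the study's actual model:

def ignored_response(baseline, attend_same_modality, attend_same_side,
                     w_spatial=1.0, w_crossmodal=0.5, w_crossmodal_spatial=0.3):
    # Hypothetical additive model of the fMRI response to an ignored stimulus.
    r = baseline
    if attend_same_modality:
        r -= w_spatial            # spatial attention within the same modality
    else:
        r -= w_crossmodal         # attention directed to the other modality
        if attend_same_side:
            r += w_crossmodal_spatial  # cross-modal spatial attention benefit
    return r

# With V1-like weights (w_spatial > w_crossmodal) the ignored-visual response
# is weakest under same-modality attention; reversing the ordering of the two
# weights would reproduce the opposite, MT+-like pattern reported above.
print(ignored_response(2.0, attend_same_modality=True, attend_same_side=False))   # 1.0
print(ignored_response(2.0, attend_same_modality=False, attend_same_side=True))   # 1.8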

10.
The present study was designed to assess the patterning of occipital and sensorimotor EEG activation during self-generated visual and kinesthetic imagery. Twenty subjects were requested to imagine, in separate trials, a flashing light, a tapping sensation on the right forearm, and both the light and the tapping together. Prior to the imagery trials, subjects were exposed to the stimuli which they were asked to subsequently imagine. EEG was recorded from the left occipital and left sensorimotor regions, filtered for alpha and quantified on-line. The results indicated that self-generated visual imagery elicited greater relative occipital activation than comparable kinesthetic imagery. The imagine-both condition fell predictably in between the two unimodal imagery conditions. The difference between visual and kinesthetic imagery was primarily a function of greater occipital activation during the former versus the latter task. No difference in overall alpha abundance among the three imagery tasks was found. These findings suggest that the self-generation of imagery in different modalities elicits specific changes in the sensory regions of the brain responsible for processing information in the relevant modalities.

11.
Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in visual and auditory sensory pathways.
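Recalibration of this kind is typically read out as a shift of the point of subjective simultaneity (PSS) after adaptation; with two opposing asynchronies segregated in space, the diagnostic is a PSS shift of opposite sign at each location. A toy sketch of that readout (all values hypothetical):

# Hypothetical PSS estimates in ms (positive = audio must lag vision to
# appear simultaneous), before and after adapting concurrently to
# vision-first asynchrony on the left and audio-first on the right.
pss = {
    "left_adapted_vision_first": {"pre": 0.0, "post": 25.0},
    "right_adapted_audio_first": {"pre": 0.0, "post": -20.0},
}

for location, v in pss.items():
    shift = v["post"] - v["pre"]
    print(f"{location}: recalibration shift = {shift:+.1f} ms")
# Opposite-signed shifts at the two locations would indicate concurrent,
# spatially constrained recalibration, as reported above.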

12.
Sense of agency is the way in which we understand the causal relationships between our actions and sensory events. Agency is implicitly measured using intentional binding paradigms, where voluntary self-made actions and consequential sensory events are perceived as shifted closer together in time. However, a crucial question remains as to how we understand the relationship between others' actions and sensory events. Do we use similar binding processes as for our own actions? Previous attempts to investigate this phenomenon in others' actions have reached no clear consensus. Therefore, in an attempt to understand how we attribute the causal relationships between others' actions and sensory events, we investigated intentional binding in others' actions using an interval estimation paradigm. In a first experiment, participants were required to make a button-press response to indicate the perceived interval between a self-made action and a tone, between a closely matched observed action and tone, and between two tones. For both self-made and observed actions, we found a significant perceived shortening of the interval between the actions and tones as compared with the interval between two tones; thus, intentional binding was found for both self-made and observed actions. In a second experiment, we validated the findings of the first by contrasting the perceived intervals between an observed action and tone with a matched visual–auditory stimulus and a tone. We again found a significant perceived shortening of the interval for the observed action compared with the closely matched visual–auditory control stimulus. The occurrence of intentional binding when observing an action suggests that we use similar processes to make causal attributions between our own actions, others' actions, and sensory events.
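In an interval-estimation paradigm, intentional binding is the shortening of the judged action–tone interval relative to a baseline interval between two tones. A minimal sketch with invented per-trial estimates (all numbers hypothetical):

import statistics

# Perceived intervals in ms, one entry per trial (hypothetical data).
self_action_tone = [210, 190, 220, 200]
observed_action_tone = [230, 240, 225, 235]
tone_tone_baseline = [290, 310, 300, 305]

def binding_effect(condition, baseline):
    # Positive values = perceived compression relative to the two-tone
    # baseline, i.e. intentional binding.
    return statistics.mean(baseline) - statistics.mean(condition)

print(binding_effect(self_action_tone, tone_tone_baseline))      # 96.25
print(binding_effect(observed_action_tone, tone_tone_baseline))  # 68.75

In this toy dataset both conditions show binding, with a larger effect for self-made actions, the qualitative pattern the abstract describes.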

13.
Perceptual objects often comprise a visual and auditory signature that arrives simultaneously through distinct sensory channels, and cross-modal features are linked by virtue of being attributed to a specific object. Continued exposure to cross-modal events sets up expectations about what a given object most likely "sounds" like, and vice versa, thereby facilitating object detection and recognition. The binding of familiar auditory and visual signatures is referred to as semantic, multisensory integration. Whereas integration of semantically related cross-modal features is behaviorally advantageous, situations of sensory dominance of one modality at the expense of another impair performance. In the present study, magnetoencephalography recordings of semantically related cross-modal and unimodal stimuli captured the spatiotemporal patterns underlying multisensory processing at multiple stages. At early stages, 100 ms after stimulus onset, posterior parietal brain regions responded preferentially to cross-modal stimuli irrespective of task instructions or the degree of semantic relatedness between the auditory and visual components. As participants were required to classify cross-modal stimuli into semantic categories, activity in superior temporal and posterior cingulate cortices increased between 200 and 400 ms. As task instructions changed to incorporate cross-modal conflict, a process whereby auditory and visual components of cross-modal stimuli were compared to estimate their degree of congruence, multisensory processes were captured in parahippocampal, dorsomedial, and orbitofrontal cortices 100 and 400 ms after stimulus onset. Our results suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset. However, as participants are required to evaluate cross-modal stimuli based on their semantic category or their degree of congruence, multisensory processes extend in cingulate, temporal, and prefrontal cortices.

14.
Sensory stimuli undergoing sudden changes draw attention and preferentially enter our awareness. We used event-related functional magnetic-resonance imaging (fMRI) to identify brain regions responsive to changes in visual, auditory and tactile stimuli. Unimodally responsive areas included visual, auditory and somatosensory association cortex. Multimodally responsive areas comprised a right-lateralized network including the temporoparietal junction, inferior frontal gyrus, insula and left cingulate and supplementary motor areas. These results reveal a distributed, multimodal network for involuntary attention to events in the sensory environment. This network contains areas thought to underlie the P300 event-related potential and closely corresponds to the set of cortical regions damaged in patients with hemineglect syndromes.

15.
The aim of this study was to establish whether, owing to crossmodal integration, spatial attention triggered by bimodal exogenous cues acts differently from attention triggered by unimodal or crossmodal exogenous cues. To investigate this issue, we examined cuing effects in discrimination tasks and compared these effects in a condition wherein a visual target was preceded by both visual and auditory exogenous cues delivered simultaneously at the same side (bimodal cue) with conditions wherein the visual target was preceded by either a visual (unimodal cue) or an auditory cue (crossmodal cue). The results of two experiments revealed that cuing effects on RTs in these three conditions with an SOA of 200 ms had comparable magnitudes. Differences at a longer SOA of 600 ms (inhibition of return for bimodal cues, Experiment 1) disappeared when catch trials were included (Experiment 2). The current data do not support an additional influence of crossmodal integration on exogenous orienting, but are well in agreement with the existence of a supramodal spatial attention module that allocates attentional resources towards stimulated locations for different sensory modalities.

16.
The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the ventriloquism effect. These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification and measures of a participant's uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report on whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly lower when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.
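The trial-by-trial comparison described above reduces to splitting auditory localization responses by the unity judgment and comparing bias and variability within each group. A minimal sketch with hypothetical trial records (field names and values are assumptions):

import statistics

# Each trial: auditory localization bias toward the light (degrees) and
# the participant's unity judgment (hypothetical records).
trials = [
    {"bias_toward_light": 14.0, "unified": True},
    {"bias_toward_light": 13.5, "unified": True},
    {"bias_toward_light": 2.0,  "unified": False},
    {"bias_toward_light": -4.0, "unified": False},
]

for unified in (True, False):
    biases = [t["bias_toward_light"] for t in trials if t["unified"] == unified]
    # Unity trials should show large, consistent capture by the light;
    # non-unity trials, smaller and more variable (or repelled) bias.
    print(unified, statistics.mean(biases), statistics.stdev(biases))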

17.
Objectives: To examine the topographic relationship of the P3(00) between the visual and auditory modalities, and especially whether there are any modality-specific hemispheric differences of the P3 in normal adults. Methods: The P3s were recorded from the same 41 normal right-handed males between the ages of 20 and 33 in both a typical auditory oddball task and a visual oddball paradigm with novel stimuli, with an extensive set of 61 scalp electrodes. In addition to the visual comparison and quantitative assessment of current source density (CSD) maps between the two modalities, canonical correlation analyses on the P3 raw amplitudes and examination of interaction effects of modality × location on both raw and normalized P3 data were performed. Results: The canonical correlation between modalities was generally high, especially at the left parietal brain region. There were no significant hemispheric effects in anterior brain regions, but there were significant left-greater-than-right hemispheric effects in posterior brain regions in both modalities; a modality-specific hemispheric effect was observed only at the parietal region. Strong surface current density activities were observed in the midline parietal-occipital area and in the left and right boundary areas of the temporal and inferior frontal regions. Conclusions: The topographic similarities between P3s recorded in the visual and auditory modalities outnumber the differences. Combining data from CSD assessments and profile analysis of P3 topography supports the hypothesis of multiple generators of P3 that are differentially active in processing stimuli from different sensory modalities and are not symmetrically distributed between the two hemispheres.

18.
Orienting movements of the eyes and head are made to both auditory and visual stimuli even though in the primary sensory pathways the locations of auditory and visual stimuli are encoded in different coordinates. This study was designed to differentiate between two possible mechanisms for sensory-to-motor transformation. Auditory and visual signals could be translated into common coordinates in order to share a single motor pathway, or they could maintain anatomically separate sensory and motor routes for the initiation and guidance of orienting eye movements. The primary purpose of the study was to determine whether neurons in the superior colliculus (SC) that discharge before saccades to visual targets also discharge before saccades directed toward auditory targets. If they do, this would indicate that auditory and visual signals, originally encoded in different coordinates, have been converted into a single coordinate system and are sharing a motor circuit. Trained monkeys made saccadic eye movements to auditory or visual targets while the activity of visual-motor (V-M) cells and saccade-related burst (SRB) cells was monitored. The pattern of spike activity observed during trials in which saccades were made to visual targets was compared with that observed when comparable saccades were made to auditory targets. For most (57 of 59) V-M cells, sensory responses were observed only on visual trials. Auditory stimuli originating from the same region of space did not activate these cells. Yet, of the 72 V-M and SRB cells studied, 79% showed motor bursts prior to saccades to either auditory or visual targets. This finding indicates that visual and auditory signals, originally encoded in retinal and head-centered coordinates, respectively, have undergone a transformation that allows them to share a common efferent pathway for the generation of saccadic eye movements. Saccades to auditory targets usually have lower velocities than saccades of the same amplitude and direction made to acquire visual targets. Since fewer collicular cells are active prior to saccades to auditory targets, one determinant of saccadic velocity may be the number of collicular neurons discharging before a particular saccade.

19.
Budinger E, Heil P, Hess A, Scheich H. Neuroscience 2006, 143(4):1065-1083
It is still a popular view that primary sensory cortices are unimodal, but recent physiological studies have shown that under certain behavioral conditions primary sensory cortices can also be activated by multiple other modalities. Here, we investigate the anatomical substrate that may underlie multisensory processes at the level of the primary auditory cortex (field AI), and which may, in turn, enable AI to influence other sensory systems. We approached this issue by means of the axonal transport of the sensitive bidirectional neuronal tracer fluorescein-labeled dextran, which was injected into AI of Mongolian gerbils (Meriones unguiculatus). Of the total number of retrogradely labeled cell bodies (i.e. cells of origin of direct projections to AI) found in non-auditory sensory and multisensory brain areas, approximately 40% were in cortical areas and 60% in subcortical structures. Of the cell bodies in the cortical areas, about 82% were located in multisensory cortex, viz. the dorsoposterior and ventroposterior, posterior parietal cortex, the claustrum, and the endopiriform nucleus; 10% were located in the primary somatosensory cortex (hindlimb and trunk region), and 8% in secondary visual cortex. The cortical regions with retrogradely labeled cells also contained anterogradely labeled axons and their terminations, i.e. they are also target areas of direct projections from AI. In addition, the primary olfactory cortex was identified as a target area of projections from AI. The laminar pattern of corticocortical connections suggests that AI receives primarily cortical feedback-type inputs and projects in a feedforward manner to its target areas. Of the labeled cell bodies in the subcortical structures, approximately 90% were located in multisensory thalamic, 4% in visual thalamic, and 6% in multisensory lower brainstem structures. At subcortical levels, we observed a similar correspondence of retrogradely labeled cells and anterogradely labeled axons and terminals in visual (posterior limitans thalamic nucleus) and multisensory thalamic nuclei (dorsal and medial division of the medial geniculate body, suprageniculate nucleus, posterior thalamic cell group, zona incerta), and in the multisensory nucleus of the brachium of the inferior colliculus. Retrograde, but not anterograde, labeling was found in the multisensory pontine reticular formation, particularly in the reticulotegmental nucleus of the pons. Conversely, anterograde, but no retrograde, labeling was found in the visual laterodorsal and lateroposterior thalamic nuclei, in the multisensory peripeduncular, posterior intralaminar, and reticular thalamic nuclei, as well as in the multisensory superior and pericentral inferior colliculi (including cuneiform and sagulum nucleus), pontine nuclei, and periaqueductal gray. Our study supports the notion that AI is not merely involved in the analysis of auditory stimulus properties but also in the processing of other sensory and multisensory information. Since AI is directly connected to other primary sensory cortices (viz. the somatosensory and olfactory ones), multisensory information is probably also processed in these cortices. This suggests, more generally, that primary sensory cortices may not be unimodal.
