Similar Literature
 20 similar documents retrieved (search time: 15 ms)
1.
It has been argued that both modality-specific and supramodal mechanisms dedicated to time perception underlie the estimation of interval durations. While it is generally assumed that early sensory areas are dedicated to modality-specific time estimation, we hypothesized that early sensory areas such as the primary visual cortex or the auditory cortex might be involved in time perception independently of the sensory modality of the input. To test this possibility, we used transcranial magnetic stimulation (TMS) to examine whether disruption of the primary visual cortex or the auditory cortex would disrupt time estimation of auditory and visual stimuli. We found that disruption of the auditory cortex impaired not only time estimation of auditory stimuli but also that of visual stimuli, to the same degree. This finding suggests a supramodal role of the auditory cortex in time perception. On the other hand, TMS over the primary visual cortex impaired performance only in visual time discrimination. These asymmetric contributions of the auditory and visual cortices to time perception may be explained by a superiority of the auditory cortex in temporal processing. Here, we propose that time is primarily encoded in the auditory system and that visual inputs are automatically encoded into an auditory representation in time discrimination tasks.

2.
Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.

3.
Sensory stimuli undergoing sudden changes draw attention and preferentially enter our awareness. We used event-related functional magnetic-resonance imaging (fMRI) to identify brain regions responsive to changes in visual, auditory and tactile stimuli. Unimodally responsive areas included visual, auditory and somatosensory association cortex. Multimodally responsive areas comprised a right-lateralized network including the temporoparietal junction, inferior frontal gyrus, insula and left cingulate and supplementary motor areas. These results reveal a distributed, multimodal network for involuntary attention to events in the sensory environment. This network contains areas thought to underlie the P300 event-related potential and closely corresponds to the set of cortical regions damaged in patients with hemineglect syndromes.

4.
The time estimation paradigm allows the recording of anticipatory attention for an upcoming stimulus unconfounded by any anticipatory motor activity. Three seconds after a warning signal (WS) subjects have to press a button. A button press within a time window from 2,850 ms to 3,150 ms after the WS is considered ‘correct’, a movement prior to 2,850 ms after the WS is labelled ‘too early’ and a movement after 3,150 ms is labelled ‘too late’. Two seconds after the button press a Knowledge of Results (KR) stimulus is presented, informing the subject about the correctness of the response. Stimulus Preceding Negativity (SPN) is a slow wave which is recorded prior to the presentation of the KR stimulus. The SPN has a right hemisphere preponderance and is based upon activity in a network in which the prefrontal cortex, the insula of Reil, and the parietal cortex are crucial. In the present study we asked two questions: (1) does the SPN show modality specificity and (2) does the use of verbal KR stimuli influence the right hemisphere preponderance? Auditory and visual stimuli were presented, in a verbal mode and in a non-verbal mode. SPN amplitudes prior to visual stimuli were larger over the visual cortex than prior to auditory stimuli. SPN amplitudes prior to auditory stimuli were larger over the frontal areas than prior to visual stimuli. The use of verbal stimuli did not influence the right hemisphere preponderance. We concluded that, apart from the supramodal effect of KR stimuli in general, there is first a modality-specific activation of the relevant sensory cortical areas. The supramodal network underlying the attention for and the use of KR information is activated either from different sensory areas or from language processing cortical areas.
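The response-classification rule described in this paradigm is simple enough to state as code. The sketch below is only an illustration of the labelling scheme; the function name and inclusive treatment of the window boundaries are assumptions, while the 3,000 ms target and ±150 ms window come from the abstract.

```python
def classify_response(rt_ms, target_ms=3000, tolerance_ms=150):
    """Label a button press relative to the target interval after the warning signal.

    A press inside [target - tolerance, target + tolerance] counts as 'correct';
    earlier presses are 'too early', later presses 'too late'.
    """
    if rt_ms < target_ms - tolerance_ms:   # before 2,850 ms
        return "too early"
    if rt_ms > target_ms + tolerance_ms:   # after 3,150 ms
        return "too late"
    return "correct"
```

The Knowledge of Results stimulus presented 2 s later would simply report this label back to the subject.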

5.
Cross-modal binding in auditory-visual speech perception was investigated by using the McGurk effect, a phenomenon in which hearing is altered by incongruent visual mouth movements. We used functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). In each experiment, the subjects were asked to identify spoken syllables ('ba', 'da', 'ga') presented auditorily, visually, or audiovisually (incongruent stimuli). For the auditory component of the stimuli, there were two conditions of intelligibility (High versus Low) as determined by the signal-to-noise (SN) ratio. The control task was visual talker identification of still faces. In the Low intelligibility condition, in which the auditory component of the speech was harder to hear, the visual influence was much stronger. Brain imaging data showed bilateral activations specific to the unimodal auditory stimuli (in the temporal cortex) and visual stimuli (in MT/V5). For the bimodal audiovisual stimuli, activation in the left temporal cortex extended more posteriorly toward the visual-specific area in the Low intelligibility condition. The direct comparison between the Low and High audiovisual conditions showed increased activations in the posterior part of the left superior temporal sulcus (STS), indicating its relationship with the stronger visual influence. We suggest that this region is likely involved in cross-modal binding of auditory-visual speech.

6.
We used event-related functional magnetic resonance imaging to study the neural correlates of endogenous spatial attention for vision and touch. We examined activity associated with attention-directing cues (central auditory pure tones), symbolically instructing subjects to attend to one hemifield or the other prior to upcoming stimuli, for a visual or tactile task. In different sessions, subjects discriminated either visual or tactile stimuli at the covertly attended side, during bilateral visuotactile stimulation. To distinguish cue-related preparatory activity from any modulation of stimulus processing, unpredictably on some trials only the auditory cue was presented. The use of attend-vision and attend-touch blocks revealed whether preparatory attentional effects were modality-specific or multimodal. Unimodal effects of spatial attention were found in somatosensory cortex for attention to touch, and in occipital areas for attention to vision, both contralateral to the attended side. Multimodal spatial effects (i.e. effects of attended side irrespective of task-relevant modality) were detected in contralateral intraparietal sulcus, traditionally considered a multimodal brain region; and also in the middle occipital gyrus, an area traditionally considered purely visual. Critically, all these activations were observed even on cue-only trials, when no visual or tactile stimuli were subsequently presented. Endogenous shifts of spatial attention result in changes of brain activity prior to the presentation of target stimulation (baseline shifts). Here, we show for the first time the separable multimodal and unimodal components of such preparatory activations. Additionally, irrespective of the attended side and modality, attention-directing auditory cues activated a network of superior frontal and parietal association areas that may play a role in voluntary control of spatial attention for both vision and touch.

7.
The aim of this study was to investigate how the processing of auditory stimuli is affected by the simultaneous presentation of visual stimuli. This was approached in an active and a passive condition, during which a P3 was elicited in the human EEG by single auditory stimuli. Subjects were presented with tones, either alone or accompanied by the simultaneous presentation of pictures. There were two different sessions. In the first, the presented tones demanded no further cognitive activity from the subjects (passive or 'ignore' session), while in the second session subjects were instructed to count the tones (active or 'count' session). The central question was whether inter-modal influences of visual stimulation in the active condition would modulate the auditory P3 in the same way as in the passive condition. Brain responses in the ignore session revealed only a small P3-like component over the parietal and frontal cortex. However, when the auditory stimuli co-occurred with the visual stimuli, increased frontal activity in the window of 300-500 ms was observed. This could be interpreted as the reflection of a more intensive involuntary attention shift, provoked by the preceding visual stimulation. Moreover, it was found that the cognitive load caused by the count instruction resulted in an evident P3, with maximal amplitude over parietal locations. This effect was smaller when auditory stimuli were presented on the visual background. These findings might support the thesis that available resources were assigned to the analysis of the visual stimulus, and thus were not available to analyze the subsequent auditory stimuli. This reduction in allocation of resources for attention was restricted to the active condition only, when the matching of a template with incoming information results in a distinct P3 component. It is discussed whether the putative source of this effect is a change in the activity of the frontal cortex.

8.
We studied orienting and maintenance of spatial attention in audition and vision. Functional magnetic resonance imaging (fMRI) in nine healthy subjects revealed activations in the same superior and inferior parietal, and posterior prefrontal areas in the auditory and visual orienting tasks when these tasks were compared with the corresponding maintenance tasks. Attention-related activations in the thalamus and cerebellum were observed during the auditory orienting and maintenance tasks and during the visual orienting task. In addition to the supratemporal auditory cortices, auditory orienting and maintenance produced stronger activity than the respective visual tasks in the inferior parietal and prefrontal cortices, whereas only the occipital visual cortex and the superior parietal cortex showed stronger activity during the visual tasks than during the auditory tasks. Differences between the brain networks involved in auditory and visual spatial attention could be due, for example, to different encoding of auditory and visual spatial information, to differences in stimulus-driven (bottom-up triggered) and voluntary (top-down controlled) attention between the auditory and visual modalities, or to both.

9.
Talsma D, Kok A. Psychophysiology, 2001, 38(5): 736-751
The present study focuses on the question of whether inter- and intramodal forms of attention are reflected in activation of the same or different brain areas. ERPs were recorded while subjects were presented a random sequence of visual and auditory stimuli. They were instructed to attend to nonspatial attributes of either auditory or visual stimuli and to detect occasional target stimuli within the attended channel. An occipital selection negativity was found for intramodal attention to visual stimuli. Visual intermodal attention was also manifested in a similar negativity. A symmetrical dipole pair in the medial inferior occipital areas could account for the intramodal effects. Dipole pairs for the intermodal attention effect had a slightly more posterior location compared to the dipole pair for the intramodal effect. Auditory intermodal attention was manifested in an early enhanced negativity overlapping with the N1 and P2 components, which was localized using a symmetrical dipole pair in the lateral auditory cortex. The onset of the intramodal attention effect was somewhat later (around 200 ms), and was reflected in a frontal processing negativity. The present results indicate that intra- and intermodal forms of attention were indeed similar for visual stimuli. Auditory data suggest the involvement of multiple brain areas.

10.
Early blindness in humans and experimental visual deprivation in animal models are known to induce compensatory somatosensory and/or auditory activation of the visual cortex. An abnormal hydrocephalic cat with extreme malformation of the visual system, born in our breeding colony, provided a good model system for investigating possible cross-modal compensation in such a pathological case. For comparison, we used normal and neonatally enucleated cats. When introduced to a novel environment, the abnormal cat behaved as if it was completely blind, yet it responded normally to auditory stimuli. As anticipated, single cells in the visual cortex of normal cats responded to visual, but not to auditory stimuli. In the visual cortex of enucleated cats, flashes of light did not elicit field-evoked potentials or single-unit responses. However, several cells did respond to various auditory stimuli. In the remnant visual cortex of the abnormal cat, auditory stimuli evoked field potentials and single-cell responses. Unexpectedly, however, unlike the enucleated cats, in the abnormal cat, flashes of light also elicited field-evoked potentials. Judging by its behavior, it is very likely that this deformed cat had completely lost its ability to perceive images, but had probably retained some sensitivity to light.

11.
Recent findings suggest that neural representations in early auditory cortex reflect not only the physical properties of a stimulus, but also high-level, top-down, and even cross-modal information. However, the nature of cross-modal information in auditory cortex remains poorly understood. Here, we used pattern analyses of fMRI data to ask whether early auditory cortex contains information about the visual environment. Our data show that 1) early auditory cortex contained information about a visual stimulus when there was no bottom-up auditory signal, and that 2) no influence of visual stimulation was observed in auditory cortex when visual stimuli did not provide a context relevant to audition. Our findings attest to the capacity of auditory cortex to reflect high-level, top-down, and cross-modal information and indicate that the spatial patterns of activation in auditory cortex reflect contextual/implied auditory information but not visual information per se.

12.
The hippocampal formation is a key structure in memory formation and consolidation. The hippocampus receives information from different cortical and subcortical sources. Cortical information is mostly funneled to the hippocampus through the entorhinal cortex (EC) in a bi-directional way that ultimately ends in the cortex. Retrograde tracing studies in the nonhuman primate indicate that more than two-thirds of the cortical afferents to the EC come from polymodal sensory association areas. Although some evidence for the projection from visual unimodal cortex to the EC exists, inputs from other visual and auditory unimodal association areas, and the possibility of their convergence with polymodal input in the EC, remain largely unexplored. We studied 10 Macaca fascicularis monkeys in which cortical deposits of the anterograde tracer biotinylated dextran-amine were made into different portions of visual and auditory unimodal association cortices in the temporal lobe, and in polymodal association cortex at the upper bank of the superior temporal sulcus. Visual and auditory unimodal as well as polymodal cortical areas projected to the EC. Both visual unimodal and polymodal association cortices presented dense projections, while those from unimodal auditory association cortex were more patchy and less dense. In all instances, the projection distributed in both the superficial and deep layers of the EC. However, while polymodal cortex projected to all layers (including layer I), visual unimodal cortex did not project to layer I, and auditory unimodal cortex projected less densely, scattered through all layers. Topographically, convergence from the three cortical areas studied can be observed in the lateral rostral and lateral caudal subfields.
The present study suggests that unimodal and polymodal association cortical inputs converge in the lateral EC, thereby providing the possibility for the integration of complex stimuli for internal representations in declarative memory elaboration.

13.
Neuromagnetic responses were recorded over the left hemisphere to find out in which cortical area heard and seen speech are integrated. Auditory stimuli were Finnish /pa/ syllables presented together with a videotaped face articulating either the concordant syllable /pa/ (84% of stimuli, V = A) or the discordant syllable /ka/ (16%, V ≠ A). In some subjects the probabilities were reversed. The subjects heard the V ≠ A stimuli as /ta/ or /ka/. The magnetic responses to infrequent perceptions elicited a specific waveform which could be explained by activity in the supratemporal auditory cortex. The results show that visual information from articulatory movements has an entry into the auditory cortex.

14.
In this study, we investigated changes in cortical oscillations following congruent and incongruent grapheme-phoneme stimuli. Hiragana graphemes and phonemes were simultaneously presented as congruent or incongruent audiovisual stimuli to native Japanese-speaking participants. The discriminative reaction time was 57 ms shorter for congruent than incongruent stimuli. Analysis of MEG responses using synthetic aperture magnetometry (SAM) revealed that congruent stimuli evoked larger 2-10 Hz activity in the left auditory cortex within the first 250 ms after stimulus onset, and smaller 2-16 Hz activity in bilateral visual cortices between 250 and 500 ms. These results indicate that congruent visual input can modify cortical activity in the left auditory cortex.

15.
Little is known about how the brain binds together signals from multiple sensory modalities to produce unified percepts of objects and events in the external world. Using event-related functional magnetic resonance imaging (fMRI) in humans, we measured transient brain responses to auditory/visual binding, as evidenced by a sound-induced change in visual motion perception. Identical auditory and visual stimuli were presented in all trials, but in some trials they were perceived to be bound together and in others they were perceived as unbound unimodal events. Cross-modal binding was associated with higher activity in multimodal areas, but lower activity in predominantly unimodal areas. This activation pattern suggests that a reciprocal and 'competitive' interaction between multimodal and unimodal areas underlies the perceptual interpretation of simultaneous signals from multiple sensory modalities.

16.
Previous event-related potential (ERP) studies have suggested a possible participation of the visual cortex of the blind in auditory processing. In the present study, somatosensory and auditory ERPs of blind and sighted subjects were recorded when subjects were instructed to attend to stimuli of one modality and to ignore those of the other. Both modalities were stimulated with frequent (standard) and infrequent (deviant) stimuli, which differed from one another in their spatial locus of origin. In the sighted, deviant stimuli of the attended modality elicited N2 type of deflections (auditory N2b and somatosensory N250) over the lateral scalp areas. In contrast, in the blind, these ERP components were centroposteriorly distributed, suggesting an involvement of posterior brain areas in auditory and somatosensory stimulus discrimination. In addition, the mismatch negativity, elicited by deviant auditory stimuli even when the somatosensory stimuli were attended, was larger in the blind than in the sighted. This appears to indicate enhanced automatic processing of auditory stimulus changes in the blind. Thus, the present data suggest several compensatory changes in both auditory and somatosensory modalities after the onset of early visual deprivation.

17.
Chen H, Yao D, Zhuo Y, Chen L. Brain Topography, 2003, 15(4): 223-232
Independent Component Analysis (ICA) is a promising tool for the analysis of functional magnetic resonance imaging (fMRI) time series. Most such studies assume a spatially independent component map of the fMRI data (spatial ICA). In this paper, we instead assume that the temporal courses of the signal and noises are independent within a small spatial domain (temporal ICA). With the fast-ICA algorithm, spatially neighboring fMRI data were blindly separated into several temporal courses, presumed to comprise one signal time course and several noise time courses, where the signal is the course with the largest correlation coefficient with the reference signal. The final functional image was assembled from the signals obtained at each voxel. Simulations showed that the new temporal ICA method is more effective than the spatial ICA method in detecting weak signals in an fMRI dataset. As background noise, the simulations included simulated Gaussian noise and fMRI data recorded without stimulation. Finally, in vivo fMRI tests showed that the areas excited by visual stimuli lie mainly in the region of the primary visual cortex, and those evoked by auditory stimuli mainly in the region of the primary temporal cortex.
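The neighborhood-wise temporal ICA scheme described above can be sketched with off-the-shelf tools. This is not the authors' implementation: scikit-learn's `FastICA`, the synthetic block-design reference, the neighborhood size, and all numeric choices here are assumptions made for illustration only.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_t = 200  # number of time points

# Hypothetical block-design reference signal (10 TRs off, 10 TRs on, repeated)
reference = np.tile([0.0] * 10 + [1.0] * 10, n_t // 20)

# Simulated time courses from a small voxel neighborhood: each voxel carries
# the task signal buried in independent Gaussian noise
voxels = np.stack(
    [reference + rng.normal(0.0, 1.0, n_t) for _ in range(9)], axis=1
)  # shape (n_t, 9)

# Temporal ICA within the neighborhood: unmix the voxel time courses into
# a few temporally independent components
ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(voxels)  # shape (n_t, 3)

# Keep the component most correlated (in magnitude) with the reference;
# the rest are treated as noise courses
corrs = [abs(np.corrcoef(sources[:, k], reference)[0, 1]) for k in range(3)]
best = int(np.argmax(corrs))
```

Repeating this over every small neighborhood and recording, per voxel, the strength of the selected component would yield the final activation map the abstract describes.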

18.
Studies in the monkey have shown that cortex outside of the primary projection areas in the superior temporal gyrus and in the inferotemporal region is important for the execution of some auditory and visual discriminations. In this study, six monkeys were trained to perform four auditory and two visual discriminations. Retention tests were given prior to bilateral removal of the anterior part of the lateral surface of the superior temporal gyrus, the inferotemporal region, or both areas together. Superior temporal ablations elicited severe deficits on some auditory discriminations. Inferotemporal ablations caused little or no impairment on visual discriminations. This negative finding is attributed to the sequential rather than spatial mode of presentation of visual stimuli, and to overtraining. A single monkey trained on a spatial visual pattern problem without overtraining was impaired. Another monkey trained to perform an auditory reverse intensity discrimination exhibited a deficit in ability to perform the problem after removal of auditory cortex within the lateral fissure.

19.
Functional neuroimaging experiments have revealed an organization of frequency-dependent responses in human auditory cortex suggestive of multiple tonotopically organized areas. Numerous studies have sampled cortical responses to isolated narrow-band stimuli, revealing multiple locations in auditory cortex at which the position of response varies systematically with frequency content. Because appropriate anatomical or functional grouping of these distinct frequency-dependent responses is uncertain, the number and location of tonotopic mappings within human auditory cortex remains unclear. Further, sampling does not address whether the observed mappings exhibit continuity as a function of position. This functional magnetic resonance imaging study used frequency-swept stimuli to identify progressions of frequency sensitivity across the cortical surface. The center-frequency of narrow-band, amplitude-modulated noise was slowly swept between 125 and 8,000 Hz. The latency of response relative to sweep onset was determined for each cortical surface location. Because frequency varied systematically with time, response latency indicated the frequency to which a location was maximally sensitive. Areas of cortex exhibiting a progressive change in response latency with position were considered tonotopically organized. There exist two main findings. First, six progressions of frequency sensitivity (i.e., tonotopic mappings) were repeatably observed in the superior temporal plane. Second, the locations of the higher- and lower-frequency endpoints of these progressions were approximately congruent with regions reported to be most responsive to discrete higher- and lower-frequency stimuli. Based on these findings and previous anatomical work, we propose a correspondence between these progressions and anatomically defined cortical areas, suggesting that five areas in human auditory cortex exhibit at least six tonotopic organizations.
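The latency-to-frequency mapping that underlies this analysis can be illustrated in a few lines. The abstract does not state the sweep profile or duration, so this sketch assumes a logarithmic sweep over a hypothetical 60 s run; only the 125-8,000 Hz range comes from the abstract.

```python
import math

def sweep_frequency(t, duration, f_start=125.0, f_end=8000.0):
    """Center frequency at time t of a logarithmic sweep from f_start to f_end.

    The sweep profile (logarithmic) and duration are assumptions; the
    frequency range is taken from the study's stimulus description.
    """
    frac = t / duration
    return f_start * (f_end / f_start) ** frac

# A cortical location whose response peaks at latency t after sweep onset is
# taken to be maximally sensitive to sweep_frequency(t, duration): response
# latency stands in for best frequency.
best_frequency = sweep_frequency(30.0, 60.0)  # mid-sweep latency -> 1,000 Hz
```

A progression of such best frequencies along the cortical surface is what the study counts as one tonotopic mapping.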

20.
Foveal attention modulates responses to peripheral stimuli
When attending to a visual object, peripheral stimuli must be monitored for appropriate redirection of attention and gaze. Earlier work has revealed precentral and posterior parietal activation when attention has been directed to peripheral vision. We wanted to find out whether similar cortical areas are active when stimuli are presented in nonattended regions of the visual field. The timing and distribution of neuromagnetic responses to a peripheral luminance stimulus were studied in human subjects with and without attention to fixation. Cortical current distribution was analyzed with a minimum L1-norm estimate. Attention enhanced responses 100-160 ms after the stimulus onset in the right precentral cortex, close to the known location of the right frontal eye field. In subjects whose right precentral region was not distinctly active before 160 ms, focused attention commonly enhanced right inferior parietal responses between 180 and 240 ms, whereas in the subjects with a clear earlier precentral response no parietal enhancement was detected. In control studies both attended and nonattended stimuli in the peripheral visual field evoked the right precentral response, whereas during auditory attention the visual stimuli failed to evoke such a response. These results show that during focused visual attention the right precentral cortex is sensitive to stimuli in all parts of the visual field. A rapid response suggests bypassing of elaborate analysis of stimulus features, possibly to encode target location for a saccade or redirection of attention. In addition, the load on frontal and parietal nodes of the attentional network seems to vary between individuals.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号