Similar Documents
20 similar documents found.
1.
For attentional control of behavior, the brain continually resolves a competition between the impressions supplied by different senses. Here, using a dual-modality temporal order detection task, we studied attentional modulation of oscillatory neuromagnetic activity in the human cerebral cortex. On each trial, after simultaneous exposure to visual and auditory noise, subjects were presented with an asynchronous pair of a visual and an auditory stimulus. Either stimulus could occur first equally often; their order was not cued. Subjects had to determine the leading stimulus in a pair and attentively monitor it to respond upon its offset. With the attended visual or auditory stimuli, spectral power analysis revealed marked enhancements of induced gamma activity within 250 ms post-stimulus onset over the modality-specific cortices (occipital at 64 Hz, right temporal at 53 Hz). When unattended, however, the stimuli led to a significantly decreased (beneath baseline) gamma response in these cortical regions. The gamma decreases occurred at lower frequencies (approximately 30 Hz) than did the gamma increases. An increase in gamma power and frequency for the attended modality and their decrease for the unattended modality suggest that attentional regulation of multisensory processing involves reciprocal changes in the synchronization of the respective cortical networks. We assume that the gamma decrease reflects an active suppression of the task-irrelevant sensory input. This suppression occurs at lower frequencies, suggesting an involvement of larger-scale cell assemblies.
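Induced (non-phase-locked) gamma responses of this kind are typically quantified by time-frequency decomposition of single trials followed by baseline normalisation, so that both enhancements and below-baseline decreases appear as signed percent change. Below is a minimal numpy sketch of that pipeline; the sampling rate, trial counts, epoch layout and the 64 Hz target frequency are illustrative assumptions, not details taken from the study.

```python
# Sketch: induced gamma-band power relative to a pre-stimulus baseline.
import numpy as np

def morlet_power(signal, sfreq, freq, n_cycles=7):
    """Power envelope at `freq` via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)            # wavelet width in seconds
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / sfreq)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalisation
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

sfreq = 600.0                               # assumed MEG sampling rate (Hz)
times = np.arange(-0.5, 0.75, 1 / sfreq)    # 500 ms baseline, 750 ms post-stimulus
trials = np.random.randn(100, times.size)   # placeholder single-sensor epochs

# Induced power: transform each trial first, then average (averaging before the
# transform would isolate the evoked, phase-locked component instead).
power = np.mean([morlet_power(tr, sfreq, freq=64.0) for tr in trials], axis=0)

baseline = power[times < 0].mean()
percent_change = 100 * (power - baseline) / baseline   # >0 enhancement, <0 suppression
```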

2.
Jacoby O, Hall SE, Mattingley JB. NeuroImage 2012, 61(4):1050-1058
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another are less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or an auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises the inhibitory processes required for filtering stimuli in another.
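SSEP amplitude at the stimulation ("tag") frequency is conventionally read out from the FFT of the recorded epoch and expressed as a signal-to-noise ratio against neighbouring frequency bins. A hedged sketch follows; the flicker rate, sampling rate and data shapes are assumptions rather than parameters from the study.

```python
# Sketch: SSEP signal-to-noise ratio at a tagged stimulation frequency.
import numpy as np

def ssep_snr(epoch, sfreq, tag_freq, n_neighbors=10):
    """Amplitude at tag_freq divided by the mean amplitude of surrounding bins."""
    spectrum = np.abs(np.fft.rfft(epoch)) / epoch.size
    freqs = np.fft.rfftfreq(epoch.size, d=1 / sfreq)
    target = int(np.argmin(np.abs(freqs - tag_freq)))
    # Noise estimate: nearby bins, excluding the target and its direct neighbours.
    noise = np.r_[target - n_neighbors:target - 1, target + 2:target + n_neighbors + 1]
    return spectrum[target] / spectrum[noise].mean()

sfreq, flicker = 500.0, 15.0                 # assumed sampling and flicker rates (Hz)
epoch = np.random.randn(int(10 * sfreq))     # placeholder 10 s EEG segment
print(f"SSEP SNR at {flicker} Hz: {ssep_snr(epoch, sfreq, flicker):.2f}")
```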

3.
Human brain activity associated with audiovisual perception and attention
Coherent perception of objects in our environment often requires perceptual integration of auditory and visual information. Recent behavioral data suggest that audiovisual integration depends on attention. The current study investigated the neural basis of audiovisual integration using 3-Tesla functional magnetic resonance imaging (fMRI) in 12 healthy volunteers during attention to auditory or visual features, or audiovisual feature combinations of abstract stimuli (simultaneous harmonic sounds and colored circles). Audiovisual attention was found to modulate activity in the same frontal, temporal, parietal and occipital cortical regions as auditory and visual attention. In addition, attention to audiovisual feature combinations produced stronger activity in the superior temporal cortices than attention to only auditory or visual features. These modality-specific areas might be involved in attention-dependent perceptual binding of synchronous auditory and visual events into coherent audiovisual objects. Furthermore, the modality-specific temporal auditory and occipital visual cortical areas showed attention-related modulations during both auditory and visual attention tasks. This result supports the proposal that attention to stimuli in one modality can spread to encompass synchronously presented stimuli in another modality.

4.
Gonzalo D, Shallice T, Dolan R. NeuroImage 2000, 11(3):243-255
Functional imaging studies of learning and memory have primarily focused on stimulus material presented within a single modality (see review by Gabrieli, 1998, Annu. Rev. Psychol. 49: 87-115). In the present study we investigated mechanisms for learning material presented in visual and auditory modalities, using single-trial functional magnetic resonance imaging. We evaluated time-dependent learning effects under two conditions involving presentation of consistent (repeatedly paired in the same combination) or inconsistent (items presented randomly paired) pairs. We also evaluated time-dependent changes for bimodal (auditory and visual) presentations relative to a condition in which auditory stimuli were repeatedly presented alone. Using a time by condition analysis to compare neural responses to consistent versus inconsistent audiovisual pairs, we found significant time-dependent learning effects in medial parietal and right dorsolateral prefrontal cortices. In contrast, time-dependent effects were seen in left angular gyrus, bilateral anterior cingulate gyrus, and occipital areas bilaterally. A comparison of paired (bimodal) versus unpaired (unimodal) conditions was associated with time-dependent changes in posterior hippocampal and superior frontal regions for both consistent and inconsistent pairs. The results provide evidence that associative learning for stimuli presented in different sensory modalities is supported by neural mechanisms similar to those described for other kinds of memory processes. The involvement of posterior hippocampus and superior frontal gyrus in bimodal learning for both consistent and inconsistent pairs supports a putative function for these regions in associative learning independent of sensory modality.

5.
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), as well as in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at the same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and the superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) the right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

6.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage 2006, 32(1):423-431
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, which involves no scanner noise, brain regions involved in speech cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) carrying out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered during the bimodal condition (AV condition). In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A condition and more so during the AV condition, but not during the V condition. Activations were observed in Broca's (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions in the left hemisphere during the V condition, where lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue exerts an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction of audiovisual speech perception.

7.
Using synthetic aperture magnetometry (SAM) analyses of magnetoencephalographic (MEG) data, we investigated the variation in cortical response magnitude and frequency as a function of stimulus temporal frequency. In two separate experiments, a reversing checkerboard stimulus was presented in the right or left lower visual field at frequencies from 0 to 21 Hz. Average temporal frequency tuning curves were constructed for regions-of-interest located within medial visual cortex and V5/MT. In medial visual cortex, it was found that both the frequency and magnitude of the steady-state response varied as a function of the stimulus frequency, with multiple harmonics of the stimulus frequency being found in the response. The maximum fundamental response was found at a stimulus frequency of 8 Hz, whilst the maximum broadband response occurred at 4 Hz. In contrast, the magnitude and frequency content of the evoked onset response showed no dependency on stimulus frequency. Whilst medial visual cortex showed a power increase during stimulation, extra-striate areas such as V5/MT exhibited a bilateral event-related desynchronisation (ERD). The frequency content of this ERD did not depend on the stimulus frequency but was a broadband power reduction across the 5-20 Hz frequency range. The magnitude of this ERD within V5/MT was strongly low-pass tuned for stimulus frequency, and showed only a moderate preference for stimuli in the contralateral visual field.
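Synthetic aperture magnetometry belongs to the family of minimum-variance beamformers: sensor weights are computed from the data covariance and the forward (lead) field of a target location, and source-power contrasts between active and control windows yield volumetric images. The sketch below shows only the core LCMV weight computation that this family shares; SAM's orientation optimisation and pseudo-t statistics are omitted, and all inputs are placeholders.

```python
# Sketch: LCMV-style beamformer weights, w = C^-1 L / (L^T C^-1 L).
import numpy as np

def lcmv_weights(leadfield, cov, reg=0.05):
    """Weights for one source location, with diagonal (Tikhonov) regularisation."""
    n = cov.shape[0]
    C = cov + reg * np.trace(cov) / n * np.eye(n)
    Ci_L = np.linalg.solve(C, leadfield)
    return Ci_L / (leadfield @ Ci_L)

rng = np.random.default_rng(0)
n_sensors = 248                                  # assumed sensor count
leadfield = rng.standard_normal(n_sensors)       # placeholder forward field
data = rng.standard_normal((n_sensors, 5000))    # placeholder MEG recording
cov = np.cov(data)

w = lcmv_weights(leadfield, cov)
source_ts = w @ data    # virtual-sensor time series at the target location
# Contrasting source power between active and control windows, location by
# location, produces SAM-style difference images.
```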

8.
The processing streams of the various sensory modalities are known to interact within the central nervous system. These interactions differ depending on the level of stimulus representation and on attention. The current study focused on cross-sensory influences on stimulus change detection during unattended auditory processing. We employed an oddball paradigm to assess cortical processing using whole-head magnetoencephalography (MEG) in 20 volunteers. While subjects performed distraction tasks of varying difficulty, auditory duration deviants were applied randomly to the left or the right ear, preceded (by 200-400 ms) by oculomotor, static visual, or flow-field co-stimulation on either side. Mismatch fields were recorded over both hemispheres. Changes in gaze direction and static visual stimuli elicited the most reliable enhancement of deviance detection on the same side (most prominent over the right auditory cortex). Under both conditions, the lateralized, unattended, and unpredictive pre-cues acted analogously to shifts in selective attention, but were not reduced by attentional load. Thus, the early cognitive representation of sounds seems to reflect automatic cross-modal interference. Preattentive multisensory integration may provide the neuronal basis for orienting reactions to objects in space and thus for voluntary control of selective attention.
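Mismatch fields like these are computed as the difference between the averaged evoked responses to deviants and standards. A minimal illustration with placeholder epochs and an assumed analysis window:

```python
# Sketch: deviant-minus-standard mismatch response for one MEG channel.
import numpy as np

sfreq = 600.0                                  # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.4, 1 / sfreq)        # epoch from -100 to 400 ms
standards = np.random.randn(400, times.size)   # placeholder standard epochs
deviants = np.random.randn(80, times.size)     # placeholder deviant epochs

mismatch = deviants.mean(axis=0) - standards.mean(axis=0)

# Mean mismatch amplitude in a typical 100-250 ms post-deviance window:
win = (times >= 0.10) & (times <= 0.25)
print(f"mean mismatch field, 100-250 ms: {mismatch[win].mean():.3f}")
```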

9.
Jessen S, Kotz SA. NeuroImage 2011, 58(2):665-674
Face-to-face communication works multimodally. Not only do we employ vocal and facial expressions; body language provides valuable information as well. Here we focused on multimodal perception of emotion expressions, monitoring the temporal unfolding of the interaction of different modalities in the electroencephalogram (EEG). In the auditory condition, participants listened to emotional interjections such as "ah", while they saw mute video clips containing emotional body language in the visual condition. In the audiovisual condition participants saw video clips with matching interjections. In all three conditions, the emotions "anger" and "fear", as well as non-emotional stimuli, were used. The N100 amplitude was strongly reduced in the audiovisual compared to the auditory condition, suggesting a significant impact of visual information on early auditory processing. Furthermore, anger and fear expressions were distinct in the auditory but not the audiovisual condition. Complementing these event-related potential (ERP) findings, we report strong similarities in the alpha- and beta-band in the visual and the audiovisual conditions, suggesting a strong visual processing component in the perception of audiovisual stimuli. Overall, our results show an early interaction of modalities in emotional face-to-face communication using complex and highly natural stimuli.

10.
In a natural environment, non-verbal emotional communication is multimodal (e.g., speech melody, facial expression) and multifaceted with respect to the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about the multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual or auditory stimulation alone. This behavioural gain was paralleled by enhanced activation in the bilateral posterior superior temporal gyrus (pSTG) and right thalamus when contrasting the audiovisual with the auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

11.
Schmid C, Büchel C, Rose M. NeuroImage 2011, 55(1):304-311
Visual dominance refers to the observation that, in bimodal environments, vision often has an advantage over the other senses in humans. Accordingly, better memory performance is assumed for visual than for, e.g., auditory material. However, the reason for this preferential processing and its relation to memory formation is largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously at two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting: visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opposing domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system, and only in auditory cortex was this competition further modulated by attention. Furthermore, the reduction of neural activity in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system to competition from the auditory domain.

12.
In this study, the spatial and temporal frequency tuning characteristics of the MEG gamma (40-60 Hz) rhythm and the BOLD response in primary visual cortex were measured and compared. In an identical MEG/fMRI paradigm, 10 participants viewed reversing square-wave gratings at 2 spatial frequencies [0.5 and 3 cycles per degree (cpd)] reversing at 5 temporal frequencies (0, 1, 6, 10, 15 Hz). Three-dimensional images of MEG source power were generated with synthetic aperture magnetometry (SAM) and showed a high degree of spatial correspondence with BOLD responses in primary visual cortex, with a mean spatial separation of 6.5 mm, but the two modalities showed different tuning characteristics. The gamma rhythm showed a clear increase in induced power for the high spatial frequency stimulus, while BOLD showed no difference in activity for the two spatial frequencies used. Both imaging modalities showed a general increase of activity with temporal frequency; however, BOLD plateaued around 6-10 Hz while the MEG response generally increased, with a dip at 6 Hz. These results demonstrate that the two modalities may show activation in similar spatial locations but that the functional pattern of these activations may differ in a complex manner, suggesting that they may be tuned to different aspects of neuronal activity.

13.
Mouraux A, Diukova A, Lee MC, Wise RG, Iannetti GD. NeuroImage 2011, 54(3):2237-2249
Functional neuroimaging studies in humans have shown that nociceptive stimuli elicit activity in a wide network of cortical areas commonly labeled as the "pain matrix" and thought to be preferentially involved in the perception of pain. Despite the fact that this "pain matrix" has been used extensively to build models of where and how nociception is processed in the human brain, convincing experimental evidence demonstrating that this network is specifically related to nociception is lacking. The aim of the present study was to determine whether there is at least a subset of the "pain matrix" that responds uniquely to nociceptive somatosensory stimulation. In a first experiment, we compared the fMRI brain responses elicited by a random sequence of brief nociceptive somatosensory, non-nociceptive somatosensory, auditory and visual stimuli, all presented within a similar attentional context. We found that the fMRI responses triggered by nociceptive stimuli can be largely explained by a combination of (1) multimodal neural activities (i.e., activities elicited by all stimuli regardless of sensory modality) and (2) somatosensory-specific but not nociceptive-specific neural activities (i.e., activities elicited by both nociceptive and non-nociceptive somatosensory stimuli). The magnitude of multimodal activities correlated significantly with the perceived saliency of the stimulus. In a second experiment, we compared these multimodal activities to the fMRI responses elicited by auditory stimuli presented using an oddball paradigm. We found that the spatial distribution of the responses elicited by novel non-target and novel target auditory stimuli closely resembled that of the multimodal responses identified in the first experiment. Taken together, these findings suggest that the largest part of the fMRI responses elicited by phasic nociceptive stimuli reflects cognitive processes that are not specific to nociception.

14.
Temporal congruency promotes perceptual binding of multisensory inputs. Here, we used EEG frequency-tagging to track cortical activities elicited by auditory and visual inputs separately, in the form of steady-state evoked potentials (SS-EPs). We tested whether SS-EPs could reveal a dynamic coupling of cortical activities related to the binding of auditory and visual inputs conveying synchronous vs. non-synchronous temporal periodicities, or beats. The temporally congruent audiovisual condition elicited markedly enhanced auditory and visual SS-EPs, as compared to the incongruent condition. Furthermore, an increased inter-trial phase coherence of both SS-EPs was observed in that condition. Taken together, these observations indicate that temporal congruency enhances the processing of multisensory inputs at sensory-specific stages of cortical processing, possibly through a dynamic binding by synchrony of the elicited activities and/or improved dynamic attending. Moreover, we show that EEG frequency-tagging with SS-EPs constitutes an effective tool to explore the neural dynamics of multisensory integration in the human brain.
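The inter-trial phase coherence reported here has a compact definition: the magnitude of the across-trial mean of unit-length phasors at the frequency of interest, ranging from about 0 (random phase) to 1 (perfect phase locking). A sketch under assumed data shapes and an assumed tag frequency:

```python
# Sketch: inter-trial phase coherence (ITC) at an FFT bin near a tag frequency.
import numpy as np

def itc_at_freq(epochs, sfreq, tag_freq):
    """ITC = |mean over trials of exp(i * phase)| at the bin nearest tag_freq."""
    spectra = np.fft.rfft(epochs, axis=1)
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1 / sfreq)
    b = int(np.argmin(np.abs(freqs - tag_freq)))
    phasors = spectra[:, b] / np.abs(spectra[:, b])   # discard amplitude, keep phase
    return np.abs(phasors.mean())

sfreq = 512.0                                  # assumed sampling rate (Hz)
epochs = np.random.randn(60, int(4 * sfreq))   # 60 placeholder 4 s trials
print(f"ITC at an assumed 2.4 Hz beat: {itc_at_freq(epochs, sfreq, 2.4):.3f}")
```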

15.
Morris JS, Büchel C, Dolan RJ. NeuroImage 2001, 13(6):1044-1052
We used event-related fMRI to measure neural activity in volunteer subjects during acquisition of an implicit association between a visual conditioned stimulus (CS+) (angry face) and an auditory unconditioned stimulus (UCS) (aversive, loud noise). Three distinct functional regions were identified within left amygdala: a UCS (noise)-related lateral region, a CS+-related ventral region, and a dorsal region where CS+-related responses changed progressively across the learning session. Differential neural responses to the visual CS+ were also evoked in extrastriate and auditory cortices. Our results indicate that learning an association between biologically salient stimuli of different sensory modalities involves parallel changes of neural activity in segregated amygdala subregions and unimodal sensory cortices.

16.
Oscillatory activity in the gamma-band range in the human magneto- and electroencephalogram is thought to reflect the oscillatory synchronization of cortical networks. Findings of enhanced gamma-band activity (GBA) during cognitive processes like gestalt perception, attention and memory have led to the notion that GBA may reflect the activation of internal object representations. However, there is little direct evidence suggesting that GBA is related to subjective perceptual experience. In the present study, the magnetoencephalogram was recorded during an audiovisual oddball paradigm with infrequent visual (auditory /ta/ + visual /pa/) or acoustic deviants (auditory /pa/ + visual /ta/) interspersed in a sequence of frequent audiovisual standard stimuli (auditory /ta/ + visual /ta/). Sixteen human subjects had to respond to perceived acoustic changes, which could be produced either by real acoustic or illusory (visual) deviants. Statistical probability mapping served to identify correlations between oscillatory activity in response to visual and acoustic deviants, respectively, and the detection rates for either type of deviant. The perception of illusory acoustic changes induced by visual deviants was closely associated with gamma-band amplitude at approximately 80 Hz between 250 and 350 ms over midline occipital cortex. In contrast, the detection of real acoustic deviants correlated positively with induced GBA at approximately 42 Hz between 200 and 300 ms over left superior temporal cortex, and negatively with evoked gamma responses at approximately 41 Hz between 220 and 240 ms over occipital areas. These findings support the relevance of high-frequency oscillatory activity over early sensory areas for perceptual experience.

17.
During object manipulation the brain integrates the visual, auditory, and haptic experience of an object into a unified percept. Previous brain imaging studies have implicated, for instance, the dorsal part of the lateral occipital complex in visuo-tactile integration and the posterior superior temporal sulcus in audio-visual integration of object-related inputs (Amedi et al., 2005). Yet it is still unclear which brain regions represent object-specific information from all three sensory modalities. To address this question, we performed two complementary functional magnetic resonance imaging experiments. In the first experiment, we identified brain regions which were consistently activated by unimodal visual, auditory, and haptic processing of manipulable objects relative to non-object control stimuli presented in the same modality. In the second experiment, we assessed regional brain activations when participants had to match object-related information that was presented simultaneously in two or all three modalities. Only a well-defined region in left fusiform gyrus (FG) showed an object-specific activation during unisensory processing in the visual, auditory, and tactile modalities. The same region was also consistently activated during multisensory matching of object-related information across all three senses. Taken together, our results suggest that this region is central to the recognition of manipulable objects. A putative role of this FG region is to unify object-specific information provided by the visual, auditory, and tactile modalities into trisensory object representations.

18.
Cohen L, Jobert A, Le Bihan D, Dehaene S. NeuroImage 2004, 23(4):1256-1270
How are word recognition circuits organized in the left temporal lobe? We used functional magnetic resonance imaging (fMRI) to dissect cortical word-processing circuits using three diagnostic criteria: the capacity of an area (1) to respond to words in a single modality (visual or auditory) or in both modalities, (2) to modulate its response in a top-down manner as a function of the graphemic or phonemic emphasis of the task, and (3) to show repetition suppression in response to the conscious repetition of the target word within the same sensory modality or across different modalities. The results clarify the organization of visual and auditory word-processing streams. In particular, the visual word form area (VWFA) in the left occipitotemporal sulcus appears strictly as a visual unimodal area. It is, however, bordered by a second lateral inferotemporal area which is multimodal [lateral inferotemporal multimodal area (LIMA)]. Both areas might have been confounded in past work. Our results also suggest a possible homolog of the VWFA in the auditory stream, the auditory word form area, located in the left anterior superior temporal sulcus.

19.
An important step in perceptual processing is the integration of information from different sensory modalities into a coherent percept. It has been suggested that such crossmodal binding might be achieved by transient synchronization of neurons from different modalities in the gamma-frequency range (> 30 Hz). Here we employed a crossmodal priming paradigm, modulating the semantic congruency between visual–auditory natural object stimulus pairs, during the recording of the high density electroencephalogram (EEG). Subjects performed a semantic categorization task. Analysis of the behavioral data showed a crossmodal priming effect (facilitated auditory object recognition) in response to semantically congruent stimuli. Differences in event-related potentials (ERP) were found between 250 and 350 ms, which were localized to left middle temporal gyrus (BA 21) using a distributed linear source model. Early gamma-band activity (40–50 Hz) was increased between 120 ms and 180 ms following auditory stimulus onset for semantically congruent stimulus pairs. Source reconstruction for this gamma-band response revealed a maximal increase in left middle temporal gyrus (BA 21), an area known to be related to the processing of both complex auditory stimuli and multisensory processing. The data support the hypothesis that oscillatory activity in the gamma-band reflects crossmodal semantic-matching processes in multisensory convergence sites.

20.
Evidence for speech-specific brain processes has been searched for through the manipulation of formant frequencies which mediate phonetic content and which are, in evolutionary terms, relatively "new" aspects of speech. Here we used whole-head magnetoencephalography and advanced stimulus reproduction methodology to examine the contribution of the fundamental frequency F0 and its harmonic integer multiples in cortical processing. The subjects were presented with a vowel, a frequency-matched counterpart of the vowel lacking in phonetic contents, and a pure tone. The F0 of the stimuli was set at that of a typical male (i.e., 100 Hz), female (200 Hz), or infant (270 Hz) speaker. We found that speech sounds, both with and without phonetic content, elicited the N1m response in human auditory cortex at a constant latency of 120 ms, whereas pure tones matching the speech sounds in frequency, intensity, and duration gave rise to N1m responses whose latency varied between 120 and 160 ms. Thus, it seems that the fundamental frequency F0 and its harmonics determine the temporal dynamics of speech processing in human auditory cortex and that speech specificity arises out of cortical sensitivity to the complex acoustic structure determined by the human sound production apparatus.
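Peak latencies such as the constant 120 ms N1m described above are typically read off the averaged evoked response within a post-stimulus search window. A minimal sketch; the evoked array and the 80-200 ms window are assumptions.

```python
# Sketch: N1m peak latency from an averaged MEG channel.
import numpy as np

sfreq = 600.0                              # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.4, 1 / sfreq)    # epoch from -100 to 400 ms
evoked = np.random.randn(times.size)       # placeholder averaged response

win = (times >= 0.08) & (times <= 0.20)    # search 80-200 ms for the peak
peak_idx = int(np.argmax(np.abs(evoked[win])))
print(f"N1m peak latency: {times[win][peak_idx] * 1000:.0f} ms")
```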
