Similar Documents (20 results)
1.
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on just temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), as well as in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

2.
In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

3.
Schürmann M, Raij T, Fujiki N, Hari R. NeuroImage 2002;16(2):434-440
The temporospatial pattern of brain activity during auditory imagery was studied using magnetoencephalography. Trained musicians were presented with visual notes and instructed to imagine the corresponding sounds. Brain activity specific to the auditory imagery task was observed, first as enhanced activity of left and right occipital areas (average onset 120-150 ms after the onset of the visual stimulus) and then spreading to the midline parietal cortex (precuneus) and to extraoccipital areas that were not activated during the visual control condition (e.g., the left temporal auditory association cortex and the left and right premotor cortices). The latest activations, with average onset latencies of 270-400 ms, clearly separate from the earliest ones, occurred in the left sensorimotor cortex and the right inferotemporal visual association cortex. These data imply a complex temporospatial activation sequence of multiple cortical areas when musicians recall firmly established audiovisual associations.

4.
The role of attention in speech comprehension is not well understood. We used fMRI to study the neural correlates of auditory word, pseudoword, and nonspeech (spectrally rotated speech) perception during a bimodal (auditory, visual) selective attention task. In three conditions, Attend Auditory (ignore visual), Ignore Auditory (attend visual), and Visual (no auditory stimulation), 28 subjects performed a one-back matching task in the assigned attended modality. The visual task, attending to rapidly presented Japanese characters, was designed to be highly demanding in order to prevent attention to the simultaneously presented auditory stimuli. Regardless of stimulus type, attention to the auditory channel enhanced activation by the auditory stimuli (Attend Auditory>Ignore Auditory) in bilateral posterior superior temporal regions and left inferior frontal cortex. Across attentional conditions, there were main effects of speech processing (word+pseudoword>rotated speech) in left orbitofrontal cortex and several posterior right hemisphere regions, though these areas also showed strong interactions with attention (larger speech effects in the Attend Auditory than in the Ignore Auditory condition) and no significant speech effects in the Ignore Auditory condition. Several other regions, including the postcentral gyri, left supramarginal gyrus, and temporal lobes bilaterally, showed similar interactions due to the presence of speech effects only in the Attend Auditory condition. Main effects of lexicality (word>pseudoword) were isolated to a small region of the left lateral prefrontal cortex. Examination of this region showed significant word>pseudoword activation only in the Attend Auditory condition. Several other brain regions, including left ventromedial frontal lobe, left dorsal prefrontal cortex, and left middle temporal gyrus, showed Attention x Lexicality interactions due to the presence of lexical activation only in the Attend Auditory condition. These results support a model in which neutral speech presented in an unattended sensory channel undergoes relatively little processing beyond the early perceptual level. Specifically, processing of phonetic and lexical-semantic information appears to be very limited in such circumstances, consistent with prior behavioral studies.

5.
Shahin AJ, Kerlin JR, Bhat J, Miller LM. NeuroImage 2012;60(1):530-538
When speech is interrupted by noise, listeners often perceptually "fill-in" the degraded signal, giving an illusion of continuity and improving intelligibility. This phenomenon involves a neural process in which the auditory cortex (AC) response to onsets and offsets of acoustic interruptions is suppressed. Since meaningful visual cues behaviorally enhance this illusory filling-in, we hypothesized that during the illusion, lip movements congruent with acoustic speech should elicit a weaker AC response to interruptions relative to static (no movements) or incongruent visual speech. AC response to interruptions was measured as the power and inter-trial phase consistency of the auditory evoked theta band (4-8 Hz) activity of the electroencephalogram (EEG) and the N1 and P2 auditory evoked potentials (AEPs). A reduction in the N1 and P2 amplitudes and in theta phase-consistency reflected the perceptual illusion at the onset and/or offset of interruptions regardless of visual condition. These results suggest that the brain engages filling-in mechanisms throughout the interruption, which repairs degraded speech lasting up to ~250 ms following the onset of the degradation. Behaviorally, participants perceived speech continuity over longer interruptions for congruent compared to incongruent or static audiovisual streams. However, this specific behavioral profile was not mirrored in the neural markers of interest. We conclude that lip-reading enhances illusory perception of degraded speech not by altering the quality of the AC response, but by delaying it during degradations so that longer interruptions can be tolerated.
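The theta-band measures mentioned in this abstract (power and inter-trial phase consistency) can be made concrete with a short sketch. The Python snippet below is a minimal illustration of one common way to compute them, not the authors' pipeline: the sampling rate, epoch dimensions, and the random array standing in for real EEG epochs are all assumptions for the example.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                    # sampling rate in Hz (assumed)
n_trials, n_samples = 120, 500                # 120 epochs of 2 s each (assumed)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((n_trials, n_samples))   # stand-in for real EEG epochs

# Band-pass filter each epoch in the theta band (4-8 Hz)
b, a = butter(4, [4.0, 8.0], btype="bandpass", fs=fs)
theta = filtfilt(b, a, epochs, axis=1)

# Instantaneous phase and amplitude from the analytic signal
analytic = hilbert(theta, axis=1)
phase = np.angle(analytic)
power = np.abs(analytic) ** 2                  # single-trial theta power

# ITPC: length of the mean unit phase vector across trials at each time point;
# 1 = identical phase on every trial, values near 0 = random phase across trials
itpc = np.abs(np.mean(np.exp(1j * phase), axis=0))

print("mean theta power:", power.mean().round(3))
print("ITPC range:", itpc.min().round(3), "-", itpc.max().round(3))

On real data the epochs would be time-locked to the interruption onsets or offsets, and the ITPC and power time courses would then be compared across the congruent, incongruent, and static visual conditions.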

6.
Gonzalo D, Shallice T, Dolan R. NeuroImage 2000;11(3):243-255
Functional imaging studies of learning and memory have primarily focused on stimulus material presented within a single modality (see review by Gabrieli, 1998, Annu. Rev. Psychol. 49: 87-115). In the present study we investigated mechanisms for learning material presented in visual and auditory modalities, using single-trial functional magnetic resonance imaging. We evaluated time-dependent learning effects under two conditions involving presentation of consistent (repeatedly paired in the same combination) or inconsistent (items presented randomly paired) pairs. We also evaluated time-dependent changes for bimodal (auditory and visual) presentations relative to a condition in which auditory stimuli were repeatedly presented alone. Using a time by condition analysis to compare neural responses to consistent versus inconsistent audiovisual pairs, we found significant time-dependent learning effects in medial parietal and right dorsolateral prefrontal cortices. In contrast, time-dependent effects were seen in left angular gyrus, bilateral anterior cingulate gyrus, and occipital areas bilaterally. A comparison of paired (bimodal) versus unpaired (unimodal) conditions was associated with time-dependent changes in posterior hippocampal and superior frontal regions for both consistent and inconsistent pairs. The results provide evidence that associative learning for stimuli presented in different sensory modalities is supported by neural mechanisms similar to those described for other kinds of memory processes. The involvement of posterior hippocampus and superior frontal gyrus in bimodal learning for both consistent and inconsistent pairs supports a putative function for these regions in associative learning independent of sensory modality.

7.
The temporal synchrony of auditory and visual signals is known to affect the perception of an external event, yet it is unclear what neural mechanisms underlie the influence of temporal synchrony on perception. Using parametrically varied levels of stimulus asynchrony in combination with BOLD fMRI, we identified two anatomically distinct subregions of multisensory superior temporal cortex (mSTC) that showed qualitatively distinct BOLD activation patterns. A synchrony-defined subregion of mSTC (synchronous > asynchronous) responded only when auditory and visual stimuli were synchronous, whereas a bimodal subregion of mSTC (auditory > baseline and visual > baseline) showed significant activation to all presentations, but showed monotonically increasing activation with increasing levels of asynchrony. The presence of two distinct activation patterns suggests that the two subregions of mSTC may rely on different neural mechanisms to integrate audiovisual sensory signals. An additional whole-brain analysis revealed a network of regions responding more with synchronous than asynchronous speech, including right mSTC, and bilateral superior colliculus, fusiform gyrus, lateral occipital cortex, and extrastriate visual cortex. The spatial location of individual mSTC ROIs was much more variable in the left than right hemisphere, suggesting that individual differences may contribute to the right lateralization of mSTC in a group SPM. These findings suggest that bilateral mSTC is composed of distinct multisensory subregions that integrate audiovisual speech signals through qualitatively different mechanisms, and may be differentially sensitive to stimulus properties including, but not limited to, temporal synchrony.

8.
The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatiotemporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /ba/, incongruent auditory /ba/ synchronized with visual /ga/, auditory-only /ba/, and visual-only /ba/ and /ga/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 ms. The CDRs demonstrated complex spatiotemporal activation patterns that differed across stimulus conditions. The hypothesized circuit that was investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area [Miller, L.M., d'Esposito, M., 2005. Perceptual fusion and stimulus coincidence in the cross-modal integration of speech. Journal of Neuroscience 25, 5884-5893]. The importance of spatiotemporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (<100 ms) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left hemisphere SMG/AG activation, not predicted based on the unisensory stimulus conditions, was observed at approximately 160 to 220 ms. The STS was neither the earliest nor most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late activity of the SMG/AG solely under audiovisual conditions is a possible candidate audiovisual speech integration response.

9.
Kang E, Lee DS, Kang H, Lee JS, Oh SH, Lee MC, Kim CS. NeuroImage 2004;22(3):1173-1181
Brain plasticity underlying the gain of auditory sensation and/or auditory language was investigated in deaf children with early-onset deafness after cochlear implantation (CI) surgery. This study examined both brain glucose metabolism and auditory speech learning, using 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and the Central Institute for the Deaf (CID) test, respectively, both before and after the CI surgery. In a within-subject analysis comparing the pre-CI and post-CI PET results, CI itself resulted in increased glucose metabolism in the medial visual cortex, the bilateral thalamus, and the posterior cingulate. Compared with normal-hearing controls, the brain activity of the deaf children was greater in the medial visual cortex and bilateral occipito-parietal junctions after CI. Better speech perception ability was associated with increased activity in higher visual areas such as the middle occipito-temporal junction (hMT/V5) and the posterior inferior temporal region (BA 21/37) in the left hemisphere, and with decreased activity in the right inferior parieto-dorsal prefrontal region. These findings suggest that speech learning placed greater demands on the visual and visuospatial processing subserved by the early visual cortex and parietal cortices. However, only those deaf children who successfully learned auditory language after CI relied more on visual motion perception of mouth movements in the left hMT/V5 region and less on somatosensory function in the right parieto-frontal region.

10.
Malinen S, Hlushchuk Y, Hari R. NeuroImage 2007;35(1):131-139
In search of suitable tools to study brain activation in natural environments, where the stimuli are multimodal, poorly predictable and irregularly varying, we collected functional magnetic resonance imaging data from 6 subjects during a continuous 8-min stimulus sequence that comprised auditory (speech or tone pips), visual (video clips dominated by faces, hands, or buildings), and tactile finger stimuli in blocks of 6-33 s. Results obtained by independent component analysis (ICA) and general-linear-model-based analysis (GLM) were compared. ICA separated in the superior temporal gyrus one independent component (IC) that reacted to all auditory stimuli and in the superior temporal sulcus another IC responding only to speech. Several distinct and rather symmetric vision-sensitive ICs were found in the posterior brain. An IC in the V5/MT region reacted to videos depicting faces or hands, whereas ICs in the V1/V2 region reacted to all video clips, including buildings. The corresponding GLM-derived activations in the auditory and early visual cortices comprised sub-areas of the ICA-revealed activations. ICA separated a prominent IC in the primary somatosensory cortex whereas the GLM-based analysis failed to show any touch-related activation. "Intrinsic" components, unrelated to the stimuli but spatially consistent across subjects, were discerned as well. The individual time courses were highly consistent in sensory projection cortices and more variable elsewhere. The ability to differentiate functionally meaningful composites of activated brain areas and to straightforwardly reveal their temporal dynamics renders ICA a sensitive tool to study brain responses to complex natural stimuli.
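The ICA-versus-GLM comparison in this abstract can be illustrated with a toy example. The sketch below is an assumption-laden illustration, not the study's actual analysis: it runs a spatial FastICA decomposition and an ordinary-least-squares GLM on the same synthetic fMRI-like matrix and compares the stimulus-related spatial map each route recovers. The matrix dimensions, block design, and component count are made up for the example.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 240, 2000
boxcar = np.tile(np.repeat([0.0, 1.0], 20), 6)      # 240-TR on/off block design (assumed)

# Synthetic data: one stimulus-locked spatial map plus Gaussian noise
true_map = rng.standard_normal(n_voxels) * (rng.random(n_voxels) < 0.05)
data = np.outer(boxcar, true_map) + rng.standard_normal((n_timepoints, n_voxels))

# --- GLM route: regress every voxel's time series on the stimulus regressor ---
X = np.column_stack([boxcar, np.ones(n_timepoints)])   # stimulus regressor + intercept
betas, *_ = np.linalg.lstsq(X, data, rcond=None)       # betas has shape (2, n_voxels)
glm_map = betas[0]                                      # per-voxel stimulus effect

# --- ICA route: spatial ICA, sources = spatial maps, mixing = time courses ----
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(data.T).T              # (n_components, n_voxels)
timecourses = ica.mixing_                               # (n_timepoints, n_components)

# Pick the component whose time course tracks the stimulus best, compare the maps
r = [abs(np.corrcoef(boxcar, timecourses[:, k])[0, 1]) for k in range(10)]
best = int(np.argmax(r))
print("correlation of GLM map with best IC map:",
      round(abs(np.corrcoef(glm_map, spatial_maps[best])[0, 1]), 3))

The contrast the abstract draws falls out of this setup: the GLM only finds what the chosen regressor predicts, whereas ICA can also surface components (e.g., touch-related or "intrinsic" ones) whose time courses were never modeled.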

11.
Functional anatomy of intra- and cross-modal lexical tasks
Functional magnetic resonance imaging (fMRI) was used to examine lexical processing in normal adults (20-35 years). Two tasks required only intramodal processing (spelling judgments with visual input and rhyming judgments with auditory input) and two tasks required cross-modal processing between phonologic and orthographic representations (spelling judgments with auditory input and rhyming judgments with visual input). Each task led to greater activation in the unimodal association area concordant with the modality of input, namely fusiform gyrus (BA 19, 37) for written words and superior temporal gyrus (BA 22, 42) for spoken words. Cross-modal tasks generated greater activation in posterior heteromodal regions including the supramarginal and angular gyri (BA 40, 39). Cross-modal tasks generated additional activation in unimodal areas representing the target of conversion, superior temporal gyrus for visual rhyming and fusiform gyrus for auditory spelling. Our findings suggest that the fusiform gyrus processes orthographic word forms, the superior temporal gyrus processes phonologic word forms, and posterior heteromodal regions are involved in the conversion between orthography and phonology.

12.
Human brain activity associated with audiovisual perception and attention
Coherent perception of objects in our environment often requires perceptual integration of auditory and visual information. Recent behavioral data suggest that audiovisual integration depends on attention. The current study investigated the neural basis of audiovisual integration using 3-Tesla functional magnetic resonance imaging (fMRI) in 12 healthy volunteers during attention to auditory or visual features, or audiovisual feature combinations of abstract stimuli (simultaneous harmonic sounds and colored circles). Audiovisual attention was found to modulate activity in the same frontal, temporal, parietal and occipital cortical regions as auditory and visual attention. In addition, attention to audiovisual feature combinations produced stronger activity in the superior temporal cortices than attention to only auditory or visual features. These modality-specific areas might be involved in attention-dependent perceptual binding of synchronous auditory and visual events into coherent audiovisual objects. Furthermore, the modality-specific temporal auditory and occipital visual cortical areas showed attention-related modulations during both auditory and visual attention tasks. This result supports the proposal that attention to stimuli in one modality can spread to encompass synchronously presented stimuli in another modality.

13.
Electrophysiological studies in nonhuman primates and other mammals have shown that sensory cues from different modalities that appear at the same time and in the same location can increase the firing rate of multisensory cells in the superior colliculus to a level exceeding that predicted by summing the responses to the unimodal inputs. In contrast, spatially disparate multisensory cues can induce a profound response depression. We have previously demonstrated using functional magnetic resonance imaging (fMRI) that similar indices of crossmodal facilitation and inhibition are detectable in human cortex when subjects listen to speech while viewing visually congruent and incongruent lip and mouth movements. Here, we have used fMRI to investigate whether similar BOLD signal changes are observable during the crossmodal integration of nonspeech auditory and visual stimuli, matched or mismatched solely on the basis of their temporal synchrony, and if so, whether these crossmodal effects occur in similar brain areas as those identified during the integration of audio-visual speech. Subjects were exposed to synchronous and asynchronous auditory (white noise bursts) and visual (B/W alternating checkerboard) stimuli and to each modality in isolation. Synchronous and asynchronous bimodal inputs produced superadditive BOLD response enhancement and response depression across a large network of polysensory areas. The most highly significant of these crossmodal gains and decrements were observed in the superior colliculi. Other regions exhibiting these crossmodal interactions included cortex within the superior temporal sulcus, intraparietal sulcus, insula, and several foci in the frontal lobe, including within the superior and ventromedial frontal gyri. These data demonstrate the efficacy of using an analytic approach informed by electrophysiology to identify multisensory integration sites in humans and suggest that the particular network of brain areas implicated in these crossmodal integrative processes is dependent on the nature of the correspondence between the different sensory inputs (e.g. space, time, and/or form).
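The electrophysiology-inspired criteria this abstract refers to (superadditive enhancement and response depression) reduce to simple per-voxel inequalities. The snippet below is a hedged sketch of that logic with fabricated beta values; in real use one would substitute GLM estimates for the auditory-alone (A), visual-alone (V), and bimodal (AV) conditions in each voxel or region.

import numpy as np

rng = np.random.default_rng(2)
n_voxels = 5
beta_a  = rng.uniform(0.0, 1.0, n_voxels)   # auditory-alone response (made-up units)
beta_v  = rng.uniform(0.0, 1.0, n_voxels)   # visual-alone response
beta_av = rng.uniform(0.0, 2.5, n_voxels)   # bimodal (audiovisual) response

superadditive = beta_av > (beta_a + beta_v)           # crossmodal facilitation: AV > A + V
depressed     = beta_av < np.maximum(beta_a, beta_v)  # crossmodal depression: AV < max(A, V)

for i in range(n_voxels):
    label = "superadditive" if superadditive[i] else ("depressed" if depressed[i] else "neither")
    print(f"voxel {i}: A={beta_a[i]:.2f} V={beta_v[i]:.2f} AV={beta_av[i]:.2f} -> {label}")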

14.
Luks TL, Simpson GV. NeuroImage 2004;22(4):1515-1522
We used event-related fMRI to test the hypothesis that preparatory attention modulations occur in higher-order motion-processing regions when subjects deploy attention to internally driven representations in a complex motion-processing task. Using a cued attention-to-motion task, we found preparatory increases in fMRI activity in visual motion regions in the absence of visual motion stimulation. The cue, a brief enlargement of the fixation cross, directed subjects to prepare for a complex motion discrimination task. This preparation activated higher-order and lower-order motion regions. The motion regions activated included temporal regions consistent with V5/MT+, occipital regions consistent with V3+, parietal-occipital junction regions, ventral and dorsal intraparietal sulcus, superior temporal sulcus (STS), posterior insular cortex (PIC), and a region of BA 39/40 superior to V5/MT+ involving the angular gyrus and supramarginal gyrus (A-SM). Consistent with our hypothesis that these motion sensory activations are under top-down control, we also found activation of an extensive frontal network during the cue period, including anterior cingulate and multiple prefrontal regions. These results support the hypothesis that anticipatory deployment of attention to internally driven representations is achieved via top-down modulation of activity in task-relevant processing areas.

15.
Functional topography of working memory for face or voice identity
Rämä P, Courtney SM. NeuroImage 2005;24(1):224-234
We used functional magnetic resonance imaging (fMRI) to investigate whether the neural systems for nonspatial visual and auditory working memory exhibit a functional dissociation. The subjects performed a delayed recognition task for previously unfamiliar faces and voices and an audiovisual sensorimotor control task. During the initial sample and subsequent test stimulus presentations, activation was greater for the face than for the voice identity task bilaterally in the occipitotemporal cortex and, conversely, greater for voices than for faces bilaterally in the superior temporal sulcus/gyrus (STS/STG). Ventral prefrontal regions were activated by both memory delays in comparison with the control delays, and there was no significant difference in direct voxelwise comparisons between the tasks. However, further analyses showed that there was a subtle difference in the functional topography for the two delay types within the ventral prefrontal cortex. Face delays preferentially activated the dorsal part of the ventral prefrontal cortex (BA 44/45), while voice delays preferentially activated the inferior part (BA 45/47), indicating a ventral/dorsal auditory/visual topography within the ventral prefrontal cortex. The results confirm that there is a modality-specific attentional modulation of activity in visual and auditory sensory areas during stimulus presentation. Moreover, within the nonspatial information-type domain, there is a subtle across-modality dissociation within the ventral prefrontal cortex during working memory maintenance of faces and voices.

16.
Nath AR, Beauchamp MS. NeuroImage 2012;59(1):781-787
The McGurk effect is a compelling illusion in which humans perceive mismatched audiovisual speech as a completely different syllable. However, some normal individuals do not experience the illusion, reporting that the stimulus sounds the same with or without visual input. Converging evidence suggests that the left superior temporal sulcus (STS) is critical for audiovisual integration during speech perception. We used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) to measure brain activity as McGurk perceivers and non-perceivers were presented with congruent audiovisual syllables, McGurk audiovisual syllables, and non-McGurk incongruent syllables. The inferior frontal gyrus showed an effect of stimulus condition (greater responses for incongruent stimuli) but not susceptibility group, while the left auditory cortex showed an effect of susceptibility group (greater response in susceptible individuals) but not stimulus condition. Only one brain region, the left STS, showed a significant effect of both susceptibility and stimulus condition. The amplitude of the response in the left STS was significantly correlated with the likelihood of perceiving the McGurk effect: a weak STS response meant that a subject was less likely to perceive the McGurk effect, while a strong response meant that a subject was more likely to perceive it. These results suggest that the left STS is a key locus for interindividual differences in speech perception.

17.
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS but in the right STS, more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response in the right STS to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech.

18.
Attention is, in part, a mechanism for identifying features of the sensory environment of potential relevance to behavior. The network of brain areas sensitive to the behavioral relevance of multimodal sensory events has not been fully characterized. We used event-related fMRI to identify brain regions responsive to changes in both visual and auditory stimuli when those changes were either behaviorally relevant or behaviorally irrelevant. A widespread network of "context-dependent" activations responded to both task-irrelevant and task-relevant events but responded more strongly to task-relevant events. The most extensive activations in this network were located in right and left temporoparietal junction (TPJ), with smaller activations in left precuneus, left anterior insula, left anterior cingulate cortex, and right thalamus. Another network of "context-independent" activations responded similarly to all events, regardless of task relevance. This network featured a large activation encompassing left supplementary and cingulate motor areas (SMA/CMA) as well as right IFG, right/left precuneus, and right anterior insula, with smaller activations in right/left inferior temporal gyrus and left posterior cingulate cortex. Distinct context-dependent and context-independent subregions of activation were also found within the left and right TPJ, left anterior insula, and left SMA/CMA. In the right TPJ, a subregion in the supramarginal gyrus showed sensitivity to the behavioral context (i.e., relevance) of stimulus changes, while two subregions in the superior temporal gyrus did not. The results indicate a role for the TPJ in detecting behaviorally relevant events in the sensory environment. The TPJ may serve to identify salient events in the sensory environment both within and independent of the current behavioral context.

19.
In modern perceptual neuroscience, the focus of interest has shifted from a restriction to individual modalities to an acknowledgement of the importance of multisensory processing. One particularly well-known example of cross-modal interaction is the McGurk illusion. It has been shown that this illusion can be modified, such that it creates an auditory perceptual bias that lasts beyond the duration of audiovisual stimulation, a process referred to as cross-modal recalibration (Bertelson et al., 2003). Recently, we have suggested that this perceptual bias is stored in auditory cortex, by demonstrating the feasibility of retrieving the subjective perceptual interpretation of recalibrated ambiguous phonemes from functional magnetic resonance imaging (fMRI) measurements in these regions (Kilian-Hütten et al., 2011). However, this does not explain which brain areas integrate the information from the two senses and represent the origin of the auditory perceptual bias. Here we analyzed fMRI data from audiovisual recalibration blocks, utilizing behavioral data from perceptual classifications of ambiguous auditory phonemes that followed these blocks later in time. Adhering to this logic, we could identify a network of brain areas (bilateral inferior parietal lobe [IPL], inferior frontal sulcus [IFS], and posterior middle temporal gyrus [MTG]), whose activation during audiovisual exposure anticipated auditory perceptual tendencies later in time. We propose a model in which a higher-order network, including IPL and IFS, accommodates audiovisual integrative learning processes, which are responsible for the installation of a perceptual bias in auditory regions. This bias then determines constructive perceptual processing.

20.
Visual perceptual load has been shown to modulate brain activation to emotional facial expressions. However, it is unknown whether cross-modal effects of visual perceptual load on brain activation to threat-related auditory stimuli also exist. The current fMRI study investigated brain responses to angry and neutral voices while subjects had to solve an easy or a demanding visual task. Although the easy visual condition was associated with increased activation in the right superior temporal region to angry vs. neutral prosody, this effect was absent during the demanding task. Thus, our results show that cross-modal perceptual load modulates the activation to emotional voices in the auditory cortex and that high visual load prevents the increased processing of emotional prosody.
