Similar Articles
20 matching articles found
1.
The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatiotemporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /ba/, incongruent auditory /ba/ synchronized with visual /ga/, auditory-only /ba/, and visual-only /ba/ and /ga/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 ms. The CDRs demonstrated complex spatiotemporal activation patterns that differed across stimulus conditions. The hypothesized circuit investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area [Miller, L.M., d'Esposito, M., 2005. Perceptual fusion and stimulus coincidence in the cross-modal integration of speech. Journal of Neuroscience 25, 5884-5893]. The importance of spatiotemporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (<100 ms) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left-hemisphere SMG/AG activation, not predicted based on the unisensory stimulus conditions, was observed at approximately 160 to 220 ms. The STS was neither the earliest nor the most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late SMG/AG activity observed solely under audiovisual conditions is a possible candidate audiovisual speech integration response.

2.
The temporal synchrony of auditory and visual signals is known to affect the perception of an external event, yet it is unclear what neural mechanisms underlie the influence of temporal synchrony on perception. Using parametrically varied levels of stimulus asynchrony in combination with BOLD fMRI, we identified two anatomically distinct subregions of multisensory superior temporal cortex (mSTC) that showed qualitatively distinct BOLD activation patterns. A synchrony-defined subregion of mSTC (synchronous > asynchronous) responded only when auditory and visual stimuli were synchronous, whereas a bimodal subregion of mSTC (auditory > baseline and visual > baseline) showed significant activation to all presentations, but showed monotonically increasing activation with increasing levels of asynchrony. The presence of two distinct activation patterns suggests that the two subregions of mSTC may rely on different neural mechanisms to integrate audiovisual sensory signals. An additional whole-brain analysis revealed a network of regions responding more to synchronous than asynchronous speech, including right mSTC and bilateral superior colliculus, fusiform gyrus, lateral occipital cortex, and extrastriate visual cortex. The spatial location of individual mSTC ROIs was much more variable in the left than the right hemisphere, suggesting that individual differences may contribute to the right lateralization of mSTC in a group SPM. These findings suggest that bilateral mSTC is composed of distinct multisensory subregions that integrate audiovisual speech signals through qualitatively different mechanisms, and may be differentially sensitive to stimulus properties including, but not limited to, temporal synchrony.

3.
In modern perceptual neuroscience, the focus of interest has shifted from a restriction to individual modalities to an acknowledgement of the importance of multisensory processing. One particularly well-known example of cross-modal interaction is the McGurk illusion. It has been shown that this illusion can be modified, such that it creates an auditory perceptual bias that lasts beyond the duration of audiovisual stimulation, a process referred to as cross-modal recalibration (Bertelson et al., 2003). Recently, we have suggested that this perceptual bias is stored in auditory cortex, by demonstrating the feasibility of retrieving the subjective perceptual interpretation of recalibrated ambiguous phonemes from functional magnetic resonance imaging (fMRI) measurements in these regions (Kilian-Hütten et al., 2011). However, this does not explain which brain areas integrate the information from the two senses and represent the origin of the auditory perceptual bias. Here we analyzed fMRI data from audiovisual recalibration blocks, utilizing behavioral data from perceptual classifications of ambiguous auditory phonemes that followed these blocks later in time. Adhering to this logic, we could identify a network of brain areas (bilateral inferior parietal lobe [IPL], inferior frontal sulcus [IFS], and posterior middle temporal gyrus [MTG]), whose activation during audiovisual exposure anticipated auditory perceptual tendencies later in time. We propose a model in which a higher-order network, including IPL and IFS, accommodates audiovisual integrative learning processes, which are responsible for the installation of a perceptual bias in auditory regions. This bias then determines constructive perceptual processing.

4.
Shahin AJ, Kerlin JR, Bhat J, Miller LM. NeuroImage 2012, 60(1): 530-538.
When speech is interrupted by noise, listeners often perceptually "fill in" the degraded signal, giving an illusion of continuity and improving intelligibility. This phenomenon involves a neural process in which the auditory cortex (AC) response to onsets and offsets of acoustic interruptions is suppressed. Since meaningful visual cues behaviorally enhance this illusory filling-in, we hypothesized that during the illusion, lip movements congruent with acoustic speech should elicit a weaker AC response to interruptions relative to static (no movements) or incongruent visual speech. AC response to interruptions was measured as the power and inter-trial phase consistency of the auditory evoked theta band (4-8 Hz) activity of the electroencephalogram (EEG) and the N1 and P2 auditory evoked potentials (AEPs). A reduction in the N1 and P2 amplitudes and in theta phase consistency reflected the perceptual illusion at the onset and/or offset of interruptions regardless of visual condition. These results suggest that the brain engages filling-in mechanisms throughout the interruption, which repair degraded speech lasting up to ~250 ms following the onset of the degradation. Behaviorally, participants perceived speech continuity over longer interruptions for congruent compared to incongruent or static audiovisual streams. However, this specific behavioral profile was not mirrored in the neural markers of interest. We conclude that lip-reading enhances illusory perception of degraded speech not by altering the quality of the AC response, but by delaying it during degradations so that longer interruptions can be tolerated.
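As a rough illustration of the inter-trial phase consistency (ITC) measure used above, the sketch below band-pass filters simulated single-channel EEG epochs in the theta band (4-8 Hz), extracts instantaneous phase via the Hilbert transform, and computes ITC as the length of the mean phase vector across trials. The sampling rate, filter order, and simulated data are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: theta-band (4-8 Hz) inter-trial phase consistency (ITC).
# All parameters and the simulated data are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                       # assumed sampling rate (Hz)
n_trials, n_samples = 100, 500   # trials x time points (2 s epochs)
rng = np.random.default_rng(0)

# Simulated EEG: a 6 Hz component with small phase jitter across trials, plus noise.
t = np.arange(n_samples) / fs
jitter = rng.uniform(-0.5, 0.5, size=(n_trials, 1))
eeg = np.sin(2 * np.pi * 6 * t + jitter) + rng.standard_normal((n_trials, n_samples))

# Zero-phase band-pass in the theta band.
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, eeg, axis=1)

# Instantaneous phase from the analytic signal.
phase = np.angle(hilbert(theta, axis=1))

# ITC per time point: 0 = random phase across trials, 1 = perfectly consistent.
itc = np.abs(np.mean(np.exp(1j * phase), axis=0))

# Trial-averaged theta power, the companion measure reported in the abstract.
power = np.mean(np.abs(hilbert(theta, axis=1)) ** 2, axis=0)
print(f"peak ITC: {itc.max():.2f}, mean theta power: {power.mean():.2f}")
```

In this framing, a drop in ITC (and in N1/P2 amplitude) at interruption onsets and offsets would index the suppression of the AC response that accompanies the continuity illusion.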

5.
Language production and perception both recruit the motor system. Language should therefore obey the theory of shared motor representations between self and other, mediated by mirror-like systems. Such systems (named by analogy with mirror neurons identified in single-unit recordings in animals) are recruited both when performing and when perceiving a goal-directed action, whatever the sensory modality. This hypothesis supposes that a neural network for self-awareness is needed to distinguish speech production from speech listening. We used fMRI to test this assumption in 12 healthy subjects, who performed two different block-design experiments. The first experiment showed involvement of a lateral mirror-like network in speech listening, including ventral premotor cortex, superior temporal sulcus and the inferior parietal lobule (IPL). The activity of this mirror-like network is associated with the perception of intelligible speech. The second experiment examined a self-awareness network. It showed involvement of a medial resting-state network, including the medial parietal and medial prefrontal cortices, during the 'self-generated voice' condition, as opposed to passive speech listening. Our results indicate that deactivation of this medial network, in association with modulation of the activity of the IPL (part of the mirror-like network described above), is linked to self-awareness in speech processing. Overall, these results support the idea that self-awareness processes operate when distinguishing between speech production and speech listening situations, and may depend on these two different parieto-frontal networks.

6.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage 2006, 32(1): 423-431.
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, which involves no scanner noise, brain regions involved in speech cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) carrying out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered in the bimodal condition (AV condition). In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A condition and more so during the AV condition, but not during the V condition. Activations were observed in left Broca's (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions during the V condition, where lip-reading performance was less successful. The results indicated that the speech-associated lip movements (visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue may exert an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction of audiovisual speech perception.
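The overadditivity criterion mentioned above (AV > A + V) amounts to a simple contrast on per-subject condition estimates. Below is a minimal sketch of such a test; the response values are simulated placeholders, not data from the study.

```python
# Hedged sketch: superadditivity test AV > A + V on per-subject response estimates.
# The values are simulated placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 17
beta_a = rng.normal(1.0, 0.3, n_subjects)    # auditory-only (A) estimates
beta_v = rng.normal(0.4, 0.3, n_subjects)    # visual-only (V) estimates
beta_av = rng.normal(1.8, 0.3, n_subjects)   # audiovisual (AV) estimates

# Per-subject overadditivity contrast; positive values indicate AV > A + V.
contrast = beta_av - (beta_a + beta_v)

t_stat, p_val = stats.ttest_1samp(contrast, 0.0)
print(f"mean AV - (A + V) = {contrast.mean():.2f}, "
      f"t({n_subjects - 1}) = {t_stat:.2f}, p = {p_val:.3f}")
```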

7.
We investigated cerebral processing of audiovisual speech stimuli in humans using functional magnetic resonance imaging (fMRI). Ten healthy volunteers were scanned with a 'clustered volume acquisition' paradigm at 3 T during observation of phonetically matching (e.g., visual and acoustic /y/) and conflicting (e.g., visual /a/ and acoustic /y/) audiovisual vowels. Both stimuli activated the sensory-specific auditory and visual cortices, along with the superior temporal, inferior frontal (Broca's area), premotor, and visual-parietal regions bilaterally. Phonetically conflicting vowels, contrasted with matching ones, specifically increased activity in Broca's area. Activity during phonetically matching stimuli, contrasted with conflicting ones, was not enhanced in any brain region. We suggest that the increased activity in Broca's area reflects processing of conflicting visual and acoustic phonetic inputs in partly disparate neuron populations. On the other hand, matching acoustic and visual inputs would converge on the same neurons.

8.
We presented phonetically matching and conflicting audiovisual vowels to 10 dyslexic and 10 fluent-reading young adults during "clustered volume acquisition" functional magnetic resonance imaging (fMRI) at 3 T. We further assessed co-variation between the dyslexic readers' phonological processing abilities, as indexed by neuropsychological test scores, and BOLD signal change within the visual cortex, auditory cortex, and Broca's area. Both dyslexic and fluent readers showed increased activation during observation of phonetically conflicting compared to matching vowels within the classical motor speech regions (Broca's area and the left premotor cortex), this activation difference being more extensive and bilateral in the dyslexic group. The between-group activation difference in the conflicting > matching contrast reached significance in the motor speech regions and in the left inferior parietal lobule, with dyslexic readers exhibiting stronger activation than fluent readers. The dyslexic readers' BOLD signal change co-varied with their phonological processing abilities within the visual cortex and Broca's area, and to a lesser extent within the auditory cortex. We suggest that these findings reflect dyslexic readers' greater use of motor-articulatory and visual strategies during phonetic processing of audiovisual speech, possibly to compensate for their difficulties in auditory speech perception.

9.
Joanisse MF, Gati JS. NeuroImage 2003, 19(1): 64-79.
Speech perception involves recovering the phonetic form of speech from a dynamic auditory signal containing both time-varying and steady-state cues. We examined the roles of inferior frontal and superior temporal cortex in processing these aspects of auditory speech and nonspeech signals. Event-related functional magnetic resonance imaging was used to record activation in superior temporal gyrus (STG) and inferior frontal gyrus (IFG) while participants discriminated pairs of either speech syllables or nonspeech tones. Speech stimuli differed in either the consonant or the vowel portion of the syllable, whereas the nonspeech signals consisted of sinewave tones differing along either a dynamic or a spectral dimension. Analyses failed to identify regions of activation that clearly contrasted the speech and nonspeech conditions. However, we did identify regions in the posterior portion of left and right STG and left IFG yielding greater activation for both speech and nonspeech conditions that involved rapid temporal discrimination, compared to speech and nonspeech conditions involving spectral discrimination. The results suggest that, when semantic and lexical factors are adequately ruled out, there is significant overlap in the brain regions involved in processing the rapid temporal characteristics of both speech and nonspeech signals.

10.
Normal subjects activate the left temporal polar cortex when they name persons, and subjects with damage to the left temporal pole due to left anterior temporal lobectomy are impaired in the retrieval of the proper names of persons. Eight such subjects were studied in a PET activation experiment to address the neural systems supporting their residual naming. We hypothesized that there would be increased activity, relative to normal controls, in the surround of the damaged region, the homologous right temporal pole, or both. Neither the group nor individual target subjects showed significantly increased activity in the lesion surround or in the right temporal pole. Several other regions that are activated by normals during the retrieval of the proper names of persons and which were undamaged in the target subjects (left anterior superior temporal sulcus, mesial frontal cortex, and anterior cingulate) were nevertheless significantly less activated by the target subjects, a finding that suggests that damage to the left temporal pole alters the function of a large-scale system needed for the retrieval of proper nouns. There was increased activity in early visual cortices, suggesting intensification of visual processing to compensate for the defective preferred name retrieval processing.

11.
Pauses during continuous speech, particularly those that occur within clauses, are thought to reflect the planning of forthcoming verbal output. We used functional magnetic resonance imaging (fMRI) to examine their neural correlates. Six volunteers were scanned while describing seven Rorschach inkblots, producing 3 min of speech per inkblot. In an event-related design, the level of blood oxygenation level-dependent (BOLD) contrast during brief pauses (mean duration 1.3 s, SD 0.3 s) in overt speech was contrasted with that during intervening periods of articulation. We then examined activity associated with pauses that occurred within clauses and pauses that occurred at grammatical junctions. Relative to articulation during speech, pauses were associated with activation in the banks of the left superior temporal sulcus (BA 39/22), at the temporoparietal junction. Continuous speech was associated with greater activation bilaterally in the inferior frontal (BA 44/45), middle frontal (BA 8) and anterior cingulate (BA 24) gyri, the middle temporal sulcus (BA 21/22), the occipital cortex and the cerebellum. Left temporal activation was evident during pauses that occurred within clauses but not during pauses at grammatical junctions. In summary, articulation during continuous speech involved frontal, temporal and cerebellar areas, while pausing was associated with activity in the left temporal cortex, especially when it occurred within a clause. The latter finding is consistent with evidence that within-clause pauses are a correlate of speech planning, in particular lexical retrieval.

12.
A PET study of stimulus- and task-induced semantic processing
Noppeney U, Price CJ. NeuroImage 2002, 15(4): 927-935.
To investigate the neural correlates of semantic processing, previous functional imaging studies have used semantic decision and generation tasks. However, in addition to activating semantic associations, these tasks also involve executive functions that are not specific to semantics. The study reported in this paper aims to dissociate brain activity due to stimulus-driven semantic associations and task-induced semantic and executive processing by using repetition and semantic decision on auditorily presented words in a cognitive conjunction design. The left posterior inferior temporal, inferior frontal (BA 44/45), and medial orbital gyri were activated by both tasks, suggesting a general role in stimulus-driven semantic and phonological processing. In addition, semantic decision increased activation in (i) left ventral inferior frontal cortex (BA 47), right cerebellum, and paracingulate, which have all previously been implicated in executive functions, and (ii) a ventral region in the left anterior temporal pole, which is commonly affected in patients with semantic impairments. We attribute activation in this area to the effortful linkage of semantic features. Thus, our study replicated the functional dissociation between dorsal and ventral regions of the left inferior frontal cortex. Moreover, it also dissociated the semantic functions of the left posterior inferior temporal gyrus and anterior temporal pole: the posterior region subserves stimulus-driven activation of semantic associations, and the left anterior region is involved in task-induced association of semantic information.

13.
14.
Stevenson RA, James TW. NeuroImage 2009, 44(3): 1210-1223.
The superior temporal sulcus (STS) is a region involved in audiovisual integration. In non-human primates, multisensory neurons in STS display inverse effectiveness. In two fMRI studies using multisensory tool and speech stimuli presented at parametrically varied levels of signal strength, we show that the pattern of neural activation in human STS is also inversely effective. Although multisensory tool-defined and speech-defined regions of interest were non-overlapping, the pattern of inverse effectiveness was the same for tools and speech across regions. The findings suggest that, even though there are sub-regions in STS that are speech-selective, the manner in which visual and auditory signals are integrated in multisensory STS is not specific to speech.
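Inverse effectiveness, as used in the abstract above, means that multisensory gain grows as unisensory signal strength falls. The sketch below computes one common enhancement index at parametrically varied signal levels; the index definition and the response values are illustrative assumptions, not the study's analysis.

```python
# Hedged sketch: multisensory enhancement across signal-strength levels.
# Inverse effectiveness predicts larger enhancement at weaker signals.
# The index definition and response values are illustrative assumptions.
import numpy as np

levels = np.array([0.1, 0.25, 0.5, 0.75, 1.0])   # parametric signal strengths

# Simulated mean responses per condition (arbitrary units), weak -> strong.
resp_a = np.array([0.10, 0.30, 0.55, 0.75, 0.90])
resp_v = np.array([0.08, 0.25, 0.50, 0.70, 0.85])
resp_av = np.array([0.30, 0.55, 0.80, 0.95, 1.05])

# Enhancement relative to the strongest unisensory response:
#   ME = (AV - max(A, V)) / max(A, V)
best_uni = np.maximum(resp_a, resp_v)
enhancement = (resp_av - best_uni) / best_uni

for lvl, me in zip(levels, enhancement):
    print(f"signal strength {lvl:.2f}: enhancement {me:+.2f}")
# Under inverse effectiveness, the printed enhancement shrinks as strength rises.
```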

15.
The focus of our magnetoencephalographic (MEG) study was to obtain further insight into the neuronal organization of language processing in stutterers. We recorded neuronal activity of 10 male developmental stutterers and 10 male controls while they listened to pure tones, to words in order to repeat them, and to sentences in order to either repeat them or transform them into passive form. Stimulation with pure tones resulted in similar activation patterns in the two groups, but differences emerged in the more complex auditory language tasks. In the stutterers, the left inferior frontal cortex was activated briefly from 95 to 145 ms after sentence onset, which was evident neither in the controls nor in either group during the word task. In both subject groups, the left rolandic area was activated when listening to the speech stimuli, but in the stutterers there was an additional activation of the right rolandic area from 315 ms onwards, which was more pronounced in the sentence than in the word task. Activation of areas typically associated with language production was thus observed during speech perception in both controls and stutterers. Previous research on speech production in stutterers has found abnormalities in both the amount and timing of activation in these areas. The present data suggest that activation in the left inferior frontal and right rolandic areas in stutterers differs from that in controls during speech perception as well.

16.
The neural correlates of speech monitoring overlap with those of speech comprehension and production. However, it is unclear how these correlates are organized within functional connectivity networks, and how these networks interact to subserve speech monitoring. We applied spatial and temporal independent component analysis (sICA and tICA) to a functional magnetic resonance imaging (fMRI) experiment involving overt speech production, comprehension and monitoring. sICA and tICA respectively decompose fMRI data into spatial and temporal components that can be interpreted as distributed estimates of functional connectivity and concurrent temporal dynamics in one or more regions of fMRI activity. Using sICA we found multiple connectivity components that were associated with speech perception (auditory and left fronto-temporal components) and production (bilateral central sulcus and default-mode components), but not with speech monitoring. To further investigate whether speech monitoring could be mapped in the auditory cortex as a unique temporal process, we applied tICA to voxels of the sICA auditory component. Among the temporal components we found a single, unique component that matched the speech monitoring temporal pattern. We used this temporal component as a new predictor for whole-brain activity and found that it correlated positively with bilateral auditory cortex, and negatively with the supplementary motor area (SMA). Psychophysiological interaction analysis of task and activity in bilateral auditory cortex and SMA showed that functional connectivity changed with task conditions. These results suggest that speech monitoring entails a dynamic coupling between different functional networks. Furthermore, we demonstrate that overt speech comprises multiple networks that are associated with specific speech-related processes. We conclude that the sequential combination of sICA and tICA is a powerful approach for the analysis of complex, overt speech tasks.
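The sICA/tICA sequence described above can be approximated with a generic ICA routine: for data arranged as time x voxels, spatial ICA seeks components that are independent over voxels (sources are spatial maps, mixing weights are time courses), while temporal ICA seeks components that are independent over time (sources are time courses). With scikit-learn's FastICA this is just a matter of which axis is treated as samples; the random data and dimensions below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: spatial vs. temporal ICA on a time x voxel fMRI-like matrix.
# Data and dimensions are illustrative; not the authors' implementation.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_timepoints, n_voxels, n_components = 200, 5000, 10
data = rng.standard_normal((n_timepoints, n_voxels))  # stand-in for preprocessed BOLD

# Spatial ICA: voxels as samples -> sources are spatial maps,
# and the mixing matrix holds the corresponding time courses.
sica = FastICA(n_components=n_components, random_state=0)
spatial_maps = sica.fit_transform(data.T).T      # (components, voxels)
sica_timecourses = sica.mixing_                  # (timepoints, components)

# Temporal ICA: time points as samples -> sources are time courses,
# and the mixing matrix holds the corresponding spatial weights.
tica = FastICA(n_components=n_components, random_state=0)
tica_timecourses = tica.fit_transform(data)      # (timepoints, components)
tica_maps = tica.mixing_.T                       # (components, voxels)

# As in the study, tICA could then be restricted to voxels of a single sICA
# component (e.g., the auditory component) to isolate its temporal processes.
print(spatial_maps.shape, sica_timecourses.shape, tica_timecourses.shape)
```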

17.
Healthy subjects show increased activation in left temporal lobe regions in response to speech sounds compared to complex nonspeech sounds. Abnormal lateralization of speech-processing regions in the temporal lobes has been posited to be a cardinal feature of schizophrenia. Event-related fMRI was used to test the hypothesis that schizophrenic patients would show an abnormal pattern of hemispheric lateralization when detecting speech compared with complex nonspeech sounds in an auditory oddball target-detection task. We predicted that differential activation for speech in the vicinity of the superior temporal sulcus would be greater in schizophrenic patients than in healthy subjects in the right hemisphere, but less in patients than in healthy subjects in the left hemisphere. Fourteen patients with schizophrenia (selected from an outpatient population, 2 females, 12 males, mean age 35.1 years) and 29 healthy subjects (8 females, 21 males, mean age 29.3 years) were scanned while they performed an auditory oddball task in which the oddball stimuli were either speech sounds or complex nonspeech sounds. Compared to controls, individuals with schizophrenia showed greater differential activation between speech and nonspeech in right temporal cortex, left superior frontal cortex, and the left temporal-parietal junction. The magnitude of the difference in the left temporal-parietal junction was significantly correlated with severity of disorganized thinking. This study supports the hypothesis that aberrant functional lateralization of speech processing is an underlying feature of schizophrenia and suggests the magnitude of the disturbance in speech-processing circuits may be associated with severity of disorganized thinking.
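Lateralization effects like those reported above are often summarized with a laterality index computed over homologous left and right regions of interest, LI = (L - R) / (L + R). The sketch below applies this common convention to placeholder activation values; it is an illustrative assumption, not the contrast analysis used in the study.

```python
# Hedged sketch: laterality index LI = (L - R) / (L + R) on ROI activation.
# Values are placeholders; the study itself used whole-brain group contrasts.
import numpy as np

rng = np.random.default_rng(3)
# Simulated differential activation (speech - nonspeech) per subject in
# homologous left/right temporal ROIs, e.g., for the 14 patients.
left_roi = rng.normal(0.6, 0.2, size=14)
right_roi = rng.normal(0.8, 0.2, size=14)

# LI near +1 = strongly left-lateralized; near -1 = strongly right-lateralized.
li = (left_roi - right_roi) / (left_roi + right_roi)
print(f"mean LI = {li.mean():+.2f} (negative values indicate a rightward shift)")
```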

18.
The human male psychosexual cycle consists of four phases: excitation, plateau, orgasm, and resolution. Identification of the specific neural substrates of each phase may provide information regarding the brain's pathophysiology of sexual dysfunction. We previously analyzed regional cerebral blood flow (rCBF) with H2(15)O positron emission tomography (PET) during the excitation phase (initiation of penile erection) induced by audiovisual sexual stimuli (AVSS) and identified activation of the cerebellar vermis, the bilateral extrastriate cortex, and the right orbitofrontal cortex, suggesting a role of cognition/emotion in the excitement phase. In the present study, we analyzed rCBF of the same six healthy volunteers during the plateau phase (maintenance of penile erection) induced by AVSS and compared the results with those of the excitation phase. Penile rigidity was monitored in real time with RigiScan Plus during PET scanning. Images were analyzed by statistical parametric mapping (SPM) software, and rCBF in the amygdala, hypothalamus, anterior cingulate, and insula was measured. During the plateau phase, primary subcortical activation was noted in the right ventral putamen, indicating motivational factors in the sexual response via the limbic reward circuit. A significant increase in rCBF in the left hypothalamus was also observed during the plateau phase. The right anterior cingulate and left insula were specifically activated during the excitation phase but not during the plateau phase. These results indicate a significant role of the ventral putamen and the hypothalamus in the plateau phase and confirm that paralimbic and limbic components of the human brain differentially coordinate the sexual response in a psychosexual phase-dependent manner.

19.
Gonzalo D, Shallice T, Dolan R. NeuroImage 2000, 11(3): 243-255.
Functional imaging studies of learning and memory have primarily focused on stimulus material presented within a single modality (see review by Gabrieli, 1998, Annu. Rev. Psychol. 49: 87-115). In the present study we investigated mechanisms for learning material presented in the visual and auditory modalities, using single-trial functional magnetic resonance imaging. We evaluated time-dependent learning effects under two conditions involving presentation of consistent (repeatedly presented in the same combination) or inconsistent (randomly paired) audiovisual pairs. We also evaluated time-dependent changes for bimodal (auditory and visual) presentations relative to a condition in which auditory stimuli were repeatedly presented alone. Using a time by condition analysis to compare neural responses to consistent versus inconsistent audiovisual pairs, we found significant time-dependent learning effects in medial parietal and right dorsolateral prefrontal cortices. In contrast, time-dependent effects were seen in the left angular gyrus, bilateral anterior cingulate gyrus, and occipital areas bilaterally. A comparison of paired (bimodal) versus unpaired (unimodal) conditions was associated with time-dependent changes in posterior hippocampal and superior frontal regions for both consistent and inconsistent pairs. The results provide evidence that associative learning for stimuli presented in different sensory modalities is supported by neural mechanisms similar to those described for other kinds of memory processes. The involvement of the posterior hippocampus and superior frontal gyrus in bimodal learning for both consistent and inconsistent pairs supports a putative function for these regions in associative learning independent of sensory modality.

20.
It is widely accepted that dorsolateral prefrontal cortex (DLPFC) is activated at the time of action generation in humans. However, the previous functional neuroimaging studies that have supported this hypothesis temporally integrated brain dynamics and therefore could not demonstrate when DLPFC underwent activation relative to the emergence of voluntary behavior. Data that are time-locked to the instant of voluntary action execution do not reveal DLPFC activation at that moment. Rather, activated foci are seen at the frontal poles. We investigated this apparent conundrum through three differentially constrained experiments, utilizing functional magnetic resonance imaging to identify those prefrontal areas exhibiting functional change at the moment of spontaneous action execution. We observed profound functional dissociation between anterior and dorsolateral regions, compatible with their involvement at different points during the temporal evolution of action: bilaterally the frontal poles activated at the moment of execution, while simultaneously (and relative to a prior activation state) left DLPFC 'deactivated.'
