Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
The ability to comprehend narratives constitutes an important component of human development and experience. The neural correlates of auditory narrative comprehension in children were investigated in a large-scale functional magnetic resonance imaging (fMRI) study involving 313 subjects aged 5-18. Using group independent component analysis (ICA), bilateral task-related components were found comprising the primary auditory cortex, the mid-superior temporal gyrus, the hippocampus, the angular gyrus, and the medial aspect of the parietal lobule (precuneus/posterior cingulate). In addition, a right-lateralized component was found involving the most posterior aspect of the superior temporal gyrus, and a left-lateralized component was found comprising the inferior frontal gyrus (including Broca's area), the inferior parietal lobule, and the middle temporal gyrus. Using a novel data-driven analysis technique, increased task-related activity related to age was found in the components comprising the mid-superior temporal gyrus (Wernicke's area) and the posterior aspect of the superior temporal gyrus, while decreased activity related to age was found in the component comprising the angular gyrus. The results are discussed in light of recent hypotheses involving the functional segregation of Wernicke's area and the specific role of the mid-superior temporal gyrus in speech comprehension.

2.
An fMRI investigation of syllable sequence production
Bohland JW, Guenther FH. NeuroImage 2006, 32(2): 821-841
Fluent speech comprises sequences that are composed from a finite alphabet of learned words, syllables, and phonemes. The sequencing of discrete motor behaviors has received much attention in the motor control literature, but relatively little of it has focused directly on speech production. In this paper, we investigate the cortical and subcortical regions involved in organizing and enacting sequences of simple speech sounds. Sparse event-triggered functional magnetic resonance imaging (fMRI) was used to measure responses to preparation and overt production of non-lexical three-syllable utterances, parameterized by two factors: syllable complexity and sequence complexity. The comparison of overt production trials to preparation-only trials revealed a network related to the initiation of a speech plan, control of the articulators, and hearing one's own voice. This network included the primary motor and somatosensory cortices, auditory cortical areas, the supplementary motor area (SMA), the precentral gyrus of the insula, and portions of the thalamus, basal ganglia, and cerebellum. Additional stimulus complexity led to increased engagement of the basic speech network and recruitment of additional areas known to be involved in sequencing non-speech motor acts. In particular, the left hemisphere inferior frontal sulcus and posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, the SMA and pre-SMA, the basal ganglia, anterior thalamus, and the cerebellum showed increased activity for more complex stimuli. We hypothesize mechanistic roles for the extended speech production network in the organization and execution of sequences of speech sounds.

3.
Osnes B, Hugdahl K, Specht K. NeuroImage 2011, 54(3): 2437-2445
Several reports of premotor cortex involvement in speech perception have been put forward, yet the functional role of premotor cortex remains under debate. To investigate this role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. The same transformation from white noise into a musical instrument sound served as the control condition. The fMRI data were modelled with Dynamic Causal Modeling (DCM), in which the effective connectivity between Heschl's gyrus, planum temporale, superior temporal sulcus, and premotor cortex was tested. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted; it was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model that contained significant interconnections between Heschl's gyrus, planum temporale, and superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and superior temporal sulcus and a connection from planum temporale to premotor cortex were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, we conclude that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.
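The Bayesian model selection step in the abstract above scores competing connectivity models by their approximate model evidence. The actual DCM machinery lives in SPM; purely as a self-contained illustration of the selection principle, the sketch below compares two toy "connected vs. not connected" linear models via BIC, a common approximation to the log model evidence (all data and model definitions are invented):

```python
import numpy as np

# Toy illustration of model selection via BIC. Real DCM comparison uses
# SPM's variational free energy; this sketch only shows the principle of
# scoring competing models of the same data and picking the best one.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                        # "driving input" time course
y = 0.8 * x + rng.normal(scale=0.5, size=n)   # region response: model A is true

def bic(y, X):
    """BIC for an ordinary least-squares model with design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + X.shape[1] * np.log(len(y))

X_a = np.column_stack([x, np.ones(n)])   # model A: input drives the region
X_b = np.ones((n, 1))                    # model B: no connection
bic_a, bic_b = bic(y, X_a), bic(y, X_b)
# Lower BIC ~ higher evidence; a BIC difference above ~6 is "strong".
print(bic_a < bic_b)
```

The same logic, with free energy in place of BIC, underlies choosing between DCMs that do or do not include a premotor connection.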

4.
The role of attention in speech comprehension is not well understood. We used fMRI to study the neural correlates of auditory word, pseudoword, and nonspeech (spectrally rotated speech) perception during a bimodal (auditory, visual) selective attention task. In three conditions, Attend Auditory (ignore visual), Ignore Auditory (attend visual), and Visual (no auditory stimulation), 28 subjects performed a one-back matching task in the assigned attended modality. The visual task, attending to rapidly presented Japanese characters, was designed to be highly demanding in order to prevent attention to the simultaneously presented auditory stimuli. Regardless of stimulus type, attention to the auditory channel enhanced activation by the auditory stimuli (Attend Auditory>Ignore Auditory) in bilateral posterior superior temporal regions and left inferior frontal cortex. Across attentional conditions, there were main effects of speech processing (word+pseudoword>rotated speech) in left orbitofrontal cortex and several posterior right hemisphere regions, though these areas also showed strong interactions with attention (larger speech effects in the Attend Auditory than in the Ignore Auditory condition) and no significant speech effects in the Ignore Auditory condition. Several other regions, including the postcentral gyri, left supramarginal gyrus, and temporal lobes bilaterally, showed similar interactions due to the presence of speech effects only in the Attend Auditory condition. Main effects of lexicality (word>pseudoword) were isolated to a small region of the left lateral prefrontal cortex. Examination of this region showed significant word>pseudoword activation only in the Attend Auditory condition. Several other brain regions, including left ventromedial frontal lobe, left dorsal prefrontal cortex, and left middle temporal gyrus, showed Attention x Lexicality interactions due to the presence of lexical activation only in the Attend Auditory condition. 
These results support a model in which neutral speech presented in an unattended sensory channel undergoes relatively little processing beyond the early perceptual level. Specifically, processing of phonetic and lexical-semantic information appears to be very limited in such circumstances, consistent with prior behavioral studies.
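The Attention x Lexicality interactions reported above follow standard 2x2 factorial logic: a lexicality effect that exists only under attention produces a non-zero interaction contrast. A minimal sketch with invented numbers (not values from the study):

```python
import numpy as np

# 2x2 design: Attention (attend/ignore) x Lexicality (word/pseudoword).
# A word > pseudoword effect present only under attention shows up as a
# positive interaction contrast. All means below are hypothetical.
rng = np.random.default_rng(1)
n = 50  # trials per cell
means = {
    ("attend", "word"): 1.0,
    ("attend", "pseudoword"): 0.4,
    ("ignore", "word"): 0.1,
    ("ignore", "pseudoword"): 0.1,   # no lexicality effect when ignored
}
data = {cell: m + rng.normal(scale=0.2, size=n) for cell, m in means.items()}
cell_mean = {cell: d.mean() for cell, d in data.items()}

# Interaction contrast:
# (word - pseudoword | attend) - (word - pseudoword | ignore)
lex_attend = cell_mean[("attend", "word")] - cell_mean[("attend", "pseudoword")]
lex_ignore = cell_mean[("ignore", "word")] - cell_mean[("ignore", "pseudoword")]
interaction = lex_attend - lex_ignore
print(round(interaction, 2))
```

A region showing "lexical activation only in the Attend Auditory condition" is exactly a region where this contrast is reliably positive.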

5.
Neural mechanisms underlying auditory feedback control of speech
The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 136 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech.

6.
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual), heard but not seen (audio-alone), or seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the number of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.

7.
Many techniques to study early functional brain development lack the whole-brain spatial resolution that is available with fMRI. We utilized a relatively novel method in which fMRI data were collected from children during natural sleep. Stimulus-evoked responses to auditory and visual stimuli as well as stimulus-independent functional networks were examined in typically developing 2-4-year-old children. Reliable fMRI data were collected from 13 children during presentation of auditory stimuli (tones, vocal sounds, and nonvocal sounds) in a block design. Twelve children were presented with visual flashing lights at 2.5 Hz. When analyses combined all three types of auditory stimulus conditions as compared to rest, activation included bilateral superior temporal gyri/sulci (STG/S) and right cerebellum. Direct comparisons between conditions revealed significantly greater responses to nonvocal sounds and tones than to vocal sounds in a number of brain regions including superior temporal gyrus/sulcus, medial frontal cortex, and right lateral cerebellum. The response to visual stimuli was localized to occipital cortex. Furthermore, stimulus-independent functional connectivity MRI analyses (fcMRI) revealed functional connectivity between STG and other temporal regions (including contralateral STG) and medial and lateral prefrontal regions. Functional connectivity with an occipital seed was localized to occipital and parietal cortex. In sum, 2-4-year-olds showed a differential fMRI response both between stimulus modalities and between stimuli in the auditory modality. Furthermore, superior temporal regions showed functional connectivity with numerous higher-order regions during sleep. We conclude that the use of sleep fMRI may be a valuable tool for examining functional brain organization in young children.
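The seed-based fcMRI analyses described above rest on a simple operation: correlate the seed region's time course with every other voxel's time course. A minimal synthetic sketch (the "network" structure and signal strengths are invented):

```python
import numpy as np

# Minimal seed-based functional connectivity: Pearson correlation between
# a seed time course and each voxel. One synthetic voxel shares a latent
# fluctuation with the seed; the other is unrelated noise.
rng = np.random.default_rng(2)
t = 240                                   # time points
latent = rng.normal(size=t)               # shared network fluctuation
seed = latent + 0.5 * rng.normal(size=t)
voxels = np.vstack([
    latent + 0.5 * rng.normal(size=t),    # voxel 0: in the seed's network
    rng.normal(size=t),                   # voxel 1: unrelated
])

def seed_correlation(seed, voxels):
    """Pearson r between the seed time course and each voxel (rows)."""
    s = (seed - seed.mean()) / seed.std()
    v = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
    return v @ s / len(s)

r = seed_correlation(seed, voxels)
print(r.round(2))   # voxel 0 strongly correlated, voxel 1 near zero
```

Thresholding such a correlation map is what produces statements like "functional connectivity between STG and contralateral STG" above.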

8.
The way humans comprehend narrative speech plays an important part in human development and experience. A group of 313 children aged 5-18 took part in a large-scale functional magnetic resonance imaging (fMRI) study designed to investigate the neural correlates of auditory narrative comprehension. The results were analyzed to investigate the age-related changes in brain activity involved in the narrative language comprehension circuitry. We found age-related differences in brain activity which may reflect either changes in local neuroplasticity (of the regions involved) in the developing brain or a more global transformation of brain activity related to neuroplasticity. To investigate this issue, Structural Equation Modeling (SEM) was applied to the results obtained from a group independent component analysis (Schmithorst, V.J., Holland, S.K., et al., 2005. Cognitive modules utilized for narrative comprehension in children: a functional magnetic resonance imaging study. NeuroImage), and the age-related differences were examined in terms of changes in path coefficients between brain regions. The group Independent Component Analysis (ICA) had identified five bilateral task-related components comprising the primary auditory cortex, the mid-superior temporal gyrus, the most posterior aspect of the superior temporal gyrus, the hippocampus, the angular gyrus, and the medial aspect of the parietal lobule (precuneus/posterior cingulate). Furthermore, a left-lateralized network (sixth component) was also identified comprising the inferior frontal gyrus (including Broca's area), the inferior parietal lobule, and the middle temporal gyrus. The components (brain regions) for the SEM were identified based on the ICA maps, and the results are discussed in light of recent neuroimaging studies corroborating the functional segregation of Broca's and Wernicke's areas and the important role played by the right hemisphere in narrative comprehension. The classical Wernicke-Geschwind (WG) model for speech processing is expanded to a two-route model involving a direct route between Broca's and Wernicke's areas and an indirect route involving the parietal lobe.
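The SEM step in the abstract above estimates path coefficients between ICA-derived regions and asks whether they differ across groups (here, with age). As a rough, self-contained sketch of that idea only (a real SEM fits all paths jointly; the regions, path values, and "groups" below are invented):

```python
import numpy as np

# Toy path model A -> B -> C. Each path coefficient is estimated by a
# simple regression along the hypothesized path, then compared across two
# hypothetical groups that differ in the strength of the B -> C path.
rng = np.random.default_rng(3)

def simulate(path_ab, path_bc, n=500):
    a = rng.normal(size=n)
    b = path_ab * a + rng.normal(scale=0.5, size=n)
    c = path_bc * b + rng.normal(scale=0.5, size=n)
    return a, b, c

def fit_path(x, y):
    """OLS slope of y on x = estimated path coefficient x -> y."""
    return np.cov(x, y, bias=True)[0, 1] / x.var()

# Hypothetical "younger" vs "older" groups with a strengthened B -> C path.
a1, b1, c1 = simulate(path_ab=0.6, path_bc=0.3)
a2, b2, c2 = simulate(path_ab=0.6, path_bc=0.9)
print(fit_path(b1, c1), fit_path(b2, c2))
```

Comparing the recovered coefficients across groups is the analogue of the study's age-related changes in path coefficients between brain regions.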

9.
The separation of concurrent sounds is paramount to human communication in everyday settings. The primary auditory cortex and the planum temporale are thought to be essential for both the separation of physical sound sources into perceptual objects and the comparison of those representations with previously learned acoustic events. To examine the role of these areas in speech separation, we measured brain activity using event-related functional Magnetic Resonance Imaging (fMRI) while participants were asked to identify two phonetically different vowels presented simultaneously. The processing of brief speech sounds (200 ms in duration) activated the thalamus and superior temporal gyrus bilaterally, left anterior temporal lobe, and left inferior temporal gyrus. A comparison of fMRI signals between trials in which participants successfully identified both vowels as opposed to trials in which only one of the two vowels was recognized revealed enhanced activity in left thalamus, Heschl's gyrus, superior temporal gyrus, and the planum temporale. Because participants successfully identified at least one of the two vowels on each trial, the difference in fMRI signal indexes the extra computational work needed to successfully segregate and identify the other concurrently presented vowel. The results support the view that auditory areas in or near Heschl's gyrus and in the planum temporale are involved in sound segregation and reveal a link between left thalamo-cortical activation and the successful separation and identification of simultaneous speech sounds.

10.
In a recent fMRI language comprehension study, we asked participants to listen to word-pairs and to make same/different judgments for regularly and irregularly inflected word forms [Tyler, L.K., Stamatakis, E.A., Post, B., Randall, B., Marslen-Wilson, W.D., in press. Temporal and frontal systems in speech comprehension: an fMRI study of past tense processing. Neuropsychologia, available online.]. We found that a fronto-temporal network, including the anterior cingulate cortex (ACC), left inferior frontal gyrus (LIFG), bilateral superior temporal gyrus (STG) and middle temporal gyrus (MTG), is preferentially activated for regularly inflected words. We report a complementary re-analysis of the data seeking to understand the behavior of this network in terms of inter-regional covariances, which are taken as an index of functional connectivity. We identified regions in which activity was predicted by ACC and LIFG activity, and critically, by the interaction between these two regions. Furthermore, we determined the extent to which these inter-regional correlations were influenced differentially by the experimental context (i.e. regularly or irregularly inflected words). We found that functional connectivity between LIFG and left MTG is positively modulated by activity in the ACC and that this effect is significantly greater for regulars than irregulars. These findings suggest a monitoring role for the ACC which, in the context of processing regular inflected words, is associated with greater engagement of an integrated fronto-temporal language system.

11.
The purpose of this study was to develop a functional MRI method to examine overt speech in stroke patients with aphasia. An fMRI block design for overt picture naming was utilized which took advantage of the hemodynamic response delay, whereby increased blood flow remains for 4-8 s after the task (Friston, K.J., Jezzard, P., Turner, R., 1994. Analysis of functional MRI time-series. Hum. Brain Mapp. 1, 153-171). This allowed task-related information to be obtained after the task, minimizing motion artifact from overt speech (Eden, G.F., Joseph, J., Brown, H.E., Brown, C.P., Zeffiro, T.A., 1999. Utilizing hemodynamic delay and dispersion to detect fMRI signal change without auditory interference: the behavior interleaved gradients technique. Magn. Reson. Med. 41, 13-20; Birn, R.M., Bandettini, P.A., Cox, R.W., Shaker, R., 1999. Event-related fMRI of tasks involving brief motion. Hum. Brain Mapp. 7, 106-114; Birn, R.M., Cox, R.W., Bandettini, P.A., 2004. Experimental designs and processing strategies for fMRI studies involving overt verbal responses. NeuroImage 23, 1046-1058). Five chronic aphasia patients participated (4 mild-moderate and 1 severe nonfluent/global). The four mild-moderate patients, who correctly named 88-100% of the pictures during fMRI, had a greater number of suprathreshold voxels in L supplementary motor area (SMA) than in R SMA (P < 0.07). Three of these four mild-moderate patients showed activation in R BA 45 and/or 44, along with L temporal and/or parietal regions. The severe patient, who named no pictures, activated almost twice as many voxels in R SMA as in L SMA. He also showed activation in R BA 44, but had remarkably extensive L and R temporal activation. His poor naming and widespread temporal activation may reflect poor modulation of the bi-hemispheric neural network for naming. Results indicate that this fMRI block design utilizing the hemodynamic response delay can be used to study overt naming in aphasia patients, including those with mild-moderate or severe aphasia. This method permitted verification that the patients were cooperating with the task during fMRI. It has application for future fMRI studies of overt speech in aphasia.

12.
Pauses during continuous speech, particularly those that occur within clauses, are thought to reflect the planning of forthcoming verbal output. We used functional Magnetic Resonance Imaging (fMRI) to examine their neural correlates. Six volunteers were scanned while describing seven Rorschach inkblots, producing 3 min of speech per inkblot. In an event-related design, the level of blood oxygenation level dependent (BOLD) contrast during brief speech pauses (mean duration 1.3 s, SD 0.3 s) during overt speech was contrasted with that during intervening periods of articulation. We then examined activity associated with pauses that occurred within clauses and pauses that occurred at grammatical junctions. Relative to articulation during speech, pauses were associated with activation in the banks of the left superior temporal sulcus (BA 39/22), at the temporoparietal junction. Continuous speech was associated with greater activation bilaterally in the inferior frontal (BA 44/45), middle frontal (BA 8) and anterior cingulate (BA 24) gyri, the middle temporal sulcus (BA 21/22), the occipital cortex and the cerebellum. Left temporal activation was evident during pauses that occurred within clauses but not during pauses at grammatical junctions. In summary, articulation during continuous speech involved frontal, temporal and cerebellar areas, while pausing was associated with activity in the left temporal cortex, especially when this occurred within a clause. The latter finding is consistent with evidence that within-clause pauses are a correlate of speech planning and in particular lexical retrieval.

13.
Unified SPM-ICA for fMRI analysis
Hu D, Yan L, Liu Y, Zhou Z, Friston KJ, Tan C, Wu D. NeuroImage 2005, 25(3): 746-755
A widely used tool for functional magnetic resonance imaging (fMRI) data analysis, statistical parametric mapping (SPM), is based on the general linear model (GLM). SPM therefore requires a priori knowledge or specific assumptions about the time courses contributing to signal changes. In contradistinction, independent component analysis (ICA) is a data-driven method based on the assumption that the causes of responses are statistically independent. Here we describe a unified method, which combines temporal ICA (tICA) and SPM for analyzing fMRI data. tICA was applied to fMRI datasets to disclose independent components, whose number was determined by the Bayesian information criterion (BIC). The resulting components were used to construct the design matrix of a GLM. Parameters were estimated and regionally specific statistical inferences were made about activations in the usual way. The sensitivity and specificity were evaluated using Monte Carlo simulations. The receiver operating characteristic (ROC) curves indicated that the unified SPM-ICA method performed better. Moreover, SPM-ICA was applied to fMRI datasets from twelve normal subjects performing left and right hand movements. The areas identified corresponded to motor areas (premotor and sensorimotor areas and the SMA) and were consistently task-related. Part of the frontal lobe, parietal cortex, and cingulate gyrus also showed transiently task-related responses. The unified method requires less supervision than conventional SPM and enables classical inference about the expression of independent components. Our results also suggest that the method has a higher sensitivity than SPM analyses.
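The unified pipeline described above — data-driven temporal components feeding a GLM design matrix, followed by voxel-wise inference — can be sketched end to end. To keep the example dependency-free, an SVD stands in for the paper's tICA-plus-BIC component estimation; all data are synthetic:

```python
import numpy as np

# Sketch of the SPM-ICA idea: (1) extract temporal components from the
# data, (2) use them as GLM regressors, (3) test each voxel as usual.
rng = np.random.default_rng(4)
t, v = 160, 300                               # time points, voxels
task = np.tile([0.0] * 10 + [1.0] * 10, 8)    # boxcar "task" time course
data = rng.normal(size=(t, v))
data[:, :30] += 1.5 * task[:, None]           # first 30 voxels respond

# 1) Data-driven temporal components (SVD as a stand-in for tICA).
u, s, _ = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
components = u[:, :5]                         # keep a few strong components

# 2) Build the GLM design matrix from the components (+ intercept).
X = np.column_stack([components, np.ones(t)])

# 3) Fit the GLM voxel-wise; t-statistics for the component that best
#    matches the task (found by correlation with the boxcar).
beta, *_ = np.linalg.lstsq(X, data, rcond=None)
resid = data - X @ beta
sigma2 = (resid ** 2).sum(axis=0) / (t - X.shape[1])
task_idx = np.argmax([abs(np.corrcoef(task, components[:, k])[0, 1]) for k in range(5)])
c = np.zeros(X.shape[1]); c[task_idx] = 1.0
se = np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
tstat = np.abs(beta[task_idx] / se)
print(tstat[:30].mean() > tstat[30:].mean())  # responding voxels score higher
```

The appeal of the approach is visible here: no hand-specified hemodynamic regressor is needed, yet classical voxel-wise inference still applies to the component's expression.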

14.
Objective: To use resting-state fMRI to observe changes in whole-brain network connectivity of the bilateral globus pallidus in patients with hepatic encephalopathy (HE). Methods: Resting-state fMRI was performed in 21 patients with overt HE (OHE group), 22 patients with minimal HE (MHE group), and 21 healthy volunteers (HC group). The bilateral globus pallidus was selected as the seed region, and seed-to-voxel correlation analysis of brain functional networks was used to process the data, followed by statistical analysis. Results: Brain regions showing group differences in network connectivity were located mainly in the frontal lobe, temporal lobe, bilateral caudate nuclei, and parietal lobe (all P<0.05). Compared with the HC group, the OHE group showed weakened connectivity in regions including the right fusiform gyrus, right inferior occipital gyrus, left orbital part of the superior frontal gyrus, and right middle frontal gyrus, and strengthened connectivity in regions including the bilateral caudate nuclei, left triangular part of the inferior frontal gyrus, and left parahippocampal gyrus; the MHE group showed weakened connectivity in regions including the bilateral middle temporal gyri, left precentral gyrus, and left medial superior frontal gyrus. Compared with the MHE group, the OHE group showed weakened connectivity in the right fusiform gyrus, right precuneus, right middle temporal gyrus, and right angular gyrus, and strengthened connectivity in the right inferior temporal gyrus and bilateral caudate nuclei (all P<0.05). Conclusion: Functional network connectivity between cortical and subcortical regions is abnormal in patients with OHE and MHE; cognitive impairment in HE patients may be related to altered functional network connectivity.

15.
Functional magnetic resonance imaging (fMRI) studies can provide insight into the neural correlates of hallucinations. Commonly, such studies require self-reports about the timing of the hallucination events. While many studies have found activity in higher-order sensory cortical areas, only a few have demonstrated activity of the primary auditory cortex during auditory verbal hallucinations. In this case, using self-reports as a model of brain activity may not be sensitive enough to capture all neurophysiological signals related to hallucinations. We used spatial independent component analysis (sICA) to extract the activity patterns associated with auditory verbal hallucinations in six schizophrenia patients. sICA decomposes the functional data set into a set of spatial maps without the use of any input function. The resulting activity patterns from auditory and sensorimotor components were further analyzed in a single-subject fashion using a visualization tool that allows for easy inspection of the variability of regional brain responses. We found bilateral auditory cortex activity, including Heschl's gyrus, during hallucinations of one patient, and unilateral auditory cortex activity in two more patients. The associated time courses showed a large variability in shape, amplitude, and time of onset relative to the self-reports. However, the average of the time courses during hallucinations showed a clear association with this clinical phenomenon. We suggest that detection of this activity may be facilitated by examining hallucination epochs of sufficient length, in combination with a data-driven approach.

16.
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but more cortex was active in her right STS than in any of the normal controls. Further, the amplitude of the BOLD response in the right STS to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech.

17.
The beneficial effects of mindful awareness and mindfulness meditation training on physical and psychological health are thought to be mediated in part through changes in underlying brain processes. Functional connectivity MRI (fcMRI) allows identification of functional networks in the brain. It has been used to examine state-dependent activity and is well suited for studying states such as meditation. We applied fcMRI to determine if Mindfulness-Based Stress Reduction (MBSR) training is effective in altering intrinsic connectivity networks (ICNs). Healthy women were randomly assigned to participate in an 8-week MBSR training course or an 8-week waiting period. After 8 weeks, fMRI data (1.5T) were acquired while subjects rested with eyes closed, with the instruction to pay attention to the sounds of the scanner environment. Group independent component analysis was performed to investigate training-related changes in functional connectivity. Significant MBSR-related differences in functional connectivity were found mainly in auditory/salience and medial visual networks. Relative to findings in the control group, MBSR subjects showed (1) increased functional connectivity within auditory and visual networks, (2) increased functional connectivity between auditory cortex and areas associated with attentional and self-referential processes, (3) stronger anticorrelation between auditory and visual cortex, and (4) stronger anticorrelation between visual cortex and areas associated with attentional and self-referential processes. These findings suggest that 8 weeks of mindfulness meditation training alters intrinsic functional connectivity in ways that may reflect a more consistent attentional focus, enhanced sensory processing, and reflective awareness of sensory experience.

18.
This study was conducted to investigate the connectivity architecture of neural structures involved in the processing of emotional speech melody (prosody). Twenty-four subjects underwent event-related functional magnetic resonance imaging (fMRI) while rating the emotional valence of either the prosody or the semantics of binaurally presented adjectives. Conventional analysis of the fMRI data revealed activation within the right posterior middle temporal gyrus and bilateral inferior frontal cortex during evaluation of affective prosody, and within the left temporal pole, orbitofrontal, and medial superior frontal cortex during judgment of affective semantics. Dynamic causal modeling (DCM) in combination with Bayes factors was used to compare competing neurophysiological models with different intrinsic connectivity structures and input regions within the network of brain regions underlying comprehension of affective prosody. Comparison at the group level revealed the superiority of a model in which the right temporal cortex serves as the input region, as compared to models in which one of the frontal areas is assumed to receive external inputs. Moreover, models with parallel information conductance from the right temporal cortex were superior to models in which the two frontal lobes accomplish serial processing steps. In conclusion, the connectivity analysis supports the view that evaluation of affective prosody requires prior analysis of acoustic features within the temporal cortex and that transfer of information from the temporal cortex to the frontal lobes occurs via parallel pathways.

19.
Objective: To use functional magnetic resonance imaging (fMRI) to observe differences in brain activity during covert versus overt picture-naming tasks. Methods: fMRI data were acquired from 10 healthy volunteers (aged 24-27 years) while they performed covert and overt picture naming; head-motion results and statistical activation maps of functional brain regions for the two tasks were obtained through analysis. Results: Mean and maximum head motion were lower for the covert task than for the overt task, but the difference was not statistically significant (P=0.23). The neural activation network for covert picture naming included the bilateral occipital gyri and cerebellum, bilateral supplementary motor areas, postcentral gyrus, bilateral inferior frontal gyri, and anterior cingulate gyrus. Overt picture naming produced stronger activation in the above regions and additionally activated the bilateral precentral gyri (BA4), bilateral posterior superior temporal gyri, left anterior superior temporal gyrus, bilateral thalami and basal ganglia regions, and left insula. Conclusion: Covert and overt picture naming engage distinct neural processing networks and stages; the two tasks cannot substitute for each other.

20.
Phonation is defined as a laryngeal motor behavior used for speech production, which involves a highly specialized coordination of laryngeal and respiratory neuromuscular control. During speech, brief periods of vocal fold vibration for vowels are interspersed by voiced and unvoiced consonants, glottal stops and glottal fricatives (/h/). It remains unknown whether laryngeal/respiratory coordination of phonation for speech relies on separate neural systems from respiratory control or whether a common system controls both behaviors. To identify the central control system for human phonation, we used event-related fMRI to contrast brain activity during phonation with activity during prolonged exhalation in healthy adults. Both whole-brain analyses and region of interest comparisons were conducted. Production of syllables containing glottal stops and vowels was accompanied by activity in left sensorimotor, bilateral temporoparietal and medial motor areas. Prolonged exhalation similarly involved activity in left sensorimotor and temporoparietal areas but not medial motor areas. Significant differences between phonation and exhalation were found primarily in the bilateral auditory cortices with whole-brain analysis. The ROI analysis similarly indicated task differences in the auditory cortex with differences also detected in the inferolateral motor cortex and dentate nucleus of the cerebellum. A second experiment confirmed that activity in the auditory cortex only occurred during phonation for speech and did not depend upon sound production. Overall, a similar central neural system was identified for both speech phonation and voluntary exhalation that primarily differed in auditory monitoring.

