Similar Documents
20 similar documents retrieved.
1.
Speech comprehension includes both bottom-up and top-down processes, and imaging studies have isolated a frontal-temporal network of brain areas active during speech perception. However, the precise role of the various areas in this network during normal speech comprehension is not yet fully understood. In the present fMRI study, the signal-to-noise ratio (SNR) of spoken sentences was varied in 144 steps, and speech intelligibility was measured independently in order to study in detail its effect on the activation of brain areas involved in speech perception. Relative to noise alone, intelligible speech in noise evoked spatially extended activation in left frontal, bilateral temporal, and medial occipital brain regions. Increasing SNR led to a sigmoid-shaped increase of activation in all areas of the frontal-temporal network. The onset of the activation with respect to SNR was similar in temporal and frontal regions, but frontal activation was found to be smaller than temporal activation at the highest SNRs. Finally, only Broca's area (BA44) showed activation to unintelligible speech presented at low SNRs. These findings demonstrate distinct roles of frontal and temporal areas in speech comprehension in that temporal regions subserve bottom-up processing of speech, whereas frontal areas are more involved in top-down supplementary mechanisms.
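
The sigmoid-shaped activation-versus-SNR relationship described in this abstract can be characterised by fitting a logistic function to region-of-interest signal-change estimates. The sketch below is not the study's analysis pipeline; the SNR grid, the simulated percent-signal-change values, and the logistic parameterization are assumptions made only to keep the example self-contained.

# Minimal sketch (not from the study): fitting a sigmoid to percent-signal-change
# values measured at different SNR levels, illustrating the "sigmoid-shaped
# increase of activation" the abstract describes. All data below are invented.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(snr, amplitude, slope, midpoint, baseline):
    """Logistic function: activation rises from `baseline` by `amplitude`
    around the SNR value `midpoint`, with steepness `slope`."""
    return baseline + amplitude / (1.0 + np.exp(-slope * (snr - midpoint)))

snr_db = np.linspace(-12, 8, 11)                      # hypothetical SNR levels (dB)
bold = sigmoid(snr_db, 1.2, 0.8, -4.0, 0.1)           # simulated % signal change
bold += np.random.default_rng(0).normal(0, 0.05, bold.size)  # measurement noise

params, _ = curve_fit(sigmoid, snr_db, bold, p0=[1.0, 1.0, 0.0, 0.0])
amplitude, slope, midpoint, baseline = params
print(f"estimated activation midpoint: {midpoint:.1f} dB SNR")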

2.
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual) or heard but not seen (audio-alone) or when the speaker was seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the amount of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.

3.
The neural correlates of sign versus word production (cited 1 time: 0 self-citations, 1 by others)
The production of sign language involves two large articulators (the hands) moving through space and contacting the body. In contrast, speech production requires small movements of the tongue and vocal tract with no observable spatial contrasts. Nonetheless, both language types exhibit a sublexical layer of structure with similar properties (e.g., segments, syllables, feature hierarchies). To investigate which neural areas are involved in modality-independent language production and which are tied specifically to the input-output mechanisms of signed and spoken language, we reanalyzed PET data collected from 29 deaf signers and 64 hearing speakers who participated in a series of separate studies. Participants were asked to overtly name concrete objects from distinct semantic categories in either American Sign Language (ASL) or in English. The baseline task required participants to judge the orientation of unknown faces (overtly responding 'yes'/'no' for upright/inverted). A random effects analysis revealed that left mesial temporal cortex and the left inferior frontal gyrus were equally involved in both speech and sign production, suggesting a modality-independent role for these regions in lexical access. Within the left parietal lobe, two regions were more active for sign than for speech: the supramarginal gyrus (peak coordinates: -60, -35, +27) and the superior parietal lobule (peak coordinates: -26, -51, +54). Activation in these regions may be linked to modality-specific output parameters of sign language. Specifically, activation within left SMG may reflect aspects of phonological processing in ASL (e.g., selection of hand configuration and place of articulation features), whereas activation within SPL may reflect proprioceptive monitoring of motoric output.

4.
In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures which are hand movements that bear a formal relationship to the contents of speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed. Thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated what brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g. She touched the mouse) were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g. animal) or a gesture supporting the less frequent subordinate meaning (e.g. computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation to gesture-supported sentences as compared to sentences accompanied by a meaningless grooming movement. The main results are that when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration processes, this activation might reflect the interaction between the meaning of gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism of determining the goal of co-speech hand movements through an observation-execution matching process.

5.
Zhang Y  Kuhl PK  Imada T  Kotani M  Tohkura Y 《NeuroImage》2005,26(3):703-720
Linguistic experience alters an individual's perception of speech. We here provide evidence of the effects of language experience at the neural level from two magnetoencephalography (MEG) studies that compare adult American and Japanese listeners' phonetic processing. The experimental stimuli were American English /ra/ and /la/ syllables, phonemic in English but not in Japanese. In Experiment 1, the control stimuli were /ba/ and /wa/ syllables, phonemic in both languages; in Experiment 2, they were non-speech replicas of /ra/ and /la/. The behavioral and neuromagnetic results showed that Japanese listeners were less sensitive to the phonemic /r-l/ difference than American listeners. Furthermore, processing non-native speech sounds recruited significantly greater brain resources in both hemispheres and required a significantly longer period of brain activation in two regions, the superior temporal area and the inferior parietal area. The control stimuli showed no significant differences except that the duration effect in the superior temporal cortex also applied to the non-speech replicas. We argue that early exposure to a particular language produces a "neural commitment" to the acoustic properties of that language and that this neural commitment interferes with foreign language processing, making it less efficient.

6.
An fMRI investigation of syllable sequence production (cited 2 times: 0 self-citations, 2 by others)
Bohland JW  Guenther FH 《NeuroImage》2006,32(2):821-841
Fluent speech comprises sequences that are composed from a finite alphabet of learned words, syllables, and phonemes. The sequencing of discrete motor behaviors has received much attention in the motor control literature, but relatively little has been focused directly on speech production. In this paper, we investigate the cortical and subcortical regions involved in organizing and enacting sequences of simple speech sounds. Sparse event-triggered functional magnetic resonance imaging (fMRI) was used to measure responses to preparation and overt production of non-lexical three-syllable utterances, parameterized by two factors: syllable complexity and sequence complexity. The comparison of overt production trials to preparation only trials revealed a network related to the initiation of a speech plan, control of the articulators, and to hearing one's own voice. This network included the primary motor and somatosensory cortices, auditory cortical areas, supplementary motor area (SMA), the precentral gyrus of the insula, and portions of the thalamus, basal ganglia, and cerebellum. Additional stimulus complexity led to increased engagement of the basic speech network and recruitment of additional areas known to be involved in sequencing non-speech motor acts. In particular, the left hemisphere inferior frontal sulcus and posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, the SMA and pre-SMA, the basal ganglia, anterior thalamus, and the cerebellum showed increased activity for more complex stimuli. We hypothesize mechanistic roles for the extended speech production network in the organization and execution of sequences of speech sounds.

7.
A preliminary functional MRI study of Chinese-character and English word-form recognition (cited 4 times: 0 self-citations, 4 by others)
Objective: To explore the value of functional magnetic resonance imaging (fMRI) for studying the recognition of Chinese characters and English word forms in the human brain. Materials and methods: Twelve university students (5 male, 7 female) who were native Chinese speakers with normal uncorrected vision participated in the experiment. Functional scans were acquired on a GE Signa 1.5T MR scanner with an EPI sequence using the BOLD method. In the experimental tasks, Chinese and English stimuli (real, pseudo, and non-characters/words) were projected onto a screen, which participants viewed through a mirror mounted on the head coil and identified. Data were analyzed with upgraded SPM 99 software, covering data acquisition, preprocessing, and model estimation to display the results. Results: Real Chinese characters evoked significant activation in the left frontal lobe, precentral gyrus (BA6), and occipital lobe (BA18), with weaker activation in the left parietal lobe, postcentral gyrus (BA3), right inferior frontal gyrus (BA9), and bilateral temporal lobes. Real English words evoked significant activation in the left middle frontal gyrus, precentral gyrus, and left inferior frontal gyrus, with additional activation in the left fusiform gyrus (BA37), right lingual gyrus (BA18), and left parietal lobe (BA40). Chinese and English pseudo- and non-characters/words elicited only minor activation (P>0.05). For both Chinese and English stimuli, the activated volume in the left hemisphere was clearly larger than in the right; except in the occipital lobe, English elicited larger activation volumes than Chinese in frontal, temporal, and parietal regions. Conclusion: fMRI is an ideal noninvasive imaging method for studying Chinese and English language processing in the human brain; the left hemisphere is dominant for processing both. For native Chinese speakers, processing English requires more brain activity to accomplish.

8.
We investigated the perception and categorization of speech (vowels, syllables) and non-speech (tones, tonal contours) stimuli using MEG. In a delayed-match-to-sample paradigm, participants listened to two sounds and decided if they sounded exactly the same or different (auditory discrimination, AUD), or if they belonged to the same or different categories (category discrimination, CAT). Stimuli across the two conditions were identical; the category definitions for each kind of sound were learned in a training session before recording. MEG data were analyzed using an induced wavelet transform method to investigate task-related differences in time-frequency patterns. In auditory cortex, for both AUD and CAT conditions, an alpha (8-13 Hz) band activation enhancement during the delay period was found for all stimulus types. A clear difference between AUD and CAT conditions was observed for the non-speech stimuli in auditory areas and for both speech and non-speech stimuli in frontal areas. The results suggest that alpha band activation in auditory areas is related to both working memory and categorization for new non-speech stimuli. The fact that the dissociation between speech and non-speech occurred in auditory areas, but not frontal areas, points to different categorization mechanisms and networks for newly learned (non-speech) and natural (speech) categories.
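
As a rough illustration of the induced time-frequency approach mentioned in this abstract, the sketch below computes power from a Morlet wavelet transform of each trial separately and then averages across trials, so that non-phase-locked (induced) activity is preserved. The wavelet parameters, the sampling rate, and the random "MEG" data are assumptions for the example only; they do not reproduce the study's pipeline.

# Hedged sketch of an induced time-frequency analysis with a hand-rolled Morlet
# wavelet. Power is computed per trial and then averaged over trials.
import numpy as np

def morlet(freq, sfreq, n_cycles=7):
    """Complex Morlet wavelet at `freq` Hz sampled at `sfreq` Hz."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / sfreq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))

def induced_power(trials, freqs, sfreq):
    """trials: array (n_trials, n_times). Returns (n_freqs, n_times) mean power."""
    n_trials, n_times = trials.shape
    power = np.zeros((len(freqs), n_times))
    for fi, f in enumerate(freqs):
        w = morlet(f, sfreq)
        for trial in trials:
            analytic = np.convolve(trial, w, mode="same")
            power[fi] += np.abs(analytic) ** 2      # per-trial power, summed here
    return power / n_trials                          # ...and averaged here

sfreq = 250.0
trials = np.random.default_rng(1).normal(size=(40, 500))   # fake single-sensor data
alpha_power = induced_power(trials, freqs=np.arange(8, 14), sfreq=sfreq)
print(f"mean alpha-band power in an assumed delay window: {alpha_power[:, 250:400].mean():.3f}")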

9.
The parieto-frontal network plays a crucial role in the transformations that convert visual information into motor commands for hand reaching movements. Here we use electroencephalography to determine whether the planning of reaching movements to visual and somatosensory targets involves a similar spatio-temporal pattern of neural activity. Subjects performed reaching movements toward spatial locations defined either by visual (light-emitting diode) or somatosensory (vibration of a fingertip of the contralateral hand) stimuli. To identify the activations associated with sensorimotor transformations, we subtracted the event-related potentials recorded in a “static” task (the stimuli were presented but no movement was initiated) from those recorded in a “reach” task (a reach had to be initiated toward the spatial location of the stimuli). In the visual condition, reach-related activities were observed over parietal, premotor and sensorimotor areas contralateral to the reaching hand. Activation was first observed over parietal areas 140 ms after stimulus onset and progressed to frontal areas. The proprioceptive condition recruited a similar set of structures as for visual targets. However, the temporal pattern of activity within these cortical areas differed greatly. Activity was sustained over premotor and sensorimotor areas throughout the reaction time interval, occurring simultaneously with the parietal activation. These results suggest that a common cortical network serves to transform visual and somatosensory signals into motor commands, but that the interactions between the structures of this network differ. This raises the possibility that different coordinate frames are used to encode the motor error for the two target modalities.
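
The reach-minus-static subtraction described above can be sketched in a few lines: average the epochs of each task into event-related potentials and subtract them channel by channel. The array shapes, the channel index, and the sampling rate below are hypothetical placeholders, not the study's data.

# Minimal sketch of the subtraction logic: reach-related activity is isolated by
# subtracting the "static"-task ERP from the "reach"-task ERP, per channel.
import numpy as np

sfreq = 500.0                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
reach_trials = rng.normal(size=(80, 64, 400))     # trials x channels x samples
static_trials = rng.normal(size=(80, 64, 400))

reach_erp = reach_trials.mean(axis=0)             # average over trials
static_erp = static_trials.mean(axis=0)
reach_related = reach_erp - static_erp            # "reach" minus "static"

# Inspect a (hypothetical) parietal channel around 140 ms after stimulus onset,
# assuming the stimulus occurs at sample 0 of the epoch.
parietal_channel = 30
sample_140ms = int(0.140 * sfreq)
print(reach_related[parietal_channel, sample_140ms])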

10.
The way humans comprehend narrative speech plays an important part in human development and experience. A group of 313 children aged 5-18 years participated in a large-scale functional magnetic resonance imaging (fMRI) study designed to investigate the neural correlates of auditory narrative comprehension. The results were analyzed to identify age-related changes in brain activity within the narrative language comprehension circuitry. We found age-related differences in brain activity which may reflect either changes in local neuroplasticity (of the regions involved) in the developing brain or a more global, neuroplasticity-related transformation of brain activity. To investigate this issue, Structural Equation Modeling (SEM) was applied to the results obtained from a group independent component analysis (Schmithorst, V.J., Holland, S.K., et al., 2005. Cognitive modules utilized for narrative comprehension in children: a functional magnetic resonance imaging study. NeuroImage) and the age-related differences were examined in terms of changes in path coefficients between brain regions. The group Independent Component Analysis (ICA) had identified five bilateral task-related components comprising the primary auditory cortex, the mid-superior temporal gyrus, the most posterior aspect of the superior temporal gyrus, the hippocampus, the angular gyrus and the medial aspect of the parietal lobule (precuneus/posterior cingulate). Furthermore, a left-lateralized network (sixth component) was also identified comprising the inferior frontal gyrus (including Broca's area), the inferior parietal lobule, and the medial temporal gyrus. The components (brain regions) for the SEM were identified based on the ICA maps, and the results are discussed in light of recent neuroimaging studies corroborating the functional segregation of Broca's and Wernicke's areas and the important role played by the right hemisphere in narrative comprehension. The classical Wernicke-Geschwind (WG) model for speech processing is expanded to a two-route model involving a direct route between Broca's and Wernicke's areas and an indirect route involving the parietal lobe.

11.
Yoo WK  You SH  Ko MH  Tae Kim S  Park CH  Park JW  Hoon Ohn S  Hallett M  Kim YH 《NeuroImage》2008,39(4):1886-1895
Repetitive transcranial magnetic stimulation (rTMS) to the primary motor cortex (M1) may induce functional modulation of motor performance and sensory perception. To address the underlying neurophysiological modulation following 10 Hz rTMS applied over M1, we examined cortical activation using 3T functional magnetic resonance imaging (fMRI), as well as the associated motor and sensory behavioral changes. The motor performance measure involved a sequential finger motor task that was also used as an activation task during fMRI. For sensory assessment, current perception threshold was measured before and after rTMS outside the MR scanner, and noxious mechanical stimulation was used as an activation task during fMRI. We found that significant activation in the bilateral basal ganglia, left superior frontal gyrus, bilateral pre-SMA, right medial temporal lobe, right inferior parietal lobe, and right cerebellar hemisphere correlated with enhanced motor performance in subjects who received real rTMS compared with sham-stimulated controls. Conversely, significant deactivation in the right superior and middle frontal gyri, bilateral postcentral and bilateral cingulate gyri, left SMA, right insula, right basal ganglia, and right cerebellar hemisphere was associated with an increase in the sensory threshold. Our findings reveal that rTMS induced rapid changes in the sensorimotor networks associated with sensory perception and motor performance and demonstrate the complexity of such intervention.

12.
This 3-T fMRI study investigates brain regions similarly and differentially involved with listening and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, passively listen to speaking of the song lyrics, covertly sing the song lyrics visually presented, covertly speak the song lyrics visually presented, and to rest. The conjunction of passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus-induced auditory processing or of low-level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of the VI lobule of the posterior cerebellum, the anterior superior temporal gyrus, and the planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: the orbitofrontal cortex (listening task) and the subcallosal cingulate (covert production task). The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech. Hemispheric laterality was assessed by paired t-tests between active voxels in the contrast of interest relative to the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
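
The laterality test described in this abstract (comparing a contrast image with its left-right flipped counterpart) can be illustrated with a paired t-test over voxel values. The grid size, the toy threshold for "active" voxels, and the random data below are assumptions; a real analysis would use contrast images normalized to a symmetric template.

# Hedged sketch of a flip-based laterality test: each voxel's contrast value is
# paired with the value at its mirror location, and the pairs are compared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
contrast = rng.normal(size=(61, 73, 61))          # hypothetical contrast image on an MNI-like grid
flipped = contrast[::-1, :, :]                    # left-right flip along the first (x) axis

mask = np.abs(contrast) > 1.0                     # toy definition of "active" voxels
t_stat, p_value = stats.ttest_rel(contrast[mask], flipped[mask])
print(f"laterality paired t-test: t = {t_stat:.2f}, p = {p_value:.3g}")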

13.
Wilson SM  Iacoboni M 《NeuroImage》2006,33(1):316-325
Neural responses to unfamiliar non-native phonemes varying in the extent to which they can be articulated were studied with functional magnetic resonance imaging (fMRI). Both superior temporal (auditory) and precentral (motor) areas were activated by passive speech perception, and both distinguished non-native from native phonemes, with greater signal change in response to non-native phonemes. Furthermore, speech-responsive motor regions and superior temporal sites were functionally connected. However, only in auditory areas did activity covary with the producibility of non-native phonemes. These data suggest that auditory areas are crucial for the transformation from acoustic signal to phonetic code, but the motor system also plays an active role, which may involve the internal generation of candidate phonemic categorizations. These 'motor' categorizations would then be compared to the acoustic input in auditory areas. The data suggest that speech perception is neither purely sensory nor motor, but rather a sensorimotor process.

14.
The advent of functional neuroimaging has allowed tremendous advances in our understanding of brain-language relationships, in addition to generating substantial empirical data on this subject in the form of thousands of activation peak coordinates reported in a decade of language studies. We performed a large-scale meta-analysis of this literature, aimed at defining the composition of the phonological, semantic, and sentence processing networks in the frontal, temporal, and inferior parietal regions of the left cerebral hemisphere. For each of these language components, activation peaks issued from relevant component-specific contrasts were submitted to a spatial clustering algorithm, which gathered activation peaks on the basis of their relative distance in the MNI space. From a sample of 730 activation peaks extracted from 129 scientific reports selected among 260, we isolated 30 activation clusters, defining the functional fields constituting three distributed networks of frontal and temporal areas and revealing the functional organization of the left hemisphere for language. The functional role of each activation cluster is discussed based on the nature of the tasks in which it was involved. This meta-analysis sheds light on several contemporary issues: the fine-scale functional architecture of the inferior frontal gyrus for phonological and semantic processing; the evidence for an elementary audio-motor loop involved in both comprehension and production of syllables, including the primary auditory areas and the motor mouth area; the evidence of areas of overlap between phonological and semantic processing, in particular at the location of the selective human voice area, which was the seat of partial overlap of the three language components; the evidence of a cortical area in the pars opercularis of the inferior frontal gyrus dedicated to syntactic processing and, in the posterior part of the superior temporal gyrus, of a region selectively activated by sentence and text processing; and the hypothesis that different working memory perception-action loops are identifiable for the different language components. These results argue for large-scale architecture networks rather than modular organization of language in the left hemisphere.
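
The peak-clustering step described here (grouping activation peaks by their relative distance in MNI space) can be sketched with an off-the-shelf hierarchical clustering routine. The coordinates and the 15-mm distance cut-off below are invented for illustration and are not the values or the exact algorithm used in the meta-analysis.

# Illustrative sketch: cluster reported activation-peak coordinates by mutual
# Euclidean distance with average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

peaks_mni = np.array([                # hypothetical peak coordinates (x, y, z) in mm
    [-48, 12, 20], [-50, 10, 24], [-44, 16, 18],   # an inferior frontal group
    [-58, -40, 4], [-60, -44, 8],                  # a posterior temporal group
    [-40, -52, 44],                                # an isolated parietal peak
])

Z = linkage(peaks_mni, method="average", metric="euclidean")
labels = fcluster(Z, t=15.0, criterion="distance")   # cut the tree at an assumed 15 mm

for cluster_id in np.unique(labels):
    members = peaks_mni[labels == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} peaks, centroid {members.mean(axis=0)}")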

15.
Activation maps of 16 professional classical singers were evaluated during overt singing and imagined singing of an Italian aria utilizing a sparse sampling functional magnetic resonance imaging (fMRI) technique. Overt singing involved bilateral primary and secondary sensorimotor and auditory cortices but also areas associated with speech and language production. Activation magnitude within the gyri of Heschl (A1) was comparable in both hemispheres. Subcortical motor areas (cerebellum, thalamus, medulla and basal ganglia) were active too. Areas associated with emotional processing (anterior cingulate cortex, anterior insula) showed slight activation. Cerebral activation sites during imagined singing were centered on fronto-parietal areas and involved primary and secondary sensorimotor areas in both hemispheres. Areas processing emotions showed intense activation (ACC and bilateral insula, hippocampus and anterior temporal poles, bilateral amygdala). Imagery showed no significant activation in A1. Overt minus imagined singing revealed increased activation in cortical (bilateral primary motor cortex; M1) and subcortical (right cerebellar hemisphere, medulla) motor areas as well as in sensory areas (primary somatosensory cortex, bilateral A1). Imagined minus overt singing showed enhanced activity in the medial Brodmann's area 6, the ventrolateral and medial prefrontal cortex (PFC), the anterior cingulate cortex and the inferior parietal lobe. Additionally, Wernicke's and Broca's areas and their homologues were increasingly active during imagery. We conclude that imagined and overt singing involves partly different brain systems in professional singers, with more prefrontal and limbic activation and a larger network of higher-order associative functions during imagery.

16.
Functional neuroimaging studies have demonstrated that mental rotation paradigms activate a network of spatially distributed cortical areas rather than a discrete brain region. Although the neuro-anatomical nodes of the rotation network are well established, their specific functional role is less well identified. It was the aim of the present study to dissociate network components involved in the visual perception of 3D cubic objects from regions involved in their mental spatial transformation. This was achieved by desynchronizing the time course of the perceptual process (i.e., stimulus duration) from the duration of the cognitive process (i.e., reaction times) and by comparing these with the temporal characteristics of the hemodynamic response functions (HRFs) in regions of interest. To minimize intersubject variability, an all-female subject group was chosen for this investigation. Time-resolved fMRI analysis revealed a significant increase in the full width at half maximum (FWHM) of the HRF with reaction times in the supplementary motor area (pre-SMA), in the bilateral premotor cortex (PMd-proper), and in the left parietal lobe (PP). The FWHM in visual system components such as the bilateral lateral occipital complex (LOC) and dorsal extrastriate visual areas (DE) was constant across trials and roughly equal to the stimulus duration. These findings suggest that visual system activation during mental rotation reflects visual perception and can be dissociated from other network components whose response characteristics indicate an involvement in the mental spatial transformation itself.
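
The FWHM measure that this study relates to reaction time can be computed directly from a sampled hemodynamic response. The double-gamma response shape and the 0.1-s sampling grid below are assumptions used only to make the sketch self-contained; they are not the study's estimated HRFs.

# Small sketch: measure the full width at half maximum (FWHM) of a sampled HRF.
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 0.1)                               # time (s), assumed sampling grid
hrf = gamma.pdf(t, a=6) - 0.35 * gamma.pdf(t, a=16)     # canonical-style double-gamma shape

def fwhm(time, response):
    """Width of the interval where `response` exceeds half of its peak value."""
    half_max = response.max() / 2.0
    above = np.where(response >= half_max)[0]
    return time[above[-1]] - time[above[0]]

print(f"FWHM of the sampled HRF: {fwhm(t, hrf):.2f} s")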

17.
蒋玉尔  林枫  江钟立 《中国康复》(Chinese Journal of Rehabilitation) 2020, 35(12): 619-624
Objective: To explore how the semantic category of pictures affects naming, and to study the effects of naming living versus non-living pictures on the spatiotemporal pattern of normal brain activity. Methods: Ten healthy right-handed adults were enrolled, and twenty pictures from two semantic categories (living and non-living) were selected. Magnetic resonance imaging was used to obtain individual brain structure, and magnetoencephalography (MEG) was used to record whole-brain activity during the task. Results: During picture naming, early activation began in the bilateral occipital lobes, spread gradually to the temporal and parietal lobes, and finally frontal activation produced speech. Within the vision-related time window, naming living pictures elicited significantly weaker activation in the right cingulate gyrus than naming non-living pictures (P=0.0475), but significantly stronger activation in bilateral occipital, left parietal, and left frontal regions (all P<0.05). Within the semantics-related time window, naming living pictures produced significantly stronger activation in the bilateral frontal lobes than naming non-living pictures (P<0.05). Within the phonology-related time window, naming living pictures produced significantly stronger activation in bilateral occipital, left parietal, and left frontal regions (all P<0.05). Conclusion: Compared with non-living pictures, naming living pictures showed no significant difference in temporal lobe activity but showed preferential activation of occipital, parietal, and frontal regions, suggesting that living pictures more effectively engage the relevant functional brain regions within the corresponding time windows.

18.
Rhythm is an essential element of human culture, particularly in language and music. To acquire language or music, we have to perceive the sensory inputs, organize them into structured sequences as rhythms, actively hold the rhythm information in mind, and use the information when we reproduce or mimic the same rhythm. Previous brain imaging studies have elucidated brain regions related to the perception and production of rhythms. However, the neural substrates involved in the working memory of rhythm remain unclear. In addition, little is known about the processing of rhythm information from non-auditory inputs (visual or tactile). Therefore, we measured brain activity by functional magnetic resonance imaging while healthy subjects memorized and reproduced auditory and visual rhythmic information. The inferior parietal lobule, inferior frontal gyrus, supplementary motor area, and cerebellum exhibited significant activations during both encoding and retrieving rhythm information. In addition, most of these areas exhibited significant activation also during the maintenance of rhythm information. All of these regions functioned in the processing of auditory and visual rhythms. The bilateral inferior parietal lobule, inferior frontal gyrus, supplementary motor area, and cerebellum are thought to be essential for motor control. When we listen to a certain rhythm, we are often stimulated to move our body, which suggests the existence of a strong interaction between rhythm processing and the motor system. Here, we propose that rhythm information may be represented and retained as information about bodily movements in the supra-modal motor brain system.

19.
Human functional MRI studies frequently reveal the joint activation of parietal and of lateral and mesial frontal areas during various cognitive tasks. To analyze the geometrical organization of those networks, we used an automatized clustering algorithm that parcels out sets of areas based on their similar profile of task-related activations or deactivations. This algorithm allowed us to reanalyze published fMRI data (Simon, O., Mangin, J.F., Cohen, L., Le Bihan, D., Dehaene, S., 2002. Topographical layout of hand, eye, calculation, and language-related areas in the human parietal lobe. Neuron 33, 475-487) and to reproduce the previously observed geometrical organization of activations for saccades, attention, grasping, pointing, calculation, and language processing in the parietal lobe. Further, we show that this organization extends to lateral and mesial prefrontal regions. Relative to the parietal lobe, the prefrontal functional geometry is characterized by a partially symmetrical anteroposterior ordering of activations, a decreased representation of effector-specific tasks, and a greater emphasis on higher cognitive functions of attention, higher-order spatial representation, calculation, and language. Anatomically, our results in humans are closely homologous to the known connectivity of parietal and frontal regions in the macaque monkey.
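
The profile-based parcellation idea described above (grouping areas by the similarity of their task-related activation or deactivation profiles) can be sketched with a simple k-means clustering of region-by-task signal changes. The region names, task labels, profile values, and the choice of two clusters are all invented for illustration; they do not reproduce the study's algorithm or data.

# Hedged sketch: cluster regions by their task-activation profiles with k-means.
import numpy as np
from scipy.cluster.vq import kmeans2

regions = ["IPS", "SPL", "FEF", "preSMA", "IFG", "angular_gyrus"]
# Rows: regions; columns: made-up task-related signal change for
# [saccades, attention, grasping, calculation, language].
profiles = np.array([
    [0.8, 0.9, 0.3, 0.6, 0.1],
    [0.7, 0.8, 0.5, 0.5, 0.0],
    [0.9, 0.7, 0.2, 0.3, 0.1],
    [0.4, 0.5, 0.2, 0.7, 0.4],
    [0.1, 0.2, 0.1, 0.6, 0.9],
    [0.0, 0.1, 0.0, 0.8, 0.7],
])

centroids, labels = kmeans2(profiles, k=2, minit="++", seed=0)
for region, label in zip(regions, labels):
    print(f"{region}: cluster {label}")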

20.
Objective: To use functional magnetic resonance imaging (fMRI) to compare human brain activity during cognitive processing of digits and Chinese characters, and to explore the precise localization of the brain regions involved and the possible underlying mechanisms. Methods: Twelve healthy right-handed volunteers (5 male, 7 female) whose native language was Chinese performed viewing and silent-reading tasks with digits and Chinese characters. Brain data were acquired on an MRI scanner using the BOLD method and post-processed with SPM99 and other software to obtain functional activation maps. Results: Viewing digits and Chinese characters both activated bilateral extrastriate visual areas, the superior occipital gyrus, the parietal lobe, and small portions of the inferior frontal gyrus; silent reading of digits and Chinese characters produced more prominent activation in the superior and middle frontal gyri; processing Chinese characters activated a larger frontal area than processing digits; and the lentiform nucleus, thalamus, and cerebellum were also found to participate in the cognitive processing of Chinese characters. Conclusion: Cognitive processing of digits and Chinese characters activates different brain regions, with the frontal lobe playing a more important role in processing Chinese characters. fMRI is a highly effective method for studying language function in the living human brain.
