Similar Articles
20 similar articles found
1.
Evoked magnetic fields were recorded from 18 adult volunteers using magnetoencephalography (MEG) during perception of speech stimuli (the endpoints of a voice onset time (VOT) series ranging from /ga/ to /ka/), analogous nonspeech stimuli (the endpoints of a two-tone series varying in relative tone onset time (TOT)), and a set of harmonically complex tones varying in pitch. During the early time window (approximately 60 to 130 ms post-stimulus onset), activation of the primary auditory cortex was bilaterally equal in strength for all three tasks. During the middle (approximately 130 to 800 ms) and late (800 to 1400 ms) time windows of the VOT task, activation of the posterior portion of the superior temporal gyrus (STGp) was greater in the left hemisphere than in the right hemisphere, in both group and individual data. These asymmetries were not evident in response to the nonspeech stimuli. Hemispheric asymmetries in a measure of neurophysiological activity in STGp, which includes the supratemporal plane and cortex inside the superior temporal sulcus, may reflect a specialization of association auditory cortex in the left hemisphere for processing speech sounds. Differences in late activation patterns potentially reflect the operation of a postperceptual process (e.g., rehearsal in working memory) that is restricted to speech stimuli.

2.
Specht K, Reul J. NeuroImage 2003;20(4):1944-1954.
With this study, we explored the blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different classes. To address this issue, we performed an attentive-listening event-related fMRI study in which subjects were required to concentrate during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the observed differences between stimulus types were interpreted as an automatic effect not driven by attention. We used three types of stimuli: tones, sounds of animals and instruments, and words. In all cases we found bilateral activation of the primary and secondary auditory cortex, whose strength and lateralization depended on the type of stimulus. The tone trials led to the weakest and smallest activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, and the perception of words led to the highest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within the left superior temporal sulcus we were able to distinguish different subsystems, showing activation extending from posterior to anterior for speech and speech-like information. Whereas posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only to the perception of speech. In summary, a functional segregation of the temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization of verbal stimuli to the left and of sounds to the right was already detectable when short stimuli were used.

3.
The key question in understanding the nature of speech perception is whether the human brain has unique speech-specific mechanisms or treats all sounds equally. We assessed possible differences between the processing of speech and complex nonspeech sounds in the two cerebral hemispheres by measuring the magnetic equivalent of the mismatch negativity, the brain's automatic change-detection response, which was elicited by speech sounds and by similarly complex nonspeech sounds with either fast or slow acoustic transitions. Our results suggest that the right hemisphere is predominant in the perception of slow acoustic transitions, whereas neither hemisphere clearly dominates the discrimination of nonspeech sounds with fast acoustic transitions. In contrast, the perception of speech stimuli with similarly rapid acoustic transitions was dominated by the left hemisphere, which may be explained by the presence of acoustic templates (long-term memory traces) for speech sounds formed in this hemisphere.

4.
It is commonly assumed that, in the cochlea and the brainstem, the auditory system processes speech sounds without differentiating them from any other sounds. At some stage, however, it must treat speech sounds and nonspeech sounds differently, since we perceive them as different. The purpose of this study was to delimit the first location in the auditory pathway that makes this distinction using functional MRI, by identifying regions that are differentially sensitive to the internal structure of speech sounds as opposed to closely matched control sounds. We analyzed data from nine right-handed volunteers who were scanned while listening to natural and synthetic vowels, or to nonspeech stimuli matched to the vowel sounds in terms of their long-term energy and both their spectral and temporal profiles. The vowels produced more activation than nonspeech sounds in a bilateral region of the superior temporal sulcus, lateral and inferior to regions of auditory cortex that were activated by both vowels and nonspeech stimuli. The results suggest that the perception of vowel sounds is compatible with a hierarchical model of primate auditory processing in which early cortical stages of processing respond indiscriminately to speech and nonspeech sounds, and only higher regions, beyond anatomically defined auditory cortex, show selectivity for speech sounds.

5.
Rimol LM, Specht K, Hugdahl K. NeuroImage 2006;30(2):554-562.
Previous neuroimaging studies have consistently reported bilateral activation to speech stimuli in the superior temporal gyrus (STG) and have identified an anteroventral stream of speech processing along the superior temporal sulcus (STS). However, little attention has been devoted to the possible confound of individual differences in hemispheric dominance for speech. The present study was designed to test for speech-selective activation while controlling for inter-individual variance in auditory laterality, by using only subjects with at least a 10% right ear advantage (REA) on the dichotic listening test. Eighteen right-handed, healthy male volunteers (median age 26) participated in the study. The stimuli were words, syllables, and sine wave tones (220-2600 Hz), presented in a block design. Comparing words > tones and syllables > tones yielded activation in the left posterior middle temporal gyrus (MTG) and the lateral STG (upper bank of the STS). In the right temporal lobe, the activation was located in the MTG/STS (lower bank). Comparing left and right temporal lobe cluster sizes from the words > tones and syllables > tones contrasts at the single-subject level demonstrated a statistically significant left lateralization for speech sound processing in the STS/MTG area. The asymmetry analyses suggest that dichotic listening may be a suitable method for selecting a homogeneous group of subjects with respect to left hemisphere language dominance.
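For reference, the right ear advantage used as the inclusion criterion above is commonly expressed as a percentage laterality score computed from the correct reports for each ear in the dichotic listening task; the abstract does not state the exact formula, so the standard form is shown here as an assumption:

\[ \mathrm{REA} = \frac{R_c - L_c}{R_c + L_c} \times 100\% , \]

where \(R_c\) and \(L_c\) are the numbers of correctly reported right- and left-ear items; on this convention, only subjects with REA of at least 10% were included.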

6.
Joanisse MF, Gati JS. NeuroImage 2003;19(1):64-79.
Speech perception involves recovering the phonetic form of speech from a dynamic auditory signal containing both time-varying and steady-state cues. We examined the roles of inferior frontal and superior temporal cortex in processing these aspects of auditory speech and nonspeech signals. Event-related functional magnetic resonance imaging was used to record activation in superior temporal gyrus (STG) and inferior frontal gyrus (IFG) while participants discriminated pairs of either speech syllables or nonspeech tones. Speech stimuli differed in either the consonant or the vowel portion of the syllable, whereas the nonspeech signals consisted of sinewave tones differing along either a dynamic or a spectral dimension. Analyses failed to identify regions of activation that clearly contrasted the speech and nonspeech conditions. However, we did identify regions in the posterior portion of left and right STG and left IFG yielding greater activation for both speech and nonspeech conditions that involved rapid temporal discrimination, compared to speech and nonspeech conditions involving spectral discrimination. The results suggest that, when semantic and lexical factors are adequately ruled out, there is significant overlap in the brain regions involved in processing the rapid temporal characteristics of both speech and nonspeech signals.

7.
Kim JS, Chung CK. NeuroImage 2008;42(4):1499-1507.
Some patients with epilepsy have difficulty performing complex language tasks due to the long duration of the disease and the cognitive side effects of antiepileptic drugs. A simple passive paradigm would therefore be useful for determining language dominance lateralization in epilepsy patients. The goal of this study was to develop an efficient and non-invasive analysis method for determining language dominance in epilepsy patients. To this end, magnetoencephalography was performed while an auditory stimulus sequence composed of two one-syllable spoken words was presented to 17 subjects in an oddball paradigm without subject response. The time-frequency difference between deviant and standard sounds was then analyzed in source space using a spatial filtering method based on minimum-norm estimation. The laterality index was estimated in language-related regions of interest (ROIs). The results were compared with the traditional lateralization method, the Wada test. Beta band oscillatory activity decreased during deviant stimulation, and the lateralization of the decrease was in good agreement with the Wada test: in the posterior part of the inferior frontal gyrus in 94% of the subjects and in the posterior part of the superior temporal gyrus in 71% of the subjects. In conclusion, the ROI-based time-frequency difference between deviant and standard sounds can be used to assess language lateralization in accordance with the Wada test.
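As a rough illustration of the kind of analysis described above, the sketch below computes a beta-band power envelope from ROI source time courses and a laterality index from the deviant-minus-standard effect in homologous left and right ROIs. This is a minimal Python sketch under assumed data shapes and a standard (L - R)/(L + R) index; it is not the authors' spatial-filtering pipeline, and the sampling rate, window, and variable names are placeholders.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0  # sampling rate in Hz (assumed)

def beta_power(trials):
    """Mean beta-band (13-30 Hz) power envelope across trials.
    trials: array of shape (n_trials, n_samples) of ROI source activity."""
    b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return (np.abs(hilbert(filtered, axis=-1)) ** 2).mean(axis=0)

def laterality_index(left_effect, right_effect):
    """LI in [-1, 1]; positive values indicate left dominance."""
    L, R = abs(left_effect), abs(right_effect)
    return (L - R) / (L + R)

# Deviant-minus-standard beta power change in each hemisphere's ROI,
# averaged over a post-stimulus window `win` (illustrative usage):
# left_effect = (beta_power(dev_left) - beta_power(std_left))[win].mean()
# right_effect = (beta_power(dev_right) - beta_power(std_right))[win].mean()
# li = laterality_index(left_effect, right_effect)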

8.
The analysis of auditory deviant events outside the focus of attention is a fundamental capacity of human information processing and has been studied in experiments on mismatch negativity (MMN) and the P3a component in evoked potential research. However, the generators contributing to these components are still under discussion. Here we assessed cortical blood flow responses to auditory stimulation in three conditions. Six healthy subjects were presented with standard tones, frequency deviant tones (MMN condition), and complex novel sounds (Novelty condition), while attention was directed to a nondemanding visual task. Analysis of the MMN condition contrasted with the standard condition revealed blood flow changes in the left and right superior temporal gyrus, the right superior temporal sulcus, and the left inferior frontal gyrus. Complex novel sounds contrasted with the standard condition activated the left superior temporal gyrus and the left inferior and middle frontal gyrus. A small subcortical activation emerged in the left parahippocampal gyrus, and an extended activation was found covering the right superior temporal gyrus. Novel sounds activated the right inferior frontal gyrus when controlling for deviance probability. In contrast to previous studies, our results indicate a left hemisphere contribution to a frontotemporal network of auditory deviance processing. Our results provide further evidence for a contribution of the frontal cortex to the processing of auditory deviance outside the focus of directed attention.

9.
The role of attention in speech comprehension is not well understood. We used fMRI to study the neural correlates of auditory word, pseudoword, and nonspeech (spectrally rotated speech) perception during a bimodal (auditory, visual) selective attention task. In three conditions, Attend Auditory (ignore visual), Ignore Auditory (attend visual), and Visual (no auditory stimulation), 28 subjects performed a one-back matching task in the assigned attended modality. The visual task, attending to rapidly presented Japanese characters, was designed to be highly demanding in order to prevent attention to the simultaneously presented auditory stimuli. Regardless of stimulus type, attention to the auditory channel enhanced activation by the auditory stimuli (Attend Auditory > Ignore Auditory) in bilateral posterior superior temporal regions and left inferior frontal cortex. Across attentional conditions, there were main effects of speech processing (word + pseudoword > rotated speech) in left orbitofrontal cortex and several posterior right hemisphere regions, though these areas also showed strong interactions with attention (larger speech effects in the Attend Auditory than in the Ignore Auditory condition) and no significant speech effects in the Ignore Auditory condition. Several other regions, including the postcentral gyri, left supramarginal gyrus, and temporal lobes bilaterally, showed similar interactions due to the presence of speech effects only in the Attend Auditory condition. Main effects of lexicality (word > pseudoword) were isolated to a small region of the left lateral prefrontal cortex. Examination of this region showed significant word > pseudoword activation only in the Attend Auditory condition. Several other brain regions, including left ventromedial frontal lobe, left dorsal prefrontal cortex, and left middle temporal gyrus, showed Attention × Lexicality interactions due to the presence of lexical activation only in the Attend Auditory condition. These results support a model in which neutral speech presented in an unattended sensory channel undergoes relatively little processing beyond the early perceptual level. Specifically, processing of phonetic and lexical-semantic information appears to be very limited in such circumstances, consistent with prior behavioral studies.

10.
The high degree of intersubject structural variability in the human brain is an obstacle in combining data across subjects in functional neuroimaging experiments. A common method for aligning individual data is normalization into standard 3D stereotaxic space. Since the inherent geometry of the cortex is that of a 2D sheet, higher precision can potentially be achieved if the intersubject alignment is based on landmarks in this 2D space. To examine the potential advantage of surface-based alignment for localization of auditory cortex activation, and to obtain high-resolution maps of areas activated by speech sounds, fMRI data were analyzed from the left hemisphere of subjects tested with phoneme and tone discrimination tasks. We compared Talairach stereotaxic normalization with two surface-based methods: Landmark Based Warping, in which landmarks in the auditory cortex were chosen manually, and Automated Spherical Warping, in which hemispheres were aligned automatically based on spherical representations of individual and average brains. Examination of group maps generated with these alignment methods revealed superiority of the surface-based alignment in providing precise localization of functional foci and in avoiding mis-registration due to intersubject anatomical variability. Human left hemisphere cortical areas engaged in complex auditory perception appear to lie on the superior temporal gyrus, the dorsal bank of the superior temporal sulcus, and the lateral third of Heschl's gyrus.

11.
Productive and perceptive language reorganization in temporal lobe epilepsy (total citations: 6; self-citations: 0; citations by others: 6)
The aim of this work was to determine whether productive and perceptive language functions are differentially affected in homogeneous groups of epilepsy patients with right and left temporal lobe epilepsy (TLE). Eighteen patients with left TLE, 18 with right TLE, and 17 healthy volunteers were studied using fMRI during performance of three tasks assessing the productive and perceptive aspects of language (covert semantic verbal fluency, covert sentence repetition, and story listening). Hemispheric dominance for language was calculated in the frontal and temporal regions using laterality indices (LI). Atypical lateralization was defined as a right-sided LI (LI < -0.20) in the frontal lobes during the verbal fluency task or in the temporal lobes during the story listening task. Control subjects and right TLE patients demonstrated a strong left lateralization for language in the frontal lobes during the fluency task, whereas activation was less lateralized to the left hemisphere in left TLE patients, although the difference did not reach significance. In the story listening and repetition tasks, activation was significantly more right-sided in the temporal lobes of patients with left TLE. Atypical language representation was found in 19% of TLE patients (five left and two right TLE). The shift toward the right hemisphere was significantly larger in the temporal than in the frontal lobes in patients with atypical language lateralization compared with TLE patients with typical language lateralization. Neuropsychological performance of patients with atypical language patterns was better than that of patients with typical patterns, suggesting that this reorganization may represent a compensatory mechanism.
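The laterality index used above is not spelled out in the abstract; a common convention, shown here as an assumption, contrasts a left- and right-hemisphere activation measure (e.g., suprathreshold voxel counts or summed statistics within homologous regions):

\[ \mathrm{LI} = \frac{A_L - A_R}{A_L + A_R} , \]

so that LI ranges from -1 (fully right-lateralized) to +1 (fully left-lateralized), and the study's criterion LI < -0.20 flags atypical, right-sided dominance.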

12.
Walter H, Vasic N, Höse A, Spitzer M, Wolf RC. NeuroImage 2007;35(4):1551-1561.
Studies on working memory (WM) dysfunction in schizophrenia have reported several functionally aberrant brain areas, including the lateral prefrontal cortex, superior temporal areas, and the striatum. However, less is known about the relationship between WM dysfunction, cerebral activation, task accuracy, and diagnostic specificity. Using a novel WM task and event-related functional magnetic resonance imaging (fMRI), we studied healthy control subjects (n=17) and partially remitted, medicated inpatients meeting DSM-IV criteria for schizophrenia (n=19) and major depressive disorder (n=12). Because of the event-related technique, we could exclude incorrectly performed trials, thus controlling for accuracy-related activation confounds. Compared with controls, patients with schizophrenia showed less activation in frontoparietal and subcortical regions at high cognitive load levels. Compared with patients with depression, schizophrenic patients showed less prefrontal activation in the left inferior frontal cortex and right cerebellum. In patients with schizophrenia, a lack of deactivation of the superior temporal cortex was found compared with both healthy controls and patients with depression. Thus, we could not confirm previous findings of impaired lateral prefrontal activation during WM performance in schizophrenic patients after excluding incorrectly performed or omitted trials from our functional analysis. However, superior temporal cortex dysfunction in patients with schizophrenia may be regarded as a schizophrenia-specific finding in terms of psychiatric diagnostic specificity.

13.
The neural substrates underlying speech perception are still not well understood. Previously, we found dissociation of speech and nonspeech processing at the earliest cortical level (AI), using speech and nonspeech complexity dimensions. Acoustic differences between speech and nonspeech stimuli in imaging studies, however, confound the search for linguistic-phonetic regions. Here, we used sinewave speech (SWsp) and nonspeech (SWnon), which replace speech formants with sinewave tones, in order to match acoustic spectral and temporal complexity while contrasting phonetics. Chord progressions (CP) were used to remove the effects of auditory coherence and object processing. Twelve normal right-handed volunteers were scanned with fMRI while listening to SWsp, SWnon, CP, and a baseline condition arranged in blocks. Only two brain regions, in the bilateral superior temporal sulcus, extending more posteriorly on the left, were found to prefer the SWsp condition after accounting for acoustic modulation and coherence effects. Two regions responded preferentially to the more frequency-modulated stimuli, including one that overlapped the right temporal phonetic area and another in the left angular gyrus far from the phonetic area. These findings are proposed to form the basis for the two subtypes of auditory word deafness. Several brain regions, including auditory and non-auditory areas, preferred the coherent auditory stimuli and are likely involved in auditory object recognition. The design of the current study allowed for separation of acoustic spectrotemporal, object recognition, and phonetic effects, resulting in distinct and overlapping components.
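To make the sinewave-speech manipulation concrete, the toy Python sketch below synthesizes a speech-like signal by replacing three formants with time-varying sinusoids. The formant tracks, sample rate, and equal amplitudes are made-up placeholders; real SWS stimuli are built from formant frequencies and amplitudes measured from natural utterances, and this is not the authors' stimulus-generation code.

import numpy as np

fs = 16000                      # sample rate in Hz (assumed)
dur = 0.5                       # stimulus duration in seconds
t = np.arange(int(fs * dur)) / fs

# Hypothetical formant tracks (Hz) for a rising /ba/-like transition.
f1 = np.linspace(400, 700, t.size)
f2 = np.linspace(1000, 1200, t.size)
f3 = np.linspace(2400, 2500, t.size)

def tone(freq_track):
    # Integrate the instantaneous frequency to obtain the phase of a
    # frequency-modulated sinusoid that follows the formant track.
    phase = 2 * np.pi * np.cumsum(freq_track) / fs
    return np.sin(phase)

# Sinewave-speech replica: one sinusoid per formant, summed and scaled.
sws = (tone(f1) + tone(f2) + tone(f3)) / 3.0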

14.
Schizophrenia is associated with language-related dysfunction. A previous study [Schizophr. Res. 59 (2003c) 159] has shown that this abnormality is present at the level of automatic discrimination of change in speech sounds, as revealed by magnetoencephalographic recording of the auditory mismatch field in response to across-category change in vowels. Here, we investigated the neuroanatomical substrate for this physiological abnormality. Thirteen patients with schizophrenia and 19 matched control subjects were examined using magnetoencephalography (MEG) and high-resolution magnetic resonance imaging (MRI) to evaluate both mismatch field strengths in response to change between the vowels /a/ and /o/, and gray matter volumes of Heschl's gyrus (HG) and the planum temporale (PT). The magnetic global field power of the mismatch response to change in phonemes showed a bilateral reduction in patients with schizophrenia. The gray matter volume of the left planum temporale, but not the right planum temporale or bilateral Heschl's gyrus, was significantly smaller in patients with schizophrenia compared with that in control subjects. Furthermore, the phonetic mismatch strength in the left hemisphere was significantly correlated with left planum temporale gray matter volume in patients with schizophrenia only. These results suggest that structural abnormalities of the planum temporale may underlie the functional abnormalities of fundamental language-related processing in schizophrenia.
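For readers unfamiliar with the measure, global field power summarizes response strength across sensors at each time point. The standard definition, shown here (the study may use a minor variant, e.g., an RMS over a sensor subset), is the spatial standard deviation of the field values:

\[ \mathrm{GFP}(t) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(B_i(t) - \bar{B}(t)\bigr)^2} , \]

where \(B_i(t)\) is the field at sensor \(i\) and \(\bar{B}(t)\) is the mean over the \(N\) sensors at time \(t\).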

15.
The left superior temporal cortex shows greater responsiveness to speech than to non-speech sounds according to previous neuroimaging studies, suggesting that this brain region has a special role in speech processing. However, since speech sounds differ acoustically from the non-speech sounds, it is possible that this region is not involved in speech perception per se, but rather in processing of some complex acoustic features. "Sine wave speech" (SWS) provides a tool to study neural speech specificity using identical acoustic stimuli, which can be perceived either as speech or non-speech, depending on previous experience of the stimuli. We scanned 21 subjects using 3T functional MRI in two sessions, both including SWS and control stimuli. In the pre-training session, all subjects perceived the SWS stimuli as non-speech. In the post-training session, the identical stimuli were perceived as speech by 16 subjects. In these subjects, SWS stimuli elicited significantly stronger activity within the left posterior superior temporal sulcus (STSp) in the post- vs. pre-training session. In contrast, activity in this region was not enhanced after training in 5 subjects who did not perceive SWS stimuli as speech. Moreover, the control stimuli, which were always perceived as non-speech, elicited similar activity in this region in both sessions. Altogether, the present findings suggest that activation of the neural speech representations in the left STSp might be a pre-requisite for hearing sounds as speech.

16.
Rimol LM, Specht K, Weis S, Savoy R, Hugdahl K. NeuroImage 2005;26(4):1059-1067.
The objective of this study was to investigate phonological processing in the brain by using sub-syllabic speech units with rapidly changing frequency spectra. We used isolated stop consonants extracted from natural speech consonant-vowel (CV) syllables, which were digitized and presented through headphones in a functional magnetic resonance imaging (fMRI) paradigm. The stop consonants were contrasted with CV syllables. In order to control for general auditory activation, we used duration- and intensity-matched noise as a third stimulus category. The subjects were seventeen right-handed, healthy male volunteers. BOLD activation responses were acquired on a 1.5-T MR scanner. The auditory stimuli were presented through MR-compatible headphones, using an fMRI paradigm with clustered volume acquisition and a 12-s repetition time. The consonant vs. noise comparison resulted in unilateral, left-lateralized activation in the posterior part of the middle temporal gyrus and superior temporal sulcus (MTG/STS). The CV syllable vs. noise comparison resulted in bilateral activation in the same regions, with a leftward asymmetry. The reverse comparisons, i.e., noise vs. speech stimuli, resulted in right hemisphere activation in the supramarginal and superior temporal gyri, as well as right prefrontal activation. Since the consonant stimuli are unlikely to have activated a semantic-lexical processing system, it seems reasonable to assume that the MTG/STS activation represents phonetic/phonological processing. This may involve the processing of both spectral and temporal features considered important for phonetic encoding.

17.
Smooth pursuit eye movements (SPEM) are necessary to follow slowly moving targets while maintaining foveal fixation. In about 50% of schizophrenic patients, SPEM velocity is reduced. In this study we were interested in identifying the cortical mechanisms associated with extraretinal processing of SPEM in schizophrenic patients. During condition A, patients and healthy subjects had to pursue a constantly visible target (10 degrees/s). During condition B, the target was blanked out for 1000 ms while subjects were instructed to continue SPEM. Eye movement data were assessed during scanning sessions with a limbus tracker. During condition A, reduced SPEM velocity in patients was associated with reduced activation of the right ventral premotor cortex and increased activation of the left dorsolateral prefrontal cortex, the right thalamus, and Crus II of the left cerebellar hemisphere. During condition B, SPEM velocity was reduced to a similar extent in both groups. While in patients a decrease in activation was observed in right cerebellar area VIIIA, activation of the right anterior cingulate, the right superior temporal cortex, and the bilateral frontal eye fields was increased. The results imply that schizophrenic patients employ different strategies than healthy subjects during SPEM, both with and without target blanking. These strategies predominantly involve extraretinal mechanisms.

18.
Functional brain imaging studies of working memory (WM) in schizophrenia have yielded inconsistent results regarding deficits in the dorsolateral prefrontal (DLPFC) and parietal cortices. In spite of its potential importance in schizophrenia, there have been few investigations of WM deficits using auditory stimuli, and no functional imaging studies have attempted to relate brain activation during auditory WM to positive and negative symptoms of schizophrenia. We used a two-back auditory WM paradigm in a functional MRI study of men with schizophrenia (N = 11) and controls (N = 13). Region of interest analysis was used to investigate group differences in activation as well as correlations with symptom scores from the Brief Psychiatric Rating Scale. Patients with schizophrenia performed significantly worse and were slower than control subjects in the WM task. Patients also showed decreased lateralization of activation and significant WM-related activation deficits in the left and right DLPFC, frontal operculum, inferior parietal, and superior parietal cortex, but not in the anterior cingulate or superior temporal gyrus. These results indicate that, in addition to the prefrontal cortex, parietal cortex function is also disrupted during WM in schizophrenia. Withdrawal-retardation symptom scores were inversely correlated with frontal operculum activation. Thinking disturbance symptom scores were inversely correlated with right DLPFC activation. Our findings suggest an association between thinking disturbance symptoms, particularly unusual thought content, and disrupted WM processing in schizophrenia.

19.
Attention deficits have been consistently described in schizophrenia. Functional neuroimaging and electrophysiological studies have focused on anterior cingulate cortex (ACC) dysfunction as a possible mediator. However, recent basic research has suggested that the effect of attention is also observed as a relative amplification of activity in modality-associated cortical areas. In the present study, we addressed the question of whether an amplification deficit is seen in the auditory cortex of schizophrenic patients during an attention-requiring choice reaction task. Twenty-one drug-free schizophrenic patients and 21 age- and sex-matched healthy controls were studied (32-channel EEG). The underlying generators of the event-related N1 component were separated in neuroanatomic space using a minimum-norm (LORETA) and a multiple dipole (BESA) approach. Both methods revealed activation in the primary auditory cortex (peak latency approximately 100 ms) and in the area of the ACC (peak latency approximately 130 ms). In addition, the adapted multiple dipole model also showed a temporal-radial source activation in nonprimary auditory areas (peak latency approximately 140 ms). In schizophrenic patients, significant activation deficits were found in the ACC as well as in the left nonprimary auditory areas, and these deficits differentially correlated with negative and positive symptoms. The results suggest (1) that the source in the nonprimary auditory cortex is detected only with a multiple dipole approach and (2) that the N1 generators in the ACC and in the nonprimary auditory cortex are dysfunctional in schizophrenia. This would be in line with the notion that attention deficits in schizophrenia involve an extended cortical network.
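As background for the source-separation step above, a regularized minimum-norm estimate maps the measured sensor vector onto distributed sources via the leadfield; LORETA is a weighted variant of this idea that additionally imposes spatial smoothness through a discrete Laplacian. In generic notation (not taken from the study),

\[ \hat{\mathbf{j}}(t) = \mathbf{G}^{\top}\left(\mathbf{G}\mathbf{G}^{\top} + \lambda\mathbf{I}\right)^{-1}\mathbf{m}(t) , \]

where \(\mathbf{m}(t)\) is the vector of sensor measurements, \(\mathbf{G}\) the leadfield (gain) matrix, \(\lambda\) a regularization parameter, and \(\hat{\mathbf{j}}(t)\) the estimated source currents.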

20.
Thought disorder is a symptom of schizophrenia expressed as disorganized or incoherent speech. Severity of thought disorder correlates with decreased left superior temporal gyrus grey matter volume and with cortical activation in posterior temporal regions during the performance of language tasks. The goal of this study was to determine whether language-related activation mediates the association between thought disorder and left superior temporal lobe grey matter volume. Twelve patients with schizophrenia were assessed for thought disorder. fMRI images were acquired for each subject while they listened to English speech, along with a high-resolution structural image. Thought disorder was used as a covariate in the functional analysis to identify brain regions within which activation correlated with symptom severity. Voxel-based morphometry was used to calculate grey matter volume of the planum temporale. A mediation model was tested using a four-step multiple regression approach incorporating cortical volume, functional activation, and symptom severity (see the sketch below). Thought disorder correlated with activation in a single cluster within the left posterior middle temporal gyrus during listening to speech. Grey matter volume within the planum temporale correlated significantly with severity of thought disorder and with activation within the functional cluster. Regressing thought disorder on grey matter volume and BOLD response simultaneously led to a significant reduction in the correlation between grey matter volume and thought disorder. These results support the hypothesis that the association between decreased grey matter volume in the left planum temporale and severity of thought disorder is mediated by activation in the posterior temporal lobe during language processing.
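A minimal Python sketch of the kind of four-step regression mediation test described above (in the Baron and Kenny style), using made-up variable names and synthetic data; the study's actual covariates, software, and any formal test of the indirect effect (e.g., a Sobel or bootstrap test) are not specified in the abstract.

import numpy as np
import statsmodels.api as sm

# Hypothetical per-subject measures (n = 12, as in the study above):
# gm   - left planum temporale grey matter volume
# bold - BOLD response in the posterior temporal cluster
# td   - thought disorder severity
rng = np.random.default_rng(0)
gm = rng.normal(size=12)
bold = 0.8 * gm + rng.normal(scale=0.5, size=12)
td = -0.7 * bold + rng.normal(scale=0.5, size=12)

def fit(y, *xs):
    # Ordinary least squares with an intercept term.
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

step1 = fit(td, gm)          # path c:  predictor -> outcome
step2 = fit(bold, gm)        # path a:  predictor -> mediator
step3 = fit(td, gm, bold)    # paths c' and b: predictor and mediator together

# Mediation is suggested if gm predicts td (step 1) and bold (step 2),
# bold predicts td while controlling for gm (step 3), and the gm
# coefficient shrinks from step 1 (c) to step 3 (c').
print(step1.params[1], step3.params[1])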
