Similar Documents
20 similar documents were retrieved.
1.
Rimol LM  Specht K  Weis S  Savoy R  Hugdahl K 《NeuroImage》2005,26(4):1059-1067
The objective of this study was to investigate phonological processing in the brain by using sub-syllabic speech units with rapidly changing frequency spectra. We used isolated stop consonants extracted from natural speech consonant-vowel (CV) syllables, which were digitized and presented through headphones in a functional magnetic resonance imaging (fMRI) paradigm. The stop consonants were contrasted with CV syllables. In order to control for general auditory activation, we used duration- and intensity-matched noise as a third stimulus category. The subjects were seventeen right-handed, healthy male volunteers. BOLD activation responses were acquired on a 1.5-T MR scanner. The auditory stimuli were presented through MR compatible headphones, using an fMRI paradigm with clustered volume acquisition and 12 s repetition time. The consonant vs. noise comparison resulted in unilateral left lateralized activation in the posterior part of the middle temporal gyrus and superior temporal sulcus (MTG/STS). The CV syllable vs. noise comparison resulted in bilateral activation in the same regions, with a leftward asymmetry. The reversed comparisons, i.e., noise vs. speech stimuli, resulted in right hemisphere activation in the supramarginal and superior temporal gyrus, as well as right prefrontal activation. Since the consonant stimuli are unlikely to have activated a semantic-lexical processing system, it seems reasonable to assume that the MTG/STS activation represents phonetic/phonological processing. This may involve the processing of both spectral and temporal features considered important for phonetic encoding.
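For readers who want to see the shape of such an analysis, the following is a minimal sketch of a voxelwise general linear model contrast (consonant vs. noise) in plain NumPy. All data, condition labels, and effect sizes are simulated assumptions rather than values from the study, and a real analysis would also model the hemodynamic response and motion confounds.

```python
import numpy as np

# Hypothetical single-voxel example (names and numbers are illustrative only).
rng = np.random.default_rng(0)
n_scans = 120
conditions = rng.choice(["consonant", "cv_syllable", "noise"], size=n_scans)
y = rng.normal(size=n_scans) + 0.8 * (conditions == "consonant")

# Cell-means design matrix: one indicator column per stimulus condition.
X = np.column_stack([(conditions == c).astype(float)
                     for c in ("consonant", "cv_syllable", "noise")])

# Ordinary least squares and a [consonant - noise] contrast t statistic.
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n_scans - X.shape[1]
sigma2 = resid @ resid / dof
contrast = np.array([1.0, 0.0, -1.0])
se = np.sqrt(sigma2 * contrast @ np.linalg.inv(X.T @ X) @ contrast)
print(f"consonant - noise: t({dof}) = {(contrast @ beta) / se:.2f}")
```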

2.
Specht K  Reul J 《NeuroImage》2003,20(4):1944-1954
With this study, we explored the blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different classes. To address this issue, we performed an attentive-listening event-related fMRI study, in which subjects were required to concentrate during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the observed differences between the stimulus types were interpreted as an automatic effect, unaffected by attention. We used three types of stimuli: tones, sounds of animals and instruments, and words. In all cases we found bilateral activations of the primary and secondary auditory cortex, whose strength and lateralization depended on the type of stimulus. The tone trials led to the weakest and smallest activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, and the perception of words led to the highest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within the left superior temporal sulcus, we were able to distinguish different subsystems, with activation extending from posterior to anterior for speech and speech-like information. Whereas the posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only to the perception of speech. In summary, a functional segregation of the temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization of verbal stimuli to the left and of sounds to the right was already detectable when short stimuli were used.

3.
Adults and children processing music: an fMRI study
Koelsch S  Fritz T  Schulze K  Alsop D  Schlaug G 《NeuroImage》2005,25(4):1068-1076
The present study investigates the functional neuroanatomy of music perception with functional magnetic resonance imaging (fMRI). Three different subject groups were investigated to examine developmental aspects and effects of musical training: 10-year-old children with varying degrees of musical training, adults without formal musical training (nonmusicians), and adult musicians. Subjects made judgements on sequences that ended on chords that were music-syntactically either regular or irregular. In adults, irregular chords activated the inferior frontal gyrus, orbital frontolateral cortex, the anterior insula, ventrolateral premotor cortex, anterior and posterior areas of the superior temporal gyrus, the superior temporal sulcus, and the supramarginal gyrus. These structures presumably form different networks mediating cognitive aspects of music processing (such as processing of musical syntax and musical meaning, as well as auditory working memory), and possibly emotional aspects of music processing. In the right hemisphere, the activation pattern of children was similar to that of adults. In the left hemisphere, adults showed larger activations than children in prefrontal areas, in the supramarginal gyrus, and in temporal areas. In both adults and children, musical training was correlated with stronger activations in the frontal operculum and the anterior portion of the superior temporal gyrus.

4.
Dissociated brain organization for single-digit addition and multiplication
Zhou X  Chen C  Zang Y  Dong Q  Chen C  Qiao S  Gong Q 《NeuroImage》2007,35(2):871-880
This study compared the patterns of brain activation elicited by single-digit addition and multiplication problems. Twenty Chinese undergraduates were asked to verify whether arithmetic equations were true or false during functional magnetic resonance imaging. Results showed that both addition and multiplication were supported by a broad neural system that involved regions within the supplementary motor area (SMA), precentral gyrus, intraparietal sulcus, occipital gyri, superior temporal gyrus, and middle frontal gyrus, as well as some subcortical structures. Nevertheless, addition problems elicited more activation in the intraparietal sulcus and middle occipital gyri in the right hemisphere, and in the superior occipital gyri in both hemispheres, whereas multiplication elicited more activation in the precentral gyrus, SMA, and posterior and anterior superior temporal gyrus in the left hemisphere. This pattern of dissociated activation supports our hypothesis that addition has greater reliance on visuospatial processing and multiplication on verbal processing.

5.
Two major influences on how the brain processes music are maturational development and active musical training. Previous functional neuroimaging studies investigating music processing have typically focused on either categorical differences between "musicians versus nonmusicians" or "children versus adults." In the present study, we explored a cross-sectional data set (n=84) using multiple linear regression to isolate the performance-independent effects of age (5 to 33 years) and cumulative duration of musical training (0 to 21,000 practice hours) on fMRI activation similarities and differences between melodic discrimination (MD) and rhythmic discrimination (RD). Age-related effects common to MD and RD were present in three left hemisphere regions: temporofrontal junction, ventral premotor cortex, and the inferior part of the intraparietal sulcus, regions involved in active attending to auditory rhythms, sensorimotor integration, and working memory transformations of pitch and rhythmic patterns. By contrast, training-related effects common to MD and RD were localized to the posterior portion of the left superior temporal gyrus/planum temporale, an area implicated in spectrotemporal pattern matching and auditory-motor coordinate transformations. A single cluster in right superior temporal gyrus showed significantly greater activation during MD than RD. This is the first fMRI study to distinguish maturational from training effects during music processing.
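The performance-independent regression described above can be sketched as an ordinary least-squares model with age and cumulative practice hours entered together, so that each coefficient is adjusted for the other. The sketch below uses statsmodels on simulated data; the variable names and values are assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-subject data (n = 84): ROI activation plus the two predictors
# of interest; all values are simulated, not taken from the study.
rng = np.random.default_rng(1)
n = 84
df = pd.DataFrame({
    "age_years": rng.uniform(5, 33, n),
    "practice_hours": rng.uniform(0, 21000, n),
})
df["roi_activation"] = (0.02 * df["age_years"]
                        + 0.00001 * df["practice_hours"]
                        + rng.normal(scale=0.3, size=n))

# Fit both regressors simultaneously so each effect is adjusted for the other.
X = sm.add_constant(df[["age_years", "practice_hours"]])
model = sm.OLS(df["roi_activation"], X).fit()
print(model.summary())   # separate t tests for the age and training coefficients
```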

6.
Evoked magnetic fields were recorded from 18 adult volunteers using magnetoencephalography (MEG) during perception of speech stimuli (the endpoints of a voice onset time (VOT) series ranging from /ga/ to /ka/), analogous nonspeech stimuli (the endpoints of a two-tone series varying in relative tone onset time (TOT)), and a set of harmonically complex tones varying in pitch. During the early time window (approximately 60 to approximately 130 ms post-stimulus onset), activation of the primary auditory cortex was bilaterally equal in strength for all three tasks. During the middle (approximately 130 to 800 ms) and late (800 to 1400 ms) time windows of the VOT task, activation of the posterior portion of the superior temporal gyrus (STGp) was greater in the left hemisphere than in the right hemisphere, in both group and individual data. These asymmetries were not evident in response to the nonspeech stimuli. Hemispheric asymmetries in a measure of neurophysiological activity in STGp, which includes the supratemporal plane and cortex inside the superior temporal sulcus, may reflect a specialization of association auditory cortex in the left hemisphere for processing speech sounds. Differences in late activation patterns potentially reflect the operation of a postperceptual process (e.g., rehearsal in working memory) that is restricted to speech stimuli.
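A common way to quantify such hemispheric asymmetry is a laterality index of the form (L - R) / (L + R) computed on source amplitudes and tested against zero across subjects. The abstract does not give a formula, so the convention and the numbers below are assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical STGp source strengths (arbitrary units) for 18 subjects in the
# middle time window; the (L - R) / (L + R) index is a common convention, not
# a formula taken from the study.
rng = np.random.default_rng(2)
left = rng.normal(loc=12.0, scale=2.0, size=18)
right = rng.normal(loc=10.0, scale=2.0, size=18)

laterality_index = (left - right) / (left + right)   # > 0 means leftward asymmetry
t, p = stats.ttest_1samp(laterality_index, popmean=0.0)
print(f"mean LI = {laterality_index.mean():.3f}, t(17) = {t:.2f}, p = {p:.3g}")
```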

7.
Tong Y  Gandour J  Talavage T  Wong D  Dzemidzic M  Xu Y  Li X  Lowe M 《NeuroImage》2005,28(2):417-428
This study investigates the neural substrates underlying the perception of two sentence-level prosodic phenomena in Mandarin Chinese: contrastive stress (initial vs. final emphasis position) and intonation (declarative vs. interrogative modality). In an fMRI experiment, Chinese and English listeners were asked to selectively attend to either stress or intonation in paired 3-word sentences, and make speeded-response discrimination judgments. Between-group comparisons revealed that the Chinese group exhibited significantly greater activity in the left supramarginal gyrus and posterior middle temporal gyrus relative to the English group for both tasks. These same two regions showed a leftward asymmetry in the stress task for the Chinese group only. For both language groups, rightward asymmetries were observed in the middle portion of the middle frontal gyrus across tasks. All task effects involved greater activity for the stress task as compared to intonation. A left-sided task effect was observed in the posterior middle temporal gyrus for the Chinese group only. Both language groups exhibited a task effect bilaterally in the intraparietal sulcus. These findings support the emerging view that speech prosody perception involves a dynamic interplay among widely distributed regions not only within a single hemisphere but also between the two hemispheres. This model of speech prosody processing emphasizes the role of right hemisphere regions for complex-sound analysis, whereas task-dependent regions in the left hemisphere predominate when language processing is required.

8.
The analysis of auditory deviant events outside the focus of attention is a fundamental capacity of human information processing and has been studied in experiments on Mismatch Negativity (MMN) and the P3a component in evoked potential research. However, the generators contributing to these components are still under discussion. Here we assessed cortical blood flow responses to auditory stimulation in three conditions. Six healthy subjects were presented with standard tones, frequency deviant tones (MMN condition), and complex novel sounds (Novelty condition), while attention was directed to a nondemanding visual task. Analysis of the MMN condition contrasted with the standard condition revealed blood flow changes in the left and right superior temporal gyrus, right superior temporal sulcus, and left inferior frontal gyrus. Complex novel sounds contrasted with the standard condition activated the left superior temporal gyrus and the left inferior and middle frontal gyrus. A small subcortical activation emerged in the left parahippocampal gyrus and an extended activation was found covering the right superior temporal gyrus. Novel sounds activated the right inferior frontal gyrus when controlling for deviance probability. In contrast to previous studies, our results indicate a left hemisphere contribution to a frontotemporal network of auditory deviance processing. Our results provide further evidence for a contribution of the frontal cortex to the processing of auditory deviance outside the focus of directed attention.

9.
Considerable experimental evidence shows that functional cerebral asymmetries are widespread in animals. Activity of the right cerebral hemisphere has been associated with responses to novel stimuli and the expression of intense emotions, such as aggression, escape behaviour and fear. The left hemisphere uses learned patterns and responds to familiar stimuli. Although such lateralization has been studied mainly for visual responses, there is evidence in primates that auditory perception is lateralized and that vocal communication depends on differential processing by the hemispheres. The aim of the present work was to investigate whether dogs use different hemispheres to process different acoustic stimuli by presenting them with playbacks of a thunderstorm and their species-typical vocalizations. The results revealed that dogs usually process their species-typical vocalizations using the left hemisphere and the thunderstorm sounds using the right hemisphere. Nevertheless, conspecific vocalizations are not always processed by the left hemisphere, since the right hemisphere is used for processing vocalizations when they elicit intense emotion, including fear. These findings suggest that the specialisation of the left hemisphere for intraspecific communication is more ancient than previously thought, and so is the specialisation of the right hemisphere for intense emotions.

10.
Rimol LM  Specht K  Hugdahl K 《NeuroImage》2006,30(2):554-562
Previous neuroimaging studies have consistently reported bilateral activation to speech stimuli in the superior temporal gyrus (STG) and have identified an anteroventral stream of speech processing along the superior temporal sulcus (STS). However, little attention has been devoted to the possible confound of individual differences in hemispheric dominance for speech. The present study was designed to test for speech-selective activation while controlling for inter-individual variance in auditory laterality, by using only subjects with at least 10% right ear advantage (REA) on the dichotic listening test. Eighteen right-handed, healthy male volunteers (median age 26) participated in the study. The stimuli were words, syllables, and sine wave tones (220-2600 Hz), presented in a block design. Comparing words > tones and syllables > tones yielded activation in the left posterior MTG and the lateral STG (upper bank of STS). In the right temporal lobe, the activation was located in the MTG/STS (lower bank). Comparing left and right temporal lobe cluster sizes from the words > tones and syllables > tones contrasts at the single-subject level demonstrated a statistically significant left lateralization for speech sound processing in the STS/MTG area. The asymmetry analyses suggest that dichotic listening may be a suitable method for selecting a homogeneous group of subjects with respect to left hemisphere language dominance.
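The right ear advantage used as an inclusion criterion can be expressed as a percentage difference between correct reports from the two ears. The exact formula is not given in the abstract; the sketch below uses one common definition on made-up scores.

```python
import numpy as np

# Hypothetical dichotic-listening scores: correct reports per ear for candidate
# volunteers. The percentage index below is one common way to express the right
# ear advantage (REA); the study's exact formula is not stated in the abstract.
right_correct = np.array([22, 18, 25, 17, 20])
left_correct = np.array([15, 17, 14, 16, 12])

rea_percent = 100.0 * (right_correct - left_correct) / (right_correct + left_correct)
selected = rea_percent >= 10.0      # keep only clearly left-dominant listeners
print(np.round(rea_percent, 1), selected)
```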

11.
Healthy subjects show increased activation in left temporal lobe regions in response to speech sounds compared to complex nonspeech sounds. Abnormal lateralization of speech-processing regions in the temporal lobes has been posited to be a cardinal feature of schizophrenia. Event-related fMRI was used to test the hypothesis that schizophrenic patients would show an abnormal pattern of hemispheric lateralization when detecting speech compared with complex nonspeech sounds in an auditory oddball target-detection task. We predicted that differential activation for speech in the vicinity of the superior temporal sulcus would be greater in schizophrenic patients than in healthy subjects in the right hemisphere, but less in patients than in healthy subjects in the left hemisphere. Fourteen patients with schizophrenia (selected from an outpatient population, 2 females, 12 males, mean age 35.1 years) and 29 healthy subjects (8 females, 21 males, mean age 29.3 years) were scanned while they performed an auditory oddball task in which the oddball stimuli were either speech sounds or complex nonspeech sounds. Compared to controls, individuals with schizophrenia showed greater differential activation between speech and nonspeech in right temporal cortex, left superior frontal cortex, and the left temporal-parietal junction. The magnitude of the difference in the left temporal-parietal junction was significantly correlated with severity of disorganized thinking. This study supports the hypothesis that aberrant functional lateralization of speech processing is an underlying feature of schizophrenia and suggests that the magnitude of the disturbance in speech-processing circuits may be associated with severity of disorganized thinking.
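The reported brain-symptom relationship is a simple correlation between a per-patient activation difference and a symptom severity score. A hedged sketch on simulated values follows (the actual rating scale and correlation method are not specified here):

```python
import numpy as np
from scipy import stats

# Hypothetical values for the 14 patients: speech-minus-nonspeech activation in
# the left temporal-parietal junction and a disorganized-thinking severity score
# (both simulated; the study's actual scale is not specified in the abstract).
rng = np.random.default_rng(3)
severity = rng.uniform(0, 5, size=14)
tpj_contrast = 0.4 * severity + rng.normal(scale=0.5, size=14)

r, p = stats.pearsonr(tpj_contrast, severity)
print(f"r(12) = {r:.2f}, p = {p:.3f}")
```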

12.
Kang E  Lee DS  Kang H  Hwang CH  Oh SH  Kim CS  Chung JK  Lee MC 《NeuroImage》2006,32(1):423-431
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, during which no scanner noise is present, brain regions involved in speech-cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) who carried out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered together with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered during the bimodal condition (AV condition). In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A and even more so during the AV condition, but not during the V condition. Activations were observed in Broca's (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions in the left hemisphere during the V condition, in which lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V), observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions, was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue can exert an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction of audiovisual speech perception.
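The overadditivity criterion (AV > A + V) can be tested as a one-sample t test on the per-subject difference AV - (A + V), assuming each condition estimate is expressed relative to a common baseline. The values below are simulated, not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject activity estimates (relative to the control condition)
# in the right postcentral ROI for the 17 subjects; values are illustrative only.
rng = np.random.default_rng(4)
a_only = rng.normal(1.0, 0.4, 17)
v_only = rng.normal(0.2, 0.4, 17)
av = rng.normal(1.6, 0.4, 17)

# Overadditivity criterion AV > A + V, tested as a one-sample t test on AV - (A + V).
superadditive = av - (a_only + v_only)
t, p = stats.ttest_1samp(superadditive, popmean=0.0)
print(f"mean difference = {superadditive.mean():.2f}, t(16) = {t:.2f}, p = {p:.3f}")
```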

13.
Meyer M  Baumann S  Jancke L 《NeuroImage》2006,32(4):1510-1523
Timbre is a major attribute of sound perception and a key feature for the identification of sound quality. Here, we present event-related brain potentials (ERPs) obtained from sixteen healthy individuals while they discriminated complex instrumental tones (piano, trumpet, and violin) or simple sine wave tones that lack the principal features of timbre. Data analysis yielded enhanced N1 and P2 responses to instrumental tones relative to sine wave tones. Furthermore, we applied an electrical brain imaging approach using low-resolution electromagnetic tomography (LORETA) to estimate the neural sources of N1/P2 responses. Separate significance tests of instrumental vs. sine wave tones for N1 and P2 revealed distinct regions as principally governing timbre perception. In an initial stage (N1), timbre perception recruits left and right (peri-)auditory fields with an activity maximum over the right posterior Sylvian fissure (SF) and the posterior cingulate (PCC) territory. In the subsequent stage (P2), we uncovered enhanced activity in the vicinity of the entire cingulate gyrus. The involvement of extra-auditory areas in timbre perception may imply the presence of a highly associative processing level which might be generally related to musical sensations and integrates widespread medial areas of the human cortex. In summary, our results demonstrate spatio-temporally distinct stages in timbre perception which not only involve bilateral parts of the peri-auditory cortex but also medially situated regions of the human brain associated with emotional and auditory imagery functions.

14.
The high degree of intersubject structural variability in the human brain is an obstacle in combining data across subjects in functional neuroimaging experiments. A common method for aligning individual data is normalization into standard 3D stereotaxic space. Since the inherent geometry of the cortex is that of a 2D sheet, higher precision can potentially be achieved if the intersubject alignment is based on landmarks in this 2D space. To examine the potential advantage of surface-based alignment for localization of auditory cortex activation, and to obtain high-resolution maps of areas activated by speech sounds, fMRI data were analyzed from the left hemisphere of subjects tested with phoneme and tone discrimination tasks. We compared Talairach stereotaxic normalization with two surface-based methods: Landmark Based Warping, in which landmarks in the auditory cortex were chosen manually, and Automated Spherical Warping, in which hemispheres were aligned automatically based on spherical representations of individual and average brains. Examination of group maps generated with these alignment methods revealed superiority of the surface-based alignment in providing precise localization of functional foci and in avoiding mis-registration due to intersubject anatomical variability. Human left hemisphere cortical areas engaged in complex auditory perception appear to lie on the superior temporal gyrus, the dorsal bank of the superior temporal sulcus, and the lateral third of Heschl's gyrus.

15.
Millman RE  Woods WP  Quinlan PT 《NeuroImage》2011,54(3):2364-2373
It is generally accepted that, while speech is processed bilaterally in auditory cortical areas, complementary analyses of the speech signal are carried out across the hemispheres. However, the Asymmetric Sampling in Time (AST) model (Poeppel, 2003) suggests that there is functional asymmetry due to different time scales of temporal integration in each hemisphere. The right hemisphere preferentially processes slow modulations commensurate with the theta frequency band (~4-8 Hz), whereas the left hemisphere is more sensitive to fast temporal modulations in the gamma frequency range (~25-50 Hz). Here we examined the perception of noise-vocoded, i.e. spectrally-degraded, words. Magnetoencephalography (MEG) beamformer analyses were used to determine where and how noise-vocoded speech is represented in terms of changes in power resulting from neuronal activity. The outputs of beamformer spatial filters were used to delineate the temporal dynamics of these changes in power. Beamformer analyses localised low-frequency "delta" (1-4 Hz) and "theta" (3-6 Hz) changes in total power to the left hemisphere and high-frequency "gamma" (60-80 Hz, 80-100 Hz) changes in total power to the right hemisphere. Time-frequency analyses confirmed the frequency content and timing of changes in power in the left and right hemispheres. Together the beamformer and time-frequency analyses demonstrate a functional asymmetry in the representation of noise-vocoded words that is inconsistent with the AST model, at least in brain areas outside of primary auditory cortex.
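The band-specific power changes described above can be illustrated by computing Welch power spectral density within the stated frequency bands for left- and right-hemisphere virtual-sensor time series. The band boundaries follow the abstract; the signals and sampling rate below are illustrative assumptions rather than the study's MEG pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Average Welch power spectral density within [low, high] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs))
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Hypothetical left/right virtual-sensor time series (1 s at 600 Hz); real MEG
# beamformer outputs would replace these simulated signals.
fs = 600
rng = np.random.default_rng(5)
t = np.arange(fs) / fs
left = np.sin(2 * np.pi * 4 * t) + 0.1 * rng.normal(size=fs)    # theta-dominated
right = np.sin(2 * np.pi * 70 * t) + 0.1 * rng.normal(size=fs)  # gamma-dominated

bands = {"delta": (1, 4), "theta": (3, 6), "low gamma": (60, 80), "high gamma": (80, 100)}
for name, (lo, hi) in bands.items():
    print(name, band_power(left, fs, lo, hi), band_power(right, fs, lo, hi))
```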

16.
The discrimination of voice-onset time, an acoustic-phonetic cue to voicing in stop consonants, was investigated to explore the neural systems underlying the perception of a rapid temporal speech parameter. Pairs of synthetic stimuli taken from a [da] to [ta] continuum varying in voice-onset time (VOT) were presented for discrimination judgments. Participants exhibited categorical perception, discriminating 15-ms and 30-ms between-category comparisons and failing to discriminate 15-ms within-category comparisons. Contrastive analysis with a tone discrimination task demonstrated left superior temporal gyrus activation in all three VOT conditions with recruitment of additional regions, particularly the right inferior frontal gyrus and middle frontal gyrus, for the 15-ms between-category stimuli. Hemispheric differences using anatomically defined regions of interest showed two distinct patterns, with anterior regions showing more activation in the right hemisphere relative to the left hemisphere and temporal regions demonstrating greater activation in the left hemisphere relative to the right hemisphere. Activation in the temporal regions appears to reflect initial acoustic-perceptual analysis of VOT. Greater activation in the right hemisphere anterior regions may reflect increased processing demands, suggesting involvement of the right hemisphere when the acoustic distance between the stimuli is reduced and when the discrimination judgment becomes more difficult.
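Categorical perception predicts better discrimination across the category boundary than within a category for the same 15-ms VOT step. A minimal sketch of that behavioural check, with simulated accuracies and a paired t test (the study's actual statistics are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant discrimination accuracy (proportion correct) for
# 15-ms between-category and 15-ms within-category VOT pairs; simulated values.
rng = np.random.default_rng(6)
between_15ms = np.clip(rng.normal(0.85, 0.08, 12), 0, 1)
within_15ms = np.clip(rng.normal(0.55, 0.08, 12), 0, 1)

# Categorical perception predicts better discrimination across the phoneme
# boundary than within a category for the same 15-ms acoustic difference.
t, p = stats.ttest_rel(between_15ms, within_15ms)
print(f"t({len(between_15ms) - 1}) = {t:.2f}, p = {p:.4f}")
```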

17.
Complex sentence processing is supported by a left-lateralized neural network including inferior frontal cortex and posterior superior temporal cortex. This study investigates the pattern of connectivity and information flow within this network. We used fMRI BOLD data derived from 12 healthy participants reported in an earlier study (Thompson, C. K., Den Ouden, D. B., Bonakdarpour, B., Garibaldi, K., & Parrish, T. B. (2010b). Neural plasticity and treatment-induced recovery of sentence processing in agrammatism. Neuropsychologia, 48(11), 3211-3227) to identify activation peaks associated with object-cleft over syntactically less complex subject-cleft processing. Directed Partial Correlation Analysis was conducted on time series extracted from participant-specific activation peaks and showed evidence of functional connectivity among four regions, arranged linearly from premotor cortex to inferior frontal gyrus to posterior superior temporal sulcus to anterior middle temporal gyrus. This pattern served as the basis for Dynamic Causal Modeling of networks with a driving input to posterior superior temporal cortex, which likely supports thematic role assignment, and networks with a driving input to inferior frontal cortex, a core region associated with syntactic computation. The optimal model was determined through both frequentist and Bayesian Model Selection and turned out to reflect a network with a primary drive from inferior frontal cortex and modulation of the connection between inferior frontal cortex and posterior superior temporal cortex by complex sentence processing. The winning model also showed a substantive role for a feedback mechanism from posterior superior temporal cortex back to inferior frontal cortex. We suggest that complex syntactic processing is driven by word-order analysis, supported by inferior frontal cortex, in an interactive relation with posterior superior temporal cortex, which supports verb argument structure processing.
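Bayesian model selection over candidate DCMs reduces, in the fixed-effects case, to summing each model's log evidence over participants and normalizing under a uniform model prior. The sketch below applies that arithmetic to simulated log evidences; the model labels and values are assumptions, not the study's results.

```python
import numpy as np

# Hypothetical log model evidences (e.g., free-energy approximations) for the
# 12 participants and three candidate models; values are illustrative only.
rng = np.random.default_rng(7)
log_evidence = rng.normal(loc=[-320.0, -315.0, -310.0], scale=3.0, size=(12, 3))

# Fixed-effects Bayesian model selection with a uniform model prior:
# group log evidence is the sum over subjects; posterior model probabilities
# follow from a softmax of those sums.
group_log_evidence = log_evidence.sum(axis=0)
shifted = group_log_evidence - group_log_evidence.max()   # numerical stability
posterior = np.exp(shifted) / np.exp(shifted).sum()
print(dict(zip(["IFC-driven", "pSTS-driven", "no modulation"], np.round(posterior, 3))))
```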

18.
Macaluso E  Frith C 《NeuroImage》2000,12(5):485-494
Functional asymmetries between hemispheres have been reported in relation to spatial and temporal information processing. Here we used functional magnetic resonance imaging to investigate the influence of task on activity in extrastriate areas during selective spatial attention. During bilateral visual stimulation, subjects attended either the left or the right hemifield. Within the attended side, the task was either to discriminate the orientation of the stimuli or to judge their temporal characteristics. The bilateral stimulation caused symmetric activation of the left and right occipitotemporal junction. Within these regions, we investigated the modulatory effects of attention and the effect of task upon these modulations. A region-of-interest approach was used to compare activity in the two hemispheres. The signal at the occipitotemporal junction was analyzed in a 2 x 2 x 2 factorial design, with attended side, type of task, and hemisphere as factors. We found that, in both hemispheres, activity was higher when attention was directed to the contralateral hemifield compared with the ipsilateral hemifield. However, the size of these contralateral attentional modulations was dependent on the task. In the left occipitotemporal junction, contralateral modulations were stronger during the temporal task, while in the right occipitotemporal junction contralateral modulations were stronger during orientation discrimination. Overall, this pattern of activity led to a significant three-way interaction between attended side, type of task, and hemisphere. We conclude that task characteristics influence brain activity associated with spatial selective attention. Our results support the hypothesis that temporal and orientation processing are preferentially associated with the left and right hemisphere, respectively.
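The 2 x 2 x 2 design can be analyzed as a repeated-measures ANOVA with attended side, task, and hemisphere as within-subject factors, testing the three-way interaction reported above. The sketch below uses statsmodels' AnovaRM on simulated per-subject cell means; all subject IDs and values are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format ROI signal for a 2 x 2 x 2 within-subject design
# (attended side x task x hemisphere); one cell mean per subject and condition.
rng = np.random.default_rng(8)
rows = []
for subj in range(12):
    for side in ("left", "right"):
        for task in ("temporal", "orientation"):
            for hemi in ("LH", "RH"):
                contralateral = (side == "left") == (hemi == "RH")
                rows.append({"subject": subj, "side": side, "task": task,
                             "hemisphere": hemi,
                             "signal": 1.0 + 0.5 * contralateral
                                       + rng.normal(scale=0.2)})
df = pd.DataFrame(rows)

# Test the three-way interaction (side x task x hemisphere) reported in the study.
res = AnovaRM(df, depvar="signal", subject="subject",
              within=["side", "task", "hemisphere"]).fit()
print(res)
```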

19.
Levitin DJ  Menon V 《NeuroImage》2003,20(4):2142-2152
The neuroanatomical correlates of musical structure were investigated using functional magnetic resonance imaging (fMRI) and a unique stimulus manipulation involving scrambled music. The experiment compared brain responses while participants listened to classical music and scrambled versions of that same music. Specifically, the scrambled versions disrupted musical structure while holding low-level musical attributes constant, including the psychoacoustic features of the music such as pitch, loudness, and timbre. Comparing music to its scrambled counterpart, we found focal activation in the pars orbitalis region (Brodmann Area 47) of the left inferior frontal cortex, a region that has been previously closely associated with the processing of linguistic structure in spoken and signed language, and its right hemisphere homologue. We speculate that this particular region of inferior frontal cortex may be more generally responsible for processing fine-structured stimuli that evolve over time, not merely those that are linguistic.

20.
Cortical representation of saccular vestibular stimulation: VEMPs in fMRI
Short tone bursts trigger a vestibular evoked myogenic potential (VEMP), an inhibitory potential that reflects a component of the vestibulo-collic reflex (VCR). These potentials arise from activation of the sacculus and are expressed through the VCR. Until now, the ascending projections of the sacculus have been unknown in humans; only the representation of the semicircular canals or of the entire vestibular nerve has been demonstrated. The aim of this study was to determine whether a saccular stimulus that evoked VEMPs could activate vestibular cortical areas in fMRI. To determine this, we studied the differential effects of unilateral VEMP stimulation in 21 healthy right-handers in a clinical 1.5-T scanner while they wore piezoelectric headphones. A unilateral VEMP stimulus and two auditory control stimuli were delivered in randomized order to the stimulated ear. A random-effects statistical analysis was done with SPM2 (p < 0.05, corrected). After exclusion of the auditory effects, the major findings were as follows: (i) significant activations were located in the multisensory cortical vestibular network within both hemispheres, including the posterior insular cortex, the middle and superior temporal gyri, and the inferior parietal cortex; (ii) the activation pattern was elicited bilaterally, with a predominance of the right hemisphere in right-handers; (iii) the saccular vestibular projection was predominantly ipsilateral, whereas (iv) pure acoustic stimuli were processed with a predominance of the respective contralateral hemisphere and mainly in the left hemisphere. This is the first fMRI demonstration of the representation of saccular input at the cortical level. The activation pattern is similar to that known from stimulation of the entire vestibular nerve or the horizontal semicircular canal. Our data provide evidence of a task-dependent separation of the processing within the vestibular otolith and auditory systems in the two hemispheres.

