Similar Articles
20 similar articles found (search time: 31 ms)
1.
Healthy subjects show increased activation in left temporal lobe regions in response to speech sounds compared to complex nonspeech sounds. Abnormal lateralization of speech-processing regions in the temporal lobes has been posited to be a cardinal feature of schizophrenia. Event-related fMRI was used to test the hypothesis that schizophrenic patients would show an abnormal pattern of hemispheric lateralization when detecting speech compared with complex nonspeech sounds in an auditory oddball target-detection task. We predicted that differential activation for speech in the vicinity of the superior temporal sulcus would be greater in schizophrenic patients than in healthy subjects in the right hemisphere, but less in patients than in healthy subjects in the left hemisphere. Fourteen patients with schizophrenia (selected from an outpatient population; 2 females, 12 males, mean age 35.1 years) and 29 healthy subjects (8 females, 21 males, mean age 29.3 years) were scanned while they performed an auditory oddball task in which the oddball stimuli were either speech sounds or complex nonspeech sounds. Compared to controls, individuals with schizophrenia showed greater differential activation between speech and nonspeech in right temporal cortex, left superior frontal cortex, and the left temporal-parietal junction. The magnitude of the difference in the left temporal-parietal junction was significantly correlated with severity of disorganized thinking. This study supports the hypothesis that aberrant functional lateralization of speech processing is an underlying feature of schizophrenia and suggests that the magnitude of the disturbance in speech-processing circuits may be associated with severity of disorganized thinking.

2.
Hashimoto T, Usui N, Taira M, Nose I, Haji T, Kojima S. NeuroImage, 2006, 31(4): 1762-1770
This event-related fMRI study was conducted to examine blood-oxygen-level-dependent responses to the processing of auditory onomatopoeic sounds. We used a sound categorization task in which the participants heard four types of stimuli: onomatopoeic sounds, nouns (verbal), animal (nonverbal) sounds, and pure tone/noise (control). While participants discriminated between the categories of target sounds (birds/nonbirds), the nouns produced activations in the left anterior superior temporal gyrus (STG), whereas the animal sounds produced activations in the bilateral superior temporal sulcus (STS) and the left inferior frontal gyrus (IFG). In contrast, the onomatopoeias activated extensive brain regions, including the left anterior STG, the region from the bilateral STS to the middle temporal gyrus, and the bilateral IFG. The onomatopoeic sounds showed greater activation in the right middle STS than did the nouns and environmental sounds. These results indicate that onomatopoeic sounds are processed by extensive brain regions involved in the processing of both verbal and nonverbal sounds. Thus, onomatopoeic sounds may serve as a bridge between nouns and animal sounds. This is the first evidence demonstrating how onomatopoeic sounds are processed in the human brain.

3.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage, 2006, 32(1): 423-431
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, which involves no scanner noise, brain regions involved in speech-cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) who carried out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered during the bimodal (AV) condition. In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A condition and more so during the AV condition, but not during the V condition. Activations were observed in Broca's area (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions in the left hemisphere during the V condition, where lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue exerts an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction of audiovisual speech perception.

4.
Timbre is a major structuring force in music and one of the most important and ecologically relevant features of auditory events. We used sound stimuli selected on the basis of previous psychophysiological studies to investigate the neural correlates of timbre perception. Our results indicate that both the left and right hemispheres are involved in timbre processing, challenging the conventional notion that the elementary attributes of musical perception are predominantly lateralized to the right hemisphere. Significant timbre-related brain activation was found in well-defined regions of posterior Heschl's gyrus and superior temporal sulcus, extending into the circular insular sulcus. Although the extent of activation was not significantly different between left and right hemispheres, temporal lobe activations were significantly posterior in the left, compared to the right, hemisphere, suggesting a functional asymmetry in their respective contributions to timbre processing. The implications of our findings for music processing in particular and auditory processing in general are discussed.

5.
Rimol LM, Specht K, Hugdahl K. NeuroImage, 2006, 30(2): 554-562
Previous neuroimaging studies have consistently reported bilateral activation to speech stimuli in the superior temporal gyrus (STG) and have identified an anteroventral stream of speech processing along the superior temporal sulcus (STS). However, little attention has been devoted to the possible confound of individual differences in hemispheric dominance for speech. The present study was designed to test for speech-selective activation while controlling for inter-individual variance in auditory laterality, by using only subjects with at least a 10% right ear advantage (REA) on the dichotic listening test. Eighteen right-handed, healthy male volunteers (median age 26) participated in the study. The stimuli were words, syllables, and sine wave tones (220-2600 Hz), presented in a block design. Comparing words > tones and syllables > tones yielded activation in the left posterior middle temporal gyrus (MTG) and the lateral STG (upper bank of the STS). In the right temporal lobe, the activation was located in the MTG/STS (lower bank). Comparing left and right temporal lobe cluster sizes from the words > tones and syllables > tones contrasts at the single-subject level demonstrated a statistically significant left lateralization for speech sound processing in the STS/MTG area. The asymmetry analyses suggest that dichotic listening may be a suitable method for selecting a homogeneous group of subjects with respect to left hemisphere language dominance.
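As a rough illustration of the laterality measures this abstract relies on, the sketch below (Python) computes a dichotic-listening right ear advantage and a cluster-size laterality index. Only the 10% REA criterion comes from the abstract; the function names, example counts, and data are illustrative assumptions, not the study's code.

```python
# Hedged sketch: dichotic-listening REA screening and an fMRI cluster-size
# laterality index. Only the 10% REA criterion is taken from the abstract.

def rea_percent(right_correct: int, left_correct: int) -> float:
    """Right ear advantage as a percentage: (R - L) / (R + L) * 100."""
    return 100.0 * (right_correct - left_correct) / (right_correct + left_correct)

def laterality_index(left_voxels: int, right_voxels: int) -> float:
    """Conventional LI in [-1, 1]; positive values indicate left dominance."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

# Example screening: include only subjects with >= 10% REA.
subjects = {"s01": (34, 22), "s02": (28, 27)}  # (right, left) correct reports
included = [s for s, (r, l) in subjects.items() if rea_percent(r, l) >= 10.0]
print(included)  # ['s01']
```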

6.
Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

7.
The left superior temporal cortex shows greater responsiveness to speech than to non-speech sounds according to previous neuroimaging studies, suggesting that this brain region has a special role in speech processing. However, since speech sounds differ acoustically from non-speech sounds, it is possible that this region is not involved in speech perception per se, but rather in processing of some complex acoustic features. "Sine wave speech" (SWS) provides a tool to study neural speech specificity using identical acoustic stimuli, which can be perceived either as speech or non-speech, depending on previous experience of the stimuli. We scanned 21 subjects using 3T functional MRI in two sessions, both including SWS and control stimuli. In the pre-training session, all subjects perceived the SWS stimuli as non-speech. In the post-training session, the identical stimuli were perceived as speech by 16 subjects. In these subjects, SWS stimuli elicited significantly stronger activity within the left posterior superior temporal sulcus (STSp) in the post- vs. pre-training session. In contrast, activity in this region was not enhanced after training in the 5 subjects who did not perceive SWS stimuli as speech. Moreover, the control stimuli, which were always perceived as non-speech, elicited similar activity in this region in both sessions. Altogether, the present findings suggest that activation of the neural speech representations in the left STSp might be a prerequisite for hearing sounds as speech.

8.
Evoked magnetic fields were recorded from 18 adult volunteers using magnetoencephalography (MEG) during perception of speech stimuli (the endpoints of a voice onset time (VOT) series ranging from /ga/ to /ka/), analogous nonspeech stimuli (the endpoints of a two-tone series varying in relative tone onset time (TOT)), and a set of harmonically complex tones varying in pitch. During the early time window (approximately 60 to approximately 130 ms post-stimulus onset), activation of the primary auditory cortex was bilaterally equal in strength for all three tasks. During the middle (approximately 130 to 800 ms) and late (800 to 1400 ms) time windows of the VOT task, activation of the posterior portion of the superior temporal gyrus (STGp) was greater in the left hemisphere than in the right hemisphere, in both group and individual data. These asymmetries were not evident in response to the nonspeech stimuli. Hemispheric asymmetries in a measure of neurophysiological activity in STGp, which includes the supratemporal plane and cortex inside the superior temporal sulcus, may reflect a specialization of association auditory cortex in the left hemisphere for processing speech sounds. Differences in late activation patterns potentially reflect the operation of a postperceptual process (e.g., rehearsal in working memory) that is restricted to speech stimuli.

9.
It is commonly assumed that, in the cochlea and the brainstem, the auditory system processes speech sounds without differentiating them from any other sounds. At some stage, however, it must treat speech sounds and nonspeech sounds differently, since we perceive them as different. The purpose of this study was to delimit the first location in the auditory pathway that makes this distinction using functional MRI, by identifying regions that are differentially sensitive to the internal structure of speech sounds as opposed to closely matched control sounds. We analyzed data from nine right-handed volunteers who were scanned while listening to natural and synthetic vowels, or to nonspeech stimuli matched to the vowel sounds in terms of their long-term energy and both their spectral and temporal profiles. The vowels produced more activation than nonspeech sounds in a bilateral region of the superior temporal sulcus, lateral and inferior to regions of auditory cortex that were activated by both vowels and nonspeech stimuli. The results suggest that the perception of vowel sounds is compatible with a hierarchical model of primate auditory processing in which early cortical stages of processing respond indiscriminately to speech and nonspeech sounds, and only higher regions, beyond anatomically defined auditory cortex, show selectivity for speech sounds.

10.
Osnes B, Hugdahl K, Specht K. NeuroImage, 2011, 54(3): 2437-2445
Several reports of premotor cortex involvement in speech perception have been put forward, yet the functional role of premotor cortex remains under debate. To investigate this role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. The same transformation from white noise into a musical instrument sound served as the control condition. The fMRI data were modelled with Dynamic Causal Modeling (DCM), in which the effective connectivity between Heschl's gyrus, the planum temporale, the superior temporal sulcus, and premotor cortex was tested. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted, and was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model that contained significant interconnections between Heschl's gyrus, the planum temporale, and the superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and the superior temporal sulcus, and a connection from the planum temporale to premotor cortex, were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, it is concluded that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.
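The Bayesian model selection step described above reduces to comparing model evidences. The following minimal sketch (Python) shows fixed-effects selection from log-evidences under uniform model priors; the numbers are invented, and nothing about the study's actual DCM inversion is implied.

```python
# Hedged sketch: fixed-effects Bayesian model selection over candidate
# connectivity models, given one log-evidence per model (values invented).
import numpy as np

log_evidence = np.array([-4125.3, -4118.7, -4131.0])

# Posterior model probabilities under flat priors: p(m|y) is proportional
# to p(y|m). Subtracting the max before exponentiating keeps this stable.
rel = log_evidence - log_evidence.max()
posterior = np.exp(rel) / np.exp(rel).sum()

best = int(np.argmax(posterior))
print(f"model {best} preferred with posterior probability {posterior[best]:.3f}")
```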

11.
Many people exposed to sinewave analogues of speech first report hearing them as electronic glissandi and later, when they switch into a 'speech mode', hearing them as syllables. This perceptual switch modifies their discrimination abilities, enhancing perception of differences that cross phonemic boundaries while diminishing perception of differences within phonemic categories. Using high-density evoked potentials and fMRI in a discrimination paradigm, we studied the changes in brain activity that are related to this change in perception. With ERPs, we observed that phonemic coding is faster than acoustic coding: the electrophysiological mismatch response (MMR) occurred earlier for a phonemic change than for an equivalent acoustic change. The MMR topography was also more asymmetric for a phonemic change than for an acoustic change. In fMRI, activations were also significantly asymmetric, favoring the left hemisphere in both perception modes. Furthermore, switching to the speech mode significantly enhanced activation in the posterior parts of the left superior temporal gyrus and sulcus relative to the non-speech mode. When responses to a change of stimulus were studied, a cluster of voxels in the supramarginal gyrus was activated significantly more by a phonemic change than by an acoustic change. These results demonstrate that phoneme perception in adults relies on a specific and highly efficient left-hemispheric network, which can be activated in top-down fashion when processing ambiguous speech/non-speech stimuli.
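A mismatch response of the kind reported here is conventionally derived as the deviant-minus-standard difference wave, with latency read off its peak. Below is a minimal sketch (Python) under that assumption; the sampling rate, array shapes, and variable names are illustrative, not taken from the study.

```python
# Hedged sketch: derive an MMR difference wave and its peak latency for one
# channel and condition. FS and the epoch layout are assumed for illustration.
import numpy as np

FS = 500  # Hz, assumed sampling rate

def mmr_peak_latency_ms(standard_epochs, deviant_epochs):
    """Epochs: (n_trials, n_samples) arrays for one channel and condition."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    peak = int(np.argmax(np.abs(diff)))   # sample index of largest deflection
    return 1000.0 * peak / FS             # latency in milliseconds

# Usage (data not included here):
#   lat_phonemic = mmr_peak_latency_ms(std_phon, dev_phon)
#   lat_acoustic = mmr_peak_latency_ms(std_acou, dev_acou)
# The abstract's ERP finding corresponds to lat_phonemic < lat_acoustic.
```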

12.
Adults and children processing music: an fMRI study
Koelsch S, Fritz T, Schulze K, Alsop D, Schlaug G. NeuroImage, 2005, 25(4): 1068-1076
The present study investigates the functional neuroanatomy of music perception with functional magnetic resonance imaging (fMRI). Three different subject groups were investigated to examine developmental aspects and effects of musical training: 10-year-old children with varying degrees of musical training, adults without formal musical training (nonmusicians), and adult musicians. Subjects made judgements on sequences that ended on chords that were music-syntactically either regular or irregular. In adults, irregular chords activated the inferior frontal gyrus, orbital frontolateral cortex, the anterior insula, ventrolateral premotor cortex, anterior and posterior areas of the superior temporal gyrus, the superior temporal sulcus, and the supramarginal gyrus. These structures presumably form different networks mediating cognitive aspects of music processing (such as processing of musical syntax and musical meaning, as well as auditory working memory), and possibly emotional aspects of music processing. In the right hemisphere, the activation pattern of children was similar to that of adults. In the left hemisphere, adults showed larger activations than children in prefrontal areas, in the supramarginal gyrus, and in temporal areas. In both adults and children, musical training was correlated with stronger activations in the frontal operculum and the anterior portion of the superior temporal gyrus.

13.
Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex.

14.
Recently, we demonstrated that during auditory short-term memory maintenance, gamma-band activity (GBA) components can be identified that are specific to the retained stimulus. These activations peaked in the middle of the delay phase between sample and test stimuli, and their magnitude during the final part of this period correlated with performance. However, using a constant delay duration did not allow us to answer the question of whether the stimulus-specific GBA components represented responses to sample sounds or anticipatory activations preceding test stimuli. Here, we addressed this unresolved issue by investigating the temporal dynamics of stimulus-specific GBA for two delay durations. The magnetoencephalogram was recorded in 18 adults during an auditory spatial short-term memory task involving lateralized sample stimuli presented with two different interaural time delays. Subjects had to decide whether test stimuli presented after retention phases of 800 or 1200 ms had the same lateralization as the sample sounds. Statistical probability mapping served to identify oscillatory activations differentiating between the two sample sounds. We found stimulus-specific GBA components over posterior cortex peaking about 400 ms prior to the onset of test stimuli, regardless of delay duration. Their magnitude correlated with task performance. In summary, stimulus-specific GBA components with predictive power for short-term memory performance were observed in anticipation of test stimuli. They may reflect the preparatory activation of memory representations or the shifting of attention to the specific expected location of the test stimulus.
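Statistical probability mapping, as used above to find condition-specific oscillatory activity, can be approximated by a mass-univariate permutation test on band power. The sketch below (Python) shows one such test for a single sensor/time-frequency bin; it is a generic stand-in with synthetic data, not the authors' exact procedure.

```python
# Hedged sketch: permutation test for a difference in mean gamma-band power
# between two sample-sound conditions at one sensor/time-frequency bin.
import numpy as np

rng = np.random.default_rng(0)

def permutation_p(power_a, power_b, n_perm=5000):
    """Two-sided p-value for the difference in mean power between conditions."""
    observed = power_a.mean() - power_b.mean()
    pooled = np.concatenate([power_a, power_b])
    n_a = len(power_a)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:n_a].mean() - pooled[n_a:].mean()) >= abs(observed):
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Demo on synthetic single-bin power values (arbitrary units):
a = rng.normal(1.2, 0.3, size=40)   # condition A trials
b = rng.normal(1.0, 0.3, size=40)   # condition B trials
print(permutation_p(a, b))
```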

15.
The high degree of intersubject structural variability in the human brain is an obstacle in combining data across subjects in functional neuroimaging experiments. A common method for aligning individual data is normalization into standard 3D stereotaxic space. Since the inherent geometry of the cortex is that of a 2D sheet, higher precision can potentially be achieved if the intersubject alignment is based on landmarks in this 2D space. To examine the potential advantage of surface-based alignment for localization of auditory cortex activation, and to obtain high-resolution maps of areas activated by speech sounds, fMRI data were analyzed from the left hemisphere of subjects tested with phoneme and tone discrimination tasks. We compared Talairach stereotaxic normalization with two surface-based methods: Landmark Based Warping, in which landmarks in the auditory cortex were chosen manually, and Automated Spherical Warping, in which hemispheres were aligned automatically based on spherical representations of individual and average brains. Examination of group maps generated with these alignment methods revealed superiority of the surface-based alignment in providing precise localization of functional foci and in avoiding mis-registration due to intersubject anatomical variability. Human left hemisphere cortical areas engaged in complex auditory perception appear to lie on the superior temporal gyrus, the dorsal bank of the superior temporal sulcus, and the lateral third of Heschl's gyrus.

16.
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm³ region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.

17.
An fMRI investigation of syllable sequence production
Bohland JW, Guenther FH. NeuroImage, 2006, 32(2): 821-841
Fluent speech comprises sequences that are composed from a finite alphabet of learned words, syllables, and phonemes. The sequencing of discrete motor behaviors has received much attention in the motor control literature, but relatively little attention has been focused directly on speech production. In this paper, we investigate the cortical and subcortical regions involved in organizing and enacting sequences of simple speech sounds. Sparse event-triggered functional magnetic resonance imaging (fMRI) was used to measure responses to preparation and overt production of non-lexical three-syllable utterances, parameterized by two factors: syllable complexity and sequence complexity. The comparison of overt production trials to preparation-only trials revealed a network related to the initiation of a speech plan, control of the articulators, and hearing one's own voice. This network included the primary motor and somatosensory cortices, auditory cortical areas, supplementary motor area (SMA), the precentral gyrus of the insula, and portions of the thalamus, basal ganglia, and cerebellum. Additional stimulus complexity led to increased engagement of the basic speech network and recruitment of additional areas known to be involved in sequencing non-speech motor acts. In particular, the left hemisphere inferior frontal sulcus and posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, the SMA and pre-SMA, the basal ganglia, anterior thalamus, and the cerebellum showed increased activity for more complex stimuli. We hypothesize mechanistic roles for the extended speech production network in the organization and execution of sequences of speech sounds.

18.
Malinen S, Hlushchuk Y, Hari R. NeuroImage, 2007, 35(1): 131-139
In search of suitable tools to study brain activation in natural environments, where the stimuli are multimodal, poorly predictable, and irregularly varying, we collected functional magnetic resonance imaging data from 6 subjects during a continuous 8-min stimulus sequence that comprised auditory (speech or tone pips), visual (video clips dominated by faces, hands, or buildings), and tactile finger stimuli in blocks of 6-33 s. Results obtained by independent component analysis (ICA) and general-linear-model-based (GLM) analysis were compared. ICA separated in the superior temporal gyrus one independent component (IC) that reacted to all auditory stimuli and in the superior temporal sulcus another IC responding only to speech. Several distinct and rather symmetric vision-sensitive ICs were found in the posterior brain. An IC in the V5/MT region reacted to videos depicting faces or hands, whereas ICs in the V1/V2 region reacted to all video clips, including buildings. The corresponding GLM-derived activations in the auditory and early visual cortices comprised sub-areas of the ICA-revealed activations. ICA separated a prominent IC in the primary somatosensory cortex, whereas the GLM-based analysis failed to show any touch-related activation. "Intrinsic" components, unrelated to the stimuli but spatially consistent across subjects, were discerned as well. The individual time courses were highly consistent in sensory projection cortices and more variable elsewhere. The ability to differentiate functionally meaningful composites of activated brain areas and to straightforwardly reveal their temporal dynamics renders ICA a sensitive tool for studying brain responses to complex natural stimuli.
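For readers unfamiliar with the two analyses being contrasted, the sketch below (Python) sets up a toy version of each: a spatial ICA decomposition via scikit-learn's FastICA, and a GLM fit of voxel time courses to a boxcar regressor. The matrix sizes, component count, and data are synthetic assumptions; nothing about the study's preprocessing or models is implied.

```python
# Hedged sketch: ICA vs. GLM on a toy fMRI data matrix (time x voxels).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_time, n_voxels = 240, 2000
X = rng.standard_normal((n_time, n_voxels))   # stand-in for a BOLD matrix

# Spatial ICA (a common fMRI convention): voxels are treated as samples,
# so each component is a spatial map with an associated time course.
ica = FastICA(n_components=10, max_iter=500, random_state=0)
spatial_maps = ica.fit_transform(X.T)          # (n_voxels, n_components)
time_courses = ica.mixing_                     # (n_time, n_components)

# GLM: regress every voxel's time course on a stimulus regressor (here a
# toy boxcar); the betas index stimulus-locked activation voxel by voxel.
boxcar = np.tile(np.repeat([0.0, 1.0], 20), 6)            # 240 time points
design = np.column_stack([boxcar, np.ones(n_time)])
betas, *_ = np.linalg.lstsq(design, X, rcond=None)        # (2, n_voxels)
```

The contrast the abstract draws falls out of this setup: the GLM can only recover activity correlated with the chosen regressors, whereas ICA can also surface stimulus-unrelated ("intrinsic") components.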

19.
Male and female voices activate distinct regions in the male brain
In schizophrenia, auditory verbal hallucinations (AVHs) are likely to be perceived as gender-specific. Given that functional neuro-imaging correlates of AVHs involve multiple brain regions, principally including auditory cortex, it is likely that those brain regions responsible for attribution of gender to speech are invoked during AVHs. We used functional magnetic resonance imaging (fMRI) and a paradigm utilising 'gender-apparent' (unaltered) and 'gender-ambiguous' (pitch-scaled) male and female voice stimuli to test the hypothesis that male and female voices activate distinct brain areas during gender attribution. The perception of female voices, when compared with male voices, elicited greater activation of the right anterior superior temporal gyrus, near the superior temporal sulcus. Similarly, male voice perception activated the mesio-parietal precuneus area. These different gender associations could not be explained by either simple pitch perception or behavioural response, because the activations that we observed were conjointly elicited by both 'gender-apparent' and 'gender-ambiguous' voices. The results of this study demonstrate that, in the male brain, the perception of male and female voices activates distinct brain regions.

20.
Rinne T, Koistinen S, Talja S, Wikman P, Salonen O. NeuroImage, 2012, 59(4): 4126-4131
In the present study, we applied high-resolution functional magnetic resonance imaging (fMRI) of the human auditory cortex (AC) and adjacent areas to compare activations during spatial discrimination and spatial n-back memory tasks that were varied parametrically in difficulty. We found that activations in the anterior superior temporal gyrus (STG) were stronger during spatial discrimination than during spatial memory, while spatial memory was associated with stronger activations in the inferior parietal lobule (IPL). We also found that wide AC areas were strongly deactivated during the spatial memory tasks. The present AC activation patterns associated with spatial discrimination and spatial memory tasks were highly similar to those obtained in our previous study comparing AC activations during pitch discrimination and pitch memory (Rinne et al., 2009). Together, our previous and present results indicate that discrimination and memory tasks activate anterior and posterior AC areas differently, and that this anterior-posterior division is present both when these tasks are performed on spatially invariant (pitch discrimination vs. memory) and spatially varying (spatial discrimination vs. memory) sounds. These results also further strengthen the view that activations of human AC cannot be explained by stimulus-level parameters alone (e.g., spatial vs. nonspatial stimuli); rather, the activations observed with fMRI are strongly dependent on the characteristics of the behavioral task. Thus, our results suggest that in order to understand the functional structure of AC, a more systematic investigation of task-related factors affecting AC activations is needed.

