Similar articles
20 similar articles found (search time: 31 ms)
1.
Osnes B  Hugdahl K  Specht K 《NeuroImage》2011,54(3):2437-2445
Several reports of premotor cortex involvement in speech perception have been put forward. Still, the functional role of premotor cortex is under debate. To investigate this role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. The same transformation from white noise into a musical instrument sound served as the control condition. The fMRI data were modelled with Dynamic Causal Modeling (DCM), in which the effective connectivity between Heschl's gyrus, planum temporale, superior temporal sulcus and premotor cortex was tested. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted, and was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model containing significant interconnections between Heschl's gyrus, planum temporale, and superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and superior temporal sulcus, and a connection from planum temporale to premotor cortex, were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, we conclude that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.
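The seven-step continuum from noise to speech can be pictured as a parametric morph. A minimal sketch, assuming a simple linear amplitude crossfade (the study's actual transformation is more sophisticated, and the signals here are toy stand-ins):

```python
import numpy as np

def morph_steps(noise, target, n_steps=7):
    """Linearly crossfade white noise into a target sound over n_steps.

    A simplified stand-in for a parametric noise-to-speech continuum:
    step 0 is pure noise, the final step is the pure target sound.
    """
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - w) * noise + w * target for w in weights]

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)
# Toy "speech" stand-in: a 150 Hz tone at an 8 kHz sampling rate
vowel = np.sin(2 * np.pi * 150 * np.arange(1000) / 8000)
continuum = morph_steps(noise, vowel)
```

Each intermediate step mixes the two sources, mirroring how the stimuli become gradually identifiable as speech.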

2.
Scanning silence: mental imagery of complex sounds   (Times cited: 1; self-citations: 0; citations by others: 1)
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds, which were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates in the secondary but not the primary auditory cortex.
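The idea behind sparse temporal sampling is that volumes are acquired only after each stimulus ends, so the noisy scanner gradients never overlap with listening. A minimal timing sketch (the durations are illustrative, not those of the study):

```python
def sparse_schedule(n_trials, stim_dur, acq_dur):
    """Return (stimulus_onset, acquisition_onset) pairs, in seconds.

    Each trial presents a silent-scanner stimulus of stim_dur seconds,
    then acquires one functional volume lasting acq_dur seconds.
    The acquisition starts exactly when the stimulus ends, so the
    hemodynamic response to the stimulus is sampled in silence.
    """
    trial_dur = stim_dur + acq_dur
    schedule = []
    for i in range(n_trials):
        stim_on = i * trial_dur
        acq_on = stim_on + stim_dur  # scanning begins as the movie ends
        schedule.append((stim_on, acq_on))
    return schedule

times = sparse_schedule(n_trials=3, stim_dur=8.0, acq_dur=2.0)
```

Real protocols additionally tune the stimulus-to-acquisition delay to catch the BOLD peak, which lags the stimulus by several seconds.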

3.
Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

4.
Recently, magnetic resonance properties of cerebral gray matter have been spatially mapped, in vivo, over the cortical surface. In one of the first neuroscientific applications of this approach, this study explores what can be learned about auditory cortex in living humans by mapping longitudinal relaxation rate (R1), a property related to myelin content. Gray matter R1 (and thickness) showed repeatable trends, including the following: (1) Regions of high R1 were always found overlapping posteromedial Heschl's gyrus. They also sometimes occurred in planum temporale and never in other parts of the superior temporal lobe. We hypothesize that the high R1 overlapping Heschl's gyrus (which likely indicates dense gray matter myelination) reflects auditory koniocortex (i.e., primary cortex), a heavily myelinated area that shows comparable overlap with the gyrus. High R1 overlapping Heschl's gyrus was identified in every instance, suggesting that R1 may ultimately provide a marker for koniocortex in individuals. Such a marker would be significant for auditory neuroimaging, which has no standard means (anatomic or physiologic) of localizing cortical areas in individual subjects. (2) Inter-hemispheric comparisons revealed greater R1 on the left in Heschl's gyrus, planum temporale, superior temporal gyrus and superior temporal sulcus. This asymmetry suggests greater gray matter myelination in left auditory cortex, which may be a substrate for the left hemisphere's specialized processing of speech, language, and rapid acoustic changes. These results indicate that in vivo R1 mapping can provide new insights into the structure of human cortical gray matter and its relation to function.

5.
Schizophrenia is associated with language-related dysfunction. A previous study [Schizophr. Res. 59 (2003c) 159] has shown that this abnormality is present at the level of automatic discrimination of change in speech sounds, as revealed by magnetoencephalographic recording of auditory mismatch field in response to across-category change in vowels. Here, we investigated the neuroanatomical substrate for this physiological abnormality. Thirteen patients with schizophrenia and 19 matched control subjects were examined using magnetoencephalography (MEG) and high-resolution magnetic resonance imaging (MRI) to evaluate both mismatch field strengths in response to change between vowel /a/ and /o/, and gray matter volumes of Heschl's gyrus (HG) and planum temporale (PT). The magnetic global field power of mismatch response to change in phonemes showed a bilateral reduction in patients with schizophrenia. The gray matter volume of left planum temporale, but not right planum temporale or bilateral Heschl's gyrus, was significantly smaller in patients with schizophrenia compared with that in control subjects. Furthermore, the phonetic mismatch strength in the left hemisphere was significantly correlated with left planum temporale gray matter volume in patients with schizophrenia only. These results suggest that structural abnormalities of the planum temporale may underlie the functional abnormalities of fundamental language-related processing in schizophrenia.

6.
Pulse-resonance sounds like vowels or instrumental tones contain acoustic information about the physical size of the sound source (pulse rate) and body resonators (resonance scale). Previous research has revealed correlates of these variables in humans using functional neuroimaging. Here, we report two experiments that use magnetoencephalography to study the neuromagnetic representations of pulse rate and resonance scale in human auditory cortex. In Experiment 1, auditory evoked fields were recorded from nineteen subjects presented with French horn tones, the pulse rate and resonance scale of which had been manipulated independently using a vocoder. In Experiment 2, fifteen subjects listened to French horn tones which differed in resonance scale but which lacked pulse rate cues. The resulting cortical activity was evaluated by spatio-temporal source analysis. Changes in pulse rate elicited a well-defined N1m component with cortical generators located at the border between Heschl's gyrus and planum temporale. Changes in resonance scale elicited a second, independent, N1m component located in planum temporale. Our results demonstrate that resonance scale can be distinguished in its neuromagnetic representation from cortical activity related to the sound's pulse rate. Moreover, the existence of two separate components in the N1m sensitive to register information highlights the importance of this time window for the processing of frequency information in human auditory cortex.

7.
The human auditory cortex plays a special role in speech recognition. It is therefore necessary to clarify the functional roles of individual auditory areas. We applied functional magnetic resonance imaging (fMRI) to examine cortical responses to speech sounds, which were presented under the dichotic and diotic (binaural) listening conditions. We found two different response patterns in multiple auditory areas and language-related areas. In the auditory cortex, the medial portion of the secondary auditory area (A2), as well as a part of the planum temporale (PT) and the superior temporal gyrus and sulcus (ST), showed greater responses under the dichotic condition than under the diotic condition. This dichotic selectivity may reflect acoustic differences and attention-related factors such as spatial attention and selective attention to targets. In contrast, other parts of the auditory cortex showed comparable responses to the dichotic and diotic conditions. We found similar functional differentiation in the inferior frontal (IF) cortex. These results suggest that multiple auditory and language areas may play a pivotal role in integrating the functional differentiation for speech recognition.

8.
Barrett DJ  Hall DA 《NeuroImage》2006,32(2):968-977
Primate studies suggest the auditory cortex is organized in at least two anatomically and functionally separate pathways: a ventral pathway specializing in object recognition and a dorsal pathway specializing in object localization. The current experiment assesses the validity of this model in human listeners using fMRI to investigate the neural substrates of spatial and non-spatial temporal pattern information. Targets were differentiated from non-targets on the basis of two levels of pitch information (present vs. absent, fixed vs. varying) and two levels of spatial information (compact vs. diffuse sound source, fixed vs. varying location) in a factorial design. Analyses revealed spatially separate responses to spatial and non-spatial temporal information. The main activation associated with pitch occurred predominantly in Heschl's gyrus (HG) and planum polare, while that associated with changing sound source location occurred posterior to HG, in planum temporale (PT). Activation common to both pitch and changing spatial location was located bilaterally in anterior PT. Apart from this small region of overlap, our data support the anatomical and functional segregation of 'what' and 'where' in human non-primary auditory cortex. Our results also highlight a distinction in the sensitivity of anterior and posterior fields of PT to non-spatial information and specify the type of spatial information that is coded within early areas of the spatial processing stream.

9.
Magnetoencephalography was used to investigate the relationship between the sustained magnetic field in auditory cortex and the perception of periodic sounds. The response to regular and irregular click trains was measured at three sound intensities. Two separate sources were isolated adjacent to primary auditory cortex: One, located in lateral Heschl's gyrus, was particularly sensitive to regularity and largely insensitive to sound level. The second, located just posterior to the first in planum temporale, was particularly sensitive to sound level and largely insensitive to regularity. This double dissociation to the same stimuli indicates that the two sources represent separate mechanisms; the first would appear to be involved with pitch perception and the second with loudness. The delay of the offset of the sustained field was found to increase with interclick interval up to 200 ms at least, which suggests that the sustained field offset represents a sophisticated offset-monitoring mechanism rather than simply the cessation of stimulation.

10.
This 3-T fMRI study investigates brain regions similarly and differentially involved with listening and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, passively listen to speaking of the song lyrics, covertly sing the song lyrics visually presented, covertly speak the song lyrics visually presented, and to rest. The conjunction of passive listening and covert production tasks used in this study allow for general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus induced auditory processing nor to low level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, lateral aspect of the VI lobule of posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity in brain regions involved with consonance, orbitofrontal cortex (listening task), subcallosal cingulate (covert production task) were also present for singing over speech. The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech. 
Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest relative to the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.

11.
The high degree of intersubject structural variability in the human brain is an obstacle in combining data across subjects in functional neuroimaging experiments. A common method for aligning individual data is normalization into standard 3D stereotaxic space. Since the inherent geometry of the cortex is that of a 2D sheet, higher precision can potentially be achieved if the intersubject alignment is based on landmarks in this 2D space. To examine the potential advantage of surface-based alignment for localization of auditory cortex activation, and to obtain high-resolution maps of areas activated by speech sounds, fMRI data were analyzed from the left hemisphere of subjects tested with phoneme and tone discrimination tasks. We compared Talairach stereotaxic normalization with two surface-based methods: Landmark Based Warping, in which landmarks in the auditory cortex were chosen manually, and Automated Spherical Warping, in which hemispheres were aligned automatically based on spherical representations of individual and average brains. Examination of group maps generated with these alignment methods revealed superiority of the surface-based alignment in providing precise localization of functional foci and in avoiding mis-registration due to intersubject anatomical variability. Human left hemisphere cortical areas engaged in complex auditory perception appear to lie on the superior temporal gyrus, the dorsal bank of the superior temporal sulcus, and the lateral third of Heschl's gyrus.

12.
In a recent electroencephalography (EEG) study (Takeichi et al., 2007a), we developed a new technique for assessing speech comprehension using speech degraded by m-sequence modulation and found a correlation peak with a 400-ms delay. This peak depended on the comprehensibility of the modulated speech sounds. Here we report the results of a functional magnetic resonance imaging (fMRI) experiment comparable to our previous EEG experiment. We examined brain areas related to verbal comprehension of the modulated speech sound to examine which neural system processes this modulated speech. A non-integer, alternating-block factorial design was used with 23 Japanese-speaking participants, with time reversal and m-sequence modulation as factors. A main effect of time reversal was found in the left temporal cortex along the superior temporal sulcus (BA21 and BA39), left precentral gyrus (BA6) and right inferior temporal gyrus (BA21). A main effect of modulation was found in the left postcentral gyrus (BA43) and the right medial frontal gyri (BA6) as an increase by modulation and in the left temporal cortex (BA21, 39), parahippocampal gyrus (BA34), posterior cingulate (BA23), caudate and thalamus and right superior temporal gyrus (BA38) as a decrease by modulation. An interaction effect associated specifically with non-modulated speech was found in the left frontal gyrus (BA47), left occipital cortex in the cuneus (BA18), left precuneus (BA7, 31), right precuneus (BA31) and right thalamus (forward > reverse). The other interaction effect associated specifically with modulation of speech sound was found in the inferior frontal gyrus in the opercular area (BA44) (forward > reverse). Estimated scalp projection of the component correlation function (Cao et al., 2002) for the corresponding EEG data (Takeichi et al., 2007a) showed leftward dominance. 
Hence, activities in the superior temporal sulcus (BA21 and BA39), which are commonly observed for speech processing, as well as in the left precentral gyrus (BA6) and left inferior frontal gyrus in the opercular area (BA44), are suggested to contribute to the comprehension-related EEG signal.
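An m-sequence (maximal-length sequence) of the kind used for such stimulus modulation is generated by a linear feedback shift register. A minimal sketch, assuming the primitive polynomial x^5 + x^2 + 1 (the actual sequence parameters used in the study are not given here):

```python
def m_sequence(lags, length, seed):
    """Generate a maximal-length (m-)sequence from a linear recurrence.

    lags encodes the recurrence s[t] = XOR of s[t - lag] for each lag;
    lags (3, 5) corresponds to the primitive polynomial x^5 + x^2 + 1,
    giving a period of 2**5 - 1 = 31 bits.
    seed supplies the first max(lags) bits and must not be all zero.
    """
    s = list(seed)
    while len(s) < length:
        bit = 0
        for lag in lags:
            bit ^= s[-lag]  # feedback taps of the shift register
        s.append(bit)
    return s[:length]

# One full 31-bit period: balanced with 16 ones and 15 zeros
seq = m_sequence(lags=(3, 5), length=31, seed=[1, 0, 0, 0, 0])
```

The near-flat autocorrelation of such sequences is what makes them useful for estimating system responses, as in the cross-correlation analysis the abstract describes.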

13.
Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cortex at the level of individual voxels. In the regions of Heschl's gyrus and sulcus that correspond with the primary auditory cortex, activation was systematically strongest in response to contralateral stimulation. In contrast, in the surrounding secondary active regions including the planum polare and the planum temporale, large-scale preferences with respect to stimulus lateralization were absent. Regarding optimal stimulus frequency, low- to high-frequency spatial gradients were discernable along the Heschl's gyrus and sulcus in anterolateral to posteromedial direction, especially in the right hemisphere, consistent with the presence of a tonotopic organization in these primary areas. However, in the surrounding activated secondary areas frequency preferences were erratic. Lateralization preferences did not depend on stimulus frequency, and frequency preferences did not depend on stimulus lateralization. While the primary auditory cortex is topographically organized with respect to physical stimulus properties (i.e., lateralization and frequency), such organizational principles are no longer obvious in secondary and higher areas. This suggests a neural re-encoding of sound signals in the transition from primary to secondary areas, possibly in relation to auditory scene analysis and the processing of auditory objects.

14.
The aim of the present study was the investigation of neural correlates of music processing with fMRI. Chord sequences were presented to the participants, infrequently containing unexpected musical events. These events activated the areas of Broca and Wernicke, the superior temporal sulcus, Heschl's gyrus, both planum polare and planum temporale, as well as the anterior superior insular cortices. Some of these brain structures have previously been shown to be involved in music processing, but the cortical network comprising all these structures has up to now been thought to be domain-specific for language processing. To what extent this network might also be activated by the processing of non-linguistic information has remained unknown. The present fMRI-data reveal that the human brain employs this neuronal network also for the processing of musical information, suggesting that the cortical network known to support language processing is less domain-specific than previously believed.

15.
A vivid perception of a moving human can be evoked when viewing a few point-lights on the joints of an invisible walker. This special visual ability for biological motion perception has been found to involve the posterior superior temporal sulcus (STSp). However, in everyday life, human motion can also be recognized using acoustic cues. In the present study, we investigated the neural substrate of human motion perception when listening to footsteps, by means of a sparse sampling functional MRI design. We first showed an auditory attentional network that shares frontal and parietal areas previously found in visual attention paradigms. Second, an activation was observed in the auditory cortex (Heschl's gyrus and planum temporale), likely to be related to low-level sound processing. Most strikingly, another activation was evidenced in a STSp region overlapping the temporal biological motion area previously reported using visual input. We thus propose that a part of the STSp region might be a supramodal area involved in human motion recognition, irrespective of the sensory modality input.

16.
Gestures of the face, arms, and hands are components of signed languages used by Deaf people. Signaling codes, such as the racecourse betting code known as Tic Tac, are also made up of such gestures. Tic Tac lacks the phonological structure of British Sign Language (BSL) but is similar in terms of its visual and articulatory components. Using fMRI, we compared the neural correlates of viewing a gestural language (BSL) and a manual-brachial code (Tic Tac) relative to a low-level baseline task. We compared three groups: Deaf native signers, hearing native signers, and hearing nonsigners. None of the participants had any knowledge of Tic Tac. All three groups activated an extensive frontal-posterior network in response to both types of stimuli. Superior temporal cortex, including the planum temporale, was activated bilaterally in response to both types of gesture in all groups, irrespective of hearing status. The engagement of these traditionally auditory processing regions was greater in Deaf than hearing participants. These data suggest that the planum temporale may be responsive to visual movement in both deaf and hearing people, yet when hearing is absent early in development, the visual processing role of this region is enhanced. Greater activation for BSL than Tic Tac was observed in signers, but not in nonsigners, in the left posterior superior temporal sulcus and gyrus, extending into the supramarginal gyrus. This suggests that the left posterior perisylvian cortex is of fundamental importance to language processing, regardless of the modality in which it is conveyed.

17.
We recorded auditory-evoked potentials (AEPs) during simultaneous, continuous fMRI and identified trial-to-trial correlations between the amplitude of electrophysiological responses, characterised in the time domain and the time–frequency domain, and the hemodynamic BOLD response. Cortical AEPs were recorded from 30 EEG channels within the 3 T MRI scanner with and without the collection of simultaneous BOLD fMRI. Focussing on the Cz (vertex) EEG response, single-trial AEP responses were measured from time-domain waveforms. Furthermore, a novel method was used to characterise the single-trial AEP response within three regions of interest in the time–frequency domain (TF-ROIs). The latency and amplitude values of the N1 and P2 AEP peaks during fMRI scanning were not significantly different from the Control session (p > 0.16). BOLD fMRI responses to the auditory stimulation were observed in bilateral secondary auditory cortices as well as in the right precentral and postcentral gyri, anterior cingulate cortex (ACC) and supplementary motor cortex (SMC). Significant single-trial correlations were observed with a voxel-wise analysis, between (1) the magnitude of the EEG TF-ROI1 (70–800 ms post-stimulus, 1–5 Hz) and the BOLD response in right primary (Heschl's gyrus) and secondary (STG, planum temporale) auditory cortex; and (2) the amplitude of the P2 peak and the BOLD response in left pre- and postcentral gyri, the ACC and SMC. No correlation was observed with single-trial N1 amplitude on a voxel-wise basis. An fMRI-ROI analysis of functionally-identified auditory responsive regions identified further single-trial correlations of BOLD and EEG responses. The TF amplitudes in TF-ROI1 and TF-ROI2 (20–400 ms post-stimulus, 5–15 Hz) were significantly correlated with the BOLD response in all bilateral auditory areas investigated (planum temporale, superior temporal gyrus and Heschl's gyrus). 
However, the N1 and P2 peak amplitudes, occurring at similar latencies, did not show a correlation in these regions. N1 and P2 peak amplitude did correlate with the BOLD response in bilateral precentral and postcentral gyri and the SMC. Additionally, P2 and TF-ROI1 both correlated with the ACC. TF-ROI3 (400–900 ms post-stimulus, 4–10 Hz) correlations were only observed in the ACC and SMC. Across the group, the subject-mean N1 peak amplitude correlated with the BOLD response amplitude in the primary and secondary auditory cortices bilaterally, as well as the right precentral gyrus and SMC. We confirm that auditory-evoked EEG responses can be recorded during continuous and simultaneous fMRI. We have presented further evidence of an empirical single-trial coupling between the EEG and BOLD fMRI responses, and show that a time–frequency decomposition of EEG signals can yield additional BOLD fMRI correlates, predominantly in auditory cortices, beyond those found using the evoked response amplitude alone.
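A time-frequency ROI measure of the kind described (e.g. mean amplitude in 70–800 ms, 1–5 Hz) can be sketched with a plain FFT over the time window. This is a crude stand-in for the wavelet decompositions typically used for single-trial EEG, and the signal below is synthetic:

```python
import numpy as np

def tf_roi_amplitude(signal, fs, t_win, f_band):
    """Mean spectral amplitude inside a time-frequency region of interest.

    t_win = (start, end) in seconds; f_band = (low, high) in Hz.
    The signal segment in t_win is Fourier-transformed and the
    magnitude spectrum is averaged over bins within f_band.
    """
    i0, i1 = int(t_win[0] * fs), int(t_win[1] * fs)
    segment = signal[i0:i1]
    spectrum = np.abs(np.fft.rfft(segment)) / len(segment)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    mask = (freqs >= f_band[0]) & (freqs <= f_band[1])
    return spectrum[mask].mean()

fs = 250.0
t = np.arange(0, 1.0, 1 / fs)
trial = np.sin(2 * np.pi * 3 * t)  # 3 Hz component, inside a 1-5 Hz band
amp_in = tf_roi_amplitude(trial, fs, (0.07, 0.8), (1, 5))
amp_out = tf_roi_amplitude(trial, fs, (0.07, 0.8), (20, 40))
```

Per-trial values computed this way are what would be correlated, trial by trial, against the voxel-wise BOLD amplitudes.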

18.
Edges are important cues defining coherent auditory objects. As a model of auditory edges, sound on- and offset are particularly suitable to study their neural underpinnings because they contrast a specific physical input against no physical input. Change from silence to sound, that is onset, has extensively been studied and elicits transient neural responses bilaterally in auditory cortex. However, neural activity associated with sound onset is not only related to edge detection but also to novel afferent inputs. Edges at the change from sound to silence, that is offset, are not confounded by novel physical input and thus allow to examine neural activity associated with sound edges per se. In the first experiment, we used silent acquisition functional magnetic resonance imaging and found that the offset of pulsed sound activates planum temporale, superior temporal sulcus and planum polare of the right hemisphere. In the planum temporale and the superior temporal sulcus, offset response amplitudes were related to the pulse repetition rate of the preceding stimulation. In the second experiment, we found that these offset-responsive regions were also activated by single sound pulses, onset of sound pulse sequences and single sound pulse omissions within sound pulse sequences. However, they were not active during sustained sound presentation. Thus, our data show that circumscribed areas in right temporal cortex are specifically involved in identifying auditory edges. This operation is crucial for translating acoustic signal time series into coherent auditory objects.  相似文献   

19.
We used voxel-based morphometry (VBM) to examine human brain asymmetry and the effects of sex and handedness on brain structure in 465 normal adults. We observed significant asymmetry of cerebral grey and white matter in the occipital, frontal, and temporal lobes (petalia), including Heschl's gyrus, planum temporale (PT) and the hippocampal formation. Males demonstrated increased leftward asymmetry within Heschl's gyrus and PT compared to females. There was no significant interaction between asymmetry and handedness and no main effect of handedness. There was a significant main effect of sex on brain morphology, even after accounting for the larger global volumes of grey and white matter in males. Females had increased grey matter volume adjacent to the depths of both central sulci and the left superior temporal sulcus, in right Heschl's gyrus and PT, in right inferior frontal and frontomarginal gyri and in the cingulate gyrus. Females had significantly increased grey matter concentration extensively and relatively symmetrically in the cortical mantle, parahippocampal gyri, and in the banks of the cingulate and calcarine sulci. Males had increased grey matter volume bilaterally in the mesial temporal lobes, entorhinal and perirhinal cortex, and in the anterior lobes of the cerebellum, but no regions of increased grey matter concentration.
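Statements like "increased leftward asymmetry" are commonly quantified with a laterality index. A minimal sketch of that measure; the volumes below are hypothetical, and the VBM study itself tests asymmetry voxel-wise rather than through a single summary index:

```python
def laterality_index(left, right):
    """Standard laterality index: (L - R) / (L + R).

    Positive values indicate leftward asymmetry, negative values
    rightward asymmetry; the index is bounded in [-1, 1].
    """
    return (left - right) / (left + right)

# Hypothetical grey matter volumes in arbitrary units
li = laterality_index(left=1.8, right=1.5)
```

Comparing such indices between groups (e.g. males vs. females) is one way a sex-by-asymmetry effect can be summarized.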

20.
Analysis of the spectral envelope of sounds by the human brain   (Times cited: 6; self-citations: 0; citations by others: 6)
Spectral envelope is the shape of the power spectrum of sound. It is an important cue for the identification of sound sources such as voices or instruments, and particular classes of sounds such as vowels. In everyday life, sounds with similar spectral envelopes are perceived as similar: we recognize a voice or a vowel regardless of pitch and intensity variations, and we recognize the same vowel regardless of whether it is voiced (a spectral envelope applied to a harmonic series) or whispered (a spectral envelope applied to noise). In this functional magnetic resonance imaging (fMRI) experiment, we investigated the basis for analysis of spectral envelope by the human brain. Changing either the pitch or the spectral envelope of harmonic sounds produced similar activation within a bilateral network including Heschl's gyrus and adjacent cortical areas in the superior temporal lobe. Changing the spectral envelope of continuously alternating noise and harmonic sounds produced additional right-lateralized activation in superior temporal sulcus (STS). Our findings show that spectral shape is abstracted in superior temporal sulcus, suggesting that this region may have a generic role in the spectral analysis of sounds. These distinct levels of spectral analysis may represent early computational stages in a putative anteriorly directed stream for the categorization of sound.
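The voiced/whispered distinction above can be sketched in the frequency domain: the same envelope imposed on a harmonic carrier or on flat noise yields spectra that peak at the same envelope frequencies. A toy illustration (bin indices and envelope shape are hypothetical, not from the study):

```python
import numpy as np

def apply_envelope(carrier_spectrum, envelope):
    """Impose a spectral envelope on a carrier's magnitude spectrum."""
    return carrier_spectrum * envelope

n_bins = 512
freqs = np.arange(n_bins)
# One formant-like Gaussian peak centred on bin 100
envelope = np.exp(-((freqs - 100) ** 2) / (2 * 30.0 ** 2))

harmonic = np.zeros(n_bins)
harmonic[::20] = 1.0          # "voiced": energy only at harmonic multiples
noise = np.ones(n_bins)       # "whispered": flat noise spectrum

voiced = apply_envelope(harmonic, envelope)
whispered = apply_envelope(noise, envelope)

# Both versions peak where the envelope peaks, despite different carriers
peak_voiced = int(np.argmax(voiced))
peak_whispered = int(np.argmax(whispered))
```

This carrier-invariance of the envelope peak is the property that lets listeners recognize the same vowel whether it is voiced or whispered.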
