Similar articles
20 similar articles found (search time: 46 ms)
1.
The separation of concurrent sounds is paramount to human communication in everyday settings. The primary auditory cortex and the planum temporale are thought to be essential for both the separation of physical sound sources into perceptual objects and the comparison of those representations with previously learned acoustic events. To examine the role of these areas in speech separation, we measured brain activity using event-related functional Magnetic Resonance Imaging (fMRI) while participants were asked to identify two phonetically different vowels presented simultaneously. The processing of brief speech sounds (200 ms in duration) activated the thalamus and superior temporal gyrus bilaterally, the left anterior temporal lobe, and the left inferior temporal gyrus. A comparison of fMRI signals between trials in which participants successfully identified both vowels and trials in which only one of the two vowels was recognized revealed enhanced activity in the left thalamus, Heschl's gyrus, superior temporal gyrus, and the planum temporale. Because participants successfully identified at least one of the two vowels on each trial, the difference in fMRI signal indexes the extra computational work needed to successfully segregate and identify the other concurrently presented vowel. The results support the view that auditory cortex in or near Heschl's gyrus, as well as in the planum temporale, is involved in sound segregation, and they reveal a link between left thalamo-cortical activation and the successful separation and identification of simultaneous speech sounds.
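As a hedged illustration of the trial-wise contrast described in this abstract (not the authors' actual pipeline), the sketch below fits a simple GLM to a simulated single-voxel time series and contrasts trials on which both vowels were identified against trials on which only one was. The TR, trial onsets, and HRF are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a simple GLM contrast of "both vowels identified"
# vs. "one vowel identified" trials on a simulated voxel time series.
# TR, onsets, and the HRF below are hypothetical, not taken from the study.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
tr, n_scans = 2.0, 200
frame_times = np.arange(n_scans) * tr

# Hypothetical trial onsets (s), labelled by behavioural outcome
onsets_both = np.array([10, 50, 90, 130, 170, 210, 250, 290], dtype=float)
onsets_one = np.array([30, 70, 110, 150, 190, 230, 270, 310], dtype=float)

def hrf(t):
    """A crude canonical-like HRF: a gamma density peaking around 6 s."""
    return gamma.pdf(t, a=6.0)

def regressor(onsets):
    """Stick functions at trial onsets convolved with the HRF, sampled at the TR."""
    dt = 0.1
    t_hi = np.arange(0.0, frame_times[-1] + 32.0, dt)
    sticks = np.zeros_like(t_hi)
    sticks[np.searchsorted(t_hi, onsets)] = 1.0
    conv = np.convolve(sticks, hrf(np.arange(0.0, 32.0, dt)))[: len(t_hi)]
    return np.interp(frame_times, t_hi, conv)

X = np.column_stack([regressor(onsets_both), regressor(onsets_one), np.ones(n_scans)])
y = X @ np.array([1.5, 1.0, 100.0]) + rng.normal(0.0, 1.0, n_scans)  # fake voxel signal

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = beta[0] - beta[1]  # extra work to segregate and identify the second vowel
print(f"contrast estimate (both - one): {contrast:.2f}")
```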

2.
Scanning silence: mental imagery of complex sounds (cited 1 time: 0 self-citations, 1 by others)
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and the planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates in the secondary but not the primary auditory cortex.
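To make the sparse temporal sampling scheme concrete, here is a minimal timing sketch: functional volumes are acquired in a short burst after each silent presentation so that scanner noise does not overlap the material being imaged. The stimulus duration, TR, and volumes-per-cluster values are invented, not the study's parameters.

```python
# Minimal sketch of sparse temporal sampling scheduling (hypothetical timing values).
stimulus_duration = 12.0   # s of silent movie presentation -- assumed value
n_volumes_per_cluster = 5  # "cluster" of acquisitions after each trial
tr = 2.0                   # s per volume during the acquisition burst

def cluster_times(trial_start):
    """Acquisition onsets for the cluster that follows one stimulus presentation."""
    acq_start = trial_start + stimulus_duration
    return [acq_start + i * tr for i in range(n_volumes_per_cluster)]

trial_length = stimulus_duration + n_volumes_per_cluster * tr
for trial in range(3):
    t0 = trial * trial_length
    print(f"trial {trial}: stimulus {t0:.0f}-{t0 + stimulus_duration:.0f} s, "
          f"acquisitions at {cluster_times(t0)}")
```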

3.
Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects, who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, of location, or of both pattern and location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and the anterior planum temporale. For location changes, significant increases of fMRI responses were observed in the bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than of pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

4.
Recently, magnetic resonance properties of cerebral gray matter have been spatially mapped in vivo over the cortical surface. In one of the first neuroscientific applications of this approach, this study explores what can be learned about auditory cortex in living humans by mapping the longitudinal relaxation rate (R1), a property related to myelin content. Gray matter R1 (and thickness) showed repeatable trends, including the following: (1) Regions of high R1 were always found overlapping posteromedial Heschl's gyrus. They also sometimes occurred in the planum temporale and never in other parts of the superior temporal lobe. We hypothesize that the high R1 overlapping Heschl's gyrus (which likely indicates dense gray matter myelination) reflects auditory koniocortex (i.e., primary cortex), a heavily myelinated area that shows comparable overlap with the gyrus. High R1 overlapping Heschl's gyrus was identified in every instance, suggesting that R1 may ultimately provide a marker for koniocortex in individuals. Such a marker would be significant for auditory neuroimaging, which has no standard means (anatomic or physiologic) for localizing cortical areas in individual subjects. (2) Inter-hemispheric comparisons revealed greater R1 on the left in Heschl's gyrus, the planum temporale, the superior temporal gyrus and the superior temporal sulcus. This asymmetry suggests greater gray matter myelination in left auditory cortex, which may be a substrate for the left hemisphere's specialized processing of speech, language, and rapid acoustic changes. These results indicate that in vivo R1 mapping can provide new insights into the structure of human cortical gray matter and its relation to function.
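As a worked reminder of the quantity being mapped, R1 is simply the reciprocal of the longitudinal relaxation time, R1 = 1/T1, and higher cortical R1 is read as a proxy for denser gray matter myelination. The sketch below computes R1 from hypothetical left/right T1 values and runs a paired test for the kind of leftward asymmetry reported here; all numbers are invented.

```python
# Minimal sketch with simulated values: R1 = 1/T1, compared across hemispheres.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_subjects = 12
# Hypothetical mean T1 (s) sampled on left/right Heschl's gyrus per subject
t1_left = rng.normal(1.30, 0.05, n_subjects)   # assumed slightly shorter T1 on the left
t1_right = rng.normal(1.35, 0.05, n_subjects)

r1_left, r1_right = 1.0 / t1_left, 1.0 / t1_right   # R1 = 1/T1 (s^-1)
t, p = ttest_rel(r1_left, r1_right)
print(f"mean R1 left {r1_left.mean():.3f} vs right {r1_right.mean():.3f} s^-1, "
      f"paired t = {t:.2f}, p = {p:.3f}")
```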

5.
A vivid perception of a moving human can be evoked when viewing a few point-lights on the joints of an invisible walker. This special visual ability for biological motion perception has been found to involve the posterior superior temporal sulcus (STSp). However, in everyday life, human motion can also be recognized using acoustic cues. In the present study, we investigated the neural substrate of human motion perception when listening to footsteps, by means of a sparse sampling functional MRI design. We first showed an auditory attentional network that shares frontal and parietal areas previously found in visual attention paradigms. Second, an activation was observed in the auditory cortex (Heschl's gyrus and planum temporale), likely to be related to low-level sound processing. Most strikingly, another activation was evidenced in an STSp region overlapping the temporal biological motion area previously reported using visual input. We thus propose that a part of the STSp region might be a supramodal area involved in human motion recognition, irrespective of the sensory modality of the input.

6.
The human auditory cortex plays a special role in speech recognition. It is therefore necessary to clarify the functional roles of individual auditory areas. We applied functional magnetic resonance imaging (fMRI) to examine cortical responses to speech sounds, which were presented under dichotic and diotic (binaural) listening conditions. We found two different response patterns in multiple auditory areas and language-related areas. In the auditory cortex, the medial portion of the secondary auditory area (A2), as well as a part of the planum temporale (PT) and the superior temporal gyrus and sulcus (ST), showed greater responses under the dichotic condition than under the diotic condition. This dichotic selectivity may reflect acoustic differences and attention-related factors such as spatial attention and selective attention to targets. In contrast, other parts of the auditory cortex showed comparable responses to the dichotic and diotic conditions. We found similar functional differentiation in the inferior frontal (IF) cortex. These results suggest that multiple auditory and language areas may play a pivotal role in integrating this functional differentiation for speech recognition.

7.
Analysis of the spectral envelope of sounds by the human brain (cited 6 times: 0 self-citations, 6 by others)
The spectral envelope is the shape of the power spectrum of a sound. It is an important cue for the identification of sound sources such as voices or instruments, and of particular classes of sounds such as vowels. In everyday life, sounds with similar spectral envelopes are perceived as similar: we recognize a voice or a vowel regardless of pitch and intensity variations, and we recognize the same vowel regardless of whether it is voiced (a spectral envelope applied to a harmonic series) or whispered (a spectral envelope applied to noise). In this functional magnetic resonance imaging (fMRI) experiment, we investigated the basis for the analysis of spectral envelope by the human brain. Changing either the pitch or the spectral envelope of harmonic sounds produced similar activation within a bilateral network including Heschl's gyrus and adjacent cortical areas in the superior temporal lobe. Changing the spectral envelope of continuously alternating noise and harmonic sounds produced additional right-lateralized activation in the superior temporal sulcus (STS). Our findings show that spectral shape is abstracted in the superior temporal sulcus, suggesting that this region may have a generic role in the spectral analysis of sounds. These distinct levels of spectral analysis may represent early computational stages in a putative anteriorly directed stream for the categorization of sound.
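A small signal-processing sketch can make the "voiced vs. whispered, same envelope" point concrete: the same smooth envelope is imposed on a harmonic series and on noise, and cepstral smoothing (liftering) recovers nearly the same envelope from both. All parameters (sampling rate, formant-like bumps, lifter length) are arbitrary illustrations, not stimuli from the experiment.

```python
# Illustrative DSP sketch: one envelope, two excitations (harmonic vs. noise),
# recovered in both cases by cepstral smoothing. Parameter values are arbitrary.
import numpy as np

fs, dur = 16000, 0.5
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

# A smooth spectral envelope with two hypothetical "formant"-like bumps
freqs = np.fft.rfftfreq(len(t), 1 / fs)
envelope = np.exp(-((freqs - 700) / 300) ** 2) + 0.6 * np.exp(-((freqs - 1800) / 400) ** 2)

def shape_source(source):
    """Impose the envelope on a source signal in the frequency domain."""
    spec = np.fft.rfft(source)
    return np.fft.irfft(spec * envelope, n=len(source))

harmonics = np.sum([np.sin(2 * np.pi * 150 * k * t) for k in range(1, 40)], axis=0)
voiced = shape_source(harmonics + 0.05 * rng.normal(size=len(t)))  # low-level noise floor
whispered = shape_source(rng.normal(size=len(t)))

def cepstral_envelope(x, n_lifter=40):
    """Smooth the log-magnitude spectrum by keeping only low-quefrency cepstral bins."""
    log_mag = np.log(np.abs(np.fft.rfft(x)) + 1e-9)
    ceps = np.fft.irfft(log_mag)
    ceps[n_lifter:-n_lifter] = 0.0          # lifter: discard fine (harmonic/noise) structure
    return np.fft.rfft(ceps).real

e_voiced, e_whisper = cepstral_envelope(voiced), cepstral_envelope(whispered)
corr = np.corrcoef(e_voiced, e_whisper)[0, 1]
print(f"correlation of recovered envelopes (voiced vs whispered): {corr:.2f}")
```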

8.
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension when the speaker was seen and heard while talking (audiovisual), heard but not seen (audio-alone), or seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the number of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on the motor commands that could have generated those movements.

9.
This 3-T fMRI study investigates brain regions similarly and differentially involved in listening to and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved in these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, to listen passively to speaking of the song lyrics, to covertly sing the visually presented song lyrics, to covertly speak the visually presented song lyrics, and to rest. The conjunction of passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus-induced auditory processing or of low-level articulatory motor control. Brain regions involved in both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of the VI lobule of the posterior cerebellum, the anterior superior temporal gyrus, and the planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved in consonance: the orbitofrontal cortex (listening task) and the subcallosal cingulate (covert production task). The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech. Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest and the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
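The laterality test described here lends itself to a brief sketch: each subject's contrast image (normalized to a left-right symmetric space) is compared voxel-wise against its left-right flipped copy with a paired t test, so positive t values mark voxels more active than their mirror location in the opposite hemisphere. The data and dimensions below are simulated assumptions.

```python
# Minimal sketch (simulated data) of a flip-based laterality test.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects, shape = 14, (8, 8, 8)           # hypothetical subject count and grid
maps = rng.normal(size=(n_subjects, *shape))
maps[:, :4, :, :] += 0.5                    # fake leftward bias (x below the midline)

flipped = maps[:, ::-1, :, :]               # left-right flip along the x axis
t_map, p_map = ttest_rel(maps, flipped, axis=0)

print("voxels with leftward asymmetry (p < .05, t > 0):",
      int(np.sum((p_map < 0.05) & (t_map > 0))))
```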

10.
Schizophrenia is associated with language-related dysfunction. A previous study [Schizophr. Res. 59 (2003c) 159] has shown that this abnormality is present at the level of automatic discrimination of change in speech sounds, as revealed by magnetoencephalographic recording of the auditory mismatch field in response to across-category change in vowels. Here, we investigated the neuroanatomical substrate of this physiological abnormality. Thirteen patients with schizophrenia and 19 matched control subjects were examined using magnetoencephalography (MEG) and high-resolution magnetic resonance imaging (MRI) to evaluate both mismatch field strengths in response to change between the vowels /a/ and /o/, and gray matter volumes of Heschl's gyrus (HG) and the planum temporale (PT). The magnetic global field power of the mismatch response to change in phonemes showed a bilateral reduction in patients with schizophrenia. The gray matter volume of the left planum temporale, but not of the right planum temporale or bilateral Heschl's gyrus, was significantly smaller in patients with schizophrenia compared with that in control subjects. Furthermore, the phonetic mismatch strength in the left hemisphere was significantly correlated with left planum temporale gray matter volume in patients with schizophrenia only. These results suggest that structural abnormalities of the planum temporale may underlie the functional abnormalities of fundamental language-related processing in schizophrenia.
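The structure-function link reported here boils down to a within-group correlation between left-hemisphere mismatch-field strength and left planum temporale gray matter volume. The sketch below shows that computation on simulated values; all numbers are invented.

```python
# Illustrative only (simulated numbers): correlating left PT volume with
# left-hemisphere mismatch-field strength within one group.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_patients = 13
pt_volume_left = rng.normal(2.0, 0.3, n_patients)                      # hypothetical cm^3
mismatch_left = 5.0 * pt_volume_left + rng.normal(0, 1.0, n_patients)  # hypothetical field power

r, p = pearsonr(pt_volume_left, mismatch_left)
print(f"r = {r:.2f}, p = {p:.3f}  (positive r: smaller volume, weaker mismatch response)")
```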

11.
The high degree of intersubject structural variability in the human brain is an obstacle in combining data across subjects in functional neuroimaging experiments. A common method for aligning individual data is normalization into standard 3D stereotaxic space. Since the inherent geometry of the cortex is that of a 2D sheet, higher precision can potentially be achieved if the intersubject alignment is based on landmarks in this 2D space. To examine the potential advantage of surface-based alignment for localization of auditory cortex activation, and to obtain high-resolution maps of areas activated by speech sounds, fMRI data were analyzed from the left hemisphere of subjects tested with phoneme and tone discrimination tasks. We compared Talairach stereotaxic normalization with two surface-based methods: Landmark Based Warping, in which landmarks in the auditory cortex were chosen manually, and Automated Spherical Warping, in which hemispheres were aligned automatically based on spherical representations of individual and average brains. Examination of group maps generated with these alignment methods revealed superiority of the surface-based alignment in providing precise localization of functional foci and in avoiding mis-registration due to intersubject anatomical variability. Human left hemisphere cortical areas engaged in complex auditory perception appear to lie on the superior temporal gyrus, the dorsal bank of the superior temporal sulcus, and the lateral third of Heschl's gyrus.

12.
Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cortex at the level of individual voxels. In the regions of Heschl's gyrus and sulcus that correspond with the primary auditory cortex, activation was systematically strongest in response to contralateral stimulation. By contrast, in the surrounding secondary active regions, including the planum polare and the planum temporale, large-scale preferences with respect to stimulus lateralization were absent. Regarding optimal stimulus frequency, low- to high-frequency spatial gradients were discernible along Heschl's gyrus and sulcus in the anterolateral to posteromedial direction, especially in the right hemisphere, consistent with the presence of a tonotopic organization in these primary areas. However, in the surrounding activated secondary areas frequency preferences were erratic. Lateralization preferences did not depend on stimulus frequency, and frequency preferences did not depend on stimulus lateralization. While the primary auditory cortex is topographically organized with respect to physical stimulus properties (i.e., lateralization and frequency), such organizational principles are no longer obvious in secondary and higher areas. This suggests a neural re-encoding of sound signals in the transition from primary to secondary areas, possibly in relation to auditory scene analysis and the processing of auditory objects.
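A schematic of the voxel-wise mapping described here: for each voxel, the preferred frequency is the stimulus frequency giving the largest response (a tonotopy map), and a simple lateralization index compares responses to left- versus right-ear stimulation. The responses below are random placeholders; the frequency set, voxel count, and index definition are assumptions for illustration.

```python
# Schematic sketch (simulated responses) of voxel-wise frequency and side preference maps.
import numpy as np

rng = np.random.default_rng(3)
freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
n_voxels = 1000

# Hypothetical response amplitudes: voxels x frequencies x ear (0 = left, 1 = right)
resp = rng.random((n_voxels, len(freqs_hz), 2))

best_freq = freqs_hz[np.argmax(resp.mean(axis=2), axis=1)]        # tonotopy map
left_resp, right_resp = resp[..., 0].mean(axis=1), resp[..., 1].mean(axis=1)
lat_index = (right_resp - left_resp) / (right_resp + left_resp)   # >0: prefers right-ear input

print("median best frequency:", np.median(best_freq), "Hz")
# If these were left-hemisphere voxels, lat_index > 0 would indicate a contralateral preference.
print("fraction of voxels with a rightward preference:", float(np.mean(lat_index > 0)))
```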

13.
We used voxel-based morphometry (VBM) to examine human brain asymmetry and the effects of sex and handedness on brain structure in 465 normal adults. We observed significant asymmetry of cerebral grey and white matter in the occipital, frontal, and temporal lobes (petalia), including Heschl's gyrus, planum temporale (PT) and the hippocampal formation. Males demonstrated increased leftward asymmetry within Heschl's gyrus and PT compared to females. There was no significant interaction between asymmetry and handedness and no main effect of handedness. There was a significant main effect of sex on brain morphology, even after accounting for the larger global volumes of grey and white matter in males. Females had increased grey matter volume adjacent to the depths of both central sulci and the left superior temporal sulcus, in right Heschl's gyrus and PT, in right inferior frontal and frontomarginal gyri and in the cingulate gyrus. Females had significantly increased grey matter concentration extensively and relatively symmetrically in the cortical mantle, parahippocampal gyri, and in the banks of the cingulate and calcarine sulci. Males had increased grey matter volume bilaterally in the mesial temporal lobes, entorhinal and perirhinal cortex, and in the anterior lobes of the cerebellum, but no regions of increased grey matter concentration.
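As a minimal sketch of the voxel-wise model behind such a VBM analysis, the code below regresses grey matter at each voxel on sex and handedness while covarying for global grey matter volume, so regional sex effects are not driven by overall brain size. The data, sizes, and threshold are simulated assumptions, not the study's pipeline.

```python
# Minimal VBM-style sketch (simulated data): per-voxel GLM with sex, handedness,
# and global grey matter volume as a covariate.
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_voxels = 465, 500
sex = rng.integers(0, 2, n_subjects)            # 0 = female, 1 = male
handedness = rng.integers(0, 2, n_subjects)     # 0 = right-handed, 1 = left-handed
global_gm = rng.normal(650, 60, n_subjects)     # hypothetical total GM volume (ml)

# Simulated voxel-wise grey matter values with a small sex effect at some voxels
gm = rng.normal(0.5, 0.05, (n_subjects, n_voxels)) + 0.0004 * global_gm[:, None]
gm[:, :50] += 0.02 * sex[:, None]

X = np.column_stack([np.ones(n_subjects), sex, handedness, global_gm])
beta, *_ = np.linalg.lstsq(X, gm, rcond=None)   # one regression per voxel (columns of gm)
sex_effect = beta[1]                            # per-voxel sex coefficient

print("voxels with |sex effect| above an arbitrary 0.015 threshold:",
      int(np.sum(np.abs(sex_effect) > 0.015)))
```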

14.
The aim of the present study was to investigate the neural correlates of music processing with fMRI. Chord sequences that infrequently contained unexpected musical events were presented to the participants. These events activated Broca's and Wernicke's areas, the superior temporal sulcus, Heschl's gyrus, both the planum polare and the planum temporale, as well as the anterior superior insular cortices. Some of these brain structures have previously been shown to be involved in music processing, but the cortical network comprising all these structures has up to now been thought to be domain-specific for language processing. To what extent this network might also be activated by the processing of non-linguistic information has remained unknown. The present fMRI data reveal that the human brain employs this neuronal network also for the processing of musical information, suggesting that the cortical network known to support language processing is less domain-specific than previously believed.

15.
Specht K, Reul J. NeuroImage 2003, 20(4): 1944-1954
With this study, we explored the blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different classes. To address this issue, we performed an attentive-listening event-related fMRI study in which subjects were required to concentrate during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the observed differences between the stimulus types were interpreted as an automatic effect not modulated by attention. We used three types of stimuli: tones, sounds of animals and instruments, and words. In all cases we found bilateral activations of the primary and secondary auditory cortex. Their strength and lateralization depended on the type of stimulus. The tone trials led to the weakest and smallest activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, and the perception of words led to the highest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within the left superior temporal sulcus, we were able to distinguish different subsystems, showing activation extending from posterior to anterior for speech and speech-like information. Whereas posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only to the perception of speech. In summary, a functional segregation of the temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization of verbal stimuli to the left and of sounds to the right was already detectable when short stimuli were used.

16.
Recent neuroimaging studies have suggested that spatial versus nonspatial changes in acoustic stimulation are processed along separate cortical pathways. However, it has remained unclear to what extent change-related responses are modulated by selective attention. We therefore aimed to test the effects of feature-selective attention on the cortical representation of the pattern and location of complex natural sounds using human functional magnetic resonance imaging (fMRI) adaptation. We consecutively presented the following pairs of animal vocalizations: (a) two identical animal vocalizations, (b) the same animal vocalizations at different locations, (c) different animal vocalizations at the same location, and (d) different animal vocalizations at different locations. Subjects underwent this stimulation under two different task conditions, requiring them to match either sound identity or location. We observed significant fMRI adaptation effects within the bilateral superior temporal sulcus (STS), planum temporale (PT) and right anterior insula for location changes. For pattern changes, we found adaptation effects within the bilateral superior temporal lobe, in particular along the superior temporal gyrus (STG), PT and posterior STS, the bilateral anterior insula and inferior frontal areas. While the adaptation effects within the pattern-selective temporal lobe areas were robust to task requirements, adaptation within the more posterior location-selective areas was modulated by feature-specific attention. In contrast, the inferior frontal cortex and anterior insula exhibited adaptation effects mainly during the location-matching task. Given that the location-matching task was significantly more difficult than the pattern matching, our data suggest that frontal and insular regions were modulated by task difficulty rather than by feature-specific attention.
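The adaptation logic can be summarized in a few lines: a region coding a feature responds less to a pair in which that feature repeats than to a pair in which it changes, so "change minus repeat" quantifies release from adaptation, computed separately per task. The beta values below are invented for illustration.

```python
# Sketch (simulated betas) of fMRI-adaptation release scores for one ROI.
import numpy as np

# Hypothetical mean responses to the four pair types, per task:
#                   [same/same, location change, pattern change, both change]
beta_identity_task = np.array([1.0, 1.1, 1.6, 1.7])
beta_location_task = np.array([1.0, 1.5, 1.6, 1.9])

def release(beta):
    """Release from adaptation: response to a change minus response to repetition."""
    return {"location": beta[1] - beta[0],
            "pattern": beta[2] - beta[0]}

print("identity-matching task:", release(beta_identity_task))
print("location-matching task:", release(beta_location_task))
```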

17.
Two major influences on how the brain processes music are maturational development and active musical training. Previous functional neuroimaging studies investigating music processing have typically focused on either categorical differences between "musicians versus nonmusicians" or "children versus adults." In the present study, we explored a cross-sectional data set (n=84) using multiple linear regression to isolate the performance-independent effects of age (5 to 33 years) and cumulative duration of musical training (0 to 21,000 practice hours) on fMRI activation similarities and differences between melodic discrimination (MD) and rhythmic discrimination (RD). Age-related effects common to MD and RD were present in three left hemisphere regions: the temporofrontal junction, ventral premotor cortex, and the inferior part of the intraparietal sulcus, regions involved in active attending to auditory rhythms, sensorimotor integration, and working memory transformations of pitch and rhythmic patterns. By contrast, training-related effects common to MD and RD were localized to the posterior portion of the left superior temporal gyrus/planum temporale, an area implicated in spectrotemporal pattern matching and auditory-motor coordinate transformations. A single cluster in the right superior temporal gyrus showed significantly greater activation during MD than RD. This is the first fMRI study to distinguish maturational from training effects during music processing.
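The core of this regression approach is simple to sketch: per region, activation is modelled as a linear function of age and cumulative practice hours, so each coefficient is estimated while controlling for the other. The data below are simulated and the effect sizes arbitrary.

```python
# Minimal sketch (simulated data): disentangling age and training effects by
# entering both predictors into one multiple regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 84
age_years = rng.uniform(5, 33, n)
practice_hours = rng.uniform(0, 21000, n)

# Simulated activation with an age effect but no independent training effect
activation = 0.03 * age_years + rng.normal(0, 0.2, n)

X = sm.add_constant(np.column_stack([age_years, practice_hours]))
fit = sm.OLS(activation, X).fit()
print(fit.params)    # [intercept, age coefficient, training coefficient]
print(fit.pvalues)
```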

18.
Magnetoencephalography was used to investigate the relationship between the sustained magnetic field in auditory cortex and the perception of periodic sounds. The response to regular and irregular click trains was measured at three sound intensities. Two separate sources were isolated adjacent to primary auditory cortex: one, located in lateral Heschl's gyrus, was particularly sensitive to regularity and largely insensitive to sound level. The second, located just posterior to the first in the planum temporale, was particularly sensitive to sound level and largely insensitive to regularity. This double dissociation to the same stimuli indicates that the two sources represent separate mechanisms; the first would appear to be involved in pitch perception and the second in loudness. The delay of the offset of the sustained field was found to increase with interclick interval up to at least 200 ms, which suggests that the sustained field offset represents a sophisticated offset-monitoring mechanism rather than simply the cessation of stimulation.

19.
Edges are important cues defining coherent auditory objects. As a model of auditory edges, sound onset and offset are particularly suitable for studying their neural underpinnings because they contrast a specific physical input against no physical input. The change from silence to sound, that is, onset, has been studied extensively and elicits transient neural responses bilaterally in auditory cortex. However, neural activity associated with sound onset is related not only to edge detection but also to novel afferent input. Edges at the change from sound to silence, that is, offset, are not confounded by novel physical input and thus allow us to examine neural activity associated with sound edges per se. In the first experiment, we used silent-acquisition functional magnetic resonance imaging and found that the offset of pulsed sound activates the planum temporale, superior temporal sulcus and planum polare of the right hemisphere. In the planum temporale and the superior temporal sulcus, offset response amplitudes were related to the pulse repetition rate of the preceding stimulation. In the second experiment, we found that these offset-responsive regions were also activated by single sound pulses, by the onset of sound pulse sequences and by single sound pulse omissions within sound pulse sequences. However, they were not active during sustained sound presentation. Thus, our data show that circumscribed areas in right temporal cortex are specifically involved in identifying auditory edges. This operation is crucial for translating acoustic signal time series into coherent auditory objects.

20.
Barrett DJ, Hall DA. NeuroImage 2006, 32(2): 968-977
Primate studies suggest that the auditory cortex is organized into at least two anatomically and functionally separate pathways: a ventral pathway specializing in object recognition and a dorsal pathway specializing in object localization. The current experiment assesses the validity of this model in human listeners, using fMRI to investigate the neural substrates of spatial and non-spatial temporal pattern information. Targets were differentiated from non-targets on the basis of two levels of pitch information (present vs. absent, fixed vs. varying) and two levels of spatial information (compact vs. diffuse sound source, fixed vs. varying location) in a factorial design. Analyses revealed spatially separate responses to spatial and non-spatial temporal information. The main activation associated with pitch occurred predominantly in Heschl's gyrus (HG) and the planum polare, while that associated with changing sound source location occurred posterior to HG, in the planum temporale (PT). Activation common to both pitch and changing spatial location was located bilaterally in anterior PT. Apart from this small region of overlap, our data support the anatomical and functional segregation of 'what' and 'where' in human non-primary auditory cortex. Our results also highlight a distinction in the sensitivity of anterior and posterior fields of PT to non-spatial information and specify the type of spatial information that is coded within early areas of the spatial processing stream.
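For readers unfamiliar with factorial fMRI designs, the 2 x 2 logic reduces to contrast weight vectors over the four cell means: one vector per main effect plus one for the interaction. The cell values below are invented placeholders for one voxel's beta estimates.

```python
# Sketch of 2 x 2 factorial contrasts (pitch information x spatial information).
import numpy as np

# Cell order: [pitch-/space-, pitch-/space+, pitch+/space-, pitch+/space+]
cell_means = np.array([1.0, 1.2, 1.8, 2.1])        # hypothetical betas for one voxel

c_pitch = np.array([-1, -1, 1, 1]) / 2.0           # main effect of pitch
c_space = np.array([-1, 1, -1, 1]) / 2.0           # main effect of spatial variation
c_interaction = np.array([1, -1, -1, 1])           # pitch x space interaction

for name, c in [("pitch", c_pitch), ("space", c_space), ("interaction", c_interaction)]:
    print(f"{name:12s} contrast value: {c @ cell_means:+.2f}")
```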
