Similar documents
Found 20 similar documents (search time: 31 ms)
1.
Osnes B, Hugdahl K, Specht K. NeuroImage, 2011, 54(3): 2437-2445
Several reports of premotor cortex involvement in speech perception have been put forward. Still, the functional role of premotor cortex is under debate. To investigate this role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. The same transformation from white noise into a musical instrument sound served as a control condition. The fMRI data were modelled with Dynamic Causal Modeling (DCM), in which the effective connectivity between Heschl's gyrus, planum temporale, superior temporal sulcus, and premotor cortex was tested. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted, and was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model that contained significant interconnections between Heschl's gyrus, planum temporale, and superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and superior temporal sulcus, and a connection from planum temporale to premotor cortex, were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, it is concluded that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.
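The Bayesian model selection step in the abstract above can be illustrated with a minimal sketch: given log model evidences for competing DCM architectures, posterior model probabilities under a uniform prior follow from a numerically stabilized softmax. The three-model setup and all evidence values below are hypothetical illustrations, not numbers from the study.

```python
import numpy as np

# Hypothetical log model evidences for three candidate DCM architectures
# (illustrative values only; the real study compared models of connectivity
# between Heschl's gyrus, planum temporale, STS, and premotor cortex).
log_evidence = np.array([-1204.0, -1198.0, -1210.0])

def model_posteriors(log_ev):
    """Posterior model probabilities under a uniform prior,
    computed with the log-sum-exp trick for numerical stability."""
    shifted = log_ev - log_ev.max()
    w = np.exp(shifted)
    return w / w.sum()

p = model_posteriors(log_evidence)
print(p)  # the model with the highest evidence dominates the posterior
```

A log-evidence difference of about 6 between the best and second-best model corresponds to a Bayes factor of roughly 400, which is why a single model is "favored" so decisively.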

2.
The separation of concurrent sounds is paramount to human communication in everyday settings. The primary auditory cortex and the planum temporale are thought to be essential for both the separation of physical sound sources into perceptual objects and the comparison of those representations with previously learned acoustic events. To examine the role of these areas in speech separation, we measured brain activity using event-related functional magnetic resonance imaging (fMRI) while participants were asked to identify two phonetically different vowels presented simultaneously. The processing of brief speech sounds (200 ms in duration) activated the thalamus and superior temporal gyrus bilaterally, the left anterior temporal lobe, and the left inferior temporal gyrus. A comparison of fMRI signals between trials in which participants successfully identified both vowels, as opposed to trials in which only one of the two vowels was recognized, revealed enhanced activity in the left thalamus, Heschl's gyrus, superior temporal gyrus, and planum temporale. Because participants successfully identified at least one of the two vowels on each trial, the difference in fMRI signal indexes the extra computational work needed to successfully segregate and identify the other concurrently presented vowel. The results support the view that auditory cortex in or near Heschl's gyrus, as well as in the planum temporale, is involved in sound segregation, and they reveal a link between left thalamo-cortical activation and the successful separation and identification of simultaneous speech sounds.

3.
A vivid perception of a moving human can be evoked by viewing a few point-lights on the joints of an invisible walker. This special visual ability for biological motion perception has been found to involve the posterior superior temporal sulcus (STSp). In everyday life, however, human motion can also be recognized from acoustic cues. In the present study, we investigated the neural substrate of human motion perception when listening to footsteps, by means of a sparse-sampling functional MRI design. We first identified an auditory attentional network that shares frontal and parietal areas previously found in visual attention paradigms. Second, an activation was observed in the auditory cortex (Heschl's gyrus and planum temporale), likely related to low-level sound processing. Most strikingly, a further activation was observed in an STSp region overlapping the temporal biological motion area previously reported with visual input. We therefore propose that part of the STSp region may be a supramodal area involved in human motion recognition, irrespective of the sensory input modality.

4.
Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cortex at the level of individual voxels. In the regions of Heschl's gyrus and sulcus that correspond with the primary auditory cortex, activation was systematically strongest in response to contralateral stimulation. In contrast, in the surrounding secondary active regions, including the planum polare and the planum temporale, large-scale preferences with respect to stimulus lateralization were absent. Regarding optimal stimulus frequency, low- to high-frequency spatial gradients were discernible along Heschl's gyrus and sulcus in an anterolateral-to-posteromedial direction, especially in the right hemisphere, consistent with the presence of a tonotopic organization in these primary areas. However, in the surrounding activated secondary areas, frequency preferences were erratic. Lateralization preferences did not depend on stimulus frequency, and frequency preferences did not depend on stimulus lateralization. While the primary auditory cortex is topographically organized with respect to physical stimulus properties (i.e., lateralization and frequency), such organizational principles are no longer obvious in secondary and higher areas. This suggests a neural re-encoding of sound signals in the transition from primary to secondary areas, possibly in relation to auditory scene analysis and the processing of auditory objects.
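The voxelwise mapping described above can be sketched as an argmax over frequency conditions: each voxel is assigned the frequency that evokes its strongest response, and the spatial arrangement of these "best frequencies" reveals (or fails to reveal) a tonotopic gradient. Everything below is simulated under stated assumptions (five frequency conditions, a smooth gradient plus Gaussian noise); no numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response amplitudes: 5 frequency conditions x 100 voxels.
# We simulate a smooth low-to-high best-frequency gradient plus noise,
# mimicking a tonotopically organized strip of primary auditory cortex.
n_freqs, n_voxels = 5, 100
true_best = np.linspace(0, n_freqs - 1, n_voxels).round().astype(int)
responses = rng.normal(0.0, 0.2, (n_freqs, n_voxels))
responses[true_best, np.arange(n_voxels)] += 1.0  # boost the preferred frequency

# Best-frequency map: the condition evoking the strongest response per voxel.
best_freq = responses.argmax(axis=0)

# With this signal-to-noise ratio the recovered map tracks the true gradient.
print((best_freq == true_best).mean())
```

In secondary areas, per the abstract, the same procedure would yield an erratic rather than smoothly graded `best_freq` map.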

5.
Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

6.
Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

7.
Scanning silence: mental imagery of complex sounds
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that contained neither language nor music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds, presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To avoid the interference of acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique, with five functional clusters acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that contain neither language nor music rely on overlapping neural correlates in the secondary, but not primary, auditory cortex.
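Sparse temporal sampling works because the hemodynamic response is sluggish: acquisition clusters placed at the end of a silent stimulus period still catch the stimulus-evoked response near its peak, while the scanner noise itself falls outside the stimulation window. A minimal sketch with a conventional double-gamma response shape (SPM-style shape parameters of 6 and 16 are a common convention assumed here, not the study's exact model):

```python
import math
import numpy as np

def hrf(t):
    """Double-gamma hemodynamic response with conventional (SPM-style)
    shape parameters -- an illustrative assumption, not the study's model."""
    peak = t**5 * np.exp(-t) / math.gamma(6)        # positive lobe
    undershoot = t**15 * np.exp(-t) / math.gamma(16)  # later undershoot
    return peak - undershoot / 6.0

t = np.arange(0, 30, 0.1)  # seconds after stimulus onset
h = hrf(t)
peak_time = t[h.argmax()]
print(peak_time)  # the modeled response peaks ~5 s after onset
```

Because the peak lags the stimulus by several seconds, imaging "the silence" right after each movie still samples the response to what was just seen and heard.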

8.
The length of a vocal tract is reflected in the sound it produces. Vocal tract length is correlated with body size, and humans are very good at making size judgments based on the acoustic effect of vocal tract length alone. Here we investigate the underlying mechanism for processing this main auditory cue to size information in the human brain. Sensory encoding of the acoustic effect of vocal tract length (VTL) depends on a time-stabilized spectral scaling mechanism that is independent of glottal pulse rate (GPR, or voice pitch); we provide evidence that a potential neural correlate for such a mechanism exists in the medial geniculate body (MGB). The perception of the acoustic effect of speaker size is influenced by GPR, suggesting an interaction between VTL and GPR processing; such an interaction occurs only at the level of non-primary auditory cortex, in planum temporale and anterior superior temporal gyrus. Our findings support a two-stage model for the processing of size information in speech, based on an initial stage of sensory analysis as early as MGB and a neural correlate of the perception of source size in non-primary auditory cortex.

9.
Recently, magnetic resonance properties of cerebral gray matter have been spatially mapped, in vivo, over the cortical surface. In one of the first neuroscientific applications of this approach, this study explores what can be learned about auditory cortex in living humans by mapping longitudinal relaxation rate (R1), a property related to myelin content. Gray matter R1 (and thickness) showed repeatable trends, including the following: (1) Regions of high R1 were always found overlapping posteromedial Heschl's gyrus. They also sometimes occurred in planum temporale and never in other parts of the superior temporal lobe. We hypothesize that the high R1 overlapping Heschl's gyrus (which likely indicates dense gray matter myelination) reflects auditory koniocortex (i.e., primary cortex), a heavily myelinated area that shows comparable overlap with the gyrus. High R1 overlapping Heschl's gyrus was identified in every instance, suggesting that R1 may ultimately provide a marker for koniocortex in individuals. Such a marker would be significant for auditory neuroimaging, which has no standard means (anatomic or physiologic) for localizing cortical areas in individual subjects. (2) Inter-hemispheric comparisons revealed greater R1 on the left on Heschl's gyrus, planum temporale, superior temporal gyrus, and superior temporal sulcus. This asymmetry suggests greater gray matter myelination in left auditory cortex, which may be a substrate for the left hemisphere's specialized processing of speech, language, and rapid acoustic changes. These results indicate that in vivo R1 mapping can provide new insights into the structure of human cortical gray matter and its relation to function.

10.
Pulse-resonance sounds like vowels or instrumental tones contain acoustic information about the physical size of the sound source (pulse rate) and body resonators (resonance scale). Previous research has revealed correlates of these variables in humans using functional neuroimaging. Here, we report two experiments that use magnetoencephalography to study the neuromagnetic representations of pulse rate and resonance scale in human auditory cortex. In Experiment 1, auditory evoked fields were recorded from nineteen subjects presented with French horn tones, the pulse rate and resonance scale of which had been manipulated independently using a vocoder. In Experiment 2, fifteen subjects listened to French horn tones which differed in resonance scale but which lacked pulse rate cues. The resulting cortical activity was evaluated by spatio-temporal source analysis. Changes in pulse rate elicited a well-defined N1m component with cortical generators located at the border between Heschl's gyrus and planum temporale. Changes in resonance scale elicited a second, independent, N1m component located in planum temporale. Our results demonstrate that resonance scale can be distinguished in its neuromagnetic representation from cortical activity related to the sound's pulse rate. Moreover, the existence of two separate components in the N1m sensitive to register information highlights the importance of this time window for the processing of frequency information in human auditory cortex.

11.
The human auditory cortex plays a special role in speech recognition. It is therefore necessary to clarify the functional roles of individual auditory areas. We applied functional magnetic resonance imaging (fMRI) to examine cortical responses to speech sounds, which were presented under the dichotic and diotic (binaural) listening conditions. We found two different response patterns in multiple auditory areas and language-related areas. In the auditory cortex, the medial portion of the secondary auditory area (A2), as well as a part of the planum temporale (PT) and the superior temporal gyrus and sulcus (ST), showed greater responses under the dichotic condition than under the diotic condition. This dichotic selectivity may reflect acoustic differences and attention-related factors such as spatial attention and selective attention to targets. In contrast, other parts of the auditory cortex showed comparable responses to the dichotic and diotic conditions. We found similar functional differentiation in the inferior frontal (IF) cortex. These results suggest that multiple auditory and language areas may play a pivotal role in integrating the functional differentiation for speech recognition.

12.
Gestures of the face, arms, and hands are components of signed languages used by Deaf people. Signaling codes, such as the racecourse betting code known as Tic Tac, are also made up of such gestures. Tic Tac lacks the phonological structure of British Sign Language (BSL) but is similar in terms of its visual and articulatory components. Using fMRI, we compared the neural correlates of viewing a gestural language (BSL) and a manual-brachial code (Tic Tac) relative to a low-level baseline task. We compared three groups: Deaf native signers, hearing native signers, and hearing nonsigners. None of the participants had any knowledge of Tic Tac. All three groups activated an extensive frontal-posterior network in response to both types of stimuli. Superior temporal cortex, including the planum temporale, was activated bilaterally in response to both types of gesture in all groups, irrespective of hearing status. The engagement of these traditionally auditory processing regions was greater in Deaf than hearing participants. These data suggest that the planum temporale may be responsive to visual movement in both deaf and hearing people, yet when hearing is absent early in development, the visual processing role of this region is enhanced. Greater activation for BSL than Tic Tac was observed in signers, but not in nonsigners, in the left posterior superior temporal sulcus and gyrus, extending into the supramarginal gyrus. This suggests that the left posterior perisylvian cortex is of fundamental importance to language processing, regardless of the modality in which it is conveyed.

13.
Magnetoencephalography was used to investigate the relationship between the sustained magnetic field in auditory cortex and the perception of periodic sounds. The response to regular and irregular click trains was measured at three sound intensities. Two separate sources were isolated adjacent to primary auditory cortex: One, located in lateral Heschl's gyrus, was particularly sensitive to regularity and largely insensitive to sound level. The second, located just posterior to the first in planum temporale, was particularly sensitive to sound level and largely insensitive to regularity. This double dissociation to the same stimuli indicates that the two sources represent separate mechanisms; the first would appear to be involved with pitch perception and the second with loudness. The delay of the offset of the sustained field was found to increase with interclick interval up to at least 200 ms, which suggests that the sustained field offset represents a sophisticated offset-monitoring mechanism rather than simply the cessation of stimulation.

14.
Barrett DJ, Hall DA. NeuroImage, 2006, 32(2): 968-977
Primate studies suggest the auditory cortex is organized in at least two anatomically and functionally separate pathways: a ventral pathway specializing in object recognition and a dorsal pathway specializing in object localization. The current experiment assesses the validity of this model in human listeners using fMRI to investigate the neural substrates of spatial and non-spatial temporal pattern information. Targets were differentiated from non-targets on the basis of two levels of pitch information (present vs. absent, fixed vs. varying) and two levels of spatial information (compact vs. diffuse sound source, fixed vs. varying location) in a factorial design. Analyses revealed spatially separate responses to spatial and non-spatial temporal information. The main activation associated with pitch occurred predominantly in Heschl's gyrus (HG) and planum polare, while that associated with changing sound source location occurred posterior to HG, in planum temporale (PT). Activation common to both pitch and changing spatial location was located bilaterally in anterior PT. Apart from this small region of overlap, our data support the anatomical and functional segregation of 'what' and 'where' in human non-primary auditory cortex. Our results also highlight a distinction in the sensitivity of anterior and posterior fields of PT to non-spatial information and specify the type of spatial information that is coded within early areas of the spatial processing stream.

15.
The gradient switching during fast echoplanar functional magnetic resonance imaging (EPI-fMRI) produces loud noises that may interact with the functional activation of the central auditory system induced by experimental acoustic stimuli. This interaction is unpredictable and is likely to confound the interpretation of functional maps of the auditory cortex. In the present study we used an experimental design which does not require the presentation of stimuli during EPI acquisitions and allows for mapping of the auditory cortex without the interference of scanner noise. The design relies on the physiological delays between the onset, or the end, of stimulation and the corresponding hemodynamic response. Owing to these delays, and through a time-resolved acquisition protocol, it is possible to analyze the decay of the stimulus-specific signal changes after the cessation of the stimulus itself and before the onset of the EPI-acoustic-noise-related activation (decay-sampling technique). This experimental design, which might permit a more detailed insight into the auditory cortex, was applied to the study of the cortical responses to pulsed 1000 Hz sine tones. Distinct activation clusters were detected in Heschl's gyri and the planum temporale, with an increased extension compared to a conventional block-design paradigm. Furthermore, the comparison of the hemodynamic responses of the most anterior and the posterior clusters of activation highlighted differential response patterns to the sound stimulation and to the EPI noise. These differences, attributable to reciprocal saturation effects unevenly distributed over the superior temporal cortex, provide evidence for functionally distinct auditory fields.
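The decay-sampling idea can be sketched by comparing two hemodynamic time courses at the moment EPI acquisition begins: the stimulus-evoked response, already past its peak and decaying, and the scanner-noise-evoked response, only starting to rise. The double-gamma response shape and the 6-s stimulus-to-acquisition offset below are illustrative assumptions, not the study's measured parameters.

```python
import math
import numpy as np

def hrf(t):
    """Double-gamma hemodynamic response (illustrative shape parameters)."""
    t = np.clip(t, 0.0, None)
    return t**5 * np.exp(-t) / math.gamma(6) - t**15 * np.exp(-t) / math.gamma(16) / 6.0

t = np.arange(0, 12, 0.1)   # seconds since the start of EPI acquisition
stim = hrf(t + 6.0)         # stimulus response: past its peak, decaying
noise = hrf(t)              # EPI-noise response: only beginning to rise

# In the first ~2 s of acquisition the signal is dominated by the decaying
# stimulus response, before the scanner-noise response becomes substantial.
early = t < 2.0
print(stim[early].mean(), noise[early].mean())
```

Sampling within that early window is what lets the decay of the stimulus-specific signal be measured free of the EPI-noise-related activation.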

16.
The aim of the present study was to investigate the neural correlates of music processing with fMRI. Chord sequences were presented to the participants, infrequently containing unexpected musical events. These events activated the areas of Broca and Wernicke, the superior temporal sulcus, Heschl's gyrus, both planum polare and planum temporale, as well as the anterior superior insular cortices. Some of these brain structures have previously been shown to be involved in music processing, but the cortical network comprising all these structures has up to now been thought to be domain-specific for language processing. To what extent this network might also be activated by the processing of non-linguistic information has remained unknown. The present fMRI data reveal that the human brain employs this neuronal network also for the processing of musical information, suggesting that the cortical network known to support language processing is less domain-specific than previously believed.

17.
This 3-T fMRI study investigates brain regions similarly and differentially involved with listening and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, listen passively to speaking of the song lyrics, covertly sing the visually presented song lyrics, covertly speak the visually presented song lyrics, and rest. The conjunction of passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus-induced auditory processing nor of low-level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of the VI lobule of the posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: orbitofrontal cortex (listening task) and subcallosal cingulate (covert production task). The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech.
Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest relative to the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
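The flipped-image laterality test above can be sketched as a paired t test over subjects: each subject contributes the contrast value from the original images and the value at the same voxel in the left-right flipped images. All numbers below (sample size, means, SDs) are made-up illustrations simulating a clear leftward bias, not the study's data.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-subject values of the contrast of interest in a left
# temporal region, and the corresponding values read out from the
# left-right flipped images (illustrative numbers only).
n = 15
contrast = rng.normal(1.0, 0.2, n)  # original normalized images
flipped = rng.normal(0.0, 0.2, n)   # same voxels after the L-R flip

# Paired t statistic over subjects, as in a voxelwise paired t test.
d = contrast - flipped
t_stat = d.mean() / (d.std(ddof=1) / math.sqrt(n))
print(t_stat)  # compare against the critical value for df = n - 1
```

A voxel is called left-lateralized when this statistic exceeds the critical value (about 2.145 for df = 14, two-tailed alpha = 0.05).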

18.
Watkins S, Shams L, Tanaka S, Haynes JD, Rees G. NeuroImage, 2006, 31(3): 1247-1256
When a single brief visual flash is accompanied by two auditory bleeps, it is frequently perceived incorrectly as two flashes. Here, we used high field functional MRI in humans to examine the neural basis of this multisensory perceptual illusion. We show that activity in retinotopic visual cortex is increased by the presence of concurrent auditory stimulation, irrespective of any illusory perception. However, when concurrent auditory stimulation gave rise to illusory visual perception, activity in V1 was enhanced, despite auditory and visual stimulation being unchanged. These findings confirm that responses in human V1 can be altered by sound and show that they reflect subjective perception rather than the physically present visual stimulus. Moreover, as the right superior temporal sulcus and superior colliculus were also activated by illusory visual perception, together with V1, they provide a potential neural substrate for the generation of this multisensory illusion.

19.
Two major influences on how the brain processes music are maturational development and active musical training. Previous functional neuroimaging studies investigating music processing have typically focused on either categorical differences between "musicians versus nonmusicians" or "children versus adults." In the present study, we explored a cross-sectional data set (n=84) using multiple linear regression to isolate the performance-independent effects of age (5 to 33 years) and cumulative duration of musical training (0 to 21,000 practice hours) on fMRI activation similarities and differences between melodic discrimination (MD) and rhythmic discrimination (RD). Age-related effects common to MD and RD were present in three left hemisphere regions: temporofrontal junction, ventral premotor cortex, and the inferior part of the intraparietal sulcus, regions involved in active attending to auditory rhythms, sensorimotor integration, and working memory transformations of pitch and rhythmic patterns. By contrast, training-related effects common to MD and RD were localized to the posterior portion of the left superior temporal gyrus/planum temporale, an area implicated in spectrotemporal pattern matching and auditory-motor coordinate transformations. A single cluster in right superior temporal gyrus showed significantly greater activation during MD than RD. This is the first fMRI study to distinguish maturational from training effects during music processing.
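The key analytic move above is entering age and cumulative practice hours into one regression model, so each coefficient estimates one effect while holding the other constant; this matters because the two predictors are naturally correlated. A minimal sketch with simulated data (sample size matches the abstract, but all effect sizes and noise levels are invented assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cross-sectional sample: age (years) and cumulative practice
# hours are correlated (older participants tend to have practiced more),
# which is exactly why both must enter the same model to be separated.
n = 84
age = rng.uniform(5, 33, n)
practice = np.clip(800 * (age - 5) + rng.normal(0, 3000, n), 0, None)

# Simulated voxel activation with independent age and training effects.
activation = 0.05 * age + 0.0001 * practice + rng.normal(0, 0.3, n)

# Design matrix: intercept, age, practice hours.
X = np.column_stack([np.ones(n), age, practice])
beta, *_ = np.linalg.lstsq(X, activation, rcond=None)

# beta[1] estimates the age effect holding training constant, and
# beta[2] the training effect holding age constant.
print(beta[1], beta[2])
```

Regions where only `beta[1]` is reliably nonzero would be labeled maturational, and regions where only `beta[2]` is, training-related.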

20.
Recent neuroimaging studies have suggested that spatial versus nonspatial changes in acoustic stimulation are processed along separate cortical pathways. However, it has remained unclear to what extent change-related responses are modulated by selective attention. We therefore tested the effects of feature-selective attention on the cortical representation of the pattern and location of complex natural sounds using human functional magnetic resonance imaging (fMRI) adaptation. We consecutively presented the following pairs of animal vocalizations: (a) two identical animal vocalizations, (b) the same animal vocalizations at different locations, (c) different animal vocalizations at the same location, and (d) different animal vocalizations at different locations. Subjects underwent this stimulation under two different task conditions, requiring them to match either sound identity or location. We observed significant fMRI adaptation effects within the bilateral superior temporal sulcus (STS), planum temporale (PT) and right anterior insula for location changes. For pattern changes, we found adaptation effects within the bilateral superior temporal lobe, in particular along the superior temporal gyrus (STG), PT and posterior STS, the bilateral anterior insula and inferior frontal areas. While the adaptation effects within the pattern-selective temporal lobe areas were robust to task requirements, adaptation within the more posterior location-selective areas was modulated by feature-specific attention. In contrast, inferior frontal cortex and anterior insula exhibited adaptation effects mainly during the location matching task. Given that the location matching task was significantly more difficult than pattern matching, our data suggest that frontal and insular regions were modulated by task difficulty rather than feature-specific attention.
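The logic of fMRI adaptation in the paragraph above can be sketched numerically: a region that represents a feature responds more when that feature changes between the two sounds of a pair ("release from adaptation") than when the pair repeats exactly. The response values below are made-up illustrations of the four pair types, not the study's data.

```python
# Hypothetical mean responses (arbitrary units) in a temporal-lobe ROI for
# the four pair types described above; values are illustrative only.
responses = {
    "same_sound_same_location": 1.0,  # full adaptation: nothing changes
    "same_sound_new_location": 1.4,   # release from adaptation by location
    "new_sound_same_location": 1.6,   # release from adaptation by pattern
    "new_sound_new_location": 1.8,
}

def release(changed, repeated=responses["same_sound_same_location"]):
    """Release from adaptation: response recovery relative to exact repetition."""
    return changed - repeated

# Location and pattern sensitivity, each indexed as release from adaptation.
loc = release(responses["same_sound_new_location"])
pat = release(responses["new_sound_same_location"])
print(loc, pat)
```

A region showing a reliable `loc` release is read as location-selective and one showing a reliable `pat` release as pattern-selective; comparing these indices across the two tasks is what reveals the attentional modulation the abstract describes.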


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号