Similar Articles
20 similar articles found.
1.
We aimed at testing the cortical representation of complex natural sounds within auditory cortex using human functional magnetic resonance imaging (fMRI). To this end, we employed 2 different paradigms in the same subjects: a block-design experiment served to localize areas involved in the processing of animal vocalizations, whereas an event-related fMRI adaptation experiment served to characterize the representation of animal vocalizations in the auditory cortex. During the first experiment, we presented subjects with recognizable and degraded animal vocalizations. We observed significantly stronger fMRI responses for animal vocalizations compared with the degraded stimuli along the bilateral superior temporal gyrus (STG). In the second experiment, we employed an event-related fMRI adaptation paradigm in which pairs of auditory stimuli were presented in 4 different conditions: 1) 2 identical animal vocalizations, 2) 2 different animal vocalizations, 3) an animal vocalization and its degraded control, and 4) an animal vocalization and a degraded control of a different sound. We observed significant fMRI adaptation effects within the left STG. Our data thus suggest that complex sounds such as animal vocalizations are represented in putatively nonprimary auditory cortex in the left STG. Their representation is probably based on their spectrotemporal dynamics rather than simple spectral features.

2.
Long-latency auditory-evoked magnetic field and potential show strong attenuation of N1m/N1 responses when an identical stimulus is presented repeatedly, due to adaptation of auditory cortical neurons. This adaptation is weak in the subsequently occurring P2m/P2 responses, being weaker for piano chords than for single piano notes. The adaptation of P2m is more suppressed in musicians with long-term musical training than in nonmusicians, whereas the amplitude of P2 is enhanced preferentially in musicians as the spectral complexity of musical tones increases. To address the key issues of whether such high responsiveness of P2m/P2 responses to complex sounds is intrinsic and common to nonmusical sounds, we conducted a magnetoencephalographic study on participants who had no experience of musical training, using consecutive trains of piano and vowel sounds. The dipole moment of the P2m sources located in the auditory cortex indicated significantly suppressed adaptation in the right hemisphere both to piano and vowel sounds. Thus, the persistent responsiveness of the P2m activity may be inherent, not induced by intensive training, and common to spectrally complex sounds. The right hemisphere dominance of the responsiveness to musical and speech sounds suggests that analysis of acoustic features of object sounds is a significant function of P2m activity.

3.
Increasing evidence suggests separate auditory pattern and space processing streams. The present paper describes two magnetoencephalography studies examining gamma-band activity to changes in auditory patterns using consonant-vowel syllables (experiment 1), animal vocalizations and artificial noises (experiment 2). Two samples of each sound type were presented to passively listening subjects in separate oddball paradigms with 80% standards and 20% deviants differing in their spectral composition. Evoked magnetic mismatch fields peaking approximately 190 ms poststimulus showed a trend for a left-hemisphere advantage for syllables, but no hemispheric differences for the other sounds. Frequency analysis and statistical probability mapping of the differences between deviants and standards revealed increased gamma-band activity above 60 Hz over left anterior temporal/ventrolateral prefrontal cortex for all three types of stimuli. This activity peaked simultaneously with the mismatch responses for animal sounds (180 ms) but was delayed for noises (260 ms) and syllables (320 ms). Our results support the hypothesized role of anterior temporal/ventral prefrontal regions in the processing of auditory pattern change. They extend earlier findings of gamma-band activity over posterior parieto-temporal cortex during auditory spatial processing that supported the putative auditory dorsal stream. Furthermore, the earlier gamma-band responses to animal vocalizations may suggest faster processing of fear-relevant information.
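The deviant-versus-standard comparison described above boils down to estimating band-limited (>60 Hz) spectral power per condition and contrasting it. A minimal sketch of that kind of analysis, using Welch's method on single-channel epochs, is below; the sampling rate, band edges, and synthetic data are illustrative assumptions, not the study's actual parameters or pipeline.

```python
import numpy as np
from scipy.signal import welch

def gamma_band_power(trials, fs, band=(60.0, 120.0)):
    """Mean spectral power in `band` (Hz), averaged across trials.

    trials : (n_trials, n_samples) array of single-channel epochs.
    fs     : sampling rate in Hz.
    """
    freqs, psd = welch(trials, fs=fs, nperseg=min(256, trials.shape[1]), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean()

# Toy comparison: deviants carry an extra 80 Hz oscillation, mimicking
# a gamma-band response; standards are noise only.
rng = np.random.default_rng(0)
fs = 600.0
t = np.arange(0, 0.5, 1 / fs)
standards = rng.normal(size=(40, t.size))
deviants = standards + 0.5 * np.sin(2 * np.pi * 80 * t)
```

In a real MEG analysis this contrast would be computed per sensor (or source) and submitted to statistical probability mapping across the sensor array, as in the study.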

4.
Neural substrates of phonemic perception
The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phonemic categories. In contrast, areas in the dorsal superior temporal gyrus bilaterally, closer to primary auditory cortex, are activated to the same extent by the phonemic and nonphonemic sounds. Thus, the left middle/anterior STS appears to play a role in phonemic perception. It may represent an intermediate stage of processing in a functional pathway linking areas in the bilateral dorsal superior temporal gyrus, presumably involved in the analysis of physical features of speech and other complex non-speech sounds, to areas in the left anterior STS and middle temporal gyrus that are engaged in higher-level linguistic processes.

5.
Neurophysiological measures indicate cortical sensitivity to speech sounds by 150 ms after stimulus onset. In this time window dyslexic subjects start to show abnormal cortical processing. We investigated whether phonetic analysis is reflected in the robust auditory cortical activation at approximately 100 ms (N100m), and whether dyslexic subjects show abnormal N100m responses to speech or nonspeech sounds. We used magnetoencephalography to record auditory responses of 10 normally reading and 10 dyslexic adults. The speech stimuli were synthetic Finnish speech sounds (/a/, /u/, /pa/, /ka/). The nonspeech stimuli were complex nonspeech sounds and simple sine wave tones, composed of the F1+F2+F3 and F2 formant frequencies of the speech sounds, respectively. All sounds evoked a prominent N100m response in the bilateral auditory cortices. The N100m activation was stronger to speech than nonspeech sounds in the left but not in the right auditory cortex, in both subject groups. The leftward shift of hemispheric balance for speech sounds is likely to reflect analysis at the phonetic level. In dyslexic subjects the overall interhemispheric amplitude balance and timing were altered for all sound types alike. Dyslexic individuals thus seem to have an unusual cortical organization of general auditory processing in the time window of speech-sensitive analysis.

6.
There is increasing interest in integrating electrophysiological and hemodynamic measures to characterize spatial and temporal aspects of cortical processing. However, an informative combination of responses that have markedly different sensitivities to the underlying neural activity is not straightforward, especially in complex cognitive tasks. Here, we used parametric stimulus manipulation in magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) recordings on the same subjects, to study effects of noise on processing of spoken words and environmental sounds. The added noise influenced MEG response strengths in the bilateral supratemporal auditory cortex, at different times for the different stimulus types. Specifically for spoken words, the effect of noise on the electrophysiological response was remarkably nonlinear. Therefore, we used the single-subject MEG responses to construct a parametrization for fMRI data analysis and obtained notably higher sensitivity than with conventional stimulus-based parametrization. fMRI results showed that partly different temporal areas were involved in noise-sensitive processing of words and environmental sounds. These results indicate that cortical processing of sounds in background noise is stimulus specific in both timing and location and provide a new functionally meaningful platform for combining information obtained with electrophysiological and hemodynamic measures of brain function.
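The core idea of an MEG-derived parametrization is to weight each stimulus event in the fMRI design matrix by the subject's own electrophysiological response strength, rather than by a stimulus property. The sketch below illustrates this with a crude gamma-shaped HRF and a least-squares GLM fit; the HRF form, onsets, and weights are all assumptions for demonstration, not the authors' actual pipeline.

```python
import math
import numpy as np

def hrf(t):
    """Crude gamma-shaped hemodynamic response function (illustrative only)."""
    return t ** 6 * np.exp(-t) / math.factorial(6)

def parametric_regressor(onsets, weights, n_scans, tr=2.0):
    """Stick functions at stimulus onsets, scaled by per-trial response
    strengths (here, hypothetical MEG amplitudes), convolved with the HRF
    and resampled to scan times."""
    dt = 0.1
    t = np.arange(0, n_scans * tr, dt)
    neural = np.zeros_like(t)
    for onset, w in zip(onsets, weights):
        neural[int(round(onset / dt))] += w
    bold = np.convolve(neural, hrf(np.arange(0, 30, dt)))[: t.size]
    return bold[:: int(tr / dt)]

# Simulated session: MEG-weighted regressor plus a constant term in the GLM.
rng = np.random.default_rng(1)
n_scans = 120
onsets = np.arange(4, 230, 12.0)                 # stimulus times (s), assumed
meg_weights = rng.uniform(0.5, 1.5, onsets.size)  # single-trial MEG strengths
reg = parametric_regressor(onsets, meg_weights, n_scans)
X = np.column_stack([reg, np.ones(n_scans)])
y = 2.0 * reg + rng.normal(scale=0.05, size=n_scans)   # synthetic BOLD signal
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

A voxel whose time series tracks the MEG-weighted regressor yields a large first beta; with a conventional stimulus-based regressor the nonlinear noise effect described in the abstract would be mismodeled, reducing sensitivity.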

7.
Speech processing in auditory cortex and beyond is a remarkable yet poorly understood faculty of the listening brain. Here we show that stop consonants, as the most transient constituents of speech, are sufficient to involve speech perception circuits in the human superior temporal cortex. Left anterolateral superior temporal cortex showed a stronger response in blood oxygenation level-dependent functional magnetic resonance imaging (fMRI) to intelligible consonantal bursts compared with incomprehensible control sounds matched for spectrotemporal complexity. Simultaneously, the left posterior superior temporal plane (including planum temporale [PT]) exhibited a noncategorical responsivity to complex stimulus acoustics across all trials, showing no preference for intelligible speech sounds. Multistage hierarchical processing of speech sounds is thus revealed with fMRI, providing evidence for a role of the PT in the fundamental stages of the acoustic analysis of complex sounds, including speech.

8.
Recent research indicates that non-tonal novel events, deviating from an ongoing auditory environment, elicit a positive event-related potential (ERP), the novel P3. Although a variety of studies have examined the neural network engaged in novelty detection, there is no complete picture of the underlying brain mechanisms. This experiment investigated these neural mechanisms by combining ERP and functional magnetic resonance imaging (fMRI). Hemodynamic and electrophysiological responses were measured in the same subjects using the same experimental design. The ERP analysis revealed a novel P3, while the fMRI responses showed bilateral foci in the middle part of the superior temporal gyrus. When subjects attended to the novel stimuli, only identifiable novel sounds evoked an N4-like negativity. Subjects showing a strong N4-effect had additional fMRI activation in right prefrontal cortex (rPFC) as compared to subjects with a weak N4-effect. This pattern of results suggests that novelty processing not only includes the registration of deviancy but may also lead to fast access and retrieval of related semantic concepts. The fMRI activation pattern suggests that the superior temporal gyrus is involved in novelty detection, whereas accessing and retrieving semantic concepts related to novel sounds additionally engages the rPFC.

9.
In order to investigate how the auditory scene is analyzed and perceived, auditory spectrotemporal receptive fields (STRFs) are generally used as a convenient way to describe how frequency and temporal sound information is encoded. However, using broadband sounds to estimate STRFs imperfectly reflects the way neurons process complex stimuli like conspecific vocalizations insofar as natural sounds often show limited bandwidth. Using recordings in the primary auditory cortex of anesthetized cats, we show that presentation of narrowband stimuli not including the best frequency of neurons provokes the appearance of residual peaks and increased firing rate at some specific spectral edges of stimuli compared with classical STRFs obtained from broadband stimuli. This result is the same for STRFs obtained from both spikes and local field potentials. Potential mechanisms likely involve release from inhibition. We thus emphasize some aspects of context dependency of STRFs, that is, how the balance of inhibitory and excitatory inputs is able to shape the neural response from the spectral content of stimuli.
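A common way to estimate an STRF from broadband stimulation is the spike-triggered average: average the stimulus spectrogram preceding each spike. The sketch below shows this estimator on a toy neuron; the reverse-correlation approach is standard in the field, but the specific estimation method and parameters of this study are not given in the abstract, so everything here is illustrative.

```python
import numpy as np

def strf_sta(spectrogram, spikes, n_lags):
    """Spike-triggered-average STRF estimate.

    spectrogram : (n_freq, n_time) stimulus power per frequency channel.
    spikes      : (n_time,) spike counts per time bin.
    n_lags      : bins of stimulus history preceding each spike to include.
    Returns an (n_freq, n_lags) receptive-field estimate.
    """
    n_freq, n_time = spectrogram.shape
    sta = np.zeros((n_freq, n_lags))
    total = 0
    for t in range(n_lags, n_time):
        if spikes[t]:
            sta += spikes[t] * spectrogram[:, t - n_lags:t]
            total += spikes[t]
    return sta / max(total, 1)

# Toy neuron tuned to frequency channel 3, responding at a lag of 2 bins.
rng = np.random.default_rng(2)
spec = rng.normal(size=(8, 5000))
drive = spec[3, :-2]                       # channel 3, two bins in the past
spikes = np.zeros(5000)
spikes[2:] = (drive > 1.0).astype(float)   # spike when the drive is strong
strf = strf_sta(spec, spikes, n_lags=5)
```

The study's point is precisely that such an estimate is context dependent: restricting the stimulus to a narrow band away from the best frequency changes the excitatory/inhibitory balance and produces peaks absent from the broadband STRF.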

10.
Human temporal lobe activation by speech and nonspeech sounds
Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.

11.
Lesion studies in monkeys have suggested a modest left hemisphere dominance for processing species-specific vocalizations, the neural basis of which has thus far remained unclear. We used contrast agent-enhanced functional magnetic resonance imaging to map the regions of the rhesus monkey brain involved in processing conspecific vocalizations as well as human speech and emotional sounds. Control conditions included scrambled versions of all 3 stimuli and silence. Compared with silence, all stimuli activated widespread parts of the auditory cortex and subcortical auditory structures with a right hemispheric bias at the level of the auditory core. However, comparing intact with scrambled sounds revealed a leftward bias in the auditory belt and the parabelt. The left-sided dominance was stronger and more robust for human speech than for rhesus vocalizations and hence does not reflect conspecific call selectivity but rather the processing of complex spectrotemporal patterns, such as those present in human speech and in some of the rhesus monkey vocalizations. This was confirmed by regressing brain activity with a model-derived parameter indexing the prevalence of such patterns. Our results indicate that processing of vocal sounds in the lateral belt and parabelt is asymmetric in monkeys, as predicted from lesion studies.

12.
BACKGROUND: The extent to which complex auditory stimuli are processed and differentiated during general anesthesia is unknown. The authors used blood oxygenation level-dependent functional magnetic resonance imaging to examine the processing of words (10 per period; compared with scrambled words) and nonspeech human vocal sounds (10 per period; compared with environmental sounds) during propofol anesthesia. METHODS: Seven healthy subjects were tested. Propofol was given by a computer-controlled pump to obtain stable plasma concentrations. Data were acquired during awake baseline, sedation (propofol concentration in arterial plasma: 0.64 +/- 0.13 microg/ml; mean +/- SD), general anesthesia (4.62 +/- 0.57 microg/ml), and recovery. Subjects were asked to memorize the words. RESULTS: During all periods including anesthesia, the sound conditions combined elicited significantly greater activations than silence bilaterally in primary auditory cortices (Heschl gyrus) and adjacent regions within the planum temporale. During sedation and anesthesia, however, the magnitude of the activations was reduced by 40-50% (P < 0.05). Furthermore, anesthesia abolished voice-specific activations seen bilaterally in the superior temporal sulcus during the other periods, as well as word-specific activations bilaterally in the Heschl gyrus, planum temporale, and superior temporal gyrus. However, scrambled words paradoxically elicited significantly more activation than normal words bilaterally in the planum temporale during anesthesia. Recognition the next day occurred only for words presented during the baseline and recovery periods and was correlated (P < 0.01) with activity in the right and left planum temporale. CONCLUSIONS: The authors conclude that during anesthesia, the primary and association auditory cortices remain responsive to complex auditory stimuli, but in a nonspecific way such that the ability for higher-level analysis is lost.

13.
OBJECTIVE: The goal was to assess auditory cortex activation evoked by pure-tone stimuli with functional MRI. METHODS: Five healthy children, aged 7 to 10 years, were studied. Hearing evaluation was performed by pure-tone audiometry in a sound-treated room and in the MRI scanner with the scanner noise in the background. Subjects were asked to listen to pure tones (500, 1000, 2000, and 4000 Hz) at thresholds determined in the MRI scanner. Functional image processing was performed with a cross-correlation technique with a correlation coefficient of 0.5 (P < 0.0001). Auditory cortex activation was assessed by observing activated pixels in functional images. RESULTS: Functional images of auditory cortex activation were obtained in 3 children. All children showed activation in Heschl's gyrus, middle temporal gyrus, superior temporal gyrus, and planum temporale. The number of activated pixels in auditory cortices ranged from 4 to 33. CONCLUSIONS: Functional images of auditory cortex activation evoked by pure-tone stimuli are obtained in healthy children with the functional MRI technique.
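The cross-correlation technique referenced here correlates each voxel's time series with a task reference waveform and declares voxels with r at or above the threshold (0.5 in this study) activated. A minimal sketch follows; the boxcar reference, block lengths, and synthetic data are assumptions chosen only to illustrate the computation.

```python
import numpy as np

def activation_map(volume_ts, reference, r_thresh=0.5):
    """Pearson correlation of each voxel's time series with a reference
    waveform; voxels with r >= r_thresh count as 'activated'.

    volume_ts : (n_voxels, n_scans) array of BOLD time series.
    reference : (n_scans,) task waveform (boxcar or HRF-convolved).
    """
    v = volume_ts - volume_ts.mean(axis=1, keepdims=True)
    r = reference - reference.mean()
    corr = v @ r / (np.linalg.norm(v, axis=1) * np.linalg.norm(r))
    return corr, int(np.sum(corr >= r_thresh))

# Toy run: alternating rest/tone blocks; 10 of 100 voxels follow the task.
rng = np.random.default_rng(3)
n_scans = 80
boxcar = np.tile(np.repeat([0.0, 1.0], 10), 4)   # 4 rest/stimulation cycles
ts = rng.normal(size=(100, n_scans))
ts[:10] += 3.0 * boxcar                          # strongly task-driven voxels
corr, n_active = activation_map(ts, boxcar)
```

Counting suprathreshold pixels, as this study did, then reduces to `n_active`; the r >= 0.5 cutoff is what gives the quoted P < 0.0001 per pixel.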

14.
Human brain regions involved in recognizing environmental sounds
To identify the brain regions preferentially involved in environmental sound recognition (comprising portions of a putative auditory 'what' pathway), we collected functional imaging data while listeners attended to a wide range of sounds, including those produced by tools, animals, liquids and dropped objects. These recognizable sounds, in contrast to unrecognizable, temporally reversed control sounds, evoked activity in a distributed network of brain regions previously associated with semantic processing, located predominantly in the left hemisphere, but also included strong bilateral activity in posterior portions of the middle temporal gyri (pMTG). Comparisons with earlier studies suggest that these bilateral pMTG foci partially overlap cortex implicated in high-level visual processing of complex biological motion and recognition of tools and other artifacts. We propose that the pMTG foci process multimodal (or supramodal) information about objects and object-associated motion, and that this may represent 'action' knowledge that can be recruited for purposes of recognition of familiar environmental sound-sources. These data also provide a functional and anatomical explanation for the symptoms of pure auditory agnosia for environmental sounds reported in human lesion studies.

15.
We studied eight normal subjects in an fMRI experiment in which they listened to natural speech sentences and to matched simple or complex speech envelope noises. Neither of the noises (simple or complex) was understood initially, but after the corresponding natural speech sentences had been heard, comprehension was close to perfect for the complex but still absent for the simple speech envelope noises. This setting thus involved identical stimuli that were understood or not, and permitted us to identify (i) a neural substrate of speech comprehension unconfounded by stimulus acoustic properties (common to natural speech and complex noises), (ii) putative correlates of auditory search for phonetic cues in noisy stimuli (common to simple and complex noises once the matching natural speech had been heard) and (iii) the cortical regions where speech comprehension and auditory search interact. We found correlates of speech comprehension in bilateral medial (BA21) and inferior (BA38 and BA38/21) temporal regions, whereas acoustic feature processing occurred in more dorsal temporal regions. The left posterior superior temporal cortex (Wernicke's area) responded to the acoustic complexity of the stimuli but was additionally sensitive to auditory search and speech comprehension. Attention was associated with recruitment of the dorsal part of Broca's area (BA44), and interaction of auditory attention and comprehension occurred in bilateral insulae, the anterior cingulate and the right medial frontal cortex. In combination, these results delineate a neuroanatomical framework for the functional components at work during natural speech processing, i.e. when comprehension results from concurrent acoustic processing and effortful auditory search.

16.
Synesthesia is defined as the involuntary and automatic perception of a stimulus in 2 or more sensory modalities (i.e., cross-modal linkage). Colored-hearing synesthetes experience colors when hearing tones or spoken utterances. Based on event-related potentials we employed electric brain tomography with high temporal resolution in colored-hearing synesthetes and nonsynesthetic controls during auditory verbal stimulation. The auditory-evoked potentials to words and letters were different between synesthetes and controls at the N1 and P2 components, showing longer latencies and lower amplitudes in synesthetes. The intracerebral sources of these components were estimated with low-resolution brain electromagnetic tomography and revealed stronger activation in synesthetes in left posterior inferior temporal regions, within the color area in the fusiform gyrus (V4), and in orbitofrontal brain regions (ventromedial and lateral). The differences occurred as early as 122 ms after stimulus onset. Our findings replicate and extend earlier reports with functional magnetic resonance imaging and positron emission tomography in colored-hearing synesthesia and contribute new information on the time course in synesthesia demonstrating the fast and possibly automatic processing of this unusual and remarkable phenomenon.

17.
To better understand face recognition, it is necessary to identify not only which brain structures are implicated but also the dynamics of the neuronal activity in these structures. Latencies can then be compared to unravel the temporal dynamics of information processing at the distributed network level. To achieve high spatial and temporal resolution, we used intracerebral recordings in epileptic subjects while they performed a famous/unfamiliar face recognition task. The first components peaked at 110 ms in the fusiform gyrus (FG) and simultaneously in the inferior frontal gyrus, suggesting the early establishment of a large-scale network. This was followed by components peaking at 160 ms in 2 areas along the FG. Important stages of distributed parallel processes ensued at 240 and 360 ms involving up to 6 regions along the ventral visual pathway. The final components peaked at 480 ms in the hippocampus. These stages largely overlapped. Importantly, event-related potentials to famous faces differed from unfamiliar faces and control stimuli in all medial temporal lobe structures. The network was bilateral but more right sided. Thus, recognition of famous faces takes place through the establishment of a complex set of local and distributed processes that interact dynamically and may be an emergent property of these interactions.

18.
Rapid face-selective adaptation of an early extrastriate component in MEG
Adaptation paradigms are becoming increasingly popular for characterizing visual areas in neuroimaging, but the relation of these results to perception is unclear. Neurophysiological studies have generally reported effects of stimulus repetition starting at 250-300 ms after stimulus onset, well beyond the latencies of components associated with perception (100-200 ms). Here we demonstrate adaptation for earlier evoked components when 2 stimuli (S1 and S2) are presented in close succession. Using magnetoencephalography, we examined the M170, a "face-selective" response at 170 ms after stimulus onset that shows a larger response to faces than to other stimuli. Adaptation of the M170 occurred only when stimuli were presented with relatively short stimulus onset asynchronies (< 800 ms) and was larger for faces preceded by faces than by houses. This face-selective adaptation is not merely low-level habituation to physical stimulus attributes, as photographic, line-drawing, and 2-tone face images produced similar levels of adaptation. Nor does it depend on the amplitude of the S1 response: adaptation remained greater for faces than houses even when the amplitude of the S1 face response was reduced by visual noise. These results indicate that rapid adaptation of early, short-latency responses not only exists but also can be category selective.

19.
The supratemporal sources of the earliest auditory cortical responses (20-80 ms) were identified using simultaneously recorded electroencephalographic (EEG) and magnetoencephalographic (MEG) data. Both hemispheres of six subjects were recorded two or three times in different sessions in response to 8000 right-ear 1 kHz pure-tone stimuli. Four components were identified: Pa (28 ms), Nb (40 ms), and two subcomponents of the Pb complex, termed Pb1 (52 ms) and Pb2 (74 ms). Based on MEG data, the corresponding sources were localized on the anatomy using individual realistic head models: Pa in the medial portion of Heschl's gyri (H1/H2); Nb/Pb1 in the lateral aspect of the supratemporal gyrus (STG); and Pb2 in the antero-lateral portion of Heschl's gyri. All sources were oriented antero-superiorly. This pattern was clearest in the contralateral hemisphere, where these three activities could be statistically dissociated. Results agree with previous invasive human intracerebral recordings, with animal studies reporting secondary areas involved in the generation of middle-latency auditory-evoked components, and with positron emission tomography and functional magnetic resonance imaging studies often reporting these three active areas, although without temporal information. The early STG activity may be attributed to parallel thalamo-cortical connections, or to cortico-cortical connections between the primary auditory cortex and the STG, as recently described in humans.

20.
Hemispheric asymmetries during auditory sensory processing were examined using whole-head magnetoencephalographic recordings of auditory evoked responses to monaurally and binaurally presented amplitude-modulated sounds. Laterality indices were calculated for the transient onset responses (P1m and N1m), the transient gamma-band response, the sustained field (SF) and the 40 Hz auditory steady-state response (ASSR). All response components showed laterality toward the hemisphere contralateral to the stimulated ear. In addition, the SF and ASSR showed right hemispheric (RH) dominance. Thus, laterality of sustained response components (SF and ASSR) was distinct from that of transient responses. ASSR and SF are sensitive to stimulus periodicity. Consequently, ASSR and SF likely reflect periodic stimulus attributes and might be relevant for pitch processing based on temporal stimulus regularities. In summary, the results of the present studies demonstrate that asymmetric organization in the cerebral auditory cortex is already established on the level of sensory processing.
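Laterality indices of the kind computed above are conventionally a normalized difference of hemispheric response amplitudes; the sign convention and the example amplitudes below are assumptions for illustration, not values from the study.

```python
import numpy as np

def laterality_index(left_amp, right_amp):
    """(R - L) / (R + L): positive values indicate right-hemisphere
    dominance, negative values left-hemisphere dominance."""
    left_amp = np.asarray(left_amp, dtype=float)
    right_amp = np.asarray(right_amp, dtype=float)
    return (right_amp - left_amp) / (right_amp + left_amp)

# Hypothetical amplitudes (arbitrary units): a sustained field showing
# right-hemispheric dominance, and a roughly symmetric N1m.
sf_li = laterality_index(left_amp=20.0, right_amp=30.0)    # -> 0.2
n1m_li = laterality_index(left_amp=25.0, right_amp=25.0)   # -> 0.0
```

Because the index is bounded in [-1, 1], it allows the SF/ASSR asymmetry to be compared across response components with very different absolute amplitudes.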
