Similar Literature
20 similar documents found (search time: 62 ms)
1.
The left hemisphere specialization for speech perception might arise from asymmetries at more basic levels of auditory processing. In particular, it has been suggested that differences in "temporal" and "spectral" processing exist between the hemispheres. Here we used functional magnetic resonance imaging to test this hypothesis further. Fourteen healthy volunteers listened to sequences of alternating pure tones that varied in the temporal and spectral domains. Increased temporal variation was associated with activation in Heschl's gyrus (HG) bilaterally, whereas increased spectral variation activated the superior temporal gyrus (STG) bilaterally and right posterior superior temporal sulcus (STS). Responses to increased temporal variation were lateralized to the left hemisphere; this left lateralization was greater in posteromedial HG, which is presumed to correspond to the primary auditory cortex. Responses to increased spectral variation were lateralized to the right hemisphere specifically in the anterior STG and posterior STS. These findings are consistent with the notion that the hemispheres are differentially specialized for processing auditory stimuli even in the absence of linguistic information.
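The stimulus design described above varies tone duration (temporal domain) and frequency separation (spectral domain) independently. A minimal synthesis sketch of such alternating-tone sequences follows; all parameter values (frequencies, tone durations, ramp length, total duration) are illustrative assumptions, not those of the study:

```python
import numpy as np

def tone_sequence(f_low, f_high, tone_dur, total_dur=6.0, fs=16000):
    """Alternating pure-tone sequence. Shorter tone_dur -> more temporal
    variation; larger f_high/f_low separation -> more spectral variation."""
    t = np.arange(int(fs * tone_dur)) / fs
    ramp = np.minimum(t / 0.005, 1.0)           # 5-ms linear onset ramp
    env = ramp * ramp[::-1]                     # symmetric onset/offset ramps
    tones = [np.sin(2 * np.pi * f * t) * env for f in (f_low, f_high)]
    n_tones = max(1, int(total_dur / tone_dur))
    return np.concatenate([tones[i % 2] for i in range(n_tones)])

# Illustrative conditions (parameter values are assumptions, not the study's):
low_variation  = tone_sequence(1000, 1100, tone_dur=0.500)  # slow, narrow
high_temporal  = tone_sequence(1000, 1100, tone_dur=0.025)  # fast alternation
high_spectral  = tone_sequence(500, 4000, tone_dur=0.500)   # wide separation
```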

2.
Neural substrates of phonemic perception (total citations: 5; self-citations: 2; citations by others: 3)
The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phonemic categories. In contrast, areas in the dorsal superior temporal gyrus bilaterally, closer to primary auditory cortex, are activated to the same extent by the phonemic and nonphonemic sounds. Thus, the left middle/anterior STS appears to play a role in phonemic perception. It may represent an intermediate stage of processing in a functional pathway linking areas in the bilateral dorsal superior temporal gyrus, presumably involved in the analysis of physical features of speech and other complex non-speech sounds, to areas in the left anterior STS and middle temporal gyrus that are engaged in higher-level linguistic processes.

3.
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects, which were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex, where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).

4.
Spectral and temporal processing in human auditory cortex (total citations: 9; self-citations: 5; citations by others: 4)
Hierarchical processing suggests that spectrally and temporally complex stimuli will evoke more activation than do simple stimuli, particularly in non-primary auditory fields. This hypothesis was tested using two tones, a single-frequency tone and a harmonic tone, that were either static or frequency modulated to create four stimuli. We interpret the location of differences in activation by drawing comparisons between fMRI and human cytoarchitectonic data, reported in the same brain space. Harmonic tones produced more activation than single tones in right Heschl's gyrus (HG) and bilaterally in the lateral supratemporal plane (STP). Activation was also greater to frequency-modulated tones than to static tones in these areas, as well as in left HG and bilaterally in an anterolateral part of the STP and the superior temporal sulcus. An elevated response magnitude to both frequency-modulated tones was found in the lateral portion of the primary area, and putatively in three surrounding non-primary regions on the lateral STP (one anterior and two posterior to HG). A focal site on the posterolateral STP showed an especially high response to the frequency-modulated harmonic tone. Our data highlight the involvement of both primary and lateral non-primary auditory regions.

5.
BACKGROUND: The extent to which complex auditory stimuli are processed and differentiated during general anesthesia is unknown. The authors used blood oxygenation level-dependent functional magnetic resonance imaging to examine the processing of words (10 per period; compared with scrambled words) and nonspeech human vocal sounds (10 per period; compared with environmental sounds) during propofol anesthesia. METHODS: Seven healthy subjects were tested. Propofol was given by a computer-controlled pump to obtain stable plasma concentrations. Data were acquired during awake baseline, sedation (propofol concentration in arterial plasma: 0.64 +/- 0.13 μg/ml; mean +/- SD), general anesthesia (4.62 +/- 0.57 μg/ml), and recovery. Subjects were asked to memorize the words. RESULTS: During all periods, including anesthesia, the sound conditions combined elicited significantly greater activations than silence bilaterally in the primary auditory cortices (Heschl gyrus) and adjacent regions within the planum temporale. During sedation and anesthesia, however, the magnitude of the activations was reduced by 40-50% (P < 0.05). Furthermore, anesthesia abolished the voice-specific activations seen bilaterally in the superior temporal sulcus during the other periods, as well as the word-specific activations seen bilaterally in the Heschl gyrus, planum temporale, and superior temporal gyrus. However, scrambled words paradoxically elicited significantly more activation than normal words bilaterally in the planum temporale during anesthesia. Recognition the next day occurred only for words presented during baseline and recovery and was correlated (P < 0.01) with activity in the right and left planum temporale. CONCLUSIONS: The authors conclude that during anesthesia, the primary and association auditory cortices remain responsive to complex auditory stimuli, but in a nonspecific way, such that the capacity for higher-level analysis is lost.

6.
Background: The extent to which complex auditory stimuli are processed and differentiated during general anesthesia is unknown. The authors used blood oxygenation level-dependent functional magnetic resonance imaging to examine the processing of words (10 per period; compared with scrambled words) and nonspeech human vocal sounds (10 per period; compared with environmental sounds) during propofol anesthesia.

Methods: Seven healthy subjects were tested. Propofol was given by a computer-controlled pump to obtain stable plasma concentrations. Data were acquired during awake baseline, sedation (propofol concentration in arterial plasma: 0.64 +/- 0.13 μg/ml; mean +/- SD), general anesthesia (4.62 +/- 0.57 μg/ml), and recovery. Subjects were asked to memorize the words.

Results: During all periods, including anesthesia, the sound conditions combined elicited significantly greater activations than silence bilaterally in the primary auditory cortices (Heschl gyrus) and adjacent regions within the planum temporale. During sedation and anesthesia, however, the magnitude of the activations was reduced by 40-50% (P < 0.05). Furthermore, anesthesia abolished the voice-specific activations seen bilaterally in the superior temporal sulcus during the other periods, as well as the word-specific activations seen bilaterally in the Heschl gyrus, planum temporale, and superior temporal gyrus. However, scrambled words paradoxically elicited significantly more activation than normal words bilaterally in the planum temporale during anesthesia. Recognition the next day occurred only for words presented during baseline and recovery and was correlated (P < 0.01) with activity in the right and left planum temporale.


7.
We aimed to test the cortical representation of complex natural sounds within auditory cortex by conducting 2 human magnetoencephalography experiments. To this end, we employed an adaptation paradigm and presented subjects with pairs of complex stimuli, namely, animal vocalizations and spectrally matched noise. In Experiment 1, we presented stimulus pairs of same or different animal vocalizations and same or different noise. Our results suggest a 2-step process of adaptation effects: first, a general item-unspecific reduction of the N1m peak amplitude at 100 ms, followed by an item-specific amplitude reduction of the P2m component at 200 ms after stimulus onset, for both animal vocalizations and noise. Multiple dipole source modeling revealed the right lateral Heschl's gyrus and the bilateral superior temporal gyrus as sites of adaptation. In Experiment 2, we tested for cross-adaptation between animal vocalizations and spectrally matched noise sounds by presenting pairs consisting of an animal vocalization and its corresponding or a different noise sound. We observed cross-adaptation effects for the P2m component within the bilateral superior temporal gyrus. Thus, our results suggest selectivity of the evoked magnetic field at 200 ms after stimulus onset in nonprimary auditory cortex for the spectral fine structure of complex sounds rather than their temporal dynamics.
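Adaptation effects of this kind are commonly quantified as the fractional amplitude reduction from the first to the second stimulus of a pair. A minimal sketch follows; the latency windows and data layout are assumptions for illustration, not values taken from the study:

```python
import numpy as np

def peak_amplitude(evoked, times, window):
    """Maximum absolute amplitude of an evoked trace within a latency window (s)."""
    mask = (times >= window[0]) & (times <= window[1])
    return np.abs(evoked[mask]).max()

def adaptation_index(evoked_first, evoked_second, times, window):
    """Fractional amplitude reduction for the second stimulus of a pair:
    0 = no adaptation, 1 = complete suppression."""
    a1 = peak_amplitude(evoked_first, times, window)
    a2 = peak_amplitude(evoked_second, times, window)
    return 1.0 - a2 / a1

# Assumed latency windows: N1m ~ 80-120 ms, P2m ~ 160-240 ms post-onset.
N1M_WINDOW = (0.08, 0.12)
P2M_WINDOW = (0.16, 0.24)
```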

8.
OBJECTIVE: The goal was to assess auditory cortex activation evoked by pure-tone stimuli with functional MRI. METHODS: Five healthy children, aged 7 to 10 years, were studied. Hearing evaluation was performed by pure-tone audiometry in a sound-treated room and in the MRI scanner with the scanner noise in the background. Subjects were asked to listen to pure tones (500, 1000, 2000, and 4000 Hz) at thresholds determined in the MRI scanner. Functional image processing was performed with a cross-correlation technique using a correlation-coefficient threshold of 0.5 (P < 0.0001). Auditory cortex activation was assessed by observing activated pixels in the functional images. RESULTS: Functional images of auditory cortex activation were obtained in 3 children. All children showed activation in Heschl's gyrus, the middle temporal gyrus, the superior temporal gyrus, and the planum temporale. The number of activated pixels in the auditory cortices ranged from 4 to 33. CONCLUSIONS: Functional images of auditory cortex activation evoked by pure-tone stimuli can be obtained in healthy children with the functional MRI technique.
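Cross-correlation analysis of this kind correlates each voxel's time course with a reference function that follows the stimulus on/off blocks, keeping voxels whose correlation exceeds the criterion (here, r ≥ 0.5). A minimal sketch; the data shapes and block timing are illustrative assumptions:

```python
import numpy as np

def activated_pixels(data, reference, r_thresh=0.5):
    """data: (n_voxels, n_timepoints) BOLD time courses;
    reference: (n_timepoints,) expected response, e.g. a stimulus boxcar.
    Returns indices of voxels whose Pearson correlation >= r_thresh."""
    d = data - data.mean(axis=1, keepdims=True)
    r = reference - reference.mean()
    # small epsilon guards against zero-variance (constant) voxels
    corr = (d @ r) / (np.sqrt((d ** 2).sum(axis=1) * (r ** 2).sum()) + 1e-12)
    return np.flatnonzero(corr >= r_thresh)

# Assumed block design for illustration: 4 cycles of 10 rest + 10 tone volumes.
reference = np.tile(np.r_[np.zeros(10), np.ones(10)], 4)
```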

9.
The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of the activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement the previous electrophysiological studies. In a passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effect analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than the right hemisphere, was also assessed by analyzing average parameter estimates in regions of interest within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level-dependent mismatch responses to words than pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest involvement of the left superior temporal areas in housing such word-processing neuronal circuits.
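An oddball paradigm of this kind interleaves rare deviants among frequent standards in randomized order. A toy sketch of sequence construction; the deviant probability and trial count are assumptions, not the study's values:

```python
import numpy as np

def oddball_sequence(n_trials, p_deviant=0.1, seed=0):
    """Randomized oddball stream: True = rare deviant, False = frequent
    standard. The deviant probability is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    return rng.random(n_trials) < p_deviant

# Condition A: word deviants among pseudoword standards; the reverse
# condition swaps the roles of words and pseudowords.
is_deviant = oddball_sequence(400)
```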

10.
Speech perception requires cortical mechanisms capable of analysing and encoding successive spectral (frequency) changes in the acoustic signal. To study temporal speech processing in the human auditory cortex, we recorded intracerebral evoked potentials to syllables in the right and left human auditory cortices, including Heschl's gyrus (HG), the planum temporale (PT) and the posterior part of the superior temporal gyrus (area 22). Natural voiced (/ba/, /da/, /ga/) and voiceless (/pa/, /ta/, /ka/) syllables, spoken by a native French speaker, were used to study the processing of a specific temporally based acoustico-phonetic feature, the voice onset time (VOT). This acoustic feature is present in nearly all languages, and it is the VOT that provides the basis for the perceptual distinction between voiced and voiceless consonants. The present results show a lateralized processing of the acoustic elements of syllables. First, processing of voiced and voiceless syllables is distinct in the left, but not in the right, HG and PT. Second, only the evoked potentials in the left HG, and to a lesser extent in PT, reflect a sequential processing of the different components of the syllables. Third, we show that this acoustic temporal processing is not limited to speech sounds but applies also to non-verbal sounds mimicking the temporal structure of the syllable. Fourth, there was no difference between responses to voiced and voiceless syllables in either the left or the right area 22. Our data suggest that a single mechanism in the auditory cortex, involved in general (not only speech-specific) temporal processing, may underlie the further processing of verbal (and non-verbal) stimuli. This coding, bilaterally localized in auditory cortex in animals, takes place specifically in the left HG in man. A defect of this mechanism could account for the hearing discrimination impairments associated with language disorders.
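VOT is the interval between the stop release burst and the onset of voicing; in French, voiced stops are typically prevoiced (negative VOT) while voiceless stops have a short positive lag. A toy classifier along those lines; the 0-ms boundary is an illustrative simplification, and perceptual category boundaries vary by language:

```python
def classify_french_stop(vot_ms):
    """Toy voiced/voiceless decision from voice onset time (ms).
    French voiced stops are typically prevoiced (VOT < 0); the 0-ms
    boundary is an illustrative simplification."""
    return "voiced" if vot_ms < 0 else "voiceless"

print(classify_french_stop(-95))  # e.g. /ba/ -> "voiced"
print(classify_french_stop(25))   # e.g. /pa/ -> "voiceless"
```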

11.
The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency-modulated (FM) sweeps, and amplitude-modulated noise and tones with the responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than those of A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than in A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as in A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for the rate or direction of narrowband one-octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex.

12.
Many socially significant biological stimuli are polymodal, and information processing is enhanced for polymodal over unimodal stimuli. The human superior temporal sulcus (STS) region has been implicated in processing socially relevant stimuli--particularly those derived from biological motion such as mouth movements. Single-unit studies in monkeys have demonstrated that regions of STS are polysensory--responding to visual, auditory and somatosensory stimuli, and human neuroimaging studies have shown that lip-reading activates auditory regions of the lateral temporal lobe. We evaluated whether concurrent speech sounds and mouth movements were more potent activators of STS than either speech sounds or mouth movements alone. In an event-related fMRI study, subjects observed an animated character that produced audiovisual speech and the audio and visual components of speech alone. Strong activation of the STS region was evoked in all three conditions, with the greatest levels of activity elicited by audiovisual speech. Subsets of activated voxels within the STS region demonstrated overadditivity (audiovisual > audio + visual) and underadditivity (audiovisual < audio + visual). These results confirm the polysensory nature of the STS region and demonstrate for the first time that polymodal interactions may both potentiate and inhibit activation.
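The additivity criterion used here compares the audiovisual response with the sum of the unimodal responses, voxel by voxel. A sketch of that classification, assuming per-voxel response estimates (e.g., regression betas) are already in hand:

```python
import numpy as np

def classify_additivity(beta_av, beta_a, beta_v):
    """Compare each voxel's audiovisual response with the sum of its
    unimodal responses (inputs: per-voxel response estimates)."""
    unimodal_sum = beta_a + beta_v
    overadditive = beta_av > unimodal_sum    # AV > A + V
    underadditive = beta_av < unimodal_sum   # AV < A + V
    return overadditive, underadditive

# Toy values: voxel 0 is overadditive, voxel 1 underadditive.
over, under = classify_additivity(np.array([2.5, 0.8]),
                                  np.array([1.0, 0.7]),
                                  np.array([1.0, 0.6]))
```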

13.
We evaluated the neural substrates of cross-modal binding and divided attention during audio-visual speech integration using functional magnetic resonance imaging. The subjects (n = 17) were exposed to phonemically concordant or discordant auditory and visual speech stimuli. Three different matching tasks were performed: auditory-auditory (AA), visual-visual (VV) and auditory-visual (AV). Subjects were asked whether the prompted pair was congruent or not. We defined the neural substrates for the within-modal matching tasks by the VV-AA and AA-VV contrasts, and the cross-modal area as the intersection of the loci defined by AV-AA and AV-VV. The auditory task activated the bilateral anterior superior temporal gyrus and superior temporal sulcus, the left planum temporale and the left lingual gyrus. The visual task activated the bilateral middle and inferior frontal gyri, the right occipito-temporal junction, the intraparietal sulcus and the left cerebellum. The bilateral dorsal premotor cortex, the posterior parietal cortex (including the bilateral superior parietal lobule and the left intraparietal sulcus) and the right cerebellum showed more prominent activation during AV than during AA and VV. Within these areas, the posterior parietal cortex showed more activation during concordant than discordant stimuli, and hence was related to cross-modal binding. Our results indicate a close relationship between cross-modal attentional control and cross-modal binding during speech reading.
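The region definitions here reduce to boolean operations on thresholded contrast maps: the cross-modal area is the intersection of the AV-AA and AV-VV contrasts, and the within-modal substrates come from VV-AA and AA-VV. A minimal sketch; the t threshold and map shapes are assumptions:

```python
import numpy as np

def survives(t_map, t_thresh=3.0):
    """Binary mask of voxels above an (assumed) t threshold."""
    return t_map > t_thresh

def define_regions(t_av_vs_aa, t_av_vs_vv, t_vv_vs_aa, t_aa_vs_vv):
    """Within-modal substrates from VV-AA and AA-VV; the cross-modal
    area as the intersection of AV-AA and AV-VV."""
    crossmodal = survives(t_av_vs_aa) & survives(t_av_vs_vv)
    visual_withinmodal = survives(t_vv_vs_aa)
    auditory_withinmodal = survives(t_aa_vs_vv)
    return crossmodal, visual_withinmodal, auditory_withinmodal
```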

14.
To investigate the cortical basis of color and form concepts, we examined event-related functional magnetic resonance imaging (fMRI) responses to matched words related to abstract color and form information. Silent word reading elicited activity in left temporal and frontal cortex, where category-specific activity differences were also observed. Whereas color words preferentially activated anterior parahippocampal gyrus, form words evoked category-specific activity in fusiform and middle temporal gyrus as well as premotor and dorsolateral prefrontal areas in inferior and middle frontal gyri. These results demonstrate that word meanings and concepts are not processed by a unique cortical area, but by different sets of areas, each of which may contribute differentially to conceptual semantic processing. We hypothesize that the anterior parahippocampal activation to color words indexes computation of the visual feature conjunctions and disjunctions necessary for classifying visual stimuli under a color concept. The predominant premotor and prefrontal activation to form words suggests action-related information processing and may reflect the involvement of neuronal elements responding in an either-or fashion to mirror neurons related to adumbrating shapes.

15.
Speech processing in auditory cortex and beyond is a remarkable yet poorly understood faculty of the listening brain. Here we show that stop consonants, as the most transient constituents of speech, are sufficient to involve speech perception circuits in the human superior temporal cortex. Left anterolateral superior temporal cortex showed a stronger response in blood oxygenation level-dependent functional magnetic resonance imaging (fMRI) to intelligible consonantal bursts compared with incomprehensible control sounds matched for spectrotemporal complexity. Simultaneously, the left posterior superior temporal plane (including planum temporale [PT]) exhibited a noncategorical responsivity to complex stimulus acoustics across all trials, showing no preference for intelligible speech sounds. Multistage hierarchical processing of speech sounds is thus revealed with fMRI, providing evidence for a role of the PT in the fundamental stages of the acoustic analysis of complex sounds, including speech.

16.
The computation of speech codes (i.e. phonology) is an important aspect of word reading. Understanding the neural systems and mechanisms underlying phonological processes provides a foundation for the investigation of language in the brain. We used high-resolution three-dimensional positron emission tomography (PET) to investigate the neural systems essential for phonological processes. The burden of neural activity on the computation of speech codes was maximized by three rhyming tasks (rhyming words, pseudowords and words printed in mixed letter cases). Brain activation patterns associated with these tasks were compared with those of two baseline tasks involving visual feature detection. Results suggest strong left-lateralized epicenters of neural activity in rhyming, irrespective of gender. Word rhyming activated the same brain regions engaged in pseudoword rhyming, suggesting conjoint neural networks for the phonological processing of words and pseudowords. However, pseudoword rhyming induced the largest change in cerebral blood flow and activated more voxels in the left posterior prefrontal regions and the left inferior occipital-temporal junction. In addition, pseudoword rhyming activated the left supramarginal gyrus, which was not apparent in word rhyming. These results suggest that rhyming pseudowords requires the active participation of extended neural systems and networks not observed for rhyming words. The implications of the results for theories and models of visual word reading and for selective reading dysfunctions after brain lesions are discussed.

17.
Hall et al. (2002, Cerebral Cortex 12:140-149) recently showed that pulsed frequency-modulated tones generate considerably higher activation than their unmodulated counterparts in non-primary auditory regions immediately posterior and lateral to Heschl's gyrus (HG). Here, we use fMRI to explore the type of modulation necessary to evoke such differential activation. Carrier signals were a single tone and a harmonic-complex tone with a 300 Hz fundamental, each of which was either left unmodulated or modulated at a rate of 5 Hz in frequency or in amplitude, creating six stimulus conditions (unmodulated, FM, AM for each carrier). Relative to the silent baseline, the modulated tones in particular activated widespread regions of the auditory cortex bilaterally along the supratemporal plane. When compared with the unmodulated tones, both AM and FM tones generated significantly greater activation in lateral HG and the planum temporale, replicating the previous findings. These activation patterns were largely overlapping, indicating a common sensitivity to both AM and FM. Direct comparisons between AM and FM revealed a higher magnitude of activation in response to the variation in amplitude than in frequency, plus a small part of the posterolateral region in the right hemisphere whose response was specifically AM-, and not FM-, dependent. The dominant pattern of activation was that of co-localized activation by AM and FM, which is consistent with a common neural code for AM and FM within these brain regions.
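The six conditions form a 2 (carrier: pure tone vs 300-Hz-fundamental harmonic complex) × 3 (unmodulated, 5-Hz FM, 5-Hz AM) set. A synthesis sketch under stated assumptions: the modulation depth, FM excursion, harmonic count, and duration are illustrative, and the single-tone frequency is taken at the fundamental for simplicity:

```python
import numpy as np

FS = 44100                          # sample rate (Hz)
T = np.arange(int(FS * 2.0)) / FS   # 2-s stimuli (duration assumed)
F0 = 300.0                          # fundamental of the harmonic complex (Hz)
RATE = 5.0                          # modulation rate from the study (Hz)

def harmonic_carrier(t, f0, n_harmonics):
    """Pure tone (n_harmonics=1) or harmonic complex."""
    return sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, n_harmonics + 1))

def amplitude_modulate(signal, t, depth=0.5):
    """5-Hz sinusoidal AM; the modulation depth is an assumption."""
    return signal * (1.0 + depth * np.sin(2 * np.pi * RATE * t))

def frequency_modulate(t, f0, n_harmonics, excursion=0.1):
    """5-Hz sinusoidal FM (+/-10% excursion assumed); all harmonics follow
    the common instantaneous frequency f0 * (1 + excursion * sin)."""
    phase = 2 * np.pi * f0 * t - (excursion * f0 / RATE) * np.cos(2 * np.pi * RATE * t)
    return sum(np.sin(k * phase) for k in range(1, n_harmonics + 1))

conditions = {
    "tone":       harmonic_carrier(T, F0, 1),
    "tone_AM":    amplitude_modulate(harmonic_carrier(T, F0, 1), T),
    "tone_FM":    frequency_modulate(T, F0, 1),
    "complex":    harmonic_carrier(T, F0, 10),   # number of harmonics assumed
    "complex_AM": amplitude_modulate(harmonic_carrier(T, F0, 10), T),
    "complex_FM": frequency_modulate(T, F0, 10),
}
```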

18.
We hypothesized that areas in the temporal lobe that have been implicated in the phonological processing of spoken words would also be activated during the generation and phonological processing of imagined speech. We tested this hypothesis using functional magnetic resonance imaging during a behaviorally controlled task of metrical stress evaluation. Subjects were presented with bisyllabic words and had to determine the alternation of strong and weak syllables. Thus, they were required to discriminate between weak-initial words and strong-initial words. In one condition, the stimuli were presented auditorily to the subjects (by headphones). In the other condition the stimuli were presented visually on a screen and subjects were asked to imagine hearing the word. Results showed activation of the supplementary motor area, inferior frontal gyrus (Broca's area) and insula in both conditions. In the superior temporal gyrus (STG) and in the superior temporal sulcus (STS) strong activation was observed during the auditory (perceptual) condition. However, a region located in the posterior part of the STS/STG also responded during the imagery condition. No activation of this same region of the STS was observed during a control condition which also involved processing of visually presented words, but which required a semantic decision from the subject. We suggest that processing of metrical stress, with or without auditory input, relies in part on cortical interface systems located in the posterior part of STS/STG. These results corroborate behavioral evidence regarding phonological loop involvement in auditory-verbal imagery.

19.
Most functional imaging studies of the auditory system have employed complex stimuli. We used positron emission tomography to map neural responses to 0.5 and 4.0 kHz sine-wave tones presented to the right ear at 30, 50, 70 and 90 dB HL and found activation in a complex neural network of elements traditionally associated with the auditory system as well as non-traditional sites such as the posterior cingulate cortex. Cingulate activity was maximal at low stimulus intensities, suggesting that it may function as a gain control center. In the right temporal lobe, the location of the maximal response varied with the intensity, but not with the frequency of the stimuli. In the left temporal lobe, there was evidence for tonotopic organization: a site lateral to the left primary auditory cortex was activated equally by both tones while a second site in primary auditory cortex was more responsive to the higher frequency. Infratentorial activations were contralateral to the stimulated ear and included the lateral cerebellum, the lateral pontine tegmentum, the midbrain and the medial geniculate. Contrary to predictions based on cochlear membrane mechanics, at each intensity, 4.0 kHz stimuli were more potent activators of the brain than the 0.5 kHz stimuli.
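The 20-dB level steps used here correspond to tenfold increases in sound pressure amplitude (level = 20·log10(a/a0)). A quick sketch of the relative scaling; note that dB HL is referenced to normative audiometric thresholds, so absolute calibration depends on the transducer:

```python
# Pressure-amplitude scale factors for the four presentation levels,
# relative to an arbitrary 0-dB reference (absolute calibration is
# equipment-dependent; dB HL is referenced to normative thresholds).
levels_db = [30, 50, 70, 90]
amplitudes = [10 ** (level / 20) for level in levels_db]
# -> each 20-dB step multiplies the pressure amplitude by 10
```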

20.
Speech contains prosodic cues such as pauses between different phrases of a sentence. These intonational phrase boundaries (IPBs) elicit a specific component in event-related brain potential studies, the so-called closure positive shift. The aim of the present functional magnetic resonance imaging study is to identify the neural correlates of this prosody-related component in sentences containing segmental and prosodic information (natural speech) and in hummed sentences containing only prosodic information. Sentences with 2 IPBs, both in normal and hummed speech, activated the middle superior temporal gyrus, the rolandic operculum, and Heschl's gyrus more strongly than sentences with 1 IPB. The results from a region of interest analysis of the auditory cortex and auditory association areas suggest that the posterior rolandic operculum, in particular, supports the processing of prosodic information. A comparison of natural speech and hummed sentences revealed a number of left-hemispheric areas within the temporal lobe, as well as in the frontal and parietal lobes, that were activated more strongly for natural speech than for hummed sentences. These areas constitute the neural network for the processing of natural speech. The finding that no area was activated more strongly for hummed sentences compared with natural speech suggests that prosody is an integrated part of natural speech.
