Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
We hypothesized that areas in the temporal lobe that have been implicated in the phonological processing of spoken words would also be activated during the generation and phonological processing of imagined speech. We tested this hypothesis using functional magnetic resonance imaging during a behaviorally controlled task of metrical stress evaluation. Subjects were presented with bisyllabic words and had to determine the alternation of strong and weak syllables; that is, they were required to discriminate between weak-initial and strong-initial words. In one condition, the stimuli were presented auditorily (through headphones). In the other condition, the stimuli were presented visually on a screen and subjects were asked to imagine hearing the word. Results showed activation of the supplementary motor area, the inferior frontal gyrus (Broca's area) and the insula in both conditions. In the superior temporal gyrus (STG) and the superior temporal sulcus (STS), strong activation was observed during the auditory (perceptual) condition. However, a region located in the posterior part of the STS/STG also responded during the imagery condition. No activation of this region was observed during a control condition that also involved processing of visually presented words but required a semantic decision from the subject. We suggest that processing of metrical stress, with or without auditory input, relies in part on cortical interface systems located in the posterior STS/STG. These results corroborate behavioral evidence for phonological-loop involvement in auditory-verbal imagery.

2.
Neural substrates of phonemic perception (total citations: 5; self-citations: 2; citations by others: 3)
The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phonemic categories. In contrast, areas in the dorsal superior temporal gyrus bilaterally, closer to primary auditory cortex, are activated to the same extent by the phonemic and nonphonemic sounds. Thus, the left middle/anterior STS appears to play a role in phonemic perception. It may represent an intermediate stage of processing in a functional pathway linking areas in the bilateral dorsal superior temporal gyrus, presumably involved in the analysis of physical features of speech and other complex non-speech sounds, to areas in the left anterior STS and middle temporal gyrus that are engaged in higher-level linguistic processes.

3.
The cortical dynamics of spoken word perception are not well understood. The possible interplay between analysis of sound form and meaning, in particular, remains elusive. We used magnetoencephalography (MEG) to study the cortical manifestation of phonological and semantic priming. Ten subjects listened to lists of 4 words. The first 3 words set a semantic or phonological context, and the list-final word was congruent or incongruent with this context. Attenuation of activation by priming during the first 3 words and increased activation to semantic or phonological mismatch in the list-final word provided converging evidence: the superior temporal cortex bilaterally was involved in both analysis of sound form and meaning, but the role of each hemisphere varied over time. Sensitivity to sound form was observed at approximately 100 ms after word onset, followed by sensitivity to semantic aspects from approximately 250 ms onwards, in the left hemisphere. From approximately 450 ms onwards the picture changed, with semantic effects now present bilaterally, accompanied by a subtle late effect of sound form in the right hemisphere. The present MEG data provide a detailed spatiotemporal account of the neural mechanisms of speech perception that may underlie characterizations obtained with other neuroimaging methods less sensitive in the temporal or spatial domain.

4.
Human temporal lobe activation by speech and nonspeech sounds (total citations: 27; self-citations: 18; citations by others: 9)
Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.

5.
Many socially significant biological stimuli are polymodal, and information processing is enhanced for polymodal over unimodal stimuli. The human superior temporal sulcus (STS) region has been implicated in processing socially relevant stimuli, particularly those derived from biological motion such as mouth movements. Single-unit studies in monkeys have demonstrated that regions of STS are polysensory, responding to visual, auditory and somatosensory stimuli, and human neuroimaging studies have shown that lip-reading activates auditory regions of the lateral temporal lobe. We evaluated whether concurrent speech sounds and mouth movements were more potent activators of the STS than either speech sounds or mouth movements alone. In an event-related fMRI study, subjects observed an animated character that produced audiovisual speech, and the audio and visual components of speech alone. Strong activation of the STS region was evoked in all three conditions, with the greatest levels of activity elicited by audiovisual speech. Subsets of activated voxels within the STS region demonstrated overadditivity (audiovisual > audio + visual) and underadditivity (audiovisual < audio + visual). These results confirm the polysensory nature of the STS region and demonstrate for the first time that polymodal interactions may both potentiate and inhibit activation.
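The additivity criterion above reduces to a simple voxelwise comparison of condition estimates. The following is a minimal, illustrative NumPy sketch, not the paper's actual pipeline: the beta maps are randomly generated stand-ins, and the zero activity threshold is an invented simplification of the statistical thresholding a real analysis would use.

```python
import numpy as np

# Hypothetical voxelwise activation estimates for each condition.
rng = np.random.default_rng(0)
beta_av = rng.normal(1.0, 0.5, 10000)  # audiovisual speech
beta_a = rng.normal(0.5, 0.5, 10000)   # auditory component alone
beta_v = rng.normal(0.4, 0.5, 10000)   # visual component alone

# Restrict the test to voxels responsive in all three conditions,
# analogous to the "subsets of activated voxels" in the STS region.
active = (beta_av > 0) & (beta_a > 0) & (beta_v > 0)

overadditive = active & (beta_av > beta_a + beta_v)   # AV > A + V
underadditive = active & (beta_av < beta_a + beta_v)  # AV < A + V
print(overadditive.sum(), "overadditive,", underadditive.sum(), "underadditive")
```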

6.
Background: The extent to which complex auditory stimuli are processed and differentiated during general anesthesia is unknown. The authors used blood oxygenation level-dependent functional magnetic resonance imaging to examine the processing of words (10 per period; compared with scrambled words) and nonspeech human vocal sounds (10 per period; compared with environmental sounds) during propofol anesthesia.

Methods: Seven healthy subjects were tested. Propofol was given by a computer-controlled pump to obtain stable plasma concentrations. Data were acquired during awake baseline, sedation (propofol concentration in arterial plasma: 0.64 ± 0.13 μg/ml; mean ± SD), general anesthesia (4.62 ± 0.57 μg/ml), and recovery. Subjects were asked to memorize the words.

Results: During all periods, including anesthesia, the sound conditions combined elicited significantly greater activation than silence bilaterally in the primary auditory cortices (Heschl gyrus) and adjacent regions within the planum temporale. During sedation and anesthesia, however, the magnitude of the activations was reduced by 40-50% (P < 0.05). Furthermore, anesthesia abolished the voice-specific activations seen bilaterally in the superior temporal sulcus during the other periods, as well as the word-specific activations seen bilaterally in the Heschl gyrus, planum temporale, and superior temporal gyrus. However, scrambled words paradoxically elicited significantly more activation than normal words bilaterally in the planum temporale during anesthesia. Recognition the next day occurred only for words presented during baseline and recovery, and was correlated (P < 0.01) with activity in the right and left planum temporale.


7.
There is increasing interest in integrating electrophysiological and hemodynamic measures for characterizing the spatial and temporal aspects of cortical processing. However, an informative combination of responses that have markedly different sensitivities to the underlying neural activity is not straightforward, especially in complex cognitive tasks. Here, we used parametric stimulus manipulation in magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) recordings on the same subjects to study the effects of noise on the processing of spoken words and environmental sounds. The added noise influenced MEG response strengths in the bilateral supratemporal auditory cortex, at different times for the different stimulus types. Specifically for spoken words, the effect of noise on the electrophysiological response was remarkably nonlinear. We therefore used the single-subject MEG responses to construct a parametrization for the fMRI data analysis and obtained notably higher sensitivity than with conventional stimulus-based parametrization. The fMRI results showed that partly different temporal areas were involved in the noise-sensitive processing of words and environmental sounds. These results indicate that cortical processing of sounds in background noise is stimulus specific in both timing and location, and they provide a new, functionally meaningful platform for combining information obtained with electrophysiological and hemodynamic measures of brain function.
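The key analytic move here, using measured MEG response strengths rather than nominal stimulus levels as a parametric modulator of an fMRI regressor, can be illustrated with a short sketch. This is a hedged reconstruction, not the authors' code: the onset times, modulator values, and the gamma-shaped HRF are all invented placeholders.

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical per-condition MEG response strengths (note the
# nonlinearity at the highest noise level), used as a mean-centered
# parametric modulator instead of the nominal noise levels.
onsets_s = np.array([12.0, 36.0, 60.0, 84.0])  # stimulus onsets (s)
meg_strength = np.array([1.0, 0.9, 0.8, 0.3])

tr, n_scans = 2.0, 60
stick = np.zeros(n_scans)
stick[(onsets_s / tr).astype(int)] = meg_strength - meg_strength.mean()

# Simple gamma-shaped HRF as a stand-in for a package's canonical HRF.
hrf = gamma.pdf(np.arange(0.0, 30.0, tr), a=6)
parametric_regressor = np.convolve(stick, hrf)[:n_scans]
```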

8.
The left hemisphere specialization for speech perception might arise from asymmetries at more basic levels of auditory processing. In particular, it has been suggested that differences in "temporal" and "spectral" processing exist between the hemispheres. Here we used functional magnetic resonance imaging to test this hypothesis further. Fourteen healthy volunteers listened to sequences of alternating pure tones that varied in the temporal and spectral domains. Increased temporal variation was associated with activation in Heschl's gyrus (HG) bilaterally, whereas increased spectral variation activated the superior temporal gyrus (STG) bilaterally and right posterior superior temporal sulcus (STS). Responses to increased temporal variation were lateralized to the left hemisphere; this left lateralization was greater in posteromedial HG, which is presumed to correspond to the primary auditory cortex. Responses to increased spectral variation were lateralized to the right hemisphere specifically in the anterior STG and posterior STS. These findings are consistent with the notion that the hemispheres are differentially specialized for processing auditory stimuli even in the absence of linguistic information.
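To make the two stimulus dimensions concrete, here is a toy Python generator for alternating pure-tone sequences in the spirit of this design: shortening the tones increases temporal variation, and widening the frequency separation increases spectral variation. All parameter values are invented for the sketch and are not taken from the study.

```python
import numpy as np

FS = 44100  # sample rate in Hz

def tone(freq_hz, dur_s):
    t = np.arange(int(FS * dur_s)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def alternating_sequence(f_low, f_high, tone_dur_s, total_dur_s=2.0):
    # Alternate between two pure tones for the requested total duration.
    n = int(total_dur_s / tone_dur_s)
    freqs = [f_low if i % 2 == 0 else f_high for i in range(n)]
    return np.concatenate([tone(f, tone_dur_s) for f in freqs])

low_variation = alternating_sequence(1000, 1100, tone_dur_s=0.25)
high_variation = alternating_sequence(500, 2000, tone_dur_s=0.05)
```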

9.
BACKGROUND: The extent to which complex auditory stimuli are processed and differentiated during general anesthesia is unknown. The authors used blood oxygenation level-dependent functional magnetic resonance imaging to examine the processing of words (10 per period; compared with scrambled words) and nonspeech human vocal sounds (10 per period; compared with environmental sounds) during propofol anesthesia. METHODS: Seven healthy subjects were tested. Propofol was given by a computer-controlled pump to obtain stable plasma concentrations. Data were acquired during awake baseline, sedation (propofol concentration in arterial plasma: 0.64 ± 0.13 μg/ml; mean ± SD), general anesthesia (4.62 ± 0.57 μg/ml), and recovery. Subjects were asked to memorize the words. RESULTS: During all periods, including anesthesia, the sound conditions combined elicited significantly greater activation than silence bilaterally in the primary auditory cortices (Heschl gyrus) and adjacent regions within the planum temporale. During sedation and anesthesia, however, the magnitude of the activations was reduced by 40-50% (P < 0.05). Furthermore, anesthesia abolished the voice-specific activations seen bilaterally in the superior temporal sulcus during the other periods, as well as the word-specific activations seen bilaterally in the Heschl gyrus, planum temporale, and superior temporal gyrus. However, scrambled words paradoxically elicited significantly more activation than normal words bilaterally in the planum temporale during anesthesia. Recognition the next day occurred only for words presented during baseline and recovery, and was correlated (P < 0.01) with activity in the right and left planum temporale. CONCLUSIONS: The authors conclude that during anesthesia, the primary and association auditory cortices remain responsive to complex auditory stimuli, but in a nonspecific way, such that the capacity for higher-level analysis is lost.

10.
We evaluated the neural substrates of cross-modal binding and divided attention during audio-visual speech integration using functional magnetic resonance imaging. The subjects (n = 17) were exposed to phonemically concordant or discordant auditory and visual speech stimuli. Three different matching tasks were performed: auditory-auditory (AA), visual-visual (VV) and auditory-visual (AV). Subjects were asked whether the prompted pair was congruent or not. We defined the neural substrates for the within-modal matching tasks by the VV-AA and AA-VV contrasts, and the cross-modal area as the intersection of the loci defined by AV-AA and AV-VV. The auditory task activated the bilateral anterior superior temporal gyrus and superior temporal sulcus, the left planum temporale and the left lingual gyrus. The visual task activated the bilateral middle and inferior frontal gyri, the right occipito-temporal junction, the intraparietal sulcus and the left cerebellum. The bilateral dorsal premotor cortex, the posterior parietal cortex (including the bilateral superior parietal lobule and the left intraparietal sulcus) and the right cerebellum showed more prominent activation during AV than during AA and VV. Within these areas, the posterior parietal cortex showed more activation during concordant than discordant stimuli, and hence was related to cross-modal binding. Our results indicate a close relationship between cross-modal attentional control and cross-modal binding during speech reading.
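The contrast logic in this abstract, with the cross-modal area defined as the intersection of the AV-AA and AV-VV loci, amounts to a conjunction of thresholded maps. A minimal sketch follows; the t-maps and the threshold value are hypothetical stand-ins for whatever statistics the authors actually computed.

```python
import numpy as np

# Hypothetical voxelwise t-maps for the two AV-versus-unimodal contrasts.
rng = np.random.default_rng(1)
t_av_minus_aa = rng.normal(0.0, 1.0, 10000)
t_av_minus_vv = rng.normal(0.0, 1.0, 10000)
thresh = 3.1  # arbitrary significance threshold for the sketch

# Cross-modal area: voxels where AV exceeds both AA and VV, i.e. the
# intersection of the two thresholded contrast maps.
cross_modal = (t_av_minus_aa > thresh) & (t_av_minus_vv > thresh)
print(cross_modal.sum(), "voxels in the cross-modal conjunction")
```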

11.
The computation of speech codes (i.e. phonology) is an important aspect of word reading. Understanding the neural systems and mechanisms underlying phonological processes provides a foundation for the investigation of language in the brain. We used high-resolution three-dimensional positron emission tomography (PET) to investigate the neural systems essential for phonological processes. The burden of neural activity on the computation of speech codes was maximized by three rhyming tasks (rhyming words, pseudowords and words printed in mixed letter cases). Brain activation patterns associated with these tasks were compared with those of two baseline tasks involving visual feature detection. Results suggest strongly left-lateralized epicenters of neural activity in rhyming, irrespective of gender. Word rhyming activated the same brain regions engaged in pseudoword rhyming, suggesting conjoint neural networks for the phonological processing of words and pseudowords. However, pseudoword rhyming induced the largest change in cerebral blood flow and activated more voxels in the left posterior prefrontal regions and the left inferior occipital-temporal junction. In addition, pseudoword rhyming activated the left supramarginal gyrus, which was not apparent in word rhyming. These results suggest that rhyming pseudowords requires the active participation of extended neural systems and networks not observed for rhyming words. The implications of the results for theories and models of visual word reading, and for selective reading dysfunctions after brain lesions, are discussed.

12.
The dorsal bank of the primate superior temporal sulcus (STS) is a polysensory area with rich connections to unimodal sensory association cortices. These include auditory projections that process complex acoustic information, including conspecific vocalizations. We investigated whether an extensive left posterior temporal (Wernicke's area) lesion, which included destruction of early auditory cortex, might contribute to impaired spoken narrative comprehension as a consequence of reduced function in the anterior STS, a region not included within the boundary of infarction. Listening to narratives in normal subjects activated the posterior-anterior extent of the left STS, as far forward as the temporal pole. The presence of a Wernicke's area lesion was associated with both impaired sentence comprehension and a reduced physiological response to heard narratives in the intact anterior left STS, when compared with aphasic patients without temporal lobe damage and with normal controls. Thus, in addition to the loss of language function in left posterior temporal cortex as the direct result of infarction, posterior ablation that includes primary and early association auditory cortex impairs language function in the intact anterior left temporal lobe. The implication is that clinical studies of language in stroke patients have underestimated the role of left anterior temporal cortex in comprehension of narrative speech.

13.
The processing of single words that varied in their semantic (concrete/abstract word) and syntactic (content/function word) status was investigated under different task demands (semantic/syntactic task) in an event-related functional magnetic resonance imaging experiment. Task demands to a large degree determined which subparts of the neuronal network supporting word processing were activated. Semantic task demands selectively activated the left pars triangularis of the inferior frontal gyrus (BA 45) and the posterior part of the left middle/superior temporal gyrus (BA 21/22/37). In contrast, syntactic processing requirements led to increased activation in the inferior tip of the left frontal operculum (BA 44) and the cortex lining the junction of the inferior frontal and inferior precentral sulcus (BA 44/6). Moreover, for these latter areas a word class by concreteness interaction was observed when a syntactic judgement was required. This interaction can be interpreted as a prototypicality effect: non-prototypical members of a word class, i.e. concrete function words and abstract content words, showed larger activation than prototypical members, i.e. abstract function words and concrete content words. The combined data suggest that the activation pattern underlying word processing is predicted neither by syntactic class nor by semantic concreteness but, rather, by task demands focusing on either semantic or syntactic aspects. Thus, our findings that semantic and syntactic aspects of processing are both functionally distinct and involve different subparts of the neuronal network underlying word processing support a domain-specific organization of the language system.

14.
A number of regions of the temporal and frontal lobes are known to be important for spoken language comprehension, yet we do not have a clear understanding of their functional role(s). In particular, there is considerable disagreement about which brain regions are involved in the semantic aspects of comprehension. Two functional magnetic resonance imaging studies use the phenomenon of semantic ambiguity to identify regions within the fronto-temporal language network that subserve the semantic aspects of spoken language comprehension. Volunteers heard sentences containing ambiguous words (e.g. 'the shell was fired towards the tank') and well-matched low-ambiguity sentences (e.g. 'her secrets were written in her diary'). Although these sentences have similar acoustic, phonological, syntactic and prosodic properties (and were rated as being equally natural), the high-ambiguity sentences require additional processing by those brain regions involved in activating and selecting contextually appropriate word meanings. The ambiguity in these sentences goes largely unnoticed, and yet the high-ambiguity sentences produced increased signal in the left posterior inferior temporal cortex and in the inferior frontal gyri bilaterally. Given the ubiquity of semantic ambiguity, we conclude that these brain regions form an important part of the network that is involved in computing the meaning of spoken sentences.

15.
Speech perception requires cortical mechanisms capable of analysing and encoding successive spectral (frequency) changes in the acoustic signal. To study temporal speech processing in the human auditory cortex, we recorded intracerebral evoked potentials to syllables in the right and left human auditory cortices, including Heschl's gyrus (HG), the planum temporale (PT) and the posterior part of the superior temporal gyrus (area 22). Natural voiced (/ba/, /da/, /ga/) and voiceless (/pa/, /ta/, /ka/) syllables, spoken by a native French speaker, were used to study the processing of a specific temporally based acoustico-phonetic feature, the voice onset time (VOT). This acoustic feature is present in nearly all languages, and it is the VOT that provides the basis for the perceptual distinction between voiced and voiceless consonants. The present results show lateralized processing of the acoustic elements of syllables. First, processing of voiced and voiceless syllables is distinct in the left, but not in the right, HG and PT. Second, only the evoked potentials in the left HG, and to a lesser extent in the PT, reflect sequential processing of the different components of the syllables. Third, we show that this acoustic temporal processing is not limited to speech sounds but applies also to non-verbal sounds mimicking the temporal structure of the syllable. Fourth, there was no difference between responses to voiced and voiceless syllables in either left or right area 22. Our data suggest that a single mechanism in the auditory cortex, involved in general (not only speech-specific) temporal processing, may underlie the further processing of verbal (and non-verbal) stimuli. This coding, bilaterally localized in the auditory cortex in animals, takes place specifically in the left HG in man. A defect of this mechanism could account for the hearing discrimination impairments associated with language disorders.

16.
Word processing is often probed with experiments where a target word is primed by preceding semantically or phonologically related words. Behaviorally, priming results in faster reaction times, interpreted as increased efficiency of cognitive processing. At the neural level, priming reduces the level of neural activation, but the actual neural mechanisms that could account for the increased efficiency have remained unclear. We examined whether enhanced information transfer among functionally relevant brain areas could provide such a mechanism. Neural activity was tracked with magnetoencephalography while subjects read lists of semantically or phonologically related words. Increased priming resulted in reduced cortical activation. In contrast, coherence between brain regions was simultaneously enhanced. Furthermore, while the reduced level of activation was detected in the same area and time window (superior temporal cortex [STC] at 250-650 ms) for both phonological and semantic priming, the spatiospectral connectivity patterns appeared distinct for the 2 processes. Causal interactions further indicated a driving role for the left STC in phonological processing. Our results highlight coherence as a neural mechanism of priming and dissociate semantic and phonological processing via their distinct connectivity profiles.
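Coherence between two regional time series, the mechanism highlighted here, is a standard spectral measure; the sketch below computes it with scipy.signal.coherence on synthetic data. The sampling rate, the shared 10 Hz component, and the region labels are invented for illustration and do not reproduce the paper's MEG analysis.

```python
import numpy as np
from scipy.signal import coherence

fs = 600.0  # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(2)

# Two noisy time series sharing a 10 Hz component, standing in for
# activity in two cortical regions (e.g., left STC and a frontal area).
shared = np.sin(2 * np.pi * 10 * t)
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)

f, cxy = coherence(x, y, fs=fs, nperseg=256)
print(f"peak coherence {cxy.max():.2f} at {f[np.argmax(cxy)]:.1f} Hz")
```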

17.
Little is known about the neural correlates of affective prosody in the context of affective semantic discourse. We used functional magnetic resonance imaging to investigate this issue while subjects performed 1) affective classification of sentences with affective semantic content and 2) grammatical classification of sentences with neutral semantic content. Sentences of each type were produced half by actors and half by text-to-speech software lacking affective prosody. Compared with the processing of neutral sentences, sentences with affective semantic content, with or without affective prosody, led to increased activation of a left inferior frontal area involved in the retrieval of semantic knowledge. In addition, the posterior part of the left superior temporal sulcus (STS), together with the medial prefrontal cortex, was recruited, although not activated by the classification of neutral sentences. Interestingly, these areas have been described as implicated in self-reflection and in the inference of others' mental states, which possibly occurred during the affective classification task. When affective prosody was present, additional rightward activations of the human-selective voice area and the posterior part of the STS were observed, corresponding to the processing of the emotional content of the speaker's voice. Accurate affective communication, central to social interactions, requires the cooperation of semantic, affective-prosodic, and mind-reading neural networks.

18.
Learning-induced changes in the cerebral processing of voice identity (total citations: 1; self-citations: 0; citations by others: 1)
Temporal voice areas, which show larger activity for vocal than for non-vocal sounds, have been identified along the superior temporal sulcus (STS); additional voice-sensitive areas have been described in the frontal and parietal lobes. Yet the role of voice-sensitive regions in representing voice identity remains unclear. Using a functional magnetic resonance adaptation design, we aimed to disentangle acoustic-based from identity-based representations of voices. Sixteen participants were scanned while listening to pairs of voices drawn from morphed continua between 2 initially unfamiliar voices, before and after a voice learning phase. In a given pair, the first and second stimuli could be identical or acoustically different and, at the second session, perceptually similar or different. At both sessions, the right mid-STS/superior temporal gyrus (STG) and the superior temporal pole (sTP) showed sensitivity to acoustical changes. Critically, voice learning induced changes in the acoustical processing of voices in the inferior frontal cortices (IFCs). At the second session only, the right IFC and the left cingulate gyrus showed sensitivity to changes in perceived identity. The processing of voice identity thus appears to be subserved by a large network of brain areas, ranging from the sTP, involved in an acoustic-based representation of unfamiliar voices, to areas along the convexity of the IFC for identity-related processing of familiar voices.

19.
Recent research indicates that non-tonal novel events, deviating from an ongoing auditory environment, elicit a positive event-related potential (ERP), the novel P3. Although a variety of studies have examined the neural network engaged in novelty detection, there is no complete picture of the underlying brain mechanisms. This experiment investigated these neural mechanisms by combining ERPs and functional magnetic resonance imaging (fMRI). Hemodynamic and electrophysiological responses were measured in the same subjects using the same experimental design. The ERP analysis revealed a novel P3, while the fMRI responses showed bilateral foci in the middle part of the superior temporal gyrus. When subjects attended to the novel stimuli, only identifiable novel sounds evoked an N4-like negativity. Subjects showing a strong N4 effect had additional fMRI activation in the right prefrontal cortex (rPFC) compared with subjects showing a weak N4 effect. This pattern of results suggests that novelty processing not only includes the registration of deviance but may also lead to fast access and retrieval of related semantic concepts. The fMRI activation pattern suggests that the superior temporal gyrus is involved in novelty detection, whereas accessing and retrieving semantic concepts related to novel sounds additionally engages the rPFC.

20.
A large-scale study of 484 elementary school children (6-10 years) performing word repetition tasks in their native language (L1, Japanese) and a second language (L2, English) was conducted using functional near-infrared spectroscopy. Three factors presumably associated with cortical activation were investigated: language (L1/L2), word frequency (high/low), and hemisphere (left/right). L1 words elicited significantly greater brain activation than L2 words, regardless of semantic knowledge, particularly in the superior/middle temporal and inferior parietal regions (angular/supramarginal gyri). The greater L1-elicited activation in these regions suggests that they are phonological loci, reflecting processes tuned to the phonology of the native language, while phonologically unfamiliar L2 words were processed like nonword auditory stimuli. The activation was bilateral in the auditory and superior/middle temporal regions. Hemispheric asymmetry was observed in the inferior frontal region (right dominant) and in the inferior parietal region, with interactions: low-frequency words elicited more right-hemispheric activation (particularly in the supramarginal gyrus), while high-frequency words elicited more left-hemispheric activation (particularly in the angular gyrus). The present results reveal the strong involvement of a bilateral language network in children's brains, one that depends more on right-hemispheric processing when unfamiliar or low-frequency words are being acquired. A right-to-left shift in laterality should thus occur in the inferior parietal region as lexical knowledge increases, irrespective of language.
