Similar articles (20 results found)
1.
Osnes B, Hugdahl K, Specht K. NeuroImage 2011;54(3):2437-2445.
Several reports of premotor cortex involvement in speech perception have been put forward, yet the functional role of premotor cortex remains under debate. To investigate this role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. The control condition was the same transformation from white noise into a musical instrument sound. The fMRI data were modelled with Dynamic Causal Modeling (DCM), in which the effective connectivity between Heschl's gyrus, planum temporale, superior temporal sulcus, and premotor cortex was tested. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted, and was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model that contained significant interconnections between Heschl's gyrus, planum temporale, and superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and superior temporal sulcus and a connection from planum temporale to premotor cortex were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, we conclude that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.
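As a rough illustration of the Bayesian model selection step described in the abstract above, the following minimal Python sketch converts hypothetical log model evidences for three candidate DCM models into posterior model probabilities under a flat model prior (fixed-effects comparison). The evidence values are invented for illustration only; in the actual study they would come from the DCM estimation itself.

    import numpy as np

    # Hypothetical log model evidences (variational free energies) for three
    # candidate DCM models; illustrative numbers, not values from the study.
    log_evidence = np.array([-1243.7, -1239.2, -1251.0])

    # Fixed-effects Bayesian model selection with a flat prior over models:
    # posterior model probabilities are proportional to exp(log evidence).
    shifted = log_evidence - log_evidence.max()      # guard against overflow
    posterior = np.exp(shifted) / np.exp(shifted).sum()

    for i, p in enumerate(posterior, start=1):
        print(f"model {i}: posterior probability = {p:.3f}")

The model with the highest posterior probability is the one "favored" by the selection procedure.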

2.
Zhang Y, Kuhl PK, Imada T, Kotani M, Tohkura Y. NeuroImage 2005;26(3):703-720.
Linguistic experience alters an individual's perception of speech. Here we provide evidence of the effects of language experience at the neural level from two magnetoencephalography (MEG) studies that compare adult American and Japanese listeners' phonetic processing. The experimental stimuli were American English /ra/ and /la/ syllables, phonemic in English but not in Japanese. In Experiment 1, the control stimuli were /ba/ and /wa/ syllables, phonemic in both languages; in Experiment 2, they were non-speech replicas of /ra/ and /la/. The behavioral and neuromagnetic results showed that Japanese listeners were less sensitive to the phonemic /r-l/ difference than American listeners. Furthermore, processing non-native speech sounds recruited significantly greater brain resources in both hemispheres and required a significantly longer period of brain activation in two regions, the superior temporal area and the inferior parietal area. The control stimuli showed no significant differences, except that the duration effect in the superior temporal cortex also applied to the non-speech replicas. We argue that early exposure to a particular language produces a "neural commitment" to the acoustic properties of that language and that this neural commitment interferes with foreign language processing, making it less efficient.

3.
While the persistence of subtle phonological deficits in dyslexic adults is well documented, deficits in the categorical perception of phonemes have received little attention so far. We studied learning of phoneme categorization during an activation H2(15)O PET experiment in 14 dyslexic adults and 16 normal readers of similar age, handedness, and performance IQ. Dyslexic subjects exhibited typical, marked impairments in reading and phoneme awareness tasks. During the PET experiment, subjects performed a discrimination task involving sine wave analogues of speech, first presented as pairs of electronic sounds and, after debriefing, as the syllables /ba/ and /da/. Discrimination performance and brain activation were compared between the acoustic mode and the speech mode of the task, which involved physically identical stimuli; signal changes in the speech mode relative to the acoustic mode revealed the neural counterparts of phonological top-down processes that are engaged after debriefing. Although dyslexic subjects showed a good ability to learn to discriminate speech sounds, their performance remained lower than that of normal readers on the discrimination task over the whole experiment. Activation observed in the speech mode in normal readers showed a strongly left-lateralized pattern involving the superior temporal, inferior parietal, and inferior lateral frontal cortex. Frontal and parietal subparts of these left-sided regions were significantly more activated in the control group than in the dyslexic group. Activations in the right frontal cortex were larger in the dyslexic group than in the control group for both the speech and acoustic modes relative to rest. Dyslexic subjects showed an unexpectedly large deactivation in the medial occipital cortex in the acoustic mode that may reflect increased effortful attention to auditory stimuli.

4.
We investigated the perception and categorization of speech (vowels, syllables) and non-speech (tones, tonal contours) stimuli using MEG. In a delayed-match-to-sample paradigm, participants listened to two sounds and decided if they sounded exactly the same or different (auditory discrimination, AUD), or if they belonged to the same or different categories (category discrimination, CAT). Stimuli across the two conditions were identical; the category definitions for each kind of sound were learned in a training session before recording. MEG data were analyzed using an induced wavelet transform method to investigate task-related differences in time-frequency patterns. In auditory cortex, for both AUD and CAT conditions, an alpha (8-13 Hz) band activation enhancement during the delay period was found for all stimulus types. A clear difference between AUD and CAT conditions was observed for the non-speech stimuli in auditory areas and for both speech and non-speech stimuli in frontal areas. The results suggest that alpha band activation in auditory areas is related to both working memory and categorization for new non-speech stimuli. The fact that the dissociation between speech and non-speech occurred in auditory areas, but not frontal areas, points to different categorization mechanisms and networks for newly learned (non-speech) and natural (speech) categories.
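For readers unfamiliar with the induced wavelet transform analysis mentioned above, the sketch below (a minimal illustration, not the study's actual pipeline) convolves simulated single-trial data with complex Morlet wavelets, computes power per trial, and only then averages across trials, so alpha-band (8-13 Hz) activity that is not phase-locked to stimulus onset is retained. The sampling rate, trial count, and data are placeholder values.

    import numpy as np

    def morlet_power(signal, sfreq, freq, n_cycles=7):
        """Power time course at one frequency via a complex Morlet wavelet."""
        t = np.arange(-0.5, 0.5, 1.0 / sfreq)                 # wavelet support (s)
        sigma_t = n_cycles / (2.0 * np.pi * freq)              # Gaussian envelope width
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))       # energy normalisation
        return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

    # Simulated epochs: 100 trials, 1 s at 600 Hz (placeholder values).
    sfreq, n_trials, n_times = 600.0, 100, 600
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((n_trials, n_times))

    alpha_freqs = np.arange(8, 14)                             # 8-13 Hz
    # Induced power: power is computed per trial first, then averaged, so
    # non-phase-locked activity is preserved in the average.
    induced = np.mean(
        [[morlet_power(tr, sfreq, f) for f in alpha_freqs] for tr in trials],
        axis=0,
    )
    print(induced.shape)                                        # (6, 600)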

5.
Healthy subjects show increased activation in left temporal lobe regions in response to speech sounds compared to complex nonspeech sounds. Abnormal lateralization of speech-processing regions in the temporal lobes has been posited to be a cardinal feature of schizophrenia. Event-related fMRI was used to test the hypothesis that patients with schizophrenia would show an abnormal pattern of hemispheric lateralization when detecting speech compared with complex nonspeech sounds in an auditory oddball target-detection task. We predicted that differential activation for speech in the vicinity of the superior temporal sulcus would be greater in patients than in healthy subjects in the right hemisphere, but less in patients than in healthy subjects in the left hemisphere. Fourteen patients with schizophrenia (selected from an outpatient population; 2 females, 12 males, mean age 35.1 years) and 29 healthy subjects (8 females, 21 males, mean age 29.3 years) were scanned while they performed an auditory oddball task in which the oddball stimuli were either speech sounds or complex nonspeech sounds. Compared to controls, individuals with schizophrenia showed greater differential activation between speech and nonspeech in right temporal cortex, left superior frontal cortex, and the left temporal-parietal junction. The magnitude of the difference in the left temporal-parietal junction was significantly correlated with severity of disorganized thinking. This study supports the hypothesis that aberrant functional lateralization of speech processing is an underlying feature of schizophrenia and suggests that the magnitude of the disturbance in speech-processing circuits may be associated with severity of disorganized thinking.

6.
Specht K, Reul J. NeuroImage 2003;20(4):1944-1954.
In this study, we explored blood oxygen level-dependent responses within the temporal lobe to short auditory stimuli of different classes. To address this issue, we performed an attentive-listening event-related fMRI study in which subjects were required to concentrate during the presentation of different types of stimuli. Because the order of stimuli was randomized and not predictable for the subject, the observed differences between stimulus types were interpreted as an automatic effect not modulated by attention. We used three types of stimuli: tones, sounds of animals and instruments, and words. In all cases we found bilateral activation of primary and secondary auditory cortex, whose strength and lateralization depended on the type of stimulus. Tone trials led to the weakest and smallest activations. The perception of sounds extended the activated network bilaterally into the superior temporal sulcus, mainly on the right, whereas the perception of words led to the strongest activation within the left superior temporal sulcus as well as in the left inferior frontal gyrus. Within the left superior temporal sulcus, we were able to distinguish between different subsystems, showing activation extending from posterior to anterior for speech and speech-like information. Whereas posterior parts were involved in analyzing the complex auditory structure of sounds and speech, the middle and anterior parts responded most strongly only to the perception of speech. In summary, a functional segregation of the temporal lobes into several subsystems responsible for auditory processing was visible. A lateralization for verbal stimuli to the left and for sounds to the right was already detectable when short stimuli were used.

7.
The key question in understanding the nature of speech perception is whether the human brain has unique speech-specific mechanisms or treats all sounds equally. We assessed possible differences between the processing of speech and complex nonspeech sounds in the two cerebral hemispheres by measuring the magnetic equivalent of the mismatch negativity, the brain's automatic change-detection response, which was elicited by speech sounds and by similarly complex nonspeech sounds with either fast or slow acoustic transitions. Our results suggest that the right hemisphere is predominant in the perception of slow acoustic transitions, whereas neither hemisphere clearly dominates the discrimination of nonspeech sounds with fast acoustic transitions. In contrast, the perception of speech stimuli with similarly rapid acoustic transitions was dominated by the left hemisphere, which may be explained by the presence of acoustic templates (long-term memory traces) for speech sounds formed in this hemisphere.

8.
Many people exposed to sinewave analogues of speech first report hearing them as electronic glissandi and, later, when they switch into a 'speech mode', hearing them as syllables. This perceptual switch modifies their discrimination abilities, enhancing perception of differences that cross phonemic boundaries while diminishing perception of differences within phonemic categories. Using high-density evoked potentials and fMRI in a discrimination paradigm, we studied the changes in brain activity that are related to this change in perception. With ERPs, we observed that phonemic coding is faster than acoustic coding: the electrophysiological mismatch response (MMR) occurred earlier for a phonemic change than for an equivalent acoustic change. The MMR topography was also more asymmetric for a phonemic change than for an acoustic change. In fMRI, activations were also significantly asymmetric, favoring the left hemisphere in both perception modes. Furthermore, switching to the speech mode significantly enhanced activation in the posterior parts of the left superior temporal gyrus and sulcus relative to the non-speech mode. When responses to a change of stimulus were studied, a cluster of voxels in the supramarginal gyrus was activated significantly more by a phonemic change than by an acoustic change. These results demonstrate that phoneme perception in adults relies on a specific and highly efficient left-hemispheric network, which can be activated in top-down fashion when processing ambiguous speech/non-speech stimuli.

9.
Mortensen MV, Mirz F, Gjedde A. NeuroImage 2006;31(2):842-852.
The left inferior prefrontal cortex (LIPC) is involved in speech comprehension by people who hear normally. In contrast, functional brain mapping has not revealed incremental activity in this region when users of cochlear implants (CI) comprehend speech without silent repetition. Functional brain maps identify significant changes of activity by comparing an active brain state with a presumed baseline condition. It is possible that cochlear implant users recruited alternative neuronal resources for the task in previous studies, but, in principle, it is also possible that an aberrant baseline condition masked the functional increase. To distinguish between these two possibilities, we tested the hypothesis that activity in the LIPC characterizes high speech comprehension in postlingually deaf CI users. We measured cerebral blood flow changes with positron emission tomography (PET) in CI users who listened passively to a range of speech and non-speech stimuli. The pattern of activation varied with the stimulus in users with high speech comprehension, unlike in users with low speech comprehension. The high-comprehension group showed increased activity in prefrontal and temporal regions of the cerebral cortex and in the right cerebellum. In these subjects, single words and speech raised activity in the LIPC, as well as in left and right temporal regions, both anterior and posterior, known to be activated in speech recognition and complex phoneme analysis in normal hearing. In subjects with low speech comprehension, sites of increased activity were observed only in the temporal lobes. We conclude that increased activity in areas of the LIPC and right temporal lobe is involved in speech comprehension after cochlear implantation.

10.
The separation of concurrent sounds is paramount to human communication in everyday settings. The primary auditory cortex and the planum temporale are thought to be essential both for the separation of physical sound sources into perceptual objects and for the comparison of those representations with previously learned acoustic events. To examine the role of these areas in speech separation, we measured brain activity using event-related functional magnetic resonance imaging (fMRI) while participants were asked to identify two phonetically different vowels presented simultaneously. The processing of brief speech sounds (200 ms in duration) activated the thalamus and superior temporal gyrus bilaterally, the left anterior temporal lobe, and the left inferior temporal gyrus. A comparison of fMRI signals between trials in which participants successfully identified both vowels and trials in which only one of the two vowels was recognized revealed enhanced activity in the left thalamus, Heschl's gyrus, superior temporal gyrus, and the planum temporale. Because participants successfully identified at least one of the two vowels on each trial, the difference in fMRI signal indexes the extra computational work needed to successfully segregate and identify the other concurrently presented vowel. The results support the view that auditory cortex in or near Heschl's gyrus, as well as in the planum temporale, is involved in sound segregation and reveal a link between left thalamo-cortical activation and the successful separation and identification of simultaneous speech sounds.

11.
The extent to which visual word perception engages speech codes (i.e., phonological recoding) remains a crucial question in understanding the mechanisms of reading. In this study, we used functional magnetic resonance imaging combined with behavioral response measures to examine neural responses to focused versus incidental phonological and semantic processing of written words. Three groups of subjects made simple button-press responses in a phonologically focused task (rhyming judgment), a semantically focused task (category judgment), or both tasks, with identical sets of visual stimuli. In the phonological tasks, subjects were presented with both words and pseudowords in separate scan runs. The baseline task required feature search of scrambled letter strings created from the stimuli of the experimental conditions. The results showed that cortical regions associated with both semantic and phonological processes were strongly activated when the task required active processing of word meaning. However, when subjects were actively processing the speech sounds of the same set of written words, brain areas typically engaged in semantic processing became silent. In addition, subjects who performed both the rhyming and the semantic tasks showed diverse and significant bilateral activation in the prefrontal, temporal, and other brain regions. Taken together, the pattern of brain activity provides evidence for a neural basis supporting the theory that in normal word reading, phonological recoding is automatic and facilitates semantic processing of written words, while rapid comprehension of word meaning requires devoted attention. These results also raise questions about including multiple cognitive tasks in the same neuroimaging sessions.

12.
Traditionally, the left frontal and parietal lobes have been associated with language production, while regions in the temporal lobe are seen as crucial for language comprehension. However, recent evidence suggests that the classical language areas constitute an integrated network in which each area plays a crucial role in both speech production and perception. We used functional MRI to examine whether observing speech motor movements (without auditory speech), relative to non-speech motor movements, preferentially activates the cortical speech areas. Furthermore, we tested whether the activation in these regions was modulated by task difficulty. This dissociates areas that are actively involved in speech perception from regions that show an obligatory activation in response to speech movements (e.g., areas that automatically activate in preparation for a motoric response). Specifically, we hypothesized that regions involved in decoding oral speech would show increasing activation with increasing difficulty. We found that speech movements preferentially activate the frontal and temporal language areas, whereas non-speech movements preferentially activate the parietal region. Degraded speech stimuli increased both frontal and parietal lobe activity but did not differentially excite the temporal region. These findings suggest that the frontal language area plays a role in visual speech perception and highlight the differential roles of the classical speech and language areas in processing others' motor speech movements.

13.
A vivid percept of a moving human can be evoked by viewing a few point-lights attached to the joints of an invisible walker. This special visual ability for biological motion perception has been found to involve the posterior superior temporal sulcus (STSp). However, in everyday life, human motion can also be recognized from acoustic cues. In the present study, we investigated the neural substrate of human motion perception when listening to footsteps, by means of a sparse-sampling functional MRI design. We first showed an auditory attentional network that shares frontal and parietal areas previously found in visual attention paradigms. Second, an activation was observed in the auditory cortex (Heschl's gyrus and planum temporale), likely related to low-level sound processing. Most strikingly, another activation was evidenced in an STSp region overlapping the temporal biological motion area previously reported using visual input. We thus propose that part of the STSp region might be a supramodal area involved in human motion recognition, irrespective of the sensory modality of the input.

14.
Humans and many other animals use acoustical signals to mediate social interactions with conspecifics. The evolution of sound-based communication is still poorly understood and its neural correlates have only recently begun to be investigated. In the present study, we applied functional MRI to humans and macaque monkeys listening to identical stimuli in order to compare the cortical networks involved in the processing of vocalizations. At the first stages of auditory processing, both species showed similar fMRI activity maps within and around the lateral sulcus (the Sylvian fissure in humans). Monkeys showed remarkably similar responses to monkey calls and to human vocal sounds (speech or otherwise), mainly in the lateral sulcus and the adjacent superior temporal gyrus (STG). In contrast, a preference for human vocalizations and especially for speech was observed in the human STG and superior temporal sulcus (STS). The STS and Broca's region were especially responsive to intelligible utterances. The evolution of the language faculty in humans appears to have recruited most of the STS. It may be that in monkeys, a much simpler repertoire of vocalizations requires less involvement of this temporal territory.

15.
It is commonly assumed that, in the cochlea and the brainstem, the auditory system processes speech sounds without differentiating them from any other sounds. At some stage, however, it must treat speech sounds and nonspeech sounds differently, since we perceive them as different. The purpose of this study was to delimit the first location in the auditory pathway that makes this distinction using functional MRI, by identifying regions that are differentially sensitive to the internal structure of speech sounds as opposed to closely matched control sounds. We analyzed data from nine right-handed volunteers who were scanned while listening to natural and synthetic vowels, or to nonspeech stimuli matched to the vowel sounds in terms of their long-term energy and both their spectral and temporal profiles. The vowels produced more activation than nonspeech sounds in a bilateral region of the superior temporal sulcus, lateral and inferior to regions of auditory cortex that were activated by both vowels and nonspeech stimuli. The results suggest that the perception of vowel sounds is compatible with a hierarchical model of primate auditory processing in which early cortical stages of processing respond indiscriminately to speech and nonspeech sounds, and only higher regions, beyond anatomically defined auditory cortex, show selectivity for speech sounds.

16.
The neural substrates underlying speech perception are still not well understood. Previously, we found a dissociation of speech and nonspeech processing at the earliest cortical level (AI), using speech and nonspeech complexity dimensions. Acoustic differences between speech and nonspeech stimuli in imaging studies, however, confound the search for linguistic-phonetic regions. Here, we used sinewave speech (SWsp) and nonspeech (SWnon), which replace speech formants with sinewave tones, in order to match acoustic spectral and temporal complexity while contrasting phonetics. Chord progressions (CP) were used to remove the effects of auditory coherence and object processing. Twelve normal right-handed volunteers were scanned with fMRI while listening to SWsp, SWnon, CP, and a baseline condition arranged in blocks. Only two brain regions, in bilateral superior temporal sulcus, extending more posteriorly on the left, were found to prefer the SWsp condition after accounting for acoustic modulation and coherence effects. Two regions responded preferentially to the more frequency-modulated stimuli, including one that overlapped the right temporal phonetic area and another in the left angular gyrus, far from the phonetic area. These findings are proposed to form the basis for the two subtypes of auditory word deafness. Several brain regions, including auditory and non-auditory areas, preferred the coherent auditory stimuli and are likely involved in auditory object recognition. The design of the current study allowed for the separation of acoustic spectrotemporal, object recognition, and phonetic effects, resulting in distinct and overlapping components.
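The sinewave speech manipulation referred to above can be sketched in a few lines: each formant is replaced by a single sinusoid whose frequency follows the formant centre track, and the sinusoids are summed, preserving the spectrotemporal trajectory while discarding natural voice quality. The formant tracks, amplitudes, and sampling rate below are illustrative placeholders, not the stimuli used in the study.

    import numpy as np

    sfreq = 16000                               # sampling rate (Hz), placeholder
    t = np.arange(int(0.5 * sfreq)) / sfreq     # 500 ms stimulus

    # Hypothetical formant tracks (Hz) for a rising-transition syllable.
    f1 = np.linspace(400, 700, t.size)
    f2 = np.linspace(1000, 1200, t.size)
    f3 = np.full(t.size, 2500.0)

    def tone(track):
        # Integrate instantaneous frequency to obtain phase, then take the sine.
        phase = 2.0 * np.pi * np.cumsum(track) / sfreq
        return np.sin(phase)

    # Sinewave replica: formants replaced by sinusoids, so the spectrotemporal
    # trajectory is kept while natural voice quality is removed.
    sw_speech = tone(f1) + 0.5 * tone(f2) + 0.25 * tone(f3)
    sw_speech /= np.abs(sw_speech).max()        # normalise to avoid clipping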

17.
While the neural correlates of unconscious perception and subliminal priming have been studied extensively for visual stimuli, little is known about their counterparts in the auditory modality. Here we used a subliminal speech priming method in combination with fMRI to investigate which regions of the cerebral network for language can respond in the absence of awareness. Participants performed a lexical decision task on target items preceded by subliminal primes, which were either phonetically identical to or different from the target. Moreover, the prime and target could be spoken by the same speaker or by two different speakers. Word repetition reduced activity in the insula and in the left superior temporal gyrus. Although the priming effect on reaction times was independent of the voice manipulation, neural repetition suppression was modulated by speaker change in the superior temporal gyrus, while the insula showed voice-independent priming. These results provide neuroimaging evidence of subliminal priming for spoken words and inform us about the first, unconscious stages of speech perception.

18.
Evoked magnetic fields were recorded from 18 adult volunteers using magnetoencephalography (MEG) during perception of speech stimuli (the endpoints of a voice onset time (VOT) series ranging from /ga/ to /ka/), analogous nonspeech stimuli (the endpoints of a two-tone series varying in relative tone onset time (TOT)), and a set of harmonically complex tones varying in pitch. During the early time window (approximately 60 to 130 ms post-stimulus onset), activation of the primary auditory cortex was bilaterally equal in strength for all three tasks. During the middle (approximately 130 to 800 ms) and late (800 to 1400 ms) time windows of the VOT task, activation of the posterior portion of the superior temporal gyrus (STGp) was greater in the left hemisphere than in the right hemisphere, in both group and individual data. These asymmetries were not evident in response to the nonspeech stimuli. Hemispheric asymmetries in a measure of neurophysiological activity in STGp, which includes the supratemporal plane and cortex inside the superior temporal sulcus, may reflect a specialization of association auditory cortex in the left hemisphere for processing speech sounds. Differences in late activation patterns potentially reflect the operation of a postperceptual process (e.g., rehearsal in working memory) that is restricted to speech stimuli.

19.
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but in the right STS more cortex was active in SJ than in any of the normal controls. Furthermore, the amplitude of the BOLD response in the right STS to McGurk stimuli was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech.

20.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage 2006;32(1):423-431.
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, in which no scanner noise is present, we investigated brain regions involved in speech-cue processing in normal-hearing subjects with no previous lip-reading training (N = 17) who carried out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both modalities were delivered during the bimodal (AV) condition. In comparison to the control condition, extensive activations in the superior temporal regions were observed bilaterally during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A condition and more so during the AV condition, but not during the V condition. Activations were observed in Broca's area (BA 44) and in medial frontal (BA 8) and anterior ventrolateral prefrontal (BA 47) regions of the left hemisphere during the V condition, in which lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue can exert an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction underlying audiovisual speech perception.
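The overadditivity criterion (AV > A + V) mentioned in the abstract above reduces to a simple region-wise comparison of condition estimates; the beta values in the sketch below are made-up numbers purely to show the arithmetic.

    # Hypothetical condition estimates (betas) for one region, arbitrary units.
    beta_A, beta_V, beta_AV = 1.2, 0.4, 2.1

    # Super-additive multisensory interaction: the bimodal response exceeds the
    # sum of the unimodal responses.
    print(beta_AV > (beta_A + beta_V))   # True for these illustrative values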
