Similar Documents
20 similar documents found.
1.
The left superior temporal cortex shows greater responsiveness to speech than to non-speech sounds according to previous neuroimaging studies, suggesting that this brain region has a special role in speech processing. However, since speech sounds differ acoustically from non-speech sounds, it is possible that this region is not involved in speech perception per se, but rather in the processing of some complex acoustic features. "Sine wave speech" (SWS) provides a tool to study neural speech specificity using identical acoustic stimuli, which can be perceived either as speech or non-speech, depending on previous experience of the stimuli. We scanned 21 subjects using 3T functional MRI in two sessions, both including SWS and control stimuli. In the pre-training session, all subjects perceived the SWS stimuli as non-speech. In the post-training session, the identical stimuli were perceived as speech by 16 subjects. In these subjects, SWS stimuli elicited significantly stronger activity within the left posterior superior temporal sulcus (STSp) in the post- vs. pre-training session. In contrast, activity in this region was not enhanced after training in the 5 subjects who did not perceive the SWS stimuli as speech. Moreover, the control stimuli, which were always perceived as non-speech, elicited similar activity in this region in both sessions. Altogether, the present findings suggest that activation of the neural speech representations in the left STSp might be a prerequisite for hearing sounds as speech.
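Sine-wave speech of this kind is typically synthesized by replacing the formants of an utterance with time-varying sinusoids. Below is a minimal Python sketch of that synthesis step, not the authors' actual stimulus pipeline; the formant frequency and amplitude contours are hypothetical stand-ins for tracks that would normally come from a formant tracker (e.g. an LPC-based analysis).

```python
import numpy as np

def synthesize_sws(formant_freqs, formant_amps, fs=16000):
    """Synthesize sine-wave speech from formant tracks.

    formant_freqs, formant_amps: arrays of shape (n_formants, n_samples)
    giving per-sample frequency (Hz) and amplitude contours.
    """
    n_formants, n_samples = formant_freqs.shape
    sws = np.zeros(n_samples)
    for f in range(n_formants):
        # Integrate instantaneous frequency into a continuous phase so
        # that frequency glides do not produce clicks.
        phase = 2 * np.pi * np.cumsum(formant_freqs[f]) / fs
        sws += formant_amps[f] * np.sin(phase)
    return sws / np.max(np.abs(sws))  # normalize to avoid clipping

# Toy example: three static "formants" (hypothetical values) for 0.5 s.
fs, n = 16000, 8000
freqs = np.tile([[500.0], [1500.0], [2500.0]], (1, n))
amps = np.tile([[1.0], [0.5], [0.25]], (1, n))
stimulus = synthesize_sws(freqs, amps, fs)
```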

2.
The key question in understanding the nature of speech perception is whether the human brain has unique speech-specific mechanisms or treats all sounds equally. We assessed possible differences between the processing of speech and complex nonspeech sounds in the two cerebral hemispheres by measuring the magnetic equivalent of the mismatch negativity, the brain's automatic change-detection response, which was elicited by speech sounds and by similarly complex nonspeech sounds with either fast or slow acoustic transitions. Our results suggest that the right hemisphere is predominant in the perception of slow acoustic transitions, whereas neither hemisphere clearly dominates the discrimination of nonspeech sounds with fast acoustic transitions. In contrast, the perception of speech stimuli with similarly rapid acoustic transitions was dominated by the left hemisphere, which may be explained by the presence of acoustic templates (long-term memory traces) for speech sounds formed in this hemisphere.
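The mismatch response, whether electric (MMN) or magnetic (MMNm), is conventionally quantified as a deviant-minus-standard difference wave. A minimal sketch, assuming epoched, baseline-corrected sensor data are already available as NumPy arrays:

```python
import numpy as np

def mismatch_response(standard_epochs, deviant_epochs):
    """Compute the mismatch (deviant - standard) difference wave.

    Both inputs: arrays of shape (n_trials, n_channels, n_times) of
    epoched MEG/EEG data, baseline-corrected.
    """
    standard_erp = standard_epochs.mean(axis=0)  # average over trials
    deviant_erp = deviant_epochs.mean(axis=0)
    return deviant_erp - standard_erp            # (n_channels, n_times)

# Hemispheric dominance can then be probed by comparing the peak of the
# difference wave over left- vs. right-hemisphere sensor groups.
```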

3.
The neural substrates underlying speech perception are still not well understood. Previously, we found dissociation of speech and nonspeech processing at the earliest cortical level (AI), using speech and nonspeech complexity dimensions. Acoustic differences between speech and nonspeech stimuli in imaging studies, however, confound the search for linguistic-phonetic regions. Presently, we used sinewave speech (SWsp) and nonspeech (SWnon), which replace speech formants with sinewave tones, in order to match acoustic spectral and temporal complexity while contrasting phonetics. Chord progressions (CP) were used to remove the effects of auditory coherence and object processing. Twelve normal right-handed volunteers were scanned with fMRI while listening to SWsp, SWnon, CP, and a baseline condition arranged in blocks. Only two brain regions, in bilateral superior temporal sulcus, extending more posteriorly on the left, were found to prefer the SWsp condition after accounting for acoustic modulation and coherence effects. Two regions responded preferentially to the more frequency-modulated stimuli, including one that overlapped the right temporal phonetic area and another in the left angular gyrus far from the phonetic area. These findings are proposed to form the basis for the two subtypes of auditory word deafness. Several brain regions, including auditory and non-auditory areas, preferred the coherent auditory stimuli and are likely involved in auditory object recognition. The design of the current study allowed for separation of acoustic spectrotemporal, object recognition, and phonetic effects, resulting in distinct and overlapping components.

4.
The focus of our magnetoencephalographic (MEG) study was to obtain further insight into the neuronal organization of language processing in stutterers. We recorded neuronal activity of 10 male developmental stutterers and 10 male controls, while they listened to pure tones, to words in order to repeat them, and to sentences in order to either repeat or transform them into passive form. Stimulation with pure tones resulted in similar activation patterns in the two groups, but differences emerged in the more complex auditory language tasks. In the stutterers, the left inferior frontal cortex was activated for a short while from 95 to 145 ms after sentence onset; this was evident neither in the controls nor in either group during the word task. In both subject groups, the left rolandic area was activated when listening to the speech stimuli, but in the stutterers, there was an additional activation of the right rolandic area from 315 ms onwards, which was more pronounced in the sentence than in the word task. Activation of areas typically associated with language production was thus observed during speech perception as well, both in controls and in stutterers. Previous research on speech production in stutterers has found abnormalities in both the amount and timing of activation in these areas. The present data suggest that activation in the left inferior frontal and right rolandic areas in stutterers differs from that in controls during speech perception as well.

5.
In a recent electroencephalography (EEG) study (Takeichi et al., 2007a), we developed a new technique for assessing speech comprehension using speech degraded by m-sequence modulation and found a correlation peak with a 400-ms delay. This peak depended on the comprehensibility of the modulated speech sounds. Here we report the results of a functional magnetic resonance imaging (fMRI) experiment comparable to our previous EEG experiment. We examined brain areas related to verbal comprehension of the modulated speech sound to determine which neural system processes this modulated speech. A non-integer, alternating-block factorial design was used with 23 Japanese-speaking participants, with time reversal and m-sequence modulation as factors. A main effect of time reversal was found in the left temporal cortex along the superior temporal sulcus (BA21 and BA39), left precentral gyrus (BA6) and right inferior temporal gyrus (BA21). A main effect of modulation was found in the left postcentral gyrus (BA43) and the right medial frontal gyri (BA6) as an increase by modulation, and in the left temporal cortex (BA21, 39), parahippocampal gyrus (BA34), posterior cingulate (BA23), caudate and thalamus and right superior temporal gyrus (BA38) as a decrease by modulation. An interaction effect associated specifically with non-modulated speech was found in the left frontal gyrus (BA47), left occipital cortex in the cuneus (BA18), left precuneus (BA7, 31), right precuneus (BA31) and right thalamus (forward > reverse). The other interaction effect, associated specifically with modulation of the speech sound, was found in the inferior frontal gyrus in the opercular area (BA44) (forward > reverse). Estimated scalp projection of the component correlation function (Cao et al., 2002) for the corresponding EEG data (Takeichi et al., 2007a) showed leftward dominance. Hence, activities in the superior temporal sulcus (BA21 and BA39), which are commonly observed for speech processing, as well as in the left precentral gyrus (BA6) and left inferior frontal gyrus in the opercular area (BA44), are suggested to contribute to the comprehension-related EEG signal.
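The m-sequence technique rests on cross-correlating the recorded EEG with the binary modulation sequence and looking for a delayed correlation peak (around 400 ms here). The toy sketch below only illustrates that cross-correlation logic with synthetic data; the published analysis (component correlation functions, Cao et al., 2002) is considerably more involved.

```python
import numpy as np
from scipy.signal import max_len_seq, correlate

fs = 1000  # assumed sampling rate (Hz)

# Binary maximum-length sequence mapped to +/-1, one value per sample.
mseq = max_len_seq(10)[0] * 2.0 - 1.0        # length 2**10 - 1 = 1023
speech = np.random.randn(len(mseq))          # stand-in for a speech waveform
modulated = speech * mseq                    # m-sequence amplitude modulation

# Stand-in "EEG": the modulation sequence re-appearing 400 ms later plus
# noise, mimicking a comprehension-related response component.
eeg = np.roll(mseq, int(0.4 * fs)) + np.random.randn(len(mseq))

# Cross-correlate the EEG with the modulation sequence; locate the peak lag.
xcorr = correlate(eeg, mseq, mode='full')
lags = np.arange(-len(mseq) + 1, len(mseq))
peak_lag_ms = lags[np.argmax(xcorr)] / fs * 1000
print(f"peak correlation at {peak_lag_ms:.0f} ms")   # ~400 ms
```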

6.
Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250 ms duration sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether/when the brain discriminates between sound categories even when the discrimination is not expressed behaviorally. We applied a single-trial classifier that identified voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174 ms post-stimulus period. Sounds that were correctly categorized as that of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level. Subjects exhibited differential activity before the sound ended (starting at 112 ms) and during a separate period at ~270 ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpreted the first as being related to an implicit tuning for sound representations and the second as being linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds during early stages and independently of behavioral proficiency, and that explicit sound categorization requires a subsequent processing stage.
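The study used a dedicated single-trial topographic classifier; as a generic stand-in, the sketch below decodes category labels from the mean voltage topography in a chosen window (e.g. the 116-174 ms period reported above) with a cross-validated logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_by_window(epochs, labels, times, t_start, t_end):
    """Decode semantic category from single-trial voltage topographies.

    epochs: (n_trials, n_channels, n_times) EEG array; labels:
    (n_trials,) category codes (e.g. living=0, man-made=1); times:
    (n_times,) epoch time axis in seconds. Features are the mean
    topography within the window, one value per channel.
    """
    win = (times >= t_start) & (times <= t_end)
    X = epochs[:, :, win].mean(axis=2)        # (n_trials, n_channels)
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, labels, cv=5)
    return scores.mean()

# e.g. acc = decode_by_window(epochs, labels, times, 0.116, 0.174)
```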

7.
The length of a vocal tract is reflected in the sounds it produces. The length of the vocal tract is correlated with body size, and humans are very good at making size judgments based on the acoustic effect of vocal tract length alone. Here we investigate the underlying mechanism for processing this main auditory cue to size information in the human brain. Sensory encoding of the acoustic effect of vocal tract length (VTL) depends on a time-stabilized spectral scaling mechanism that is independent of glottal pulse rate (GPR, or voice pitch); we provide evidence that a potential neural correlate for such a mechanism exists in the medial geniculate body (MGB). The perception of the acoustic effect of speaker size is influenced by GPR, suggesting an interaction between VTL and GPR processing; such an interaction occurs only at the level of non-primary auditory cortex in planum temporale and anterior superior temporal gyrus. Our findings support a two-stage model for the processing of size information in speech based on an initial stage of sensory analysis as early as MGB, and a neural correlate of the perception of source size in non-primary auditory cortex.
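The "spectral scaling" cue mentioned above is, to first approximation, a stretch of the spectral envelope along the frequency axis. The crude sketch below scales an entire short-time spectrum; note that a proper VTL manipulation (e.g. a STRAIGHT-style vocoder, as is typical in this literature) scales only the envelope while leaving GPR untouched, which this simplification does not do.

```python
import numpy as np

def scale_spectrum(frame, scale):
    """Shift a short-time magnitude spectrum along the frequency axis
    by factor `scale`, mimicking the acoustic effect of a shorter
    (scale > 1) or longer (scale < 1) vocal tract.

    Caveat: this scales the fine structure (hence pitch) as well; a
    true VTL manipulation would rescale only the spectral envelope.
    """
    spec = np.fft.rfft(frame)
    bins = np.arange(len(spec))
    # Read the original magnitude at f/scale -> spectrum stretched by scale.
    mag = np.interp(bins / scale, bins, np.abs(spec))
    phase = np.angle(spec)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame))
```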

8.
Functional specialization of the human auditory cortex in processing phonetic vs musical sounds was investigated. While subjects watched a silent self-selected movie, they were presented with sequences consisting of frequent and infrequent phonemes (/e/ and /o/, respectively) or chords (A major and A minor, respectively). The subjects' brain responses to these sounds were recorded with a 122-channel whole-head magnetometer. The data indicated that within the right hemisphere, the magnetoencephalographic (MEG) counterpart MMNm of the mismatch negativity (MMN) elicited by an infrequent chord change was stronger than the MMNm elicited by a phoneme change. Within the left hemisphere, the MMNm strength for a chord vs phoneme change did not significantly differ. Furthermore, the MMNm sources for the phoneme and chord changes were posterior to the P1m sources generated at or near the primary auditory areas. In addition, the MMNm source for a phoneme change was superior to that for the chord change in both hemispheres. The data thus provide evidence for spatially distinct cortical areas in both hemispheres specialized in representing phonetic and musical sounds.

9.
To delineate whether the change in cortical excitability persists across migraine attacks, visual evoked magnetic fields (VEF) were measured in patients with migraine without aura during the interictal (n = 26) or peri-ictal (n = 21) periods, and were compared with 30 healthy controls. The visual stimuli were checkerboard reversals with four different check sizes (15', 30', 60' and 120'). For each check size, five sequential blocks of 50 VEF responses were recorded to calculate the percentage change of the P100m amplitude in the second to the fifth blocks in comparison with the first block. At check size 120', interictal patients showed a larger amplitude increment than controls [28.1 ± 38.3% (s.d.) vs. 8.7 ± 21.3%] in the second block and a larger increment than peri-ictal patients in the second (28.1 ± 38.3% vs. −3.2 ± 19.2%), fourth (22.7 ± 31.2% vs. −5.7 ± 22.3%) and fifth (20.5 ± 30.4% vs. −10.8 ± 30.1%) blocks (P < 0.05). There was no significant difference at other check sizes or between peri-ictal patients and controls. In conclusion, there may be peri-ictal normalization of visual cortical excitability changes in migraine that is dependent on the spatial frequency of the stimuli and reflects a dynamic modulation of cortical activities.
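The block-wise amplitude measure is simple arithmetic: each later block's P100m amplitude is expressed as a percentage change from the first block. A sketch with hypothetical amplitude values:

```python
import numpy as np

def p100m_percent_change(block_amplitudes):
    """Percentage change of P100m amplitude in blocks 2-5 vs. block 1.

    block_amplitudes: array of shape (5,) holding the mean P100m
    amplitude of each sequential block of 50 VEF responses.
    """
    baseline = block_amplitudes[0]
    return (block_amplitudes[1:] - baseline) / baseline * 100.0

# Example with hypothetical amplitudes:
print(p100m_percent_change(np.array([50.0, 64.0, 58.0, 61.0, 60.0])))
# -> [28. 16. 22. 20.]  % change in blocks 2-5 relative to block 1
```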

10.
Learning new sounds of speech: reallocation of neural substrates
Golestani N, Zatorre RJ. NeuroImage 2004;21(2):494-506
Functional magnetic resonance imaging (fMRI) was used to investigate changes in brain activity related to phonetic learning. Ten monolingual English-speaking subjects were scanned while performing an identification task both before and after five sessions of training with a Hindi dental-retroflex nonnative contrast. Behaviorally, training resulted in an improvement in the ability to identify the nonnative contrast. Imaging results suggest that the successful learning of a nonnative phonetic contrast results in the recruitment of the same areas that are involved during the processing of native contrasts, including the left superior temporal gyrus, insula-frontal operculum, and inferior frontal gyrus. Additionally, results of correlational analyses between behavioral improvement and the blood-oxygenation-level-dependent (BOLD) signal obtained during the posttraining Hindi task suggest that the degree of success in learning is accompanied by more efficient neural processing in classical frontal speech regions, and by a reduction of deactivation relative to a noise baseline condition in left parietotemporal speech regions.
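The correlational analysis pairs one behavioral score per subject with one BOLD value per subject and region. A minimal sketch, assuming the region-wise BOLD values have already been extracted:

```python
import numpy as np
from scipy.stats import pearsonr

def behavior_bold_correlations(improvement, bold_by_region):
    """Correlate per-subject behavioral improvement with post-training
    BOLD signal in each region of interest.

    improvement: (n_subjects,) change in identification accuracy.
    bold_by_region: dict mapping region name -> (n_subjects,) BOLD values.
    Returns {region: (Pearson r, p-value)}.
    """
    return {region: pearsonr(improvement, bold)
            for region, bold in bold_by_region.items()}

# Hypothetical usage: r, p = behavior_bold_correlations(imp, rois)["left STG"]
```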

11.
We set out to determine whether changes in resting-state cortico-cortical functional connectivity are a feature of early-stage Parkinson's disease (PD), explore how functional coupling might evolve over the course of the disease and establish its relationship with clinical deficits. Whole-head magnetoencephalography was performed in an eyes-closed resting-state condition in 70 PD patients with varying disease duration (including 18 recently diagnosed, drug-naive patients) in an "OFF" medication state and 21 controls. Neuropsychological testing was performed in all subjects. Data analysis involved calculation of three synchronization likelihood (SL) measures (SL being a general measure of linear and nonlinear temporal correlations between time series), which reflect functional connectivity within (local) and between (intrahemispheric and interhemispheric) ten major cortical regions in five frequency bands. Recently diagnosed, drug-naive patients showed an overall increase in alpha1 SL relative to controls. Cross-sectional analysis in all patients revealed that disease duration was positively associated with alpha2 and beta SL measures, while severity of parkinsonism was positively associated with theta and beta SL measures. Moderately advanced patients had increases in theta, alpha1, alpha2 and beta SL, particularly with regard to local SL. In recently diagnosed patients, cognitive perseveration was associated with increased interhemispheric alpha1 SL. Increased resting-state cortico-cortical functional connectivity in the 8-10 Hz alpha range is a feature of PD from the earliest clinical stages onward. With disease progression, neighboring frequency bands become increasingly involved. These findings suggest that changes in functional coupling over the course of PD may be linked to the topographical progression of pathology over the brain.
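Synchronization likelihood is defined on time-delay embeddings of two signals: at each time point one asks whether the moments at which channel k recurs to a similar state are also moments at which channel l does. The sketch below is a simplified two-channel variant (after Stam and van Dijk, 2002) that omits refinements such as the Theiler window used in practice.

```python
import numpy as np

def sync_likelihood(x, y, m=10, lag=10, p_ref=0.05):
    """Simplified two-channel synchronization likelihood.

    Embeds both signals, finds for each time point the nearest p_ref
    fraction of embedded vectors in each channel, and measures how
    often the two channels' recurrences coincide. SL ~ p_ref for
    independent signals, ~1 for fully synchronized ones.
    """
    n = len(x) - (m - 1) * lag
    # Time-delay embedding: (n, m) matrices of state vectors.
    Ex = np.array([x[i:i + m * lag:lag] for i in range(n)])
    Ey = np.array([y[i:i + m * lag:lag] for i in range(n)])
    n_neigh = max(1, int(p_ref * (n - 1)))
    sl = np.empty(n)
    for i in range(n):
        dx = np.linalg.norm(Ex - Ex[i], axis=1)
        dy = np.linalg.norm(Ey - Ey[i], axis=1)
        dx[i] = dy[i] = np.inf                 # exclude self-matches
        nx = set(np.argsort(dx)[:n_neigh])     # nearest states, channel k
        ny = set(np.argsort(dy)[:n_neigh])     # nearest states, channel l
        sl[i] = len(nx & ny) / n_neigh         # coincident recurrences
    return sl.mean()

# Sanity check: identical signals give 1, independent noise gives ~p_ref.
t = np.linspace(0, 10, 600)
print(sync_likelihood(np.sin(t), np.sin(t)))                        # 1.0
print(sync_likelihood(np.random.randn(600), np.random.randn(600)))  # ~0.05
```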

12.
Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex.
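An ITD change of the kind used as the auditory-space deviant can be produced simply by delaying one ear's signal relative to the other. A minimal sketch (the delay value is illustrative):

```python
import numpy as np

def apply_itd(mono, itd_s, fs=44100):
    """Make a stereo signal with an interaural time difference: the
    right channel is delayed by itd_s seconds relative to the left."""
    delay = int(round(itd_s * fs))
    right = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    return np.column_stack([mono, right])  # (n_samples, 2)

# Illustrative 500-microsecond ITD on a 1 kHz tone:
fs = 44100
t = np.arange(fs) / fs
stereo = apply_itd(np.sin(2 * np.pi * 1000 * t), itd_s=500e-6, fs=fs)
```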

13.
Studies investigating the cerebral areas involved in visual processes generally oppose either different tasks or different stimulus types. This work addresses, by fMRI, the interaction between the type of task (discrimination vs. categorization) and the type of stimulus (Latin letters, well-known geometrical figures, and Korean letters). Behavioral data revealed that the two tasks did not differ in terms of percentage of errors or correct responses, but a delay of 185 ms was observed for the categorization task in comparison with the discrimination task. All conditions activated a common neural network that includes both striate and extrastriate areas, especially the fusiform gyri, the precunei, the insulae, and the dorsolateral frontal cortex. In addition, interaction analysis revealed that the right insula was sensitive to both tasks and stimuli, and that stimulus type induced several significant signal variations for the categorization task in the right frontal cortex, the right middle occipital gyrus, the right cuneus, and the left and right fusiform gyri, whereas for the discrimination task, significant signal variations were observed in the right occipito-parietal junction only. Finally, analyzing the latency of the BOLD signal also revealed differential neural dynamics according to task but not to stimulus type. These temporal differences suggest parallel intrahemispheric processing in the discrimination task vs. cooperative interhemispheric processing in the categorization task, which may reflect the observed differences in reaction time.

14.
Human speech perception is highly resilient to acoustic distortions. In addition to distortions from external sound sources, degradation of the acoustic structure of the sound itself can substantially reduce the intelligibility of speech. The degradation of the internal structure of speech happens, for example, when the digital representation of the signal is impoverished by reducing its amplitude resolution. Further, the perception of speech is also influenced by whether the distortion is transient, coinciding with speech, or is heard continuously in the background. However, the complex effects of the acoustic structure and continuity of the distortion on the cortical processing of degraded speech are unclear. In the present magnetoencephalography study, we investigated how the cortical processing of degraded speech sounds as measured through the auditory N1m response is affected by variation of both the distortion type (internal, external) and the continuity of distortion (transient, continuous). We found that when the distortion was continuous, the N1m was significantly delayed, regardless of the type of distortion. The N1m amplitude, in turn, was affected only when speech sounds were degraded with transient internal distortion, which resulted in larger response amplitudes. The results suggest that external and internal distortions of speech result in divergent patterns of activity in the auditory cortex, and that the effects are modulated by the temporal continuity of the distortion.
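Reducing the amplitude resolution of the digital representation, the "internal" distortion described above, amounts to re-quantizing the waveform to fewer bits. A sketch of uniform re-quantization; the exact degradation procedure of the study may differ.

```python
import numpy as np

def reduce_bit_depth(signal, n_bits):
    """Degrade a signal's internal structure by re-quantizing its
    amplitude to n_bits of resolution (uniform quantization in [-1, 1])."""
    levels = 2 ** n_bits
    s = np.clip(signal, -1.0, 1.0)
    return np.round((s + 1.0) / 2.0 * (levels - 1)) / (levels - 1) * 2.0 - 1.0

# A 1-bit version keeps only the sign structure of the waveform:
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 150 * t) * np.sin(2 * np.pi * 3 * t)
degraded = reduce_bit_depth(speech_like, n_bits=1)
```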

15.
Language production and perception imply motor system recruitment. Therefore, language should obey the theory of shared motor representation between self and other, by means of mirror-like systems. These mirror-like systems (the term referring to single-unit recordings in animals) have the property of being recruited both when performing and when perceiving a goal-directed action, whatever the sensory modality. This hypothesis supposes that a neural network for self-awareness is involved in distinguishing speech production from speech listening. We used fMRI to test this assumption in 12 healthy subjects, who performed two different block-design experiments. The first experiment showed involvement of a lateral mirror-like network in speech listening, including the ventral premotor cortex, superior temporal sulcus and inferior parietal lobule (IPL). The activity of this mirror-like network is associated with the perception of intelligible speech. The second experiment looked at a self-awareness network. It showed involvement of a medial resting-state network, including the medial parietal and medial prefrontal cortices, during the 'self-generated voice' condition, as opposed to passive speech listening. Our results support the idea that deactivation of this medial network, in association with modulation of the activity of the IPL (part of the mirror-like network previously described), is linked to self-awareness in speech processing. Overall, these results support the idea that self-awareness is engaged when distinguishing between speech production and speech listening situations, and may depend on these two different parieto-frontal networks.

16.
In dyslexia, it is consistently found that letter strings produce an abnormally weak or no response in the left occipitotemporal cortex. Time-sensitive imaging techniques have localized this deficit to the category-specific processing stage at about 150 ms after stimulus presentation. The typically reported behavioral impairments in dyslexia suggest that the lack of occipitotemporal activation is specific to reading. It could, however, also reflect a more general dysfunction in the left inferior occipitotemporal cortex or in the time window of category-specific activation (150 to 200 ms). As early cortical processing of faces follows a sequence practically identical to that for letter strings, both in location and in timing, we investigated these possibilities by comparing face-specific occipitotemporal activations in dyslexic and non-reading-impaired subjects. We found that both the stage of general visual feature analysis at about 100 ms and the earliest face-specific activation at about 150 ms were essentially normal in the dyslexic individuals. The present results emphasize the special nature of the occipitotemporal abnormality to letter strings in dyslexia. However, in behavioral tests dyslexic subjects were slower and more error-prone than non-reading-impaired subjects in judging the similarity of faces and geometrical shapes. This effect may be related to reduced activation of the right parietotemporal cortex at about 250 ms after stimulus onset.

17.
Yrttiaho S, May PJ, Tiitinen H, Alku P. NeuroImage 2011;55(3):1252-1259
Most speech sounds are periodic due to the vibration of the vocal folds. Non-invasive studies of the human brain have revealed a periodicity-sensitive population in the auditory cortex which might contribute to the encoding of speech periodicity. Since the periodicity of natural speech varies from (almost) periodic to aperiodic, one may argue that speech aperiodicity could similarly be represented by a dedicated neuron population. In the current magnetoencephalography study, cortical sensitivity to periodicity was probed with natural periodic vowels and their aperiodic counterparts in a stimulus-specific adaptation paradigm. The effects of intervening adaptor stimuli on the N1m elicited by the probe stimuli (the actual effective stimuli) were studied under interstimulus intervals (ISIs) of 800 and 200 ms. The results indicated a periodicity-dependent release from adaptation which was observed for aperiodic probes alternating with periodic adaptors under both ISIs. Such release from adaptation can be attributed to the activation of a distinct neural population responsive to aperiodic (probe) but not to periodic (adaptor) stimuli. Thus, the current results suggest that the aperiodicity of speech sounds may be represented not only by decreased activation of the periodicity-sensitive population but, additionally, by the activation of a distinct cortical population responsive to speech aperiodicity.

18.
Pain is a multidimensional phenomenon. Previous psychological studies have shown that a person's subjective pain threshold can change when certain emotions are recognized. We examined this association with magnetoencephalography. Magnetic field strength was recorded with a 306-channel neuromagnetometer while 19 healthy subjects (7 female, 12 male; age range 20-30 years) experienced pain stimuli in different emotional contexts induced by the presentation of sad, happy, or neutral facial stimuli. Subjects also rated their subjective pain intensity. We hypothesized that responses to the pain stimuli would be affected by sadness induced by the facial stimuli. We found: 1) the intensity of subjective pain ratings increased in the sad emotional context compared to the happy and the neutral contexts, and 2) event-related desynchronization of lower beta bands in the right hemisphere after pain stimuli was larger in the sad emotional condition than in the happy emotional condition. Previous studies have shown that event-related desynchronization in these bands can be consistently observed over the primary somatosensory cortex. These findings suggest that sadness can modulate neural responses to pain stimuli, and that brain processing of pain stimuli is affected as early as the primary somatosensory cortex, which is critical for the sensory processing of pain. PERSPECTIVE: We found that subjective pain ratings and cortical beta rhythms after pain stimuli are influenced by a sad emotional context. These results may contribute to understanding the broader relationship between pain and negative emotion.
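Event-related desynchronization in a beta band is typically computed by band-pass filtering, extracting instantaneous power, averaging over trials, and expressing power as a percentage change from a pre-stimulus baseline. A sketch under those standard conventions (band edges and baseline window are illustrative, not the study's exact parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_erd_percent(epochs, fs, baseline=(0.0, 0.5), band=(13.0, 20.0)):
    """Event-related (de)synchronization in a lower beta band.

    epochs: (n_trials, n_times) single-channel data with a pre-stimulus
    baseline at the start of each epoch. Returns ERD% over time;
    desynchronization shows up as negative values.
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype='bandpass')
    filtered = filtfilt(b, a, epochs, axis=1)
    power = np.abs(hilbert(filtered, axis=1)) ** 2   # instantaneous power
    avg_power = power.mean(axis=0)                   # average over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    p_base = avg_power[i0:i1].mean()
    return (avg_power - p_base) / p_base * 100.0
```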

19.
The inferior frontal and superior temporal areas in the left hemisphere are well known to be crucial for language processing in most right-handed individuals. This has been established by classical neurological investigations, and neuropsychological studies along with metabolic brain imaging have recently revealed converging evidence. Here, we use fast neurophysiological brain imaging, magnetoencephalography (MEG), and L1 minimum-norm current estimates to investigate the time course of cortical activation underlying the magnetic mismatch negativity elicited by a spoken word. Left superior-temporal areas became active 136 ms after the information in the acoustic input was sufficient for identifying the word, and activation of the left inferior-frontal cortex followed after an additional delay of 22 ms. By providing answers to the where- and when-questions of cortical activation, MEG recordings paired with current estimates of the underlying cortical sources may advance our understanding of the spatiotemporal dynamics of distributed neuronal networks involved in cognitive processing in the human brain.
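An L1 minimum-norm estimate solves a sparsity-penalized inverse problem: find source currents that explain the sensor data while minimizing the sum of absolute source amplitudes. The sketch below uses an off-the-shelf Lasso solver on a random toy forward model; the study's actual solver, regularization, and units differ.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_minimum_norm(leadfield, sensor_data, alpha=0.05):
    """L1 minimum-norm current estimate for one time sample.

    leadfield: (n_sensors, n_sources) forward model; sensor_data:
    (n_sensors,) MEG measurement. The L1 penalty favors sparse, focal
    source configurations, in contrast with the diffuse solutions of
    the classical L2 minimum norm.
    """
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(leadfield, sensor_data)
    return model.coef_  # (n_sources,) estimated source amplitudes

# Toy example: 30 sensors, 200 candidate sources, 2 truly active.
rng = np.random.default_rng(0)
L = rng.standard_normal((30, 200))
x_true = np.zeros(200)
x_true[[40, 120]] = [1.0, -0.8]
y = L @ x_true + 0.01 * rng.standard_normal(30)
x_hat = l1_minimum_norm(L, y)
```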

20.
Gow DW, Segawa JA, Ahlfors SP, Lin FH. NeuroImage 2008;43(3):614-623
Behavioral and functional imaging studies have demonstrated that lexical knowledge influences the categorization of perceptually ambiguous speech sounds. However, methodological and inferential constraints have so far left unresolved the question of whether this interaction takes the form of direct top-down influences on perceptual processing, or feedforward convergence during a decision process. We examined top-down lexical influences on the categorization of segments in a /s/-/ʃ/ continuum presented in different lexical contexts to produce a robust Ganong effect. Using integrated MEG/EEG and MRI data we found that, within a network identified by 40 Hz gamma phase locking, activation in the supramarginal gyrus associated with wordform representation influences phonetic processing in the posterior superior temporal gyrus during a period of time associated with lexical processing. This result provides direct evidence that lexical processes influence lower-level phonetic perception, and demonstrates the potential value of combining Granger causality analyses and high spatiotemporal resolution multimodal imaging data to explore the functional architecture of cognition.
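Granger causality asks whether the past of one region's time series improves prediction of another's beyond the target's own past. The sketch below shows the mechanics with the statsmodels implementation on synthetic ROI series; the study applied such analyses to source-localized MEG/EEG activity within the gamma phase-locked network, which this generic stand-in does not reproduce.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_influence(source_ts, target_ts, max_lag=5):
    """Test whether a source ROI (e.g. supramarginal gyrus) Granger-causes
    a target ROI (e.g. posterior STG).

    Both inputs: 1-D time series of equal length. Column order matters:
    statsmodels tests whether column 2 helps predict column 1 beyond
    the latter's own past. Returns the F-test p-value per lag.
    """
    data = np.column_stack([target_ts, source_ts])
    results = grangercausalitytests(data, maxlag=max_lag, verbose=False)
    return {lag: res[0]['ssr_ftest'][1] for lag, res in results.items()}

# Toy example: target lags the source by 2 samples, plus noise.
rng = np.random.default_rng(1)
src = rng.standard_normal(500)
tgt = np.roll(src, 2) + 0.5 * rng.standard_normal(500)
print(granger_influence(src, tgt))  # small p-values from lag 2 onward
```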
