Similar articles
Found 20 similar articles (search time: 46 ms)
1.
We investigated the functional neuroanatomy of vowel processing. We compared attentive auditory perception of natural German vowels to perception of nonspeech band-passed noise stimuli using functional magnetic resonance imaging (fMRI). More specifically, the mapping in auditory cortex of first and second formants was considered, which spectrally characterize vowels and are linked closely to phonological features. Multiple exemplars of natural German vowels were presented in sequences alternating either mainly along the first formant (e.g., [u]-[o], [i]-[e]) or along the second formant (e.g., [u]-[i], [o]-[e]). In fixed-effects and random-effects analyses, vowel sequences elicited more activation than did nonspeech noise in the anterior superior temporal cortex (aST) bilaterally. Partial segregation of different vowel categories was observed within the activated regions, suggestive of a speech sound mapping across the cortical surface. Our results add to the growing evidence that speech sounds, as one of the behaviorally most relevant classes of auditory objects, are analyzed and categorized in aST. These findings also support the notion of an auditory "what" stream, with highly object-specialized areas anterior to primary auditory cortex.
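The F1/F2 logic behind these stimulus sequences can be sketched numerically: each pair in the study alternates mainly along one formant while the other stays roughly constant. The formant values below are illustrative textbook-style approximations, not the study's actual German stimuli, and `dominant_formant` is a hypothetical helper:

```python
import math

# Illustrative (F1, F2) values in Hz for four German-like vowels.
# These are rough approximations, not measured stimulus values.
FORMANTS = {
    "u": (320, 800),
    "o": (500, 1000),
    "i": (300, 2300),
    "e": (450, 2100),
}

def dominant_formant(v1, v2):
    """Return 'F1' or 'F2' depending on which formant separates the
    pair more on a log-frequency scale."""
    f1a, f2a = FORMANTS[v1]
    f1b, f2b = FORMANTS[v2]
    d1 = abs(math.log(f1a) - math.log(f1b))
    d2 = abs(math.log(f2a) - math.log(f2b))
    return "F1" if d1 > d2 else "F2"

print(dominant_formant("u", "o"))  # pair described as alternating along F1
print(dominant_formant("u", "i"))  # pair described as alternating along F2
```

With these approximate values, the [u]-[o] and [i]-[e] pairs indeed differ mainly in F1, and [u]-[i] and [o]-[e] mainly in F2, matching the sequence design described in the abstract.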

2.
Prenatal learning of speech rhythm and melody is well documented. Much less is known about the earliest acquisition of segmental speech categories. We tested whether newborn infants perceive native vowels, but not nonspeech sounds, through some existing (proto-)categories, and whether they do so more robustly for some vowels than for others. Sensory event-related potentials (ERPs) and mismatch responses (MMRs) were obtained from 104 neonates acquiring Czech. The ERPs elicited by vowels were larger than the ERPs to nonspeech sounds, and reflected the differences between the individual vowel categories. The MMRs to changes in vowels, but not in nonspeech sounds, revealed left-lateralized asymmetrical processing patterns: a change from a focal [a] to a nonfocal [ɛ], and a change from short [ɛ] to long [ɛ:], elicited more negative MMR responses than the reverse changes. Contrary to predictions, we did not find evidence of a developmental advantage for vowel length contrasts (supposedly most readily available in utero) over vowel quality contrasts (supposedly less salient in utero). An explanation for these asymmetries in terms of a differential degree of prior phonetic warping of speech sounds is proposed. Future studies with newborns from different language backgrounds should test whether the prenatal learning scenario proposed here is plausible.

3.
Schizophrenia is associated with dysfunction in language processing. At the earliest stage of language processing, dysfunction of categorical perception of speech sounds in schizophrenia has been demonstrated in a behavioral task. The aim of this study was to assess automatic categorical perception of speech sounds as reflected by event-related changes in magnetic field power in schizophrenia. Using a whole-head magnetoencephalographic recording, the magnetic counterpart of mismatch negativity (MMNm) elicited by a phonetic change was evaluated in 16 right-handed patients with chronic schizophrenia and in 19 age-, sex-, and parental socioeconomic status-matched normal control subjects. Three types of MMNm (MMNm in response to a duration decrement of pure-tone stimuli; a vowel within-category change [duration decrement of Japanese vowel /a/]; vowel across-category change [Japanese vowel /a/ versus /o/]) were recorded. While the schizophrenia group showed an overall reduction in magnetic field power of MMNm, a trend was found toward more distinct abnormalities under the condition of vowel across-category change than under that of duration decrement of a vowel or tone. The patient group did not show abnormal asymmetries of MMNm power under any of the conditions. This study provides physiological evidence for impaired categorical perception of speech sounds in the bilateral auditory cortex in schizophrenia. The language-related dysfunction in schizophrenic patients may be present at the early stage of auditory processing of relatively simple stimuli such as phonemes, and not just at stages involving higher order semantic processes.

4.
The mismatch negativity (MMN) component of auditory event-related potentials (ERPs) was recorded in four aphasic patients and in age-, gender- and education-matched controls. The MMN changes elicited by tone, vowel, stop-consonant voicing and place-of-articulation contrasts were recorded over both hemispheres. Differences in MMN amplitude, latency and distribution between aphasics and controls were analyzed in detail. An extensive neuropsychological investigation was performed to highlight the assumed dissociation, and possible interactions, between impaired acoustic/phonetic perception and deficient comprehension in aphasic patients. Our principal finding was that the MMN elicited by pitch deviations is not sensitive enough to distinguish between patients and age-matched controls. The MMN elicited by consonant contrasts was found to be the most vulnerable in the aphasic patients investigated. The MMN elicited by voicing ([ba:] vs. [pa:]) and place-of-articulation ([ba:] vs. [ga:]) contrasts was characterized by complete absence, or by a distorted or very limited distribution, and correlated with the patients' performance in the behavioral phoneme discrimination task. The magnitude of the deficient phoneme (vowel and consonant contrast) processing shown by the MMN anomalies was proportionally related to non-word discrimination and did not interact with word discrimination performance. The impact of deficient speech sound processing on higher-level processes may depend on the type of aphasia, while the presence of perceptual deficits in processing acoustic/phonetic contrasts seems to be independent of the type of aphasia.

5.
Congenital amusia is a disorder in the perception and production of musical pitch. It has been suggested that early exposure to a tonal language may compensate for the pitch disorder (Peretz, 2008). If so, it is reasonable to expect different characterizations of pitch perception in music and speech in congenital amusics who speak a tonal language, such as Mandarin. In this study, a group of 11 adults with amusia whose first language was Mandarin was tested with melodic contour and speech intonation discrimination and identification tasks. The participants with amusia were impaired in discriminating and identifying melodic contour. These abnormalities were also detected when identifying both speech and speech-derived non-linguistic analogue patterns in the Mandarin intonation tasks. In addition, there was an overall trend for the participants with amusia to show deficits relative to controls in the intonation discrimination tasks for both speech and non-linguistic analogues. These findings suggest that the amusics' melodic pitch deficits may extend to the perception of speech, and could potentially result in some language deficits in those who speak a tonal language.

6.
《Clinical neurophysiology》2010,121(4):533-541
Objective: Several studies have explored the processing specificity of music and speech, but only a few have addressed the processing autonomy of their fundamental components: pitch and phonemes. Here, we examined the additivity of the mismatch negativity (MMN), indexing early interactions between vowels and pitch when sung.
Methods: Event-related potentials (ERPs) were recorded while participants heard frequent sung vowels and rare stimuli deviating in pitch only, in vowel only, or in both pitch and vowel. The task was to watch a silent movie while ignoring the sounds.
Results: All three types of deviants elicited both an MMN and a P3a ERP component. The observed MMNs were of similar amplitude for the three types of deviants, and the P3a was larger for double deviants. The MMNs to deviance in vowel and deviance in pitch were not additive.
Conclusions: The underadditivity of the MMN responses suggests that vowel and pitch differences are processed by interacting neural networks.
Significance: The results indicate that vowel and pitch are processed as integrated units, even at a pre-attentive level. Music-processing specificity thus rests on more complex dimensions of music and speech.
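The additivity logic tested in this study can be sketched arithmetically: under independent processing, the double-deviant MMN should approximate the sum of the two single-deviant MMNs. The amplitudes below (in µV) are hypothetical, chosen only to mirror the reported pattern of three similar-sized MMNs; `additivity_index` is an illustrative helper, not the paper's analysis:

```python
def additivity_index(mmn_pitch, mmn_vowel, mmn_double):
    """Ratio of the observed double-deviant MMN to the predicted (summed)
    single-deviant MMNs. Values clearly below 1 indicate underadditivity,
    i.e. interacting rather than independent neural networks."""
    predicted = mmn_pitch + mmn_vowel
    return mmn_double / predicted

# Hypothetical amplitudes: all three deviants elicit a similar-sized MMN,
# so the double deviant falls far short of the additive prediction.
idx = additivity_index(mmn_pitch=-2.0, mmn_vowel=-2.1, mmn_double=-2.2)
print(round(idx, 2))  # 0.54
```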

7.
8.
The present study uses electroencephalography (EEG) and a new stimulation paradigm, the 'continuous stimulation paradigm', to investigate the neural correlate of phonological processing in human auditory cortex. Evoked responses were recorded to stimuli consisting of a control sound (1000 ms) immediately followed by a test sound (150 ms). On half of the trials, the control sound was a noise and the test sound a vowel; to control for unavoidable effects of spectral change at the transition, the roles of the stimuli were reversed on the other half of the trials. The acoustical properties of the vowel and noise sounds were carefully matched to isolate the response specific to phonological processing. As the unspecific response to sound energy onset has subsided by the transition to the test sound, we hypothesized that the transition response from a noise to a vowel would reveal vowel-specific processing. Contrary to this expectation, however, the most striking difference between vowel and noise processing was a large, vertex-negative sustained response to the vowel control sound, which had a fast onset (30-50 ms) and remained constant throughout presentation of the vowel. The vowel-specific response was isolated using a subtraction technique analogous to that commonly applied in neuroimaging studies. This similarity in analysis methodology enabled close comparison of the EEG data collected in the present study with the relevant functional magnetic resonance imaging (fMRI) literature. Dipole source analysis revealed the vowel-specific component to be located anterior and inferior to primary auditory cortex, consistent with previous data investigating speech processing with fMRI.
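The subtraction logic used to isolate the vowel-specific response can be sketched on synthetic waveforms: subtracting the noise-evoked response from the vowel-evoked response cancels the shared onset transient and leaves the sustained component. The waveform shapes below are invented for illustration and do not reproduce the study's data:

```python
import numpy as np

# Synthetic 1 s epoch sampled at 1 kHz.
t = np.linspace(0.0, 1.0, 1000)

# Both conditions share a transient onset response (a Gaussian bump);
# the vowel additionally evokes a sustained negativity from ~40 ms on.
noise_erp = -0.5 * np.exp(-((t - 0.1) / 0.05) ** 2)
vowel_erp = noise_erp - 1.0 * (t > 0.04)

# Subtraction cancels the shared onset response and isolates the
# sustained vowel-specific component.
vowel_specific = vowel_erp - noise_erp
print(vowel_specific.min(), vowel_specific.max())
```

The design choice mirrors the abstract's point: because the shared acoustic-onset response appears identically in both conditions, only the component unique to the vowel survives the subtraction.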

9.
Two experiments were carried out involving the measurement of simple reaction time when subjects responded to speech and to non-speech stimuli. In the first, subjects were required to make a speech response (uttering the vowel [a:]) to one speech stimulus (the vowel [a:]) and three non-speech stimuli (a complex tone, a telephone bell and a click). The click stimulus gave significantly longer reaction times than the other three stimuli; since all stimuli were equated for peak intensity delivered to the subjects' ears, this was due to the short duration of the click (25 msec). There was no evidence that compatibility between the speech stimulus and the speech response had any influence on reaction time. The second experiment employed a 2 × 2 design with two stimuli and two response modes. The stimuli were the vowel [a:] and the telephone bell; the response modes were key-pressing and uttering the vowel [a:]. The speech stimulus and the speech response gave significantly longer reaction times than the non-speech stimulus and response. The minimum time for a reaction requiring speech reception is of the order of 180 msec, and the use of the motor speech mechanism adds about 30 msec to reaction time. Again, no interaction was found between stimulus and response, probably owing to the extremely simple nature of the speech tasks imposed.
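The reported decomposition can be written as a back-of-envelope additive model. The 180 msec and 30 msec figures come from the abstract; the strictly additive model itself (and the absence of any stimulus-response interaction term, which matches the null interaction reported) is the illustrative assumption here:

```python
# Additive reaction-time model: a baseline speech-reception time plus an
# extra motor cost only when the response is spoken. Figures (msec) are
# taken from the abstract; the additive structure is an assumption.
SPEECH_RECEPTION_MS = 180
MOTOR_SPEECH_COST_MS = 30

def predicted_rt(response_mode):
    """Predicted minimum reaction time in msec for 'key' or 'speech'."""
    extra = MOTOR_SPEECH_COST_MS if response_mode == "speech" else 0
    return SPEECH_RECEPTION_MS + extra

print(predicted_rt("key"))     # 180
print(predicted_rt("speech"))  # 210
```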

10.
Language comprehension depends on tight functional interactions between distributed brain regions. While these interactions are established for semantic and syntactic processes, the functional network of speech intonation – the linguistic variation of pitch – has scarcely been defined. Particularly little is known about intonation in tonal languages, in which pitch not only serves intonation but also expresses meaning via lexical tones. The present study used psychophysiological interaction analyses of functional magnetic resonance imaging data to characterise the neural networks underlying intonation and tone processing in native Mandarin Chinese speakers. Participants categorised either the intonation or the tone of monosyllabic Mandarin words that gradually varied between statement and question and between Tone 2 and Tone 4. Intonation processing induced bilateral fronto-temporal activity and increased functional connectivity between the left inferior frontal gyrus and bilateral temporal regions, likely linking auditory perception and labelling of intonation categories in a phonological network. Tone processing induced bilateral temporal activity, associated with the auditory representation of tonal (phonemic) categories. Together, the present data demonstrate the breadth of the functional intonation network in a tonal language, including higher-level phonological processes in addition to auditory representations common to both intonation and tone.
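The core of a psychophysiological interaction (PPI) analysis is an interaction regressor: the element-wise product of a seed region's (mean-centred) time course and a task regressor, entered into a design matrix alongside both main effects. The sketch below uses random data and omits the haemodynamic deconvolution step that a proper PPI pipeline performs; the block lengths and condition coding are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical seed-region time course (100 scans) and a task regressor
# coding two conditions (e.g. intonation vs. tone blocks, 50 scans each).
seed_ts = rng.standard_normal(100)
task = np.repeat([1.0, -1.0], 50)

# PPI regressor: product of the mean-centred physiological signal and
# the psychological variable. (Real PPI deconvolves BOLD first.)
seed_centred = seed_ts - seed_ts.mean()
ppi = seed_centred * task

# Design matrix: psychological, physiological, and interaction columns.
design = np.column_stack([task, seed_centred, ppi])
print(design.shape)  # (100, 3)
```

Voxels whose time courses load on the `ppi` column, over and above the two main effects, are those whose coupling with the seed changes between conditions.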

11.
Research in auditory neuroscience has largely neglected the possible effects of different listening tasks on activations of auditory cortex (AC). In the present study, we used high‐resolution fMRI to compare human AC activations with sounds presented during three auditory and one visual task. In all tasks, subjects were presented with pairs of Finnish vowels, noise bursts with pitch and Gabor patches. In the vowel pairs, one vowel was always either a prototypical /i/ or /ae/ (separately defined for each subject) or a nonprototype. In different task blocks, subjects were either required to discriminate (same/different) vowel pairs, to rate vowel “goodness” (first/second sound was a better exemplar of the vowel class), to discriminate pitch changes in the noise bursts, or to discriminate Gabor orientation changes. We obtained distinctly different AC activation patterns to identical sounds presented during the four task conditions. In particular, direct comparisons between the vowel tasks revealed stronger activations during vowel discrimination in the anterior and posterior superior temporal gyrus (STG), while the vowel rating task was associated with increased activations in the inferior parietal lobule (IPL). We also found that AC areas in or near Heschl's gyrus (HG) were sensitive to the speech‐specific difference between a vowel prototype and nonprototype during active listening tasks. These results show that AC activations to speech sounds are strongly dependent on the listening tasks. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc.

12.
The present study aimed to find out how different stages of cortical auditory processing (sound encoding, discrimination, and orienting) are affected in children with autism. To this end, auditory event-related potentials (ERP) were studied in 15 children with autism and their controls. Their responses were recorded for pitch, duration, and vowel changes in speech stimuli, and for corresponding changes in the non-speech counterparts of the stimuli, while the children watched silent videos and ignored the stimuli. The responses to sound repetition were diminished in amplitude in the children with autism, reflecting impaired sound encoding. The mismatch negativity (MMN), an ERP indexing sound discrimination, was enhanced in the children with autism as far as pitch changes were concerned. This is consistent with earlier studies reporting auditory hypersensitivity and good pitch-processing abilities, as well as with theories proposing enhanced perception of local stimulus features in individuals with autism. The discrimination of duration changes was impaired in these children, however. Finally, involuntary orienting to sound changes, as reflected by the P3a ERP, was more impaired for speech than non-speech sounds in the children with autism, suggesting deficits particularly in social orienting. This has been proposed to be one of the earliest symptoms to emerge, with pervasive effects on later development.

13.
Many speech sounds, such as vowels, exhibit a characteristic pattern of spectral peaks, referred to as formants, the frequency positions of which depend both on the phonological identity of the sound (e.g. vowel type) and on the vocal‐tract length of the speaker. This study investigates the processing of formant information relating to vowel type and vocal‐tract length in human auditory cortex by measuring electroencephalographic (EEG) responses to synthetic unvoiced vowels and spectrally matched noises. The results revealed specific sensitivity to vowel formant information in both anterior (planum polare) and posterior (planum temporale) regions of auditory cortex. The vowel‐specific responses in these two areas appeared to have different temporal dynamics; the anterior source produced a sustained response for as long as the incoming sound was a vowel, whereas the posterior source responded transiently when the sound changed from a noise to a vowel, or when there was a change in vowel type. Moreover, the posterior source appeared to be largely invariant to changes in vocal‐tract length. The current findings indicate that the initial extraction of vowel type from formant information is complete by the level of non‐primary auditory cortex, suggesting that speech‐specific processing may involve primary auditory cortex, or even subcortical structures. This challenges the view that specific sensitivity to speech emerges only beyond unimodal auditory cortex.
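The vocal-tract-length (VTL) invariance at issue here can be sketched with a standard first-order approximation: when VTL changes, all formant frequencies scale by a common factor, so formant ratios (log-formant spacings) are preserved and can serve as a VTL-invariant cue to vowel type. The specific formant values and the 20% VTL difference below are illustrative assumptions, not the study's stimuli:

```python
import math

def scale_formants(formants_hz, vtl_ratio):
    """First-order model: a shorter vocal tract (vtl_ratio < 1) shifts
    every formant up by the same factor 1/vtl_ratio."""
    return [f / vtl_ratio for f in formants_hz]

def log_spacing(formants_hz):
    """Log-frequency gaps between successive formants; invariant under
    uniform scaling, so it characterises the vowel, not the speaker."""
    return [math.log(formants_hz[i + 1] / formants_hz[i])
            for i in range(len(formants_hz) - 1)]

adult = [500.0, 1000.0, 2500.0]       # hypothetical F1-F3 for one vowel
child = scale_formants(adult, 0.8)    # 20% shorter vocal tract

same_shape = all(math.isclose(a, b)
                 for a, b in zip(log_spacing(adult), log_spacing(child)))
print(same_shape)  # True
```

This is one way to make sense of a posterior source that is "largely invariant to changes in vocal-tract length": a representation based on such scale-invariant spectral shape would respond to vowel type while ignoring uniform formant shifts.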

14.
15.
This study aimed at determining how the human brain automatically processes phoneme categories irrespective of the large acoustic inter-speaker variability. Subjects were presented with 450 different speech stimuli, equally distributed across the [a], [i], and [u] vowel categories, each uttered by a different male speaker. A 306-channel magnetoencephalogram (MEG) was used to record the N1m, the magnetic counterpart of the N1 component of the auditory event-related potential (ERP). The N1m amplitude and source locations differed between vowel categories. We also found that the spectral dissimilarities were reproduced in the cortical representations of the large set of phonemes used in this study: vowels with similar spectral envelopes had closer cortical representations than those whose spectral differences were the largest. Our data further extend the notion of differential cortical representations in response to vowel categories, previously demonstrated using only one or a few tokens representing each category.
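The spectrum-dissimilarity idea can be sketched by reducing each vowel to a point in formant space and ranking pairs by distance: vowels whose spectral envelopes are more alike should map to nearer cortical representations. The (F1, F2) values below are rough textbook-style approximations, not measurements from the study's 450 stimuli:

```python
import math

# Illustrative (F1, F2) values in Hz for the three corner vowels.
FORMANTS = {"a": (800, 1200), "i": (300, 2300), "u": (320, 800)}

def spectral_distance(v1, v2):
    """Euclidean distance in the simplified F1/F2 envelope space."""
    return math.dist(FORMANTS[v1], FORMANTS[v2])

pairs = [("a", "i"), ("a", "u"), ("i", "u")]
ranked = sorted(pairs, key=lambda p: spectral_distance(*p))
print(ranked[0])  # the spectrally most similar pair under these values
```

Under this toy metric [a] and [u] are the most similar pair, so the study's account predicts their cortical representations should lie closest together, while the spectrally extreme pairs should be farthest apart.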

16.
17.
The principal aim of this study was to evaluate pre-schoolers' expressive intonation in light of current debates about the underlying nature of language impairment (LI). Children with LI typically have deficits in grammar, a component of language that is phonologically represented on the segmental level. The hypothesis is that children with LI do not have deficits of this type when grammar is conveyed by intonation, a pitch-based component of language that is phonologically represented on the suprasegmental level. This study focused on the richly diversified suprasegmental patterns of sentences in which the speaker produces a series of items in a list. To address the hypothesis, list intonation in the speech of 4-year-olds with and without LI was acoustically analysed. Lists produced by children with LI were comparable to those produced by children with normal language development (NL). The results do not support the claim that LI stems from a poor understanding of grammatical principles; rather, LI reflects an underlying impairment of segmental information processing. The discussion focuses on two characteristics of pitch contours that may account for the resilience of intonation in children with LI: steady-state versus transient signals, and universal symbolic meanings versus arbitrary relationships between form and function.

18.
《Clinical neurophysiology》2003,114(4):652-661
Objective: To determine the ERP characteristics and ERP indices of central speech sound encoding and discrimination in young children.
Methods: Auditory sensory event-related potentials (ERPs) and the ERP index of auditory sensory discrimination (the mismatch negativity, MMN) were elicited by vowel stimuli in 3-year-old children. In an oddball paradigm, the standard stimulus was the vowel /a/, one deviant stimulus was the vowel /o/ (an across-category change), and the other was a nasalized vowel /a/ (a within-category change). In addition, the ERP changes occurring during the 14 min of uninterrupted recording were examined.
Results: As indexed by the sensory P1, N2, and N4 peaks, the 3-year-old children's transient neural encoding of vowels was comparable to that earlier registered in 1-year-old children, but also showed vowel-specific characteristics observed in school-age children. The 3-year-olds' MMN was comparable in amplitude to the school-age children's MMN and appeared to be sensitive to the across-category aspects of vowel changes. However, its latency was longer in the 3-year-olds than in school-age children. Among the sensory ERPs, only the N4 peak showed significant diminution during the experiment. The across-category change MMN diminished after 10 min of recording, but over the frontal areas only.
Conclusions: In the 3-year-old children, the sensory processing of vowels exhibited transitional characteristics between those observed in infants and in school-age children. Auditory sensory discrimination in the 3-year-olds appeared to be sensitive to the phonemic aspects of stimulus change. The frontally predominant MMN diminution during the experiment might indicate the greater refractoriness of its frontal-lobe generators. In general, the auditory sensory ERPs show maturational profiles distinct from that of the MMN.
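The oddball paradigm described above can be sketched as a stimulus-sequence generator: a stream of standard /a/ tokens with two rare deviants, /o/ (across-category) and nasalized /a/ (within-category). The 80/10/10 proportions are an assumption for illustration; the abstract does not state the exact deviant probabilities:

```python
import random

def oddball_sequence(n, p_deviant=0.1, seed=0):
    """Generate an oddball stimulus stream: standard 'a' with two rare
    deviants, each occurring with probability p_deviant (assumed here)."""
    rng = random.Random(seed)
    stims = []
    for _ in range(n):
        r = rng.random()
        if r < p_deviant:
            stims.append("o")        # across-category deviant
        elif r < 2 * p_deviant:
            stims.append("a_nasal")  # within-category deviant
        else:
            stims.append("a")        # standard
    return stims

seq = oddball_sequence(1000)
print(round(seq.count("a") / len(seq), 2))  # roughly 0.8
```

The MMN is then computed by subtracting the average ERP to standards from the average ERP to each deviant type, so the rarity enforced here is what makes the deviant responses interpretable as change detection.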

19.
The present study investigated the effect of fear on paralinguistic aspects of speech in patients suffering from panic disorder with agoraphobia (N = 25). An experiment was conducted that comprised two modules: Autobiographical Talking and Script Talking. Each module consisted of two emotional conditions: Fearful and Happy. Speech was recorded digitally and analyzed using PRAAT, a computer program designed to extract paralinguistic measures from digitally recorded speech. In addition to subjective fear, several speech characteristics were measured as a reflection of psychophysiology: rate of speech, mean pitch, and pitch variability. Results show that in Autobiographical Talking, speech was slower and had a lower pitch and lower pitch variability than in Script Talking. Pitch variability was lower in Fearful than in Happy speech. The findings indicate that paralinguistic aspects of speech, especially pitch variability, are promising measures for gaining information about fear processing during the recollection of autobiographical memories.
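Two of the measures named above, mean pitch and pitch variability, can be sketched as summary statistics over a pitch track, i.e. one F0 estimate (Hz) per voiced frame. PRAAT extracts the track from audio; that step is not reproduced here, and the sample frame values are invented:

```python
import statistics

def pitch_stats(f0_track_hz):
    """Mean pitch and pitch variability (sample SD) over voiced frames.
    Frames with F0 == 0 are treated as unvoiced and excluded."""
    voiced = [f for f in f0_track_hz if f > 0]
    return {
        "mean_pitch": statistics.mean(voiced),
        "pitch_variability": statistics.stdev(voiced),
    }

# Hypothetical pitch track: F0 per frame, 0 marking unvoiced frames.
track = [0, 210, 220, 215, 0, 190, 205, 0]
stats = pitch_stats(track)
print(round(stats["mean_pitch"], 1))  # 208.0
```

Lower values of `pitch_variability` in the Fearful condition would correspond to the flatter, more monotonous contours the study reports.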

20.
It remains a matter of controversy precisely what kind of neural mechanisms underlie functional asymmetries in speech processing. Whereas some studies support speech-specific circuits, others suggest that lateralization is dictated by relative computational demands of complex auditory signals in the spectral or time domains. To examine how the brain processes linguistically relevant spectral and temporal information, a functional magnetic resonance imaging study was conducted using Thai speech, in which spectral processing associated with lexical tones and temporal processing associated with vowel length can be differentiated. Ten Thai and 10 Chinese subjects were asked to perform discrimination judgments of pitch and timing patterns presented in the same auditory stimuli under two different conditions: speech (Thai) and nonspeech (hums). In the speech condition, tasks required judging Thai tones (T) and vowel length (VL); in the nonspeech condition, homologous pitch contours (P) and duration patterns (D). A remaining task required listening passively to nonspeech hums (L). Only the Thai group showed activation in the left inferior prefrontal cortex in speech minus nonspeech contrasts for spectral (T vs. P) and temporal (VL vs. D) cues. Thai and Chinese groups, however, exhibited similar fronto-parietal activation patterns in nonspeech hums minus passive listening contrasts for spectral (P vs. L) and temporal (D vs. L) cues. It appears that lower level specialization for acoustic cues in the spectral and temporal domains cannot be generalized to abstract higher order levels of phonological processing. Regardless of the neural mechanisms underlying low-level auditory processing, our findings clearly indicate that hemispheric specialization is sensitive to language-specific factors.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号