Similar articles
20 similar articles found (search time: 881 ms)
1.
Hemispheric asymmetries for processing the duration of non-verbal and verbal sounds were investigated in 60 right-handed subjects. Two dichotic tests with attention directed to one ear were used, one with complex tones and one with consonant-vowel syllables. Stimuli had three possible durations: 350, 500, and 650 ms. Subjects judged whether the duration of a probe was the same as or different from the duration of the target presented before it. Target and probe were part of two dichotic pairs presented with a 1-s interstimulus interval and occurred on the same side. Dependent variables were reaction time and accuracy. Results showed a significant right ear advantage for both dependent variables with both complex tones and consonant-vowel syllables. This study provides behavioural evidence of a left hemisphere specialization for duration perception of both musical and speech sounds, in line with the current view that hemispheric perceptual asymmetries are structured by acoustic parameter rather than by domain.

2.
To further clarify the neural mechanisms underlying the cortical encoding of speech sounds, we have recorded multiple unit activity (MUA) in the primary auditory cortex (A1) and thalamocortical (TC) radiations of an awake monkey in response to three consonant-vowel syllables, /da/, /ba/ and /ta/, that vary in their consonant place of articulation and voice onset time (VOT). In addition, we have examined the responses to the syllables' isolated formants and formant pairs. Response features are related to the cortical tonotopic organization, as determined by examining the responses to selected pure tones. MUA patterns that differentially reflect the spectral characteristics of the steady-state formant frequencies and formant transition onset frequencies underlying consonant place of articulation occur at sites with similarly differentiated tone responses. Whereas the detailed spectral characteristics of the speech sounds are reflected in low frequency cortical regions, both low and high frequency areas generate responses that reflect their temporal characteristics of fundamental frequency and VOT. Formant interactions modulate the responses to the whole syllables. These interactions may sharpen response differences that reflect consonant place of articulation. Response features noted in A1 also occur in TC fibers. Thus, differences in the encoding of speech sounds between the thalamic and cortical levels may include further opportunities for formant interactions within auditory cortex. One effect could be to heighten response contrast between complex stimuli with subtle acoustical differences.

3.
The influence of musical experience on free-recall dichotic listening to environmental sounds, two-tone sequences, and consonant-vowel (CV) syllables was investigated. A total of 60 healthy right-handed participants were divided into two groups according to their active musical competence ("musicians" and "non-musicians"). In both groups, we found a left ear advantage (LEA) for nonverbal stimuli (environmental sounds and two-tone sequences) and a right ear advantage (REA) for CV syllables. Dichotic listening to environmental sounds was uninfluenced by musical experience. The total accuracy of recall for two-tone sequences was higher in musicians than in non-musicians but the lateralisation was similar in both groups. For CV syllables a lower REA was found in male but not female musicians in comparison to non-musicians. The results indicate a specific sex-dependent effect of musical experience on lateralisation of phonological auditory processing.
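The ear advantages reported in dichotic-listening studies like this one are conventionally summarized with a laterality index computed over the correct reports from each ear. A minimal sketch of that computation, using illustrative scores that are assumptions rather than data from the study:

```python
# Hedged sketch: laterality index over dichotic correct-recall scores.
# Positive values indicate a right ear advantage (REA), negative values a
# left ear advantage (LEA). The example scores below are illustrative only.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """(R - L) / (R + L) * 100: percentage asymmetry between the two ears."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports on either ear")
    return 100.0 * (right_correct - left_correct) / total

# CV syllables typically yield an REA, melodies an LEA (hypothetical scores).
li_cv = laterality_index(right_correct=22, left_correct=14)
li_melody = laterality_index(right_correct=12, left_correct=18)
```

A symmetric performance (equal correct reports from both ears) yields an index of zero, which is why the measure is insensitive to overall accuracy differences between groups.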

4.
Pan CL, Kuo MF, Hsieh ST. Neurology. 2004;63(12):2387-2389.
The authors describe a patient with auditory agnosia caused by a tectal germinoma. Despite normal audiometric tests, the patient failed to recognize words and musical characters. On head MRI, the inferior colliculi were infiltrated by tumor. Neuropsychological tests revealed severe impairment in the recognition of environmental sounds and words, defective musical perception, and impaired stop consonant-vowel discrimination. The inferior colliculus may play a role in the analysis of sound properties.

5.
Recent neuroimaging and neuropsychological data suggest that speech perception is supported bilaterally in auditory areas. We evaluated this issue by building on well-known behavioral effects. While undergoing positron emission tomography (PET), subjects performed standard auditory tasks: direction discrimination of frequency-modulated (FM) tones, categorical perception (CP) of consonant-vowel (CV) syllables, and word/non-word judgments (lexical decision, LD). Compared to rest, the three conditions led to bilateral activation of the auditory cortices. However, lateralization patterns differed as a function of stimulus type: the LD task generated stronger responses in the left hemisphere, the FM task a stronger response in the right. Contrasts between either words or syllables versus FM were associated with significantly greater activity bilaterally in the superior temporal gyrus (STG) ventro-lateral to Heschl's gyrus. These activations extended into the superior temporal sulcus (STS) and the middle temporal gyrus (MTG) and were greater in the left hemisphere. The same areas were more active in the LD than in the CP task. In contrast, the FM task was associated with significantly greater activity in the right lateral-posterior STG and lateral MTG. The findings argue for a view in which speech perception is mediated bilaterally in the auditory cortices, with the well-documented lateralization likely associated with processes subsequent to the auditory analysis of speech.

6.
Using functional connectivity analysis of functional magnetic resonance imaging data, we investigated the role of the inferior frontal gyrus in categorization of simple sounds. We found stronger functional connectivity between left inferior frontal gyrus and auditory processing areas in the temporal cortex during categorization of speech (vowels, syllables) and nonspeech (tones, combinations of tones and sweeps) sounds relative to an auditory discrimination task; the hemispheric lateralization varied depending on the speech-like properties of the sounds. Our results attest to the importance of interactions between temporal cortex and left inferior frontal gyrus in sound categorization. Further, we found different functional connectivity patterns between left inferior frontal gyrus and other brain regions implicated in categorization of syllables compared with other stimuli, reflecting the greater facility for categorization of syllables.

7.
We studied auditory evoked potentials (AEPs) in an 82-year-old female patient who became suddenly deaf following the second of two strokes. The patient showed markedly elevated pure tone thresholds, was unable to discriminate sounds and could not understand speech. Brain-stem auditory evoked potentials (BAEPs) were normal. CT scans revealed bilateral lesions of the superior temporal plane which included auditory cortex. Two experiments were performed. In the first, tones, complex sounds and speech stimuli were presented at intensities above and below the patient's perceptual threshold. P1, N1 and P2 components were elicited by each of the stimuli, whether or not they were perceived. In particular, stimuli presented below threshold evoked large amplitude, short latency responses comparable to those produced in a control subject. In a second experiment, the refractory properties of the N1-P2 were examined using trains of tones. These were also found to be similar to those of normal subjects. Shifts in the pitch of the tones near the end of the train (when refractory effects were maximal) evoked N1-P2s with enhanced amplitudes, although the change in pitch was not perceived by the patient. In both experiments AEP scalp topographies were normal. The results suggest that bitemporal lesions of auditory cortex can dissociate auditory perception from long-latency auditory evoked potentials. A review of evoked potential studies of cortical deafness suggests that the neural circuits responsible for N1-P2 generation lie in close proximity to those necessary for auditory perception.

8.
Understanding how the developing brain processes auditory information is a critical step toward the clarification of infants' perception of speech and music. We have reported that the infant brain perceives pitch information in speech sounds. Here, we used multichannel near-infrared spectroscopy to examine whether the infant brain is sensitive to information about pitch changes in auditory sequences. Three types of auditory sequences with distinct temporal structures of pitch changes were presented to 3- and 6-month-old infants: a long condition of 12 successive tones constructing a chromatic scale (600 ms), a short condition of four successive tones constructing a chromatic scale (200 ms), and a random condition of random tone sequences (50 ms per tone). The difference among the conditions was only in the sequential order of the tones, which determines the pitch changes between successive tones. We found that the bilateral temporal regions of infants of both ages showed significant activation under the three conditions. Stimulus-dependent activation was observed in the right temporoparietal region of both infant groups; the 3- and 6-month-old infants showed the most prominent activation under the random and short conditions, respectively. Our findings indicate that the infant brain, which shows functional differentiation and lateralization in auditory-related areas, is capable of responding to more than single tones of pitch information. These results suggest that the right temporoparietal region of infants increases sensitivity to auditory sequences that have temporal structures similar to those of syllables in speech sounds over the course of development.
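The three stimulus conditions differ only in tone order and per-tone duration, which is easy to make concrete. A minimal sketch of how such sequences could be specified, assuming an arbitrary base frequency (262 Hz, roughly C4) and reading the per-tone durations from the parenthetical values in the abstract:

```python
# Hedged sketch of the three tone-sequence conditions. A chromatic scale
# ascends in semitone steps, i.e. each tone's frequency is the previous
# tone's frequency times 2**(1/12). The base frequency is an assumption.
import random

SEMITONE = 2 ** (1 / 12)

def chromatic_scale(base_hz: float, n_tones: int) -> list[float]:
    """Ascending chromatic scale: tone i is i semitones above the base."""
    return [base_hz * SEMITONE ** i for i in range(n_tones)]

# Each condition is a list of (frequency_hz, duration_ms) pairs.
long_cond = [(f, 600) for f in chromatic_scale(262.0, 12)]   # 12 ordered tones
short_cond = [(f, 200) for f in chromatic_scale(262.0, 4)]   # 4 ordered tones

rng = random.Random(0)
random_freqs = chromatic_scale(262.0, 12)
rng.shuffle(random_freqs)                  # same tones, randomized order
random_cond = [(f, 50) for f in random_freqs]
```

Note that the random condition contains exactly the same frequencies as the long condition; only the sequential order, and hence the pattern of pitch changes between successive tones, differs.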

9.
Dichotic speech (CV syllables) and non-verbal stimuli (melodies, tones and triple tones) were presented to 20 French Canadian (10 men, 10 women) and 19 Chinese (10 men, 9 women) right-handed university students to investigate cultural and sexual differences in auditory functional asymmetry. Analyses of variance on correct scores showed an REA in the perception of speech and an LEA in the perception of melodies and triple tones for both groups. Cross-cultural effects were observed for speech only, indicating a better overall performance on the part of the French Canadian students. Moreover, women were better than men in the perception of tones and triple tones, suggesting that women may attend more to intonational information. Finally, the results for both ethnic groups showed the same pattern of interhemispheric functional asymmetry for speech and non-speech material.

10.
The N1m component of the auditory evoked magnetic field in response to tones and complex sounds was examined in order to clarify whether the tonotopic representation in the human secondary auditory cortex is based on perceived pitch or on the physical frequency spectrum of the sound. The investigated stimulus parameters were the fundamental frequencies (F0 = 250, 500 and 1000 Hz), the spectral composition of the higher harmonics of the missing fundamental sounds (2nd to 5th, 6th to 9th and 10th to 13th harmonic) and the frequencies of pure tones corresponding to F0 and to the lowest component of each complex sound. Tonotopic gradients showed that high frequencies were more medially located than low frequencies for the pure tones and for the centre frequency of the complex tones. Furthermore, in the superior-inferior direction, the tonotopic gradients were different between pure tones and complex sounds. The results were interpreted as reflecting different processing in the auditory cortex for pure tones and complex sounds. This hypothesis was supported by the finding that evoked responses to complex sounds had longer latencies. A more pronounced tonotopic representation in the right hemisphere gave evidence for right hemispheric dominance in spectral processing.
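The "missing fundamental" stimuli described here are complex tones built from a subset of harmonics that excludes F0 itself; the perceived pitch nevertheless corresponds to F0. A minimal sketch of such a stimulus, assuming an arbitrary duration and sampling rate (not values from the study):

```python
# Hedged sketch: synthesize a missing-fundamental complex tone from
# harmonics 2-5 of F0. Duration (0.5 s) and sample rate (44.1 kHz) are
# assumptions for illustration.
import numpy as np

def complex_tone(f0: float, harmonics: range, dur: float = 0.5, fs: int = 44100):
    """Sum of equal-amplitude sinusoids at the given harmonic numbers of f0."""
    t = np.arange(int(dur * fs)) / fs
    return sum(np.sin(2 * np.pi * n * f0 * t) for n in harmonics)

f0 = 250.0
tone = complex_tone(f0, range(2, 6))  # harmonics 2-5; F0 itself is absent

# The spectrum confirms there is no energy at F0, only at its harmonics.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(tone.size, d=1 / 44100)
energy_at_f0 = spectrum[np.argmin(np.abs(freqs - f0))]
energy_at_2f0 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
```

The contrast the study exploits is exactly this dissociation: the physical spectrum of `tone` starts at 2·F0, while its perceived pitch matches a pure tone at F0.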

11.
OBJECTIVE: The purpose of this study is to expand our understanding of how the human auditory brainstem encodes temporal and spectral acoustic cues in voiced stop consonant-vowel syllables. METHODS: Auditory evoked potentials measuring activity from the brainstem of 22 normally learning children were recorded in response to the voiced stop consonant-vowel syllables [ga], [da], and [ba]. Spectrotemporal information distinguishing these voiced consonant-vowel syllables is contained within the first few milliseconds of the burst and the formant transition to the vowel. Responses were compared across stimuli with respect to their temporal and spectral content. RESULTS: Brainstem response latencies change in a predictable manner in response to systematic alterations in a speech syllable, indicating that the distinguishing acoustic cues are represented by neural response timing (synchrony). Spectral analyses of the responses show frequency distribution differences across stimuli (some of which appear to represent acoustic characteristics created by difference tones of the stimulus formants), indicating that neural phase-locking is also important for encoding these acoustic elements. CONCLUSIONS: Considered within the context of existing knowledge of brainstem encoding of speech-sound structure, these data are the beginning of a comprehensive delineation of how the human auditory brainstem encodes perceptually critical features of speech. SIGNIFICANCE: The results of this study could be used to determine how neural encoding is disrupted in the clinical populations for whom stop consonants pose particular perceptual challenges (e.g., hearing-impaired individuals and poor readers).

12.
A growing body of evidence indicates bilateral but asymmetric hemispheric involvement in speech perception. We used magnetoencephalography to record neuromagnetic evoked responses in 10 adults to consonant-vowel syllables that differ in a single phonetic feature, place of articulation. We report differential activation patterns in M100 latency, with larger differences in the right hemisphere than the left. These findings suggest that left and right auditory fields make differential contributions to speech processing.

13.
Several previous functional imaging experiments have demonstrated that auditory presentation of speech, relative to tones or scrambled speech, activates the superior temporal sulci (STS) bilaterally. In this study, we attempted to segregate the neural responses to phonological, lexical, and semantic input by contrasting activation elicited by heard words, meaningless syllables, and environmental sounds. Inevitable differences between the duration and amplitude of each stimulus type were controlled with auditory noise bursts matched to each activation stimulus. Half the subjects were instructed to say "okay" in response to presentation of all stimuli. The other half repeated back the words and syllables, named the source of the sounds, and said "okay" to the control stimuli (noise bursts). We looked for stimulus effects that were consistent across task. The results revealed that central regions in the STS were equally responsive to speech (words and syllables) and familiar sounds, whereas the posterior and anterior regions of the left superior temporal gyrus were more active for speech. The effect of semantic input was small but revealed more activation in the inferior temporal cortex for words and familiar sounds than for syllables and noise. In addition, words (relative to syllables, sounds, and noise) enhanced activation in the temporo-parietal areas that have previously been linked to modality-independent semantic processing. Thus, in cognitive terms, we dissociate phonological (speech) and semantic responses and propose that word specificity arises from functional integration among shared phonological and semantic areas.

14.
Adolescence is a time of great change in the brain in terms of structure and function. It is possible to track the development of neural function across adolescence using auditory event-related potentials (ERPs). This study tested whether the brain's functional processing of sound changes across adolescence. We measured passive auditory t-complex peaks to pure tones and consonant-vowel (CV) syllables in 90 children and adolescents aged 10–18 years, as well as 10 adults. Across adolescence, Na amplitude increased to tones and speech at the right, but not left, temporal site. Ta amplitude decreased at the right temporal site for tones, and at both sites for speech. The Tb remained constant at both sites. The Na and Ta appeared to mature later in the right than the left hemisphere. The t-complex peaks Na and Tb exhibited left lateralization and Ta showed right lateralization. Thus, the functional processing of sound continues to develop across adolescence and into adulthood.

15.
Musicians exhibit enhanced perception of emotion in speech, although the biological foundations for this advantage remain unconfirmed. In order to gain a better understanding of the influences of musical experience on neural processing of emotionally salient sounds, we recorded brainstem potentials to affective human vocal sounds. Musicians showed enhanced time-domain response magnitude to the most spectrally complex portion of the stimulus and decreased magnitude to the more periodic, less complex portion. Enhanced phase-locking to stimulus periodicity was likewise seen in musicians' responses to the complex portion. These results suggest that auditory expertise engenders both enhancement and efficiency of subcortical neural responses that are intricately connected with acoustic features important for the communication of emotional states. Our findings provide the first biological evidence for behavioral observations indicating that musical training enhances the perception of vocally expressed emotion, in addition to establishing a subcortical role in the auditory processing of emotional cues.
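Phase-locking to stimulus periodicity, as reported here, is commonly quantified as the spectral magnitude of the brainstem response at the stimulus fundamental frequency (F0). A minimal sketch of that measure on simulated data; the sampling rate, F0, analysis window, and noise level are all assumptions, not values from the study:

```python
# Hedged sketch: estimate phase-locked energy at F0 as the FFT magnitude of
# the response at the F0 bin, compared against the median spectral level as
# a crude noise floor. The "response" is simulated: a sinusoid at F0
# (the phase-locked component) plus additive Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
fs, f0, dur = 10000, 100.0, 0.2              # assumed: 10 kHz, 100 Hz, 200 ms
t = np.arange(int(fs * dur)) / fs
response = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0_amplitude = spec[np.argmin(np.abs(freqs - f0))]  # phase-locked energy at F0
noise_floor = np.median(spec)                       # crude background estimate
```

In practice such measures are computed on trial-averaged responses, where non-phase-locked activity cancels and only components time-locked to the stimulus survive; the single simulated trace here is just to keep the sketch short.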

16.
Specific Language Impairment (SLI) is a developmental disorder affecting language learning across a number of domains. These difficulties are thought to be related to difficulties processing auditory speech, given findings of imperfect auditory processing across nonspeech tones, individual speech sounds and syllables. However, the relationship of auditory difficulties to language development remains unclear. Perceiving connected speech involves resolving coarticulation, the imperceptible blending of speech movements across adjacent sounds, which gives rise to subtle variations in speech sounds. The present study used event-related potentials (ERPs) to examine neural responses to coarticulation in school-age children with and without SLI. Atypical neural responses were observed for the SLI group in ERP indices of prelexical-phonological but not lexical stages of processing. Specifically, incongruent coarticulatory information resulted in a modulation of the N100 in the SLI but not the typically developing group, while a Phonological Mapping Negativity was elicited in the typically developing but not the SLI group, unless additional cues were present. Neural responses to unexpected lexical mismatches indexed by the N400 ERP component were the same for both groups. The results demonstrate a relative insensitivity to important subphonemic features in SLI.

17.
To date, the underlying cognitive and neural mechanisms of absolute pitch (AP) have remained elusive. In the present fMRI study, we investigated verbal and tonal perception and working memory in musicians with and without absolute pitch. Stimuli were sine wave tones and syllables (names of the scale tones) presented simultaneously. Participants listened to sequences of five stimuli, and then rehearsed internally either the syllables or the tones. Finally, participants indicated whether a test stimulus had been presented during the sequence. For an auditory stroop task, half of the tonal sequences were congruent (frequencies of tones corresponded to syllables which were the names of the scale tones) and half were incongruent (frequencies of tones did not correspond to syllables). Results indicate that, first, verbal and tonal perception overlap strongly in the left superior temporal gyrus/sulcus (STG/STS) in AP musicians only. Second, AP is associated with the categorical perception of tones. Third, the left STG/STS is activated in AP musicians only for the detection of verbal-tonal incongruencies in the auditory stroop task. Finally, verbal labelling of tones in AP musicians seems to be automatic. Overall, a unique feature of AP appears to be the similarity between verbal and tonal perception. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc.

18.
We report on enhanced processing of speech sounds in congenitally and early blind individuals compared with normally seeing individuals. Two different consonant-vowel (CV) syllables were presented via headphones on each presentation. We used a dichotic listening (DL) procedure with pairwise presentations of CV syllables. The typical finding in this paradigm is a right ear advantage, indicating better processing of the CV-syllable stimuli in the left hemisphere. The dichotic listening procedure involved three different conditions, with instructions to pay attention to the right ear stimulus, the left ear stimulus or no specific instruction. The participants were 14 congenitally or early blind Finnish-speaking individuals who were compared with 129 normally seeing Finnish-speaking individuals. The blind participants reported overall significantly more correct syllables than the seeing control subjects. When instructed to pay attention to the left ear stimulus and only report from the attended channel, they were again significantly better than the seeing control subjects. These findings indicate effects of hemispheric reorganization in blind individuals at both the sensory and cognitive levels of information processing in the auditory sensory modality.

19.
OBJECTIVE: To examine how auditory brain responses change with increased spectral complexity of sounds in musicians and non-musicians. METHODS: Event-related potentials (ERPs) and fields (ERFs) to binaural piano tones were measured in musicians and non-musicians. The stimuli were C4 piano tones and a pure sine tone of the C4 fundamental frequency (f0). The first piano tone contained f0 and the first eight harmonics, the second piano tone consisted of f0 and the first two harmonics and the third piano tone consisted of f0. RESULTS: Subtraction of ERPs of the piano tone with only the fundamental from ERPs of the harmonically rich piano tones yielded positive difference waves peaking at 130 ms (DP130) and 300 ms (DP300). The DP130 was larger in musicians than non-musicians and both waves were maximally recorded over the right anterior scalp. ERP source analysis indicated anterior temporal sources with greater strength in the right hemisphere for both waves. Arbitrarily using these anterior sources to analyze the MEG signals showed a DP130m in musicians but not in non-musicians. CONCLUSIONS: Auditory responses in the anterior temporal cortex to complex musical tones are larger in musicians than non-musicians. SIGNIFICANCE: Neural networks in the anterior temporal cortex are activated during the processing of complex sounds. Their greater activation in musicians may index either underlying cortical differences related to musical aptitude or cortical modification by acoustical training.
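The difference-wave computation described here (subtracting the average response to the fundamental-only tone from the average response to a harmonically rich tone) isolates the extra activity attributed to spectral complexity. A minimal sketch on simulated single-trial data; the trial counts, epoch length, and latency of the simulated positivity are assumptions for illustration:

```python
# Hedged sketch: condition-average subtraction to form an ERP difference
# wave. The simulated "rich" condition carries an extra positivity around
# sample 130 (standing in for the DP130); both conditions have additive noise.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 100, 600          # assumed: 100 trials, 600-sample epoch

extra = np.zeros(n_samples)
extra[110:150] = 1.0                    # hypothetical added positivity

rich = extra + rng.standard_normal((n_trials, n_samples))
f0_only = rng.standard_normal((n_trials, n_samples))

# Average across trials within each condition, then subtract the means.
difference_wave = rich.mean(axis=0) - f0_only.mean(axis=0)
peak_sample = int(np.argmax(difference_wave))   # should fall in 110-150
```

Averaging first attenuates trial-by-trial noise by a factor of roughly the square root of the trial count, which is what lets a subtraction of two noisy condition means recover a component as small as the DP130.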

20.
It has been demonstrated that vowel information can be extracted from speech sounds without attention focused on them, despite widely varying non-speech acoustic information in the input. The present study tested whether even complex tones that were constructed based on F0, F1 and F2 vowel frequencies to resemble the defining features of speech sounds, but were not speech, are categorized pre-attentively according to vowel space information. The Mismatch Negativity brain response was elicited by infrequent tokens of the complex tones, showing that the auditory system can pre-attentively categorize speech information on the basis of the minimal, defining auditory features. The human mind extracts the language-relevant information from complex tones despite the non-relevant variation in the sound input.

