Similar literature
20 similar documents retrieved (search time: 46 ms)
1.
Otoacoustic emissions (OAEs) were evaluated in 51 ears of 30 patients with a severe auditory brainstem response (ABR) waveform abnormality. Thirteen ears showed no ABR to click sounds at intensities above 100 dB SPL (group 1). Fourteen ears exhibited only wave V or a decreased-amplitude ABR pattern (group 2). Twenty-four ears showed a predominant wave I or absent wave III pattern (group 3). Almost all the ears with absent ABR showed no OAE, strongly suggesting hearing loss of cochlear origin, although one patient with alternating hemiplegia of childhood exhibited definite OAEs and auditory reactions without ABR. One patient with mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS) and her mother, both in group 2, had OAE abnormalities, which also suggested mild to severe hearing impairment. When OAEs are present, an accompanying ABR abnormality may be produced by brainstem dysfunction of the underlying disorder, such as Pelizaeus-Merzbacher disease. There was a significant relationship (chi-square test, P < 0.001) between the positivity of the distortion product OAE response and clinical auditory reactions in 24 patients, although their ABR abnormalities did not reflect hearing impairment directly. Careful examination of both audiometry and OAEs may be necessary for further assessment of hearing function in pediatric patients with neurological disorders and specific auditory nerve disease.
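The statistic cited above reduces to a 2 x 2 chi-square test relating distortion product OAE positivity to the presence of clinical auditory reactions. The sketch below shows how such a test could be run; the counts are hypothetical, chosen only to illustrate the computation, and are not taken from the study.

```python
# Hypothetical 2 x 2 contingency table: DPOAE positivity versus clinical
# auditory reactions (counts are illustrative, not the study's data).
import numpy as np
from scipy.stats import chi2_contingency

#                   reactions present  reactions absent
table = np.array([[14, 2],    # DPOAE response present
                  [1,  7]])   # DPOAE response absent

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.4g}")
```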

2.
M Shindo  K Kaga  Y Tanaka  M Hiraiwa  M Io 《Brain and nerve》1983,35(12):1177-1183
The patient was a right-handed boy. Pregnancy and delivery were normal, and there were no neonatal complications. His development was normal until the age of 6 years and 10 months, when he suddenly fell into a coma. He was admitted to a hospital, where he was diagnosed as having encephalopathy of unknown cause. After recovery from the coma, he was found not to respond to verbal stimuli. He was referred to our hospital for detailed examination of hearing at the age of 7 years and received neuroradiological and audiological examinations including X-ray CT, positron CT, behavioral audiometry, and auditory evoked response audiometry, as well as other hearing and speech tests. Pure-tone audiometry showed that auditory thresholds for pure tones ranging from 125 to 8,000 Hz were normal, while speech audiometry demonstrated that he was unable to discriminate test words. However, he accurately recognized environmental sounds and noises as well as sounds produced by musical instruments. Decreased uptake of 11C-glucose in the left temporal lobe was demonstrated by positron CT, although no abnormality was shown by X-ray CT. The WISC-R showed that he had normal intelligence. The ITPA demonstrated that psycholinguistic ability via the visual channel remained intact, whereas auditory memory and auditory closure were markedly impaired. The boy had no difficulties in reading, naming, or writing. Speech therapy was started as soon as the diagnosis of word deafness was made. For 6 months after onset, he could neither understand nor repeat what was said to him. (ABSTRACT TRUNCATED AT 250 WORDS)

3.
We studied auditory evoked potentials (AEPs) in an 82-year-old female patient who became suddenly deaf following the second of two strokes. The patient showed markedly elevated pure tone thresholds, was unable to discriminate sounds and could not understand speech. Brain-stem auditory evoked potentials (BAEPs) were normal. CT scans revealed bilateral lesions of the superior temporal plane which included auditory cortex. Two experiments were performed. In the first, tones, complex sounds and speech stimuli were presented at intensities above and below the patient's perceptual threshold. P1, N1 and P2 components were elicited by each of the stimuli--whether or not they were perceived. In particular, stimuli presented below threshold evoked large amplitude, short latency responses comparable to those produced in a control subject. In a second experiment, the refractory properties of the N1-P2 were examined using trains of tones. They were also found to be similar to those of normal subjects. Shifts in the pitch of the tones near the end of the train (when refractory effects were maximal) evoked N1-P2s with enhanced amplitudes, although the change in pitch was not perceived by the patient. In both experiments AEP scalp topographies were normal. The results suggest that bitemporal lesions of auditory cortex can dissociate auditory perception and long-latency auditory evoked potentials. A review of evoked potential studies of cortical deafness suggests that the neural circuits responsible for N1-P2 generation lie in close proximity to those necessary for auditory perception.  相似文献   

4.
A 42-year-old man suffered damage to the left supra-sylvian areas due to a stroke and presented with verbal short-term memory (STM) deficits. He occasionally could not recall even a single syllable that he had heard one second before. A study of mismatch negativity using magnetoencephalography suggested that the duration of auditory sensory (echoic) memory traces was reduced on the affected side of the brain. His maximum digit span was four with auditory presentation (equivalent to the 1st percentile for normal subjects), whereas it was up to six with visual presentation (almost within the normal range). He simply showed partial recall in the digit span task, and there was no self correction or incorrect reproduction. From these findings, reduced echoic memory was thought to have affected his verbal short-term retention. Thus, the impairment of verbal short-term memory observed in this patient was “pure auditory” unlike previously reported patients with deficits of the phonological short-term store (STS), which is the next higher-order memory system. We report this case to present physiological and behavioral data suggesting impaired short-term storage of verbal information, and to demonstrate the influence of deterioration of echoic memory on verbal STM.  相似文献   

5.
Temporal and speech processing deficits in auditory neuropathy
Zeng FG  Oba S  Garde S  Sininger Y  Starr A 《Neuroreport》1999,10(16):3429-3435
Auditory neuropathy affects the normal synchronous activity in the auditory nerve, without affecting the amplification function in the inner ear. Patients with auditory neuropathy often complain that they can hear sounds, but cannot understand speech. Here we report psychophysical tests indicating that these patients' poor speech recognition is due to a severe impairment in their temporal processing abilities. We also simulate this temporal processing impairment in normally hearing listeners and produce similar speech recognition deficits. This study demonstrates the importance of neural synchrony for auditory perceptions including speech recognition in humans. The results should contribute to better diagnosis and treatment of auditory neuropathy.  相似文献   

6.
A 60-year-old right-handed man showed dysprosody and agnosia for environmental sounds. His mother tongue was Japanese, and he could not speak foreign languages. From the age of 57 years he gradually developed difficulty in speaking, and his Japanese sounded non-native. In addition, he often complained of difficulty in hearing sounds, but audiometry showed no abnormalities. At the age of 60 years, the Standard Language Test of Aphasia showed no abnormalities in repetition, verbal comprehension, or reading, suggesting the absence of aphasia. In speaking, however, marked abnormality of rhythm, occasional omission of postpositional particles, and syllable stumbling were observed. Writing was almost accurate, but a few grammatical errors were observed in speaking. There were no cerebellar symptoms, pyramidal signs, pathologic reflexes, or abnormalities in phonation-related organs. Although recognition of verbal sounds was maintained, impairment in the recognition of non-verbal sounds was observed. An environmental sound perception test showed correct answers for only 8 of 21 non-verbal sound sources (such as a car starting and glass breaking), suggesting agnosia for environmental sounds. He insisted that the difficulty in perception was due to hearing impairment; however, re-examination at an increased sound volume showed similar results. He had no inconvenience in daily life and was not aware of his agnosia for environmental sounds. He could recognize and differentiate sounds he had heard once. His intelligence was normal, and neither apraxia nor frontal lobe symptoms were observed. MRI of the brain revealed slight atrophy of the right temporal lobe. Cerebral blood flow SPECT showed decreased blood flow from the superior temporal gyrus to the area around the arcuate fasciculus in the right temporal lobe. We considered that the lesion responsible for the environmental sound agnosia lay around the secondary auditory area of the right temporal lobe, and that this patient differed from slowly progressive aphasia, which is characterized by decreased blood flow in the left temporal lobe. Although the pathological process occurring in the area of hypoperfusion remained unclear, an early stage of a degenerative disorder was more likely than cerebrovascular disease.

7.
We report a pediatric patient with auditory agnosia as a sequela of herpes encephalitis. His early development was completely normal; he uttered three words at 12 months of age. Disease onset was at 1 year and 2 months of age. He was discharged from the hospital seemingly without sequelae; however, he had not recovered intelligible words even at age 2 years. He was diagnosed as having auditory agnosia caused by bilateral temporal lobe injury, and we began to train him at once, individually and intensively. Adult patients with pure auditory agnosia following two episodes of temporal lobe infarction have impairment in central hearing but not in inner language; therefore, they can communicate by reading and writing. Moreover, their hearing impairment is not always severe and is often transient. However, despite long-term (more than 15 years) intensive education and almost normal intellectual ability (Performance IQ on the Wechsler Intelligence Scale for Children-Revised was 91), our patient's language ability was extremely poor. Cerebral plasticity could not operate fully in our patient, whose bilateral temporal lobes were severely injured in early childhood. The establishment of a systematic training method for such patients is an urgent objective in this field.

8.
After a right temporoparietal stroke, a left-handed man lost the ability to understand speech and environmental sounds but developed greater appreciation for music. The patient had preserved reading and writing but poor verbal comprehension. Slower speech, single syllable words, and minimal written cues greatly facilitated his verbal comprehension. On identifying environmental sounds, he made predominant acoustic errors. Although he failed to name melodies, he could match, describe, and sing them. The patient had normal hearing except for presbyacusis, right-ear dominance for phonemes, and normal discrimination of basic psychoacoustic features and rhythm. Further testing disclosed difficulty distinguishing tone sequences and discriminating two clicks and short-versus-long tones, particularly in the left ear. Together, these findings suggest impairment in a direct route for temporal analysis and auditory word forms in his right hemisphere to Wernicke's area in his left hemisphere. The findings further suggest a separate and possibly rhythm-based mechanism for music recognition.  相似文献   

9.
There is a growing and unprecedented interest in the objective evaluation of the subcortical processes that are involved in speech perception, with potential clinical applications in speech and language impairments. Here, we review the studies illustrating the development of electrophysiological methods for assessing speech encoding in the human brainstem: from the pioneer recordings of click-evoked auditory brainstem responses (ABR), via studies of frequency-following responses (FFR) to the most recent measurements of speech ABR (SABR) or ABR in response to speech sounds. Recent research on SABR has provided new insights in the understanding of subcortical auditory processing mechanisms. The SABR test is an objective and non-invasive tool for assessing individual capacity of speech encoding in the brainstem. SABR characteristics are potentially useful both as a diagnosis tool of speech encoding deficits and as an assessment tool of the efficacy of rehabilitation programs in patients with learning and/or auditory processing disorders.  相似文献   

10.
OBJECTIVE: Data from a full assessment of auditory perception in patients with schizophrenia were used to investigate whether auditory hallucinations are associated with abnormality of central auditory processing. METHOD: Three groups of subjects participated in auditory assessments: 22 patients with psychosis and a recent history of auditory hallucinations, 16 patients with psychosis but no history of auditory hallucinations, and 22 normal subjects. Nine auditory assessments, including auditory brainstem response, monotic and dichotic speech perception tests, and nonspeech perceptual tests, were performed. Statistical analyses for group differences were performed using analysis of variance and Kruskal-Wallis tests. The results of individual patients with test scores in the severely abnormal range (more than three standard deviations from the mean for the normal subjects) were examined for patterns that suggested sites of dysfunction in the central auditory system. RESULTS: The results showed significant individual variability among the subjects in both patient groups. There were no group differences on tests that are sensitive to low brainstem function. Both patient groups performed poorly in tests that are sensitive to cortical or high brainstem function, and hallucinating patients differed from nonhallucinating patients in scores on tests of filtered speech perception and response bias patterns on dichotic speech tests. Six patients in the hallucinating group had scores in the severely abnormal range on more than one test. CONCLUSIONS: Hallucinations may be associated with auditory dysfunction in the right hemisphere or in the interhemispheric pathways. However, comparison of results for the patient groups suggests that the deficits seen in hallucinating patients may represent a greater degree of the same types of deficits seen in nonhallucinating patients.  相似文献   
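As a concrete illustration of the group analyses described above, the sketch below runs a one-way ANOVA and a Kruskal-Wallis test across three groups and flags individual scores more than three standard deviations below the control mean as "severely abnormal". All scores are simulated; only the group sizes match those reported, and nothing else is taken from the study.

```python
# Simulated between-group comparison: hallucinating patients (n = 22),
# non-hallucinating patients (n = 16), and normal controls (n = 22).
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(0)
hallucinating = rng.normal(70, 12, 22)       # e.g. % correct, filtered speech
non_hallucinating = rng.normal(78, 10, 16)
controls = rng.normal(90, 4, 22)

F, p_anova = f_oneway(hallucinating, non_hallucinating, controls)
H, p_kw = kruskal(hallucinating, non_hallucinating, controls)
print(f"one-way ANOVA: F = {F:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p_kw:.4f}")

# "Severely abnormal" individual scores: more than 3 SD below the control mean
cutoff = controls.mean() - 3 * controls.std(ddof=1)
severe = hallucinating[hallucinating < cutoff]
print(f"severely abnormal scores in the hallucinating group: {severe.size}")
```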

11.
We examined the processing of verbal and nonverbal auditory stimuli in an event-related functional magnetic resonance imaging (fMRI) study to reveal the neural underpinnings of rapid temporal information processing and its relevance to speech perception. Using clustered sparse-temporal fMRI data collection, eight right-handed native German speakers performed (i) an auditory gap detection task and (ii) a CV syllable discrimination task; a tone perception task served as a nontemporal control condition. We aimed to determine to what extent the left hemisphere preferentially processes linguistically relevant temporal information available in speech and nonspeech stimuli, and whether this preference is specifically constrained to verbal utterances or whether nonlinguistic temporal information can also activate these areas. We collected haemodynamic responses at three time points of acquisition (TPA) with varying temporal distance from stimulus onset to gain insight into the time course of auditory processing. The results show exclusively left-sided activations of primary and secondary auditory cortex associated with the perception of rapid temporal information. Furthermore, the data show an overlap of activations evoked by nonspeech sounds and speech stimuli within primary and secondary auditory cortex of the left hemisphere. The present data clearly support the assumption of a shared neural network, situated in left superior temporal areas, for rapid temporal information processing of both speech and nonspeech signals within the auditory domain.

12.
OBJECTIVE: The authors tested a model of hallucinated "voices" based on a neural network computer simulation of disordered speech perception. METHOD: Twenty-four patients with schizophrenia spectrum disorders who reported hallucinated voices were compared with 21 patients with schizophrenia spectrum disorders who did not report voices and 26 normal subjects. Narrative speech perception was assessed through use of a masked speech tracking task with three levels of superimposed phonetic noise. A sentence repetition task was used to assess grammar-dependent verbal working memory, and an auditory continuous performance task was used to assess nonlanguage attention. RESULTS: Masked speech tracking task and sentence repetition performance by hallucinating patients was impaired relative to both nonhallucinating patients and normal subjects. Although both hallucinating and nonhallucinating patients demonstrated auditory attention impairments when compared to normal subjects, the two patient groups did not differ with respect to these variables. CONCLUSIONS: Results support the hypothesis that hallucinated voices in schizophrenia arise from disrupted speech perception and verbal working memory systems rather than from nonlanguage cognitive or attentional deficits.  相似文献   

13.
The goals were to study the physiological effects of auditory nerve myelinopathy in chinchillas and to test the hypothesis that myelin abnormalities could account for auditory neuropathy, a hearing disorder characterized by absent auditory brainstem responses (ABRs) with preserved outer hair cell function. Doxorubicin, a cytotoxic drug used as an experimental demyelinating agent, was injected into the auditory nerve bundle of 18 chinchillas; six other chinchillas were injected with vehicle alone. Cochlear microphonics, compound action potentials (CAPs), inferior colliculus evoked potentials (IC-EVPs), cubic distortion product otoacoustic emissions and ABRs were recorded before and up to 2 months after injection. Cochleograms showed no hair cell loss in any of the animals and measures of outer hair cell function were normal (cubic distortion product otoacoustic emissions) or enhanced (cochlear microphonics) after injection. ABR was present in animals with mild myelin damage (n = 10) and absent in animals with severe myelin damage that included the myelin surrounding spiral ganglion cell bodies and fibers in Rosenthal's canal (n = 8). Animals with mild damage had reduced response amplitudes at 1 day, followed by recovery of CAP and enhancement of the IC-EVP. In animals with severe damage, CAP and IC-EVP thresholds were elevated, amplitudes were reduced, and latencies were prolonged at 1 day and thereafter. CAPs deteriorated over time, whereas IC-EVPs partially recovered; latencies remained consistently prolonged despite changes in amplitudes. The results support auditory nerve myelinopathy as a possible pathomechanism of auditory neuropathy but indicate that myelinopathy must be severe before physiological measures are affected.  相似文献   

14.
Congenitally blind individuals have been found to show superior performance in perceptual and memory tasks. In the present study, we asked whether superior stimulus encoding could account for performance in memory tasks. We characterized the performance of a group of congenitally blind individuals on a series of auditory, memory, and executive cognitive tasks and compared their performance to that of sighted controls matched for age, education, and musical training. As expected, we found superior verbal spans among congenitally blind individuals. Moreover, we found superior speech perception, measured as resilience to noise, and superior auditory frequency discrimination. However, when memory span was measured under conditions of equivalent speech perception, by adjusting the signal-to-noise ratio for each individual to the same level of perceptual difficulty (80% correct), the advantage in memory span was completely eliminated. Moreover, blind individuals did not show any advantage in cognitive executive functions, such as manipulation of items in memory and math abilities. We propose that the short-term memory advantage of blind individuals results from better stimulus encoding rather than from superiority at subsequent processing stages.
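The equal-difficulty manipulation mentioned above (fixing each listener at roughly 80% correct by adjusting the signal-to-noise ratio) is commonly implemented with an adaptive staircase. The sketch below is one plausible realization, not necessarily the authors' procedure: a 3-down/1-up rule, which converges near 79% correct, run against a made-up logistic psychometric function standing in for a real listener.

```python
# Minimal 3-down/1-up adaptive staircase on SNR (dB); converges near ~79%
# correct. The psychometric function below is a stand-in for a real listener.
import numpy as np

rng = np.random.default_rng(1)

def p_correct(snr_db, threshold=-4.0, slope=1.0):
    """Hypothetical probability of a correct response at a given SNR."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - threshold)))

snr, step = 10.0, 2.0            # start easy, fixed step size in dB
run, direction, reversals = 0, 0, []
for trial in range(200):
    if rng.random() < p_correct(snr):
        run += 1
        if run == 3:                       # three correct in a row -> harder
            run = 0
            if direction == +1:
                reversals.append(snr)      # track turn-around points
            direction = -1
            snr -= step
    else:                                  # one error -> easier
        run = 0
        if direction == -1:
            reversals.append(snr)
        direction = +1
        snr += step

# Average the last reversals to estimate the SNR giving ~79-80% correct
print(f"estimated per-listener SNR: {np.mean(reversals[-8:]):.1f} dB")
```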

15.
In Landau-Kleffner syndrome (LKS), the prominent and often first symptom is auditory verbal agnosia, which may affect nonverbal sounds. It was early suggested that the subsequent decline of speech expression might result from defective auditory analysis of the patient's own speech. Indeed, despite normal hearing levels, the children behave as if they were deaf, and very rapidly speech expression deteriorates and leads to the receptive aphasia typical of LKS. The association of auditory agnosia more or less restricted to speech with severe language decay prompted numerous studies aimed at specifying the defect in auditory processing and its pathophysiology. Long-term follow-up studies have addressed the issue of the outcome of verbal auditory processing and the development of verbal working memory capacities following the deprivation of phonologic input during the critical period of language development. Based on a review of neurophysiologic and neuropsychological studies of auditory and phonologic disorders published these last 20 years, we discuss the association of verbal agnosia and speech production decay, and try to explain the phonologic working memory deficit in the late outcome of LKS within the Hickok and Poeppel dual-stream model of speech processing.  相似文献   

16.
《Clinical neurophysiology》2014,125(1):148-153
Objective: To compare the detectability of different auditory evoked responses in patients with a retrocochlear lesion. Methods: The 40-Hz auditory steady-state response (ASSR) and the N1m auditory cortical response were examined by magnetoencephalography in 4 patients with vestibular schwannoma in whom the auditory brainstem response (ABR) was absent. Results: Apparent N1m responses were observed in all patients despite total absence of the ABR, or absence except for a small wave I, although the latency of N1m was delayed in most patients. On the other hand, a clear ASSR could be observed in only one patient. Very small 40-Hz ASSRs could be detected in 2 patients (amplitude less than 1 fT), but no apparent ASSR was observed in one patient, in whom maximum speech intelligibility was extremely low and the latency of N1m was most prolonged. Conclusion: The N1m response and the 40-Hz ASSR could be detected in patients with absent ABR, but the N1m response appeared to be more detectable than the 40-Hz ASSR. Significance: Combined assessment with several different evoked responses may be useful for evaluating the disease conditions of patients with retrocochlear lesions.
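For context on the "amplitude less than 1 fT" figure above: a 40-Hz ASSR amplitude is typically read out as the spectral component at the stimulus modulation frequency in the averaged response. Below is a small sketch of that readout on a simulated averaged epoch; the signal parameters are invented for illustration and do not describe the study's recordings.

```python
# Read the 40-Hz component out of a simulated averaged MEG epoch.
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)      # 1-s averaged epoch
f_mod = 40.0                         # stimulus modulation frequency, Hz

rng = np.random.default_rng(2)
avg = (0.8e-15 * np.sin(2 * np.pi * f_mod * t)
       + 0.3e-15 * rng.standard_normal(t.size))

spectrum = 2 * np.abs(np.fft.rfft(avg)) / t.size   # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
k = np.argmin(np.abs(freqs - f_mod))               # bin nearest 40 Hz
print(f"40-Hz ASSR amplitude: {spectrum[k] * 1e15:.2f} fT")
```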

17.
Reading difficulties are associated with problems in processing and manipulating speech sounds. Dyslexic individuals seem to have, for instance, difficulties in perceiving the length and identity of consonants. Using magnetoencephalography (MEG), we characterized the spatio-temporal pattern of auditory cortical activation in dyslexia evoked by three types of natural bisyllabic pseudowords (/ata/, /atta/, and /a a/), complex nonspeech sound pairs (corresponding to /atta/ and /a a/) and simple 1-kHz tones. The most robust difference between dyslexic and non-reading-impaired adults was seen in the left supratemporal auditory cortex 100 msec after the onset of the vowel /a/. This N100m response was abnormally strong in dyslexic individuals. For the complex nonspeech sounds and tone, the N100m response amplitudes were similar in dyslexic and nonimpaired individuals. The responses evoked by syllable /ta/ of the pseudoword /atta/ also showed modest latency differences between the two subject groups. The responses evoked by the corresponding nonspeech sounds did not differ between the two subject groups. Further, when the initial formant transition, that is, the consonant, was removed from the syllable /ta/, the N100m latency was normal in dyslexic individuals. Thus, it appears that dyslexia is reflected as abnormal activation of the auditory cortex already 100 msec after speech onset, manifested as abnormal response strengths for natural speech and as delays for speech sounds containing rapid frequency transition. These differences between the dyslexic and nonimpaired individuals also imply that the N100m response codes stimulus-specific features likely to be critical for speech perception. Which features of speech (or nonspeech stimuli) are critical in eliciting the abnormally strong N100m response in dyslexic individuals should be resolved in future studies.  相似文献   

18.
Hypercoupling of activity in speech‐perception‐specific brain networks has been proposed to play a role in the generation of auditory‐verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task‐based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI‐CPCA), which allowed for comparison of task‐related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory‐motor, (b) language processing, and (c) default‐mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory‐motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non‐AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech‐perception‐related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation.  相似文献   

19.
Visual speech perception without primary auditory cortex activation
Speech perception is conventionally thought to be an auditory function, but humans often use their eyes to perceive speech. We investigated whether visual speech perception depends on processing by the primary auditory cortex in hearing adults. In a functional magnetic resonance imaging experiment, a pulse-tone was presented contrasted with gradient noise. During the same session, a silent video of a talker saying isolated words was presented contrasted with a still face. Visual speech activated the superior temporal gyrus anterior, posterior, and lateral to the primary auditory cortex, but not the region of the primary auditory cortex. These results suggest that visual speech perception is not critically dependent on the region of primary auditory cortex.  相似文献   

20.
We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls’ neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls’ data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.  相似文献   
