Similar Literature
20 similar records retrieved.
1.
2.
This paper describes low-frequency auditory steady-state responses (ASSRs) to speech-weighted noise stimuli. The effect of modulation frequency was evaluated within the frequency range below 40 Hz. Furthermore, objective ASSR measures were related to speech understanding performance in normal-hearing and hearing-impaired listeners. The variability in ASSR recordings over independent test sessions was larger between subjects than within subjects. Trends of increased responses around 10 and/or 20 Hz were found in all subjects. Latency estimates of the responses pointed to primarily cortical sources involved in ASSR generation at low frequencies. Furthermore, significant differences between normal-hearing and hearing-impaired adults were found for ASSRs to stimuli related to the temporal envelope of speech. Comparing these responses with phoneme identification scores over different stimulus levels showed that both measures increased with stimulus level in a similar way (ρ = 0.82). At a fixed stimulus level, ASSRs were significantly correlated with speech reception thresholds for phonemes and sentences in noise (ρ from −0.45 to −0.53). These results indicate that objective low-frequency ASSRs are related to behavioral speech understanding, independently of stimulus level.
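The ρ values above are rank-order (Spearman) correlations between the objective ASSR measures and behavioral speech scores. As a purely illustrative sketch (variable names and data below are hypothetical, not taken from the study), such a correlation could be computed as follows:

```python
# Hypothetical sketch: rank correlation between ASSR response amplitudes and
# speech reception thresholds (SRTs). All values are invented for illustration.
import numpy as np
from scipy.stats import spearmanr

assr_amplitude_nv = np.array([45.0, 60.0, 38.0, 72.0, 55.0, 30.0])  # per-subject ASSR amplitude (nV)
srt_db_snr = np.array([-2.0, -5.5, -1.0, -6.0, -4.0, 0.5])          # per-subject SRT (dB SNR)

rho, p_value = spearmanr(assr_amplitude_nv, srt_db_snr)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A negative rho, as reported in the abstract (-0.45 to -0.53), would mean that
# larger responses tend to accompany lower (better) speech reception thresholds.
```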

3.
4.
Subcortical neural coding mechanisms for auditory temporal processing (total citations: 9; self-citations: 0; citations by others: 9)
Biologically relevant sounds such as speech, animal vocalizations and music have distinguishing temporal features that are utilized for effective auditory perception. Common temporal features include sound envelope fluctuations, often modeled in the laboratory by amplitude modulation (AM), and starts and stops in ongoing sounds, which are frequently approximated by hearing researchers as gaps between two sounds or are investigated in forward masking experiments. The auditory system has evolved many neural processing mechanisms for encoding important temporal features of sound. Due to rapid progress made in the field of auditory neuroscience in the past three decades, it is not possible to review all progress in this field in a single article. The goal of the present report is to focus on single-unit mechanisms in the mammalian brainstem auditory system for encoding AM and gaps as illustrative examples of how the system encodes key temporal features of sound. This report, following a systems analysis approach, starts with findings in the auditory nerve and proceeds centrally through the cochlear nucleus, superior olivary complex and inferior colliculus. Some general principles can be seen when reviewing this entire field. For example, as one ascends the central auditory system, a neural encoding shift occurs. An emphasis on synchronous responses for temporal coding exists in the auditory periphery, and more reliance on rate coding occurs as one moves centrally. In addition, for AM, modulation transfer functions become more bandpass as the sound level of the signal is raised, but become more lowpass in shape as background noise is added. In many cases, AM coding can actually increase in the presence of background noise. For gap processing or forward masking, coding for gaps changes from a decrease in spike firing rate for neurons of the peripheral auditory system that have sustained response patterns, to an increase in firing rate for more central neurons with transient responses. Lastly, for gaps and forward masking, as one ascends the auditory system, some suppression effects become quite long (echo suppression), and in some stimulus configurations enhancement to a second sound can take place.
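Because the review above uses amplitude modulation (AM) as the laboratory model of envelope fluctuations, a minimal sketch of a sinusoidally amplitude-modulated tone may help make the stimulus concrete; the carrier and modulation frequencies below are arbitrary example values, not parameters from the review:

```python
# Hypothetical sketch: a sinusoidally amplitude-modulated (SAM) tone,
#   y(t) = [1 + m*sin(2*pi*fm*t)] * sin(2*pi*fc*t)
import numpy as np

fs = 44100        # sample rate (Hz)
fc = 1000.0       # carrier frequency (Hz), arbitrary example value
fm = 40.0         # modulation frequency (Hz), arbitrary example value
m = 1.0           # modulation depth (0..1)
t = np.arange(0, 0.5, 1 / fs)   # 500 ms of signal

envelope = 1 + m * np.sin(2 * np.pi * fm * t)
sam_tone = envelope * np.sin(2 * np.pi * fc * t)
# Synchrony (temporal) coding would phase-lock to the 40-Hz envelope, whereas
# rate coding would change average firing with the modulation parameters.
```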

5.
6.
7.
PURPOSE: The present study examines the brain basis of listening to spoken words in noise, a ubiquitous characteristic of communication, with a focus on the dorsal auditory pathway. METHOD: English-speaking young adults identified single words in 3 listening conditions while their hemodynamic response was measured using fMRI: speech in quiet, speech in moderately loud noise (signal-to-noise ratio [SNR] 20 dB), and speech in loud noise (SNR -5 dB). RESULTS: Behaviorally, participants' performance (both accuracy and reaction time) did not differ between the quiet and SNR 20 dB conditions, whereas they were less accurate and responded more slowly in the SNR -5 dB condition compared with the other 2 conditions. In the superior temporal gyrus (STG), both left and right auditory cortex showed increased activation in the noise conditions relative to quiet, including the middle portion of STG (mSTG). Although the right posterior STG (pSTG) showed similar activation for the 2 noise conditions, the left pSTG showed increased activation in the SNR -5 dB condition relative to the SNR 20 dB condition. CONCLUSION: We found cortical task-independent and noise-dependent effects on speech perception in noise involving bilateral mSTG and left pSTG. These results likely reflect demands in acoustic analysis, auditory-motor integration, and phonological memory, as well as auditory attention.
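The SNR conditions above specify how far the speech level sits above the noise level. A minimal, hypothetical sketch (not the study's actual stimulus code) of scaling noise to reach a target SNR could look like this:

```python
# Hypothetical sketch: mix a speech signal with noise at a target SNR in dB.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale 'noise' so the speech-to-noise power ratio equals snr_db, then mix."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # SNR (dB) = 10*log10(Ps / Pn)  =>  Pn_target = Ps / 10**(SNR/10)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # stand-in for a word token
noise = rng.standard_normal(16000)
moderate_noise_mix = mix_at_snr(speech, noise, snr_db=20)   # SNR 20 dB condition
loud_noise_mix = mix_at_snr(speech, noise, snr_db=-5)       # SNR -5 dB condition
```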

8.
Scientific evidence has demonstrated reorganisation processes in the auditory cortex after sensorineural hearing loss and after overstimulation of certain tonotopic cortical areas, as seen with auditory conditioning techniques. Acoustic rehabilitation reduces the impact of these reorganisation changes. Recent theories explain tinnitus mechanisms as a negative consequence of neural plasticity in the central nervous system after a peripheral insult. Auditory discrimination training (ADT) could partially reverse these maladaptive changes in tonotopic representation and improve tinnitus. We discuss different studies and their efficacy against tinnitus perception and annoyance. Indications, method, dose and sound strategy still need to be established.

9.
Hromádka T, Zador AM. Hearing Research, 2007, 229(1-2): 180-185.
Since the earliest studies of auditory cortex, it has been clear that an animal's behavioral or attentional state can play a crucial role in shaping the response characteristics of single neurons. Much of what has been learned about attention has come from human and animal models, but little is known about the cellular and synaptic mechanisms by which attentional modulation of neuronal responses occurs. The use of rodent experimental models allows us to exploit the full armamentarium of modern cellular and molecular neuroscience techniques. Here we present our program for studying auditory attention, specifically the development of rodent models of attention and the search for its neural correlates.

10.
Objective: To assess the impact of rehabilitation systems (CROS: Contralateral Routing of Signal; BAHA: Bone-Anchored Hearing Aid; CI: cochlear implant) on cortical auditory evoked potentials (CAEP) and auditory performance in unilateral hearing loss. Subjects and method: Twenty-one adults with unilateral hearing loss, using CROS (n = 6), BAHA (n = 6) or CI (n = 9), were included. Seven normal-hearing subjects served as controls. CAEPs were recorded for a /ba/ speech stimulus; for patients, tests were conducted with and without their auditory rehabilitation. Amplitude and latency of the various CAEP components of the global field power (GFP) were measured, and scalp potential fields were mapped. Behavioral assessment used sentence recognition in noise, with and without spatial cues. Results: Only CI induced an N1 peak amplitude change (P < 0.05). CI and CROS increased polarity inversion amplitude in the contralateral ear, and frontocentral negativity on the scalp potential map. CI improved understanding when speech was presented to the implanted ear and noise to the healthy ear, and vice versa. Conclusion: Cochlear implantation had the greatest impact on CAEP morphology and auditory performance. A longitudinal study could analyze the progression of cortical reorganization.

11.
The majority of research findings to date indicate that spatial cues play a minor role in enhancing listeners' ability to parse and detect a sound of interest when it is presented in a complex auditory scene comprising multiple simultaneous sounds. Frequency and temporal differences between sound streams provide more reliable cues for scene analysis as well as for directing attention to relevant auditory 'objects' in complex displays. The present study used naturalistic sounds with varying spectro-temporal profiles to examine whether spatial separation of sound sources can enhance target detection in an auditory search paradigm. The arrays of sounds were presented in virtual auditory space over headphones. The results of Experiment 1 suggest that target detection is enhanced when sound sources are spatially separated relative to when they are presented at the same location. Experiment 2 demonstrated that this effect is most prominent within the first 250 ms of exposure to the array of sounds. These findings suggest that spatial cues may be effective for enhancing early processes such as stream segregation, rather than simply directing attention to objects that have already been segmented.

12.
Both psychophysical and physiological studies have examined plasticity of spatial auditory processing. While a great deal is known about how the system computes basic cues that influence spatial perception, less is known about how these cues are integrated to form spatial percepts and how the auditory system adapts and calibrates in order to maintain accurate spatial perception. After summarizing evidence for plasticity in the spatial auditory pathway, this paper reviews a statistical, decision-theory model of short-term plasticity and a system-level model of the spatial auditory pathway that may help elucidate how long- and short-term experiences influence the computations underlying spatial hearing.

13.
Objective: Auditory neuropathy spectrum disorder (ANSD) affects approximately 10% of patients with sensorineural hearing loss. While many studies report abnormalities at the level of the cochlea, auditory nerve, and brainstem in children with ANSD, much less is known about their cortical development. We examined central auditory maturation in 21 children with ANSD. Design: Morphology, latency and amplitude of the P1 cortical auditory evoked potential (CAEP) were used to assess auditory cortical maturation. Children's scores on a measure of auditory skill development (IT-MAIS) were correlated with CAEPs. Study sample: Participants were 21 children with ANSD; all were hearing aid users. Results: Children with ANSD exhibited differences in central auditory maturation. Overall, two-thirds of the children showed present P1 CAEP responses: just over one-third (38%) showed normal P1 response morphology, latency and amplitude, while another third (33%) showed delayed P1 response latencies and significantly smaller amplitudes. The remaining children (29%) showed abnormal or absent P1 responses. Overall, P1 responses were significantly correlated with auditory skill development. Conclusion: Our results suggest that P1 CAEP responses may be (i) a useful indicator of the extent to which neural dys-synchrony disrupts cortical development, and (ii) a good predictor of behavioral outcome in children with ANSD.
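The P1 latency and amplitude measures described above are conventionally read from an averaged CAEP waveform as the largest positive deflection within an early post-stimulus window. A purely illustrative sketch (the window limits and the waveform are assumptions, not the study's parameters) might be:

```python
# Hypothetical sketch: estimate P1 latency and amplitude from an averaged
# cortical auditory evoked potential (CAEP) waveform.
import numpy as np

t_ms = np.arange(-100, 500)                  # time axis in ms (1 kHz sampling assumed)
rng = np.random.default_rng(1)
caep_uv = rng.normal(0.0, 0.2, t_ms.size)    # stand-in averaged waveform (µV)
caep_uv += 2.0 * np.exp(-((t_ms - 120) ** 2) / (2 * 25 ** 2))  # synthetic P1-like peak near 120 ms

# Assumed P1 search window for young children, e.g. 50-200 ms post-stimulus.
window = (t_ms >= 50) & (t_ms <= 200)
peak_index = np.argmax(caep_uv[window])
p1_latency_ms = t_ms[window][peak_index]
p1_amplitude_uv = caep_uv[window][peak_index]
print(f"P1 latency ~ {p1_latency_ms} ms, amplitude ~ {p1_amplitude_uv:.2f} µV")
```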

14.
We have used positron emission tomography (PET) to test a specific hypothesis of a neural system subserving auditory temporal processing (acoustical stimulus duration discrimination). Maps of the cerebral blood flow distribution during specific stimulations were obtained from five normally-hearing and otherwise healthy subjects. The auditory stimuli consisted of sounds of varying duration and of auditorily presented words in which the duration of the initial phoneme was manipulated. All stimuli alternated with conditions of silence in a subtraction paradigm. The blood flow distribution was mapped with O-15-labelled water. The results demonstrated that stimuli requiring subjects to recognize, memorize, or attend to specific target sounds during temporal processing generally resulted in significant activation of both frontal lobes and of the parietal lobe in the right hemisphere. Based on these results, we hypothesise that a network consisting of anterior and posterior auditory attention and short-term memory sites subserves acoustical stimulus duration perception and analysis (auditory temporal processing).

15.
Scene analysis involves the process of segmenting a field of overlapping objects from each other and from the background. It is a fundamental stage of perception in both vision and hearing. The auditory system encodes complex cues that allow listeners to find boundaries between sequential objects, even when no gap of silence exists between them. In this sense, object perception in hearing is similar to perceiving visual objects defined by isoluminant color, motion or binocular disparity. Motion is one such cue: when a moving sound abruptly disappears from one location and instantly reappears somewhere else, the listener perceives two sequential auditory objects. Smooth reversals of motion direction do not produce this segmentation. We investigated the brain electrical responses evoked by this spatial segmentation cue and compared them to the familiar auditory evoked potential elicited by sound onsets. Segmentation events evoke a pattern of negative and positive deflections that are unlike those evoked by onsets. We identified a negative component in the waveform - the Lateralized Object-Related Negativity - generated by the hemisphere contralateral to the side on which the new sound appears. The relationship between this component and similar components found in related paradigms is considered.

16.
Na+ concentrations in endolymph must be controlled to maintain hair cell function since the transduction channels of hair cells are cation-permeable, but not K+-selective. Flooding or fluctuations of the hair cell cytosol with Na+ would be expected to lead to cellular dysfunction, hearing loss and vertigo. This review briefly describes cellular mechanisms known to be responsible for Na+ homeostasis in each compartment of the inner ear, including the cochlea, saccule, semicircular canals and endolymphatic sac. The influx of Na+ into endolymph of each of the organs is likely via passive diffusion, but these pathways have not yet been identified or characterized. Na+ absorption is controlled by gate-keeper channels in the apical (endolymphatic) membrane of the transporting cells. Highly Na+-selective epithelial sodium channels (ENaCs) control absorption by Reissner's membrane, saccular extramacular epithelium, semicircular canal duct epithelium and endolymphatic sac. ENaC activity is controlled by a number of signal pathways, but most notably by genomic regulation of channel numbers in the membrane via glucocorticoid signaling. Non-selective cation channels in the apical membrane of outer sulcus epithelial cells and vestibular transitional cells mediate Na+ and parasensory K+ absorption. The K+-mediated transduction current in hair cells is also accompanied by a Na+ flux since the transduction channels are non-selective cation channels. Cation absorption by all of these cells is regulated by extracellular ATP via apical non-selective cation channels (P2X receptors). The heterogeneous population of epithelial cells in the endolymphatic sac is thought to have multiple absorptive pathways for Na+ with regulatory pathways that include glucocorticoids and purinergic agonists.

17.
Objectives: To study the changes in behavioural and cortical responses over time in a child with single-sided deafness fitted with a cochlear implant (CI).

Methods: Cortical auditory evoked potentials (CAEPs) in noise (+5 dB signal-to-noise ratio) were recorded and auditory skills were assessed using tests of sound localization, spatial speech perception in noise, and self-ratings of auditory abilities (the Listening Inventory For Education, LIFE, and the Speech, Spatial and Qualities of Hearing questionnaire, SSQ, parental version). Measures were obtained prior to CI fitting and at one, six, and 12 months after CI switch-on.

Results: Spatial speech recognition improved over time. At 12 months post-CI, word recognition scores were similar to those of normal-hearing children. Signal-to-noise ratios for sentences decreased (i.e. improved) over time post-CI. Sound localization markedly improved at 12 months post-CI compared to baseline. Self-perception of difficulty scores decreased over time. Parental ratings of hearing abilities improved compared to baseline for all subscales. There were changes in the P1–N1–P2 complex at 12 months post-CI, which were clearer frontally across stimuli. Further research is needed to understand the significance of such changes after CI fitting for single-sided deafness.

Conclusion: Although the observed changes could reflect maturation, the clinically significant improvement in recognition of speech in noise and the improved questionnaire results suggest that the CI was beneficial, consistent with feedback from the participant.

18.
Auditory cortex contributes to the processing and perception of spectrotemporally complex stimuli. However, the mechanisms by which this is accomplished are not well understood. In this review, we examine evidence that single cortical neurons receive input covering much of the audible spectrum. We then propose an anatomical framework by which spectral information converges on single neurons in primary auditory cortex, via a combination of thalamocortical and intracortical "horizontal" pathways. By its nature, the framework confers sensitivity to specific, spectrotemporally complex stimuli. Finally, to address how spectral integration can be regulated, we show how one neuromodulator, acetylcholine, could act within the hypothesized framework to alter integration in single neurons. The results of these studies promote a cellular understanding of information processing in auditory cortex.

19.
20.
Otoacoustic emissions (OAEs) have become a commonly used clinical tool for assessing cochlear health status, in particular, the integrity of the cochlear amplifier or motor component of cochlear function. Predicting hearing thresholds from OAEs, however, remains a research challenge. Models and experimental data suggest that there are two mechanisms involved in the generation of OAEs. For distortion product, transient, and high-level stimulus frequency emissions, the interaction of multiple sources of emissions in the cochlea leads to amplitude variation in the composite ear canal signal. Multiple sources of emissions complicate simple correlations between audiometric test frequencies and otoacoustic emission frequencies. Current research offers new methods for estimating the individual components of OAE generation. Input-output functions and DP-grams of the nonlinear component of the 2f1-f2 DPOAE may ultimately show better correlations with hearing thresholds. This paper reviews models of OAE generation and methods for estimating the contribution of source components to the composite emission that is recorded in the ear canal. The clinical implications of multiple source components are discussed.
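For reference, the 2f1-f2 distortion product mentioned above falls at a frequency fixed by the two primary tones. A small illustrative calculation (the primary frequency and the f2/f1 ratio below are typical textbook choices, not values from this paper) is:

```python
# Hypothetical sketch: frequency of the cubic distortion product 2*f1 - f2
# for primaries chosen with a commonly used f2/f1 ratio of about 1.22.
f2 = 4000.0               # probe frequency in Hz, assumed example value
ratio = 1.22              # typical clinical f2/f1 ratio
f1 = f2 / ratio           # ≈ 3278.7 Hz
dpoae_freq = 2 * f1 - f2  # ≈ 2557.4 Hz, below both primaries
print(f"f1 = {f1:.1f} Hz, f2 = {f2:.1f} Hz, 2f1-f2 = {dpoae_freq:.1f} Hz")
```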
