Similar Articles
20 similar articles found (search time: 31 ms)
1.
Temporal pitch in electric hearing
Zeng FG 《Hearing research》2002,174(1-2):101-106
Both place and temporal codes in the peripheral auditory system contain pitch information; however, their actual use by the brain is unclear. Here pitch data are reported from users of the cochlear implant, which provides the ability to change the temporal code independently of the place code. With fixed electrode stimulation, both frequency discrimination and pitch estimation data show that cochlear implant users can only discern differences in pitch for frequencies up to about 300 Hz. An integration model can predict pitch estimation from frequency discrimination, reinforcing Fechner's hypothesis relating sensation magnitude to stimulus discriminability. The present results suggest that 300 Hz is the upper boundary of the temporal code and that absolute place information should be included in present pitch models. They further suggest that future cochlear implants need to increase the number of independent electrodes to restore the normal pitch range and resolution.

2.
This investigation examines temporal processing through successive sites in the rat auditory pathway: auditory nerve (AN), anteroventral cochlear nucleus (AVCN) and the medial nucleus of the trapezoid body (MNTB). The degree of phase-locking, measured as vector strength, varied with intensity relative to the cell's threshold, and saturated at a value that depended upon stimulus frequency. A typical pattern showed decline in the saturated vector strength from approximately 0.8 at 400 Hz to about 0.3 at 2000 Hz, with similar profiles in units with a range of characteristic frequencies (480-32,000 Hz). A new expression for temporal dispersion indicates that this variation corresponds to a limiting degree of temporal imprecision, which is relatively consistent between different cells. From AN to AVCN, an increase in vector strength was seen for frequencies below 1000 Hz. At higher frequencies, a decrease in vector strength was observed. From AVCN to MNTB a tendency for temporal coding to be improved below 800 Hz and degraded further above 1500 Hz was seen. This change in temporal processing ability could be attributed to units classified as primary-like with notch (PL(N)). PL(N) MNTB units showed a similar vector strength distribution to PL(N) AVCN units. Our results suggest that AVCN PL(N) units, representing globular bushy cells, are specialised for enhancing the temporal code at low frequencies and relaying this information to principal cells of the MNTB.
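Vector strength, the phase-locking metric used in the abstract above, is straightforward to compute from spike times. A minimal sketch (the function name and the simulated spike trains are illustrative, not taken from the study):

```python
import numpy as np

def vector_strength(spike_times, freq):
    """Goldberg & Brown vector strength: 1.0 means every spike falls at
    the same stimulus phase (perfect phase-locking), 0.0 means no locking."""
    phases = 2 * np.pi * freq * np.asarray(spike_times)  # spike phase (rad)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times)

# Perfectly locked spikes: one spike per cycle of a 400 Hz tone
locked = np.arange(100) / 400.0
print(vector_strength(locked, 400.0))   # → 1.0 (to floating-point precision)

# Uniformly random spike times give vector strength near 0
rng = np.random.default_rng(0)
print(vector_strength(rng.uniform(0, 0.25, 1000), 400.0))  # near 0
```

The metric is a resultant-vector length: each spike contributes a unit vector at its stimulus phase, and the mean vector's length is the vector strength.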

3.
Discharge rates for populations of single auditory nerve fibers in response to 1.5 kHz tone bursts were measured in anesthetized cats. Separate plots of average rate vs. best frequency (rate-place profiles) were made for high, medium and low spontaneous rate (SR) auditory nerve fibers. At the lowest sound levels studied (34 dB SPL), all three SR groups show a peak in the rate-place profile centered around 1.5 kHz. At the highest sound levels studied (87 dB SPL), the average rates of the high and medium SR fibers are saturated across a wide range of best frequencies, but a peak around 1.5 kHz is maintained in the rate-place representation of the low SR fibers. These results show that in addition to the temporal information present in the discharge patterns of auditory nerve fibers, a rate-place representation of a single low-frequency tone exists in the auditory nerve over a wide range of sound levels.

4.
Hanekom JJ  Krüger JJ 《Hearing research》2001,151(1-2):188-204
This paper investigates phase-lock coding of frequency in the auditory system. One objective of the current model was to construct an optimal central estimation mechanism able to extract frequency directly from spike trains. The model bases estimates of the stimulus frequency on inter-spike intervals of spike trains phase-locked to a pure tone stimulus. Phase-locking is the tendency of spikes to cluster around multiples of the stimulus period. It is assumed that these clusters have Gaussian distributions with variance that depends on the amount of phase-locking. Inter-spike intervals are then noisy measurements of the actual period of the stimulus waveform. The problem of estimating frequency from inter-spike intervals can be solved optimally with a Kalman filter. It is shown that the number of inter-spike intervals observed in the stimulus interval determines frequency discrimination at low frequencies, while the variance of spike clusters dominates at higher frequencies. Timing information in spike intervals is sufficient to account for human frequency discrimination performance up to 5000 Hz. When spikes are available on each stimulus cycle, the model can accurately predict frequency discrimination thresholds as a function of frequency, intensity and duration.
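The estimation idea in the abstract above — treating inter-spike intervals as noisy measurements of the stimulus period and fusing them with a Kalman filter — can be sketched for the simplest case: a constant-frequency tone with one interval per cycle. All parameter values below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def estimate_period_kalman(isis, meas_var, q=0.0):
    """Scalar Kalman filter tracking the stimulus period from noisy
    inter-spike intervals (one interval per stimulus cycle assumed).
    meas_var: variance of the interval jitter; q: process noise
    (0 for a constant-frequency tone)."""
    x = isis[0]            # initial period estimate
    p = meas_var           # initial estimate variance
    for z in isis[1:]:
        p = p + q                  # predict: period assumed (nearly) constant
        k = p / (p + meas_var)     # Kalman gain
        x = x + k * (z - x)        # update with the new interval
        p = (1 - k) * p
    return x

rng = np.random.default_rng(1)
true_period = 1 / 500.0                         # 500 Hz tone
isis = true_period + rng.normal(0.0, 1e-4, 200) # jittered intervals
print(1 / estimate_period_kalman(isis, 1e-8))   # close to 500 Hz
```

With constant measurement variance and zero process noise, the filter reduces to a recursive running mean, so the frequency estimate sharpens as more intervals accumulate — consistent with the abstract's point that the number of observed intervals limits discrimination at low frequencies.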

5.
Kale S  Heinz MG 《Hearing research》2012,286(1-2):64-75
The ability of auditory-nerve (AN) fibers to encode modulation frequencies, as characterized by temporal modulation transfer functions (TMTFs), generally shows a low-pass shape with a cut-off frequency that increases with fiber characteristic frequency (CF). Because AN-fiber bandwidth increases with CF, this result has been interpreted to suggest that peripheral filtering has a significant effect on limiting the encoding of higher modulation frequencies. Sensorineural hearing loss (SNHL), which is typically associated with broadened tuning, is thus predicted to increase the range of modulation frequencies encoded; however, perceptual studies have generally not supported this prediction. The present study sought to determine whether the range of modulation frequencies encoded by AN fibers is affected by SNHL, and whether the effects of SNHL on envelope coding are similar at all modulation frequencies within the TMTF passband. Modulation response gain for sinusoidally amplitude modulated (SAM) tones was measured as a function of modulation frequency, with the carrier frequency placed at fiber CF. TMTFs were compared between normal-hearing chinchillas and chinchillas with a noise-induced hearing loss for which AN fibers had significantly broadened tuning. Synchrony and phase responses for individual SAM tone components were quantified to explore a variety of factors that can influence modulation coding. Modulation gain was found to be higher than normal in noise-exposed fibers across the entire range of modulation frequencies encoded by AN fibers. The range of modulation frequencies encoded by noise-exposed AN fibers was not affected by SNHL, as quantified by TMTF 3- and 10-dB cut-off frequencies. These results suggest that physiological factors other than peripheral filtering may have a significant role in determining the range of modulation frequencies encoded in AN fibers. 
Furthermore, these neural data may help to explain the lack of a consistent association between perceptual measures of temporal resolution and degraded frequency selectivity.
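Modulation response gain for SAM-tone responses, as measured in studies like the one above, is commonly defined as 20·log10(2R/m), where R is the vector strength of the spike train at the modulation frequency and m is the stimulus modulation depth. A sketch under that convention (the simulated spike train is illustrative, not data from the study):

```python
import numpy as np

def modulation_gain_db(spike_times, fm, m):
    """Modulation response gain in dB: 20*log10(2*R/m), where R is the
    vector strength at the modulation frequency fm and m is the stimulus
    modulation depth. Gain > 0 dB means the spike train is modulated
    more deeply than the stimulus envelope."""
    ph = 2 * np.pi * fm * np.asarray(spike_times)
    r = np.hypot(np.cos(ph).sum(), np.sin(ph).sum()) / len(spike_times)
    return 20 * np.log10(2 * r / m)

# Spikes thinned by a raised-cosine envelope at fm = 50 Hz, depth m = 0.5:
# firing probability proportional to (1 + m*cos(2*pi*fm*t)).
rng = np.random.default_rng(2)
t = rng.uniform(0.0, 1.0, 200000)
fm, m = 50.0, 0.5
keep = rng.uniform(0.0, 1.0, t.size) < 0.5 * (1 + m * np.cos(2 * np.pi * fm * t))
print(modulation_gain_db(t[keep], fm, m))  # near 0 dB for this envelope
```

For a spike rate proportional to (1 + m·cosθ), the vector strength at fm is m/2, so the gain is 0 dB; noise-exposed fibers showing gains above this baseline is the enhancement the abstract describes.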

6.
Buss E  Hall JW  Grose JH 《Ear and hearing》2004,25(3):242-250
OBJECTIVE: The purpose of this study was to examine the effect of sensorineural hearing loss on the ability to make use of fine temporal information and to evaluate the relation between this ability and the ability to recognize speech. DESIGN: Fourteen observers with normal hearing and 12 observers with sensorineural hearing loss were tested on open-set word recognition and on psychophysical tasks thought to reflect use of fine-structure cues: the detection of 2 Hz frequency modulation (FM) and the discrimination of the rate of amplitude modulation (AM) and quasifrequency modulation (QFM). RESULTS: The results showed relatively poor performance for observers with sensorineural hearing loss on both the speech recognition and psychoacoustical tasks. Of particular interest was the finding of significant correlations within the hearing-loss group between speech recognition performance and the psychoacoustical tasks based on frequency modulation, which are thought to reflect the quality of the coding of temporal fine structure. CONCLUSIONS: These results suggest that sensorineural hearing loss may be associated with a reduced ability to use fine temporal information that is coded by neural phase-locking to stimulus fine-structure and that this may contribute to poor speech recognition performance and to poor performance on psychoacoustical tasks that depend on temporal fine structure.

7.
Recent perceptual studies suggest that listeners with sensorineural hearing loss (SNHL) have a reduced ability to use temporal fine-structure cues, whereas the effects of SNHL on temporal envelope cues are generally thought to be minimal. Several perceptual studies suggest that envelope coding may actually be enhanced following SNHL and that this effect may actually degrade listening in modulated maskers (e.g., competing talkers). The present study examined physiological effects of SNHL on envelope coding in auditory nerve (AN) fibers in relation to fine-structure coding. Responses were compared between anesthetized chinchillas with normal hearing and those with a mild–moderate noise-induced hearing loss. Temporal envelope coding of narrowband-modulated stimuli (sinusoidally amplitude-modulated tones and single-formant stimuli) was quantified with several neural metrics. The relative strength of envelope and fine-structure coding was compared using shuffled correlogram analyses. On average, the strength of envelope coding was enhanced in noise-exposed AN fibers. A high degree of enhanced envelope coding was observed in AN fibers with high thresholds and very steep rate-level functions, which were likely associated with severe outer and inner hair cell damage. Degradation in fine-structure coding was observed in that the transition between AN fibers coding primarily fine structure or envelope occurred at lower characteristic frequencies following SNHL. This relative fine-structure degradation occurred despite no degradation in the fundamental ability of AN fibers to encode fine structure and did not depend on reduced frequency selectivity. Overall, these data suggest the need to consider the relative effects of SNHL on envelope and fine-structure coding in evaluating perceptual deficits in temporal processing of complex stimuli.

8.
Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20–79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.

9.
At the level of the brainstem, precise temporal information is essential for some aspects of binaural processing, while at the level of the cortex, rate and place mechanisms for neural coding seem to predominate. However, we now show that precise timing of steady-state responses to pure tones occurs in the primary auditory cortex (AI). Recordings were made from 163 multi-units in guinea pig AI. All units increased their firing rate in response to pure tones at 100 Hz and 46 (28%) gave sustained responses which were synchronised with the stimulus waveform (phase-locking). The phase-locking units were clustered together in columns. Phase-locking was generally strongest in layers III and IV but was also recorded in layers I, II and V. Good phase-locking was observed over a range of 60-250 Hz: some units (30%) were narrow band while others (37%) were low-pass (33% were not determined). Phase-locking strength was also influenced by sound level: some units showed monotonic increases in strength with level and others were non-monotonic. Ten of the units provided a good temporal representation of the fundamental frequency (270 Hz) of a guinea pig vocalisation (rumble) and may be involved in analysing communication calls.

10.
The period of complex signals is encoded in the bullfrog’s eighth nerve by a synchrony code based on phase-locked responding. We examined how these arrays of phase-locked activity are represented in different subnuclei of the auditory midbrain, the torus semicircularis (TS). Recording sites in different areas of the TS differ in their ability to synchronize to the envelope of complex stimuli, and these differences in synchronous activity are related to response latency. Cells in the caudal principal nucleus (cell sparse zone) have longer latencies, and show little or no phase-locked activity, even in response to low modulation rates, while some cells in lateral areas of the TS (magnocellular nucleus, lateral part of principal nucleus) synchronize to rates as high as 90–100 Hz. At midlevels of the TS, there is a lateral-to-medial gradient of synchronization ability: cells located more laterally show better phase-locking than those located more medially. Pooled all-order interval histograms from short latency cells located in the lateral TS represent the waveform periodicity of a biologically relevant complex harmonic signal at different stimulus levels, and in a manner consistent with behavioral data from vocalizing male frogs. Long latency cells in the caudal parts of the TS (cell sparse zone, caudal magnocellular nucleus) code stimulus period by changes in spike rate, rather than by changes in synchronized activity. These data suggest that neural codes based on rate processing and time domain processing are represented in anatomically different areas of the TS. They further show that a population-based analysis can increase the precision with which temporal features are represented in the central auditory system.

11.
Abstract

Objective

To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS) with special attention to the processing of low-frequency temporal fine structure.

Method

Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds. These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS.

Results

Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking.

Discussion

These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.

12.
The primary purpose of this investigation was to determine whether temporal coding in the auditory system was the same for younger and older listeners. Temporal coding was assessed by amplitude-modulated auditory steady-state responses (AM ASSRs) as a physiologic measure of phase-locking capability. The secondary purpose of this study was to determine whether AM ASSRs were related to behavioral speech understanding ability. AM ASSRs showed that the ability of the auditory system to phase lock to a temporally altered signal is dependent on modulation rate, carrier frequency, and age of the listener. Specifically, the interaction of frequency and age showed that younger listeners had more phase locking than old listeners at 500 Hz. The number of phase-locked responses for the 500 Hz carrier frequency was significantly correlated to word-recognition performance. In conclusion, the effect of aging on temporal processing, as measured by phase locking with AM ASSRs, was found for low-frequency stimuli where phase locking in the auditory system should be optimal. The exploration, and use, of electrophysiologic responses to measure auditory timing analysis in humans has the potential to facilitate the understanding of speech perception difficulties in older listeners.

13.
Temporal and spectral information in speech recognition
This paper examines the acoustic elements of speech recognition from temporal and spectral perspectives, with the aim of providing a theoretical basis for improving cochlear implant coding strategies. Vocoder techniques were used in a series of experiments to determine the interaction of temporal and spectral information in speech recognition and in the identification of the four Mandarin tones. Spectral information was controlled by the number of vocoder channels, while temporal information was controlled by the cut-off frequency of the vocoder's low-pass filter. Normal-hearing adults participated in the perceptual experiments. Results showed that both temporal and spectral information are important for phoneme recognition. In quiet, consonant and vowel recognition reached plateau performance at 8 and 12 channels and at low-pass cut-off frequencies of 16 Hz and 4 Hz, respectively. In noise, vowel recognition benefited from an increased number of channels. Recognition of the four Mandarin tones required a low-pass cut-off frequency of 256 Hz to reach plateau performance, far more temporal information than is needed for English phoneme recognition. Tone recognition had not saturated even at 12 channels, the highest number tested in this study. To investigate the relative importance of fine structure and temporal envelope for tone recognition, auditory-chimaera techniques were used to exchange the temporal envelope and fine structure of signals carrying different tones. Perceptual results showed that tone recognition depends mainly on fine structure, similar to findings for music perception, and unlike speech recognition, which relies mainly on temporal envelope information. Therefore, increasing the number of effective channels in cochlear implant systems should benefit speech recognition, especially in noise, and delivering more fine-structure information in cochlear implant stimulation may improve tone recognition in implant users.
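The vocoder manipulation described above — spectral resolution set by the number of channels, temporal information set by the envelope low-pass cut-off — can be sketched with a minimal FFT-based noise vocoder. Band edges, carrier choice, and the function name are illustrative assumptions, not the original processing chain:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels, env_cutoff_hz, f_lo=80.0, f_hi=7000.0):
    """Minimal noise-vocoder sketch: split the input into n_channels
    log-spaced bands, extract each band's envelope, low-pass it at
    env_cutoff_hz, and use it to modulate band-limited noise."""
    rng = np.random.default_rng(0)
    n = len(signal)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.fft.rfft(signal)
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_band = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(in_band, spec, 0), n)
        env = np.abs(band)                               # rectify
        env_spec = np.fft.rfft(env)                      # low-pass the envelope:
        env = np.fft.irfft(np.where(freqs <= env_cutoff_hz, env_spec, 0), n)
        c_spec = np.fft.rfft(rng.standard_normal(n))     # band-limited noise carrier
        carrier = np.fft.irfft(np.where(in_band, c_spec, 0), n)
        out += np.clip(env, 0, None) * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
y = noise_vocode(x, fs, n_channels=8, env_cutoff_hz=16.0)
```

Raising `n_channels` adds spectral detail; raising `env_cutoff_hz` passes faster envelope fluctuations — the two knobs the experiments vary independently.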

14.
Modulations of amplitude and frequency are common features of natural sounds, and are prominent in behaviorally important communication sounds. The mammalian auditory cortex is known to contain representations of these important stimulus parameters. This study describes the distributed representations of tone frequency and modulation rate in the rat primary auditory cortex (A1). Detailed maps of auditory cortex responses to single tones and tone trains were constructed from recordings from 50-60 microelectrode penetrations introduced into each hemisphere. Recorded data demonstrated that the cortex uses a distributed coding strategy to represent both spectral and temporal information in the rat, as in other species. Just as spectral information is encoded in the firing patterns of neurons tuned to different frequencies, temporal information appears to be encoded using a set of filters covering a range of behaviorally important repetition rates. Although the average A1 repetition rate transfer function (RRTF) was low-pass with a sharp drop-off in evoked spikes per tone above 9 pulses per second (pps), individual RRTFs exhibited significant structure between 4 and 10 pps, including substantial facilitation or depression to tones presented at specific rates. No organized topography of these temporal filters could be determined.

15.
The whole nerve action potential (AP) from the auditory nerve and midbrain averaged evoked potential (AEP) were recorded in Hyla chrysoscelis and H. versicolor in response to synthesized amplitude-modulated stimuli with variable modulation frequencies (Fm). The AP from these frogs is similar to the potential described for mammals and showed a bandpass characteristic in its ability to follow sinusoidally amplitude-modulated (AM) sound stimuli. A lesioning study suggests that the midbrain AEP is a localized neural response of neurons near the ventral border of the torus semicircularis. The AEP is a complex waveform consisting of fast and slow components. The fast component encodes the temporal structure of acoustic stimuli and is used to measure temporal sensitivity in these two species. The AEP behaves like a low-pass filter with a cutoff frequency of 250 Hz when tracking AM signals. Threshold for detection requires a modulation depth of 8–12% of the total stimulus amplitude (ΔI = 1.5-2.0 dB). Relative to the eighth nerve AP, the AEP displays an enhanced coding of AM signals when Fm < 100 Hz, and a slightly inferior ability to code Fm above 250 Hz. The AEP reflects only that portion of the neural response that encodes amplitude fluctuations. In comparison to the range of amplitude fluctuations coded by single units in the rat inferior colliculus or by human evoked potentials, the frog AEP codes higher rates of Fm. The proposal that these frogs process AM stimuli solely on the basis of amplitude fluctuations, and do not use spectral cues at higher modulation frequencies is considered. The AM sensitivity of the AEP, which encompasses most biologically relevant rates of amplitude fluctuation for the animal, and the limited frequency resolution of the periphery, lend support to this proposal. However, convergent spectral processing at higher auditory centers cannot be excluded by this study.
Psychophysical tests will be required to determine whether both of these mechanisms may be operating during temporal information processing in anurans.

16.
Previous experiments on the sense of hearing in goldfish have used a stimulus generalization paradigm to investigate the perceptual dimensions evoked by spectrally and temporally complex sounds. The present experiments investigated the effects on perception of the frequency separation between two tones. In the first set of experiments, six groups of goldfish were classically conditioned to a single tone and then tested for generalization to two-tone complexes having one frequency component equal to the conditioning tone, and the other differing by 2–256 Hz. Generalization declined with increasing frequency differences up to about 32 Hz, and then increased for wider frequency separations. These functions indicate that a restricted range of beat rates produces a perceptual quality that is quite unlike that of a single tone. The generalization function of frequency separation resembles the inverse of the ‘fluctuation strength’ and ‘roughness’ functions for human listeners. The second experiment investigated the effects of spectral location on the perception of a 32 Hz beat rate. Goldfish were conditioned to a two-tone complex (500 and 532 Hz) and then tested for generalization to single tones at various frequencies between 200 and 1200 Hz, and to two-tone complexes having a 32 Hz beat rate but with the lower tone component at various frequencies. For single-tone stimuli, generalization was relatively weak but showed a peak at 500 Hz. For the two-tone stimuli, generalization was more robust, but showed a similarly shaped gradient centered on 500 Hz. Thus, goldfish behaved as if they had acquired information about both temporal modulation and the frequency location of the tone components. These perceptual behaviors appear to be shared with humans and other vertebrates.

17.
Perceptual studies of speech intelligibility have shown that slow variations of acoustic envelope (ENV) in a small set of frequency bands provides adequate information for good perceptual performance in quiet, whereas acoustic temporal fine-structure (TFS) cues play a supporting role in background noise. However, the implications for neural coding are prone to misinterpretation because the mean-rate neural representation can contain recovered ENV cues from cochlear filtering of TFS. We investigated ENV recovery and spike-time TFS coding using objective measures of simulated mean-rate and spike-timing neural representations of chimaeric speech, in which either the ENV or the TFS is replaced by another signal. We (a) evaluated the levels of mean-rate and spike-timing neural information for two categories of chimaeric speech, one retaining ENV cues and the other TFS; (b) examined the level of recovered ENV from cochlear filtering of TFS speech; (c) examined and quantified the contribution to recovered ENV from spike-timing cues using a lateral inhibition network (LIN); and (d) constructed linear regression models with objective measures of mean-rate and spike-timing neural cues and subjective phoneme perception scores from normal-hearing listeners. The mean-rate neural cues from the original ENV and recovered ENV partially accounted for perceptual score variability, with additional variability explained by the recovered ENV from the LIN-processed TFS speech. The best model predictions of chimaeric speech intelligibility were found when both the mean-rate and spike-timing neural cues were included, providing further evidence that spike-time coding of TFS cues is important for intelligibility when the speech envelope is degraded.

18.
K G Hill  G Stange  J Mo 《Hearing research》1989,39(1-2):63-73
Spike potentials were recorded from single fibres in the auditory nerve of the pigeon. In responses elicited by tonal stimuli, the timing of each spike relative to stimulus waveform was measured and period histograms were constructed. Phase locking of spikes was estimated in terms of a synchronicity index obtained by vector addition within the period histogram. A second measure of synchrony in the spike responses was obtained, that of temporal dispersion. For a population of fibres, vector strength of phase locking decreased for frequencies above 1 kHz, as reported for several other species. Temporal dispersion, however, also decreased with frequency, indicating enhanced temporal synchrony as frequency increased within the bandwidth of phase locking. The upper frequency limit of phase locking appears to depend on irreducible jitter of biological origin in the timing of spikes. For individual fibres, the bandwidth of synchronization of spikes consistently exceeds the response area, covering in addition the areas of suppression adjacent to the response area. Spike trains suppressed by a tonal stimulus become synchronized to that stimulus. Phase angles of synchronized responses systematically change as a function of tone level, when tone frequency is above or below CF, as reported for other avian species. Synchronicity and phase angle intensity functions are quite independent of spike rate intensity functions.
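The temporal-dispersion point above — that vector strength can fall with frequency while absolute spike timing improves — follows from converting vector strength to an equivalent time jitter. Assuming Gaussian jitter, R = exp(−(2πfσ)²/2), so σ = √(−2 ln R)/(2πf); this Gaussian-jitter conversion is a common assumption, not necessarily the authors' exact expression:

```python
import numpy as np

def temporal_dispersion_us(vs, freq):
    """Equivalent spike-time jitter (standard deviation, microseconds)
    for a given vector strength, assuming Gaussian jitter:
    R = exp(-(2*pi*f*sigma)**2 / 2)  =>  sigma = sqrt(-2*ln(R)) / (2*pi*f)."""
    return 1e6 * np.sqrt(-2.0 * np.log(vs)) / (2.0 * np.pi * freq)

# Illustrative values: vector strength falling from 0.8 at 400 Hz
# to 0.3 at 2000 Hz still corresponds to *less* absolute jitter.
print(temporal_dispersion_us(0.8, 400.0))   # about 266 us
print(temporal_dispersion_us(0.3, 2000.0))  # about 123 us
```

Because the phase spread is measured in fractions of a cycle, a fixed amount of jitter in microseconds costs more vector strength at higher frequencies, which is why dispersion can shrink even as vector strength declines.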

19.
In providing profoundly hearing-impaired persons with processed speech through a signal-processing hearing aid, it is important that the new speech code matches their auditory capacities. This processing capacity for auditory information was investigated in this study. In part 1, the subjects’ ability to judge similarities among 8 different but related harmonic complexes was studied. The patterns contained different numbers of harmonics to a 125-Hz fundamental frequency; the harmonics had been spread over the spectrum in various ways. The perceptual judgments appeared to be based on a temporal cue, beat strength, and a spectral cue, related to the balance of high and low frequency components. In part 2, three sets of synthetic vowels were presented to the subjects. Each vowel was realized by summing harmonically related in-phase sinusoids at two formant frequencies. The sets differed in the number of sinusoids per formant: 1, 2 or 3. It was found that the subjects used spectral cues and vowel length for differentiating among the vowels. The overall results show the limited but perhaps usable ability of the profoundly impaired ear to handle spectral information. Implications of these results for the development of signal-processing hearing aids for the profoundly hearing impaired are discussed.

20.
Numerous studies have demonstrated that the frequency spectrum of sounds is represented in the neural code of single auditory nerve fibres both spatially and temporally, but few experiments have been designed to test which of these two representations of frequency is used in the discrimination of complex sounds such as speech and music. This paper reviews the roles of place and temporal coding of frequency in the nervous system as a basis for frequency discrimination of complex sounds such as those in speech. Animal studies based on frequency analysis in the cochlea have shown that the place code changes systematically as a function of sound intensity and therefore lacks the robustness required to explain pitch perception (in humans), which is nearly independent of sound intensity. Further indication that the place principle plays a minor role in discrimination of speech comes from observations that signs of impairment of the spectral analysis in the cochlea in some individuals are not associated with impairments in speech discrimination. The importance of temporal coding is supported by the observation that injuries to the auditory nerve, assumed to impair temporal coherence of the discharges of auditory nerve fibres, are associated with grave impairments in speech discrimination. These observations indicate that temporal coding of sounds is more important for discrimination of speech than place coding. The implications of these findings for the design of prostheses such as cochlear implants are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号