Similar Literature
 20 similar records found (search time: 31 ms)
1.
Pinna-based spectral cues for sound localization in cat.
The directional dependence of the transfer function from free field plane waves to a point near the tympanic membrane (TM) was measured in anesthetized domestic cats. A probe tube microphone was placed approximately 3 mm from the TM from beneath the head in order to keep the pinna intact. Transfer functions were computed as the ratio of the spectrum of a click recorded near the TM to the spectrum of the click in the free field. We analyze the transfer functions in three frequency ranges: low frequencies (less than 5 kHz), where interaural level differences vary smoothly with azimuth; midfrequencies (5-18 kHz), where a prominent spectral notch is observed; and high frequencies (greater than 18 kHz), where the transfer functions vary greatly with source location. Because no two source directions produce the same transfer function, the spectrum of a broadband sound at the TM could serve as a sound localization cue for both elevation and azimuth. In particular, we show that source direction is uniquely determined, for source directions in front of the cat, from the frequencies of the midfrequency spectral notches in the two ears. The validity of the transfer functions as measures of the acoustic input to the auditory system is considered in terms of models of sound propagation in the ear canal.
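The spectrum-ratio computation described above can be sketched as follows; this is a minimal Python illustration (the function name and the epsilon guard are assumptions, not the authors' code):

```python
import numpy as np

def transfer_function(click_at_tm, click_free_field, fs):
    """Directional transfer function: ratio of the spectrum of the click
    recorded near the tympanic membrane to the free-field click spectrum."""
    n = max(len(click_at_tm), len(click_free_field))
    tm_spec = np.fft.rfft(click_at_tm, n)
    ff_spec = np.fft.rfft(click_free_field, n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Magnitude ratio in dB; the epsilon guards against division by zero.
    eps = 1e-12
    gain_db = 20.0 * np.log10((np.abs(tm_spec) + eps) / (np.abs(ff_spec) + eps))
    return freqs, gain_db
```

A flat gain (e.g. a doubled click) yields a constant +6 dB transfer function, while direction-dependent filtering by the pinna would carve notches into `gain_db`.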

2.
Cortical responses are adjusted and optimized throughout life to meet changing behavioral demands and to compensate for peripheral damage. The cholinergic nucleus basalis (NB) gates cortical plasticity and focuses learning on behaviorally meaningful stimuli. By systematically varying the acoustic parameters of the sound paired with NB activation, we have previously shown that tone frequency and amplitude modulation rate alter the topography and selectivity of frequency tuning in primary auditory cortex. This result suggests that network-level rules operate in the cortex to guide reorganization based on specific features of the sensory input associated with NB activity. This report summarizes recent evidence that temporal response properties of cortical neurons are influenced by the spectral characteristics of sounds associated with cholinergic modulation. For example, repeated pairing of a spectrally complex (ripple) stimulus with NB activation decreased the minimum response latency for the ripple, but lengthened the minimum latency for tones. Pairing a rapid train of tones with NB activation only increased the maximum following rate of cortical neurons when the carrier frequency of each train was randomly varied. These results suggest that spectral and temporal parameters of acoustic experiences interact to shape spectrotemporal selectivity in the cortex. Additional experiments with more complex stimuli are needed to clarify how the cortex learns natural sounds such as speech.

3.
The ability of humans to localize sounds remains relatively constant across a range of intensities well above detection threshold, and increasing the spectral content of the stimulus results in an improvement in localization ability. For broadband stimuli, intensities near detection threshold result in fewer and weaker binaural cues used in azimuth localization because the stimulus energy at the high- and low-frequency ends of the audible spectrum falls below detection threshold. Thus, the ability to localize broadband sounds in azimuth is predicted to be degraded at audible but near threshold stimulus intensities. The spectral cues for elevation localization (spectral peaks and notches generated by the head-related transfer function) span a narrower frequency range than those for azimuth. As the stimulus intensity decreases, the ability to detect the stimulus frequencies corresponding to the spectral notches will be more strongly affected than the ability to detect frequencies outside the range where these spectral cues are useful. Consequently, decreasing the stimulus intensity should degrade localization in both azimuth and elevation and create a greater deficit in elevation localization due to the narrower band of audible frequencies containing elevation cues compared to azimuth cues. The present study measured the ability of 11 normal human subjects to localize broadband noise stimuli along the midsagittal plane and horizontal meridian at stimulus intensities of 14, 22, and 30 dB above the subject's detection threshold using a go/no-go behavioral paradigm. Localization ability decreased in both azimuth and elevation with decreasing stimulus intensity, and this effect was greater on localization in elevation than on localization in azimuth. The differential effects of stimulus intensity on sound localization in azimuth and elevation found in the present study may provide a valuable tool in investigating the neural correlates of sound location perception.

4.
Auditory cortex updates incoming information on a segment-by-segment basis for human speech and animal communication. Measuring repetition rate transfer functions (RRTFs) captures temporal responses to repetitive sounds. In this study, we used repetitive click trains to describe the spatial distribution of RRTF responses in cat anterior auditory field (AAF) and to discern potential variations in local temporal processing capacity. A majority of RRTF filters are band-pass. Temporal parameters estimated from RRTFs and corrected for characteristic frequency or latency dependencies are non-homogeneously distributed across AAF. Unlike the shallow global gradient observed in spectral receptive field parameters, transitions from loci with high to low temporal parameters are steep. Quantitative spatial analysis suggests non-uniform, circumscribed local organization for temporal pattern processing superimposed on global organization for spectral processing in cat AAF.
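Classifying an RRTF as band-pass could, in principle, be done along these lines: compare the response at the slowest and fastest tested repetition rates against the peak response. This is an illustrative sketch, not the study's analysis; the 50% criterion and the function name are assumptions:

```python
import numpy as np

def classify_rrtf(responses, criterion=0.5):
    """Classify a repetition-rate transfer function by shape.
    `responses` holds the mean evoked response at each tested repetition
    rate, ordered from slowest to fastest click train. An edge of the
    tested range counts as attenuated when it falls below
    `criterion` * peak (an assumed, illustrative cutoff)."""
    responses = np.asarray(responses, dtype=float)
    peak = responses.max()
    low_edge_cut = responses[0] < criterion * peak
    high_edge_cut = responses[-1] < criterion * peak
    if low_edge_cut and high_edge_cut:
        return 'band-pass'
    if high_edge_cut:
        return 'low-pass'
    if low_edge_cut:
        return 'high-pass'
    return 'all-pass'
```

For example, a unit responding best to mid-range click rates and weakly at both extremes would be labeled band-pass, matching the majority of AAF filters reported above.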

5.
Numerous studies have demonstrated that the frequency spectrum of sounds is represented in the neural code of single auditory nerve fibres both spatially and temporally, but few experiments have been designed to test which of these two representations of frequency is used in the discrimination of complex sounds such as speech and music. This paper reviews the roles of place and temporal coding of frequency in the nervous system as a basis for frequency discrimination of complex sounds such as those in speech. Animal studies based on frequency analysis in the cochlea have shown that the place code changes systematically as a function of sound intensity and therefore lacks the robustness required to explain pitch perception (in humans), which is nearly independent of sound intensity. Further indication that the place principle plays a minor role in discrimination of speech comes from observations that signs of impairment of the spectral analysis in the cochlea in some individuals are not associated with impairments in speech discrimination. The importance of temporal coding is supported by the observation that injuries to the auditory nerve, assumed to impair temporal coherence of the discharges of auditory nerve fibres, are associated with grave impairments in speech discrimination. These observations indicate that temporal coding of sounds is more important for discrimination of speech than place coding. The implications of these findings for the design of prostheses such as cochlear implants are discussed.

6.
The auditory-evoked responses were recorded in 5 subjects using vertex, right temporal and left temporal electrodes simultaneously. Clicks at 30 dB sensation level were used as stimuli; one click was presented only to the right ear, or one click only to the left ear, or one click to the right ear and another click to the left ear with a variable interaural time difference in this latter case (0-150 ms). The N-P amplitude variations and the N and P latency variations were studied and compared with the perceived lateralizations of the sound source.

7.
The head-related transfer function (HRTF) of the cat adds directionally dependent energy minima to the amplitude spectrum of complex sounds. These spectral notches are a principal cue for the localization of sound source elevation. Physiological evidence suggests that the dorsal cochlear nucleus (DCN) plays a critical role in the brainstem processing of this directional feature. Type O units in the central nucleus of the inferior colliculus (ICC) are a primary target of ascending DCN projections and, therefore, may represent midbrain specializations for the auditory processing of spectral cues for sound localization. Behavioral studies confirm a loss of sound orientation accuracy when DCN projections to the inferior colliculus are surgically lesioned. This study used simple analogs of HRTF notches to characterize single-unit response patterns in the ICC of decerebrate cats that may contribute to the directional sensitivity of the brain's spectral processing pathways. Manipulations of notch frequency and bandwidth demonstrated frequency-specific excitatory responses that have the capacity to encode HRTF-based cues for sound source location. These response patterns were limited to type O units in the ICC and have not been observed for the projection neurons of the DCN. The unique spectral integration properties of type O units suggest that DCN influences are transformed into a more selective representation of sound source location by a local convergence of wideband excitatory and frequency-tuned inhibitory inputs.

8.
Background noise poses a significant obstacle for auditory perception, especially among individuals with hearing loss. To better understand the physiological basis of this perceptual impediment, the present study evaluated the effects of background noise on the auditory nerve representation of head-related transfer functions (HRTFs). These complex spectral shapes describe the directional filtering effects of the head and torso. When a broadband sound passes through the outer ear en route to the tympanic membrane, the HRTF alters its spectrum in a manner that establishes the perceived location of the sound source. HRTF-shaped noise shares many of the acoustic features of human speech, while communicating biologically relevant localization cues that are generalized across mammalian species. Previous studies have used parametric manipulations of random spectral shapes to elucidate HRTF coding principles at various stages of the cat's auditory system. This study extended that body of work by examining the effects of sound level and background noise on the quality of spectral coding in the auditory nerve. When fibers were classified by their spontaneous rates, the coding properties of the more numerous low-threshold, high-spontaneous rate fibers were found to degrade at high presentation levels and in low signal-to-noise ratios. Because cats are known to maintain accurate directional hearing under these challenging listening conditions, behavioral performance may be disproportionally based on the enhanced dynamic range of the less common high-threshold, low-spontaneous rate fibers.

9.
Objectives: This study aimed to improve access to high-frequency interaural level differences (ILD) by applying extreme frequency compression (FC) in the hearing aid (HA) of 13 bimodal listeners, using a cochlear implant (CI) and conventional HA in opposite ears.

Design: An experimental signal-adaptive frequency-lowering algorithm was tested, compressing frequencies above 160 Hz into the individual audible range of residual hearing, but only for consonants (adaptive FC), thus protecting vowel formants, with the aim to preserve speech perception. In a cross-over design with at least 5 weeks of acclimatization between sessions, bimodal performance with and without adaptive FC was compared for horizontal sound localization, speech understanding in quiet and in noise, and vowel, consonant and voice-pitch perception.

Results: On average, adaptive FC did not significantly affect any of the test results. Yet, two subjects who were fitted with a relatively weak frequency compression ratio showed improved horizontal sound localization. After the study, four subjects preferred adaptive FC, four preferred standard frequency mapping, and four had no preference. Notably, the subjects preferring adaptive FC were those with the best performance on all tasks, both with and without adaptive FC.

Conclusion: On a group level, extreme adaptive FC did not change sound localization and speech understanding in bimodal listeners. Possible reasons are too strong compression ratios, insufficient residual hearing, or that the adaptive switching, although preserving vowel perception, may have been ineffective in producing consistent ILD cues. Individual results suggested that two subjects were able to integrate the frequency-compressed HA input with that of the CI, and benefitted from enhanced binaural cues for horizontal sound localization.

10.
Frequency transformation by the external ears provides the spectral cues for localization of broadband sounds in the vertical plane. When human subjects listen to spectrally-impoverished narrowband sounds presented in a free field, the perceived locations vary with the centre frequency and are largely independent of the actual source locations. The present study explored the substrate of spatial illusion by examining the responses of cortical neurons to narrowband stimuli. Single-unit responses were recorded in area A2 of anaesthetized cats. Broadband noise bursts were presented at 14 locations in the vertical median plane, from 60 degrees below the front horizon, up and over the head, to 20 degrees below the rear horizon. Narrowband (1/6-oct) noise bursts were presented at +80 degrees elevation. An artificial neural network was trained to recognize the spike patterns elicited by broadband noise and, thereby, to register the spike patterns with sound-source elevation. When the trained network was presented with neural responses elicited by narrowband noise, the elevation estimated by the neural network varied with the centre frequency of the narrowband stimuli. Consistent with psychophysical results in humans, the locations associated with a given centre frequency could be predicted by comparing the stimulus spectrum with the directional transfer functions of the cat's external ear. The results support the hypothesis that full spike patterns (including spike counts and spike timing) of cortical neurons code information about sound location and that the auditory cortical neurons play a pivotal role in localization behaviour.
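The study decoded elevation from cortical spike patterns with a trained artificial neural network; the sketch below substitutes a much simpler nearest-template decoder to illustrate the same idea of registering spike patterns with sound-source elevation. It is a stand-in for illustration only, not the authors' network, and all names are assumptions:

```python
import numpy as np

def train_templates(spike_patterns, elevations):
    """Average the spike patterns (e.g. binned spike counts over time)
    recorded at each training elevation into one template per elevation."""
    templates = {}
    for elev in sorted(set(elevations)):
        rows = [p for p, e in zip(spike_patterns, elevations) if e == elev]
        templates[elev] = np.mean(rows, axis=0)
    return templates

def estimate_elevation(templates, pattern):
    """Report the elevation whose template lies closest (Euclidean
    distance) to the observed spike pattern."""
    pattern = np.asarray(pattern, dtype=float)
    return min(templates, key=lambda e: np.linalg.norm(templates[e] - pattern))
```

Presenting such a decoder with responses to narrowband noise would, as in the study, yield an elevation estimate driven by the stimulus spectrum rather than the true source location.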

11.
《Acta oto-laryngologica》2012,132(2):263-266

12.
Minimum audible angles (m.a.a.s) of untrained subjects were measured in a room using pure tone (0.5 to 8 kHz) and click train (noise) stimuli (two-alternative, forced-choice, constant stimulus with feedback and head movements permitted, horizontal plane, 0 degree azimuth). The m.a.a.s and standard deviations (SD) were 3.0 degrees +/- 5.2 degrees for click trains and 10.9 degrees +/- 21.0 degrees for pure tones. The m.a.a.s did not vary significantly with frequency. The m.a.a.s and their SDs matched values reported from localization error studies. Narrowing the testing range from 32 degrees to 8 degrees resulted in random responses to the pure tones, though the click trains were readily localized. One subject presented with 2500 trials using an 8 kHz pure tone (with feedback, 16 degrees range) increased her responses from random to 88% correct during the testing. The click train m.a.a.s probably reflect the typical noise localization abilities of the general population. For pure-tone m.a.a.s, experience/training may result in improved accuracy not applicable to the general public. The presence of a well-defined time cue and a broad bandwidth sound results in significantly lower m.a.a.s than were obtained using pure tones, which presumably present only interaural phase or intensity cues.

13.
The ability of the auditory organ to resolve brief changes in an acoustic signal presented either monaurally or binaurally is not only of great importance in the processing of speech, it is also involved in the localization of sound stimuli and in selective listening. In the latter context, the electric activity of the primary auditory cortical projection field AI of the cat has been studied with the aim of evaluating specific response patterns evoked by brief changes in interaural time difference. The differences in response of the neuron populations sampled by two recording electrodes indicate that, within this area, there are significant differences in temporal resolution ability. Whereas click stimuli elicit distinct potential patterns at the two sites, with a brief change in interaural time difference, a marked response is recorded by only one of the electrodes. This response is characterized by a decrease in amplitude as the interaural time difference is reduced and as the duration of the time-shift stimulus decreases.

14.
Background: Past research has reported that repeated occurrences of otitis media (OM) at an early age have a negative impact on speech perception at a later age. The present study documents temporal and spectral processing and their relation to speech perception in noise in normal and atypical groups.

Objectives: The present study evaluated the relation between speech perception in noise and temporal and spectral processing abilities in children in normal and atypical groups.

Methods: The study included two experiments. In the first experiment, temporal resolution and frequency discrimination were evaluated in a normal group and in three subgroups of an atypical group (children with a history of OM between the chronological ages of 6 months and 2 years: a) fewer than four episodes, b) four to nine episodes, and c) more than nine episodes), using measures of the temporal modulation transfer function and a frequency discrimination test. In the second experiment, SNR-50 was evaluated for each group of participants. All participants had normal hearing and middle ear status at the time of testing.

Results: Children in the atypical subgroups had significantly poorer modulation detection thresholds, peak sensitivity and bandwidth, and frequency discrimination at each F0 than normal-hearing listeners. Furthermore, there was a significant correlation between measures of temporal resolution, frequency discrimination, and speech perception in noise, which suggests that the atypical groups have significant impairment in extracting envelope as well as fine-structure cues from the signal.

Conclusion: The results supported the idea that episodes of OM before 2 years of age can produce periods of sensory deprivation that alter temporal and spectral skills, which in turn has negative consequences on speech perception in noise.

15.
We disrupted periodicity cues by temporally jittering the speech signal to explore how such distortion might affect word identification. Jittering distorts the fine structure of the speech signal with negligible alteration of either its long-term spectral or amplitude envelope characteristics. In Experiment 1, word identification in noise was significantly reduced in young, normal-hearing adults when sentences were temporally jittered at frequencies below 1.2 kHz. The accuracy of the younger adults in identifying jittered speech in noise was similar to that found previously for older adults with good audiograms when they listened to intact speech in noise. In Experiment 2, to rule out the possibility that the reductions in word identification were due to spectral distortion, we also tested a simulation of cochlear hearing loss that produced spectral distortion equivalent to that produced by jittering, but this simulation had significantly less temporal distortion than was produced by jittering. There was no significant reduction in the accuracy of word identification when only the frequency region below 1.2 kHz was spectrally distorted. Hence, it is the temporal distortion rather than the spectral distortion of the low-frequency components that disrupts word identification.
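Temporal jittering of the low-frequency band might be implemented along these lines. This is a stand-in illustration, since the paper's exact jittering algorithm is not given here; the jitter depth, smoothing, and function name are assumed parameters:

```python
import numpy as np

def jitter_fine_structure(signal, fs, cutoff=1200.0, max_jitter_ms=0.5, seed=0):
    """Warp the sampling times of the components below `cutoff` with a
    smoothed random offset, distorting temporal fine structure while
    leaving the long-term spectrum and envelope largely intact."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    # Split the signal at the cutoff in the frequency domain.
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    low = np.fft.irfft(np.where(freqs < cutoff, spec, 0.0), n)
    high = signal - low
    # Smoothed random perturbation of the low band's time axis.
    rng = np.random.default_rng(seed)
    width = max(int(fs / cutoff), 1)
    jitter = np.convolve(rng.standard_normal(n), np.ones(width) / width, mode='same')
    jitter *= (max_jitter_ms / 1000.0) / (np.abs(jitter).max() + 1e-12)
    t = np.arange(n) / fs
    # Resample the low band at the jittered times and recombine.
    return np.interp(t + jitter, t, low) + high
```

Applied to a speech waveform, this changes the waveform's fine structure substantially while keeping its overall energy nearly unchanged, mirroring the manipulation described above.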

16.
The temporal fine structure (TFS) of sound contributes significantly to the perception of music and speech in noise. The evaluation of new strategies to improve TFS delivery in cochlear implants (CIs) relies upon the assessment of fine structure encoding. Most modern CI sound processing schemes do not encode within-channel TFS per se, but some TFS information is delivered through temporal envelope cues across multiple channels. Positive and negative Schroeder-phase harmonic complexes differ primarily in acoustic TFS and provide a potential test of TFS discrimination ability in CI users for current and future processing strategies. The ability to discriminate Schroeder-phase stimuli was evaluated in 24 CI users and 7 normal-hearing listeners at four fundamental frequencies: 50, 100, 200, and 400 Hz. The dependent variables were percent correct at each fundamental frequency, average score across all fundamental frequencies, and a maximum-likelihood-predicted threshold fundamental frequency for 75% correct. CI listeners scored better than chance for all fundamental frequencies tested. The 50-Hz, average, and predicted threshold scores correlated significantly with consonant–nucleus–consonant word scores. The 200-Hz score correlated with a measure of speech perception in speech-shaped noise. Pitch-direction sensitivity was predicted jointly by the 400-Hz Schroeder score and a spectral ripple discrimination task. The results demonstrate that the Schroeder test is a potentially useful measure of clinically relevant temporal processing abilities in CI users.
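Schroeder-phase complexes follow a closed-form phase recipe, theta_n = polarity × π·n·(n+1)/N over N harmonics; flipping the polarity reverses the fine structure while leaving the amplitude spectrum unchanged. A minimal sketch (amplitudes, sampling parameters, and normalization are illustrative choices):

```python
import numpy as np

def schroeder_complex(f0, n_harmonics, fs, dur, polarity=1):
    """Harmonic complex with Schroeder phases
    theta_n = polarity * pi * n * (n + 1) / N  (N = number of harmonics).
    Positive (+1) and negative (-1) polarities share the same amplitude
    spectrum but have opposite fine-structure (sweep) direction."""
    t = np.arange(int(round(fs * dur))) / fs
    sig = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        phase = polarity * np.pi * n * (n + 1) / n_harmonics
        sig += np.cos(2 * np.pi * n * f0 * t + phase)
    return sig / n_harmonics
```

Because the two polarities are spectrally identical, discriminating them requires sensitivity to temporal fine structure, which is what makes the pair a useful probe for CI listeners.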

17.
OBJECTIVE: To quantify binaural advantage for auditory localization in the horizontal plane by bilateral cochlear implant (CI) recipients. Also, to determine whether the use of dual microphones with one implant improves localization. METHODS: Twenty subjects from the UK multicenter trial of bilateral cochlear implantation with Nucleus 24 K/M device were recruited. Sound localization was assessed in an anechoic room with an 11-loudspeaker array under four test conditions: right CI, left CI, binaural CI, and dual microphone. Two runs were undertaken for each of five stimuli (speech, tones, noise, transients, and reverberant speech). Order of conditions was counterbalanced across subjects. RESULTS: Mean localization error with bilateral implants was 24 degrees compared with 67 degrees for monaural implant and dual microphone conditions (chance performance is 65 degrees). Normal controls average 2 to 3 degrees in similar conditions. Binaural performance was significantly better than monaural performance for all subjects, for all stimulus types, and for different sound sources. Only small differences in performance with different stimuli were observed. CONCLUSIONS: Bilateral cochlear implantation with the Nucleus 24 device provides marked improvement in horizontal plane localization abilities compared with unilateral CI use for a range of stimuli having different spectral and temporal characteristics. Benefit was obtained by all subjects, for all stimulus types, and for all sound directions. However, binaural performance was still worse than that obtained by normal hearing listeners and hearing aid users with the same methodology. Monaural localization performance was at chance. There is no benefit for localization with dual microphones.

18.
There are three main cues to sound location: the interaural differences in time (ITD) and level (ILD) as well as the monaural spectral shape cues. These cues are generated by the spatial- and frequency-dependent filtering of propagating sound waves by the head and external ears. Although the chinchilla has been used for decades to study the anatomy, physiology, and psychophysics of audition, including binaural and spatial hearing, little is actually known about the sound pressure transformations by the head and pinnae and the resulting sound localization cues available to them. Here, we measured the directional transfer functions (DTFs), the directional components of the head-related transfer functions, for 9 adult chinchillas. The resulting localization cues were computed from the DTFs. In the frontal hemisphere, spectral notch cues were present for frequencies from ~6-18 kHz. In general, the frequency corresponding to the notch increased with increases in source elevation as well as in azimuth towards the ipsilateral ear. The ILDs demonstrated a strong correlation with source azimuth and frequency. The maximum ILDs were <10 dB for frequencies <5 kHz, and ranged from 10-30 dB for frequencies >5 kHz. The maximum ITDs were dependent on frequency, yielding 236 μs at 4 kHz and 336 μs at 250 Hz. Removal of the pinnae eliminated the spectral notch cues, reduced the acoustic gain and the ILDs, altered the acoustic axis, and reduced the ITDs.
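The ILD and ITD cues described above can be computed from measured ear signals roughly as follows. This is a generic sketch, not the study's analysis pipeline; the function names and sign convention are assumptions:

```python
import numpy as np

def ild_db(left_mag, right_mag):
    """Interaural level difference per frequency, in dB, from the DTF
    magnitude spectra measured at the two ears."""
    left_mag = np.asarray(left_mag, dtype=float)
    right_mag = np.asarray(right_mag, dtype=float)
    return 20.0 * np.log10(left_mag / right_mag)

def itd_seconds(left, right, fs):
    """ITD estimated as the lag of the cross-correlation peak between
    the two ear signals; positive values mean the left-ear signal
    arrives later than the right-ear signal."""
    corr = np.correlate(left, right, mode='full')
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs
```

With broadband recordings at both ears, `ild_db` yields the frequency-dependent level cue and `itd_seconds` the timing cue that together, with the monaural spectral notches, specify source direction.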

19.
The spectral responses of cat single primary auditory nerve fibers to sinusoidal amplitude-modulated (AM) and double-sideband (DSB) acoustic signals applied to the ear were examined. DSB is an amplitude-modulated signal with a suppressed carrier. Period histograms were compiled from the neural spike-train data, and the frequency spectrum was determined by Fourier transforming these histograms. For DSB signals, spectral components were found to be present at the frequencies of the stimulus as well as at certain combination frequencies. For AM signals, several clusters of spectral components were present. The lowest-frequency cluster consisted of components at DC, at the modulation frequency, and at its harmonics. A higher-frequency cluster occurs around a component with the frequency of the carrier. The components of this cluster are separated from the carrier by the modulation frequency and its harmonics. Yet higher-frequency clusters appear around multiples of the carrier frequency with components at frequencies separated from these multiples by the modulation frequency and its harmonics. The magnitudes of these spectral components were determined for carrier frequencies located below, at, and above the characteristic frequency of the units, and for different stimulus levels, modulation frequencies, and modulation depths. The low-frequency components present in the neural spike train appear to be the result of demodulation taking place in the inner ear. The demodulated components are strong and are present over a wide range of sound levels, carrier frequencies, modulation frequencies, and nerve-fiber characteristics. This demodulation may be significant for speech recognition.

20.
This personal reflection outlines the discoveries at the University of Melbourne leading to the multi-channel cochlear implant, and its development industrially by Cochlear Limited. My earlier experimental electrophysiological research demonstrated temporal coding occurred for only low frequencies, i.e. below 200-500 pulses/second. I was able to confirm these findings perceptually in behaviourally conditioned animals. In addition, these studies showed that temporal discrimination occurred across spatial coding channels. These experimental results correlated with the later conscious experience for electrical stimulation in my implant patients. In addition, the mid-to-high frequencies were coded in part by place of stimulation using bipolar and monopolar stimulation to restrict current spread. Furthermore, place of stimulation had the qualities of sharpness and dullness, and was also experienced as vowels. Owing to the limitation in coding speech with a physiological model due to the overlap of electrical current leading to unpredictable variations in loudness, a speech coding strategy that extracted the most important speech features for transmission through an electro-neural 'bottle-neck' to the brain was explored. Our inaugural strategy, discovered in 1978, extracted the second formant for place of stimulation, voicing for rate of stimulation, and sound pressure for current level. This was the first coding strategy to provide open-set speech understanding, as shown by standard audiological tests, and it became the first clinically successful interface between the world and human consciousness. This strategy was improved with place coding for the third formant or high-frequency spectrum, and then the spectral maxima. In 1989, I operated on our first patient to receive a bilateral implant, and in 1990, the first with a bimodal processor. 
The psychophysics and speech perception for these showed that the stimuli from each side could be fused into a single image, and localized according to differences in intensity and time of arrival of the stimuli. There were significant improvements for speech perception in noise. In 1985, I implanted our first children with the multi-channel prosthesis and found that speech understanding and spoken language were greatly improved the younger the child at surgery, and especially when younger than 12 months. Speech understanding was strongly related to the development of place coding. In 1990, the US Food and Drug Administration approved the implant for deaf children, the first such approval by any world health regulatory body, making it the first major advance in helping deaf children to communicate.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号