20 related references found.
1.
Simon Carlile Toby Blackman 《Journal of the Association for Research in Otolaryngology》2014,15(2):249-263
Previous research has demonstrated that, over a period of weeks, the auditory system accommodates to changes in the monaural spectral cues for sound locations within the frontal region of space. We were interested to determine if similar accommodation could occur for locations in the posterior regions of space, i.e. in the absence of contemporaneous visual information that indicates any mismatch between the perceived and actual location of a sound source. To distort the normal spectral cues to sound location, eight listeners wore small moulds in each ear. HRTF recordings confirmed that while the moulds substantially altered the monaural spectral cues, sufficient residual cues were retained to provide a basis for relearning. Compared to control measures, sound localization performance initially decreased significantly, with a sevenfold increase in front–back confusions and a more-than-twofold increase in elevation errors. Subjects wore the moulds continuously for a period of up to 60 days (median 38 days), over which time performance improved but remained significantly poorer than control levels. Sound localization performance for frontal locations (audio-visual field) was compared with that for posterior space (audio-only field), and there was no significant difference between regions in either the extent or rate of accommodation. This suggests a common mechanism for both regions of space that does not rely on contemporaneous visual information as a teacher signal for recalibration of the auditory system to modified spectral cues.
2.
Duck O. Kim Brian Bishop Shigeyuki Kuwada 《Journal of the Association for Research in Otolaryngology》2010,11(4):541-557
There are numerous studies measuring the transfer functions representing signal transformation between a source and each ear canal, i.e., the head-related transfer functions (HRTFs), for various species. However, only a handful of these address the effects of sound source distance on HRTFs. This is the first study of HRTFs in the rabbit where the emphasis is on the effects of sound source distance and azimuth on HRTFs. With the rabbit placed in an anechoic chamber, we made acoustic measurements with miniature microphones placed deep in each ear canal to a sound source at different positions (10–160 cm distance, ±150° azimuth). The sound was a logarithmically swept broadband chirp. For comparisons, we also obtained the HRTFs from a racquetball and a computational model for a rigid sphere. We found that (1) the spectral shape of the HRTF in each ear changed with sound source location; (2) interaural level difference (ILD) increased with decreasing distance and with increasing frequency. Furthermore, ILDs can be substantial even at low frequencies when distance is close; and (3) interaural time difference (ITD) decreased with decreasing distance and generally increased with decreasing frequency. The observations in the rabbit were reproduced, in general, by those in the racquetball, albeit greater in magnitude in the rabbit. In the sphere model, the results were partly similar to and partly different from those in the racquetball and the rabbit. These findings refute the common notions that ILD is negligible at low frequencies and that ITD is constant across frequency. These misconceptions became evident when distance-dependent changes were examined.
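To make the cue extraction described in this abstract concrete, the following minimal Python sketch estimates a broadband ILD and ITD from a pair of measured head-related impulse responses. The function name, sampling rate, and synthetic impulse responses are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch (not from the paper): extracting ILD and ITD from a pair of
# measured head-related impulse responses. Names and values are assumptions.
import numpy as np

def ild_itd_from_hrirs(h_left, h_right, fs):
    """Estimate broadband ILD (dB) and ITD (s) from left/right impulse responses."""
    # ILD: ratio of total energy reaching each ear, in dB
    ild_db = 10.0 * np.log10(np.sum(h_left ** 2) / np.sum(h_right ** 2))

    # ITD: lag of the peak of the interaural cross-correlation
    xcorr = np.correlate(h_left, h_right, mode="full")
    lags = np.arange(-len(h_right) + 1, len(h_left))
    itd_s = lags[np.argmax(xcorr)] / fs
    return ild_db, itd_s

# Example with synthetic impulse responses: right ear ~300 us later and 6 dB quieter
fs = 96000
h_l = np.zeros(512); h_l[10] = 1.0
h_r = np.zeros(512); h_r[10 + int(300e-6 * fs)] = 10 ** (-6 / 20)
print(ild_itd_from_hrirs(h_l, h_r, fs))   # approximately (6.0, -0.00029): 6 dB ILD, ~290 us ITD
```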
3.
Lina A. J. Reiss Ramnarayan Ramachandran Bradford J. May 《Journal of the Association for Research in Otolaryngology》2011,12(1):71-88
Background noise poses a significant obstacle for auditory perception, especially among individuals with hearing loss. To better understand the physiological basis of this perceptual impediment, the present study evaluated the effects of background noise on the auditory nerve representation of head-related transfer functions (HRTFs). These complex spectral shapes describe the directional filtering effects of the head and torso. When a broadband sound passes through the outer ear en route to the tympanic membrane, the HRTF alters its spectrum in a manner that establishes the perceived location of the sound source. HRTF-shaped noise shares many of the acoustic features of human speech, while communicating biologically relevant localization cues that are generalized across mammalian species. Previous studies have used parametric manipulations of random spectral shapes to elucidate HRTF coding principles at various stages of the cat’s auditory system. This study extended that body of work by examining the effects of sound level and background noise on the quality of spectral coding in the auditory nerve. When fibers were classified by their spontaneous rates, the coding properties of the more numerous low-threshold, high-spontaneous rate fibers were found to degrade at high presentation levels and in low signal-to-noise ratios. Because cats are known to maintain accurate directional hearing under these challenging listening conditions, behavioral performance may be disproportionally based on the enhanced dynamic range of the less common high-threshold, low-spontaneous rate fibers.
4.
The dorsal cochlear nucleus (DCN) receives afferent input from the auditory nerve and is thus usually thought of as a monaural nucleus, but it also receives inputs from the contralateral cochlear nucleus as well as descending projections from binaural nuclei. Evidence suggests that some of these commissural and efferent projections are excitatory, whereas others are inhibitory. The goals of this study were to investigate the nature and effects of these inputs in the DCN by measuring DCN principal cell (type IV unit) responses to a variety of contralateral monaural and binaural stimuli. As expected, the results of contralateral stimulation demonstrate a mixture of excitatory and inhibitory influences, although inhibitory effects predominate. Most type IV units are weakly, if at all, inhibited by tones but are strongly inhibited by broadband noise (BBN). The inhibition evoked by BBN is also low threshold and short latency. This inhibition is abolished and excitation is revealed when strychnine, a glycine-receptor antagonist, is applied to the DCN; application of bicuculline, a GABAA-receptor antagonist, has similar effects but does not block the onset of inhibition. Manipulations of discrete fiber bundles suggest that the inhibitory, but not excitatory, inputs to DCN principal cells enter the DCN via its output pathway, and that the short latency inhibition is carried by commissural axons. Consistent with their respective monaural effects, responses to binaural tones as a function of interaural level difference are essentially the same as responses to ipsilateral tones, whereas binaural BBN responses decrease with increasing contralateral level. In comparison to monaural responses, binaural responses to virtual space stimuli show enhanced sensitivity to the elevation of a sound source in ipsilateral space but reduced sensitivity in contralateral space. These results show that the contralateral inputs to the DCN are functionally relevant in natural listening conditions, and that one role of these inputs is to enhance DCN processing of spectral sound localization cues produced by the pinna.
5.
The contention that normally binaural listeners can localize sound under monaural conditions has been challenged by Wightman and Kistler (J. Acoust. Soc. Am. 101:1050–1063, 1997), who found that listeners are almost completely unable to localize virtual sources of sound when sound is presented to only one ear. Wightman and Kistler's results raise the question of whether monaural spectral cues are used by listeners to localize sound under binaural conditions. We have examined the possibility that monaural spectral cues provide useful information regarding sound-source elevation and front–back hemifield when interaural time differences are available to specify sound-source lateral angle. The accuracy with which elevation and front–back hemifield could be determined was compared between a monaural condition and a binaural condition in which a wide-band signal was presented to the near ear and a version of the signal that had been lowpass-filtered at 2.5 kHz was presented to the far ear. It was found that accuracy was substantially greater in the latter condition, suggesting that information regarding sound-source lateral angle is required for monaural spectral cues to elevation and front–back hemifield to be correctly interpreted.
6.
Peter Keating Fernando R. Nodal Kohilan Gananandan Andreas L. Schulz Andrew J. King 《Journal of the Association for Research in Otolaryngology》2013,14(4):561-572
Although the ferret has become an important model species for studying both fundamental and clinical aspects of spatial hearing, previous behavioral work has focused on studies of sound localization and spatial release from masking in the free field. This makes it difficult to tease apart the role played by different spatial cues. In humans and other species, interaural time differences (ITDs) and interaural level differences (ILDs) play a critical role in sound localization in the azimuthal plane and also facilitate sound source separation in noisy environments. In this study, we used a range of broadband noise stimuli presented via customized earphones to measure ITD and ILD sensitivity in the ferret. Our behavioral data show that ferrets are extremely sensitive to changes in either binaural cue, with levels of performance approximating that found in humans. The measured thresholds were relatively stable despite extensive and prolonged (>16 weeks) testing on ITD and ILD tasks with broadband stimuli. For both cues, sensitivity was reduced at shorter durations. In addition, subtle effects of changing the stimulus envelope were observed on ITD, but not ILD, thresholds. Sensitivity to these cues also differed in other ways. Whereas ILD sensitivity was unaffected by changes in average binaural level or interaural correlation, the same manipulations produced much larger effects on ITD sensitivity, with thresholds declining when either of these parameters was reduced. The binaural sensitivity measured in this study can largely account for the ability of ferrets to localize broadband stimuli in the azimuthal plane. Our results are also broadly consistent with data from humans and confirm the ferret as an excellent experimental model for studying spatial hearing.
7.
Thornton JL Chevallier KM Koka K Lupo JE Tollin DJ 《Journal of the Association for Research in Otolaryngology》2012,13(5):641-654
Otitis media with effusion (OME) is a pathologic condition of the middle ear that leads to a mild to moderate conductive hearing loss as a result of fluid in the middle ear. Recurring OME in children during the first few years of life has been shown to be associated with poor detection and recognition of sounds in noisy environments, hypothesized to result from altered sound localization cues. To explore this hypothesis, we filled the middle ear space of chinchillas with different viscosities and volumes of silicone oil to simulate varying degrees of OME. While the effects of middle ear effusions on the interaural level difference (ILD) cue to location are known, little is known about whether and how middle ear effusions affect interaural time differences (ITDs). Cochlear microphonic amplitudes and phases were measured in response to sounds delivered from several locations in azimuth before and after filling the middle ear with fluid. Significant attenuations (20–40 dB) of sound were observed when the middle ear was filled with at least 1.0 ml of fluid with a viscosity of 3.5 Poise (P) or greater. As expected, ILDs were altered by ~30 dB. Additionally, ITDs were shifted by ~600 μs for low frequency stimuli (<4 kHz) due to a delay in the transmission of sound to the inner ear. The data show that in an experimental model of OME, ILDs and ITDs are shifted in the spatial direction of the ear without the experimental effusion.
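As an illustration of how a phase shift in the cochlear microphonic translates into a time delay (the basis of the roughly 600 μs ITD shift reported above), here is a hedged one-line conversion; the function name and example values are assumptions, not the authors' analysis.

```python
# Illustrative sketch (not the authors' code): converting a measured cochlear-
# microphonic phase shift at a given stimulus frequency into a time delay.
import numpy as np

def phase_to_delay_us(phase_shift_rad, freq_hz):
    """Time delay (in microseconds) implied by a phase shift at one frequency."""
    return 1e6 * phase_shift_rad / (2.0 * np.pi * freq_hz)

# Example: a phase lag of ~3.0 rad at 800 Hz corresponds to roughly 600 us,
# comparable to the low-frequency ITD shifts reported above.
print(phase_to_delay_us(3.0, 800.0))   # ~597 us
```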
8.
Ying-Yee Kong Ala Somarowthu Nai Ding 《Journal of the Association for Research in Otolaryngology》2015,16(6):783-796
This study investigates the effect of spectral degradation on cortical speech encoding in complex auditory scenes. Young normal-hearing listeners were simultaneously presented with two speech streams and were instructed to attend to only one of them. The speech mixtures were subjected to noise-channel vocoding to preserve the temporal envelope and degrade the spectral information of speech. Each subject was tested with five spectral resolution conditions (unprocessed speech, 64-, 32-, 16-, and 8-channel vocoder conditions) and two target-to-masker ratio (TMR) conditions (3 and 0 dB). Ongoing electroencephalographic (EEG) responses and speech comprehension were measured in each spectral and TMR condition for each subject. Neural tracking of each speech stream was characterized by cross-correlating the EEG responses with the envelope of each of the simultaneous speech streams at different time lags. Results showed that spectral degradation and TMR both significantly influenced how top-down attention modulated the EEG responses to the attended and unattended speech. That is, the EEG responses to the attended and unattended speech streams differed more for the higher (unprocessed, 64 ch, and 32 ch) than the lower (16 and 8 ch) spectral resolution conditions, as well as for the higher (3 dB) than the lower TMR (0 dB) condition. The magnitude of differential neural modulation responses to the attended and unattended speech streams significantly correlated with speech comprehension scores. These results suggest that severe spectral degradation and low TMR hinder speech stream segregation, making it difficult to employ top-down attention to differentially process different speech streams.
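A minimal sketch of the neural-tracking analysis described above: cross-correlate an EEG channel with the temporal envelope of a speech stream over a range of time lags. The envelope extraction via the Hilbert transform and all variable names are assumptions rather than the authors' code.

```python
# Hedged sketch of envelope tracking: lagged correlation between EEG and a
# speech envelope. Not the authors' implementation.
import numpy as np
from scipy.signal import hilbert

def lagged_xcorr(eeg, speech, fs, max_lag_s=0.5):
    """Normalized correlation between EEG and a speech envelope at each lag."""
    envelope = np.abs(hilbert(speech))                     # temporal envelope of the speech stream
    eeg = (eeg - eeg.mean()) / eeg.std()
    envelope = (envelope - envelope.mean()) / envelope.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.corrcoef(eeg[max(0, k):len(eeg) + min(0, k)],
                              envelope[max(0, -k):len(envelope) + min(0, -k)])[0, 1]
                  for k in lags])
    return lags / fs, r

# Example with synthetic data: EEG that weakly follows the envelope at ~100 ms lag
fs = 128
speech = np.random.randn(60 * fs)
eeg = np.roll(np.abs(hilbert(speech)), int(0.1 * fs)) + 2 * np.random.randn(60 * fs)
lags, r = lagged_xcorr(eeg, speech, fs)
print(lags[np.argmax(r)])   # close to 0.1 s
```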
9.
10.
Gene Expression Profiles of the Rat Cochlea, Cochlear Nucleus, and Inferior Colliculus (total citations: 4; self-citations: 2; citations by others: 2)
Younsook Cho Tzy-Wen L. Gong Timo Stöver Margaret I. Lomax Richard A. Altschuler 《Journal of the Association for Research in Otolaryngology》2002,3(1):54-67
High-throughput DNA microarray technology allows for the assessment of large numbers of genes and can reveal gene expression in a specific region, differential gene expression between regions, as well as changes in gene expression under changing experimental conditions or with a particular disease. The present study used a gene array to profile normal gene expression in the rat whole cochlea, two subregions of the cochlea (modiolar and sensorineural epithelium), and the cochlear nucleus and inferior colliculus of the auditory brainstem. The hippocampus was also assessed as a well-characterized reference tissue. Approximately 40% of the 588 genes on the array showed expression over background. When the criterion for a signal threshold was set conservatively at twice background, the number of genes above the signal threshold ranged from approximately 20% in the cochlea to 30% in the inferior colliculus. While much of the gene expression pattern was expected based on the literature, gene profiles also revealed expression of genes that had not been reported previously. Many genes were expressed in all regions while others were differentially expressed (defined as greater than a twofold difference in expression between regions). A greater number of differentially expressed genes were found when comparing peripheral (cochlear) and central nervous system regions than when comparing the central auditory regions and the hippocampus. Several families of insulin-like growth factor binding proteins, matrix metalloproteinases, and tissue inhibitor of metalloproteinases were among the genes expressed at much higher levels in the cochlea compared with the central nervous system regions.
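The expression criteria mentioned above (signal above twice background; greater than a twofold difference between regions) reduce to simple array comparisons. The sketch below is an illustrative assumption with hypothetical data, not the study's arrays or analysis pipeline.

```python
# Minimal sketch of the thresholding and differential-expression criteria.
import numpy as np

def expressed(signal, background, factor=2.0):
    """Boolean mask of genes whose signal exceeds `factor` times background."""
    return signal > factor * background

def differentially_expressed(region_a, region_b, fold=2.0):
    """Genes whose expression differs by more than `fold` between two regions."""
    ratio = np.maximum(region_a, region_b) / np.minimum(region_a, region_b)
    return ratio > fold

# Example: 588 hypothetical genes, two regions, a common background estimate
rng = np.random.default_rng(0)
cochlea = rng.lognormal(mean=1.0, sigma=1.0, size=588)
colliculus = rng.lognormal(mean=1.2, sigma=1.0, size=588)
background = 5.0
print(expressed(cochlea, background).mean())                 # fraction above threshold
print(differentially_expressed(cochlea, colliculus).mean())  # fraction >2-fold different
```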
11.
Jianwen Wendy Gu Barbara S. Herrmann Robert A. Levine Jennifer R. Melcher 《Journal of the Association for Research in Otolaryngology》2012,13(6):819-833
Numerous studies have demonstrated elevated spontaneous and sound-evoked brainstem activity in animal models of tinnitus, but data on brainstem function in people with this common clinical condition are sparse. Here, auditory nerve and brainstem function in response to sound was assessed via auditory brainstem responses (ABR) in humans with and without tinnitus. Tinnitus subjects showed reduced wave I amplitude (indicating reduced auditory nerve activity) but enhanced wave V (reflecting elevated input to the inferior colliculi) compared with non-tinnitus subjects matched in age, sex, and pure-tone threshold. The transformation from reduced peripheral activity to central hyperactivity in the tinnitus group was especially apparent in the V/I and III/I amplitude ratios. Compared with a third cohort of younger, non-tinnitus subjects, both the tinnitus and matched non-tinnitus groups showed elevated thresholds above 4 kHz and reduced wave I amplitude, indicating that the differences between tinnitus and matched non-tinnitus subjects occurred against a backdrop of shared peripheral dysfunction that, while not tinnitus specific, cannot be discounted as a factor in tinnitus development. Animal lesion and human neuroanatomical data combine to indicate that waves III and V in humans reflect activity in a pathway originating in the ventral cochlear nucleus (VCN) and with spherical bushy cells (SBC) in particular. We conclude that the elevated III/I and V/I amplitude ratios in tinnitus subjects reflect disproportionately high activity in the SBC pathway for a given amount of peripheral input. The results imply a role for the VCN in tinnitus and suggest the SBC pathway as a target for tinnitus treatment.
12.
Objective: To study the synaptic responses of single neurons in the mouse inferior colliculus (IC) to monaural contralateral and ipsilateral acoustic stimulation, as well as their integrative responses to simultaneous binaural stimulation, and to explore the underlying neurophysiological mechanisms and neural circuits. Methods: In 52 normal C57 mice, an acoustic stimulation system was used to record the frequency-amplitude response areas (FARA) of single IC neurons to monaural contralateral and monaural ipsilateral stimulation, yielding each neuron's characteristic frequency (CF) and minimum threshold (MT). Using in vivo whole-cell patch-clamp recording under optimal acoustic stimulation (i.e., sound parameters set to CF and MT), the synaptic responses of the same IC neuron to monaural contralateral and monaural ipsilateral stimulation, as well as the integrative synaptic responses to simultaneous binaural stimulation, were recorded and classified. Results: A total of 146 IC neurons were recorded. The contralateral and ipsilateral CFs were 14.9±4.8 and 14.7±5.0 kHz, respectively, and were linearly related with a coefficient of 1.0258. The contralateral MT (19.3±19.3 dB) was significantly lower than the ipsilateral MT (45.1±18.6 dB; P<0.001). According to whether the response to monaural stimulation was excitation (E), no response (O), or inhibition (I), the 146 neurons were classified into seven binaural types: EE, EO, EI, II, IO, IE, and complex-mode (CM), accounting for 66.4% (97/146), 15.8% (23/146), 4.1% (6/146), 6.8% (10/146), 1.4% (2/146), 1.4% (2/146), and 4.1% (6/146), respectively. Based on their binaural integration properties, EE neurons were further divided into suppressed (EE/I), facilitated (EE/F), and non-integrating (EE/N) types, accounting for 38.1% (37/97), 20.6% (20/97), and 41.2% (40/97), respectively. EO and II neurons showed either suppressive or no binaural integration, EI neurons showed only suppressive integration, whereas IO and IE neurons showed no binaural integration. Conclusion: The seven types of binaural IC neurons (EE, EO, EI, II, IO, IE, and CM) each possess distinct bilateral synaptic architectures and neural circuits.
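The binaural classification scheme described above can be summarized by a small labeling rule. The sketch below is an illustrative assumption; the CM (complex-mode) class covers neurons whose responses do not fit the simple two-letter scheme.

```python
# Illustrative labeling rule for the binaural classes described above:
# first letter = contralateral response, second = ipsilateral response,
# with E = excitation, O = no response, I = inhibition.
def binaural_type(contra: str, ipsi: str) -> str:
    """Return the binaural class (e.g., 'EE', 'EO', 'EI', ...) for one neuron."""
    for r in (contra, ipsi):
        if r not in {"E", "O", "I"}:
            raise ValueError("responses must be 'E', 'O', or 'I'")
    return contra + ipsi
    # Neurons with responses too complex for this scheme were labeled CM in the study.

print(binaural_type("E", "E"))   # 'EE', the most common class (66.4%) in the study
print(binaural_type("E", "O"))   # 'EO'
```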
13.
Anke Neuheiser Minoo Lenarz Guenter Reuter Roger Calixto Ingo Nolte Thomas Lenarz Hubert H. Lim 《Journal of the Association for Research in Otolaryngology》2010,11(4):689-708
The auditory midbrain implant (AMI), which consists of a single shank array designed for stimulation within the central nucleus of the inferior colliculus (ICC), has been developed for deaf patients who cannot benefit from a cochlear implant. Currently, performance levels in clinical trials for the AMI are far from those achieved by the cochlear implant and vary dramatically across patients, in part due to stimulation location effects. As an initial step towards improving the AMI, we investigated how stimulation of different regions along the isofrequency domain of the ICC as well as varying pulse phase durations and levels affected auditory cortical activity in anesthetized guinea pigs. This study was motivated by the need to determine in which region to implant the single shank array within a three-dimensional ICC structure and what stimulus parameters to use in patients. Our findings indicate that complex and unfavorable cortical activation properties are elicited by stimulation of caudal–dorsal ICC regions with the AMI array. Our results also confirm the existence of different functional regions along the isofrequency domain of the ICC (i.e., a caudal–dorsal and a rostral–ventral region), which have traditionally gone unclassified. Based on our study as well as previous animal and human AMI findings, we may need to deliver more complex stimuli than currently used in AMI patients to effectively activate the caudal ICC or ensure that the single shank AMI is only implanted into a rostral–ventral ICC region in future patients.
14.
Tessa-Jonne F. Ropp Kerrie L. Tiedemann Eric D. Young Bradford J. May 《Journal of the Association for Research in Otolaryngology》2014,15(6):1007-1022
This study describes the long-term effects of sound-induced cochlear trauma on spontaneous discharge rates in the central nucleus of the inferior colliculus (ICC). As in previous studies, single-unit recordings in Sprague–Dawley rats revealed pervasive increases in spontaneous discharge rates. Based on differences in their sources of input, it was hypothesized that physiologically defined neural populations of the auditory midbrain would reveal the brainstem sources that dictate ICC hyperactivity. Abnormal spontaneous activity was restricted to target neurons of the ventral cochlear nucleus. Nearly identical patterns of hyperactivity were observed in the contralateral and ipsilateral ICC. The elevation in spontaneous activity extended to frequencies well below and above the region of maximum threshold shift. This lack of frequency organization suggests that ICC hyperactivity may be influenced by regions of the brainstem that are not tonotopically organized. Sound-induced hyperactivity is often observed in animals with behavioral signs of tinnitus. Prior to electrophysiological recording, rats were screened for tinnitus by measuring gap pre-pulse inhibition of the acoustic startle reflex (GPIASR). Rats with positive phenotypes did not exhibit unique patterns of ICC hyperactivity. This ambiguity raises concerns regarding animal behavioral models of tinnitus. If our screening procedures were valid, ICC hyperactivity is observed in animals without behavioral indications of the disorder. Alternatively, if the perception of tinnitus is strictly linked to ongoing ICC hyperactivity, our current behavioral approach failed to provide a reliable assessment of tinnitus state.
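For readers unfamiliar with GPIASR screening, the hedged sketch below illustrates the underlying ratio: a silent gap in background noise normally suppresses the acoustic startle reflex, and a ratio near 1 (weak suppression) is taken as a behavioral sign of tinnitus. The criterion value, function names, and example amplitudes are assumptions, not the study's protocol.

```python
# Hedged sketch of a gap pre-pulse inhibition (GPIAS) screen. Values are illustrative.
import numpy as np

def gpias_ratio(startle_with_gap, startle_without_gap):
    """Mean startle amplitude on gap trials, relative to no-gap trials."""
    return np.mean(startle_with_gap) / np.mean(startle_without_gap)

def tinnitus_positive(ratio, criterion=0.9):
    """Ratios near 1 mean the gap failed to suppress the startle (possible tinnitus)."""
    return ratio > criterion

r = gpias_ratio([0.8, 0.9, 1.0], [1.0, 1.1, 0.9])   # arbitrary example amplitudes
print(r, tinnitus_positive(r))
```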
15.
Heath G. Jones Kanthaiah Koka Jennifer L. Thornton Daniel J. Tollin 《Journal of the Association for Research in Otolaryngology》2011,12(2):127-140
Sounds are filtered in a spatial- and frequency-dependent manner by the head and pinna, giving rise to the acoustical cues to sound source location. These spectral and temporal transformations are dependent on the physical dimensions of the head and pinna. Therefore, the magnitudes of binaural sound location cues—the interaural time (ITD) and level (ILD) differences—are hypothesized to systematically increase, while the lower frequency limit of substantial ILD production is expected to decrease, due to the increase in head and pinna size during development. The frequency ranges of the monaural spectral notch cues to source elevation are also expected to decrease. This hypothesis was tested here by measuring directional transfer functions (DTFs), the directional components of head-related transfer functions, and the linear dimensions of the head and pinnae for chinchillas from birth through adulthood. Dimensions of the head and pinna increased by factors of 1.8 and 2.42, respectively, reaching adult values by ~6 weeks. From the DTFs, the ITDs, ILDs, and spectral shape cues were computed. Maximum ITDs increased by a factor of 1.75, from ~160 μs at birth (P0-1, first postnatal day) to 280 μs in adults. ILDs depended on source location and frequency, exhibiting a shift in the frequency range of substantial ILD (>10 dB) from higher to lower frequencies with increasing head and pinna size. Similar trends were observed for the spectral notch frequencies, which ranged from 14.7–33.4 kHz at P0-1 to 5.3–19.1 kHz in adults. The development of the spectral notch cues, the spatial- and frequency-dependent distributions of DTF amplitude gain, acoustic directionality, maximum gain, and the acoustic axis were systematically related to the dimensions of the head and pinnae. The dimensions of the head and pinnae in the chinchilla, as well as the acoustical properties associated with them, are mature by ~6 weeks.
16.
Julia Kerstin Maier David McAlpine Georg M. Klump Daniel Pressnitzer 《Journal of the Association for Research in Otolaryngology》2010,11(2):319-328
In order to investigate whether performance in an auditory spatial discrimination task depends on the prevailing listening conditions, we tested the ability of human listeners to discriminate target sounds with and without presentation of a preceding sound. Target sounds were either lateralized by means of interaural time differences (ITDs) of +400, 0, or −400 μs or interaural level differences (ILDs) with the same subjective intracranial locations. The preceding sound was always lateralized by means of ITD. This allowed for testing whether the effects of a preceding sound were location- or cue-specific. Preceding sounds and target sounds were randomly paired across trials. Listeners had to discriminate whether they perceived the target sounds as coming from the same or different intracranial locations. Finally, stimuli were selected so that, without any preceding sound, ITD and ILD cues were equally discriminable at all target lateralizations. Stimuli were 800-Hz-wide, 400-ms duration bands of noise centered at 500 Hz, presented over headphones. The duration of the preceding sound was randomly selected from a uniform distribution spanning from 1 s to 2 s. Results show that discriminability of both binaural cues was improved for midline target positions when preceding sound and targets were co-located, whereas it was impaired when preceding sound and targets came from different positions. No effect of the preceding sound was found for left or right target positions. These results are compatible with a purely bottom–up mechanism based on adaptive coding of ITD around the midline that may be combined with top–down mechanisms to increase localization accuracy in realistic listening conditions.
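The target stimuli described above (800-Hz-wide, 400-ms noise bands centered at 500 Hz, lateralized by ITD) can be sketched as follows; the filter design, sampling rate, and function name are assumptions, not the authors' implementation.

```python
# Sketch of stimulus generation consistent with the description above.
import numpy as np
from scipy.signal import butter, lfilter

def itd_noise_band(fs=48000, dur_s=0.4, fc=500.0, bw=800.0, itd_s=400e-6, seed=0):
    """Return (left, right) channels of band-limited noise, right ear delayed by itd_s."""
    rng = np.random.default_rng(seed)
    n = int(dur_s * fs)
    noise = rng.standard_normal(n)
    low, high = max(fc - bw / 2, 20.0), fc + bw / 2          # 100-900 Hz for the values above
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    band = lfilter(b, a, noise)
    delay = int(round(itd_s * fs))                           # positive ITD: right ear lags
    right = np.concatenate([np.zeros(delay), band])[:n]
    return band, right

left, right = itd_noise_band(itd_s=400e-6)   # +400 us; typically heard toward the leading (left) ear
```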
17.
18.
Yang Li Tessa-Jonne F. Ropp Bradford J. May Eric D. Young 《Journal of the Association for Research in Otolaryngology》2015,16(4):487-505
Acoustic trauma damages the cochlea but secondarily modifies circuits of the central auditory system. Changes include decreases in inhibitory neurotransmitter systems, degeneration and rewiring of synaptic circuits, and changes in neural activity. Little is known about the consequences of these changes for the representation of complex sounds. Here, we show data from the dorsal cochlear nucleus (DCN) of rats with a moderate high-frequency hearing loss following acoustic trauma. Single-neuron recording was used to estimate the organization of neurons’ receptive fields, the balance of inhibition and excitation, and the representation of the spectra of complex broadband stimuli. The complex stimuli had random spectral shapes (RSSs), and the responses were fit with a model that allows the quality of the representation and its degree of linearity to be estimated. Tone response maps of DCN neurons in rat are like those in other species investigated previously, suggesting the same general organization of this nucleus. Following acoustic trauma, abnormal response types appeared. These can be interpreted as reflecting degraded tuning in auditory nerve fibers plus loss of inhibitory inputs in DCN. Abnormal types are somewhat more prevalent at later times (103–376 days) following the exposure, but not significantly so. Inhibition became weaker in post-trauma neurons that retained inhibitory responses but also disappeared in many neurons. The quality of the representation of spectral shape, measured by sensitivity to the spectral shapes of RSS stimuli, was decreased following trauma; in fact, neurons with abnormal response types responded mainly to overall stimulus level, and not spectral shape.
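A hedged sketch of the linear RSS fit mentioned above: a neuron's firing rates across RSS stimuli are regressed on the per-bin stimulus levels, and the fitted weights estimate spectral sensitivity. Matrix shapes, names, and the synthetic data are illustrative assumptions, not the authors' model code.

```python
# Illustrative first-order (linear) fit of rate responses to random spectral shapes.
import numpy as np

def fit_rss_weights(spectra_db, rates):
    """Least-squares fit: rate ~ w0 + spectra_db . w  (spectra_db: stimuli x bins)."""
    X = np.column_stack([np.ones(len(rates)), spectra_db])
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
    return coef[0], coef[1:]            # baseline rate, spectral weights (spikes/s per dB)

# Example: 200 hypothetical RSS stimuli over 16 frequency bins, one "tuned" bin
rng = np.random.default_rng(1)
spectra = rng.normal(0.0, 10.0, size=(200, 16))          # per-bin levels in dB re overall level
true_w = np.zeros(16); true_w[5] = 2.0                   # sensitive to bin 5 only
rates = 50.0 + spectra @ true_w + rng.normal(0.0, 5.0, 200)
baseline, weights = fit_rss_weights(spectra, rates)
print(round(baseline), np.round(weights, 1))             # ~50 and a weight near 2 at bin 5
```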
19.
Amanda M. Lauer Sean J. Slee Bradford J. May 《Journal of the Association for Research in Otolaryngology》2011,12(5):633-645
The acoustic basis of auditory spatial acuity was investigated in CBA/129 mice by relating patterns of behavioral errors to directional features of the head-related transfer function (HRTF). Behavioral performance was assessed by training the mice to lick a water spout during sound presentations from a “safe” location and to suppress the response during presentations from “warning” locations. Minimum audible angles (MAAs) were determined by delivering the safe and warning sounds from different locations in the inter-aural horizontal and median vertical planes. HRTFs were measured at the same locations by implanting a miniature microphone and recording the gain of sound energy near the ear drum relative to free field. Mice produced an average MAA of 31° when sound sources were located in the horizontal plane. Acoustic measures indicated that binaural inter-aural level differences (ILDs) and monaural spectral features of the HRTF change systematically with horizontal location and therefore may have contributed to the accuracy of behavioral performance. Subsequent manipulations of the auditory stimuli and the directional properties of the ear produced errors that suggest the mice primarily relied on ILD cues when discriminating changes in azimuth. The MAA increased beyond 80° when the importance of ILD cues was minimized by testing in the median vertical plane. Although acoustic measures demonstrated a less robust effect of vertical location on spectral features of the HRTF, this poor performance provides further evidence for the insensitivity to spectral cues that was noted during behavioral testing in the horizontal plane.
20.
Filip Asp Elina Mäki-Torkko Eva Karltorp Henrik Harder Leif Hergils Gunnar Eskilsson 《International journal of audiology》2015,54(2):77-88
Objective: To study the development of the bilateral benefit in children using bilateral cochlear implants by measurements of speech recognition and sound localization. Design: Bilateral and unilateral speech recognition in quiet, in multi-source noise, and horizontal sound localization was measured at three occasions during a two-year period, without controlling for age or implant experience. Longitudinal and cross-sectional analyses were performed. Results were compared to cross-sectional data from children with normal hearing. Study sample: Seventy-eight children aged 5.1–11.9 years, with a mean bilateral cochlear implant experience of 3.3 years and a mean age of 7.8 years, at inclusion in the study. Thirty children with normal hearing aged 4.8–9.0 years provided normative data. Results: For children with cochlear implants, bilateral and unilateral speech recognition in quiet was comparable whereas a bilateral benefit for speech recognition in noise and sound localization was found at all three test occasions. Absolute performance was lower than in children with normal hearing. Early bilateral implantation facilitated sound localization. Conclusions: A bilateral benefit for speech recognition in noise and sound localization continues to exist over time for children with bilateral cochlear implants, but no relative improvement is found after three years of bilateral cochlear implant experience.