Similar Documents
A total of 20 similar documents were retrieved.
1.
Sound localization in the horizontal (azimuth) plane relies mainly on interaural time differences (ITDs) and interaural level differences (ILDs). Both are distorted in listeners with acquired unilateral conductive hearing loss (UCHL), reducing their ability to localize sound. Several studies demonstrated that UCHL listeners had some ability to localize sound in azimuth. To test whether listeners with acquired UCHL use strongly perturbed binaural difference cues, we measured localization while they listened with a sound-attenuating earmuff over their impaired ear. We also tested the potential use of monaural pinna-induced spectral-shape cues for localization in azimuth and elevation, by filling the cavities of the pinna of their better-hearing ear with a mould. These conditions were tested while a bone-conduction device (BCD), fitted to all UCHL listeners in order to provide hearing from the impaired side, was turned off. We varied stimulus presentation levels to investigate whether UCHL listeners were using sound level as an azimuth cue. Furthermore, we examined whether horizontal sound-localization abilities improved when listeners used their BCD. Ten control listeners without hearing loss demonstrated a significant decrease in their localization abilities when they listened with a monaural plug and muff. In 4/13 UCHL listeners we observed good horizontal localization of 65 dB SPL broadband noises with their BCD turned off. Localization was strongly impaired when the impaired ear was covered with the muff. The mould in the good ear of listeners with UCHL deteriorated the localization of broadband sounds presented at 45 dB SPL. This demonstrates that they used pinna cues to localize sounds presented at low levels. Our data demonstrate that UCHL listeners have learned to adapt their localization strategies under a wide variety of hearing conditions and that sound-localization abilities improved with their BCD turned on.
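For readers less familiar with the two binaural difference cues discussed above, the sketch below shows one common way to estimate an ITD (from the lag of the interaural cross-correlation) and an ILD (from the RMS level ratio) of a binaural recording. It is a minimal Python/NumPy illustration with a simulated signal; the sampling rate, the 300 μs delay, and the 6 dB attenuation are arbitrary assumptions, not values from the study.

```python
import numpy as np

fs = 48_000                      # sampling rate in Hz (assumed for illustration)
rng = np.random.default_rng(0)

# Simulate a broadband noise burst that reaches the left ear ~300 microseconds
# earlier and 6 dB louder than the right ear (a source off to the left).
n = 4800
noise = rng.standard_normal(n)
delay = int(round(300e-6 * fs))                   # ~14 samples at 48 kHz
left = np.concatenate([noise, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), noise * 10 ** (-6 / 20)])

# ITD: lag of the maximum of the interaural cross-correlation.
xcorr = np.correlate(right, left, mode="full")
lags = np.arange(-(len(left) - 1), len(right))
itd = lags[np.argmax(xcorr)] / fs                 # positive => left ear leads

# ILD: broadband RMS level difference in dB.
ild = 20 * np.log10(np.sqrt(np.mean(left ** 2)) / np.sqrt(np.mean(right ** 2)))

print(f"estimated ITD = {itd * 1e6:.0f} us, ILD = {ild:.1f} dB")
```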

2.
Several studies have attributed deterioration of sound localization in the horizontal (azimuth) and vertical (elevation) planes to an age-related decline in binaural processing and high-frequency hearing loss (HFHL). The latter might underlie decreased elevation performance of older adults. However, as the pinnae keep growing throughout life, we hypothesized that larger ears might enable older adults to localize sounds in elevation on the basis of lower frequencies, thus (partially) compensating their HFHL. In addition, it is not clear whether sound localization has already matured at a very young age, when the body is still growing, and the binaural and monaural sound-localization cues change accordingly. The present study investigated sound-localization performance of children (7–11 years), young adults (20–34 years), and older adults (63–80 years) under open-loop conditions in the two-dimensional frontal hemifield. We studied the effect of age-related hearing loss and ear size on localization responses to brief broadband sound bursts with different bandwidths. We found similar localization abilities in azimuth for all listeners, including the older adults with HFHL. Sound localization in elevation for the children and young adult listeners with smaller ears improved when stimuli contained frequencies above 7 kHz. Subjects with larger ears could also judge the elevation of sound sources restricted to lower frequency content. Despite increasing ear size, sound localization in elevation deteriorated in older adults with HFHL. We conclude that the binaural localization cues are successfully used well into later stages of life, but that pinna growth cannot compensate the more profound HFHL with age.

3.
Sound localization performance is degraded at low stimulus intensities in humans, and while the sound localization ability of humans and macaque monkeys appears similar, the effects of intensity have yet to be described in the macaque. We therefore defined the ability of four macaque monkeys to localize broadband noise stimuli at four different absolute intensities and six different starting locations in azimuth. Results indicate that performance was poorest at the lowest intensity tested (25 dB SPL), intermediate at 35 dB SPL, and equivalent at 55 and 75 dB SPL. Localization performance was best at 0 degrees (directly in front of the animal), was systematically degraded at more peripheral locations (+/-30 degrees and 90 degrees), and was worst at a location directly behind the animal. Reaction times showed the same trends, increasing with decreasing stimulus intensity, even under conditions where the monkey discriminated the location change with the same performance. These results indicate that sound level as well as position profoundly influences sound localization ability.

4.
The auditory spatial acuity of the domestic cat in the interaural horizontal plane was examined using broadband noise and nine pure-tone stimuli ranging in frequency from 0.5 to 32 kHz. Acuity in the median vertical plane was also examined using broadband noise and three pure tones of frequencies 2, 8 and 16 kHz. Minimum audible angles (MAAs) for a reference source directly in front of an animal were measured in the horizontal plane for five cats and in the vertical plane for four. The smallest MAAs measured were those for the noise stimulus, for which MAAs in the horizontal and vertical planes were similar in magnitude. Horizontal plane MAAs for low-frequency tones were smaller than those for high, and the pattern of MAA change with frequency was consistent with the use of interaural phase and sound pressure level difference cues to localize low- and high-frequency tones, respectively. Three of the four cats trained on the vertical plane MAA task did not achieve criterion performance for any of the three pure tones, and the MAAs obtained from the fourth cat at each frequency were relatively large. Vertical plane performance was consistent with the use of spectral transformation cues to discern the elevation of a complex stimulus.
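A minimum audible angle is conventionally read off a psychometric function as the separation yielding a criterion level of performance (often 75% correct in a two-alternative task). The sketch below illustrates that step with hypothetical percent-correct data; the scores and the 75% criterion are assumptions for illustration, not values from this study.

```python
import numpy as np

# Hypothetical percent-correct scores for left/right discrimination at
# several angular separations (degrees); chance performance is 50%.
angles = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
percent_correct = np.array([52.0, 58.0, 70.0, 88.0, 97.0])

criterion = 75.0  # percent correct defining threshold (a common convention)

# Interpolate the angle (on a log2 axis) at which performance reaches criterion.
log2_maa = np.interp(criterion, percent_correct, np.log2(angles))
maa_deg = 2.0 ** log2_maa

print(f"interpolated MAA ~ {maa_deg:.1f} degrees at {criterion:.0f}% correct")
```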

5.
OBJECTIVES: The main purpose of the study was to assess the ability of adults with bilateral cochlear implants to localize noise and speech signals in the horizontal plane. A second objective was to measure the change in localization performance in these adults between approximately 5 and 15 mo after activation. A third objective was to evaluate the relative roles of interaural level difference (ILD) and interaural temporal difference (ITD) cues in localization by these subjects. DESIGN: Twenty-two adults, all postlingually deafened and all bilaterally fitted with MED-EL COMBI 40+ cochlear implants, were tested in a modified source identification task. Subjects were tested individually in an anechoic chamber, which contained an array of 43 numbered loudspeakers extending from -90 degrees to +90 degrees azimuth. On each trial, a 200-msec signal (either a noise burst or a speech sample) was presented from one of 17 active loudspeakers (span: +/-80 degrees), and the subject had to identify which source from the 43 loudspeakers in the array produced the signal. Subjects were tested in three conditions: left device only active, right device only active, and both devices active. Twelve of the 22 subjects were retested approximately 10 mo after their first test. In Experiment 2, the spectral content and rise-decay time of the noise stimulus were manipulated. RESULTS: The relationship between source azimuth and response azimuth was characterized in terms of the adjusted constant error (C). (1) With both devices active, C for the noise stimulus varied from 8.1 degrees to 43.4 degrees (mean: 24.1 degrees). By comparison, C for a group of listeners with normal hearing ranged from 3.5 degrees to 7.8 degrees (mean: 5.6 degrees). When subjects listened in unilateral mode (with one device turned off), C was at or near chance (50.5 degrees) in all cases. However, when considering unilateral performance on each subject's better side, average C for the speech stimulus was 47.9 degrees, which was significantly (but only slightly) better than chance. (2) When listening bilaterally, error score was significantly lower for the speech stimulus (mean C = 21.5 degrees) than for the noise stimulus (mean C = 24.1 degrees). (3) As a group, the 12 subjects who were retested 10 mo after their first visit showed no significant improvement in localization performance during the intervening time. However, two subjects who performed very poorly during their first visit showed dramatic improvement (error scores were halved) over the intervening time. In Experiment 2, removing the high-frequency content of noise signals resulted in significantly poorer performance, but removing the low-frequency content or increasing the rise-decay time did not have an effect. CONCLUSIONS: In agreement with previously reported data, subjects with bilateral cochlear implants localized sounds in the horizontal plane remarkably well when using both of their devices, but they generally could not localize sounds when either device was deactivated. They could localize the speech signal with slightly, but significantly better accuracy than the noise, possibly due to spectral differences in the signals, to the availability of envelope ITD cues with the speech but not the noise signal, or to more central factors related to the social salience of speech signals. For most subjects the remarkable ability to localize sounds has stabilized by 5 mo after activation. However, for some subjects who perform poorly initially, there can be substantial improvement past 5 mo. Results from Experiment 2 suggest that ILD cues underlie localization ability for noise signals, and that ITD cues do not contribute.
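The "adjusted constant error" reported above is the authors' own summary statistic, and its exact adjustment is not given in the abstract. The sketch below therefore only illustrates the basic ingredients such a measure is built from — the signed constant (bias) error, the RMS error, and the variable error of source-identification responses — using hypothetical trial data, not data from the study.

```python
import numpy as np

# Hypothetical trial-by-trial data: true loudspeaker azimuths and the
# azimuths of the loudspeakers chosen by the listener (degrees).
source_az = np.array([-80, -60, -40, -20, 0, 20, 40, 60, 80], dtype=float)
response_az = np.array([-70, -65, -30, -25, 5, 10, 50, 55, 75], dtype=float)

errors = response_az - source_az

constant_error = errors.mean()                 # signed bias toward one side
rms_error = np.sqrt(np.mean(errors ** 2))      # overall error magnitude
variable_error = np.sqrt(np.mean((errors - constant_error) ** 2))

print(f"constant error = {constant_error:+.1f} deg")
print(f"RMS error      = {rms_error:.1f} deg")
print(f"variable error = {variable_error:.1f} deg")
```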

6.
Pinna-based spectral cues for sound localization in cat.
The directional dependence of the transfer function from free field plane waves to a point near the tympanic membrane (TM) was measured in anesthetized domestic cats. A probe tube microphone was placed approximately 3 mm from the TM from beneath the head in order to keep the pinna intact. Transfer functions were computed as the ratio of the spectrum of a click recorded near the TM to the spectrum of the click in the free field. We analyze the transfer functions in three frequency ranges: low frequencies (less than 5 kHz), where interaural level differences vary smoothly with azimuth; midfrequencies (5-18 kHz), where a prominent spectral notch is observed; and high frequencies (greater than 18 kHz), where the transfer functions vary greatly with source location. Because no two source directions produce the same transfer function, the spectrum of a broadband sound at the TM could serve as a sound localization cue for both elevation and azimuth. In particular, we show that source direction is uniquely determined, for source directions in front of the cat, from the frequencies of the midfrequency spectral notches in the two ears. The validity of the transfer functions as measures of the acoustic input to the auditory system is considered in terms of models of sound propagation in the ear canal.
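The abstract describes the transfer function as the ratio of the click spectrum recorded near the TM to the free-field click spectrum. A minimal numerical sketch of that ratio, and of reading off the deepest mid-frequency (5-18 kHz) notch, is given below; the two "recordings" are synthetic placeholders, and the sampling rate and stand-in ear filter are arbitrary assumptions.

```python
import numpy as np

fs = 100_000                       # sampling rate in Hz (assumed)
n = 2048
rng = np.random.default_rng(1)

# Placeholder recordings: a free-field click, and a click "measured" near the
# tympanic membrane, here simulated by passing the click through an arbitrary
# short filter so that the spectral ratio is non-trivial.
freefield = np.zeros(n)
freefield[100] = 1.0
ear_filter = rng.standard_normal(64) * np.hanning(64)   # stand-in ear filter
eardrum = np.convolve(freefield, ear_filter)[:n]

freqs = np.fft.rfftfreq(n, d=1 / fs)
tf = np.fft.rfft(eardrum) / np.fft.rfft(freefield)       # spectral ratio
gain_db = 20 * np.log10(np.abs(tf) + 1e-12)

# Mid-frequency notch: the minimum of the gain between 5 and 18 kHz.
band = (freqs >= 5_000) & (freqs <= 18_000)
notch_freq = freqs[band][np.argmin(gain_db[band])]
print(f"deepest notch in the 5-18 kHz band: {notch_freq / 1000:.1f} kHz")
```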

7.
Head-related transfer functions of the Rhesus monkey
Head-related transfer functions (HRTFs) are direction-specific acoustic filters formed by the head, the pinnae and the ear canals. They can be used to assess acoustical cues available for sound localization and to construct virtual auditory environments. We measured the HRTFs of three anesthetized Rhesus monkeys (Macaca mulatta) from 591 locations in the frontal hemisphere ranging from -90 degrees (left) to 90 degrees (right) in azimuth and -60 degrees (down) to 90 degrees (up) in elevation for frequencies between 0.5 and 15 kHz. Acoustic validation of the HRTFs shows good agreement between free field and virtual sound sources. Monaural spectra exhibit deep notches at frequencies above 9 kHz, providing putative cues for elevation discrimination. Interaural level differences (ILDs) and interaural time differences (ITDs) generally vary monotonically with azimuth between 0.5 and 8 kHz, suggesting that these two cues can be used to discriminate azimuthal position. Comparison with published subsets of HRTFs from squirrel monkeys (Saimiri sciureus) shows good agreement. Comparison with published human HRTFs from the frontal hemisphere demonstrates overall similarity in the patterns of ILD and ITD, suggesting that the Rhesus monkey is a good acoustic model for these two sound localization cues in humans. Finally, the measured ITDs in the horizontal plane agree well between -40 degrees and 40 degrees in azimuth with those calculated from a spherical head model with a radius of 52 mm, one-half the interaural distance of the monkey.
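The final comparison above is against a spherical head model with a 52 mm radius. The abstract does not name the exact formulation, but a standard choice for a distant source in the frontal horizontal plane is the Woodworth approximation ITD(θ) = (a/c)(θ + sin θ), sketched below with the radius quoted in the abstract.

```python
import numpy as np

a = 0.052        # head radius in metres (half the interaural distance)
c = 343.0        # speed of sound in m/s

def woodworth_itd(azimuth_deg, radius=a, speed=c):
    """ITD (seconds) predicted by a rigid spherical head for a distant
    source at the given azimuth (0 deg = straight ahead)."""
    theta = np.radians(azimuth_deg)
    return (radius / speed) * (theta + np.sin(theta))

for az in (10, 20, 30, 40):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:6.1f} us")
```

With a = 52 mm this formula gives roughly 200 μs at 40 degrees azimuth, which is the range over which the abstract reports good agreement with the measured ITDs.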

8.
The head-related transfer function (HRTF) of the cat adds directionally dependent energy minima to the amplitude spectrum of complex sounds. These spectral notches are a principal cue for the localization of sound source elevation. Physiological evidence suggests that the dorsal cochlear nucleus (DCN) plays a critical role in the brainstem processing of this directional feature. Type O units in the central nucleus of the inferior colliculus (ICC) are a primary target of ascending DCN projections and, therefore, may represent midbrain specializations for the auditory processing of spectral cues for sound localization. Behavioral studies confirm a loss of sound orientation accuracy when DCN projections to the inferior colliculus are surgically lesioned. This study used simple analogs of HRTF notches to characterize single-unit response patterns in the ICC of decerebrate cats that may contribute to the directional sensitivity of the brain's spectral processing pathways. Manipulations of notch frequency and bandwidth demonstrated frequency-specific excitatory responses that have the capacity to encode HRTF-based cues for sound source location. These response patterns were limited to type O units in the ICC and have not been observed for the projection neurons of the DCN. The unique spectral integration properties of type O units suggest that DCN influences are transformed into a more selective representation of sound source location by a local convergence of wideband excitatory and frequency-tuned inhibitory inputs.
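The "simple analogs of HRTF notches" used as stimuli are essentially broadband noise with a band-stop (notch) filter of adjustable centre frequency and bandwidth. One way to synthesize such a stimulus is sketched below with SciPy; the sampling rate, notch parameters, and filter order are illustrative assumptions rather than the study's actual stimulus parameters.

```python
import numpy as np
from scipy import signal

fs = 100_000                 # sampling rate in Hz (assumed)
dur = 0.2                    # stimulus duration in seconds
notch_center = 10_000.0      # notch centre frequency in Hz (illustrative)
notch_bw = 2_000.0           # notch bandwidth in Hz (illustrative)

rng = np.random.default_rng(2)
noise = rng.standard_normal(int(fs * dur))

# Band-stop Butterworth filter carving the notch out of broadband noise.
low = notch_center - notch_bw / 2
high = notch_center + notch_bw / 2
sos = signal.butter(4, [low, high], btype="bandstop", fs=fs, output="sos")
notched_noise = signal.sosfiltfilt(sos, noise)

# Quick check of the long-term spectrum at the notch centre frequency.
freqs, psd = signal.welch(notched_noise, fs=fs, nperseg=4096)
idx = np.argmin(np.abs(freqs - notch_center))
depth_db = 10 * np.log10(psd[idx] / psd.max())
print(f"PSD at {notch_center / 1000:.0f} kHz is {depth_db:.1f} dB re the spectral peak")
```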

9.
The ability of chinchillas to localize sound was examined behaviorally using a conditioned avoidance procedure in which the animals were trained to discriminate left from right sound sources. Their minimum audible angle was 15.6° for 100-ms broadband noise, making them one of the more accurate rodents, although they are not as accurate as primates and carnivores. Thresholds obtained for filtered noise stimuli demonstrated that chinchillas are equally accurate in localizing either low- or high-frequency noise. Further, they are able to use both interaural phase-difference and interaural intensity-difference cues, as demonstrated by their ability to localize both low- and high-frequency pure tones. Finally, analysis of the chinchilla retina supports the hypothesis that the use of auditory localization to direct the eyes to sound sources played a role in the evolution of auditory spatial perception.

10.
Although localization of sound in elevation is believed to depend on spectral cues, it has been shown with human listeners that the temporal features of sound can also greatly affect localization performance. Of particular interest is a phenomenon known as the negative level effect, which describes the deterioration of localization ability in elevation with increasing sound level and is observed only with impulsive or short-duration sound. The present study uses the gaze positions of domestic cats as measures of perceived locations of sound targets varying in azimuth and elevation. The effects of sound level on localization in terms of accuracy, precision, and response latency were tested for sound with different temporal features, such as a click train, a single click, a continuous sound that had the same frequency spectrum as the click train, and speech segments. In agreement with previous human studies, negative level effects were only observed with click-like stimuli and only in elevation. In fact, localization of speech sounds in elevation benefited significantly when the sound level increased. Our findings indicate that the temporal continuity of a sound can affect the frequency analysis performed by the auditory system, and that the variation in the frequency spectrum contained in speech sound does not interfere much with the spectral coding for its location in elevation.

11.
Frequency transformation by the external ears provides the spectral cues for localization of broadband sounds in the vertical plane. When human subjects listen to spectrally-impoverished narrowband sounds presented in a free field, the perceived locations vary with the centre frequency and are largely independent of the actual source locations. The present study explored the substrate of spatial illusion by examining the responses of cortical neurons to narrowband stimuli. Single-unit responses were recorded in area A2 of anaesthetized cats. Broadband noise bursts were presented at 14 locations in the vertical median plane, from 60 degrees below the front horizon, up and over the head, to 20 degrees below the rear horizon. Narrowband (1/6-oct) noise bursts were presented at +80 degrees elevation. An artificial neural network was trained to recognize the spike patterns elicited by broadband noise and, thereby, to register the spike patterns with sound-source elevation. When the trained network was presented with neural responses elicited by narrowband noise, the elevation estimated by the neural network varied with the centre frequency of the narrowband stimuli. Consistent with psychophysical results in humans, the locations associated with a given centre frequency could be predicted by comparing the stimulus spectrum with the directional transfer functions of the cat's external ear. The results support the hypothesis that full spike patterns (including spike counts and spike timing) of cortical neurons code information about sound location and that the auditory cortical neurons play a pivotal role in localization behaviour.
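As a toy illustration of the decoding idea described above — a network trained on cortical spike patterns so that it can report sound-source elevation — the sketch below fits a small scikit-learn classifier to simulated spike-count and first-spike-latency features. The simulated responses and this particular classifier are stand-ins chosen for illustration; they are not the recordings or the network used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

elevations = np.arange(-60, 81, 20)          # candidate elevations (deg)
n_trials, n_units = 40, 12                   # trials per elevation, "neurons"

# Simulate spike-pattern features: each unit's spike count and first-spike
# latency depend weakly (and noisily) on source elevation.
X, y = [], []
for label, elev in enumerate(elevations):
    rates = 5 + 3 * np.cos(np.radians(elev) + np.arange(n_units))
    counts = rng.poisson(rates, size=(n_trials, n_units))
    latencies = rng.normal(15 + 0.02 * elev, 2.0, size=(n_trials, n_units))
    X.append(np.hstack([counts, latencies]))
    y.append(np.full(n_trials, label))
X, y = np.vstack(X), np.concatenate(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
clf.fit(X_train, y_train)
print(f"decoded-elevation accuracy on held-out trials: {clf.score(X_test, y_test):.2f}")
```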

12.
Interaural time differences (ITDs) can be used to localize sounds in the horizontal plane. ITDs can be extracted from either the fine structure of low-frequency sounds or from the envelopes of high-frequency sounds. Studies of the latter have included stimuli with periodic envelopes like amplitude-modulated tones or transposed stimuli, and high-pass filtered Gaussian noises. Here, four experiments are presented investigating the perceptual relevance of ITD cues in synthetic and recorded “rustling” sounds. Both share the broad long-term power spectrum with Gaussian noise but provide more pronounced envelope fluctuations than Gaussian noise, quantified by an increased waveform fourth moment, W. The current data show that the JNDs in ITD for band-pass rustling sounds tended to improve with increasing W and with increasing bandwidth when the sounds were band limited. In contrast, no influence of W on JND was observed for broadband sounds, apparently because of listeners' sensitivity to ITD in low-frequency fine structure, present in the broadband sounds. Second, it is shown that for high-frequency rustling sounds ITD JNDs can be as low as 30 μs. The third result was that the amount of dominance for ITD extraction of low frequencies decreases systematically with increasing amount of envelope fluctuations. Finally, it is shown that despite the exceptionally good envelope ITD sensitivity evident with high-frequency rustling sounds, minimum audible angles of both synthetic and recorded high-frequency rustling sounds in virtual acoustic space are still best when the angular information is mediated by interaural level differences.
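The envelope-fluctuation statistic W used above, the waveform fourth moment, is the ratio of the fourth moment of the waveform to the square of its second moment; it is close to 3 for Gaussian noise and larger for sparser, "rustling"-like signals. A short numerical check with synthetic signals (not the study's stimuli) is given below.

```python
import numpy as np

def waveform_fourth_moment(x):
    """W = <x^4> / <x^2>^2 (equals 3 for Gaussian noise)."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

rng = np.random.default_rng(4)
n = 200_000

gaussian = rng.standard_normal(n)

# A crude "rustling-like" signal: Gaussian noise switched on only in sparse
# bursts, which exaggerates the envelope fluctuations and hence W.
gate = (rng.random(n) < 0.1).astype(float)
rustle = gaussian * gate

print(f"W(Gaussian noise)  = {waveform_fourth_moment(gaussian):.2f}")
print(f"W(sparse 'rustle') = {waveform_fourth_moment(rustle):.2f}")
```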

13.
Acta Oto-Laryngologica, 2012, 132(2): 263-266
Frequency transformation by the external ears provides the spectral cues for localization of broadband sounds in the vertical plane. When human subjects listen to spectrally-impoverished narrowband sounds presented in a free field, the perceived locations vary with the centre frequency and are largely independent of the actual source locations. The present study explored the substrate of spatial illusion by examining the responses of cortical neurons to narrowband stimuli. Single-unit responses were recorded in area A2 of anaesthetized cats. Broadband noise bursts were presented at 14 locations in the vertical median plane, from 60° below the front horizon, up and over the head, to 20° below the rear horizon. Narrowband (1/6-oct) noise bursts were presented at +80° elevation. An artificial neural network was trained to recognize the spike patterns elicited by broadband noise and, thereby, to register the spike patterns with sound-source elevation. When the trained network was presented with neural responses elicited by narrowband noise, the elevation estimated by the neural network varied with the centre frequency of the narrowband stimuli. Consistent with psychophysical results in humans, the locations associated with a given centre frequency could be predicted by comparing the stimulus spectrum with the directional transfer functions of the cat's external ear. The results support the hypothesis that full spike patterns (including spike counts and spike timing) of cortical neurons code information about sound location and that the auditory cortical neurons play a pivotal role in localization behaviour.

14.
There are three main cues to sound location: the interaural differences in time (ITD) and level (ILD) as well as the monaural spectral shape cues. These cues are generated by the spatial- and frequency-dependent filtering of propagating sound waves by the head and external ears. Although the chinchilla has been used for decades to study the anatomy, physiology, and psychophysics of audition, including binaural and spatial hearing, little is actually known about the sound pressure transformations by the head and pinnae and the resulting sound localization cues available to them. Here, we measured the directional transfer functions (DTFs), the directional components of the head-related transfer functions, for 9 adult chinchillas. The resulting localization cues were computed from the DTFs. In the frontal hemisphere, spectral notch cues were present for frequencies from ~6-18 kHz. In general, the frequency corresponding to the notch increased with increases in source elevation as well as in azimuth towards the ipsilateral ear. The ILDs demonstrated a strong correlation with source azimuth and frequency. The maximum ILDs were <10 dB for frequencies <5 kHz, and ranged from 10-30 dB for the frequencies >5 kHz. The maximum ITDs were dependent on frequency, yielding 236 μs at 4 kHz and 336 μs at 250 Hz. Removal of the pinnae eliminated the spectral notch cues, reduced the acoustic gain and the ILDs, altered the acoustic axis, and reduced the ITDs.

15.
An experiment was conducted to determine the effects of completely-in-the-canal (CIC) hearing aids on auditory localization performance. Six normal-hearing listeners localized a 750-ms broadband noise from loudspeakers ranging in azimuth from -180 degrees to +180 degrees and in elevation from -75 degrees to +90 degrees. Independent variables included the presence or absence of the hearing aid and the elevation of the source. Dependent measures included azimuth error, elevation error, and the percentage of trials resulting in a front-back confusion. The findings indicate a statistically significant decrement in localization acuity, both in azimuth and elevation, occasioned by the wearing of CIC hearing aids. However, the magnitude of this decrement was small compared to those typically caused by other ear-canal occlusions, such as earplugs, and would probably not engender mislocalization of real-world sounds.
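One scoring detail worth making concrete is the front-back confusion: a response is typically counted as a confusion when mirroring it about the interaural (left-right) axis brings it much closer to the target. The helper below is a generic illustration of that rule with an arbitrary 10-degree margin; it is not necessarily the criterion used in this experiment.

```python
import numpy as np

def is_front_back_confusion(target_az, response_az, margin_deg=10.0):
    """Return True if mirroring the response about the interaural axis
    brings it at least `margin_deg` closer to the target azimuth.
    Azimuths in degrees: 0 = front, +/-180 = rear, positive = right."""
    def wrap(a):                                  # wrap angle to (-180, 180]
        return (a + 180.0) % 360.0 - 180.0

    mirrored = wrap(180.0 - response_az)          # front/back reflection
    err_raw = abs(wrap(response_az - target_az))
    err_mirrored = abs(wrap(mirrored - target_az))
    return err_mirrored + margin_deg < err_raw

# Example: a source at 30 deg (front-right) reported at 150 deg (rear-right).
print(is_front_back_confusion(30.0, 150.0))   # True  (front-back confusion)
print(is_front_back_confusion(30.0, 40.0))    # False (ordinary azimuth error)
```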

16.
The ability to localize sound sources in space is of considerable importance to human safety and survival. Current scientific interest in improving safety standards, for example in air-traffic control, has therefore provided new momentum for investigating spatial hearing. This review deals with the nature and relative salience of the localization cues. Localization refers to judgements of both the direction and the distance of a sound source, but here we deal with direction only. We begin with a short introduction to the so-called duplex theory, which dates back to John William Strutt (later Lord Rayleigh). Its central idea is that sound localization is based on interaural time differences (ITD) at low frequencies and interaural level differences (ILD) at high frequencies. If the head remains stationary, however, neither a given ITD nor a given ILD can uniquely define the position of a sound source in space. On this theoretical basis, cones of confusion opening outward from each ear can be predicted: any source lying on the surface of such a cone produces the same interaural cues and is therefore projected ambiguously onto the interaural axis. Our limited ability to localize sound sources in the vertical median plane is another example of this ambiguity. By the end of the 19th century, scientists had already realized that occluding the pinna cavities degrades localization performance. Later advances in physics and signal theory made it clear that the pinnae provide an additional cue for spatial hearing, and that the outer ear, together with the head and upper torso, forms a sophisticated direction-dependent filter. The action of this filter is described mathematically by the so-called Anatomical Transfer Function (ATF). The spectral patterning imposed by the pinnae and head is most effective when the source has energy over a wide spectral range and contains frequencies above 6 kHz, i.e. wavelengths short enough to interact with the anatomical features of the outer ears. Findings further suggest that spectral patterns such as peaks and notches can also be exploited monaurally, although this requires a priori knowledge at the central auditory level of the corresponding transfer functions and of the relevant real-world sounds. Binaural spectral cues are more likely to play a major role in localization; they are derived from another transfer function, the so-called Interaural Transfer Function (ITF), which is the ratio of the ATFs at the two ears. The contributions of all these cues may still not prevent the listener from choosing the wrong direction, but matters are eased by allowing head movements: more than 60 years ago it was reasoned that small head movements could provide the information necessary to resolve most of the ambiguities, and recent studies have confirmed that this reasoning was accurate.
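The cones of confusion mentioned in this review follow directly from the fact that, for a simple spherical-head model, the interaural cues depend only on the lateral angle between the source direction and the median plane. The short demo below, which reuses a Woodworth-style approximation (an assumption, not a formula from the review), shows a front and a rear location producing the same ITD, while a change in elevation moves the source onto a different cone.

```python
import numpy as np

a, c = 0.0875, 343.0    # head radius (m) and speed of sound (m/s), assumed

def lateral_angle(azimuth_deg, elevation_deg):
    """Angle (rad) between the source direction and the median plane."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.arcsin(np.sin(az) * np.cos(el))

def spherical_head_itd(azimuth_deg, elevation_deg=0.0):
    """Woodworth-style ITD for a rigid sphere; it depends only on the
    lateral angle, which is what creates the cone of confusion."""
    lat = lateral_angle(azimuth_deg, elevation_deg)
    return (a / c) * (lat + np.sin(lat))

# (40, 0) and (140, 0) lie on the same cone and give identical ITDs;
# (40, 45) lies on a different cone and gives a smaller ITD.
for az, el in [(40, 0), (140, 0), (40, 45)]:
    itd_us = spherical_head_itd(az, el) * 1e6
    print(f"az = {az:4d} deg, el = {el:3d} deg  ->  ITD = {itd_us:6.1f} us")
```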

17.
Listeners with sensorineural hearing loss have well-documented elevated hearing thresholds; reduced auditory dynamic ranges; and reduced spectral (or frequency) resolution that may reduce speech intelligibility, especially in the presence of competing sounds. Amplification and amplitude compression partially compensate for elevated thresholds and reduced dynamic ranges but do not remediate the loss in spectral resolution. Spectral-enhancement processing algorithms have been developed that putatively compensate for decreased spectral resolution by increasing the spectral contrast, or the peak-to-trough ratio, of the speech spectrum. Several implementations have been proposed, with mixed success. It is unclear whether the lack of strong success was due to specific implementation parameters or whether the concept of spectral enhancement is fundamentally flawed. The goal of this study was to resolve this ambiguity by testing the effects of spectral enhancement on detection and discrimination of simple, well-defined signals. To that end, groups of normal-hearing (NH) and hearing-impaired (HI) participants completed two psychophysical experiments: detection and frequency discrimination of narrowband noise signals in the presence of broadband noise. The NH and HI listeners showed an improved ability to detect and discriminate narrowband increments when there were spectral decrements (notches) surrounding the narrowband signals. Spectral enhancements restored increment detection thresholds to within the normal range when both energy and spectral-profile cues were available to listeners. When only spectral-profile cues were available for frequency discrimination tasks, performance improved for HI listeners, but not all HI listeners reached normal levels of discrimination. These results suggest that listeners are able to take advantage of the local improvement in signal-to-noise ratio provided by the spectral decrements.
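Spectral-enhancement schemes of the kind evaluated here generally expand the excursions of a short-term magnitude spectrum around a smoothed version of itself, so that peaks are raised relative to troughs. The sketch below shows that core operation on a single toy spectral frame; the moving-average smoother, the expansion exponent, and the "vowel-like" spectrum are arbitrary illustrative choices, not the parameters of any published algorithm or of the processing tested in this study.

```python
import numpy as np

def enhance_spectral_contrast(mag_spectrum, smooth_bins=9, exponent=2.0):
    """Raise spectral peaks relative to troughs in one magnitude spectrum.

    The spectrum is divided by a local (moving-average) smoothed version,
    the resulting peak-to-trough excursions are expanded by `exponent`,
    and the smoothed envelope is then re-imposed."""
    mag = np.asarray(mag_spectrum, dtype=float)
    kernel = np.ones(smooth_bins) / smooth_bins
    smoothed = np.maximum(np.convolve(mag, kernel, mode="same"), 1e-12)
    contrast = mag / smoothed                  # >1 at peaks, <1 in troughs
    return smoothed * contrast ** exponent

# Toy "vowel-like" spectrum: two formant-like peaks on a sloping background.
freqs = np.linspace(0, 4000, 257)
spectrum = (1.0
            + 4.0 * np.exp(-((freqs - 700) / 80) ** 2)
            + 3.0 * np.exp(-((freqs - 1800) / 120) ** 2)) * np.exp(-freqs / 3000)

enhanced = enhance_spectral_contrast(spectrum)
ratio = (enhanced.max() / enhanced.min()) / (spectrum.max() / spectrum.min())
print(f"peak-to-trough ratio increased by a factor of {ratio:.1f}")
```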

18.
Steady-state potentials evoked in response to binaural, sinusoidally amplitude-modulated (AM) pure tones and broadband noise signals were recorded differentially from position F4 and the ipsilateral mastoid on the human scalp. The responses elicited by the AM stimuli were approximately periodic waveforms whose energy was predominantly at the modulation frequency of the stimulus. The magnitude of responses was between 0.1 and 4 microV for modulation frequencies between 2 and 400 Hz imposed on a 1-kHz carrier signal. The magnitude of the responses increased linearly with log modulation depth for low (4 Hz) and high (80 Hz) modulation rates. The response magnitude also increased linearly with the mean intensity of the sound for intensities up to 60 dB above the subject's pure tone threshold; at higher levels the response saturated. The relationship between response magnitude and modulation frequency (the modulation transfer function) was a lowpass function for both pure tone and broadband noise carrier signals. The modulation transfer functions were similar to those obtained from human psychophysical measurements where spectral cues are either unavailable or not used by the subject. The responses also contained a significant component at the second harmonic of the modulation frequency. The magnitude of this component was greatest at modulation rates between 5 and 20 Hz. The responses elicited by ipsilateral and contralateral monaural stimulation were approximately equal in magnitude, and binaural stimulation produced a potential 30% greater than the individual monaural responses. It is suggested that the evoked response represents the entrained neural activity to temporal amplitude fluctuations, and reflects the psychophysically measured performance of the auditory system for the detection and analysis of amplitude modulation.
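Two operations underlie the analysis described above: generating a sinusoidally amplitude-modulated carrier, and measuring the response magnitude at the modulation frequency and its second harmonic. Both are sketched below with made-up parameters and a simulated "response"; this is an illustration of the principle, not the study's recording or analysis pipeline.

```python
import numpy as np

fs = 10_000              # sampling rate (Hz), assumed
dur = 2.0                # seconds (integer number of modulation cycles)
fc, fm = 1_000.0, 40.0   # carrier and modulation frequencies (Hz)
m = 0.8                  # modulation depth

t = np.arange(int(fs * dur)) / fs
stimulus = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Simulated scalp "response": a component at fm, a smaller one at 2*fm,
# plus background noise (arbitrary units, purely illustrative).
rng = np.random.default_rng(5)
response = (1.0 * np.sin(2 * np.pi * fm * t + 0.5)
            + 0.3 * np.sin(2 * np.pi * 2 * fm * t + 1.0)
            + 2.0 * rng.standard_normal(t.size))

# Magnitude spectrum; with an integer number of modulation cycles in the
# analysis window, fm and 2*fm fall exactly on FFT bins.
spectrum = np.abs(np.fft.rfft(response)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
for target in (fm, 2 * fm):
    k = np.argmin(np.abs(freqs - target))
    print(f"response magnitude at {target:5.0f} Hz: {spectrum[k]:.2f} (arb. units)")
```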

19.
The location of a sound source is derived by the auditory system from spatial cues present in the signals at the two ears. These cues include interaural timing and level differences, as well as monaural spectral cues generated by the external ear. The values of these cues vary with individual differences in the shape and dimensions of the head and external ears. We have examined the neurophysiological consequences of these intersubject variations by recording the responses of neurons in ferret primary auditory cortex to virtual sound sources mimicking the animal's own ears or those of other ferrets. For most neurons, the structure of the spatial response fields changed significantly when acoustic cues measured from another animal were presented. This is consistent with the finding that humans localize less accurately when listening to virtual sounds from other subjects. To examine the role of experience in shaping the ability to localize sound, we have studied the behavioural consequences of altering binaural cues by chronically plugging one ear. Ferrets raised and tested with one ear plugged learned to localize as accurately as control animals, which is consistent with previous findings that the representation of auditory space in the midbrain can accommodate abnormal sensory cues during development. Adaptive changes in behaviour were also observed in adults, particularly if they were provided with regular practice in the localization task. Together, these findings suggest that the neural circuits responsible for sound localization can be recalibrated throughout life.

20.
OBJECTIVE: The main purpose of the study was to assess the ability of adults with unilateral cochlear implants to localize noise and speech signals in the horizontal plane. DESIGN: Six unilaterally implanted adults, all postlingually deafened and all fitted with MED-EL COMBI 40+ devices, were tested with a modified source identification task. Subjects were tested individually in an anechoic chamber, which contained an array of 43 numbered loudspeakers extending from -90 degrees to +90 degrees azimuth. On each trial, a 200-millisecond signal (either a noise burst or a speech sample) was presented from one of nine active loudspeakers, and the subject had to identify which source (from the 43 loudspeakers in the array) produced the signal. RESULTS: The relationship between source azimuth and response azimuth was characterized in terms of the adjusted constant error (C). C for three subjects was near chance (50.5 degrees), whereas C for the remaining three subjects was significantly better than chance (35 degrees to 44 degrees). By comparison, C for a group of normal-hearing listeners was 5.6 degrees. For two of the three subjects who performed better than chance, monaural cues were determined to be the basis for their localization performance. CONCLUSIONS: Some unilaterally implanted subjects can localize sounds at a better than chance level, apparently because they can learn to make use of subtle monaural cues based on frequency-dependent head-shadow effects. However, their performance is significantly poorer than that reported in previous studies of bilaterally implanted subjects, who are able to take advantage of binaural cues.
