Similar documents
Found 20 similar documents (search time: 15 ms)
1.
Listeners with normal hearing (NH) and with sensorineural hearing impairment (HI) were tested on a speech-recognition task requiring across-frequency integration of temporal speech information. Listeners with NH correctly identified a majority of key words in everyday sentences when presented with a synchronous pair of speech-modulated tones at 750 and 3,000 Hz. They could tolerate small amounts (12.5 ms) of across-frequency asynchrony, but performance fell as the delay between bands was increased to 100 ms. Listeners with HI performed more poorly than those with NH when presented with synchronous across-frequency information. Further, performance of listeners with HI fell more steeply as a function of asynchrony than that of their NH counterparts. These results suggest that listeners with HI have particular difficulty comparing and effectively processing temporal speech information at different frequencies. The increased influence of asynchrony indicates that these listeners are especially hindered by slight disruptions in across-frequency information, which implies a less robust comparison mechanism. The results could not be attributed to differences in signal or sensation level, or in listener age, but instead appear to be related to the degree of hearing loss. This across-frequency deficit is unlikely to be attributable to known processing difficulties and may exist in addition to them.

2.
Abstract

The contribution of temporal fine-structure (TFS) cues to consonant identification was compared for seven young adults with normal hearing and five young adults with mild-to-moderate hearing loss and flat, high- or low-frequency gently sloping audiograms. Nonsense syllables were degraded using two schemes (PM: phase modulation; FM: frequency modulation) designed to remove temporal envelope (E) cues while preserving TFS cues in 16 0.35-octave-wide frequency bands spanning the range of 80 to 8020 Hz. For both schemes, hearing-impaired listeners performed significantly above chance level (PM: 36%; FM: 31%; chance level: 6.25%), but more poorly than normal-hearing listeners (PM: 80%; FM: 65%). Three hearing-impaired listeners showed normal or near-normal reception of nasality information. These results indicate that for mild to moderate levels of hearing loss, cochlear damage reduces but does not abolish the ability to use the TFS cues of speech. The deficits observed for both schemes in hearing-impaired listeners suggest involvement of factors other than only poor reconstruction of temporal envelope from temporal fine structure.


3.
This study investigates covariation of perception and production of vowel contrasts in speakers who use cochlear implants and identification of those contrasts by listeners with normal hearing. Formant measures were made of seven vowel pairs whose members are neighboring in acoustic space. The vowels were produced in carrier phrases by 8 postlingually deafened adults, before and after they received their cochlear implants (CI). Improvements in a speaker's production and perception of a given vowel contrast and normally hearing listeners' identification of that contrast in masking noise tended to occur together. Specifically, speakers who produced vowel pairs with reduced contrast in the pre-CI condition (measured by separation in the acoustic vowel space) and who showed improvement in their perception of these contrasts post-CI (measured with a phoneme identification test) were found to have enhanced production contrasts post-CI in many cases. These enhanced production contrasts were associated, in turn, with enhanced masked word recognition, as measured from responses of a group of 10 normally hearing listeners. The results support the view that restoring self-hearing allows a speaker to adjust articulatory routines to ensure sufficient perceptual contrast for listeners.

4.
Listeners were asked to detect amplitude modulation (AM) of a target (or signal) carrier that was presented in isolation or in the presence of an additional (masker) carrier. The signal was modulated at a rate of 10 Hz, and the masker was unmodulated or was modulated at a rate of 2, 10, or 40 Hz. Nine listeners had normal hearing, 4 had a bilateral hearing loss, and 4 had a unilateral hearing loss; those with a unilateral loss were tested in both ears. The listeners with a hearing loss had normal hearing at 1 kHz and a 30- to 40-dB loss at 4 kHz. The carrier frequencies were 984 and 3952 Hz. In one set of conditions, the lower frequency carrier was the signal and the higher frequency carrier was the masker. In the other set, the reverse was true. For the impaired ears, the carriers were presented at 70 dB SPL. For the normal ears, either the carriers were both presented at 70 dB SPL or the higher frequency carrier was reduced to 40 dB SPL to simulate the lower sensation level experienced by the impaired ears. There was considerable individual variability in the results, and there was no clear effect of hearing loss. These results suggest that a mild, presumably cochlear hearing loss does not affect the ability to process AM in one frequency region in the presence of competing AM from another region.

5.
Temporal, spectral, and combined temporal-spectral resolution of hearing was assessed by recording masked hearing thresholds. The masker was an octave band noise. Spectral resolution was assessed by introducing a spectral gap of half an octave bandwidth in the masker. Temporal resolution was assessed with a 50-msec temporal gap. The spectral and temporal gaps were used separately or simultaneously. Normal-hearing and hearing-impaired subjects participated. For each masking condition, the subjects were tested at masker levels of 50, 60, 70, and 80 dB SPL and at test-tone frequencies of 0.5, 1, 2, and 4 kHz. Normal-hearing subjects showed reduced masking with spectral and temporal gaps. The combination of a spectral and a temporal gap reduced masking further. The release of masking was dependent upon the masker level. Hearing-impaired subjects showed less release of masking than normal-hearing subjects. The degree of hearing impairment was inversely related to release of masking. Reliability of the test procedure was assessed.

6.
OBJECTIVE: This study was designed to measure the ability of listeners with and without sensorineural hearing loss to discriminate silent gaps between noise band markers of different frequencies presented in an anechoic and a reverberant listening environment. DESIGN: A two-interval, two-alternative, forced-choice paradigm was used to measure gap discrimination ability for six listeners with normal hearing and six listeners with sensorineural hearing impairment. Marker stimuli were narrow bands of noise centered at frequencies from 500 to 7000 Hz. The center frequency of the leading marker was held constant at 2000 Hz and the center frequency of the trailing marker was varied randomly across runs. Stimuli were presented in two virtual listening environments (anechoic and reverberant). The listeners' task was to indicate which interval contained the marker pair separated by the larger silent gap. Gap discrimination was measured as a function of the center frequency of the trailing marker and as a function of listening environment. RESULTS: Gap discrimination thresholds (msec) varied as a function of the center frequency of the trailing marker. As the trailing marker frequency increased above and decreased below the leading marker frequency (2000 Hz), gap thresholds increased significantly. Hearing loss and listening environment did not have a significant effect on gap discrimination thresholds. Analysis of the gap discrimination functions revealed significantly steeper slopes for trailing marker frequencies below 2000 Hz than for trailing marker frequencies above 2000 Hz. A possible age effect was observed in the data, and significant correlations were found between age and function slopes for several conditions. CONCLUSIONS: Gap discrimination becomes more difficult as the frequency disparity between leading and trailing noise bands increases. This pattern of results occurs independent of hearing loss but may be influenced by listener age.

7.
Exposure to modified speech has been shown to benefit children with language-learning impairments with respect to their language skills (M. M. Merzenich et al., 1998; P. Tallal et al., 1996). In the study by Tallal and colleagues, the speech modification consisted of both slowing down and amplifying fast, transitional elements of speech. In this study, we examined whether the benefits of modified speech could be extended to provide intelligibility improvements for children with severe-to-profound hearing impairment who wear sensory aids. In addition, the separate effects on intelligibility of slowing down and amplifying speech were evaluated. Two groups of listeners were employed: 8 severe-to-profoundly hearing-impaired children and 5 children with normal hearing. Four speech-processing conditions were tested: (1) natural, unprocessed speech; (2) envelope-amplified speech; (3) slowed speech; and (4) both slowed and envelope-amplified speech. For each condition, three types of speech materials were used: words in sentences, isolated words, and syllable contrasts. To degrade the performance of the normal-hearing children, all testing was completed with a noise background. Results from the hearing-impaired children showed that all varieties of modified speech yielded either equivalent or poorer intelligibility than unprocessed speech. For words in sentences and isolated words, the slowing-down of speech had no effect on intelligibility scores whereas envelope amplification, both alone and combined with slowing-down, yielded significantly lower scores. Intelligibility results from normal-hearing children listening in noise were somewhat similar to those from hearing-impaired children. For isolated words, the slowing-down of speech had no effect on intelligibility whereas envelope amplification degraded intelligibility. For both subject groups, speech processing had no statistically significant effect on syllable discrimination. 
In summary, without extensive exposure to the speech processing conditions, children with impaired hearing and children with normal hearing listening in noise received no intelligibility advantage from either slowed speech or envelope-amplified speech.

8.
9.
Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution. 
One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.

10.
PURPOSE: To determine if listeners with normal hearing and listeners with sensorineural hearing loss give different perceptual weightings to cues for stop consonant place of articulation in noise versus reverberation listening conditions. METHOD: Nine listeners with normal hearing (23-28 years of age) and 10 listeners with sensorineural hearing loss (31-79 years of age, median 66 years) participated. The listeners were asked to label the consonantal portion of synthetic CV stimuli as either /p/ or /t/. Two cues were varied: (a) the amplitude of the spectral peak in the F4/F5 frequency region of the burst was varied across a 30-dB range relative to the adjacent vowel peak amplitude in the same frequency region; (b) F2/F3 formant transition onset frequencies were either appropriate for /p/ or /t/, or neutral for the labial/alveolar contrast. RESULTS: Weightings of relative amplitude and transition cues for voiceless stop consonants depended on the listening condition (quiet, noise, or reverberation), hearing loss, and age of listener. The effects of age combined with hearing loss reduced the perceptual integration of cues, particularly in reverberation. The effects of hearing loss reduced the effectiveness of both cues, notably relative amplitude in reverberation. CONCLUSIONS: Reverberation and noise conditions have different perceptual effects. Hearing loss and age may have different, separable effects.

11.
PURPOSE: To compare the effects of speech presentation level on acceptance of noise in listeners with normal and impaired hearing. METHOD: Participants were listeners with normal (n = 24) and impaired (n = 46) hearing who were matched for conventional acceptable noise level (ANL). ANL was then measured at 8 fixed speech presentation levels (40, 45, 50, 55, 60, 65, 70, and 75 dB HL) to determine if global ANL (i.e., ANL averaged across speech presentation levels) or ANL growth (i.e., the slope of the ANL function) varied between groups. RESULTS: The effects of speech presentation level on acceptance of noise were evaluated using global ANLs and ANL growth. Results showed global ANL and ANL growth were not significantly different for listeners with normal and impaired hearing, and neither ANL measure was related to pure-tone average for listeners with impaired hearing. Additionally, conventional ANLs were significantly correlated with both global ANLs and ANL growth for all listeners. CONCLUSION: These results indicate that the effects of speech presentation level on acceptance of noise are not related to hearing sensitivity. These results further indicate that a listener's conventional ANL was related to his or her global ANL and ANL growth.
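The two ANL summary measures described above are simple to compute: global ANL is the mean of the per-level ANLs, and ANL growth is the slope of a least-squares line fitted to ANL as a function of speech presentation level. A minimal sketch in Python; the listener data below are hypothetical, not taken from the study:

```python
import numpy as np

def anl_summary(speech_levels_db, anls_db):
    """Summarize an ANL-by-level function.

    Global ANL is the mean ANL across speech presentation levels;
    ANL growth is the slope (dB of ANL per dB of speech level) of a
    least-squares line fitted to the ANL function.
    """
    levels = np.asarray(speech_levels_db, dtype=float)
    anls = np.asarray(anls_db, dtype=float)
    slope, _intercept = np.polyfit(levels, anls, 1)
    return {"global_anl": float(anls.mean()), "anl_growth": float(slope)}

# Hypothetical listener whose ANL rises 0.3 dB per dB of speech level
# across the study's eight fixed presentation levels (dB HL).
levels = [40, 45, 50, 55, 60, 65, 70, 75]
anls = [2.0, 3.5, 5.0, 6.5, 8.0, 9.5, 11.0, 12.5]
print(anl_summary(levels, anls))
```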

12.
The purpose of this study was to determine the effects of reverberation and noise on the precedence effect in listeners with hearing loss. Lag burst thresholds (LBTs) for 4-ms noise bursts were obtained for 2 groups of participants: impaired hearing and normal hearing. Data were collected in reverberant and anechoic environments in quiet and noise, at sensation levels of 10, 20, 30, 40, and 50 dB. Results indicated a significant effect of reverberation on LBTs for both participant groups. LBTs increased with sensation level in the reverberant environment and decreased with increasing sensation level in the anechoic environment. There was no effect of hearing loss on LBTs. When the change in LBT due to noise was compared, the effect of noise depended on group and environment, with a greater effect of noise on the performance of listeners with impaired hearing. It is likely that the ability to fuse direct sounds and early reflections is degraded in listeners with impaired hearing and that this contributes to the difficulties experienced by these listeners in reverberation and noise.

13.
Music perception with temporal cues in acoustic and electric hearing
Kong YY, Cruz R, Jones JA, Zeng FG. Ear and Hearing, 2004, 25(2): 173-185
OBJECTIVE: The first specific aim of the present study is to compare the ability of normal-hearing and cochlear implant listeners to use temporal cues in three music perception tasks: tempo discrimination, rhythmic pattern identification, and melody identification. The second aim is to identify the relative contribution of temporal and spectral cues to melody recognition in acoustic and electric hearing. DESIGN: Both normal-hearing and cochlear implant listeners participated in the experiments. Tempo discrimination was measured in a two-interval forced-choice procedure in which subjects were asked to choose the faster tempo at four standard tempo conditions (60, 80, 100, and 120 beats per minute). For rhythmic pattern identification, seven different rhythmic patterns were created and subjects were asked to read and choose the musical notation displayed on the screen that corresponded to the rhythmic pattern presented. Melody identification was evaluated with two sets of 12 familiar melodies. One set contained both rhythm and melody information (rhythm condition), whereas the other set contained only melody information (no-rhythm condition). Melody stimuli were also processed to extract the slowly varying temporal envelope from 1, 2, 4, 8, 16, 32, and 64 frequency bands, to create cochlear implant simulations. Subjects listened to a melody and had to respond by choosing one of the 12 names corresponding to the melodies displayed on a computer screen. RESULTS: In tempo discrimination, the cochlear implant listeners performed similarly to the normal-hearing listeners with rate discrimination difference limens obtained at 4-6 beats per minute. In rhythmic pattern identification, the cochlear implant listeners performed 5-25 percentage points poorer than the normal-hearing listeners. The normal-hearing listeners achieved perfect scores in melody identification with and without the rhythmic cues. 
However, the cochlear implant listeners performed significantly poorer than the normal-hearing listeners in both rhythm and no-rhythm conditions. The simulation results from normal-hearing listeners showed a relatively high level of performance for all numbers of frequency bands in the rhythm condition but required as many as 32 bands in the no-rhythm condition. CONCLUSIONS: Cochlear-implant listeners performed normally in tempo discrimination, but significantly poorer than normal-hearing listeners in rhythmic pattern identification and melody recognition. While both temporal (rhythmic) and spectral (pitch) cues contribute to melody recognition, cochlear-implant listeners mostly relied on the rhythmic cues for melody recognition. Without the rhythmic cues, high spectral resolution with as many as 32 bands was needed for melody recognition for normal-hearing listeners. This result indicates that the present cochlear implants provide sufficient spectral cues to support speech recognition in quiet, but they are not adequate to support music perception. Increasing the number of functional channels and improved encoding of the fine structure information are necessary to improve music perception for cochlear implant listeners.
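The cochlear implant simulations described above follow the standard noise-vocoder recipe: band-pass analysis into N frequency bands, extraction of each band's slowly varying temporal envelope, and modulation of band-limited noise carriers by those envelopes. A minimal sketch, assuming logarithmic band spacing and a 160-Hz envelope cutoff; both are illustrative choices, not necessarily the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(x, fs, n_bands=8, f_lo=80.0, f_hi=6000.0, env_cutoff=160.0):
    """Minimal noise-excited vocoder (cochlear-implant simulation).

    Splits x into n_bands logarithmically spaced analysis bands,
    extracts each band's slowly varying envelope (rectify + low-pass),
    and uses it to modulate band-limited noise in the same band.
    """
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x), dtype=float)
    # Low-pass filter used to smooth the rectified band signal.
    env_sos = butter(2, env_cutoff / (fs / 2), btype="low", output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                          btype="band", output="sos")
        band = sosfilt(band_sos, x)
        env = sosfiltfilt(env_sos, np.abs(band))     # temporal envelope
        carrier = sosfilt(band_sos, rng.standard_normal(len(x)))
        out += np.clip(env, 0.0, None) * carrier     # envelope x noise
    return out
```

Increasing `n_bands` from 1 toward 32 reproduces the spectral-resolution manipulation: rhythm survives even one band, while melody without rhythm needs many.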

14.
In this study, the performance of 48 listeners with normal hearing was compared to the performance of 46 listeners with documented hearing loss. Various conditions of directional and omnidirectional hearing aid use were studied. The results indicated that when the noise around a listener was stationary, a first- or second-order directional microphone allowed a group of hearing-impaired listeners with mild-to-moderate, bilateral, sensorineural hearing loss to perform similarly to normal hearing listeners on a speech-in-noise task (i.e., they required the same signal-to-noise ratio to achieve 50% understanding). When the noise source was moving around the listener, only the second-order (three-microphone) system set to an adaptive directional response (where the polar pattern changes due to the change in noise location) allowed a group of hearing-impaired individuals with mild-to-moderate sensorineural hearing loss to perform similarly to young, normal-hearing individuals.

15.
OBJECTIVES: In general, auditory cortex on the left side of the brain is specialized for processing of acoustic stimuli with complex temporal structure, including speech, and the right hemisphere is primary for spectral processing and favors tonal stimuli and music. This asymmetry in processing is further emphasized when hemisphere-favored stimuli are presented to the contralateral ear. The purpose of the first experiment is to further investigate the properties that dictate lateralized processing of auditory stimuli by ear and the relationship between auditory task and stimulus type. Next, in the case of early unilateral profound hearing loss, it is not clear what compensation may exist for the loss of function of one ear and, consequently, for reduced access to functions primarily performed in the opposite hemisphere. The purpose of experiment 2 is to determine if any compensation for loss of function is seen in persons with early unilateral deafness. DESIGN: Experiment 1: Gap detection thresholds were determined in 30 right-handed listeners with normal hearing using wide-band noise markers (temporally complex) and 400 and 4000 Hz pure tones, presented individually to the left and right ears. Experiment 2: The same procedure was administered to listeners with early-onset, severe-to-profound unilateral deafness (seven left-ear deaf and five right-ear deaf) in the hearing ear alone. RESULTS: A significant right ear advantage was found for gap detection threshold using noise markers, and a smaller left ear advantage was found for tonal stimuli. Listeners with unilateral deafness demonstrated that the hearing ear, left or right, performed in a manner similar to listeners with normal hearing.
CONCLUSIONS: Results indicate that (1) gap marker, more than task, was the salient feature in determining laterality of processing in this experiment, (2) the two ears have distinct processing capacity based on stimulus type, and (3) compensation for loss is not apparent in persons with congenital unilateral deafness.

16.
Several authors have evaluated consonant-to-vowel ratio (CVR) enhancement as a means to improve speech recognition in listeners with hearing impairment, with the intention of incorporating this approach into emerging amplification technology. Unfortunately, most previous studies have enhanced CVRs by increasing consonant energy, thus possibly confounding CVR effects with consonant audibility. In this study, we held consonant audibility constant by reducing vowel transition and steady-state energy rather than increasing consonant energy. Performance-by-intensity (PI) functions were obtained for recognition of voiceless stop consonants (/p/, /t/, /k/) presented in isolation (burst and aspiration digitally separated from the vowel) and for consonant-vowel syllables, with re-addition of the vowel /a/. There were three CVR conditions: normal CVR, vowel reduction by 6 dB, and vowel reduction by 12 dB. Testing was conducted in broadband noise fixed at 70 dB SPL and at 85 dB SPL. Six adults with sensorineural hearing impairment and 2 adults with normal hearing served as listeners. Results indicated that CVR enhancement did not improve identification performance when consonant audibility was held constant, except at the higher noise level for one listener with hearing impairment. The re-addition of the vowel energy to the isolated consonant did, however, produce large and significant improvements in phoneme identification.
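Enhancing CVR by vowel reduction, as in this study, amounts to scaling only the vocalic samples by 10^(-dB/20), leaving the consonant untouched and therefore equally audible. A minimal sketch; the segmentation point is assumed known in advance, and the function name is ours, not the authors':

```python
import numpy as np

def reduce_vowel_level(syllable, fs, vowel_onset_s, reduction_db):
    """Attenuate the vocalic portion of a consonant-vowel syllable.

    Samples before vowel_onset_s (the consonant) are left untouched;
    everything after is scaled by 10**(-reduction_db/20), so a 6-dB
    reduction roughly halves the vowel amplitude and 12 dB quarters it.
    """
    y = np.array(syllable, dtype=float)
    onset = int(round(vowel_onset_s * fs))
    y[onset:] *= 10.0 ** (-reduction_db / 20.0)
    return y
```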

17.
Objective: To examine the impact of visual cues, speech materials, age, and listening condition on the frequency bandwidth necessary for optimizing speech recognition performance. Design: Using a randomized repeated-measures design, speech recognition performance was assessed with four speech perception tests presented in quiet and in noise under 13 low-pass (LP) filter conditions and in multiple presentation modalities. Participants' performance data were fitted with a Boltzmann function to determine optimal performance (10% below the performance achieved in the full-bandwidth (FBW) condition). Study sample: Thirty adults (18–63 years) and thirty children (7–12 years) with normal hearing. Results: Visual cues significantly reduced the bandwidth required for optimizing speech recognition performance for listeners. The type of speech material significantly affected the bandwidth required for optimizing performance. Both groups required significantly less bandwidth in quiet, although children required significantly more than adults. The widest bandwidth required was for the phoneme detection task in noise, where children required a bandwidth of 7399 Hz and adults 6674 Hz. Conclusions: Listeners require significantly less bandwidth for optimizing speech recognition performance when assessed using sentence materials with visual cues. That is, the amount of bandwidth required systematically decreased as a function of increased contextual, linguistic, and visual content.
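Fitting a Boltzmann function to performance-by-bandwidth data and reading off the cutoff that falls 10 percentage points below full-bandwidth performance can be sketched as follows. The scores below are hypothetical, the parameterization is one common form of the Boltzmann sigmoid, and the widest-band score stands in for the FBW condition; none of these details are claimed to match the authors' exact procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann sigmoid: lower asymptote a1, upper asymptote a2,
    midpoint x0, slope parameter dx (same units as x)."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

# Hypothetical percent-correct scores at low-pass filter cutoffs (Hz);
# these illustrate the method and are not the study's data.
cutoffs = np.array([500, 1000, 2000, 3000, 4000, 6000, 8000], dtype=float)
scores = np.array([10.0, 30.0, 62.0, 80.0, 88.0, 93.0, 95.0])

params, _cov = curve_fit(boltzmann, cutoffs, scores,
                         p0=[0.0, 100.0, 2000.0, 500.0])

# Criterion: lowest cutoff whose predicted score is within 10 percentage
# points of the widest-band score (used here as a proxy for FBW).
target = scores[-1] - 10.0
grid = np.linspace(cutoffs[0], cutoffs[-1], 10_000)
optimal_cutoff = grid[np.argmax(boltzmann(grid, *params) >= target)]
print(round(float(optimal_cutoff)))
```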

18.
Souza PE, Kitch V. Ear and Hearing, 2001, 22(2): 112-119
OBJECTIVE: The purpose of this study was to examine the importance of amplitude envelope cues to sentence identification for aged listeners. We also examined the effect of increasing alterations (i.e., compression ratio) and amount of available frequency content (i.e., number of channels) for this population. DESIGN: Thirty-six listeners were classified according to their age (35 or younger versus 65 and older) and hearing status (normal hearing versus hearing impaired). Within each hearing status, mean hearing thresholds for the young and aged listeners were matched as closely as possible through 4 kHz to control for sensitivity differences across age, and all listeners passed a cognitive screening battery. Accuracy of synthetic sentence identification was measured using stimuli processed to restrict spectral information. Performance was measured as a function of age, hearing status, amount of spectral information, and degradation of the amplitude envelope (using fast-acting compression with compression ratios ranging from 1:1 to 5:1). RESULTS: Mean identification scores decreased significantly with increasing age, the presence of hearing loss, the removal of spectral information, and with increasing distortion of the amplitude envelope (i.e., higher compression ratios). There was a consistent performance gap between young and aged listeners, regardless of the magnitude of change to the amplitude envelope. This suggests that some cue other than amplitude envelope variations is inaccessible to the aged listeners. CONCLUSIONS: Although aged listeners performed more poorly overall, they did not show greater susceptibility to alterations in amplitude-envelope cues, such as those produced by fast-acting amplitude compression systems. It is therefore unlikely that compression parameters such as attack and release time or compression ratio would need to be differentially programmed for aged listeners.
Instead, the data suggest two possibilities: aged listeners have difficulty accessing the fine-structure temporal cues present in speech, and/or performance is degraded by age-related loss of function at a central processing level.

19.
The two aims of this study were (a) to determine the perceptual weight given formant transition and relative amplitude information for labeling fricative place of articulation and (b) to determine the extent of integration of relative amplitude and formant transition cues. Seven listeners with normal hearing and 7 listeners with sensorineural hearing loss participated. The listeners were asked to label the fricatives of synthetic consonant-vowel stimuli as either /s/ or /ʃ/. Across the stimuli, 3 cues were varied: (a) the amplitude of the spectral peak in the 2500-Hz range of the frication relative to the adjacent vowel peak amplitude in the same frequency region; (b) the frication duration, which was either 50 or 140 ms; and (c) the second formant transition onset frequency, which was varied from 1200 to 1800 Hz. An analysis of variance model was used to determine weightings for the relative amplitude and transition cues for the different frication duration conditions. A 30-ms gap of silence was inserted between the frication and vocalic portions of the stimuli, with the intent that a temporal separation of frication and transition information might affect how the cues were integrated. The weighting given transition or relative amplitude differed between the listening groups and depended on frication duration. Use of the transition cue was most affected by insertion of the silent gap. Listeners with hearing loss had smaller interaction terms for the cues than listeners with normal hearing, suggesting less integration of cues.

20.
An experiment was conducted to determine the effects of completely-in-the-canal (CIC) hearing aids on auditory localization performance. Six normal-hearing listeners localized a 750-ms broadband noise from loudspeakers ranging in azimuth from -180 degrees to +180 degrees and in elevation from -75 degrees to +90 degrees. Independent variables included the presence or absence of the hearing aid and the elevation of the source. Dependent measures included azimuth error, elevation error, and the percentage of trials resulting in a front-back confusion. The findings indicate a statistically significant decrement in localization acuity, both in azimuth and elevation, occasioned by the wearing of CIC hearing aids. However, the magnitude of this decrement was small compared to those typically caused by other ear-canal occlusions, such as earplugs, and would probably not engender mislocalization of real-world sounds.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号