Similar Documents
20 similar documents found.
1.
Older listeners' use of temporal cues altered by compression amplification.  (Cited by 2: 0 self-citations, 2 by others)
This study compared the ability of younger and older listeners to use temporal information in speech when that information was altered by compression amplification. Recognition of vowel-consonant-vowel syllables was measured for four groups of adult listeners (younger normal hearing, older normal hearing, younger hearing impaired, older hearing impaired). There were four conditions. Syllables were processed with wide-dynamic range compression (WDRC) amplification and with linear amplification. In each of those conditions, recognition was measured for syllables containing only temporal information and for syllables containing spectral and temporal information. Recognition of WDRC-amplified speech provided an estimate of the ability to use altered amplitude envelope cues. Syllables were presented with a high-frequency masker to minimize confounding differences in high-frequency sensitivity between the younger and older groups. Scores were lower for WDRC-amplified speech than for linearly amplified speech, and older listeners performed more poorly than younger listeners. When spectral information was unrestricted, the age-related decrement was similar for both amplification types. When spectral information was restricted for listeners with normal hearing, the age-related decrement was greater for WDRC-amplified speech than for linearly amplified speech. When spectral information was restricted for listeners with hearing loss, the age-related decrement was similar for both amplification types. Clinically, these results imply that when spectral cues are available (i.e., when the listener has adequate spectral resolution) older listeners can use WDRC hearing aids to the same extent as younger listeners. For older listeners without hearing loss, poorer scores for compression-amplified speech suggest an age-related deficit in temporal resolution.
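The contrast between WDRC and linear amplification can be sketched with a static input/output gain rule. This is a textbook-style sketch with illustrative parameters (ct, cr, g0), not the circuit used in the study:

```python
def wdrc_gain(l_in, ct=45.0, cr=3.0, g0=20.0):
    """Static WDRC gain rule (illustrative parameters, not the study's
    hearing aid): constant gain g0 below the compression threshold ct;
    above it, output grows only 1/cr dB per input dB, so gain shrinks
    and the amplitude envelope is flattened relative to linear
    amplification, which would apply g0 at every level."""
    if l_in <= ct:
        return g0
    return g0 - (l_in - ct) * (1.0 - 1.0 / cr)

low_gain = wdrc_gain(40.0)    # below threshold: full linear gain
high_gain = wdrc_gain(75.0)   # 30 dB above threshold: gain reduced by 20 dB
```

With a 3:1 ratio, a 30-dB rise in input produces only a 10-dB rise in output, which is how the compressor alters the temporal-envelope cues the study examines.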

2.
The goal of this study was to examine the ability to combine temporal-envelope information across frequency channels. Three areas were addressed: (a) the effects of hearing loss, (b) the effects of age, and (c) whether such effects increase with the number of frequency channels. Twenty adults aged 23-80 years with hearing loss ranging from mild to severe and a control group of 6 adults with normal hearing participated. Stimuli were vowel-consonant-vowel syllables. Consonant identification was measured for 5 conditions: (a) 1-channel temporal-envelope information (minimal spectral cues), (b) 2-channel, (c) 4-channel, (d) 8-channel, and (e) an unprocessed (maximal spectral cues) speech condition. Performance of listeners with normal hearing and listeners with hearing loss was similar in the 1-channel condition. Performance increased with the number of frequency channels in both groups; however, increasing the number of channels led to smaller improvements in consonant identification in listeners with hearing loss. Older listeners performed more poorly than younger listeners but did not have more difficulty combining temporal cues across channels than in a simple, 1-channel temporal task. Age was a significant predictor of nonsense syllable identification, whereas amount of hearing loss was not. The results support an age-related deficit in the use of temporal-envelope information, regardless of the number of channels.
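The N-channel temporal-envelope conditions rest on extracting a slowly varying amplitude envelope per frequency band. A minimal sketch of one band's envelope extraction (half-wave rectification plus a one-pole low-pass smoother; the 50-Hz cutoff and the synthetic modulated input are illustrative assumptions, not the study's processing chain):

```python
import math

def envelope(x, fs, fc=50.0):
    """Half-wave rectify, then smooth with a one-pole low-pass at fc Hz:
    a minimal stand-in for per-channel temporal-envelope extraction."""
    a = math.exp(-2.0 * math.pi * fc / fs)      # smoothing coefficient
    env, y = [], 0.0
    for s in x:
        y = a * y + (1.0 - a) * max(s, 0.0)     # rectify + leaky integrate
        env.append(y)
    return env

fs = 16000
# 1-kHz carrier amplitude-modulated at 4 Hz, a speech-like envelope rate
x = [(1.0 + math.cos(2.0 * math.pi * 4.0 * n / fs))
     * math.sin(2.0 * math.pi * 1000.0 * n / fs) for n in range(fs)]
env = envelope(x, fs)
```

In a channel vocoder, the envelope from each band would then modulate a noise or tone carrier in that band, discarding spectral fine structure.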

3.
(1) Objective: The objective of this study was to determine how a frequency compression hearing aid based on new concepts benefits listeners with severe-to-profound hearing impairment. (2) Methods: Clinical trials of this hearing aid were conducted with 11 severely-to-profoundly hearing-impaired listeners. These 11 wore the frequency compression hearing aid in their daily life and reported subjectively on its performance. Speech recognition tests with five of the listeners and audio-visual short-sentence recognition tests with three of them were also conducted. This hearing aid can adjust the fundamental frequency separately from the spectral envelope of the input speech and can adjust the frequency response by use of a post-processing digital filter. (3) Results: Five of the 11 listeners came to prefer this hearing aid in their daily life and are still wearing it. The speech recognition tests showed that speech recognition scores were not improved for all listeners, whereas the audio-visual short-sentence recognition tests showed that audio-visual recognition scores were improved for two listeners. (4) Conclusion: Some severely-to-profoundly hearing-impaired listeners ultimately preferred the frequency compression hearing aid. The results also suggest that the benefits of this hearing aid may be evaluated more accurately using not only speech but also visual materials.

4.
PURPOSE: To determine whether listeners with normal hearing and listeners with sensorineural hearing loss give different perceptual weightings to cues for stop consonant place of articulation in noise versus reverberation listening conditions. METHOD: Nine listeners with normal hearing (23-28 years of age) and 10 listeners with sensorineural hearing loss (31-79 years of age, median 66 years) participated. The listeners were asked to label the consonantal portion of synthetic CV stimuli as either /p/ or /t/. Two cues were varied: (a) the amplitude of the spectral peak in the F4/F5 frequency region of the burst was varied across a 30-dB range relative to the adjacent vowel peak amplitude in the same frequency region; (b) F2/F3 formant transition onset frequencies were either appropriate for /p/ or /t/, or neutral for the labial/alveolar contrast. RESULTS: Weightings of relative amplitude and transition cues for voiceless stop consonants depended on the listening condition (quiet, noise, or reverberation), hearing loss, and age of listener. The effects of age combined with hearing loss reduced the perceptual integration of cues, particularly in reverberation. The effects of hearing loss reduced the effectiveness of both cues, notably relative amplitude in reverberation. CONCLUSIONS: Reverberation and noise conditions have different perceptual effects. Hearing loss and age may have different, separable effects.

5.
Vowel identification is largely dependent on listeners’ access to the frequency of two or three peaks in the amplitude spectrum. Earlier work has demonstrated that, whereas normal-hearing listeners can identify harmonic complexes with vowel-like spectral shapes even with very little amplitude contrast between “formant” components and remaining harmonic components, listeners with hearing loss require greater amplitude differences. This is likely the result of the poor frequency resolution that often accompanies hearing loss. Here, we describe an additional acoustic dimension for emphasizing formant versus non-formant harmonics that may supplement amplitude contrast information. The purpose of this study was to determine whether listeners were able to identify “vowel-like” sounds using temporal (component phase) contrast, which may be less affected by cochlear loss than spectral cues, and whether overall identification improves when congruent temporal and spectral information are provided together. Five normal-hearing and five hearing-impaired listeners identified three vowels over many presentations. Harmonics representing formant peaks were varied in amplitude, phase, or a combination of both. In addition to requiring less amplitude contrast, normal-hearing listeners could accurately identify the sounds with less phase contrast than required by people with hearing loss. However, both normal-hearing and hearing-impaired groups demonstrated the ability to identify vowel-like sounds based solely on component phase shifts, with no amplitude contrast information, and they also showed improved performance when congruent phase and amplitude cues were combined. For nearly all listeners, the combination of spectral and temporal information improved identification in comparison to either dimension alone.
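Stimuli of this kind can be approximated as harmonic complexes in which designated "formant" harmonics are set apart by an amplitude boost, a component phase shift, or both. A sketch under assumed parameter names and values (the study's actual stimulus parameters are not given here):

```python
import math

def vowel_like(f0, n_harm, fs, dur, formant_harms,
               boost_db=0.0, phase=0.0):
    """Harmonic complex with equal-amplitude harmonics of f0, except
    the harmonics in formant_harms, which receive an amplitude boost
    (spectral cue), a component phase shift (temporal cue), or both."""
    g = 10.0 ** (boost_db / 20.0)
    out = []
    for n in range(int(dur * fs)):
        t = n / fs
        s = 0.0
        for h in range(1, n_harm + 1):
            amp = g if h in formant_harms else 1.0
            ph = phase if h in formant_harms else 0.0
            s += amp * math.sin(2.0 * math.pi * h * f0 * t + ph)
        out.append(s)
    return out

# Spectral cue only, then temporal (phase) cue only, on harmonics 4-5:
spec = vowel_like(100.0, 10, 8000, 0.05, {4, 5}, boost_db=6.0)
temp = vowel_like(100.0, 10, 8000, 0.05, {4, 5}, phase=math.pi / 2)
```

The phase-shifted version has the same long-term amplitude spectrum as a flat complex, so any identification it supports must rest on the temporal (waveform) contrast alone.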

6.
Previous studies have shown that altering the amplitude of a consonant in a specific frequency region relative to an adjacent vowel's amplitude in the same frequency region will affect listeners' perception of the consonant place of articulation. Hearing aids with single-channel, fast-acting wide dynamic range compression (WDRC) alter the overall consonant-vowel (CV) intensity ratio by increasing consonant energy. Perhaps one reason WDRC has had limited success in improving speech recognition performance is that the natural amplitude balances between consonant and vowel are altered in crucial frequency regions, thus disturbing the aforementioned amplitude cue for determining place of articulation. The current study investigated the effect of a WDRC circuit on listeners' perception of place of articulation when the relative amplitude of consonant and vowel was manipulated. The stimuli were a continuum of synthetic CV syllables stripped of all place cues except relative consonant amplitudes. Acoustic analysis of the CVs before and after hearing aid processing showed a predictable increase in high-frequency energy, particularly for the burst of the consonant. Alveolar bursts had more high-frequency energy than labial bursts. Twenty-five listeners with normal hearing and 5 listeners with sensorineural hearing loss labeled the consonant sound of the CV syllables in unaided form and after the syllables were recorded through a hearing aid with single-channel WDRC. There were significantly more listeners who were unable to produce a category boundary when labeling the aided stimuli. Of those listeners who did yield a category boundary for both aided and unaided stimuli, there were significantly more alveolar responses for the aided condition. These results can be explained by the acoustic analyses of the aided stimuli.
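The relative-amplitude cue at issue is the consonant-vowel intensity ratio within a frequency band, which fast-acting WDRC pushes toward 0 dB by boosting the weaker consonant. A sketch of the measurement itself (the synthetic segments are illustrative, not the study's stimuli):

```python
import math

def rms(seg):
    """Root-mean-square amplitude of a band-limited segment."""
    return math.sqrt(sum(s * s for s in seg) / len(seg))

def cv_ratio_db(consonant, vowel):
    """Consonant-vowel intensity ratio in dB within one frequency band;
    WDRC raises consonant energy and pushes this ratio toward 0 dB."""
    return 20.0 * math.log10(rms(consonant) / rms(vowel))

burst = [0.01 * math.sin(0.3 * n) for n in range(400)]   # weak burst
vowel = [0.10 * math.sin(0.3 * n) for n in range(1600)]  # stronger vowel
ratio = cv_ratio_db(burst, vowel)   # roughly -20 dB
```

Comparing this ratio before and after hearing-aid processing is the kind of acoustic analysis the abstract describes for the aided stimuli.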

7.
Listeners with sensorineural hearing loss have well-documented elevated hearing thresholds; reduced auditory dynamic ranges; and reduced spectral (or frequency) resolution that may reduce speech intelligibility, especially in the presence of competing sounds. Amplification and amplitude compression partially compensate for elevated thresholds and reduced dynamic ranges but do not remediate the loss in spectral resolution. Spectral-enhancement processing algorithms have been developed that putatively compensate for decreased spectral resolution by increasing the spectral contrast, or the peak-to-trough ratio, of the speech spectrum. Several implementations have been proposed, with mixed success. It is unclear whether the lack of strong success was due to specific implementation parameters or whether the concept of spectral enhancement is fundamentally flawed. The goal of this study was to resolve this ambiguity by testing the effects of spectral enhancement on detection and discrimination of simple, well-defined signals. To that end, groups of normal-hearing (NH) and hearing-impaired (HI) participants listened in 2 psychophysical experiments, including detection and frequency discrimination of narrowband noise signals in the presence of broadband noise. The NH and HI listeners showed an improved ability to detect and discriminate narrowband increments when there were spectral decrements (notches) surrounding the narrowband signals. Spectral enhancements restored increment detection thresholds to within the normal range when both energy and spectral-profile cues were available to listeners. When only spectral-profile cues were available for frequency discrimination tasks, performance improved for HI listeners, but not all HI listeners reached normal levels of discrimination. These results suggest that listeners are able to take advantage of the local improvement in signal-to-noise ratio provided by the spectral decrements.
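One simple way to sketch the idea of spectral-contrast enhancement, increasing the peak-to-trough ratio of a spectrum, is a power-law expansion of spectral magnitudes. The exponent beta is an illustrative assumption; the study's actual algorithms are not specified here:

```python
import math

def enhance_contrast(mag, beta=2.0):
    """Raise each spectral magnitude to a power beta > 1 (expanding
    peaks relative to troughs), then rescale to preserve total power."""
    out = [m ** beta for m in mag]
    scale = math.sqrt(sum(m * m for m in mag) / sum(o * o for o in out))
    return [o * scale for o in out]

spectrum = [1.0, 4.0, 1.0, 1.0, 4.0, 1.0]   # peak-to-trough ratio of 4 (12 dB)
enhanced = enhance_contrast(spectrum)        # ratio of 16 (24 dB)
```

With beta = 2 every peak-to-trough ratio is squared, i.e. doubled on a dB scale, which is the "increased spectral contrast" the algorithms aim for.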

8.
For signal detection and identification, the auditory system needs to integrate sound over time. It is frequently assumed that the quantity ultimately integrated is sound intensity and that the integrator is located centrally. However, we have recently shown that absolute thresholds are much better specified as the temporal integral of the pressure envelope than of intensity, and we proposed that the integrator resides in the auditory pathway's first synapse. We also suggested a physiologically plausible mechanism for its operation, which was ultimately derived from the specific rate of temporal integration, i.e., the decrease of threshold sound pressure levels with increasing duration. In listeners with sensorineural hearing losses, that rate seems reduced, but it is not fully understood why. Here we propose that in such listeners there may be an elevation in the baseline above which sound pressure is effective in driving the system, in addition to a reduction in sensitivity. We test this simple model using thresholds of cats to stimuli of differently shaped temporal envelopes and durations obtained before and after hearing loss. We show that thresholds, specified as the temporal integral of the effective pressure envelope, i.e., the envelope of the pressure exceeding the elevated baseline, behave almost exactly as the lower thresholds, specified as the temporal integral of the total pressure envelope before hearing loss. Thus, the mechanism of temporal integration is likely unchanged after hearing loss, but the effective portion of the stimulus is. Our model constitutes a successful alternative to the model currently favored to account for altered temporal integration in listeners with sensorineural hearing losses, viz., reduced peripheral compression. Our model does not seem to be at variance with physiological observations and it also qualitatively accounts for a number of phenomena observed in such listeners with suprathreshold stimuli.
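The model's central quantity, the temporal integral of the effective pressure envelope, counts only the portion of the envelope exceeding an elevated baseline. A numerical sketch (the units, sampling rate, and rectangular test envelope are illustrative):

```python
def integrated_pressure(env, fs, baseline=0.0):
    """Temporal integral of the effective pressure envelope: only the
    part of env exceeding `baseline` drives the integrator, per the
    elevated-baseline account of hearing loss."""
    return sum(max(e - baseline, 0.0) for e in env) / fs

fs = 1000
env = [1.0] * 100   # 100-ms rectangular envelope of unit pressure
normal = integrated_pressure(env, fs)          # whole envelope counts
impaired = integrated_pressure(env, fs, 0.5)   # only the part above baseline
```

With the baseline raised, the same stimulus contributes less to the integral, so a longer or more intense stimulus is needed to reach the (unchanged) integration criterion, mimicking the altered rate of temporal integration.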

9.
The two aims of this study were (a) to determine the perceptual weight given to formant transition and relative amplitude information in labeling fricative place of articulation and (b) to determine the extent of integration of relative amplitude and formant transition cues. Seven listeners with normal hearing and 7 listeners with sensorineural hearing loss participated. The listeners were asked to label the fricatives of synthetic consonant-vowel stimuli as either /s/ or /ʃ/. Across the stimuli, 3 cues were varied: (a) the amplitude of the spectral peak in the 2500-Hz range of the frication relative to the adjacent vowel peak amplitude in the same frequency region, (b) the frication duration, which was either 50 or 140 ms, and (c) the second formant transition onset frequency, which was varied from 1200 to 1800 Hz. An analysis of variance model was used to determine weightings for the relative amplitude and transition cues for the different frication duration conditions. A 30-ms gap of silence was inserted between the frication and vocalic portions of the stimuli, with the intent that a temporal separation of frication and transition information might affect how the cues were integrated. The weighting given to transition or relative amplitude differed between the listening groups and depended on frication duration. Use of the transition cue was most affected by insertion of the silent gap. Listeners with hearing loss had smaller interaction terms for the cues than listeners with normal hearing, suggesting less integration of cues.

10.
The purpose of this study was twofold: (a) to determine the extent to which 4-channel, slow-acting wide dynamic range amplitude compression (WDRC) can counteract the perceptual effects of reduced auditory dynamic range and (b) to examine the relation between objective measures of speech intelligibility and categorical ratings of speech quality for sentences processed with slow-acting WDRC. Multiband expansion was used to simulate the effects of elevated thresholds and loudness recruitment in normal hearing listeners. While some previous studies have shown that WDRC can improve both speech intelligibility and quality, others have found no benefit. The current experiment shows that moderate amounts of compression can provide a small but significant improvement in speech intelligibility, relative to linear amplification, for simulated-loss listeners with small dynamic ranges (i.e., flat, moderate hearing loss). This benefit was found for speech at conversational levels, both in quiet and in a background of babble. Simulated-loss listeners with large dynamic ranges (i.e., sloping, mild-to-moderate hearing loss) did not show any improvement. Comparison of speech intelligibility scores and subjective ratings of intelligibility showed that listeners with simulated hearing loss could accurately judge the overall intelligibility of speech. However, in all listeners, ratings of pleasantness decreased as the compression ratio increased. These findings suggest that subjective measures of speech quality should be used in conjunction with either objective or subjective measures of speech intelligibility to ensure that participant-selected hearing aid parameters optimize both comfort and intelligibility.
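Multiband expansion for simulating elevated thresholds and recruitment can be sketched as a per-band level mapping. This is a common textbook formulation, not necessarily the mapping used in the study; t_imp and ucl are illustrative parameters:

```python
def recruit_sim(l_in, t_imp, ucl=100.0):
    """Expansive level mapping for simulating recruitment in one band:
    levels at or below the simulated threshold t_imp become inaudible,
    and levels between t_imp and ucl are expanded (slope > 1) so that
    loudness catches up with normal at ucl."""
    if l_in <= t_imp:
        return 0.0
    return (l_in - t_imp) * ucl / (ucl - t_imp)

# A 75-dB input in a band with a simulated 50-dB threshold is mapped to
# 50 dB for the normal-hearing listener, reproducing the squeezed
# residual dynamic range of the simulated loss.
mapped = recruit_sim(75.0, 50.0)
```

Applying a different t_imp in each band yields the flat versus sloping simulated losses the study contrasts.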

11.
Hearing in noise is a challenge for all listeners, especially for those with hearing loss. This study compares cues used for detection of a low-frequency tone in noise by older listeners with and without hearing loss to those of younger listeners with normal hearing. Performance varies significantly across different reproducible, or “frozen,” masker waveforms. Analysis of these waveforms allows identification of the cues that are used for detection. This study included diotic (N0S0) and dichotic (N0Sπ) detection of a 500-Hz tone, with either narrowband or wideband masker waveforms. Both diotic and dichotic detection patterns (hit and false alarm rates) across the ensembles of noise maskers were predicted by envelope-slope cues, and diotic results were also predicted by energy cues. The relative importance of energy and envelope cues for diotic detection was explored with a roving-level paradigm that made energy cues unreliable. Most older listeners with normal hearing or mild hearing loss depended on envelope-related temporal cues, even for this low-frequency target. As hearing threshold at 500 Hz increased, the cues for diotic detection transitioned from envelope to energy cues. Diotic detection patterns for young listeners with normal hearing are best predicted by a model that combines temporal- and energy-related cues; in contrast, combining cues did not improve predictions for older listeners with or without hearing loss. Dichotic detection results for all groups of listeners were best predicted by interaural envelope cues, which significantly outperformed the classic cues based on interaural time and level differences or their optimal combination.

12.
Music perception with temporal cues in acoustic and electric hearing  (Cited by 1: 0 self-citations, 1 by others)
Kong YY, Cruz R, Jones JA, Zeng FG. Ear and Hearing. 2004;25(2):173-185.
OBJECTIVE: The first specific aim of the present study is to compare the ability of normal-hearing and cochlear implant listeners to use temporal cues in three music perception tasks: tempo discrimination, rhythmic pattern identification, and melody identification. The second aim is to identify the relative contribution of temporal and spectral cues to melody recognition in acoustic and electric hearing. DESIGN: Both normal-hearing and cochlear implant listeners participated in the experiments. Tempo discrimination was measured in a two-interval forced-choice procedure in which subjects were asked to choose the faster tempo at four standard tempo conditions (60, 80, 100, and 120 beats per minute). For rhythmic pattern identification, seven different rhythmic patterns were created and subjects were asked to read and choose the musical notation displayed on the screen that corresponded to the rhythmic pattern presented. Melody identification was evaluated with two sets of 12 familiar melodies. One set contained both rhythm and melody information (rhythm condition), whereas the other set contained only melody information (no-rhythm condition). Melody stimuli were also processed to extract the slowly varying temporal envelope from 1, 2, 4, 8, 16, 32, and 64 frequency bands, to create cochlear implant simulations. Subjects listened to a melody and had to respond by choosing one of the 12 names corresponding to the melodies displayed on a computer screen. RESULTS: In tempo discrimination, the cochlear implant listeners performed similarly to the normal-hearing listeners with rate discrimination difference limens obtained at 4-6 beats per minute. In rhythmic pattern identification, the cochlear implant listeners performed 5-25 percentage points poorer than the normal-hearing listeners. The normal-hearing listeners achieved perfect scores in melody identification with and without the rhythmic cues. 
However, the cochlear implant listeners performed significantly poorer than the normal-hearing listeners in both rhythm and no-rhythm conditions. The simulation results from normal-hearing listeners showed a relatively high level of performance for all numbers of frequency bands in the rhythm condition but required as many as 32 bands in the no-rhythm condition. CONCLUSIONS: Cochlear-implant listeners performed normally in tempo discrimination, but significantly poorer than normal-hearing listeners in rhythmic pattern identification and melody recognition. While both temporal (rhythmic) and spectral (pitch) cues contribute to melody recognition, cochlear-implant listeners mostly relied on the rhythmic cues for melody recognition. Without the rhythmic cues, high spectral resolution with as many as 32 bands was needed for melody recognition for normal-hearing listeners. This result indicates that the present cochlear implants provide sufficient spectral cues to support speech recognition in quiet, but they are not adequate to support music perception. Increasing the number of functional channels and improved encoding of the fine structure information are necessary to improve music perception for cochlear implant listeners.

13.
Derleth RP, Dau T, Kollmeier B. Hearing Research. 2001;159(1-2):132-149.
Three modifications of a psychoacoustically and physiologically motivated processing model [Dau et al., J. Acoust. Soc. Am. 102 (1997a) 2892-2905] are presented and tested. The modifications aim at simulating sensorineural hearing loss and incorporate a level-dependent peripheral compression whose properties are affected by hearing impairment. Model 1 realizes this difference by introducing for impaired listeners an instantaneous level-dependent expansion prior to the adaptation stage of the model. Model 2 and Model 3 realize a level-dependent compression with time constants of 5 and 15 ms, respectively, for normal hearing and a reduced compression for impaired hearing. In Model 2, the compression occurs after the envelope extraction stage, while in Model 3, envelope extraction follows compression. All models account to a similar extent for the recruitment phenomenon measured with narrow-band stimuli and for forward-masking data of normal-hearing and hearing-impaired subjects using a 20-ms, 2-kHz tone signal and a 1-kHz-wide bandpass noise masker centered at 2 kHz. A clear difference between the different models occurs for the processing of temporally fluctuating stimuli. A modulation-rate-independent increase in modulation-response level for simulating impaired hearing is only predicted by Model 1 while the other two models realize a modulation-rate-dependent increase. Hence, the predictions of Model 2 and Model 3 are in conflict with the results of modulation-matching experiments reported in the literature. It is concluded that key properties of sensorineural hearing loss (altered loudness perception, reduced dynamic range, normal temporal properties but prolonged forward-masking effects) can effectively be modeled by incorporating a fast-acting expansion within the current processing model prior to the nonlinear adaptation stage. 
Based on these findings, a model of both normal and impaired hearing is proposed which incorporates a fast-acting compressive nonlinearity, representing the cochlear nonlinearity (which is reduced in impaired listeners), followed by an instantaneous expansion and the nonlinear adaptation stage which represent aspects of the retro-cochlear information processing in the auditory system.

14.
The present study investigated the ability of normal-hearing listeners and cochlear implant users to recognize vocal emotions. Sentences were produced by 1 male and 1 female talker according to 5 target emotions: angry, anxious, happy, sad, and neutral. Overall amplitude differences between the stimuli were either preserved or normalized. In experiment 1, vocal emotion recognition was measured in normal-hearing and cochlear implant listeners; cochlear implant subjects were tested using their clinically assigned processors. When overall amplitude cues were preserved, normal-hearing listeners achieved near-perfect performance, whereas cochlear implant listeners recognized less than half of the target emotions. Removing the overall amplitude cues significantly worsened mean normal-hearing and cochlear implant performance. In experiment 2, vocal emotion recognition was measured in cochlear implant listeners as a function of the number of channels (from 1 to 8) and envelope filter cutoff frequency (50 vs 400 Hz) in experimental speech processors. In experiment 3, vocal emotion recognition was measured in normal-hearing listeners as a function of the number of channels (from 1 to 16) and envelope filter cutoff frequency (50 vs 500 Hz) in acoustic cochlear implant simulations. Results from experiments 2 and 3 showed that both cochlear implant and normal-hearing performance significantly improved as the number of channels or the envelope filter cutoff frequency was increased. The results suggest that spectral, temporal, and overall amplitude cues each contribute to vocal emotion recognition. The poorer cochlear implant performance is most likely attributable to the lack of salient pitch cues and the limited functional spectral resolution.

15.
Consonant identification was measured for normal-hearing listeners using Vowel-Consonant-Vowel stimuli that were either unprocessed or spectrally degraded to force listeners to use temporal-envelope cues. Stimuli were embedded in a steady state or fluctuating noise masker and presented at a fixed signal-to-noise ratio. Fluctuations in the maskers were obtained by applying sinusoidal modulation to: (i) the amplitude of the noise (1st-order SAM masker) or (ii) the modulation depth of a 1st-order SAM noise (2nd-order SAM masker). The frequencies of the amplitude variation fm and the depth variation f'm were systematically varied. Consistent with previous studies, identification scores obtained with unprocessed speech were highest in an 8-Hz, 1st-order SAM masker. Reception of voicing and manner also peaked around fm=8 Hz, while the reception of place of articulation was maximal at a higher frequency (fm=32 Hz). When 2nd-order SAM maskers were used, identification scores and received information for each consonant feature were found to be independent of f'm. They decreased progressively with increasing carrier modulation frequency fm, and ranged between those obtained with the steady state and the 1st-order SAM maskers. Finally, the results obtained with spectrally degraded speech were similar across all types of maskers, although an 8% improvement in the reception of voicing was observed for modulated maskers with fm < 64 Hz compared to the steady-state masker. These data provide additional evidence that listeners take advantage of temporal minima in fluctuating background noises, and suggest that: (i) minima of different durations are required for an optimal reception of the three consonant features and (ii) complex (i.e., 2nd-order) envelope fluctuations in background noise do not degrade speech identification by interfering with speech-envelope processing.
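The masker construction can be sketched as follows: in a 1st-order SAM masker the noise amplitude varies sinusoidally at fm, and in a 2nd-order SAM masker the modulation depth itself varies sinusoidally at f'm. The exact depth normalization below is an assumption for illustration, not the study's recipe:

```python
import math
import random

def sam_masker(dur, fs, fm, m=1.0, fm2=None, m2=0.5, seed=0):
    """Sinusoidally amplitude-modulated Gaussian noise.
    fm2=None  -> 1st-order SAM: n(t) * (1 + m*sin(2*pi*fm*t)).
    fm2 given -> 2nd-order SAM: the depth m is itself modulated at fm2
    (scaled so the instantaneous depth never exceeds m)."""
    rng = random.Random(seed)
    out = []
    for n in range(int(dur * fs)):
        t = n / fs
        depth = m if fm2 is None else \
            m * (1.0 + m2 * math.sin(2.0 * math.pi * fm2 * t)) / (1.0 + m2)
        out.append((1.0 + depth * math.sin(2.0 * math.pi * fm * t))
                   * rng.gauss(0.0, 1.0))
    return out

masker1 = sam_masker(0.5, 8000, fm=8.0)             # 8-Hz 1st-order SAM
masker2 = sam_masker(0.5, 8000, fm=32.0, fm2=4.0)   # 2nd-order SAM
```

The 1st-order masker provides periodic temporal minima ("dips") at rate fm; the 2nd-order masker additionally makes the depth of those dips wax and wane at f'm.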

16.
OBJECTIVE: The objective of this study was to compare speech recognition across a sampling of amplification choices available for listeners with severe loss. This includes conventional options (linear with peak clipping and linear with compression limiting) and newer strategies (multichannel wide-dynamic range compression [WDRC]) theorized to better accommodate reduced dynamic range. A second objective was to compare speech quality across the same conditions using a paired-comparison test. DESIGN: Participants were 13 adults with severe sensorineural hearing loss and a control group of seven adults with normal hearing. Test materials included consonant-vowel syllables (speech recognition) and sentences (speech quality). Four amplification conditions were included: peak clipping; compression limiting; two-channel WDRC; and three-channel WDRC, with overall audibility similar across conditions. In the WDRC conditions, the compression ratio was fixed at 3:1 in each channel. Consonant recognition was measured using a closed-set task, and speech quality was measured using a paired-comparison test. RESULTS: For the listeners with severe loss, recognition and preference were lower for a three-channel WDRC system than for a compression limiting system. Specific errors were consistent with poorer transmission of amplitude envelope information by the multichannel WDRC systems. CONCLUSIONS: Under some conditions, the benefit of fast-acting, multichannel WDRC systems relative to more linear amplification strategies may be reduced in listeners with severe loss. Performance decrements with these systems are consistent with consequences of broader auditory filters.  相似文献   

17.
Acta Oto-Laryngologica. 2012;132(6):630-637.
Conclusion: Cochlear implant (CI) recipients’ performance on lexical tone identification and consonant recognition can be enhanced by providing greater spectral detail. Objective: To evaluate the effects of increasing the number of total spectral channels on lexical tone identification and consonant recognition by normally hearing listeners who are native speakers of Mandarin Chinese. Subjects and methods: Lexical tone identification and consonant recognition were measured in 15 Mandarin-speaking, normal-hearing (NH) listeners with varied numbers of total spectral channels (4, 6, 8, 10, 12, 16, 20, and 24), using acoustic simulations of CIs. Results: The NH listeners' lexical tone identification performance ranged from 44.53% to 66.60% with 4-24 spectral channels. Tone identification performance remained similar from 4 to 16 channels but improved significantly from 16 to 20 channels. For consonant recognition, the NH listeners' overall accuracy ranged from 73.17% to 95.33% with 4-24 channels. Steady improvement in consonant recognition accuracy was observed as a function of increasing the number of spectral channels. With about 12-16 spectral channels, the NH listeners' overall accuracy in consonant recognition began to be comparable to their accuracy with the unprocessed stimuli.

18.
PURPOSE: When understanding speech in complex listening situations, older adults with hearing loss face the double challenge of cochlear hearing loss and deficits of the aging auditory system. Wide-dynamic range compression (WDRC) is used in hearing aids as remediation for the loss of audibility associated with hearing loss. WDRC processing has the additional effect of altering the acoustics of the speech signal, particularly the temporal envelope. Older listeners are negatively affected by other types of temporal distortions, but this has not been found for the distortion of WDRC processing for simple signals. The purpose of this research was to determine the circumstances under which older adults might be negatively affected by WDRC processing and what compensatory mechanisms those listeners might be using for the listening conditions when speech recognition performance is not affected. METHOD: Two groups of adults with mild to moderate hearing loss were tested: (a) young-old (62-74 years, n=11) and (b) old-old (75-88 years, n=14). The groups did not differ in hearing loss, cognition, working memory, or self-reported health status. Participants heard low-predictability sentences compressed at each of 4 compression settings. The effect of compression on the temporal envelope was quantified by the envelope difference index (EDI; T. W. Fortune, B. D. Woodruff, & D. A. Preves, 1994). The sentences were presented at three rates: (a) normal rate, (b) 50% time compressed, and (c) time restored. RESULTS: There was no difference in performance between age groups, or any interactions involving age. There was a significant interaction between speech rate and EDI value; as the EDI value increased, representing higher amounts of temporal envelope distortion, speech recognition was significantly reduced. At the highest EDI value, this reduction was greater for the time-compressed than the normal rate condition. 
When time was restored to the time-compressed signals, speech recognition did not improve. CONCLUSION: Temporal envelope changes were detrimental to recognition of low-context speech by older listeners once a certain threshold of distortion was reached, particularly for rapid-rate speech. Within the age range tested, the effect was not age related. The results of the time-restored condition suggest that listeners used acoustic redundancy to compensate for the negative effects of WDRC distortion in the normal-rate condition.
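The envelope difference index used in this study can be sketched as follows. This follows a plain reading of the Fortune, Woodruff, & Preves (1994) metric (mean-normalized envelopes, halved mean absolute difference, so 0 = identical envelopes and 1 = maximally different); the rectify-and-smooth envelope extractor and its 50-Hz cutoff are assumptions, not the original authors' exact procedure:

```python
import numpy as np

def envelope(x, fs, cutoff=50.0):
    """Temporal envelope: full-wave rectify, then smooth with a moving
    average roughly matching a `cutoff`-Hz low-pass (illustrative choice)."""
    win = max(1, int(fs / cutoff))
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def edi(x1, x2, fs):
    """Envelope Difference Index: each envelope is normalized by its own
    mean (making the index insensitive to overall level), and the mean
    absolute difference is halved to bound the result in [0, 1]."""
    e1 = envelope(x1, fs)
    e2 = envelope(x2, fs)
    e1 = e1 / e1.mean()
    e2 = e2 / e2.mean()
    return np.abs(e1 - e2).sum() / (2 * len(e1))
```

Because of the mean normalization, simply amplifying a signal leaves the EDI at zero; only changes to the envelope's shape, such as those WDRC introduces, raise it.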

19.
The effect of fast-acting compression on speech recognition in fully modulated (FUM) noise was investigated in two experiments with listeners with normal and impaired hearing. We sought to determine the relationship between the benefit from compression and several audiological factors; sensitivity to changes in compression parameters was also evaluated. The results showed that two-thirds of the listeners performed worse with fast-acting compression than with linear processing. Normal-hearing listeners showed the most benefit from compression. A significant relationship was found between benefit from compression and speech-to-noise ratio at threshold (SNRT) in slightly modulated (SM) noise. Pure-tone threshold was a weak predictor of benefit from compression, and no relationship was found between benefit from compression and release of masking for the FUM noise. The variability in results across different compression parameters was related to SNRT in SM noise. The results suggest an inverse relationship between benefit from compression and the severity of the suprathreshold hearing loss.
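A fast-acting compressor of the kind contrasted with linear processing here can be sketched as an envelope follower driving a static gain rule. All parameter values below (threshold, ratio, attack/release times) are illustrative, not those used in the experiments:

```python
import numpy as np

def wdrc(x, fs, threshold_db=-40.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Sketch of a fast-acting wide-dynamic-range compressor: a one-pole
    envelope follower with separate attack/release time constants drives a
    static gain curve (unity gain below threshold, ratio-compressed above)."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x, dtype=float)
    for i, s in enumerate(x):
        level = abs(s)
        # Fast attack when the level rises, slower release when it falls
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        if level_db > threshold_db:
            gain_db = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
        else:
            gain_db = 0.0
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

With short attack/release times the gain tracks the syllabic envelope, which is what reduces the temporal modulation depth that some listeners in the study appeared to rely on.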

20.
The purpose of this study was to determine the role of frequency selectivity and sequential stream segregation in the perception of simultaneous sentences by listeners with sensorineural hearing loss. Simultaneous sentence perception was tested in listeners with normal hearing and with sensorineural hearing loss using sentence pairs consisting of one sentence spoken by a male talker and one sentence spoken by a female talker. Listeners were asked to repeat both sentences and were scored on the number of words repeated correctly in each sentence. Separate scores were obtained for the first and second sentences repeated. Frequency selectivity was assessed using a notched-noise method in which thresholds for a 1,000-Hz pure-tone signal were measured in noise with spectral notch bandwidths of 0, 300, and 600 Hz. Sequential stream segregation was measured using tone sequences consisting of a fixed-frequency tone (A) and a variable-frequency tone (B). Tone sequences were presented in an ABA_ABA_... pattern, with the B tone starting at a frequency either below or above that of the fixed 1,000-Hz A tone. Initially the frequency difference was large; it was gradually decreased until listeners indicated that they could no longer perceptually separate the two tones (fusion threshold). Scores for the first sentence repeated decreased significantly with increasing age. There was a strong relationship between fusion threshold and simultaneous sentence perception, which remained even after partialling out the effects of age. Smaller frequency differences at fusion thresholds were associated with higher sentence scores. There was no relationship between frequency selectivity and simultaneous sentence perception. Results suggest that the abilities to perceptually separate pitch patterns and to separate sentences spoken simultaneously by different talkers are mediated by the same underlying perceptual and/or cognitive factors.
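The ABA_ stream-segregation stimulus described above can be sketched as follows; the tone and gap durations and the 5-ms onset/offset ramps are illustrative assumptions:

```python
import numpy as np

def aba_sequence(f_a, f_b, fs=16000, tone_ms=100, gap_ms=100, n_triplets=5):
    """Generate an ABA_ABA_... pure-tone sequence as used in stream-
    segregation tasks: fixed-frequency A tones alternate with a B tone,
    and each triplet is followed by a silent gap. Durations are
    illustrative, not those used in the study."""
    n = int(fs * tone_ms / 1000)
    t = np.arange(n) / fs
    # 5-ms linear onset/offset ramps to avoid spectral splatter
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)
    tone = lambda f: np.sin(2 * np.pi * f * t) * ramp
    gap = np.zeros(int(fs * gap_ms / 1000))
    triplet = np.concatenate([tone(f_a), tone(f_b), tone(f_a), gap])
    return np.tile(triplet, n_triplets)
```

Narrowing the A–B frequency difference across presentations, as in the study's adaptive procedure, eventually yields a single fused "galloping" stream; the difference at that point is the fusion threshold.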


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.) 京ICP备09084417号