Similar literature
20 similar documents found (search time: 31 ms)
1.
In normal hearing (NH), the perception of the gender of a speaker is strongly affected by two anatomically related vocal characteristics: the fundamental frequency (F0), related to vocal pitch, and the vocal tract length (VTL), related to the height of the speaker. Previous studies on gender categorization in cochlear implant (CI) users found that performance was variable, with few CI users performing at the level of NH listeners. Data collected with recorded speech produced by multiple talkers suggests that CI users might rely more on F0 and less on VTL than NH listeners. However, because VTL cannot be accurately estimated from recordings, it is difficult to know how VTL contributes to gender categorization. In the present study, speech was synthesized to systematically vary F0, VTL, or both. Gender categorization was measured in CI users, as well as in NH participants listening to unprocessed (only synthesized) and vocoded (and synthesized) speech. Perceptual weights for F0 and VTL were derived from the performance data. With unprocessed speech, NH listeners used both cues (normalized perceptual weight: F0 = 3.76, VTL = 5.56). With vocoded speech, NH listeners still made use of both cues but less efficiently (normalized perceptual weight: F0 = 1.68, VTL = 0.63). CI users relied almost exclusively on F0 while VTL perception was profoundly impaired (normalized perceptual weight: F0 = 6.88, VTL = 0.59). As a result, CI users’ gender categorization was abnormal compared to NH listeners. Future CI signal processing should aim to improve the transmission of both F0 cues and VTL cues, as a normal gender categorization may benefit speech understanding in competing talker situations.  相似文献   
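As an illustration of how such perceptual weights might be derived, the sketch below fits a logistic regression of simulated gender responses on standardized F0 and VTL cue values and reads the weights off the coefficient magnitudes. This is only a plausible reconstruction, not the authors' actual analysis; the cue ranges, trial counts, and simulated listener are assumptions.

```python
# Hypothetical sketch: estimating perceptual weights for F0 and VTL cues
# from gender-categorization responses via logistic regression.
# NOT the authors' exact analysis; scaling and simulation are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated trials: each stimulus has an F0 shift and a VTL shift
# (in semitones re: a reference voice); response 1 = "male", 0 = "female".
n_trials = 400
f0_shift = rng.uniform(-12, 0, n_trials)     # lower F0 -> more male-like
vtl_shift = rng.uniform(0, 3.8, n_trials)    # longer VTL -> more male-like
true_w = np.array([-0.6, 1.5])               # hypothetical listener weights
logit = true_w[0] * f0_shift + true_w[1] * vtl_shift - 2.0
response = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Standardize the cues so the fitted coefficients are comparable, then fit.
X = np.column_stack([f0_shift, vtl_shift])
X = (X - X.mean(axis=0)) / X.std(axis=0)
model = LogisticRegression().fit(X, response)

w_f0, w_vtl = np.abs(model.coef_[0])
print(f"perceptual weight F0  : {w_f0:.2f}")
print(f"perceptual weight VTL : {w_vtl:.2f}")
```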

2.
Unilateral hearing loss (UHL) leads to an imbalanced input to the brain and results in cortical reorganization. In listeners with unilateral impairments, while the perceptual deficits associated with the impaired ear are well documented, less is known regarding auditory processing in the unimpaired, clinically normal ear. It is commonly accepted that perceptual consequences are unlikely to occur in the normal ear of listeners with UHL. This study investigated whether temporal resolution in the normal-hearing (NH) ear of listeners with long-standing UHL is similar to that in listeners with NH. Temporal resolution was assessed by measuring gap detection thresholds (GDTs) in within- and between-channel paradigms. GDTs were assessed in the normal ear of adults with long-standing, severe-to-profound UHL (N = 13) and in age-matched NH listeners (N = 22) at two presentation levels (30 and 55 dB sensation level). Analysis indicated that within-channel GDTs for listeners with UHL were not significantly different from those for the NH group, but between-channel GDTs for listeners with UHL were poorer (by more than a factor of 2) than those for the listeners with NH. Hearing thresholds in the normal or impaired ears were not associated with the elevated between-channel GDTs for listeners with UHL. Contrary to the common assumption that auditory processing capabilities are preserved in the normal ear of listeners with UHL, the current study demonstrated that a long-standing unilateral hearing impairment may adversely affect temporal resolution in the clinically normal ear. From a translational perspective, these findings imply that temporal processing deficits in the unimpaired ear of listeners with unilateral hearing impairments may contribute to their overall auditory perceptual difficulties.

3.
Normal-hearing (NH) listeners and hearing-impaired (HI) listeners detected and discriminated time-reversed harmonic complexes constructed of equal-amplitude harmonic components with fundamental frequencies (F0s) ranging from 50 to 800 Hz. Component starting phases were selected according to the positive and negative Schroeder-phase algorithms to produce within-period frequency sweeps with relatively flat temporal envelopes. Detection thresholds were not affected by component starting phases for either group of listeners. At presentation levels of 80 dB SPL, NH listeners could discriminate the two waveforms nearly perfectly when the F0s were below 300–400 Hz but fell to chance performance at higher F0s. HI listeners performed significantly more poorly, with reduced discrimination at several of the F0s. In contrast, at a lower presentation level intended to nearly equate sensation levels for the two groups, NH listeners' discrimination was poorer than that of HI listeners at most F0s. Roving the presentation level had little effect on performance by NH listeners but reduced performance by HI listeners. The differential impact of roving level suggests a weaker perception of timbre differences and a greater susceptibility to the detrimental effects of experimental uncertainty in HI listeners.
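A minimal sketch of a Schroeder-phase harmonic complex of the kind described above is given below, assuming the common phase rule theta_n = ±pi·n·(n+1)/N; the exact phase convention, component range, and levels used in the study may differ.

```python
# Hypothetical sketch of a Schroeder-phase harmonic complex. The phase
# formula theta_n = +/- pi * n * (n + 1) / N is one common form; the exact
# convention and scaling in the study may differ.
import numpy as np

def schroeder_complex(f0, n_harmonics, fs, dur, sign=+1):
    """Equal-amplitude harmonic complex with Schroeder starting phases."""
    t = np.arange(int(dur * fs)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        phase = sign * np.pi * n * (n + 1) / n_harmonics
        x += np.cos(2 * np.pi * n * f0 * t + phase)
    return x / np.max(np.abs(x))   # normalize to +/-1

fs = 44100
pos = schroeder_complex(f0=100, n_harmonics=30, fs=fs, dur=0.4, sign=+1)
neg = schroeder_complex(f0=100, n_harmonics=30, fs=fs, dur=0.4, sign=-1)
# Time-reversing the positive-Schroeder complex yields (up to onset phase)
# the negative-Schroeder complex, which is what the discrimination probes.
rev = pos[::-1]
```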

4.
It has been recently suggested that listeners having a sensorineural hearing impairment (HI) may possess a deficit in their ability to integrate speech information across different frequencies. When presented with a task that required across-frequency integration of speech patterns, listeners with HI performed more poorly than their normal-hearing (NH) counterparts (E. W. Healy & S. P. Bacon, 2002; C. W. Turner, S.-L. Chi, & S. Flock, 1999). E. W. Healy and S. P. Bacon (2002) also showed that performance of the listeners with HI fell more steeply when increasing amounts of temporal asynchrony were introduced to the pair of widely separated patterns. In the current study, the correlations between the fluctuating envelopes of the acoustic stimuli were calculated, both when the patterns were aligned and at various between-band asynchronies. It was found that the rate at which acoustic correlation fell as a function of asynchrony closely matched the rate at which intelligibility fell for the NH listeners. However, the intelligibility scores produced by the listeners with HI fell more steeply than the acoustic analysis would suggest. Thus, these data provide additional support for the hypothesis that individuals having sensorineural HI may have a deficit in their ability to integrate speech information present at different frequencies.  相似文献   
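The between-band envelope-correlation analysis described above can be sketched roughly as follows: extract the smoothed Hilbert envelope of two widely separated bands and correlate them while one band is progressively delayed. The band edges, smoothing cutoff, and the stand-in test signal are illustrative assumptions, not the study's parameters.

```python
# Hypothetical sketch: correlation between the temporal envelopes of two
# widely separated bands as a function of between-band asynchrony.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
rng = np.random.default_rng(1)
t = np.arange(fs * 2) / fs
common_env = 1.0 + 0.8 * np.sin(2 * np.pi * 3.0 * t)    # shared 3-Hz modulation
speech_like = common_env * rng.standard_normal(len(t))  # stand-in for speech

def band_envelope(x, lo, hi, fs, env_cut=16.0):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))
    sos_lp = butter(2, env_cut, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos_lp, env)

env_low = band_envelope(speech_like, 300, 600, fs)
env_high = band_envelope(speech_like, 3000, 4500, fs)

for delay_ms in (0, 25, 50, 100, 200):
    d = int(fs * delay_ms / 1000)
    a, b = env_low[: len(env_low) - d], env_high[d:]
    r = np.corrcoef(a, b)[0, 1]
    print(f"asynchrony {delay_ms:3d} ms  envelope correlation r = {r:+.2f}")
```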

5.
The present study was designed to examine speech recognition in patients with sensorineural hearing loss when the temporal and spectral information in the speech signals was co-varied. Four subjects with mild to moderate sensorineural hearing loss were recruited to participate in consonant and vowel recognition tests that used speech stimuli processed through a noise-excited vocoder. The number of channels was varied between 2 and 32, which defined spectral information. The lowpass cutoff frequency of the temporal envelope extractor was varied from 1 to 512 Hz, which defined temporal information. Results indicate that performance varied tremendously among the subjects with sensorineural hearing loss. For consonant recognition, patterns of relative contributions of spectral and temporal information were similar to those in normal-hearing subjects. The utility of temporal envelope information appeared to be normal in the hearing-impaired listeners. For vowel recognition, which depended predominantly on spectral information, the performance plateau was reached with as many as 16-24 channels, much higher than expected, given that frequency selectivity in patients with sensorineural hearing loss might be compromised. To understand the mechanisms by which hearing-impaired listeners utilize spectral and temporal cues for speech recognition, future studies that involve a large sample of patients with sensorineural hearing loss will be necessary to elucidate the relationship between frequency selectivity, central processing capability, and speech recognition performance using vocoded signals.
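A minimal noise-excited vocoder of the general type described above might look like the sketch below, where the number of channels sets the spectral resolution and the envelope low-pass cutoff sets the temporal resolution. Filter orders, band spacing, and the stand-in input signal are assumptions for illustration.

```python
# Minimal noise-excited vocoder sketch: N analysis channels and a variable
# envelope low-pass cutoff trade off spectral and temporal information.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(x, fs, n_channels=8, env_cutoff=50.0,
                  f_lo=80.0, f_hi=6000.0):
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.maximum(band, 0.0)                     # half-wave rectify
        sos_lp = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
        env = np.maximum(sosfiltfilt(sos_lp, env), 0.0) # temporal cutoff
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                            # band-limited noise carrier
    return out / np.max(np.abs(out))

fs = 16000
speech = np.random.default_rng(2).standard_normal(fs)  # stand-in signal
vocoded = noise_vocoder(speech, fs, n_channels=8, env_cutoff=16.0)
```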

6.
The brain, using expectations, linguistic knowledge, and context, can perceptually restore inaudible portions of speech. Such top-down repair is thought to enhance speech intelligibility in noisy environments. Hearing-impaired listeners with cochlear implants commonly complain about not understanding speech in noise. We hypothesized that the degradations in the bottom-up speech signals due to the implant signal processing may have a negative effect on the top-down repair mechanisms, which could partially be responsible for this complaint. To test the hypothesis, phonemic restoration of interrupted sentences was measured with young normal-hearing listeners using a noise-band vocoder simulation of implant processing. Decreasing the spectral resolution (by reducing the number of vocoder processing channels from 32 to 4) systematically degraded the speech stimuli. Supporting the hypothesis, the size of the restoration benefit varied as a function of spectral resolution. A significant benefit was observed only at the highest spectral resolution of 32 channels. With eight channels, which resembles the resolution available to most implant users, there was no significant restoration effect. Combined electric-acoustic hearing has been previously shown to provide better intelligibility of speech in adverse listening environments. In a second configuration, combined electric-acoustic hearing was simulated by adding low-pass-filtered acoustic speech to the vocoder processing. There was a slight improvement in phonemic restoration compared to the first configuration; the restoration benefit was observed at spectral resolutions of both 16 and 32 channels. However, the restoration was not observed at lower spectral resolutions (four or eight channels). Overall, the findings imply that the degradations in the bottom-up signals alone (such as occurs in cochlear implants) may reduce the top-down restoration of speech.  相似文献   
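The simulated electric-acoustic configuration can be sketched as adding low-pass-filtered acoustic speech to the vocoder output; the 500-Hz cutoff and the stand-in signals below are assumptions, not the study's exact settings.

```python
# Hypothetical sketch of a simulated electric-acoustic (EAS) condition:
# low-pass-filtered acoustic speech is added to the vocoder output.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000
rng = np.random.default_rng(7)
speech = rng.standard_normal(fs)            # stand-in for a sentence
vocoded = rng.standard_normal(fs)           # stand-in for the vocoder output

sos = butter(4, 500.0, btype="lowpass", fs=fs, output="sos")
low_acoustic = sosfiltfilt(sos, speech)     # simulated residual acoustic hearing
eas_simulation = vocoded + low_acoustic     # combined electric + acoustic signal
```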

7.
Compared with normal-hearing listeners, cochlear implant (CI) users display a loss of intelligibility for speech interrupted by silence or noise, possibly due to a reduced ability to integrate and restore speech glimpses across silence or noise intervals. The present study was conducted to establish the extent of the deficit typical CI users have in understanding interrupted high-context sentences as a function of a range of interruption rates (1.5 to 24 Hz) and duty cycles (50 and 75%). Further, factors such as reduced signal quality of CI transmission and advanced age, as well as the potentially lower speech intelligibility of CI users even in the absence of any interruption manipulation, were explored by presenting young, as well as age-matched, normal-hearing (NH) listeners with full-spectrum and vocoded speech (either eight-channel or matched to the CI users' baseline speech intelligibility). While the actual CI users had more difficulty than the eight-channel noise-band vocoded listeners in understanding interrupted speech and in taking advantage of faster interruption rates and increased duty cycle, their performance was similar to that of the intelligibility-matched noise-band vocoded listeners. These results suggest that while loss of spectro-temporal resolution indeed plays an important role in the reduced intelligibility of interrupted speech, this factor alone cannot entirely explain the deficit. Other factors associated with real CIs, such as aging or failure in the transmission of essential speech cues, seem to contribute additionally to the poor intelligibility of interrupted speech.
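A rough sketch of the interruption manipulation, assuming square-wave gating with short onset/offset ramps (the ramp duration and stand-in signal are illustrative), is shown below.

```python
# Hypothetical sketch of interrupted speech: gate the signal on and off
# with a square wave of a given rate and duty cycle.
import numpy as np

def interrupt(x, fs, rate_hz=1.5, duty=0.5, ramp_ms=5.0):
    t = np.arange(len(x)) / fs
    gate = ((t * rate_hz) % 1.0 < duty).astype(float)
    # Smooth the gate edges with a short raised-cosine kernel to avoid clicks.
    n_ramp = int(fs * ramp_ms / 1000)
    if n_ramp > 1:
        win = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
        gate = np.convolve(gate, win / win.sum(), mode="same")
    return x * gate

fs = 16000
speech = np.random.default_rng(3).standard_normal(fs * 2)   # stand-in sentence
for rate in (1.5, 3, 6, 12, 24):
    glimpsed = interrupt(speech, fs, rate_hz=rate, duty=0.5)
```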

8.
Cochlear implant (CI) users find it extremely difficult to discriminate between talkers, which may partially explain why they struggle to understand speech in a multi-talker environment. Recent studies, based on findings with postlingually deafened CI users, suggest that these difficulties may stem from their limited use of vocal-tract length (VTL) cues due to the degraded spectral resolution transmitted by the CI device. The aim of the present study was to assess the ability of adult CI users who had no prior acoustic experience, i.e., prelingually deafened adults, to discriminate between resynthesized “talkers” based on either fundamental frequency (F0) cues, VTL cues, or both. Performance was compared to individuals with normal hearing (NH), listening either to degraded stimuli, using a noise-excited channel vocoder, or non-degraded stimuli. Results show that (a) age of implantation was associated with VTL but not F0 cues in discriminating between talkers, with improved discrimination for those subjects who were implanted at earlier age; (b) there was a positive relationship for the CI users between VTL discrimination and speech recognition score in quiet and in noise, but not with frequency discrimination or cognitive abilities; (c) early-implanted CI users showed similar voice discrimination ability as the NH adults who listened to vocoded stimuli. These data support the notion that voice discrimination is limited by the speech processing of the CI device. However, they also suggest that early implantation may facilitate sensory-driven tonotopicity and/or improve higher-order auditory functions, enabling better perception of VTL spectral cues for voice discrimination.  相似文献   

9.
The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and an accompanying reduction in frequency resolution. This study examined the discrimination by hearing-impaired listeners of highly similar harmonic complexes with a single spectral peak located in 1 of 3 frequency regions. The minimum level difference between peak and background harmonics required to discriminate a small change in the spectral center of the peak was measured for peaks located near 2, 3, or 4 kHz. Component phases were selected according to an algorithm thought to produce either highly modulated (positive Schroeder) or very flat (negative Schroeder) internal waveform envelopes in the cochlea. The mean amplitude difference between a spectral peak and the background components required for discrimination of pairs of harmonic complexes (spectral contrast threshold) was from 4 to 19 dB greater for listeners with hearing impairment than for a control group of listeners with normal hearing. In normal-hearing listeners, improvements in threshold were seen with increasing stimulus level, and there was a strong effect of stimulus phase, as the positive Schroeder stimuli always produced lower thresholds than the negative Schroeder stimuli. The listeners with hearing loss showed no consistent spectral contrast effects due to stimulus phase and also showed little improvement with increasing stimulus level, once their sensitivity loss was overcome. The lack of phase and level effects may be a result of the more linear processing occurring in impaired ears, producing poorer-than-normal frequency resolution, a loss of gain for low amplitudes, and an altered cochlear phase characteristic in regions of damage.  相似文献   

10.
Listeners with normal-hearing sensitivity recognize speech more accurately in the presence of fluctuating background sounds, such as a single competing voice, than in unmodulated noise at the same overall level. These performance differences are greatly reduced in listeners with hearing impairment, who generally receive little benefit from fluctuations in masker envelopes. If this lack of benefit is entirely due to elevated quiet thresholds and the resulting inaudibility of low-amplitude portions of signal + masker, then listeners with hearing impairment should derive increasing benefit from masker fluctuations as presentation levels increase. Listeners with normal-hearing (NH) sensitivity and listeners with hearing impairment (HI) were tested for sentence recognition at moderate and high presentation levels in competing speech-shaped noise, in competing speech by a single talker, and in competing time-reversed speech by the same talker. NH listeners showed more accurate recognition at moderate than at high presentation levels and better performance in fluctuating maskers than in unmodulated noise. For these listeners, modulated versus unmodulated performance differences tended to decrease at high presentation levels. Listeners with HI, as a group, showed performance that was more similar across maskers and presentation levels. Considered individually, only 2 out of 6 listeners with HI showed better overall performance and increasing benefit from masker fluctuations as presentation level increased. These results suggest that audibility alone does not completely account for the group differences in performance with fluctuating maskers; suprathreshold processing differences between groups also appear to play an important role. Competing speech frequently provided more effective masking than time-reversed speech containing temporal fluctuations of equal magnitude. This finding is consistent with "informational" masking resulting from competitive processing of words and phrases within the speech masker that would not occur for time-reversed sentences.

11.
PURPOSE: When understanding speech in complex listening situations, older adults with hearing loss face the double challenge of cochlear hearing loss and deficits of the aging auditory system. Wide-dynamic range compression (WDRC) is used in hearing aids as remediation for the loss of audibility associated with hearing loss. WDRC processing has the additional effect of altering the acoustics of the speech signal, particularly the temporal envelope. Older listeners are negatively affected by other types of temporal distortions, but this has not been found for the distortion of WDRC processing for simple signals. The purpose of this research was to determine the circumstances under which older adults might be negatively affected by WDRC processing and what compensatory mechanisms those listeners might be using for the listening conditions when speech recognition performance is not affected. METHOD: Two groups of adults with mild to moderate hearing loss were tested: (a) young-old (62-74 years, n=11) and (b) old-old (75-88 years, n=14). The groups did not differ in hearing loss, cognition, working memory, or self-reported health status. Participants heard low-predictability sentences compressed at each of 4 compression settings. The effect of compression on the temporal envelope was quantified by the envelope difference index (EDI; T. W. Fortune, B. D. Woodruff, & D. A. Preves, 1994). The sentences were presented at three rates: (a) normal rate, (b) 50% time compressed, and (c) time restored. RESULTS: There was no difference in performance between age groups, or any interactions involving age. There was a significant interaction between speech rate and EDI value; as the EDI value increased, representing higher amounts of temporal envelope distortion, speech recognition was significantly reduced. At the highest EDI value, this reduction was greater for the time-compressed than the normal rate condition. When time was restored to the time-compressed signals, speech recognition did not improve. CONCLUSION: Temporal envelope changes were detrimental to recognition of low-context speech for older listeners once a certain threshold of distortion was reached, particularly for rapid rate speech. For this sample tested, the effect was not age related within the age range tested here. The results of the time-restored condition suggested that listeners were using acoustic redundancy to compensate for the negative effects of WDRC distortion in the normal rate condition.  相似文献   
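The sketch below shows one plausible formulation of an EDI-style metric: normalize the two temporal envelopes to equal mean level and scale their summed absolute difference so that 0 means identical envelopes and 1 means completely non-overlapping envelopes. The envelope extraction and the crude instantaneous compressor are assumptions; the original definition is given in Fortune, Woodruff, and Preves (1994).

```python
# Hypothetical sketch of an Envelope Difference Index (EDI) style metric
# for quantifying how much compression alters the temporal envelope.
# Envelope extraction and normalization details here are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope(x, fs, cutoff=50.0):
    sos = butter(2, cutoff, btype="lowpass", fs=fs, output="sos")
    return np.maximum(sosfiltfilt(sos, np.abs(hilbert(x))), 0.0)

def edi(x_in, x_out, fs):
    e1, e2 = envelope(x_in, fs), envelope(x_out, fs)
    e1, e2 = e1 / e1.mean(), e2 / e2.mean()     # equate overall level
    # 0 = identical envelopes, 1 = completely non-overlapping envelopes.
    return np.sum(np.abs(e1 - e2)) / (np.sum(e1) + np.sum(e2))

fs = 16000
clean = np.random.default_rng(4).standard_normal(fs)   # stand-in sentence
compressed = np.sign(clean) * np.abs(clean) ** 0.5     # crude instantaneous 2:1 compression
print(f"EDI = {edi(clean, compressed, fs):.3f}")
```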

12.
Differences in fundamental frequency (F0) provide an important cue for segregating simultaneous sounds. Cochlear implants (CIs) transmit F0 information primarily through the periodicity of the temporal envelope of the electrical pulse trains. Successful segregation of sounds with different F0s requires the ability to process multiple F0s simultaneously, but it is unknown whether CI users have this ability. This study measured modulation frequency discrimination thresholds for half-wave rectified sinusoidal envelopes modulated at 115 Hz in CI users and normal-hearing (NH) listeners. The target modulation was presented in isolation or in the presence of an interferer. Discrimination thresholds were strongly affected by the presence of an interferer, even when it was unmodulated and spectrally remote. Interferer modulation increased interference and often led to very high discrimination thresholds, especially when the interfering modulation frequency was lower than that of the target. Introducing a temporal offset between the interferer and the target led to at best modest improvements in performance in CI users and NH listeners. The results suggest no fundamental difference between acoustic and electric hearing in processing single or multiple envelope-based F0s, but confirm that differences in F0 are unlikely to provide a robust cue for perceptual segregation in CI users.  相似文献   

13.
OBJECTIVE: To explore combined acute effects of frequency shift and compression-expansion on speech recognition, using noiseband vocoder processing. DESIGN: Recognition of vowels and consonants, processed with a noiseband vocoder, was measured with five normal-hearing subjects, between the ages of 27 and 35 yr. The speech signal was filtered into 8 or 16 analysis bands and the envelopes were extracted from each band. The carrier noise bands were modulated by the envelopes and resynthesized to produce the processed speech. In the baseline matched condition, the frequency ranges of the corresponding analysis and carrier bands were the same. In the shift only condition, the frequency ranges of the carrier bands were shifted up or down relative to the analysis bands. In the compression and expansion only conditions, the analysis band range was made larger or smaller, respectively, than the carrier band range. By applying the shift to carrier bands and compression or expansion to analysis bands simultaneously, the combined effects of the two spectral distortions on speech recognition were explored. RESULTS: When the spectral distortions of compression-expansion or shift were applied separately, the performance was reduced from the baseline matched condition. However, when the two spectral degradations were applied simultaneously, a compensatory effect was observed; the reduction in performance was smaller for some combinations compared to the reduction observed for each distortion individually. CONCLUSIONS: The results of the present study are consistent with previous vocoder studies with normal-hearing subjects that showed a negative effect of spectral mismatch between analysis and carrier bands on speech recognition. The present results further show that matching the frequency ranges of 1 to 2 kHz, which contain important speech information, can be more beneficial for speech recognition than matching the overall frequency ranges, in certain conditions.  相似文献   
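One way to picture the matched, shifted, and compressed conditions is as different mappings between analysis-band and carrier-band edges, as in the sketch below; the specific frequency ranges, log spacing, and shift size are illustrative assumptions rather than the study's values.

```python
# Hypothetical sketch of analysis vs. carrier band edges for the matched,
# shifted, and compressed vocoder conditions described above.
import numpy as np

def band_edges(f_lo, f_hi, n_bands):
    """Log-spaced band edges from f_lo to f_hi (Hz)."""
    return np.geomspace(f_lo, f_hi, n_bands + 1)

n_bands = 8
analysis = band_edges(200.0, 7000.0, n_bands)      # analysis filterbank

# Matched: carrier bands identical to the analysis bands.
carrier_matched = analysis.copy()

# Shift only: slide the carrier range up by a fixed ratio (here 0.3 octave).
carrier_shifted = analysis * 2 ** 0.3

# Compression only: the analysis range is wider than the carrier range, so
# the same envelope information is squeezed into a narrower output region.
carrier_compressed = band_edges(300.0, 5000.0, n_bands)

for name, carrier in [("matched", carrier_matched),
                      ("shifted", carrier_shifted),
                      ("compressed", carrier_compressed)]:
    pairs = zip(analysis[:-1], carrier[:-1])
    print(name, [f"{a:.0f}->{c:.0f} Hz" for a, c in list(pairs)[:3]], "...")
```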

14.
PURPOSE: This study investigated the acoustic characteristics of pediatric cochlear implant (CI) recipients' imitative production of rising speech intonation, in relation to perceptual judgments by listeners with normal hearing (NH). METHOD: Recordings of a yes-no interrogative utterance imitated by 24 prelingually deafened children with a CI were extracted from annual evaluation sessions. These utterances were perceptually judged by adult NH listeners with regard to intonation contour type (non-rise, partial-rise, or full-rise) and contour appropriateness (on a 5-point scale). Fundamental frequency, intensity, and duration properties of each utterance were also acoustically analyzed. RESULTS: Adult NH listeners' judgments of intonation contour type and contour appropriateness for each CI participant's utterances were highly positively correlated. The pediatric CI recipients did not consistently use appropriate intonation contours when imitating a yes-no question. Acoustic properties of the speech intonation produced by these individuals differed across utterances assigned to different intonation contour types by the NH listeners' perceptual judgments. CONCLUSIONS: These findings delineated the perceptual and acoustic characteristics of speech intonation imitated by prelingually deafened children and young adults with a CI. Future studies should address whether the degraded signals these individuals perceive via a CI contribute to their difficulties with speech intonation production.

15.
Cochlear implants provide good speech discrimination ability despite the highly limited amount of information they transmit compared with the normal cochlea. Noise-vocoded speech, which simulates cochlear implant processing for normal-hearing listeners, has demonstrated that spectrally and temporally degraded speech contains sufficient cues to support accurate speech discrimination. We hypothesized that the neural activity patterns generated in the primary auditory cortex by spectrally and temporally degraded speech sounds would account for the robust behavioral discrimination of speech. We examined the behavioral discrimination of noise-vocoded consonants and vowels by rats and recorded neural activity patterns from rat primary auditory cortex (A1) for the same sounds. We report the first evidence of behavioral discrimination of degraded speech sounds by an animal model. Our results show that rats are able to accurately discriminate both consonant and vowel sounds even after significant spectral and temporal degradation. The degree of degradation that rats can tolerate is comparable to that reported for human listeners. We observed that neural discrimination based on spatiotemporal patterns (spike timing) of A1 neurons is highly correlated with behavioral discrimination of consonants and that neural discrimination based on spatial activity patterns (spike count) of A1 neurons is highly correlated with behavioral discrimination of vowels. The results of the current study indicate that speech discrimination is resistant to degradation as long as the degraded sounds generate distinct patterns of neural activity.

16.
PURPOSE: To investigate the effects of talker variability on vowel recognition by cochlear implant (CI) users and by normal-hearing (NH) participants listening to 4-channel acoustic CI simulations. METHOD: CI users were tested with their clinically assigned speech processors. For NH participants, 3 CI processors were simulated, using different combinations of carrier type and temporal envelope cutoff frequency (noise band/160 Hz, sine wave/160 Hz, and sine wave/20 Hz). Vowel recognition was measured for 4 talkers, presented in either a single-talker context (1 talker per test block) or a multi-talker context (4 talkers per test block). RESULTS: CI users' vowel recognition was significantly poorer in the multi-talker context than in the single-talker context. When noise-band carriers were used in the simulations, NH performance was not significantly affected by talker variability. However, when sine-wave carriers were used in the simulations, NH performance was significantly affected by talker variability in both envelope filter conditions. CONCLUSIONS: Because fundamental frequency was not preserved by the 20-Hz envelope filter and only partially preserved by the 160-Hz envelope filter, both spectral and temporal cues contributed to the talker variability effects observed with sine-wave carriers. Similarly, spectral and temporal cues may have contributed to the talker variability effects observed with CI participants.  相似文献   

17.
Speech segregation in background noise remains a difficult task for individuals with hearing loss. Several signal processing strategies have been developed to improve the efficacy of hearing assistive technologies in complex listening environments. The present study measured speech reception thresholds in normal-hearing listeners attending to a vocoder based on the Fundamental Asynchronous Stimulus Timing algorithm (FAST: Smith et al. 2014), which triggers pulses based on the amplitudes of channel magnitudes in order to preserve envelope timing cues, with two different reconstruction bandwidths (narrowband and broadband) to control the degree of spectrotemporal resolution. Five types of background noise were used including same male talker, female talker, time-reversed male talker, time-reversed female talker, and speech-shaped noise to probe the contributions of different types of speech segregation cues and to elucidate how degradation affects speech reception across these conditions. Maskers were spatialized using head-related transfer functions in order to create co-located and spatially separated conditions. Results indicate that benefits arising from voicing and spatial cues can be preserved using the FAST algorithm but are reduced with a reduction in spectral resolution.  相似文献   

18.
Any sound can be separated mathematically into a slowly varying envelope and rapidly varying fine-structure component. This property has motivated numerous perceptual studies to understand the relative importance of each component for speech and music perception. Specialized acoustic stimuli, such as auditory chimaeras with the envelope of one sound and fine structure of another have been used to separate the perceptual roles for envelope and fine structure. Cochlear narrowband filtering limits the ability to isolate fine structure from envelope; however, envelope recovery from fine structure has been difficult to evaluate physiologically. To evaluate envelope recovery at the output of the cochlea, neural cross-correlation coefficients were developed that quantify the similarity between two sets of spike-train responses. Shuffled auto- and cross-correlogram analyses were used to compute separate correlations for responses to envelope and fine structure based on both model and recorded spike trains from auditory nerve fibers. Previous correlogram analyses were extended to isolate envelope coding more effectively in auditory nerve fibers with low center frequencies, which are particularly important for speech coding. Recovered speech envelopes were present in both model and recorded responses to one- and 16-band speech fine-structure chimaeras and were significantly greater for the one-band case, consistent with perceptual studies. Model predictions suggest that cochlear recovered envelopes are reduced following sensorineural hearing loss due to broadened tuning associated with outer-hair cell dysfunction. In addition to the within-fiber cross-stimulus cases considered here, these neural cross-correlation coefficients can also be used to evaluate spatiotemporal coding by applying them to cross-fiber within-stimulus conditions. Thus, these neural metrics can be used to quantitatively evaluate a wide range of perceptually significant temporal coding issues relevant to normal and impaired hearing.  相似文献   
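The envelope / fine-structure decomposition and the chimaera construction referred to above can be sketched with the Hilbert transform; the single-band version below is only meant to illustrate the idea, whereas real chimaera synthesis uses a multi-band filterbank.

```python
# Minimal sketch of envelope / fine-structure decomposition and a one-band
# auditory "chimaera" (envelope of one sound on the fine structure of another).
import numpy as np
from scipy.signal import hilbert

fs = 16000
rng = np.random.default_rng(5)
sound_a = rng.standard_normal(fs)        # stand-in for speech
sound_b = rng.standard_normal(fs)        # stand-in for noise

def env_tfs(x):
    analytic = hilbert(x)
    envelope = np.abs(analytic)                    # slowly varying envelope
    fine_structure = np.cos(np.angle(analytic))    # rapidly varying fine structure
    return envelope, fine_structure

env_a, tfs_a = env_tfs(sound_a)
env_b, tfs_b = env_tfs(sound_b)

# One-band chimaera: envelope of sound A imposed on the fine structure of B.
chimaera_ab = env_a * tfs_b
```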

19.
Recent studies have demonstrated that the detection of complex temporal envelopes relies - at least partially - on the perception of a distortion component generated by a peripheral (cochlear) and/or central (post-cochlear) non-linearity. In the present study, first- and second-order amplitude modulation (AM) detection thresholds were obtained in normally hearing (NH) and hearing-impaired (HI) listeners using a 2-kHz pure-tone carrier. In both groups of listeners, first-order AM detection thresholds were measured for AM rates fm ranging between 4 and 87 Hz, and second-order AM detection thresholds were measured for second-order AM rates fm' ranging between 4 and 23 Hz, using a fixed first-order 'carrier' AM rate fm of 64 Hz. When the sound pressure level was adjusted in order to yield equal detectability in both groups for the 64-Hz first-order carrier modulation, (i) first-order AM detection thresholds for the HI listeners were normal at fm=87 Hz, and better-than-normal at fm=4 and 16 Hz, and (ii) second-order AM detection thresholds were identical at all modulation rates in NH and HI listeners. Similar results were obtained when the audibility of the 2-kHz pure-tone carrier was equated for both groups, i.e. when listeners were tested at the same sensation level. These results demonstrate clearly that cochlear damage has no effect on the detection of complex temporal envelopes, and indicate that the distortion component must be generated by a more central non-linearity than cochlear compression, transduction, or short-term adaptation.  相似文献   
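The first- and second-order AM stimuli can be sketched as below, where the second-order case sinusoidally modulates the depth of the 64-Hz 'carrier' modulation at a slower rate; modulation depths and duration are illustrative assumptions.

```python
# Hypothetical sketch of first- and second-order amplitude modulation
# applied to a 2-kHz pure-tone carrier.
import numpy as np

fs = 44100
dur = 1.0
t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * 2000.0 * t)       # 2-kHz pure tone

# First-order AM: a single sinusoidal envelope at rate fm.
fm, m = 64.0, 0.5
first_order = (1 + m * np.cos(2 * np.pi * fm * t)) * carrier

# Second-order AM: the depth of the 64-Hz "carrier" modulation is itself
# modulated sinusoidally at a slower rate fm2.
fm2, m2 = 8.0, 0.5
depth = m * (1 + m2 * np.cos(2 * np.pi * fm2 * t))
second_order = (1 + depth * np.cos(2 * np.pi * fm * t)) * carrier
```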

20.
Hearing in noise is a challenge for all listeners, especially for those with hearing loss. This study compares cues used for detection of a low-frequency tone in noise by older listeners with and without hearing loss to those of younger listeners with normal hearing. Performance varies significantly across different reproducible, or “frozen,” masker waveforms. Analysis of these waveforms allows identification of the cues that are used for detection. This study included diotic (N0S0) and dichotic (N0Sπ) detection of a 500-Hz tone, with either narrowband or wideband masker waveforms. Both diotic and dichotic detection patterns (hit and false alarm rates) across the ensembles of noise maskers were predicted by envelope-slope cues, and diotic results were also predicted by energy cues. The relative importance of energy and envelope cues for diotic detection was explored with a roving-level paradigm that made energy cues unreliable. Most older listeners with normal hearing or mild hearing loss depended on envelope-related temporal cues, even for this low-frequency target. As hearing threshold at 500 Hz increased, the cues for diotic detection transitioned from envelope to energy cues. Diotic detection patterns for young listeners with normal hearing are best predicted by a model that combines temporal- and energy-related cues; in contrast, combining cues did not improve predictions for older listeners with or without hearing loss. Dichotic detection results for all groups of listeners were best predicted by interaural envelope cues, which significantly outperformed the classic cues based on interaural time and level differences or their optimal combination.  相似文献   
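The two decision statistics discussed above can be sketched as an energy cue (stimulus energy) and an envelope-slope cue (mean absolute slope of the Hilbert envelope), computed for a frozen narrowband masker with and without the 500-Hz tone; the bandwidth, duration, and tone level below are assumptions.

```python
# Hypothetical sketch of energy and envelope-slope decision statistics for
# a reproducible ("frozen") narrowband masker with and without a 500-Hz tone.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 10000
dur = 0.3
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(6)

# One frozen narrowband noise masker centered on 500 Hz (50 Hz wide).
sos = butter(4, [475, 525], btype="bandpass", fs=fs, output="sos")
masker = sosfiltfilt(sos, rng.standard_normal(len(t)))
tone = 0.2 * np.sin(2 * np.pi * 500.0 * t)

def energy_cue(x):
    return np.sum(x ** 2)

def envelope_slope_cue(x):
    env = np.abs(hilbert(x))
    return np.mean(np.abs(np.diff(env))) * fs   # mean |slope|, per second

for label, x in [("noise alone", masker), ("noise + tone", masker + tone)]:
    print(f"{label:12s} energy={energy_cue(x):8.2f} "
          f"slope={envelope_slope_cue(x):8.2f}")
```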
