Similar Literature (20 results)
1.
Functional simulation of sensorineural hearing impairment is an important research tool that can elucidate the nature of hearing impairments and suggest or eliminate compensatory signal-processing schemes. The objective of the current study was to evaluate the capability of an audibility-based functional simulation of hearing loss to reproduce the auditory-filter characteristics of listeners with sensorineural hearing loss. The hearing-loss simulation used either threshold-elevating noise alone or a combination of threshold-elevating noise and multiband expansion to reproduce the audibility-based characteristics of the loss (including detection thresholds, dynamic range, and loudness recruitment). The hearing losses of 10 listeners with bilateral, mild-to-severe hearing loss were simulated in 10 corresponding groups of 3 age-matched normal-hearing listeners. Frequency selectivity was measured using a notched-noise masking paradigm at five probe frequencies in the range of 250 to 4000 Hz with a fixed probe level of either 70 dB SPL or 8 dB SL (whichever was greater) and probe duration of 200 ms. The hearing-loss simulation reproduced the absolute thresholds of individual hearing-impaired listeners with an average root-mean-squared (RMS) difference of 2.2 dB and the notched-noise masked thresholds with an RMS difference of 5.6 dB. A rounded-exponential model of the notched-noise data was used to estimate equivalent rectangular bandwidths and slopes of the auditory filters. For some subjects and probe frequencies, the simulations were accurate in reproducing the auditory-filter characteristics of the hearing-impaired listeners. In other cases, however, the simulations underestimated the magnitude of the auditory bandwidths for the hearing-impaired listeners, which suggests the possibility of suprathreshold deficits.
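The rounded-exponential (roex) fit mentioned above reduces each notched-noise data set to a slope parameter and an equivalent rectangular bandwidth (ERB). A minimal sketch of that arithmetic for the symmetric roex(p) shape; the slope and probe-frequency values below are illustrative, not taken from the study:

```python
import math

def roex_weight(g, p):
    """Rounded-exponential roex(p) filter weight at normalized frequency
    deviation g = |f - f0| / f0; larger p means a steeper filter skirt."""
    return (1.0 + p * g) * math.exp(-p * g)

def erb_from_p(f0_hz, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter.
    Each side of W(g) integrates to 2/p in g units, so ERB = 4 * f0 / p."""
    return 4.0 * f0_hz / p
```

For example, an illustrative normal-hearing slope of p = 25 at a 1000-Hz probe gives an ERB of 160 Hz; a shallower slope (smaller p), as often estimated in sensorineural loss, widens the ERB proportionally.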

2.
Recent perceptual studies suggest that listeners with sensorineural hearing loss (SNHL) have a reduced ability to use temporal fine-structure cues, whereas the effects of SNHL on temporal envelope cues are generally thought to be minimal. Several perceptual studies suggest that envelope coding may actually be enhanced following SNHL and that this enhancement may degrade listening in modulated maskers (e.g., competing talkers). The present study examined physiological effects of SNHL on envelope coding in auditory nerve (AN) fibers in relation to fine-structure coding. Responses were compared between anesthetized chinchillas with normal hearing and those with a mild-to-moderate noise-induced hearing loss. Temporal envelope coding of narrowband-modulated stimuli (sinusoidally amplitude-modulated tones and single-formant stimuli) was quantified with several neural metrics. The relative strength of envelope and fine-structure coding was compared using shuffled correlogram analyses. On average, the strength of envelope coding was enhanced in noise-exposed AN fibers. A high degree of enhanced envelope coding was observed in AN fibers with high thresholds and very steep rate-level functions, which were likely associated with severe outer and inner hair cell damage. Degradation in fine-structure coding was observed in that the transition between AN fibers coding primarily fine structure or envelope occurred at lower characteristic frequencies following SNHL. This relative fine-structure degradation occurred despite no degradation in the fundamental ability of AN fibers to encode fine structure and did not depend on reduced frequency selectivity. Overall, these data suggest the need to consider the relative effects of SNHL on envelope and fine-structure coding in evaluating perceptual deficits in temporal processing of complex stimuli.

3.
It has been reported that normal-hearing Chinese speakers base their lexical tone recognition on fine structure regardless of temporal envelope cues. However, a few psychoacoustic and perceptual studies have demonstrated that listeners with sensorineural hearing impairment may have an impaired ability to use fine structure information, whereas their ability to use temporal envelope information is close to normal. The purpose of this study is to investigate the relative contributions of temporal envelope and fine structure cues to lexical tone recognition in normal-hearing and hearing-impaired native Mandarin Chinese speakers. Twenty-two normal-hearing subjects and 31 subjects with various degrees of sensorineural hearing loss participated in the study. Sixteen sets of Mandarin monosyllables with four tone patterns for each were processed through a “chimeric synthesizer” in which temporal envelope from a monosyllabic word of one tone was paired with fine structure from the same monosyllable of other tones. The chimeric tokens were generated in the three channel conditions (4, 8, and 16 channels). Results showed that differences in tone responses among the three channel conditions were minor. On average, 90.9%, 70.9%, 57.5%, and 38.2% of tone responses were consistent with fine structure for normal-hearing, moderate, moderate to severe, and severely hearing-impaired groups respectively, whereas 6.8%, 21.1%, 31.4%, and 44.7% of tone responses were consistent with temporal envelope cues for the above-mentioned groups. Tone responses that were consistent neither with temporal envelope nor fine structure had averages of 2.3%, 8.0%, 11.1%, and 17.1% for the above-mentioned groups of subjects. Pure-tone average thresholds were negatively correlated with tone responses that were consistent with fine structure, but were positively correlated with tone responses that were based on the temporal envelope cues. 
Consistent with the idea that spectral resolvability underlies fine-structure coding, these results demonstrate that, as hearing loss becomes more severe, lexical tone recognition relies increasingly on temporal envelope rather than fine-structure cues because of widened auditory filters.
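The "chimeric synthesizer" above pairs the temporal envelope of one token with the fine structure of another; per analysis band, this is typically done with the analytic (Hilbert) signal. A minimal single-band sketch, assuming even-length real inputs; the filterbank split into 4, 8, or 16 channels and the resynthesis summation are omitted:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT method (input length assumed even)."""
    n = len(x)
    spec = np.fft.fft(x)
    weights = np.zeros(n)
    weights[0] = weights[n // 2] = 1.0   # DC and Nyquist bins unchanged
    weights[1:n // 2] = 2.0              # double the positive frequencies
    return np.fft.ifft(spec * weights)

def single_band_chimera(env_source, fine_source):
    """Impose the Hilbert envelope of env_source onto the cosine of the
    instantaneous phase (fine structure) of fine_source."""
    envelope = np.abs(analytic_signal(env_source))
    fine = np.cos(np.angle(analytic_signal(fine_source)))
    return envelope * fine
```

For a narrowband input the decomposition is nearly exact, so building a chimera of a signal with itself reconstructs the original; swapping in a different tone's fine structure yields the competing-cue tokens used in the study.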

4.
A dysfunction or loss of outer hair cells (OHC) and inner hair cells (IHC), assumed to be present in sensorineural hearing-impaired listeners, affects the processing of sound both at and above the listeners' hearing threshold. A loss of OHC may be responsible for a reduction of cochlear gain, apparent in the input/output function of the basilar membrane and steeper-than-normal growth of loudness with level (recruitment). IHC loss is typically assumed to cause a level-independent loss of sensitivity. In the current study, parameters reflecting individual auditory processing were estimated using two psychoacoustic measurement techniques. Hearing loss presumably attributable to IHC damage and low-level (cochlear) gain were estimated using temporal masking curves (TMC). Hearing loss attributable to OHC (HL(OHC)) was estimated using adaptive categorical loudness scaling (ACALOS) and by fitting a loudness model to measured loudness functions. In a group of listeners with thresholds ranging from normal to mild-to-moderately impaired, the loss in low-level gain derived from TMC was found to be equivalent to the HL(OHC) estimates inferred from ACALOS. Furthermore, HL(OHC) estimates obtained using both measurement techniques were highly consistent. Overall, the two methods provide consistent measures of auditory nonlinearity in individual listeners, with ACALOS offering better time efficiency.

5.
The present study was designed to examine speech recognition in patients with sensorineural hearing loss when the temporal and spectral information in the speech signals were co-varied. Four subjects with mild to moderate sensorineural hearing loss were recruited to participate in consonant and vowel recognition tests that used speech stimuli processed through a noise-excited vocoder. The number of channels was varied between 2 and 32, which defined spectral information. The lowpass cutoff frequency of the temporal envelope extractor was varied from 1 to 512 Hz, which defined temporal information. Results indicate that performance of subjects with sensorineural hearing loss varied tremendously among the subjects. For consonant recognition, patterns of relative contributions of spectral and temporal information were similar to those in normal-hearing subjects. The utility of temporal envelope information appeared to be normal in the hearing-impaired listeners. For vowel recognition, which depended predominantly on spectral information, the performance plateau was achieved with numbers of channels as high as 16-24, much higher than expected, given that frequency selectivity in patients with sensorineural hearing loss might be compromised. To understand the mechanisms by which hearing-impaired listeners utilize spectral and temporal cues for speech recognition, future studies involving a large sample of patients with sensorineural hearing loss will be necessary to elucidate the relationships among frequency selectivity, central processing capability, and speech recognition performance using vocoded signals.
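In a noise-excited vocoder, each analysis band's envelope is extracted and used to modulate band-limited noise; the envelope-extractor cutoff is the "temporal information" knob varied above. A single-band sketch under simplifying assumptions: half-wave rectification followed by a one-pole low-pass stands in for the envelope extractor, and the filterbank that defines the 2-32 channels is omitted:

```python
import math
import random

def one_pole_lowpass(samples, fs_hz, cutoff_hz):
    """First-order IIR low-pass; cutoff_hz plays the role of the
    envelope-extractor cutoff (1 to 512 Hz in the study above)."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs_hz)
    y, out = 0.0, []
    for s in samples:
        y = (1.0 - a) * s + a * y
        out.append(y)
    return out

def noise_vocode_band(band, fs_hz, env_cutoff_hz, seed=0):
    """Half-wave rectify, low-pass to obtain the envelope, then excite
    the envelope with uniform white noise."""
    rng = random.Random(seed)
    rectified = [max(s, 0.0) for s in band]
    envelope = one_pole_lowpass(rectified, fs_hz, env_cutoff_hz)
    return [e * rng.uniform(-1.0, 1.0) for e in envelope]
```

Lowering `env_cutoff_hz` smooths away fast envelope fluctuations, which is how the lowpass cutoff controls the temporal detail available to the listener.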

6.
Derleth RP, Dau T, Kollmeier B. Hearing Research 2001;159(1-2):132-149
Three modifications of a psychoacoustically and physiologically motivated processing model [Dau et al., J. Acoust. Soc. Am. 102 (1997a) 2892-2905] are presented and tested. The modifications aim at simulating sensorineural hearing loss and incorporate a level-dependent peripheral compression whose properties are affected by hearing impairment. Model 1 realizes this difference by introducing for impaired listeners an instantaneous level-dependent expansion prior to the adaptation stage of the model. Model 2 and Model 3 realize a level-dependent compression with time constants of 5 and 15 ms, respectively, for normal hearing and a reduced compression for impaired hearing. In Model 2, the compression occurs after the envelope extraction stage, while in Model 3, envelope extraction follows compression. All models account to a similar extent for the recruitment phenomenon measured with narrow-band stimuli and for forward-masking data of normal-hearing and hearing-impaired subjects using a 20-ms, 2-kHz tone signal and a 1-kHz-wide bandpass noise masker centered at 2 kHz. A clear difference between the different models occurs for the processing of temporally fluctuating stimuli. A modulation-rate-independent increase in modulation-response level for simulating impaired hearing is only predicted by Model 1 while the other two models realize a modulation-rate-dependent increase. Hence, the predictions of Model 2 and Model 3 are in conflict with the results of modulation-matching experiments reported in the literature. It is concluded that key properties of sensorineural hearing loss (altered loudness perception, reduced dynamic range, normal temporal properties but prolonged forward-masking effects) can effectively be modeled by incorporating a fast-acting expansion within the current processing model prior to the nonlinear adaptation stage. 
Based on these findings, a model of both normal and impaired hearing is proposed which incorporates a fast-acting compressive nonlinearity, representing the cochlear nonlinearity (which is reduced in impaired listeners), followed by an instantaneous expansion and the nonlinear adaptation stage which represent aspects of the retro-cochlear information processing in the auditory system.
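Model 1's instantaneous expansion can be caricatured as a power law on the normalized signal magnitude: an exponent above 1 attenuates low-level portions more than high-level ones, producing the steeper-than-normal level growth characteristic of recruitment. This is an illustrative stand-in, not the published model stage, and the exponent below is not a fitted parameter:

```python
def instantaneous_expansion(x, exponent):
    """Memoryless expansion of a signal normalized so that |x| <= 1.
    exponent > 1 expands the dynamic range (simulated impairment);
    exponent == 1 leaves the signal unchanged (normal hearing)."""
    return [(-1.0 if s < 0.0 else 1.0) * abs(s) ** exponent for s in x]
```

With exponent 2, a sample at amplitude 0.1 (20 dB below full scale) maps to 0.01 (40 dB below): every 20 dB of input range below full scale becomes 40 dB at the output, compressing the usable dynamic range from above.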

7.
Gap detection is a commonly used measure of temporal resolution, although the mechanisms underlying gap detection are not well understood. To the extent that gap detection depends on processes within, or peripheral to, the auditory brainstem, one would predict that a measure of gap threshold based on the auditory brainstem response (ABR) would be similar to the psychophysical gap detection threshold. Three experiments were performed to examine the relationship between ABR gap threshold and gap detection. Thresholds for gaps in a broadband noise were measured in young adults with normal hearing, using both psychophysical techniques and electrophysiological techniques that use the ABR. The mean gap thresholds obtained with the two methods were very similar, although ABR gap thresholds tended to be lower than psychophysical gap thresholds. There was a modest correlation between psychophysical and ABR gap thresholds across participants. ABR and psychophysical thresholds for noise masked by temporally continuous, high-pass, or spectrally notched noise were measured in adults with normal hearing. Restricting the frequency range with masking led to poorer gap thresholds on both measures. High-pass maskers affected the ABR and psychophysical gap thresholds similarly. Notched-noise-masked ABR and psychophysical gap thresholds were very similar except that low-frequency, notched-noise-masked ABR gap threshold was much poorer at low levels. The ABR gap threshold was more sensitive to changes in signal-to-masker ratio than was the psychophysical gap detection threshold. ABR and psychophysical thresholds for gaps in broadband noise were measured in listeners with sensorineural hearing loss and in infants. On average, both ABR gap thresholds and psychophysical gap detection thresholds of listeners with hearing loss were worse than those of listeners with normal hearing, although individual differences were observed. 
Psychophysical gap detection thresholds of 3- and 6-month-old infants were an order of magnitude worse than those of adults with normal hearing, as previously reported; however, ABR gap thresholds of 3-month-old infants were no different from those of adults with normal hearing. These results suggest that ABR gap thresholds and psychophysical gap detection depend on at least some of the same mechanisms within the auditory system.
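Psychophysical gap thresholds like those above are commonly tracked with a transformed up-down staircase; a 2-down/1-up rule converges on the 70.7%-correct point of the psychometric function. A minimal sketch with a deterministic simulated listener; the rule, step size, and stopping criterion here are illustrative, not the study's exact procedure:

```python
def two_down_one_up(respond, start_ms, step_ms, n_reversals=8):
    """Track a threshold with the 2-down/1-up rule (Levitt, 1971):
    two consecutive correct trials shorten the gap, one error lengthens
    it. `respond(gap_ms)` returns True for a correct trial. Returns the
    mean of the last six reversal levels."""
    level, run, direction, reversals = start_ms, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(level):
            run += 1
            if run == 2:
                run = 0
                if direction == +1:      # was moving up, now turns down
                    reversals.append(level)
                direction = -1
                level -= step_ms
        else:
            run = 0
            if direction == -1:          # was moving down, now turns up
                reversals.append(level)
            direction = +1
            level += step_ms
    return sum(reversals[-6:]) / 6.0
```

A simulated listener who always detects gaps of 5 ms or longer drives the track to oscillate around that value, so the reversal mean lands near 5 ms.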

8.
The goal of our study was to identify the role of auditory steady-state responses in hearing assessment for patients with functional hearing loss. The study design was to compare auditory steady-state response thresholds and standard pure-tone audiometry thresholds between patients with functional or sensorineural hearing loss. Subjects comprised 16 patients (24 ears) with functional hearing loss and 17 patients (24 ears) with sensorineural hearing loss. Differences and correlations between auditory steady-state response thresholds and standard pure-tone audiometry thresholds at 500, 1,000, 2,000 and 4,000 Hz were evaluated. In children with functional hearing loss, pure-tone audiometry thresholds and auditory steady-state response thresholds were significantly different at all frequencies and were not significantly correlated. In patients with sensorineural hearing loss, pure-tone audiometry thresholds and auditory steady-state response thresholds did not differ significantly at any frequency and were significantly correlated. Auditory steady-state responses may have a principal role in the assessment of auditory brainstem acuity, particularly at low frequencies, in patients with functional hearing loss.

9.
Objectives: To determine the influence of hearing loss on perception of vowel slices. Methods: Fourteen listeners aged 20-27 participated; ten (6 males) had hearing within normal limits and four (3 males) had moderate-severe sensorineural hearing loss (SNHL). Stimuli were six naturally-produced words consisting of the vowels /i a u æ ɛ ʌ/ in a /b V b/ context. Each word was presented as a whole and in eight slices: the initial transition, one half and one fourth of the initial transition, the full central vowel, one half of the central vowel, the ending transition, and one half and one fourth of the ending transition. Each of the 54 stimuli was presented 10 times at 70 dB SPL (sound pressure level); listeners were asked to identify the word. For the listeners with SNHL, stimuli were shaped using signal processing software to mimic the gain provided by an appropriately fitted hearing aid. Results: Listeners with SNHL had a steeper rate of decreasing vowel identification with decreasing slice duration as compared to listeners with normal hearing, and the listeners with SNHL showed different patterns of vowel identification across vowels when compared to listeners with normal hearing. Conclusion: Abnormal temporal integration is likely affecting vowel identification for listeners with SNHL, which in turn affects vowel internal representation at different levels of the auditory system.

10.
Souza PE, Kitch V. Ear and Hearing 2001;22(2):112-119
OBJECTIVE: The purpose of this study was to examine the importance of amplitude envelope cues to sentence identification for aged listeners. We also examined the effect of increasing alterations (i.e., compression ratio) and amount of available frequency content (i.e., number of channels) for this population. DESIGN: Thirty-six listeners were classified according to their age (35 or younger versus 65 and older) and hearing status (normal hearing versus hearing impaired). Within each hearing status, mean hearing thresholds for the young and aged listeners were matched as closely as possible through 4 kHz to control for sensitivity differences across age, and all listeners passed a cognitive screening battery. Accuracy of synthetic sentence identification was measured using stimuli processed to restrict spectral information. Performance was measured as a function of age, hearing status, amount of spectral information, and degradation of the amplitude envelope (using fast-acting compression with compression ratios ranging from 1:1 to 5:1). RESULTS: Mean identification scores decreased significantly with increasing age, the presence of hearing loss, the removal of spectral information, and with increasing distortion of the amplitude envelope (i.e., higher compression ratios). There was a consistent performance gap between young and aged listeners, regardless of the magnitude of change to the amplitude envelope. This suggests that some cue other than amplitude envelope variations is inaccessible to the aged listeners. CONCLUSIONS: Although aged listeners performed more poorly overall, they did not show greater susceptibility to alterations in amplitude-envelope cues, such as those produced by fast-acting amplitude compression systems. It is therefore unlikely that compression parameters such as attack and release time or compression ratio would need to be differentially programmed for aged listeners.
Instead, the data suggest two possibilities: aged listeners have difficulty accessing the fine-structure temporal cues present in speech, and/or performance is degraded by age-related loss of function at a central processing level.
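The compression ratios in the study describe the static input-output slope of a fast-acting compressor: above the compression threshold, every `ratio` dB of input change yields 1 dB of output change. A sketch of that static characteristic; the threshold and levels below are illustrative, not the study's fitting targets:

```python
def compressed_output_db(input_db, threshold_db, ratio):
    """Static compressor curve: linear (slope 1) below threshold_db,
    slope 1/ratio above it."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

With a 5:1 ratio and a 40-dB threshold, the 20-dB input range from 40 to 60 dB is squeezed into 4 dB of output (40 to 44 dB), which is exactly the envelope flattening the study treats as amplitude-envelope distortion; a 1:1 ratio leaves the signal unprocessed.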

11.
Unilateral hearing loss (UHL) leads to an imbalanced input to the brain and results in cortical reorganization. In listeners with unilateral impairments, while the perceptual deficits associated with the impaired ear are well documented, less is known regarding auditory processing in the unimpaired, clinically normal ear. It is commonly accepted that perceptual consequences are unlikely to occur in the normal ear of listeners with UHL. This study investigated whether temporal resolution in the normal-hearing (NH) ear of listeners with long-standing UHL is similar to that of listeners with NH. Temporal resolution was assayed by measuring gap detection thresholds (GDTs) in within- and between-channel paradigms. GDTs were assessed in the normal ear of adults with long-standing, severe-to-profound UHL (N = 13) and in age-matched NH listeners (N = 22) at two presentation levels (30 and 55 dB sensation level). Analysis indicated that within-channel GDTs for listeners with UHL were not significantly different from those for the NH group, but between-channel GDTs for listeners with UHL were poorer (by more than a factor of 2) than those for the listeners with NH. Hearing thresholds in the normal or impaired ears were not associated with the elevated between-channel GDTs for listeners with UHL. Contrary to the common assumption that auditory processing capabilities are preserved in the normal ear of listeners with UHL, the current study demonstrated that a long-standing unilateral hearing impairment may adversely affect auditory perception (temporal resolution) in the clinically normal ear. From a translational perspective, these findings imply that temporal processing deficits in the unimpaired ear of listeners with unilateral hearing impairments may contribute to their overall auditory perceptual difficulties.

12.
Thresholds of 4.6-ms tone bursts were measured in quiet and in the presence of a 100% sinusoidally amplitude-modulated speech-shaped noise. For the modulated-noise conditions, the onset of the tone burst coincided either with the maximum or the minimum modulator amplitude. The difference between these two masked thresholds provided an indication of the psychoacoustic modulation depth, i.e., the modulation depth preserved within the auditory system. Modulation frequencies spanning the modulation spectrum of speech (2.5 to 20 Hz) were examined. Tone bursts were 500, 1400, and 4000 Hz. Subjects included normal listeners, normal listeners with a hearing loss simulated by high-pass noise, and hearing-impaired listeners having high-frequency sensorineural hearing loss. Normal listeners revealed a psychoacoustic modulation depth of 30-40 dB at the lowest modulation frequencies, which decreased to about 15 dB at 20 Hz. The psychoacoustic modulation depth was decreased in the normal listeners with simulated hearing loss and in the hearing-impaired listeners. The data for the latter two groups were in general agreement, however, suggesting that normal listeners with hearing loss simulated by an additional masking noise provide a good representation of the performance of hearing-impaired listeners on this task.

13.
Conventional pure-tone thresholds, determined at ages between 4 and 8 years, were collected from a group of 163 children who had been tested by auditory brainstem response (ABR) between the ages of 1 and 3 years for objective hearing assessment. The subjects suffered from a variety of degrees and types of sensorineural hearing impairment. The prognostic value of the ABR peak V thresholds in response to 0.1-ms clicks with respect to the behavioural thresholds at octave frequencies from 125 to 8,000 Hz obtained later is evaluated. Correlation between ABR and behavioural thresholds is largest in the 1,000- to 8,000-Hz frequency range. Predicted pure-tone audiograms (mean and SD) were determined for each 10-dB class of ABR thresholds. SDs are on the order of 15 to 18 dB in the 500- to 4,000-Hz range and slightly higher at adjacent frequencies (i.e., somewhat larger than in comparable adult studies). Mean pure-tone thresholds in the 1,000- to 8,000-Hz frequency range are up to 20 dB worse than ABR thresholds, which is opposite to findings in normally-hearing subjects. Thus, with an increasing degree of sensorineural hearing impairment, pure-tone thresholds increase at a significantly higher rate than ABR thresholds. The observation is explained in terms of reduced temporal integration in cochlear hearing loss. ABR thresholds worse than 80 dB nHL are demonstrated to have very limited predictive value with respect to the amount of residual hearing, not only in the low- but also in the high-frequency range. The presence of otitis media during ABR testing is shown to increase estimation errors to more than 25 dB (SD).

14.
Unilateral hearing loss poses a substantial risk to academic achievement, communication, social development, and auditory processing. Thus, the aim of this study was to evaluate the auditory abilities of localization, closure, figure-ground, temporal resolution, and simple temporal ordering in a 17-year-old male diagnosed with profound unilateral sensorineural hearing loss of idiopathic etiology, with no other alterations. The evaluation consisted of the application of a checklist, a conventional clinical audiological evaluation (pure-tone audiometry, speech audiometry, and tympanometry), and monotic (ipsilateral SSI, filtered speech test) and diotic (sound localization, auditory memory for verbal and non-verbal sounds, AFT-R) auditory processing tests. Results showed alteration only in the sound localization test. No complaints were reported regarding the abilities of sound localization, attention, discrimination, and comprehension. In this case study, the profound unilateral sensorineural hearing loss did not seem to restrict the development of the auditory processing abilities evaluated, except for localization of the sound source.

15.
Any sound can be separated mathematically into a slowly varying envelope and rapidly varying fine-structure component. This property has motivated numerous perceptual studies to understand the relative importance of each component for speech and music perception. Specialized acoustic stimuli, such as auditory chimaeras with the envelope of one sound and fine structure of another have been used to separate the perceptual roles for envelope and fine structure. Cochlear narrowband filtering limits the ability to isolate fine structure from envelope; however, envelope recovery from fine structure has been difficult to evaluate physiologically. To evaluate envelope recovery at the output of the cochlea, neural cross-correlation coefficients were developed that quantify the similarity between two sets of spike-train responses. Shuffled auto- and cross-correlogram analyses were used to compute separate correlations for responses to envelope and fine structure based on both model and recorded spike trains from auditory nerve fibers. Previous correlogram analyses were extended to isolate envelope coding more effectively in auditory nerve fibers with low center frequencies, which are particularly important for speech coding. Recovered speech envelopes were present in both model and recorded responses to one- and 16-band speech fine-structure chimaeras and were significantly greater for the one-band case, consistent with perceptual studies. Model predictions suggest that cochlear recovered envelopes are reduced following sensorineural hearing loss due to broadened tuning associated with outer-hair cell dysfunction. In addition to the within-fiber cross-stimulus cases considered here, these neural cross-correlation coefficients can also be used to evaluate spatiotemporal coding by applying them to cross-fiber within-stimulus conditions. 
Thus, these neural metrics can be used to quantitatively evaluate a wide range of perceptually significant temporal coding issues relevant to normal and impaired hearing.
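A shuffled correlogram tallies spike-time differences across repeated stimulus presentations while excluding each spike train paired with itself; the coefficients described above normalize such tallies into correlation values. A brute-force sketch of the unnormalized tally only, with the normalization step omitted:

```python
def shuffled_correlogram(spike_trains, bin_s, max_lag_s):
    """All-order histogram of spike-time differences between every
    ordered pair of distinct trains (a shuffled auto-correlogram).
    spike_trains: list of lists of spike times in seconds."""
    n_side = int(round(max_lag_s / bin_s))
    counts = [0] * (2 * n_side + 1)
    for i, train_a in enumerate(spike_trains):
        for j, train_b in enumerate(spike_trains):
            if i == j:
                continue                 # skip within-train intervals
            for ta in train_a:
                for tb in train_b:
                    lag = tb - ta
                    if abs(lag) <= max_lag_s:
                        counts[n_side + int(round(lag / bin_s))] += 1
    return counts
```

Identical, precisely repeating trains pile every coincidence into the zero-lag bin; phase-locked responses produce periodic peaks whose spacing reflects fine structure or envelope, which is what the cross-stimulus comparisons exploit.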

16.
Hearing aids help compensate for disorders of the ear by amplifying sound; however, their effectiveness also depends on the central auditory system's ability to represent and integrate spectral and temporal information delivered by the hearing aid. The authors report that the neural detection of time-varying acoustic cues contained in speech can be recorded in adult hearing aid users using the acoustic change complex (ACC). Seven adults (50-76 years) with mild to severe sensorineural hearing loss participated in the study. When presented with 2 identifiable consonant-vowel (CV) syllables ("shee" and "see"), the neural detection of CV transitions (as indicated by the presence of a P1-N1-P2 response) was different for each speech sound. More specifically, the latency of the evoked neural response coincided in time with the onset of the vowel, similar to the latency patterns the authors previously reported in normal-hearing listeners.

17.
Deficits in temporal resolution and/or the precedence effect may underlie part of the speech understanding difficulties experienced by older listeners in degraded acoustic environments. In a previous investigation, R. Roberts and J. Lister (2004) identified a positive correlation between measures of temporal resolution and the precedence effect, specifically across-channel gap detection (as measured dichotically) and fusion. Across-channel gap detection may also be measured using frequency-disparate markers. Thus, the present investigation was designed to determine if the relation is specific to dichotic gap detection or may generalize to all types of across-channel gap detection. Gap-detection thresholds (GDTs) for fixed-frequency and frequency-disparate markers and lag-burst thresholds (LBTs) were measured for 3 groups of listeners: young with normal hearing sensitivity (YNH), older with normal hearing sensitivity (ONH), and older with sensorineural hearing loss (OIH). Also included were conditions of diotic and dichotic GDT. Largest GDTs were measured for the frequency-disparate markers, whereas largest LBTs were measured for the fixed-frequency markers. ONH and OIH listeners exhibited larger frequency-disparate and dichotic GDTs than YNH listeners. Listener age and hearing loss appeared to influence temporal resolution for frequency-disparate and dichotic stimuli, which is potentially important for the resolution of timing cues in speech. Age and hearing loss did not significantly influence fusion as measured by LBTs. Within each participant group, most GDTs and LBTs were positively, but not significantly, correlated. For all participants combined, across-channel GDTs and LBTs were positively and significantly correlated. This suggests that the 2 tasks may rely on a common across-channel temporal mechanism.

18.
Evidence suggests that word recognition depends on numerous talker-, listener-, and stimulus-related characteristics. The current study examined the effects of talker variability and lexical difficulty on spoken-word recognition among four groups of listeners: native listeners with normal hearing or hearing impairment (moderate sensorineural hearing loss) and non-native listeners with normal hearing or hearing impairment. The ability of listeners to accommodate trial-to-trial variations in talkers' voices was assessed by comparing recognition scores for a single-talker condition to those obtained in a multiple-talker condition. Lexical difficulty was assessed by comparing word-recognition performance between lexically "easy" and "hard" words as determined by frequency of occurrence in the language and the structural characteristics of similarity neighborhoods formalized in the Neighborhood Activation Model. An up-down adaptive procedure was used to determine the sound pressure level for 50% performance. Non-native listeners in both the normal-hearing and hearing-impaired groups required greater intensity for equal intelligibility than the native normal-hearing and hearing-impaired listeners. Results nevertheless showed significant effects of talker variability and lexical difficulty for all four groups. Structural equation modeling demonstrated that an audibility factor accounts for 2-3 times more variance in performance than does a linguistic-familiarity factor; however, the linguistic-familiarity factor is also essential to the model fit. The results demonstrated effects of talker variability and lexical difficulty on word recognition for both native and non-native listeners with normal or impaired hearing, and indicate that linguistic and indexical factors should be considered in the development of speech-recognition tests.

19.
Neural correlates of sensorineural hearing loss
Sensorineural hearing loss is characterized by a relatively well defined set of audiological signs and symptoms such as elevated thresholds, abnormally rapid loudness growth, subjective tinnitus, poor speech discrimination, and a reduction in temporal summation of acoustic energy. Knowledge of the underlying neural mechanisms responsible for some of these auditory distortions has progressed substantially within the past 10 yrs as a result of physiological studies on hearing-impaired animals. Some of the important neurophysiological changes relevant to sensorineural hearing loss are reviewed. One important effect associated with sensorineural hearing loss is the broadening of the cochlear filtering mechanism which may influence loudness growth and the perception of complex sounds. The neurophysiological results may also provide new insights in interpreting traditional audiological data and help in developing more refined tests for fitting hearing aids or differentiating patients with sensorineural hearing loss.

20.
Measures of energetic and informational masking were obtained from 46 listeners with sensorineural hearing loss. The task was to detect the presence of a sequence of eight contiguous 60-ms bursts of a pure tone embedded in masker bursts that were played synchronously with the signal. The masker was either a sequence of Gaussian noise bursts (energetic masker) or a sequence of random-frequency 2-tone bursts (informational masker). The 2-tone maskers were of two types: one type that normally tends to produce large amounts of informational masking and a second type that normally tends to produce very little informational masking. The two informational maskers are called "multiple-bursts same" (MBS), because the same frequency components are present in each burst of a sequence, and "multiple-bursts different" (MBD), because different frequency components are presented in each burst of a sequence. The difference in masking observed for these two maskers is thought to occur because the signal perceptually segregates from the masker in the MBD condition but fuses with the masker in MBS. In the present study, the effectiveness of the MBD masker, measured as the signal-to-masker ratio at masked threshold, increased with increasing hearing loss. In contrast, the signal-to-masker ratio at masked threshold for the MBS masker changed much less as a function of hearing loss. These results suggest that sensorineural hearing loss interferes with the ability of the listener to perceptually segregate individual components of complex sounds. The results from the energetic masking condition, which included critical ratio estimates for all listeners and auditory filter characteristics for a subset of the listeners, indicated that increasing hearing loss also reduced frequency selectivity at the signal frequency. 
Overall, these results suggest that the increased susceptibility to masking observed in listeners with sensorineural hearing loss is a consequence of both peripheral and central processes.
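The defining difference between the two informational maskers above is only when the random component frequencies are drawn: once per sequence (MBS, the same components in every burst) or once per burst (MBD, fresh components each burst). A sketch of that sampling step; the frequency range, tone count, and seed are illustrative, and waveform synthesis of the bursts is omitted:

```python
import random

def masker_frequencies(n_bursts, n_tones, same_across_bursts,
                       lo_hz=200.0, hi_hz=5000.0, seed=0):
    """Return one list of component frequencies per burst.
    MBS (same_across_bursts=True): one random draw reused for all bursts.
    MBD (same_across_bursts=False): a fresh draw for every burst."""
    rng = random.Random(seed)

    def draw():
        return sorted(rng.uniform(lo_hz, hi_hz) for _ in range(n_tones))

    if same_across_bursts:
        fixed = draw()
        return [list(fixed) for _ in range(n_bursts)]
    return [draw() for _ in range(n_bursts)]
```

Because the signal is a fixed-frequency burst sequence, it pops out against the burst-to-burst variation of MBD but fuses with the frozen components of MBS, which is the segregation mechanism the study probes.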
