Similar Articles
1.
Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss could recognize consonants as well as listeners with normal hearing when speech was processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, both normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would then become poorer than that of the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than do listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel speech condition were consistent with the previous study in that both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners for all other conditions, including the 2-channel speech condition. These results would appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution.
One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
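As a rough illustration of the kind of processing described above, the sketch below reduces a signal to N channels of spectral resolution in the style of a noise-excited channel vocoder: band-pass filtering, temporal-envelope extraction, and envelope-modulated noise carriers. The band edges, filter orders, and envelope cutoff are illustrative assumptions, not the parameters of the original study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels, f_lo=100.0, f_hi=6000.0, env_cutoff=160.0):
    """Reduce a signal to n_channels of spectral resolution.

    Band edges are spaced logarithmically between f_lo and f_hi
    (illustrative values). Each band's temporal envelope modulates
    a noise carrier limited to the same band.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(x))
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        env = np.abs(hilbert(band))                    # temporal envelope
        env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
        env = np.maximum(sosfilt(env_sos, env), 0.0)   # smooth, keep non-negative
        carrier = sosfilt(band_sos, noise)             # band-limited noise carrier
        out += env * carrier
    return out

fs = 16000
t = np.arange(int(0.5 * fs)) / fs
speechlike = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
for n in (1, 2, 4, 8):
    y = noise_vocode(speechlike, fs, n)
    print(n, round(float(np.sqrt(np.mean(y ** 2))), 4))
```

With one channel, only the broadband temporal envelope survives; as n grows, progressively finer spectral detail is restored, which is the manipulation the experiment exploits.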

2.
A comparison was made between speech recognition performance in conditions of quiet and babble (Speech Perception in Noise Test) and items from a self-assessment scale concerned with communication ability in quiet and noise (Understanding Speech section of Hearing Performance Inventory). Performance on both the speech recognition and self-assessment tests differentiated between normal listeners and individuals with mild-to-moderate sensorineural hearing loss. For the hearing-impaired group, correlations between speech recognition scores and ratings on the self-assessment items were poor, suggesting that the abilities measured by these two tests are only weakly related.

3.
Word-recognition scores in quiet and in noise were obtained from both ears of 101 elderly listeners demonstrating sensorineural hearing loss. These performance scores were compared to word-recognition scores predicted using Articulation Index analysis procedures. Negative difference scores (actual performance less predicted performance) would reflect aspects of the hearing impairment and/or the aging process that extend beyond the simple speech audibility constraints imposed by the hearing loss and masking noise. The distributions of difference scores in quiet for both the left and right ears revealed the majority of scores to be grouped near 0. In contrast, both distributions of difference scores in noise were normally distributed around means of approximately -25. These results suggest that the typical elderly hearing-impaired listener should be expected to demonstrate word-recognition performance in quiet similar to that of a normally hearing listener, given the same level of audibility of the speech material. On the other hand, in noise, this typical listener may be expected to demonstrate some word-recognition performance decrement, even after accounting for the audibility constraints of the hearing loss and noise.

4.
This study investigated whether unique consonant recognition and confusion patterns are associated with hearing loss among elderly listeners. Subjects were all older than 65 years and had either normal hearing or gradually or sharply sloping sensorineural hearing losses. Recognition of 19 consonants, paired with each of three vowels in a CV format, was assessed at two speech levels in a background of babble (+6 dB signal-to-babble ratio). Analyses of percent correct scores for overall nonsense syllable performance and for consonants according to place, manner, and voicing categories generally revealed better performance by the normal-hearing subjects than by the hearing-impaired subjects. However, individual differences scaling analysis of consonant confusions failed to retrieve speech perception patterns that were unique to listener group. These results tentatively suggest that the presence and configuration of hearing loss among elderly listeners may affect the level of performance but not the specific pattern of performance.

5.
Threshold of 4.6-ms tone bursts was measured in quiet and in the presence of a 100% sinusoidally amplitude-modulated speech-shaped noise. For the modulated-noise conditions, the onset of the tone burst coincided either with the maximum or the minimum modulator amplitude. The difference in these two masked thresholds provided an indication of the psychoacoustic modulation depth, or the modulation depth preserved within the auditory system. Modulation frequencies spanning the modulation spectrum of speech (2.5 to 20 Hz) were examined. Tone bursts were 500, 1400, and 4000 Hz. Subjects included normal listeners, normal listeners with a hearing loss simulated by high-pass noise, and hearing-impaired listeners having high-frequency sensorineural hearing loss. Normal listeners revealed a psychoacoustic modulation depth of 30-40 dB for the lowest modulation frequencies, which decreased to about 15 dB at 20 Hz. The psychoacoustic modulation depth was decreased in the normal listeners with simulated hearing loss and in the hearing-impaired listeners. There was general agreement in the data, however, for the latter two groups of listeners, suggesting that the normal listeners with hearing loss simulated by an additional masking noise provided a good representation of the performance of hearing-impaired listeners on this task.
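The masker and the depth measure can be sketched as follows. The masker is Gaussian noise multiplied by a 100% sinusoidal modulator; the psychoacoustic modulation depth is simply the difference between the two masked thresholds. The threshold values passed in below are hypothetical placeholders, since the abstract reports only the resulting 30-40 dB range, not individual thresholds.

```python
import numpy as np

def sam_noise(duration, fs, fm, m=1.0, seed=0):
    """100% sinusoidally amplitude-modulated Gaussian noise (m = modulation index)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * fs)) / fs
    carrier = rng.standard_normal(len(t))
    return (1.0 + m * np.sin(2 * np.pi * fm * t)) * carrier, t

def psychoacoustic_depth_db(thr_at_max_db, thr_at_min_db):
    """Depth preserved in the auditory system: the masked-threshold difference (dB)
    between tone bursts placed at the modulator maximum vs. minimum."""
    return thr_at_max_db - thr_at_min_db

fs = 16000
noise, t = sam_noise(1.0, fs, fm=10.0)  # 10-Hz modulation, within the 2.5-20 Hz range
# Hypothetical masked thresholds (dB) at modulator max and min:
print(psychoacoustic_depth_db(70.0, 35.0))
```

A larger difference means more of the imposed modulation depth survives auditory processing; the reduced values in the impaired and noise-masked listeners indicate a shallower internal representation of the envelope.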

6.
The purpose of this investigation was to evaluate the validity and reliability of materials designed for an assessment procedure capable of making meaningful distinctions in speech recognition ability among individuals having mild-to-moderate hearing losses. Sets of phonetic contrasts were presented within sentence contexts to 53 listeners (22 normal hearing, 31 hearing impaired) in 4 listening conditions (quiet and with background competition at signal-to-noise ratios of +5, 0, and -5 dB). The listeners were asked to discriminate pairs of sentences (e.g., "The man hid the dog" and "The man hit the dog") using same-different judgments. Their performances were analyzed in a manner enabling comparisons among items in terms of the classification of phonetic contrasts. Listener performance was also compared to performance on a set of independent variables, including the W-22 and QuickSIN speech tests, high-frequency hearing loss, speech reception threshold, listener age, and others. Results indicated that the new materials distinguished the normal-hearing from the hearing-impaired group and that listener performance (a) declined about 17% for each 5 dB decrement in SNR and (b) was influenced by the phonetic content of items in a manner similar to that reported by G. A. Miller and P. E. Nicely (1955). The performances of the hearing-impaired listeners were much more strongly related to high-frequency hearing loss, listener age, and other variables than were their performances on either the W-22 or QuickSIN tests. These findings are discussed with specific reference to the use of a mathematical model (i.e., the Rasch model for person measurement) for scaling items along a continuum of difficulty. 
The mathematical model and associated item difficulty values will serve as the basis for construction of a clinically useful computerized, adaptive test of speech recognition ability known as the Speech Sound Pattern Discrimination Test (Bochner, J., Garrison, W., Palmer, L., MacKenzie, D., & Braveman, A., 1997).

7.
OBJECTIVE: Experiments were conducted to examine the effects of lexical information on word recognition among normal hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency," or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high frequency over a low frequency word. DESIGN: Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors: word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large, on-line lexicon based on Webster's Pocket Dictionary. From this program 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group).
The 400 words were presented randomly to normal hearing listeners in speech-shaped noise (Experiment 1) and in quiet (Experiment 2) as well as to an elderly group of listeners with sensorineural hearing loss in the speech-shaped noise (Experiment 3). RESULTS: The results of three experiments verified predictions of NAM in both normal hearing and hearing-impaired listeners. In each experiment, words from low density neighborhoods were recognized more accurately than those from high density neighborhoods. The presence of high frequency neighbors (average neighborhood frequency) produced poorer recognition performance than comparable conditions with low frequency neighbors. Word frequency was found to have a highly significant effect on word recognition. Lexical conditions with high word frequencies produced higher performance scores than conditions with low frequency words. CONCLUSION: The results supported the basic tenets of NAM theory and identified both neighborhood structural properties and word frequency as significant lexical factors affecting word recognition when listening in noise and in quiet. The results of the third experiment permit extension of NAM theory to individuals with sensorineural hearing loss. Future development of speech recognition tests should allow for the effects of higher level cognitive (lexical) factors on lower level phonemic processing.
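The eight-way partition described in the design can be sketched with median splits on the three lexical factors. The toy lexicon and the median-split criterion are assumptions for illustration only; the study derived its measures computationally from an on-line lexicon and does not state its high/low cutoffs.

```python
import numpy as np

def lexical_conditions(words, word_freq, density, nbhd_freq):
    """Assign each word to one of 8 cells: high/low on each of three lexical factors.

    Splits at the median of each factor (a simplifying assumption).
    """
    wf = np.asarray(word_freq)
    nd = np.asarray(density)
    nf = np.asarray(nbhd_freq)
    hi_wf = wf > np.median(wf)
    hi_nd = nd > np.median(nd)
    hi_nf = nf > np.median(nf)
    cells = {}
    for i, w in enumerate(words):
        key = (("high" if hi_wf[i] else "low") + "-word-freq",
               ("high" if hi_nd[i] else "low") + "-density",
               ("high" if hi_nf[i] else "low") + "-nbhd-freq")
        cells.setdefault(key, []).append(w)
    return cells

# Toy lexicon of 400 placeholder items with random factor values
rng = np.random.default_rng(1)
words = [f"w{i}" for i in range(400)]
cells = lexical_conditions(words, rng.random(400), rng.random(400), rng.random(400))
print(len(cells), sorted(len(v) for v in cells.values()))
```

With real lexical statistics the factors are correlated, so obtaining eight equal 50-word groups, as the study did, requires selecting items rather than simply splitting.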

8.
Word recognition functions for Auditec recordings of the CID W-22 stimuli in multitalker noise were obtained using subjects with normal hearing and with mild-to-moderate sensorineural hearing loss. In the first experiment, word recognition functions were generated by varying the signal-to-noise ratio (S/N); whereas in the second experiment, a constant S/N was used and stimulus intensity was varied. The split-half reliability of word recognition scores for the normal-hearing and hearing-impaired groups revealed variability that agreed closely with predictions based on the simple binomial distribution. Therefore, the binomial model appears appropriate for estimating the variability of word recognition scores whether they are obtained in quiet or in a competing background noise. The reliability for threshold (50% point) revealed good stability. The slope of the recognition function was steeper for normal listeners than for the hearing-impaired subjects. Word recognition testing in noise can provide insight into the problems imposed by hearing loss, particularly when evaluating patients with mild hearing loss who exhibit no difficulties with conventional tests. Clinicians should employ a sufficient number of stimuli so that the test is adequately sensitive to differences among listening conditions.
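The binomial argument can be made concrete: under the model, a score of p proportion correct on an n-item list has standard deviation sqrt(p(1-p)/n), so doubling the number of items shrinks score variability by a factor of sqrt(2). The example values below are illustrative, not data from the study.

```python
import math

def binomial_sd(p, n_items):
    """Standard deviation of a word-recognition score (proportion correct)
    under the simple binomial model: sd = sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1.0 - p) / n_items)

# Illustrative: a listener at 70% correct on a 25-item vs. a 50-item list.
for n in (25, 50):
    print(n, round(100 * binomial_sd(0.70, n), 1))  # sd in percentage points
```

This is why the abstract's closing advice matters: with too few items, the binomial spread of a single score can exceed the real difference between listening conditions.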

9.
This study examined the performance of four subject groups on several temporally based measures of auditory processing and several measures of speech identification. The four subject groups were (a) young normal-hearing adults; (b) hearing-impaired elderly subjects ranging in age from 65 to 75 years; (c) hearing-impaired elderly adults ranging in age from 76 to 86 years; and (d) young normal-hearing listeners with hearing loss simulated with a spectrally shaped masking noise adjusted to match the actual hearing loss of the two elderly groups. In addition to between-group analyses of performance on the auditory processing and speech identification tasks, correlational and regression analyses within the two groups of elderly hearing-impaired listeners were performed. The results revealed that the threshold elevation accompanying sensorineural hearing loss was the primary factor affecting the speech identification performance of the hearing-impaired elderly subjects both as groups and as individuals. However, significant increases in the proportion of speech identification score variance accounted for were obtained in the elderly subjects by including various measures of auditory processing.

10.
This study tested the hypothesis that energetic masking limits the benefits obtained from spatial separation in multiple-talker listening situations, particularly for listeners with sensorineural hearing loss. A speech target was presented simultaneously with two or four speech maskers. The target was always presented diotically, and the maskers were presented either diotically or dichotically. In dichotic configurations, the maskers were symmetrically placed by introducing interaural time differences (ITDs) or infinitely large interaural level differences (ILDs; monaural presentation). Target-to-masker ratios for 50% correct were estimated. Thresholds in all separated conditions were poorer in listeners with hearing loss than listeners with normal hearing. Moreover, for a given listener, thresholds were similar for conditions with the same number of talkers per ear (e.g., ILD with four talkers equivalent to ITD with two talkers) and hence the same energetic masking. The results are consistent with the idea that increased energetic masking, rather than a specific spatial deficit, may limit performance for hearing-impaired listeners in spatialized speech mixtures.

11.
Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency, sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory--Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap for the two groups of hearing-impaired listeners; however, it was sensitive to perceived differences in hearing abilities for listeners who did and did not have a hearing loss. Experiment 2 was aimed at evaluation of consonant error patterns that accounted for observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance always demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability for normal-hearing and hearing-impaired listeners.

12.
Functional simulation of sensorineural hearing impairment is an important research tool that can elucidate the nature of hearing impairments and suggest or eliminate compensatory signal-processing schemes. The objective of the current study was to evaluate the capability of an audibility-based functional simulation of hearing loss to reproduce the auditory-filter characteristics of listeners with sensorineural hearing loss. The hearing-loss simulation used either threshold-elevating noise alone or a combination of threshold-elevating noise and multiband expansion to reproduce the audibility-based characteristics of the loss (including detection thresholds, dynamic range, and loudness recruitment). The hearing losses of 10 listeners with bilateral, mild-to-severe hearing loss were simulated in 10 corresponding groups of 3 age-matched normal-hearing listeners. Frequency selectivity was measured using a notched-noise masking paradigm at five probe frequencies in the range of 250 to 4000 Hz with a fixed probe level of either 70 dB SPL or 8 dB SL (whichever was greater) and probe duration of 200 ms. The hearing-loss simulation reproduced the absolute thresholds of individual hearing-impaired listeners with an average root-mean-squared (RMS) difference of 2.2 dB and the notched-noise masked thresholds with an RMS difference of 5.6 dB. A rounded-exponential model of the notched-noise data was used to estimate equivalent rectangular bandwidths and slopes of the auditory filters. For some subjects and probe frequencies, the simulations were accurate in reproducing the auditory-filter characteristics of the hearing-impaired listeners. In other cases, however, the simulations underestimated the magnitude of the auditory bandwidths for the hearing-impaired listeners, which suggests the possibility of suprathreshold deficits.
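The rounded-exponential (roex) model mentioned above has a convenient closed form: for a symmetric roex(p) filter with weight W(g) = (1 + pg)exp(-pg), where g is the frequency deviation normalized by the probe frequency f0, the equivalent rectangular bandwidth is ERB = 4 f0 / p. The sketch below checks that identity numerically; the values of f0 and p are illustrative, not fitted values from the study.

```python
import numpy as np

def roex_weight(g, p):
    """Rounded-exponential filter weight; g = |f - f0| / f0."""
    return (1.0 + p * g) * np.exp(-p * g)

def erb_from_p(f0, p):
    """Closed-form equivalent rectangular bandwidth of a symmetric roex(p) filter."""
    return 4.0 * f0 / p

# Numerical check: integrate one skirt (integral of W is 2/p) and double it.
f0, p = 1000.0, 25.0
g = np.linspace(0.0, 1.0, 200000)
dg = g[1] - g[0]
numeric_erb = 2.0 * f0 * float(np.sum(roex_weight(g, p)) * dg)
print(round(numeric_erb, 1), round(erb_from_p(f0, p), 1))
```

A shallower filter skirt (smaller p) yields a proportionally larger ERB, which is how broadened auditory filters in hearing-impaired listeners show up in notched-noise fits.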

13.
PURPOSE: To determine if listeners with normal hearing and listeners with sensorineural hearing loss give different perceptual weightings to cues for stop consonant place of articulation in noise versus reverberation listening conditions. METHOD: Nine listeners with normal hearing (23-28 years of age) and 10 listeners with sensorineural hearing loss (31-79 years of age, median 66 years) participated. The listeners were asked to label the consonantal portion of synthetic CV stimuli as either /p/ or /t/. Two cues were varied: (a) the amplitude of the spectral peak in the F4/F5 frequency region of the burst was varied across a 30-dB range relative to the adjacent vowel peak amplitude in the same frequency region; (b) F2/F3 formant transition onset frequencies were either appropriate for /p/ or /t/, or neutral for the labial/alveolar contrast. RESULTS: Weightings of relative amplitude and transition cues for voiceless stop consonants depended on the listening condition (quiet, noise, or reverberation), hearing loss, and age of listener. The effects of age with hearing loss reduced the perceptual integration of cues, particularly in reverberation. The effects of hearing loss reduced the effectiveness of both cues, notably relative amplitude in reverberation. CONCLUSIONS: Reverberation and noise conditions have different perceptual effects. Hearing loss and age may have different, separable effects.

14.
Ricketts T 《Ear and hearing》2000,21(4):318-328
OBJECTIVE: To evaluate the impact of head turn and monaural and binaural fittings on the sentence reception thresholds of hearing-impaired listeners wearing directional and omnidirectional hearing aids. DESIGN: Sentence reception thresholds were measured for 20 listeners fit monaurally and binaurally with behind-the-ear hearing aids set in both directional and omnidirectional modes. All listeners exhibited symmetrical, sloping, sensorineural hearing loss. The aided performance across these four fittings was evaluated for three different head and body angles. The three angles reflected body turns of 0 degrees, 15 degrees, and 30 degrees as measured relative to the primary sound source, with 0 degrees denoting the listener directly facing the sound source. Listeners were instructed to keep their heads in a fixed horizontal position and turn their heads and bodies to face visual targets at the three test angles. Sentences from the Hearing in Noise Test presented with a background of five, spatially separated, uncorrelated samples of cafeteria noise served as test material. All testing was performed in a moderately reverberant (Rt = 631 msec) "living room" environment. RESULTS: Participants generally performed significantly better when fit with directional versus omnidirectional hearing aids, and when fit binaurally versus monaurally across test conditions. The measured "binaural advantage" was reduced with increasing head angle. Participants performed significantly better with a 30 degree head angle than when directly facing the primary speaker. This "head turn advantage" was most prominent for monaural (versus binaural) conditions. Binaural and head turn advantages were not significantly different across directional and omnidirectional modes. CONCLUSIONS: These data provide additional support for the use of directional hearing aids and binaural amplification to improve speech intelligibility in noisy environments. 
The magnitude of these advantages was similar to that reported in previous investigations. The data also showed that hearing aid wearers achieved significantly better speech intelligibility in noise by turning their heads and bodies to a position in which they were not directly facing the sound source. This head turn advantage was in good agreement with the increase in Directivity Index with head turn and reflected the fact that hearing aids are generally most sensitive to sounds arriving from angles other than directly in front of the hearing aid wearer. Although these data suggest that many monaural hearing aid wearers may significantly improve speech intelligibility in noise through the use of head turn, the interaction between this advantage and the potential loss of visual cues with head turn is unknown.

15.
The present study was designed to examine speech recognition in patients with sensorineural hearing loss when the temporal and spectral information in the speech signals were co-varied. Four subjects with mild to moderate sensorineural hearing loss were recruited to participate in consonant and vowel recognition tests that used speech stimuli processed through a noise-excited vocoder. The number of channels was varied between 2 and 32, which defined spectral information. The lowpass cutoff frequency of the temporal envelope extractor was varied from 1 to 512 Hz, which defined temporal information. Results indicate that performance of subjects with sensorineural hearing loss varied tremendously across subjects. For consonant recognition, patterns of relative contributions of spectral and temporal information were similar to those in normal-hearing subjects. The utility of temporal envelope information appeared to be normal in the hearing-impaired listeners. For vowel recognition, which depended predominantly on spectral information, the performance plateau was achieved with numbers of channels as high as 16-24, much higher than expected, given that the frequency selectivity in patients with sensorineural hearing loss might be compromised. To understand how hearing-impaired listeners utilize spectral and temporal cues for speech recognition, future studies involving a large sample of patients with sensorineural hearing loss will be necessary to elucidate the relationship between frequency selectivity, as well as central processing capability, and speech recognition performance using vocoded signals.

16.
It is known that many listeners with sensorineural hearing loss (SNHL) have difficulty performing binaural tasks. In this study, interference and enhancement effects on interaural time discrimination and level discrimination were investigated in 4 listeners with normal hearing (NH) and 7 listeners with SNHL. Just-noticeable differences were measured using 1/3-octave narrowband noises centered at 0.5 and 4 kHz. Noises were presented in isolation and together at equivalent sound pressure level (EqSPL) and equivalent sensation level (EqSL). Each noise served as target and distractor in the dual-band conditions. Congruent conditions included interaural differences in both noises that varied together, and incongruent conditions included an interaural difference in one noise with the second noise diotic. No significant enhancement effects were observed for either group in either task. Interference effects for the NH group were limited to the interaural level discrimination task in the 0.5-kHz target and 4-kHz distractor condition. Performance of participants with SNHL was similar to that of the NH group for interaural time discrimination with noises at EqSL but not EqSPL. In interaural level discrimination, listeners with SNHL demonstrated interference with a 4-kHz target and 0.5-kHz distractor. Results indicated that the relative levels of low- and high-frequency targets and distractors could affect binaural performance of individuals with SNHL but that in some conditions listeners with SNHL performed similarly to those with normal hearing. Implications of these results for binaural clinical tests and hearing aid fitting strategies are discussed.

17.
The purpose of the present study was to determine if most comfortable listening level (MCL) presentation of word discrimination material would yield maximum discrimination for normal and sensorineural hearing loss subjects. The results of the study indicated that normally hearing listeners do obtain maximum discrimination at MCL, while presentation of word discrimination material at MCL to sensorineural hearing loss listeners does not yield maximum discrimination. Further examination of the results would also contraindicate the use of any other method which incorporates single or dual presentation levels as a means of determining maximum discrimination for the sensorineural hearing loss listener.

18.
The present paper describes a clinical test for the assessment of speech perception in noise. The test was designed to separate the effects of several relevant monaural and binaural cues. Results show that the performance of individual hearing-impaired listeners deviates significantly from normal for at least 2 of the following aspects: (1) perception of speech in steady-state noise; (2) relative binaural advantage due to directional cues; (3) relative advantage due to masker fluctuations. In contrast, both the hearing loss for reverberated speech and the relative binaural advantage due to interaural signal decorrelation, caused by reverberation, were essentially normal for almost all hearing-impaired listeners.
