Similar Documents
20 similar documents retrieved.
1.
This study sought to determine whether a high-pass hearing aid can provide greater improvement in word recognition and consonant discrimination than a conventional high-frequency-emphasis hearing aid in listeners with hearing loss limited to frequencies above 1000 Hz. Word and consonant discrimination were assessed in quiet and in the presence of 12-talker speech babble for 10 subjects under three listening conditions: (1) unaided; (2) wearing a conventional high-frequency-emphasis hearing aid; and (3) wearing an experimental high-pass instrument. The speech testing materials included: (1) Northwestern University Auditory Test No. 6; (2) the California Consonant Test; and (3) eight voiceless English consonants. Results suggested that both instruments provided similar benefit in quiet for improving word recognition and resolving consonant errors. In noise, however, the experimental high-pass aid provided a considerable advantage in both word recognition and consonant identification and was particularly effective at reducing within- and between-manner voiceless consonant confusions. Furthermore, measurements of real-ear gain revealed that the high-pass aid afforded considerably greater acoustic gain above 4000 Hz than the conventional high-frequency-emphasis hearing aid.

In 10 subjects with hearing loss at frequencies above 1 kHz, we sought to establish whether an experimental high-pass hearing aid can improve word and consonant discrimination, in quiet and in the presence of 12-talker babble, more than a conventional high-frequency-emphasis aid whose gain rolls off above 4 kHz. The two instruments proved equivalent for discrimination in quiet; in noise, however, the experimental instrument proved clearly more effective.

2.
This study evaluated prototype multichannel nonlinear frequency compression (NFC) signal processing in listeners with high-frequency hearing loss. This signal processor applies NFC above a cut-off frequency. The participants were 13 hearing-impaired adults and 11 children with sloping, high-frequency hearing loss. Multiple outcome measures were repeated using a modified withdrawal design, including speech sound detection, speech recognition, and self-reported preference measures. Group-level results provide evidence of significant improvement in consonant and plural recognition when NFC was enabled. Vowel recognition did not change significantly. Analysis of individual results allowed exploration of the individual factors contributing to the benefit received from NFC processing. Findings suggest that NFC processing can improve high-frequency speech detection and speech recognition ability for adult and child listeners. Variability in individual outcomes related to factors such as degree and configuration of hearing loss, age of participant, and type of outcome measure.

3.
Acta Oto-Laryngologica, 2012, 132(6): 630–637
Conclusion: Cochlear implant (CI) recipients’ performance of lexical tone identification and consonant recognition can be enhanced by providing greater spectral details. Objective: To evaluate the effects of increasing the number of total spectral channels on the lexical tone identification and consonant recognition by normally hearing listeners who are native speakers of Mandarin Chinese. Subjects and methods: Lexical tone identification and consonant recognition were measured in 15 Mandarin-speaking, normal-hearing (NH) listeners with varied numbers of total spectral channels (i.e. 4, 6, 8, 10, 12, 16, 20, and 24), using acoustic simulations of CIs. Results: The group of NH listeners’ performance of lexical tone identification ranged from 44.53% to 66.60% with 4–24 spectral channels. The performance of tone identification between channels 4 and 16 remained similar; between channels 16 and 20 performance improved significantly. As regards consonant recognition, the NH listeners’ overall accuracy ranged from 73.17% to 95.33% with 4–24 channels. Steady improvement in consonant recognition accuracy was observed as a function of increasing the spectral channels. With about 12–16 spectral channels, the NH listeners’ overall accuracy in consonant recognition began to be comparable to their accuracy with the unprocessed stimuli.

4.
Abstract

Objective: Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Design: Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Study sample: Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Results: Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners’ errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating the vowel contrasts /i–ɪ/, /æ–ɛ/, and /ɑ–ʌ/, the word-initial consonant contrasts /p–h/ and /b–f/, and the word-final contrast /f–v/. Conclusions: Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

5.
The present study was designed to examine speech recognition in patients with sensorineural hearing loss when the temporal and spectral information in the speech signals were co-varied. Four subjects with mild to moderate sensorineural hearing loss were recruited to participate in consonant and vowel recognition tests that used speech stimuli processed through a noise-excited vocoder. The number of channels was varied between 2 and 32, which defined the spectral information. The lowpass cutoff frequency of the temporal envelope extractor was varied from 1 to 512 Hz, which defined the temporal information. Results indicate that performance varied tremendously across the subjects with sensorineural hearing loss. For consonant recognition, patterns of relative contributions of spectral and temporal information were similar to those in normal-hearing subjects. The utility of temporal envelope information appeared to be normal in the hearing-impaired listeners. For vowel recognition, which depended predominantly on spectral information, the performance plateau was reached with as many as 16–24 channels, much higher than expected given that frequency selectivity in patients with sensorineural hearing loss might be compromised. To understand how hearing-impaired listeners utilize spectral and temporal cues for speech recognition, future studies involving a large sample of patients with sensorineural hearing loss will be necessary to elucidate the relationship between frequency selectivity, central processing capability, and speech recognition performance using vocoded signals.

6.
7.
Objective: This study evaluated the diagnostic capabilities of an adaptive speech recognition protocol (NSRT®) that can be self-administered in non-clinical venues by listeners using internet-based software. Design: All participants were given an audiological evaluation, including pure-tone testing, and responded to the NSRT administered in quiet and +5 dB SNR listening conditions. The NSRT test materials are sentence-length utterances containing phonetic contrasts, primarily minimal pairs. Study sample: Subjects were 123 adults with normal hearing to moderately severe sensorineural hearing loss (mean age = 55 years, SD = 23). Results: Performance on the NSRT is strongly related to pure-tone thresholds. Linear regression analyses support the utility of the NSRT as a proxy for clinically obtained hearing thresholds across the octave frequencies 0.5 to 8 kHz, primarily for individuals in the −10 to 55 dB HL range. Other NSRT results are linked to analyses of phonetic errors and components of aural rehabilitation. Conclusions: The NSRT yields quantitative predictions of frequency-specific hearing thresholds, provides insight into the phonetic errors that affect speech understanding in adults with sensorineural hearing loss, primarily in the −10 to 55 dB HL range, and has implications for the design of individualized auditory training programs.

8.
Objective: Otoacoustic emissions (OAEs) can provide useful measures of tuning of auditory filters. We previously established that stimulus-frequency (SF) OAE suppression tuning curves (STCs) reflect major features of behavioral tuning (psychophysical tuning curves, PTCs) in normally-hearing listeners. Here, we aim to evaluate whether SFOAE STCs reflect changes in PTC tuning in cases of abnormal hearing. Design: PTCs and SFOAE STCs were obtained at 1 kHz and/or 4 kHz probe frequencies. For exploratory purposes, we collected SFOAEs measured across a wide frequency range and contrasted them to commonly measured distortion product (DP) OAEs. Study sample: Thirteen listeners with varying degrees of sensorineural hearing loss. Results: Except for a few listeners with the most hearing loss, the listeners had normal/nearly normal PTCs. However, attempts to record SFOAE STCs in hearing-impaired listeners were challenging and sometimes unsuccessful due to the high level of noise at the SFOAE frequency, which is not a factor for DPOAEs. In cases of successful measurements of SFOAE STCs there was a large variability in agreement between SFOAE STC and PTC tuning. Conclusions: These results indicate that SFOAE STCs cannot substitute for PTCs in cases of abnormal hearing, at least with the paradigm adopted in this study.

9.
PURPOSE: To determine if listeners with normal hearing and listeners with sensorineural hearing loss give different perceptual weightings to cues for stop consonant place of articulation in noise versus reverberation listening conditions. METHOD: Nine listeners with normal hearing (23–28 years of age) and 10 listeners with sensorineural hearing loss (31–79 years of age, median 66 years) participated. The listeners were asked to label the consonantal portion of synthetic CV stimuli as either /p/ or /t/. Two cues were varied: (a) the amplitude of the spectral peak in the F4/F5 frequency region of the burst was varied across a 30-dB range relative to the adjacent vowel peak amplitude in the same frequency region; and (b) F2/F3 formant transition onset frequencies were either appropriate for /p/ or /t/, or neutral for the labial/alveolar contrast. RESULTS: Weightings of relative amplitude and transition cues for voiceless stop consonants depended on the listening condition (quiet, noise, or reverberation), hearing loss, and age of the listener. Age combined with hearing loss reduced the perceptual integration of cues, particularly in reverberation. Hearing loss reduced the effectiveness of both cues, notably relative amplitude in reverberation. CONCLUSIONS: Reverberation and noise conditions have different perceptual effects. Hearing loss and age may have different, separable effects.

10.
Objective: To explore the effect of nonlinear frequency compression (NLFC) hearing aids on speech recognition in noise for native Mandarin Chinese speakers with hearing loss, as a reference for hearing-aid fitting and outcome evaluation. Methods: Twenty-five native Mandarin-speaking patients with sensorineural hearing loss, all binaurally aided, were divided into two groups according to NLFC experience: group A (13 patients) had NLFC experience and used NLFC daily, while group B (12 patients) had no NLFC experience and used conventional processing (CP) daily. All patients completed sentence-recognition-in-noise testing under both NLFC and CP conditions, and the results were compared. Results: In group A, speech recognition in noise was 82.33% ± 16.06% with NLFC and 76.70% ± 18.08% with CP, a significant difference (P < 0.01); in group B the scores were 83.04% ± 12.56% and 81.79% ± 20.07%, respectively (P = 0.19), not a significant difference. High-frequency (4, 6, 8 kHz) aided thresholds in group A were 53.54 ± 7.30 dB SPL with NLFC and 57.01 ± 6.81 dB SPL with CP; in group B they were 57.42 ± 8.38 and 61.21 ± 7.42 dB SPL. Aided thresholds were significantly lower with NLFC in both groups (P < 0.01). Regression analysis showed that the difference in high-frequency aided thresholds between the NLFC and CP conditions was linearly correlated with the difference in speech recognition scores (r = 0.63, t = 3.89, P = 0.007). Conclusion: NLFC technology can improve high-frequency audibility and speech recognition in noise, and some experience with NLFC can maximize its benefit.
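The core of NLFC is an input-to-output frequency map that leaves frequencies below a cut-off unchanged and compresses frequencies above it. A minimal sketch, assuming simple linear compression above the cut-off (commercial NLFC implementations typically compress on a log-frequency scale, and the cut-off and ratio here are hypothetical, not the fittings used in the study):

```python
def nlfc_map(f_in_hz, cutoff_hz=2000.0, ratio=2.0):
    """Sketch of a nonlinear frequency compression map.
    Below the cut-off, frequencies pass unchanged; above it, they are
    compressed toward the cut-off by the compression ratio."""
    if f_in_hz <= cutoff_hz:
        return f_in_hz
    return cutoff_hz + (f_in_hz - cutoff_hz) / ratio
```

With these hypothetical settings, energy at 6 kHz would be relocated to 4 kHz, illustrating how high-frequency speech cues (e.g., /s/ frication) are moved into a region where the listener has more residual audibility.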

11.
Abstract

Objective: The purpose of this study was to test the ability to discriminate low-frequency pure-tone stimuli in ears with and without contralateral dead regions, in subjects with bilateral high-frequency hearing loss; we examined associations between hearing loss characteristics and frequency discrimination of low-frequency stimuli in subjects with high-frequency hearing loss. Design: Cochlear dead regions were diagnosed using the TEN-HL test. A frequency discrimination test utilizing an adaptive three-alternative forced-choice method provided difference limens for the reference frequencies 0.25 kHz and 0.5 kHz. Study sample: Among 105 subjects with bilateral high-frequency hearing loss, unilateral dead regions were found in 15 subjects. These, and an additional 15 matched control subjects without dead regions, were included in the study. Results: Ears with dead regions performed best on the frequency discrimination test. Ears with a contralateral dead region performed significantly better than ears without a contralateral dead region at 0.5 kHz, the reference frequency closest to the mean audiogram cut-off, while the opposite result was obtained at 0.25 kHz. Conclusions: The results may be seen as a sign of a contralateral effect of unilateral dead regions on the discrimination of stimuli with frequencies well below the audiogram cut-off in adult subjects with bilateral high-frequency hearing loss.

12.
OBJECTIVE: In contrast to fitting strategies for linear amplification, which have been refined frequently for listeners with different degrees of hearing loss, we know relatively little about the effects of wide dynamic range compression (WDRC) amplification for listeners with severe auditory thresholds. The primary objective of this study was to determine if increases in audibility with WDRC amplification improved speech recognition to a comparable degree for listeners with different degrees of hearing loss. DESIGN: Listeners with mild to moderate or severe sensorineural loss were tested on recognition of vowel-consonant-vowel (VCV) syllables and sentences digitally processed with linear and WDRC amplification. The speech materials were presented under conditions of controlled audibility, in which WDRC amplification improved speech audibility over linear amplification. Presentation levels were chosen to provide equivalent increases in audibility with WDRC amplification for both listener groups. A control condition in which audibility was equated for the two amplification conditions was also included. RESULTS: Recognition results for VCV stimuli indicated that both listener groups received the same benefit from the improved audibility provided by WDRC amplification. Results for sentence recognition showed a greater benefit of WDRC amplification for listeners with mild to moderate than for listeners with severe loss. CONCLUSIONS: Increasing the amount of audible speech information with WDRC has similar effects on consonant recognition for listeners with different degrees of hearing loss. Differences in sentence recognition for listeners with different degrees of loss may be due to processing effects or to differences in available acoustic information for longer segments of WDRC-amplified speech.

13.
OBJECTIVE: Our aim was to explore the consequences for speech understanding of leaving a gap in frequency between a region of acoustic hearing and a region stimulated electrically. Our studies were conducted with normal-hearing listeners, using an acoustic simulation of combined electric and acoustic (EAS) stimulation. DESIGN: Simulations of EAS were created by low-pass filtering speech at 0.5 kHz (90 dB octave roll-off) and adding amplitude-modulated sine waves at higher frequencies. The gap in frequency between acoustic and simulated electric hearing was varied over the range 0.5 kHz to 3.2 kHz. Stimuli included sentences in quiet, sentences in noise, and consonants and vowels. Three experiments were conducted with sample sizes of 12 listeners. RESULTS: Scores were highest in conditions that minimized the frequency gap between acoustic and electric stimulation. In quiet, vowels and consonant place of articulation showed the most sensitivity to the frequency gap. In noise, scores in the simulated EAS condition were higher than the sum of the scores from the acoustic-only and simulated electric-only conditions. CONCLUSIONS: Our results suggest that both deep and shallow insertions of electrodes could improve the speech understanding abilities of patients with residual hearing to 500 Hz. However, performance levels will be maximized if the gap between acoustic and electric stimulation is minimized.
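The EAS simulation described above (low-pass speech modeling residual acoustic hearing, plus amplitude-modulated sine waves modeling the electric channels) can be sketched as follows; the number of "electric" bands, their edges, the envelope cutoff, and the 8th-order filter standing in for a 90 dB/octave roll-off are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def simulate_eas(speech, fs, lp_cutoff=500.0, gap_hz=1000.0,
                 n_electric_bands=4, top_hz=6000.0, env_cutoff=50.0):
    """Acoustic simulation of combined electric-acoustic stimulation.
    `gap_hz` sets where the simulated electric region begins, so the
    frequency gap between the two regions is gap_hz - lp_cutoff."""
    # "Acoustic" part: steep low-pass (high order approximates a sharp roll-off)
    lp = butter(8, lp_cutoff, btype='low', fs=fs, output='sos')
    acoustic = sosfilt(lp, speech)
    # "Electric" part: sine carriers modulated by band envelopes above the gap
    edges = np.geomspace(gap_hz, top_hz, n_electric_bands + 1)
    env_lp = butter(2, env_cutoff, btype='low', fs=fs, output='sos')
    t = np.arange(len(speech)) / fs
    electric = np.zeros(len(speech), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        band = sosfilt(band_sos, speech)
        env = np.maximum(sosfilt(env_lp, np.abs(band)), 0.0)
        fc = np.sqrt(lo * hi)  # sine carrier at the band's geometric centre
        electric += env * np.sin(2 * np.pi * fc * t)
    return acoustic + electric
```

Varying `gap_hz` between 0.5 and 3.2 kHz reproduces the gap manipulation the study reports.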

14.
15.
There is increasing awareness among clinical audiologists of the inability of current speech discrimination tests to provide diagnostically significant data. In fact, the most commonly used test of speech discrimination, the CID auditory test W–22, does not, according to Carhart [1965], separate persons with normal hearing from those with various types and degrees of hearing impairment. Although there has been discussion in the literature of mixing speech with noise to increase the diagnostic value of PB tests, very little data has been reported to date.

In the present study 10 persons with normal hearing, 10 persons with high-frequency hearing loss, and 10 persons with relatively flat hearing loss served as experimental listeners. All listeners yielded PB scores in quiet of 92% or better, i.e. hearing impairment was not reflected in the PB scores obtained in quiet.

CID auditory test W–22 word lists 1 and 2 were presented to these listeners at 40 dB SL or the sensation level necessary for PB max. The words were presented in quiet and also in the presence of white noise. Three signal-to-noise (S/N) ratios were used: +8, 0, and −8 dB S/N. As the noise interference level increased, the PB scores of all listeners deteriorated. The PB scores of normal-hearing listeners deteriorated by approximately 52% from quiet to the −8 dB S/N ratio; scores of listeners with high-frequency hearing loss deteriorated by approximately 57%, and those of listeners with flat hearing loss by approximately 67%.

The PB scores of the groups in the −8 dB S/N condition differed significantly at the 1% level of confidence. The data indicate certain directions for further research toward a more useful speech discrimination test.
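Mixing speech with white noise at a fixed S/N ratio, as in the +8, 0, and −8 dB conditions, amounts to scaling the noise against the speech level before summing. A minimal RMS-based sketch (the RMS criterion is an illustrative choice; speech-testing protocols often calibrate levels differently):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise RMS ratio equals
    `snr_db` decibels, then return the mixture."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    # Target noise level: speech RMS reduced by snr_db
    target_noise_rms = rms(speech) / (10.0 ** (snr_db / 20.0))
    scaled_noise = noise * (target_noise_rms / rms(noise))
    return speech + scaled_noise
```

At −8 dB S/N the noise RMS is about 2.5 times the speech RMS, which is why all groups' PB scores collapse in that condition.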

16.
OBJECTIVE: Experiments were conducted to examine the effects of lexical information on word recognition among normal hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified; "neighborhood density" or the number of phonemically similar words (neighbors) for a particular target item and "neighborhood frequency" or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency" or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high frequency over a low frequency word. DESIGN: Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors, word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large, on-line lexicon-based Webster's Pocket Dictionary. From this program 400 highly familiar, monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). 
The 400 words were presented randomly to normal hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2) as well as to an elderly group of listeners with sensorineural hearing loss in the speech-shaped noise (Experiment 3). RESULTS: The results of three experiments verified predictions of NAM in both normal hearing and hearing-impaired listeners. In each experiment, words from low density neighborhoods were recognized more accurately than those from high density neighborhoods. The presence of high frequency neighbors (average neighborhood frequency) produced poorer recognition performance than comparable conditions with low frequency neighbors. Word frequency was found to have a highly significant effect on word recognition. Lexical conditions with high word frequencies produced higher performance scores than conditions with low frequency words. CONCLUSION: The results supported the basic tenets of NAM theory and identified both neighborhood structural properties and word frequency as significant lexical factors affecting word recognition when listening in noise and "in quiet." The results of the third experiment permit extension of NAM theory to individuals with sensorineural hearing loss. Future development of speech recognition tests should allow for the effects of higher level cognitive (lexical) factors on lower level phonemic processing.

17.
There is a subgroup of elderly listeners with hearing loss who can be characterized by exceptionally poor speech understanding. This study examined the hypothesis that the poor speech-understanding performance of some elderly listeners is associated with disproportionate deficits in temporal resolution and frequency resolution, especially for complex signals. Temporal resolution, as measured by gap detection, and frequency resolution, as measured by the critical ratio, were examined in older listeners with normal hearing, older listeners with hearing loss and good speech-recognition performance, and older listeners with hearing loss and poor speech-recognition performance. Listener performance was evaluated for simple and complex stimuli and for tasks of added complexity. In addition, syllable recognition was assessed in quiet and noise. The principal findings were that older listeners with hearing loss and poor word-recognition performance did not perform differently from older listeners with hearing loss and good word recognition on either the temporal or the spectral resolution measures for relatively simple stimuli. However, frequency resolution was compromised for listeners with poor word-recognition abilities when targets were presented in the context of complex signals. Group differences observed for syllable recognition in quiet were eliminated in the noise condition. Taken together, the findings support the hypothesis that unusual deficits in word-recognition performance among elderly listeners were associated with poor spectral resolution for complex signals.

18.
Objective: To examine the impact of visual cues, speech materials, age, and listening condition on the frequency bandwidth necessary for optimizing speech recognition performance. Design: Using a randomized repeated-measures design, speech recognition performance was assessed with four speech perception tests presented in quiet and in noise under 13 LP filter conditions and in multiple modalities. Participants’ performance data were fitted with a Boltzmann function to determine optimal performance (10% below the performance achieved in FBW). Study sample: Thirty adults (18–63 years) and thirty children (7–12 years) with normal hearing. Results: Visual cues significantly reduced the bandwidth required for optimizing speech recognition performance. The type of speech material also significantly affected the required bandwidth. Both groups required significantly less bandwidth in quiet, although children required significantly more than adults. The widest bandwidth was required for the phoneme detection task in noise, where children required 7399 Hz and adults 6674 Hz. Conclusions: Listeners require significantly less bandwidth for optimizing speech recognition performance when assessed using sentence materials with visual cues; that is, the required bandwidth systematically decreased as a function of increased contextual, linguistic, and visual content.

19.
Objective: The fundamental frequency modulation (F0mod) sound processing strategy was developed to improve pitch perception with cochlear implants. In previous work it has been shown to improve performance in a number of pitch-related tasks such as pitch ranking, familiar melody identification, and Mandarin Chinese tone identification. The objective of the current study was to compare speech perception with F0mod and the standard clinical advanced combination encoder (ACE) strategy. Study sample: Seven cochlear-implant listeners were recruited from the clinical population of the University Hospital Leuven. Design: F0mod was implemented on a real-time system. Speech recognition in quiet and noise was measured for the seven cochlear-implant listeners, comparing F0mod with ACE, using three different Dutch-language speech materials. Additionally, the F0 estimator was evaluated physically, and pitch-ranking performance was compared between F0mod and ACE. Results: Immediately after switch-on of the F0mod strategy, speech recognition in quiet and noise was similar for ACE and F0mod for four of the seven listeners. The remaining three listeners completed a short training protocol with F0mod, after which their performance was reassessed and a significant improvement was found. Conclusions: Since F0mod improves pitch perception, did not interfere with speech recognition in quiet or noise for the seven subjects tested, and has low computational complexity, it seems promising for implementation in a clinical sound processor.

20.
Objective: To determine speech perception in quiet and noise of adult cochlear implant listeners retaining a hearing aid contralaterally. Second, to investigate the influence of contralateral hearing thresholds and speech perception on bimodal hearing.

Patients and methods: Sentence recognition with hearing aid alone, cochlear implant alone, and both devices together (bimodal) was assessed at 6 months after cochlear implantation in 148 postlingually deafened adults. Data were analyzed for bimodal summation using measures of speech perception in quiet and in noise.

Results: Most of the subjects showed improved sentence recognition in quiet and in noise in the bimodal condition compared to the hearing aid-only or cochlear implant-only mode. The large variability of bimodal benefit in quiet can be partially explained by the degree of pure-tone loss. Subjects with better hearing on the acoustic side also experienced significant benefit from the additional electrical input.

Conclusions: Bimodal summation shows different characteristics in quiet and noise. Bimodal benefit in quiet depends on hearing thresholds at higher frequencies as well as in the lower- and middle-frequency ranges. For the bimodal benefit in noise, no correlation with hearing threshold in any frequency range was found.

