Similar articles
20 similar articles retrieved (search time: 15 ms)
1.
This study investigates whether temporally jittered stimuli will produce performance-intensity, phonetically balanced (PI-PB) rollover in young adults with normal hearing. Although not yet explicitly stated in the literature, there is clinical and theoretical evidence to suggest that PI-PB rollover, such as that found in cases of acoustic neuroma, is caused by neural dyssynchrony in the auditory system. Sixteen participants were tested with intact and temporally jittered word lists in quiet at 40, 55, and 65 dB HL, and at 5 dB below the uncomfortable listening level. The results show significant rollover in the jittered but not the intact conditions. The results are consistent with the existing evidence that suggests that neural PI-PB rollover is caused by decreased neural synchrony and support the claim that temporal jitter simulates neural dyssynchrony. Furthermore, these results are consistent with the hypothesis that synchrony coding plays an important role in the perception of high-level speech.
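Temporal jitter of the kind used in this study can be sketched in a few lines of numpy: the waveform is resampled on a randomly perturbed time base, a crude stand-in for reduced neural synchrony. This is an illustrative sketch under our own assumptions (the study's actual jitter algorithm and parameters are not given here; `temporal_jitter` and `max_jitter_ms` are hypothetical names):

```python
import numpy as np

def temporal_jitter(signal, fs, max_jitter_ms=0.5, seed=0):
    """Resample `signal` on a randomly perturbed time base.

    The raw per-sample offsets are smoothed so neighboring samples move
    together; the result approximates a temporally dyssynchronous signal.
    """
    rng = np.random.default_rng(seed)
    n = len(signal)
    t = np.arange(n) / fs
    raw = rng.uniform(-1.0, 1.0, n)
    kernel = np.hanning(64)
    kernel /= kernel.sum()
    offsets = np.convolve(raw, kernel, mode="same") * (max_jitter_ms / 1000.0)
    jittered_t = np.clip(t + offsets, 0.0, t[-1])
    return np.interp(jittered_t, t, signal)

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s, 440 Hz probe
jittered = temporal_jitter(tone, fs)
```

Jittering of this sort leaves the long-term spectrum roughly intact while disrupting fine timing, which is the property the rollover hypothesis turns on.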

2.
Temporal effects in simultaneous masking were studied by measuring the reduction in the amount of masking produced by a gated masker when that masker was preceded by a 400-ms noise (the precursor) that was usually spectrally identical to the masker. The signal frequency (fs) was 1.0 or 4.0 kHz. Experiment 1 revealed a temporal effect only when there was a spectral notch (centered at fs) in the masker and precursor. For a relative notchwidth of 0.4 fs, the temporal effect was larger at 4.0 than at 1.0 kHz. In experiment 2, where the masker and precursor both consisted of two bands of noise separated by a spectral notch of 0.4 fs, the size of the temporal effect remained essentially constant as the bandwidth of these noise bands increased from 0.2 to 0.8 kHz. The results from experiment 3 indicated that the temporal effect was largest when the level of the precursor was equal to the level of the masker. Finally, the results from experiment 4 suggested that the temporal effect may depend upon the frequency region below as well as above fs, but that the frequency region above fs is probably more important.

3.
Older adults often find it more difficult than younger adults to attend to a target talker when there are other people talking. One possible reason for this difficulty is that it may take them longer to perceptually segregate the target speech from competing speech. This study investigated age-related differences in the time it takes to segregate target speech from either a speech-spectrum noise masker or a babble masker (many people talking simultaneously). Specifically, we employed five different delays (0.1 s to 1.1 s) between masker onset and target speech onset. Four signal-to-masker ratios were employed at each delay to determine the 50% thresholds for word recognition accuracy when target words were masked by either speech-spectrum noise or multi-talker babble. Thresholds for word recognition decreased exponentially as a function of the masker-word-onset delay, at the same rate for younger and older adults, when the masker was speech-spectrum noise. When the masker was babble, thresholds for younger adults decreased exponentially with delay at the same rate as they did when the masker was speech-spectrum noise. The word recognition thresholds for older adults, however, did not appear to change over the range of delays explored in this study. In addition, the average difference between word recognition thresholds for younger and older adults (younger adult thresholds …

4.
OBJECTIVES: The main purpose of the study was to assess the ability of adults with bilateral cochlear implants to localize noise and speech signals in the horizontal plane. A second objective was to measure the change in localization performance in these adults between approximately 5 and 15 mo after activation. A third objective was to evaluate the relative roles of interaural level difference (ILD) and interaural temporal difference (ITD) cues in localization by these subjects. DESIGN: Twenty-two adults, all postlingually deafened and all bilaterally fitted with MED-EL COMBI 40+ cochlear implants, were tested in a modified source identification task. Subjects were tested individually in an anechoic chamber, which contained an array of 43 numbered loudspeakers extending from -90 degrees to +90 degrees azimuth. On each trial, a 200-msec signal (either a noise burst or a speech sample) was presented from one of 17 active loudspeakers (span: +/-80 degrees), and the subject had to identify which source from the 43 loudspeakers in the array produced the signal. Subjects were tested in three conditions: left device only active, right device only active, and both devices active. Twelve of the 22 subjects were retested approximately 10 mo after their first test. In Experiment 2, the spectral content and rise-decay time of the noise stimulus were manipulated. RESULTS: The relationship between source azimuth and response azimuth was characterized in terms of the adjusted constant error (C). (1) With both devices active, C for the noise stimulus varied from 8.1 degrees to 43.4 degrees (mean: 24.1 degrees). By comparison, C for a group of listeners with normal hearing ranged from 3.5 degrees to 7.8 degrees (mean: 5.6 degrees). When subjects listened in unilateral mode (with one device turned off), C was at or near chance (50.5 degrees) in all cases. However, when considering unilateral performance on each subject's better side, average C for the speech stimulus was 47.9 degrees, which was significantly (but only slightly) better than chance. (2) When listening bilaterally, error score was significantly lower for the speech stimulus (mean C = 21.5 degrees) than for the noise stimulus (mean C = 24.1 degrees). (3) As a group, the 12 subjects who were retested 10 mo after their first visit showed no significant improvement in localization performance during the intervening time. However, two subjects who performed very poorly during their first visit showed dramatic improvement (error scores were halved) over the intervening time. In Experiment 2, removing the high-frequency content of noise signals resulted in significantly poorer performance, but removing the low-frequency content or increasing the rise-decay time did not have an effect. CONCLUSIONS: In agreement with previously reported data, subjects with bilateral cochlear implants localized sounds in the horizontal plane remarkably well when using both of their devices, but they generally could not localize sounds when either device was deactivated. They could localize the speech signal with slightly but significantly better accuracy than the noise, possibly due to spectral differences in the signals, to the availability of envelope ITD cues with the speech but not the noise signal, or to more central factors related to the social salience of speech signals. For most subjects the remarkable ability to localize sounds had stabilized by 5 mo after activation. However, for some subjects who perform poorly initially, there can be substantial improvement past 5 mo. Results from Experiment 2 suggest that ILD cues underlie localization ability for noise signals, and that ITD cues do not contribute.

5.
Nie K, Barco A, Zeng FG. Ear and Hearing. 2006;27(2):208-217.
OBJECTIVE: Taking advantage of the flexibility in the number of stimulating electrodes and the stimulation rate in a modern cochlear implant, the present study evaluated relative contributions of spectral and temporal cues to cochlear implant speech perception. DESIGN: Four experiments were conducted by using a Research Interface Box in five MED-EL COMBI 40+ cochlear implant users. Experiment 1 varied the number of electrodes from four to twelve or the maximal number of available active electrodes while keeping a constant stimulation rate at 1000 Hz per electrode. Experiment 2 varied the stimulation rate from 1000 to 4000 Hz per electrode on four pairs of fixed electrodes. Experiment 3 covaried the number of stimulating electrodes and the stimulation rate to study the trade-off between spectral and temporal cues. Experiment 4 studied the effects of envelope extraction on speech perception and listening preference, including half-wave rectification, full-wave rectification, and the Hilbert transform. Vowels, consonants, and HINT sentences, presented in quiet as well as with a competing female voice, served as test materials. RESULTS: Experiment 1 found significant improvement in all speech tests with a higher number of stimulating electrodes. Experiment 2 found a significant advantage of the high stimulation rate only on consonant recognition and sentence recognition in noise. Experiment 3 found an almost linear trade-off between the number of stimulating electrodes and the stimulation rate for consonant and sentence recognition in quiet, but not for vowel and sentence recognition in noise. Experiment 4 found significantly better performance with the Hilbert transform and the full-wave rectification than the half-wave rectification. In addition, envelope extraction with the Hilbert transform produced the highest rating on subjective judgment of sound quality.
CONCLUSIONS: Consistent with previous studies, the present results from the five MED-EL subjects showed that (1) the temporal envelope cues from a limited number of channels are sufficient to support high levels of phoneme and sentence recognition in quiet but not for speech recognition in a competing voice, (2) consonant recognition relies more on temporal cues while vowel recognition relies more on spectral cues, (3) spectral and temporal cues can be traded to some degree to produce similar performance in cochlear implant speech recognition, and (4) the Hilbert envelope improves both speech intelligibility and quality in cochlear implants.
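The three envelope extractors compared in Experiment 4 are easy to contrast in plain numpy (using an FFT-based analytic signal in place of a library Hilbert routine). This is only a sketch, not the implant's processing chain: in a real strategy these operate per channel and are followed by lowpass smoothing, and the function names and test signal below are ours, not the study's:

```python
import numpy as np

def analytic_envelope(x):
    """Hilbert envelope via the FFT-based analytic signal (numpy only)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def half_wave(x):
    """Half-wave rectification: keep positive excursions only."""
    return np.maximum(x, 0.0)

def full_wave(x):
    """Full-wave rectification: absolute value."""
    return np.abs(x)

# Amplitude-modulated tone: 500 Hz carrier, 4 Hz modulator
fs = 8000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 500 * t)
am = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) / 1.5
x = am * carrier

env = analytic_envelope(x)
```

For this test signal the Hilbert envelope recovers the modulator almost exactly, while half-wave and full-wave rectification leave carrier ripple that must be filtered off, one reason the Hilbert variant can yield a cleaner envelope.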

6.
Individuals with auditory neuropathy (AN) often suffer from temporal processing deficits causing speech perception difficulties. In the present study an envelope enhancement scheme that incorporated envelope expansion was used to reduce the effects of temporal deficits. The study involved two experiments. In the first experiment, to simulate the effects of reduced temporal resolution, temporally smeared speech stimuli were presented to listeners with normal hearing. The results revealed that temporal smearing of the speech signal reduced identification scores. When the envelope of the speech signal was enhanced prior to temporal smearing, identification scores improved significantly compared to the temporally smeared condition. The second experiment assessed speech perception in twelve individuals with AN, using unprocessed and envelope-enhanced speech signals. The results revealed improvement in speech identification scores for the majority of individuals with AN when the envelope of the speech signal was enhanced. However, envelope enhancement was not able to improve speech identification scores for individuals with AN who had very poor unprocessed speech scores. Overall, the results of the present study suggest that applying envelope enhancement strategies in hearing aids might provide some benefit to many individuals with AN.
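Envelope expansion of the sort described above can be sketched by raising the normalized Hilbert envelope to a power greater than one and re-imposing it on the carrier, which deepens envelope dips and sharpens contrasts. The study's actual enhancement scheme is not specified here; `expand_envelope` and its `power` parameter are illustrative assumptions:

```python
import numpy as np

def expand_envelope(x, power=2.0):
    """Sharpen envelope contrasts by raising the normalized Hilbert
    envelope to `power` and re-imposing it on the carrier phase."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)
    carrier = np.cos(np.angle(analytic))
    env_exp = (env / env.max()) ** power * env.max()
    return env_exp * carrier

# Test signal: 500 Hz tone whose envelope alternates between 1.0 and 0.2
fs = 8000
t = np.arange(fs) / fs
x = (0.2 + 0.8 * (np.sin(2 * np.pi * 3 * t) > 0)) * np.sin(2 * np.pi * 500 * t)
y = expand_envelope(x)
```

With `power=2` the low-envelope portions are attenuated far more than the peaks, so the on/off contrast of the envelope is exaggerated.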

7.
Time-compressed visual speech and age: a first report
OBJECTIVE: The purpose of this research was to investigate the effects of age on the ability to identify temporally altered visual speech signals. DESIGN: Two groups of adult lipreaders, older (N = 20) and younger (N = 15), were tested on perception of visual-only speech signals. Identification performance was measured for time-compressed, time-expanded, and unaltered versions of words presented as visual-only speech. RESULTS: An overall reduction in lipreading ability was observed as a function of age. However, in contrast to results with time-altered auditory speech, older adults did not show a disproportionate change in performance for speeded or slowed visual speech. CONCLUSIONS: The absence of age effects in the identification of temporally altered visual speech signals stands in contrast to the considerable evidence that older adults are disproportionately affected by temporal alterations of auditory speech signals. These results argue against a generalized slowing of information processing in older adults and instead point to modality-specific changes in temporal processing abilities.

8.
A two-part study examined recognition of speech produced in quiet and in noise by normal hearing adults. In Part I 5 women produced 50 sentences consisting of an ambiguous carrier phrase followed by a unique target word. These sentences were spoken in three environments: quiet, wide band noise (WBN), and meaningful multi-talker babble (MMB). The WBN and MMB competitors were presented through insert earphones at 80 dB SPL. For each talker, the mean vocal level, long-term average speech spectra, and mean word duration were calculated for the 50 target words produced in each speaking environment. Compared to quiet, the vocal levels produced in WBN and MMB increased an average of 14.5 dB. The increase in vocal level was characterized by increased spectral energy in the high frequencies. Word duration also increased an average of 77 ms in WBN and MMB relative to the quiet condition. In Part II, the sentences produced by one of the 5 talkers were presented to 30 adults in the presence of multi-talker babble under two conditions. Recognition was evaluated for each condition. In the first condition, the sentences produced in quiet and in noise were presented at equal signal-to-noise ratios (SNR(E)). This served to remove the vocal level differences between the speech samples. In the second condition, the vocal level differences were preserved (SNR(P)). For the SNR(E) condition, recognition of the speech produced in WBN and MMB was on average 15% higher than that for the speech produced in quiet. For the SNR(P) condition, recognition increased an average of 69% for these same speech samples relative to speech produced in quiet. In general, correlational analyses failed to show a direct relation between the acoustic properties measured in Part I and the recognition measures in Part II.

9.
OBJECTIVE: To determine if subjects who used different cochlear implant devices and who were matched on consonant-vowel-consonant (CNC) identification in quiet would show differences in performance on speech-based tests of spectral and temporal resolution, speech understanding in noise, or speech understanding at low sound levels. DESIGN: The performance of 15 subjects fit with the CII Bionic Ear System (CII Bionic Ear behind-the-ear speech processor with the Hi-Resolution sound processing strategy; Advanced Bionics Corporation) was compared with the performance of 15 subjects fit with the Nucleus 24 electrode array and ESPrit 3G behind-the-ear speech processor with the advanced combination encoder speech coding strategy (Cochlear Corporation). SUBJECTS: Thirty adults with late-onset deafness and above-average speech perception abilities who used cochlear implants. MAIN OUTCOME MEASURES: Vowel recognition, consonant recognition, sentences in quiet (74, 64, and 54 dB SPL [sound pressure level]) and in noise (+10 and +5 dB SNR [signal-to-noise ratio]), voice discrimination, and melody recognition. RESULTS: Group differences in performance were significant in 4 conditions: vowel identification, difficult sentence material at +5 dB and +10 dB SNR, and a measure that quantified performance in noise and at low input levels relative to performance in quiet. CONCLUSIONS: We have identified tasks on which there are between-group differences in performance for subjects matched on CNC word scores in quiet. We suspect that the differences in performance are due to differences in signal processing. Our next goal is to uncover the signal processing attributes of the speech processors that are responsible for the differences in performance.

10.
The purposes of this study were to (1) compare medial olivocochlear system (MOCS) functioning and speech perception in noise in young and older adults and (2) quantify the correlation between MOCS functioning and speech perception in noise. Measurements were taken in 20 young (mean 26.3 +/- 2.1 years) and 20 older adults (mean 55.2 +/- 2.8 years). Contralateral distortion product otoacoustic emission (DPOAE) suppression was measured to assess MOCS functioning. Speech perception in noise was evaluated using the Hearing in Noise Test in noise-ipsilateral, noise-front and noise-contralateral test conditions. The results revealed that the older group had significantly lower high-frequency (3-8 kHz) contralateral DPOAE suppression, and performed more poorly in the noise-ipsilateral condition than the younger group. However, there was no correlation between contralateral DPOAE suppression and speech perception in noise. This study suggests that the poor speech perception performance in noise experienced by older adults might be due to a decline in medial olivocochlear functioning, among other factors.

11.
This study was designed to explore the effect on speech comprehension of combining two types of signal distortion. A tape of clearly-enunciated sentences in quiet was distorted in each of four ways: low-pass (LP) filtering, time compression, interruption, and noise masking. Data are reported for a population of normal-hearing young men on multiple-choice answer tests of colloquial sentences that were either LP filtered at 1, 2, 3, or 4 kHz, time compressed by computer at 250 words/min, interrupted (50 msec on--50 msec off), masked by speech-spectrum noise at +2 dB S/N, or given each of the 12 possible combinations of LP filtering plus the other three distortions. Individual distortion conditions were adjusted to reduce speech comprehension performance to about 90% accuracy. Low-pass filtering above 1 kHz reduced comprehension by no more than 5 to 10 percentage points, but when LP filtering was added to the other distortions in turn, latent effects were uncovered. The reduction in comprehension with the combined distortions was much greater than the simple additive effects of the distortion and LP filtering by themselves. For example, LP filtering above 2 kHz produced no measurable effect on sentence comprehension, but this same distortion in combination with noise masking reduced performance from 89.4 to 59.7% correct (where 25% was chance). This study further validates the multiply-compounded nature of simultaneous types of distortion. The use of LP filtering extends the multiplicative principle to the simulated case of high-frequency hearing losses.
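Two of the four distortions, interruption and noise masking, are simple to reproduce. The sketch below gates a signal with a 50 msec on / 50 msec off square wave and adds noise at a chosen S/N; it is a minimal illustration (the original used speech-spectrum noise and recorded sentences, and the helper names are ours):

```python
import numpy as np

def interrupt(signal, fs, on_ms=50, off_ms=50):
    """Gate the signal with a periodic on/off square wave."""
    period = int(fs * (on_ms + off_ms) / 1000)
    on = int(fs * on_ms / 1000)
    gate = (np.arange(len(signal)) % period) < on
    return signal * gate

def add_noise(signal, snr_db, seed=0):
    """Add Gaussian noise scaled to the requested S/N (in dB)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(signal))
    sig_pow = np.mean(signal ** 2)
    noise_pow = np.mean(noise ** 2)
    noise *= np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return signal + noise

fs = 16000
speech = np.sin(2 * np.pi * 300 * np.arange(fs) / fs)  # stand-in for speech
gated = interrupt(speech, fs)                          # 50 ms on / 50 ms off
noisy = add_noise(speech, snr_db=2.0)                  # +2 dB S/N
```

Applying both operations in sequence gives one of the combined-distortion conditions whose comprehension cost, as the abstract notes, is much larger than the sum of the individual effects.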

12.
PURPOSE: Three experiments measured benefit of spatial separation, benefit of binaural listening, and masking-level differences (MLDs) to assess age-related differences in binaural advantage. METHOD: Participants were younger and older adults with normal hearing through 4.0 kHz. Experiment 1 compared spatial benefit with and without head shadow. Sentences were at 0 degrees, and speech-shaped noise was at 0 degrees, 90 degrees, or +/-90 degrees. Experiment 2 measured binaural benefit with the near ear unplugged compared with plugged for sentences at 0 degrees and masker at 90 degrees. Experiment 3 measured MLDs under earphones for 0.5-kHz pure tones in Gaussian and low-noise noise, and spondees in speech-shaped noise. RESULTS: Spatial-separation benefit for speech did not differ significantly for younger and older adults but was smaller than predicted by an audibility-based model for older adults and larger than predicted for younger adults. Binaural listening benefit was observed for younger participants only. Tonal MLDs suggested that listeners benefit from interaural difference cues during noise dips for signals out of phase. Neither tonal nor speech MLDs differed significantly between younger and older participants. CONCLUSION: Binaural processing of sentences revealed some age-related deficits in the use of interaural difference cues, whereas no deficits were observed for more simple detection or recognition tasks.

13.
The effect of phonological neighborhood density on vowel articulation.
Recent literature suggests that phonological neighborhood density and word frequency can affect speech production, in addition to the well-documented effects that they have on speech perception. This article describes 2 experiments that examined how phonological neighborhood density influences the durations and formant frequencies of adults' productions of vowels in real words. In Experiment 1, 10 normal speakers produced words that covaried in phonological neighborhood density and word frequency. Infrequent words with many phonological neighbors were produced with shorter durations and more expanded vowel spaces than frequent words with few phonological neighbors. Results of this experiment confirmed that this effect was not related to the duration of the vowels constituting the high- and low-density words. In Experiment 2, 15 adults produced words that varied in both word frequency and neighborhood density. Neighborhood density affected vowel articulation in both high- and low-frequency words. Moreover, frequent words were produced with more contracted vowel spaces than infrequent words. There was no interaction between these factors, and the vowel duration did not vary as a function of neighborhood density. Taken together, the results suggest that neighborhood density affects vowel production independent of word frequency and vowel duration.

14.
OBJECTIVE: The aims of this study were 1) to determine the number of channels of stimulation needed by normal-hearing adults and children to achieve a high level of word recognition and 2) to compare the performance of normal-hearing children and adults listening to speech processed into 6 to 20 channels of stimulation with the performance of children who use the Nucleus 22 cochlear implant. DESIGN: In Experiment 1, the words from the Multisyllabic Lexical Neighborhood Test (MLNT) were processed into 6 to 20 channels and output as the sum of sine waves at the center frequency of the analysis bands. The signals were presented to normal-hearing adults and children for identification. In Experiment 2, the wideband recordings of the MLNT words were presented to early-implanted and late-implanted children who used the Nucleus 22 cochlear implant. RESULTS: Experiment 1: Normal-hearing children needed more channels of stimulation than adults to recognize words. Ten channels allowed 99% correct word recognition for adults; 12 channels allowed 92% correct word recognition for children. Experiment 2: The average level of intelligibility for both early- and late-implanted children was equivalent to that found for normal-hearing adults listening to four to six channels of stimulation. The best intelligibility for implanted children was equivalent to that found for normal-hearing adults listening to six channels of stimulation. The distribution of scores for early- and late-implanted children differed. Nineteen percent of the late-implanted children achieved scores below that allowed by a 6-channel processor. None of the early-implanted children fell into this category. CONCLUSIONS: The average implanted child must deal with a signal that is significantly degraded. This is likely to prolong the period of language acquisition. The period could be significantly shortened if implants were able to deliver at least eight functional channels of stimulation. 
Twelve functional channels of stimulation would provide signals near the intelligibility of wideband signals in quiet.
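The processing in Experiment 1 (bandpass analysis, envelope extraction, and output as a sum of sine waves at the band center frequencies) is essentially sine-wave vocoding. A minimal numpy sketch follows; the brick-wall FFT filters, moving-average envelope smoother, and band edges are our simplifications, not the study's exact analysis bands:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Brick-wall bandpass via FFT masking (illustrative, not clinical)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return np.fft.irfft(X, len(x))

def envelope(x, fs, cutoff=50):
    """Full-wave rectification followed by a crude moving-average lowpass."""
    win = max(1, int(fs / cutoff))
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

def sine_vocode(x, fs, edges):
    """Sum of sines at band centers, each modulated by its band envelope."""
    t = np.arange(len(x)) / fs
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass_fft(x, fs, lo, hi)
        fc = np.sqrt(lo * hi)            # geometric center frequency
        out += envelope(band, fs) * np.sin(2 * np.pi * fc * t)
    return out

fs = 16000
x = np.random.default_rng(1).standard_normal(fs)  # stand-in for a speech token
edges = np.geomspace(200, 7000, 7)                # 6 analysis bands
y = sine_vocode(x, fs, edges)
```

Varying the length of `edges` reproduces the 6-to-20-channel manipulation: each extra band restores more spectral detail to the vocoded output.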

15.
Speech recognition can be difficult and effortful for older adults, even for those with normal hearing. Declining frontal lobe cognitive control has been hypothesized to cause age-related speech recognition problems. This study examined age-related changes in frontal lobe function for 15 clinically normal hearing adults (21-75 years) when they performed a word recognition task that was made challenging by decreasing word intelligibility. Although there were no age-related changes in word recognition, there were age-related changes in the degree of activity within left middle frontal gyrus (MFG) and anterior cingulate (ACC) regions during word recognition. Older adults engaged left MFG and ACC regions when words were most intelligible compared to younger adults who engaged these regions when words were least intelligible. Declining gray matter volume within temporal lobe regions responsive to word intelligibility significantly predicted left MFG activity, even after controlling for total gray matter volume, suggesting that declining structural integrity of brain regions responsive to speech leads to the recruitment of frontal regions when words are easily understood.

16.
This study examined the performance of four subject groups on several temporally based measures of auditory processing and several measures of speech identification. The four subject groups were (a) young normal-hearing adults; (b) hearing-impaired elderly adults ranging in age from 65 to 75 years; (c) hearing-impaired elderly adults ranging in age from 76 to 86 years; and (d) young normal-hearing listeners with hearing loss simulated with a spectrally shaped masking noise adjusted to match the actual hearing loss of the two elderly groups. In addition to between-group analyses of performance on the auditory processing and speech identification tasks, correlational and regression analyses within the two groups of elderly hearing-impaired listeners were performed. The results revealed that the threshold elevation accompanying sensorineural hearing loss was the primary factor affecting the speech identification performance of the hearing-impaired elderly subjects both as groups and as individuals. However, significant increases in the proportion of speech identification score variance accounted for were obtained in the elderly subjects by including various measures of auditory processing.

17.
A new Danish speech material (DANTALE) for clinical and experimental speech audiometry is digitally recorded on compact disc (CD). The speech material is designed to meet present audiological requirements at Danish hearing centres. One channel of the CD contains the speech signals and the other a masking noise. The CD also contains various calibration signals recorded on both channels at the end of the CD. The speech material comprises: 1) Digit triplets for the measurement of speech reception threshold (SRT). 2) Lists of monosyllabic words for the measurement of discrimination score (DS) for adults, children and small children. The word lists for the adults are equalized with regard to important phonetic and "visual" elements and the word lists for the children consist of minimal pairs. 3) Continuous speech for the measurement of the most comfortable loudness level (MCL), assessment of hearing aid fitting and the like. The masking noise is an amplitude-modulated, speech-shaped noise signal, which is designed to simulate a 4-person speech babble in order to assess both the frequency selectivity and the temporal resolution. The speech material is described and the long-term power spectra and modulation spectra are given.
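An amplitude-modulated, speech-shaped masking noise of the kind described can be approximated in numpy by spectrally shaping white noise and imposing a slow level modulation. The spectral tilt and 4 Hz sinusoidal modulation below are illustrative assumptions only; the DANTALE noise is derived from the actual speech material, and babble-like modulation is not sinusoidal:

```python
import numpy as np

def speech_shaped_noise(n, fs, seed=0):
    """White noise given a crude speech-like spectral tilt:
    flat up to 500 Hz, then rolling off about 9 dB per octave.
    (The true DANTALE target spectrum is not reproduced here.)"""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, 1 / fs)
    gain = np.ones_like(f)
    hi = f > 500
    gain[hi] = (f[hi] / 500.0) ** (-1.5)   # about -9 dB/octave in amplitude
    return np.fft.irfft(X * gain, n)

def modulate(noise, fs, rate=4.0, depth=0.6):
    """Impose slow sinusoidal amplitude modulation to mimic the level
    fluctuations of a small group of talkers (illustrative only)."""
    t = np.arange(len(noise)) / fs
    return noise * (1.0 + depth * np.sin(2 * np.pi * rate * t))

fs = 16000
masker = modulate(speech_shaped_noise(fs, fs), fs)  # 1 s of masker
```

The spectral shaping targets the listener's frequency selectivity, while the imposed modulation leaves temporal dips that probe temporal resolution, the two properties the abstract says the masker is designed to assess.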

18.
This paper describes low-frequency auditory steady-state responses (ASSRs) to speech-weighted noise stimuli. The effect of modulation frequency was evaluated within the frequency range below 40 Hz. Furthermore, objective ASSR measures were related to speech understanding performance in normal-hearing and hearing-impaired listeners. The variability in ASSR recordings over independent test sessions was larger between subjects than within. Trends of increased responses around 10 and/or 20 Hz were found in all subjects. Obtained latency estimates of the responses pointed to primarily cortical sources involved in ASSR generation at low frequencies. Furthermore, significant differences between normal-hearing and hearing-impaired adults were found for ASSRs to stimuli related to the temporal envelope of speech. Comparing these responses with phoneme identification scores over different stimulus levels showed both measures increased with stimulus level in a similar way (ρ = 0.82). At a fixed stimulus level, ASSRs were significantly correlated with speech reception thresholds for phonemes and sentences in noise (ρ from −0.45 to −0.53). These results indicate that objective low-frequency ASSRs are related to behavioral speech understanding, independently of level.

19.
The two studies presented here examine the relationship between speech perception and speech production errors in children who have a functional articulation disorder. In both experiments, speech perception was assessed with a word identification test, based upon a synthesized continuum of speech stimuli, contrasting the specific phonemes that were associated with production errors in our sample of articulation-disordered subjects. Experiment 1 required subjects to identify words that contrasted the phonemes /s/ and /ʃ/. In this test, adults, normal-speaking 5-year-olds, and some articulation-disordered 5-year-olds identified the words seat and sheet appropriately and reliably. However, a subgroup of articulation-disordered children were unable to identify the test stimuli appropriately. Experiment 2 required a second group of subjects to identify words that contrasted the phonemes /s/ and /θ/. Although both adults and normal-speaking children responded appropriately to the words sick and thick in this test, none of the articulation-disordered children was able to identify these words appropriately. It is concluded that, for a subgroup of children who have a functional articulation disorder, production errors may reflect speech perception errors.

20.
The effect of noise on the auditory steady-state response (ASSR) has not been systematically studied, despite the fact that ASSR thresholds are sometimes measured in noisy environments. This study examined the effects of noise (speech babble) on the ASSR thresholds obtained from 31 normal hearing adults aged from 17 to 36 years (mean = 25 years). The ASSR thresholds at 0.5, 1, 2 and 4 kHz were measured in the right ear only using the Biologic MASTER system twice in quiet and in the presence of 55 dB A and 75 dB A of speech babble. The results showed no change in mean ASSR thresholds across the test-retest conditions in quiet. The mean ASSR thresholds obtained in the quiet conditions were 23.8, 22.5, 18.2 and 20.4 dB HL at 0.5, 1, 2 and 4 kHz, respectively. No significant shift in ASSR thresholds across all test frequencies was found when 55 dB A of speech babble was presented. However, when 75 dB A of noise was applied, the mean ASSR thresholds were significantly shifted by 9.5, 3.8, 4.2 and 5.8 dB at 0.5, 1, 2 and 4 kHz, respectively.
