Similar Documents
20 similar documents retrieved.
1.
Speech perception in background noise is a common challenge across individuals and health conditions (e.g., hearing impairment, aging). Both behavioral and physiological measures have been used to understand the important factors that contribute to perception-in-noise abilities. The addition of a physiological measure provides additional information about signal-in-noise encoding in the auditory system and may be useful in clarifying some of the variability in perception-in-noise abilities across individuals. Fifteen young normal-hearing individuals were tested using both electrophysiological and behavioral methods to determine (1) the effects of signal-to-noise ratio (SNR) and signal level and (2) how well cortical auditory evoked potentials (CAEPs) can predict perception in noise. Three correlation/regression approaches were used to determine how well CAEPs predicted behavior. Main effects of SNR were found for both electrophysiological and speech perception measures, while signal level effects were found generally only for speech testing. These results demonstrate that when signals are presented in noise, sensitivity to SNR cues obscures any encoding of signal level cues. Electrophysiological and behavioral measures were strongly correlated. The best physiological predictors (e.g., latency, amplitude, and area of CAEP waves) of behavior (SNR at which 50% of the sentence is understood) were N1 latency and N1 amplitude measures. In addition, behavior was best predicted by the 70-dB signal/5-dB SNR CAEP condition. It will be important in future studies to determine the relationship of electrophysiology and behavior in populations who experience difficulty understanding speech in noise, such as those with hearing impairment or age-related deficits.
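As an illustration of the regression approach this abstract describes (relating CAEP N1 measures to the SNR at which 50% of a sentence is understood), here is a minimal Python sketch of a multiple linear regression predicting SNR-50 from N1 latency and amplitude. The listener values, feature choice, and model are illustrative assumptions, not the study's data or analysis pipeline.

```python
# Hedged sketch: multiple regression of behavioral SNR-50 on CAEP N1 measures.
# The numbers below are made-up placeholders, not data from the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# One row per listener: [N1 latency (ms), N1 amplitude (uV)]
n1_features = np.array([
    [112.0, -4.1],
    [105.5, -5.3],
    [120.2, -3.2],
    [108.9, -4.8],
    [115.4, -3.9],
])
# Behavioral outcome: SNR (dB) at which 50% of the sentence is understood
snr50 = np.array([-2.0, -3.5, 0.5, -2.8, -1.2])

model = LinearRegression().fit(n1_features, snr50)
predicted = model.predict(n1_features)
print("coefficients (latency, amplitude):", model.coef_)
print("R^2:", r2_score(snr50, predicted))
```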

2.
Complex broadband sounds are decomposed by the auditory filters into a series of relatively narrowband signals, each of which can be considered as a slowly varying envelope (E) superimposed on a more rapid temporal fine structure (TFS). Both E and TFS information are represented in the timing of neural discharges, although TFS information as defined here depends on phase locking to individual cycles of the stimulus waveform. This paper reviews the role played by TFS in masking, pitch perception, and speech perception and concludes that cues derived from TFS play an important role for all three. TFS may be especially important for the ability to "listen in the dips" of fluctuating background sounds when detecting nonspeech and speech signals. Evidence is reviewed suggesting that cochlear hearing loss reduces the ability to use TFS cues. The perceptual consequences of this, and reasons why it may happen, are discussed.
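The envelope/fine-structure decomposition described here is commonly computed with the Hilbert transform. Below is a minimal Python sketch that splits one narrowband, auditory-filter-like signal into E and TFS; the filter band, sampling rate, and test signal are illustrative assumptions rather than anything specified in the paper.

```python
# Hedged sketch: envelope (E) vs. temporal fine structure (TFS) of a narrowband signal.
# Band limits, duration, and the test signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000                           # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)        # 0.5 s of signal
broadband = np.random.randn(t.size)  # stand-in for a complex broadband sound

# One "auditory filter": a band-pass around 1 kHz
sos = butter(4, [900, 1100], btype="bandpass", fs=fs, output="sos")
narrowband = sosfiltfilt(sos, broadband)

analytic = hilbert(narrowband)
envelope = np.abs(analytic)                   # slowly varying E
fine_structure = np.cos(np.angle(analytic))   # rapid, unit-amplitude TFS

# The product approximately reconstructs the narrowband signal
reconstruction_error = np.max(np.abs(envelope * fine_structure - narrowband))
print("max reconstruction error:", reconstruction_error)
```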

3.
In a memory test based on the phonemic similarity effect, using visually presented homophone and non-homophone word lists, the serial recall of a group of 18 children with central auditory processing disorders (CAPD) was compared with that of a group of 18 normally hearing matched controls. The controls produced more errors on the homophone than on the non-homophone list. The CAPD group showed only a slight bias towards more errors on the homophone list. This difference between the groups implied that, as expected, the controls used internal speech and preferred an articulatory- or auditory-based rather than a visually-based processing code. The CAPD group, however, showed only a weak articulatory or auditory coding preference. Thus, the use of internal speech seemed poorly developed in the CAPD subjects.

4.
Earlier studies have indicated mid-frequency auditory dysfunction and depressed ability to discriminate speech in noise among noise-exposed listeners with high-frequency hearing loss. The present study was designed to determine whether mid-frequency dysfunction contributed to the depressed speech discrimination performance. Normal listeners and noise-exposed and older listeners with high-frequency hearing loss listened to word lists presented in competing 'cocktail party' noise under unfiltered and low-pass filtered conditions. In the low-pass filtered condition the performance of the noise-exposed listeners was superior to that of the normal listeners, indicating that mid-frequency auditory dysfunction on the part of noise-exposed listeners does not contribute to their difficulties discriminating unfiltered speech in noise. The performance of the older listeners was below that of the two other groups in both filtered and unfiltered conditions, indicating greater difficulty discriminating speech than would be predicted on the basis of high-frequency hearing loss alone.

5.
A cochlear implant (CI) presents band-pass-filtered acoustic envelope information by modulating current pulse train levels. Similarly, a vocoder presents envelope information by modulating an acoustic carrier. By studying how normal hearing (NH) listeners are able to understand degraded speech signals with a vocoder, the parameters that best simulate electric hearing and factors that might contribute to the NH-CI performance difference may be better understood. A vocoder with harmonic complex carriers (fundamental frequency, f0 = 100 Hz) was used to study the effect of carrier phase dispersion on speech envelopes and intelligibility. The starting phases of the harmonic components were randomly dispersed to varying degrees prior to carrier filtering and modulation. NH listeners were tested on recognition of a closed set of vocoded words in background noise. Two sets of synthesis filters simulated different amounts of current spread in CIs. Results showed that the speech vocoded with carriers whose starting phases were maximally dispersed was the most intelligible. Superior speech understanding may have been a result of the flattening of the dispersed-phase carrier’s intrinsic temporal envelopes produced by the large number of interacting components in the high-frequency channels. Cross-correlogram analyses of auditory nerve model simulations confirmed that randomly dispersing the carrier’s component starting phases resulted in better neural envelope representation. However, neural metrics extracted from these analyses were not found to accurately predict speech recognition scores for all vocoded speech conditions. It is possible that central speech understanding mechanisms are insensitive to the envelope-fine structure dichotomy exploited by vocoders.
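A minimal sketch of the carrier manipulation this abstract describes: a harmonic complex carrier (f0 = 100 Hz) is built with either zero (cosine) starting phases or randomly dispersed starting phases, and the crest factor shows how dispersion flattens the carrier's intrinsic temporal envelope. The vocoder's channel filtering, modulation, and speech materials are omitted, and all parameters are assumptions for illustration.

```python
# Hedged sketch: harmonic complex carriers (f0 = 100 Hz) with zero vs. randomly
# dispersed starting phases.  Random dispersion flattens the carrier's own
# temporal envelope, which is the manipulation studied in the paper.
# Sampling rate, duration, and component count are illustrative assumptions.
import numpy as np

fs = 16000
f0 = 100.0
t = np.arange(0, 0.2, 1 / fs)
harmonics = np.arange(f0, fs / 2, f0)      # components up to Nyquist

def harmonic_complex(phases):
    return sum(np.cos(2 * np.pi * f * t + p) for f, p in zip(harmonics, phases))

rng = np.random.default_rng(0)
zero_phase_carrier = harmonic_complex(np.zeros(harmonics.size))
dispersed_phase_carrier = harmonic_complex(rng.uniform(0, 2 * np.pi, harmonics.size))

def crest_factor(x):
    """Peak-to-RMS ratio: lower means a flatter intrinsic envelope."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

print("crest factor, zero-phase carrier:     ", crest_factor(zero_phase_carrier))
print("crest factor, dispersed-phase carrier:", crest_factor(dispersed_phase_carrier))
```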

6.
Assessment of open-set auditory and speech abilities in prelingually deaf children with cochlear implants
Objective: To assess the open-set auditory and speech abilities of children with cochlear implants (CI) and characterize their auditory and speech development. Methods: Twenty-seven prelingually deaf CI children (CI group) whose auditory and speech abilities had reached the level required for open-set testing were selected. Tests were administered in the sound field in order of increasing difficulty: the Mandarin pediatric lexical neighborhood tests (disyllabic easy words, disyllabic hard words, monosyllabic easy words, monosyllabic hard words), followed by the Mandarin Hearing in Noise Test for Children (MHINT-C) in quiet, with noise on the non-implanted side, with noise on the implanted side, and with noise from the front. Results were compared with those of age-matched normal-hearing children. Results: All 27 CI children completed the lexical neighborhood tests; 9 were able to complete sentence testing in quiet and 7 the more difficult sentence testing in noise. Scores on the easy and hard disyllabic lists and on the easy and hard monosyllabic lists differed significantly (P<0.05), with easy words scoring clearly higher than hard words. For the children able to complete testing in noise, speech recognition thresholds (SRT) with noise on the non-implanted side, on the implanted side, and from the front were 4.38±3.43, 8.76±4.18, and 9.15±3.39 dB S/N, respectively; SRTs with noise on the non-implanted side differed significantly from those with noise on the implanted side or from the front (P<0.05), whereas SRTs with noise from the front and on the implanted side did not differ significantly (P>0.05). In quiet, the MHINT-C SRT of the CI children differed from that of normal-hearing children by 31.5 dB S/N, and by 13.4, 15.2, and 19.7 dB S/N with noise from the front, on the non-implanted side, and on the implanted side, respectively. Conclusion: Like normal-hearing children, CI children are sensitive to speech sounds and recognize easy words better than hard words. Both groups follow the same trajectory of auditory and speech development, although CI children lag behind. Mastery of listening strategies and use of binaural suppression of noise help CI children understand speech, especially in noise.

7.
Objective: To compare the speech recognition scores of patients with auditory neuropathy (AN) in quiet and in noise, and to compare them with those of normal-hearing subjects, a sensorineural hearing loss group, and an acoustic neuroma group. Methods: Testing was carried out in a sound-treated booth meeting national standards. Pure-tone and speech audiometry used a calibrated Otometrics (Denmark) Conera audiometer with Otosuite software (version 4.82) connected to a computer to deliver the speech signals; pure tones were presented through TDH-39 supra-aural earphones and a B71 bone vibrator. Speech material was the Mandarin Speech Audiometry Monosyllable Recognition Test word lists compiled by Xi Xin of the Chinese PLA General Hospital. In quiet and in noise, speech recognition scores were measured in 10 AN patients, 11 patients with sensorineural hearing loss, 11 patients with acoustic neuroma, and 10 normal-hearing subjects at the pure-tone average threshold and at 10, 20, and 30 dB above threshold, and at signal-to-noise ratios (SNR) of 0, -5, -10, and -15 dB. Results: Speech recognition in noise was markedly poorer in the AN patients than in the acoustic neuroma, sensorineural hearing loss, and normal-hearing groups (P<0.05). AN patients with similar pure-tone thresholds and audiogram configurations, when tested in quiet and at different noise levels, produced speech recognition score (SRS) curves that diverged into a poorer and a better subgroup. In the normal-hearing group, tested at SNRs of 0, -5, -10, and -15 dB, scores at -10 dB SNR did not differ significantly from scores in quiet (P>0.05), whereas the AN, acoustic neuroma, and sensorineural hearing loss groups all differed significantly at -10 dB SNR (P<0.05). In quiet, AN patients showed a "rollover" phenomenon as stimulus level increased. Across the AN group, pure-tone thresholds were uncorrelated with speech recognition scores in both quiet and noise (R²=0.07), whereas the other three groups showed weak to strong negative correlations. Conclusion: AN patients with relatively good speech recognition in quiet decline more sharply in noise, so speech recognition testing in noise is more sensitive than testing in quiet. Scores obtained at 30 dB above the pure-tone average and at -10 dB SNR can serve as a sensitive clinical index of speech function, are of practical value for diagnosing auditory neuropathy and for localizing and grading the lesion, and allow a more complete evaluation of AN patients' speech communication ability.

8.
The present paper describes a clinical test for the assessment of speech perception in noise. The test was designed to separate the effects of several relevant monaural and binaural cues. Results show that the performance of individual hearing-impaired listeners deviates significantly from normal on at least two of the following aspects: (1) perception of speech in steady-state noise; (2) relative binaural advantage due to directional cues; (3) relative advantage due to masker fluctuations. In contrast, both the hearing loss for reverberated speech and the relative binaural advantage due to interaural signal decorrelation caused by reverberation were essentially normal for almost all hearing-impaired listeners.

9.
Objective: To investigate changes in the auditory, language, and other related functional brain regions of infants and young children with congenital sensorineural hearing loss before cochlear implantation. Methods: Blood oxygenation level dependent functional magnetic resonance imaging (BOLD-fMRI) was used to study children with profound sensorineural hearing loss and a normal-hearing control group. Results: (1) After vibrotactile stimulation, positive activation was observed in both groups, mainly in the auditory cortex and other tactile-related regions, with greater extent and intensity of activation in the deaf group than in the controls; (2) negative activation was also observed in both groups, and its extent and intensity were markedly greater in the deaf group. Conclusion: (1) The bilateral auditory centers of deaf infants retain some function after hearing loss; (2) the auditory cortex and related regions of deaf children undergo auditory-tactile reorganization; (3) the sensitivity of the deaf children's cortical system to vibrotactile stimulation is markedly increased; (4) the negative activation in deaf children may be related to cortical reorganization following hearing loss.

10.
This study explored the physiological response of the human brain to degraded speech syllables. The degradation was introduced using noise vocoding and/or background noise. The goal was to identify physiological features of auditory-evoked potentials (AEPs) that may explain speech intelligibility. Ten human subjects with normal hearing participated in syllable-detection tasks, while their AEPs were recorded with 32-channel electroencephalography. Subjects were presented with six syllables in the form of consonant-vowel-consonant or vowel-consonant-vowel. Noise vocoding with 22 or 4 frequency channels was applied to the syllables. When examining the peak heights in the AEPs (P1, N1, and P2), vocoding alone showed no consistent effect. P1 was not consistently reduced by background noise, N1 was sometimes reduced by noise, and P2 was almost always highly reduced. Two other physiological metrics were examined: (1) classification accuracy of the syllables based on AEPs, which indicated whether AEPs were distinguishable for different syllables, and (2) cross-condition correlation of AEPs (rcc) between the clean and degraded speech, which indicated the brain’s ability to extract speech-related features and suppress response to noise. Both metrics decreased with degraded speech quality. We further tested if the two metrics can explain cross-subject variations in their behavioral performance. A significant correlation existed for rcc, as well as classification based on early AEPs, in the fronto-central areas. Because rcc indicates similarities between clean and degraded speech, our finding suggests that high speech intelligibility may be a result of the brain’s ability to ignore noise in the sound carrier and/or background.
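The cross-condition correlation metric (rcc) between responses to clean and degraded speech can be computed as a Pearson correlation over the epoch, as in this sketch. The waveforms here are synthetic stand-ins for recorded AEPs, and the epoch length and sampling rate are assumptions.

```python
# Hedged sketch: cross-condition correlation (rcc) between AEPs to clean and
# degraded speech, computed as a Pearson correlation over the epoch.
# The waveforms are synthetic stand-ins, not recorded EEG.
import numpy as np

fs = 500                                  # EEG sampling rate (Hz), assumed
t = np.arange(0, 0.5, 1 / fs)             # 0-500 ms epoch

def synthetic_aep(noise_level, rng):
    """P1-N1-P2-like template plus additive noise."""
    template = (0.5 * np.exp(-((t - 0.05) ** 2) / 2e-4)     # P1
                - 1.0 * np.exp(-((t - 0.10) ** 2) / 4e-4)   # N1
                + 0.8 * np.exp(-((t - 0.18) ** 2) / 9e-4))  # P2
    return template + noise_level * rng.standard_normal(t.size)

rng = np.random.default_rng(1)
aep_clean = synthetic_aep(noise_level=0.1, rng=rng)
aep_degraded = synthetic_aep(noise_level=0.6, rng=rng)   # noisier response to degraded speech

rcc = np.corrcoef(aep_clean, aep_degraded)[0, 1]
print(f"cross-condition correlation rcc = {rcc:.2f}")
```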

11.
12.
Age-related changes in the waveforms of the middle latency response (MLR) were investigated in 9 adults and 28 children aged between 4 and 14 years. The children were classified into three groups according to their age. For obtaining characteristic configurations in the responses for each group, composite group averaging was performed by sum-mating the individual recordings in each group. With high-pass digital filtering at 20 Hz, composite MLR for adults showed a well-defined Na-Pa-Nb-Pb complex with peak latencies at about 17, 30, 45 and 63 ms, respectively. The composite response for children aged 4-7 years was characterized by a broad positive deflection (Pa) followed by a negative peak (Nb) at about 40 and 60 ms after stimulus onset, respectively. The peak latency of Pa was close to the adult value in the composite MLR for subjects aged 8-11 years, while the complete adult pattern in the later part of the response was not reached even in the composite response for subjects aged 12-14 years.  相似文献   

13.
Age-related hearing loss, or presbyacusis, is a major public health problem that causes communication difficulties and is associated with diminished quality of life. Limited satisfaction with hearing aids, particularly in noisy listening conditions, suggests that central nervous system declines occur with presbyacusis and may limit the efficacy of interventions focused solely on improving audibility. This study of 49 older adults (M = 69.58, SD = 8.22 years; 29 female) was designed to examine the extent to which low and/or high frequency hearing loss was related to auditory cortex morphology. Low and high frequency hearing constructs were obtained from a factor analysis of audiograms from these older adults and 1,704 audiograms from an independent sample of older adults. Significant region of interest and voxel-wise gray matter volume associations were observed for the high frequency hearing construct. These effects occurred most robustly in a primary auditory cortex region (Te1.0) where there was also elevated cerebrospinal fluid with high frequency hearing loss, suggesting that auditory cortex atrophies with high frequency hearing loss. These results indicate that Te1.0 is particularly affected by high frequency hearing loss and may be a target for evaluating the efficacy of interventions for hearing loss.
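The low- and high-frequency hearing constructs were derived from a factor analysis of audiograms; the sketch below shows one plausible two-factor decomposition of audiometric thresholds with scikit-learn. The simulated thresholds, loadings, and rotation choice are assumptions standing in for the study's audiogram data and analysis.

```python
# Hedged sketch: two-factor decomposition of audiometric thresholds,
# analogous to deriving low- and high-frequency hearing constructs.
# The simulated thresholds below stand in for real audiograms.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_listeners = 200
freqs_khz = [0.25, 0.5, 1, 2, 4, 8]        # audiometric test frequencies

# Simulate thresholds driven by one "low-frequency" and one "high-frequency" factor
low_factor = rng.normal(0, 10, n_listeners)
high_factor = rng.normal(0, 20, n_listeners)
loadings = np.array([[1.0, 0.0], [0.9, 0.1], [0.6, 0.4],
                     [0.3, 0.7], [0.1, 0.9], [0.0, 1.0]])
thresholds = (np.column_stack([low_factor, high_factor]) @ loadings.T
              + 20 + rng.normal(0, 5, (n_listeners, len(freqs_khz))))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(thresholds)
print("factor loadings by frequency (kHz):")
for f, row in zip(freqs_khz, fa.components_.T):
    print(f"  {f:>4} kHz: {row.round(2)}")

# Per-listener factor scores ~ low/high frequency hearing constructs
constructs = fa.transform(thresholds)
```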

14.
Relationship between mismatch negativity characteristics and speech recognition scores in patients with auditory neuropathy
Objective: To examine the basic characteristics of the mismatch negativity (MMN) in patients with auditory neuropathy (AN) and its relationship with the maximum speech recognition score (phonetically balanced maximum, PBmax). Methods: MMN was recorded from 14 AN patients (19 ears) and 24 normal-hearing subjects (24 ears) with an IHS 3099 evoked potential system (version 3.82). PBmax was measured in the 14 AN patients (19 ears) and in 19 normal-hearing subjects (19 ears) using a GSI-61 two-channel diagnostic audiometer, a SONY TC-FX25 two-channel stereo cassette recorder, and self-recorded phonetically balanced monosyllabic word lists. MMN latency and amplitude were compared between the two groups, and their correlations with PBmax were analyzed. Results: Compared with the controls, MMN latencies to both intensity and frequency deviants were significantly prolonged in the AN group (P<0.01); intensity-deviant MMN amplitude differed significantly between the groups (P=0.019), whereas frequency-deviant MMN amplitude did not (P=0.128). In the AN group, both frequency- and intensity-deviant MMN latencies were partially negatively correlated with PBmax (r=-0.647, P<0.01; r=-0.708, P<0.01); in the controls, intensity-deviant MMN latency was also partially negatively correlated with PBmax (r=-0.643, P<0.05), while frequency-deviant MMN latency was not (r=-0.027, P=0.913). Conclusion: MMN latency is relatively stable, whereas amplitude varies considerably. MMN latency is clearly prolonged in AN patients and, at the group level, is partially negatively correlated with PBmax; it therefore has some value for estimating the speech recognition ability of AN patients.
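The latency-PBmax relationship reported above is a bivariate correlation; the sketch below computes a Pearson r and p-value with SciPy. The latency and score values are invented placeholders, not the study's measurements.

```python
# Hedged sketch: Pearson correlation between MMN latency and PBmax.
# The values below are invented placeholders, not the study's measurements.
import numpy as np
from scipy.stats import pearsonr

mmn_latency_ms = np.array([195, 210, 230, 250, 265, 280, 300, 320])
pbmax_percent = np.array([92, 84, 80, 66, 60, 52, 40, 28])

r, p_value = pearsonr(mmn_latency_ms, pbmax_percent)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```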

15.
Objective: To investigate the effects of speech-spectrum-shaped noise (SSN) and babble noise (BN) on Mandarin lexical neighborhood test (MLNT) performance in young normal-hearing children. Methods: Thirty-four normal-hearing children aged 3-6 years were assigned to an SSN group (n=21) or a BN group (n=13). Both groups were tested in the sound field with the MLNT-in-noise system, and speech recognition scores at different signal-to-noise ratios were obtained with a listen-and-repeat procedure. Recognition-score-versus-SNR functions (P-SNR curves) and speech recognition thresholds (SNR50) for the monosyllabic easy, monosyllabic hard, disyllabic easy, and disyllabic hard word lists were compared between the two noise types. Results: In the SSN group, SNR50 was -3 dB and -0.5 dB for the disyllabic easy and hard lists and -1 dB and 3.5 dB for the monosyllabic easy and hard lists; in the BN group, SNR50 was -3 dB and 2 dB for the disyllabic easy and hard lists and 0.5 dB and 10 dB for the monosyllabic easy and hard lists. Except for the disyllabic easy list, for which SNR50 was the same in both groups, SNR50 was higher in the BN group for every list. Lexical effects on open-set recognition in noise persisted in normal-hearing children: easy words were recognized better than hard words, and disyllabic words better than monosyllabic words. Conclusion: For 3-6 year-old normal-hearing children, BN has a stronger masking effect than SSN, and lexical factors still influence children's speech recognition in noise.
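The SNR50 values above are read off recognition-score-versus-SNR (P-SNR) functions; one common way to estimate SNR50 is to fit a logistic psychometric function to the measured scores, as in the following sketch. The example SNRs and scores are made up for illustration and are not the study's data.

```python
# Hedged sketch: estimating SNR50 by fitting a logistic psychometric function
# to recognition scores measured at several signal-to-noise ratios.
# The scores below are illustrative, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

snr_db = np.array([-9, -6, -3, 0, 3, 6], dtype=float)
proportion_correct = np.array([0.05, 0.20, 0.45, 0.75, 0.90, 0.97])

def logistic(snr, snr50, slope):
    """Psychometric function rising from 0 to 1, crossing 0.5 at snr50."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - snr50)))

(snr50, slope), _ = curve_fit(logistic, snr_db, proportion_correct, p0=[0.0, 1.0])
print(f"estimated SNR50 = {snr50:.1f} dB, slope = {slope:.2f} per dB")
```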

16.
Objective: To explore the feasibility of cochlear implantation in prelingually deaf children with profound sensorineural hearing loss accompanied by leukoencephalopathy, and the postoperative auditory and speech rehabilitation outcomes. Methods: Fourteen prelingually deaf children with leukoencephalopathy who received cochlear implants in the Department of Otolaryngology, Hainan Provincial People's Hospital, from September to November 2013 formed the study group (aged 1-6 years, mean 3.79±1.93 years), and 16 prelingually deaf children without central nervous system lesions implanted during the same period formed the control group (aged 1-6 years, mean 4.38±1.93 years). All children underwent preoperative clinical audiological and imaging examinations and assessment of language ability and intelligence, received single-stage cochlear implantation via a transmastoid posterior tympanotomy (facial recess) approach, and then received speech rehabilitation training at the Hainan Provincial Rehabilitation Center for Deaf Children. Postoperative outcomes were assessed with the Categories of Auditory Performance (CAP) and Speech Intelligibility Rating (SIR) scales, and CAP and SIR grades were compared between the two groups at different postoperative stages. Results: All children showed auditory responses and varying degrees of verbal communication ability after implantation, and CAP and SIR grades in both groups rose with increasing rehabilitation time. In the study group, mean CAP grades at 6, 12, and 24 months after surgery were 2.571±0.416, 3.714±0.496, and 5.000±0.492, and mean SIR grades were 1.357±0.133, 2.143±0.275, and 3.071±0.245; none of these differed significantly from the control group's CAP grades (2.688±0.313, 3.875±0.364, 5.000±0.354) and SIR grades (1.500±0.129, 2.313±0.176, 2.875±0.221) (P>0.05). Conclusion: After thorough preoperative evaluation, cochlear implantation can be performed in prelingually deaf children with profound sensorineural hearing loss and leukoencephalopathy; their auditory and speech rehabilitation outcomes within two years after surgery are comparable to those of age-matched children without leukoencephalopathy.

17.
Auditory awareness is the most basic auditory skill and requires the patient to respond to sound accurately and promptly. Listeners with hearing impairment need to gradually develop the awareness and ability to make use of their residual, aided, or reconstructed hearing. Auditory awareness training consists of two parts, eliciting unintentional awareness and training intentional detection, each with its own content and methods. This article describes the content and methods of auditory awareness training and shares an individualized education and rehabilitation plan for one case of auditory awareness training.

18.
1 Origin of the speech-evoked auditory brainstem response: The brainstem plays an important role in speech perception. Our understanding of human speech perception cannot be limited to the contribution of the higher auditory cortex; the role of the lower centers must also be examined. Within the auditory system, the rapidly changing neural impulses transduced from the acoustic signal are encoded at the level of the brainstem.

19.
Objective: To explore the reliability and validity of the cortical auditory evoked potential (CAEP) for assessing the speech recognition ability of elderly listeners with moderate to severe hearing loss before and after hearing aid fitting. Methods: Twenty-six elderly listeners with moderate to severe hearing loss were all fitted with the same model of test hearing aid; before and after fitting, the three speech stimuli /m/, /g/, and /t/ were presented in the sound field at 65 dB ...

20.
Objective: To explore the electrophysiological characteristics of the major waves of the speech-evoked auditory brainstem response (speech-ABR) in normal adults. Methods: Speech-ABRs were recorded from the right ears of 64 normal adults, and the polarity, latency, and amplitude of the major waves and their relationship with sex and age were analyzed. Results: The major speech-ABR waves were predominantly of positive polarity; apart from a significant association between the polarity of wave E and sex, wave polarity did not differ significantly by sex or age. Wave V and wave A latencies were significantly shorter, and wave V amplitude significantly larger, in women than in men, with no significant age differences in latency or amplitude for either wave. For waves C, D, E, F, and O, neither the latencies (except for waves E and O) nor the amplitudes differed significantly between the two polarities, and neither differed significantly by sex or age. Conclusion: The latencies and amplitudes of the major speech-ABR waves are little affected by polarity, sex, or age and are highly stable, making them good indices for basic research on speech perception mechanisms and for clinical studies.
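Peak latencies and amplitudes of speech-ABR waves such as V and A are typically read off as local extrema of the averaged waveform; the sketch below picks such extrema with SciPy from a synthetic waveform that merely stands in for a recorded response. The peak positions, sampling rate, and detection threshold are assumptions.

```python
# Hedged sketch: extracting peak latencies and amplitudes from an averaged
# speech-ABR waveform.  The waveform here is synthetic, not a recorded response.
import numpy as np
from scipy.signal import find_peaks

fs = 20000                               # sampling rate (Hz), assumed
t_ms = np.arange(0, 60, 1000 / fs)       # 0-60 ms analysis window

# Synthetic response: a positive deflection near 7 ms (wave V-like)
# followed by a negative trough near 8.5 ms (wave A-like), plus noise
rng = np.random.default_rng(3)
waveform = (0.5 * np.exp(-((t_ms - 7.0) ** 2) / 0.5)
            - 0.4 * np.exp(-((t_ms - 8.5) ** 2) / 0.5)
            + 0.02 * rng.standard_normal(t_ms.size))

pos_peaks, _ = find_peaks(waveform, height=0.2)      # candidate positive waves
neg_peaks, _ = find_peaks(-waveform, height=0.2)     # candidate negative troughs

print("positive-peak latencies (ms):", t_ms[pos_peaks].round(2),
      "amplitudes:", waveform[pos_peaks].round(2))
print("negative-trough latencies (ms):", t_ms[neg_peaks].round(2),
      "amplitudes:", waveform[neg_peaks].round(2))
```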
