Similar literature
20 similar documents found (search time: 31 ms)
1.
This study investigated the overall intelligibility of speech produced during simultaneous communication (SC). Four hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking Boothroyd's (1985) forced-choice phonetic contrast material designed for measurement of speech intelligibility. Twelve hearing-impaired listeners participated, with three of them randomly assigned to audit the speech sample provided by each one of the four speakers under the SC and SA conditions. Although results indicated longer sentence durations for SC than SA, results showed no difference in the overall intelligibility of speech produced during SC versus speech produced during SA, nor any difference in pattern of phonetic contrast recognition errors during SC. This conclusion is consistent with previous research indicating that temporal alterations produced by SC do not produce degradation of temporal or spectral cues in speech or disruption of the perception of specific English phoneme segments. LEARNING OUTCOMES: As a result of this activity, the participant will be able to (1) describe simultaneous communication; (2) explain the role of simultaneous communication in communication with children who are deaf; (3) discuss methods of measuring speech intelligibility; and (4) specify the ability of listeners to perceive speech produced during simultaneous communication.

2.
This study investigated the perception of voice onset time (VOT) in speech produced during simultaneous communication (SC). Four normally hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking stimulus words with voiced and voiceless initial consonants embedded in a sentence. Twelve hearing-impaired listeners participated, with three of them randomly assigned to audit the speech sample provided by each one of the four speakers under the SC and SA conditions. In addition, 24 normal hearing listeners were randomly assigned to audit the speech samples produced by the four speakers under the SC and SA conditions, three listeners in noise and three listeners in filtered listening conditions for each of the four speakers. Although results indicated longer sentence durations for SC than SA, results showed no difference in the perception of the voicing distinction for speech produced during SC versus speech produced during SA under either the noise or filtered listening condition, nor any difference in perception for the hearing-impaired listeners. This conclusion is consistent with previous research indicating that temporal alterations produced by SC do not produce degradation of temporal or spectral cues in speech or disruption of the perception of specific English phoneme segments. LEARNING OUTCOMES: As a result of this activity, the participant will be able to: (1) describe simultaneous communication; (2) explain the role of simultaneous communication in communication with persons who are hearing-impaired; (3) discuss methods of measuring perception of voice onset time with hearing-impaired listeners and with hearing listeners under filtered and noise conditions; and (4) specify the ability of listeners to perceive the voicing distinction in speech produced during simultaneous communication.

3.
This study investigated the effects of noise and filtering on the intelligibility of speech produced during simultaneous communication (SC). Four normal hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking Boothroyd’s forced-choice phonetic contrast material designed for measurement of speech intelligibility. Twenty-four normal hearing listeners audited the speech samples produced by the four speakers under the SC and SA conditions, three listeners in noise and three listeners in filtered listening conditions for each of the four speakers. Although results indicated longer sentence durations for SC than SA, the data showed no difference in the intelligibility of speech produced during SC versus speech produced during SA under either the noise or filtered listening condition, nor any difference in pattern of phonetic contrast recognition errors between the SA and SC speech samples in either listening condition. This conclusion is consistent with previous research indicating that temporal alterations produced by SC do not produce degradation of temporal or spectral cues to speech intelligibility or disruption of the perception of specific English phoneme segments.

Learning outcomes

As a result of this activity, the participant will be able to (1) describe simultaneous communication; (2) explain the role of simultaneous communication in communication with children who are deaf; (3) discuss methods of measuring speech intelligibility under filtered and noise conditions; and (4) specify the ability of listeners to perceive speech produced during simultaneous communication under noise and filtered listening conditions.


4.
Vowel durations following the production of voiced and voiceless stop consonants produced during simultaneous communication (SC) were investigated by recording sign language users during SC and speech alone (SA). Under natural speaking conditions (SA), vowels following voiced stop consonants are longer in duration than vowels following voiceless stops. Although the results indicated longer sentence durations for SC than SA, they showed no differences in the relative duration of vowels following voiced or voiceless stops: vowel durations following voiced stop consonants were consistently longer than vowel durations following voiceless stops. This finding is consistent with previous research indicating that global temporal alterations in SC do not degrade temporal or spectral cues of spoken English. LEARNING OUTCOMES: As a result of this activity, the participant will be able to (1) describe simultaneous communication; (2) explain the role of simultaneous communication in communication with persons who are hearing-impaired; (3) describe how the voicing characteristic of syllable-initial consonants affects the duration of subsequent vowels; and (4) explain that simultaneous communication does not influence the relative durations of vowels following voiced and voiceless stop consonants.

5.
This study investigated the preservation of second formant transition acoustic cues to intelligibility in speech produced during simultaneous communication (SC) from a locus equation perspective. Twelve normal hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking a set of sentences containing monosyllabic words designed for measurement of second formant frequencies in consonant-vowel-consonant (CVC) syllables. Linear regression fits made to coordinates representing second formant transition onset and offset frequencies following stop consonant release of CVC syllables (locus equations) were used to examine place of articulation cues in both SA and SC conditions. Although results indicated longer sentence durations for SC than SA, locus equation slopes and intercepts obtained from speech produced during SC were virtually identical to those obtained during SA, indicating no degradation of stop consonant acoustic cues during SC. This conclusion is consistent with previous research indicating that temporal alterations produced by SC do not involve violations of other rules of spoken English. Educational objectives: As a result of this activity, the participant will be able to (1) describe SC; (2) explain the role of SC in communication with children who are deaf; (3) describe second formant transitions in English speech; and (4) identify second formant transition patterns in speech produced during SC.
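A locus equation of the kind used in abstract 5 is simply a linear regression of F2 at consonant release (onset) on F2 at the vowel midpoint, fit separately per stop place of articulation; the slope and intercept summarize coarticulatory place cues. A minimal sketch with hypothetical, hand-picked frequencies (not data from the study):

```python
import numpy as np

# Hypothetical (F2 vowel midpoint, F2 onset) pairs in Hz for /d/-initial
# CVC syllables across several vowel contexts. Real locus-equation studies
# measure these from spectrograms; the values here are illustrative only.
f2_vowel = np.array([2300.0, 1900.0, 1500.0, 1100.0, 900.0])
f2_onset = 0.43 * f2_vowel + 1060.0  # alveolar-like relation, for the demo

# Locus equation: F2_onset = slope * F2_vowel + intercept
slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)

print(f"slope = {slope:.2f}, intercept = {intercept:.0f} Hz")
# prints: slope = 0.43, intercept = 1060 Hz
```

A slope near 1 indicates strong carryover of the vowel into the transition onset (typical of labials), while a lower slope indicates a more fixed locus (typical of alveolars). Comparing slopes and intercepts fit to SC versus SA productions is exactly the comparison the abstract describes.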

6.
This study investigated the effect of vowel environment on fricative consonant duration in contextual speech produced during simultaneous communication (SC). Previous studies (Schwartz, 1969) of vowel influences on consonant duration supported the notion of anticipatory scanning, in which final vowel targets influence the duration of preceding fricative consonants. Ten normal-hearing, experienced sign language users recorded palatal and alveolar fricatives produced in four vowel environments in contextual sentences under SC and speech-only (SO) conditions. Results indicated longer sentence durations for SC than for SO, and significant effects of vowel context on fricative consonant duration in contextual speech in both SC and SO conditions that revealed similar anticipatory scanning effects as seen in previous studies. These data confirm previous research indicating that the temporal alterations produced by simultaneous communication do not involve violations of the temporal rules of English speech.

7.
Spectral moments, which describe the distribution of frequencies in a spectrum, were used to investigate the preservation of acoustic cues to intelligibility of speech produced during simultaneous communication (SC) in relation to acoustic cues produced when speaking alone. The spectral moment data obtained from speech alone (SA) were comparable to those spectral moment data reported by Jongman, Wayland, and Wong (2000) and Nittrouer (1995). The spectral moments obtained from speech produced during SC were statistically indistinguishable from those obtained during SA, indicating no measurable degradation of obstruent spectral acoustic cues during SC. Educational objectives: As a result of this activity, the participant will be able to (1) describe SC; (2) explain the role of SC in communication with children who are deaf; (3) describe the first, third, and fourth spectral moments of obstruent consonants; and (4) identify spectral moment patterns in speech produced during SC.
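The four spectral moments used in analyses like Jongman et al. (2000) treat the magnitude spectrum as a probability distribution over frequency: the first moment is the spectral mean (centroid), the second its spread, the third skewness, and the fourth kurtosis. A minimal numpy sketch on a toy spectrum (not data from the study):

```python
import numpy as np

def spectral_moments(freqs, mags):
    """First four spectral moments of a magnitude spectrum.

    The magnitude spectrum is normalized to sum to 1 and treated as a
    probability distribution over frequency, as in spectral-moments
    analyses of obstruent consonants.
    """
    p = np.asarray(mags, float)
    p = p / p.sum()
    f = np.asarray(freqs, float)
    centroid = np.sum(p * f)                         # 1st moment (mean)
    var = np.sum(p * (f - centroid) ** 2)            # 2nd central moment
    sd = np.sqrt(var)
    skew = np.sum(p * (f - centroid) ** 3) / sd**3   # 3rd (skewness)
    kurt = np.sum(p * (f - centroid) ** 4) / sd**4   # 4th (kurtosis)
    return centroid, sd, skew, kurt

# Toy symmetric spectrum: centroid 2000 Hz, zero skewness by construction.
c, sd, sk, ku = spectral_moments([1000, 2000, 3000], [1, 2, 1])
print(c, round(sd, 1), sk, round(ku, 2))
```

Comparing these four numbers for fricative/stop spectra produced under SC versus SA is the comparison the abstract reports.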

8.
PURPOSE: To determine the specific acoustic changes that underlie improved vowel intelligibility in clear speech. METHOD: Seven acoustic metrics were measured for conversational and clear vowels produced by 12 talkers: 6 who previously were found (S. H. Ferguson, 2004) to produce a large clear speech vowel intelligibility effect for listeners with normal hearing identifying vowels in background noise (the big benefit talkers), and 6 who produced no clear speech vowel intelligibility benefit (the no benefit talkers). RESULTS: For vowel duration and for certain measures of the overall acoustic vowel space, the change from conversational to clear speech was significantly greater for big benefit talkers than for no benefit talkers. For measures of formant dynamics, in contrast, the clear speech effect was similar for the 2 groups. CONCLUSION: These results suggest that acoustic vowel space expansion and large vowel duration increases improve vowel intelligibility. In contrast, changing the dynamic characteristics of vowels seems not to contribute to improved clear speech vowel intelligibility. However, talker variability suggested that improved vowel intelligibility can be achieved using a variety of clear speech strategies, including some apparently not measured here.
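Overall acoustic vowel space, one of the metrics the abstract above refers to, is commonly quantified as the area of the polygon whose vertices are the mean (F1, F2) values of the corner vowels, computed with the shoelace formula. A sketch with hypothetical corner-vowel formants (not Ferguson's measurements):

```python
def polygon_area(points):
    """Shoelace formula: area of a simple polygon from ordered (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical (F1, F2) means in Hz for corner vowels, listed in order
# around the vowel quadrilateral. Clear speech typically shows more
# peripheral (expanded) vowel targets than conversational speech.
conversational = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
clear = [(280, 2500), (780, 1850), (820, 1050), (320, 850)]

expansion = polygon_area(clear) / polygon_area(conversational)
print(f"vowel space expansion: {expansion:.2f}x")
# prints: vowel space expansion: 1.46x
```

A ratio above 1 indicates vowel space expansion from conversational to clear speech, the change the study links to the big benefit talkers.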

9.
PURPOSE: This study addressed three research questions: (a) Can listeners use anticipatory vowel information in prevocalic consonants produced by talkers with dysarthria to identify the upcoming vowel? (b) Are listeners sensitive to interspeaker variation in anticipatory coarticulation during prevocalic consonants produced by healthy talkers and/or talkers with dysarthria, as measured by vowel identification accuracy? (c) Is interspeaker variation in anticipatory coarticulation reflected in measures of intelligibility? METHOD: Stimuli included 106 CVC words produced by 20 speakers with either Parkinson's disease or multiple sclerosis or by 16 healthy controls characterized by an operationally defined normal, under, or over level of anticipatory vowel coarticulation. Ten listeners were presented with prevocalic consonants for identification of the vowel. Ten additional listeners judged single-word intelligibility. An analysis of variance was used to determine differences in vowel identification accuracy and intelligibility as a function of speaker group, coarticulation level, and vowel type. RESULTS: Listeners accurately identified vowels produced by all speaker groups from the aperiodic portion of prevocalic consonants, but interspeaker variations in strength of coarticulation did not strongly affect vowel identification accuracy or intelligibility. CONCLUSIONS: Listeners appear to be tuned to similar types of information in the acoustic speech stream irrespective of the source or speaker, and any perceptual effects of interspeaker variation in coarticulation are subtle.

10.
Although normal-hearing (NH) and cochlear implant (CI) listeners are able to adapt to spectrally shifted speech to some degree, auditory training has been shown to provide more complete and/or accelerated adaptation. However, it is unclear whether listeners use auditory and visual feedback to improve discrimination of speech stimuli, or to learn the identity of speech stimuli. The present study investigated the effects of training with lexical and nonlexical labels on NH listeners’ perceptual adaptation to spectrally degraded and spectrally shifted vowels. An eight-channel sine wave vocoder was used to simulate CI speech processing. Two degrees of spectral shift (moderate and severe shift) were studied with three training paradigms, including training with lexical labels (i.e., “hayed,” “had,” “who’d,” etc.), training with nonlexical labels (i.e., randomly assigned letters “f,” “b,” “g,” etc.), and repeated testing with lexical labels (i.e., “test-only” paradigm without feedback). All training and testing was conducted over 5 consecutive days, with two to four training exercises per day. Results showed that with the test-only paradigm, lexically labeled vowel recognition significantly improved for moderately shifted vowels; however, there was no significant improvement for severely shifted vowels. Training with nonlexical labels significantly improved the recognition of nonlexically labeled vowels for both shift conditions; however, this improvement failed to generalize to lexically labeled vowel recognition with severely shifted vowels. Training with lexical labels significantly improved lexically labeled vowel recognition with severely shifted vowels. These results suggest that storage and retrieval of speech patterns in the central nervous system is somewhat robust to tonotopic distortion and spectral degradation. Although training with nonlexical labels may improve discrimination of spectrally distorted peripheral patterns, lexically meaningful feedback is needed to identify these peripheral patterns. The results also suggest that training with lexically meaningful feedback may be beneficial to CI users, especially patients with shallow electrode insertion depths.
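A sine-wave vocoder of the kind used above to simulate CI processing splits the signal into a few frequency bands, extracts each band's amplitude envelope, and uses it to modulate a sine at the band center. The abstract does not specify the filter design, so this numpy-only sketch substitutes FFT-bin masking for the band filters and rectify-and-smooth for the envelope detector; it is a crude illustration, not the study's implementation:

```python
import numpy as np

def sine_vocoder(x, fs, n_ch=8, lo=100.0, hi=6000.0, env_ms=10.0):
    """Crude n-channel sine-wave vocoder simulation of CI speech processing.

    Bands are log-spaced between lo and hi. Each band is isolated by zeroing
    FFT bins outside it, its envelope is the smoothed rectified band signal,
    and the envelope modulates a sine at the band's geometric center.
    Shifting the carrier frequencies upward relative to the analysis bands
    would simulate the spectral shift of a shallow electrode insertion.
    """
    x = np.asarray(x, float)
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(lo, hi, n_ch + 1)
    t = np.arange(n) / fs
    win = max(1, int(fs * env_ms / 1000.0))
    kernel = np.ones(win) / win                     # moving-average smoother
    out = np.zeros(n)
    for k in range(n_ch):
        band = np.where((freqs >= edges[k]) & (freqs < edges[k + 1]), X, 0)
        band_sig = np.fft.irfft(band, n)
        env = np.convolve(np.abs(band_sig), kernel, mode="same")
        fc = np.sqrt(edges[k] * edges[k + 1])       # geometric center freq
        out += env * np.sin(2 * np.pi * fc * t)
    return out

fs = 16000
t = np.arange(int(0.1 * fs)) / fs
vowel_like = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
y = sine_vocoder(vowel_like, fs)
print(len(y) == len(vowel_like), np.all(np.isfinite(y)))
```

The output preserves only the per-band envelopes of the input, discarding fine spectral structure, which is why vocoded speech is a standard NH simulation of CI listening.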

11.
This study investigated prosodic variables of syllable stress and intonation contours in contextual speech produced during simultaneous communication (SC). Ten normal-hearing, experienced sign language users were recorded under SC and speech only (SO) conditions speaking a set of sentences containing stressed versus unstressed versions of the same syllables and a set of sentences containing interrogative versus declarative versions of the same words. Results indicated longer sentence durations for SC than SO for all speech materials. Vowel duration and fundamental frequency differences between stressed and unstressed syllables as well as intonation contour differences between declarative and interrogative sentences were essentially the same in both SC and SO conditions. The conclusion that prosodic rules were not violated in SC is consistent with previous research indicating that temporal alterations produced by simultaneous communication do not involve violations of other temporal rules of English speech.
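The fundamental frequency differences examined above can be measured frame by frame with a pitch estimator; the abstract does not say which analysis was used, so here is a generic autocorrelation F0 sketch on a synthetic voiced frame, for illustration only:

```python
import numpy as np

def estimate_f0(x, fs, fmin=75.0, fmax=400.0):
    """Estimate F0 of a voiced frame by autocorrelation peak picking."""
    x = np.asarray(x, float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lo = int(fs / fmax)            # shortest period considered
    hi = int(fs / fmin)            # longest period considered
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.05 * fs)) / fs
frame = np.sin(2 * np.pi * 120 * t)   # synthetic 120 Hz "voiced" frame
print(round(estimate_f0(frame, fs)))
```

Running such an estimator over stressed versus unstressed syllables, or over declarative versus interrogative sentences, yields the F0 contrasts the study compared across SC and SO conditions.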

12.
This study investigated temporal characteristics of speech produced during simultaneous communication (SC) by inexperienced signers. Recordings of stimulus words embedded in sentences produced with speech-only versus SC were made by 12 students during the first and last weeks of an introductory sign language course. Results indicated significant temporal differences between speech-only and SC conditions during both the first week and the last week of the class. Inexperienced signers appeared to sign between words in SC during the first week of the class, thereby extending interword intervals. At the last week of the class, they appeared to shift toward simultaneously signing while producing the words, thereby elongating segmental temporal characteristics, such as vowel duration. The specific temporal differences between SC and speech-only conditions were consistent with previous findings regarding the effect of SC on temporal characteristics of speech with experienced signers.

13.
This study examined the perceived changes in vowel articulation by profoundly deaf children as a function of the method of teaching: with visual feedback provided by the Computer Vowel Trainer (CVT) versus conventional methods. The assessment carried out by experienced listeners consisted of marking the sounds heard on the vowel quadrilateral. It was found that changes in perception were feedback and age dependent: younger children taught with the CVT were perceived as displaying more mobility in articulation, and they approximated the target vowels more closely than their control counterparts or older children. Progress was evident in particular for back and central vowels. Analysis of perceived discrepancies between target and judged vowels also suggested that visual feedback was beneficial: perception of experimental children's utterances showed a marked reduction in substitutions with central vowels, a characteristic pattern of deaf speech. Comparison of these findings with the results yielded by the judgement of the same items by naive listeners indicated broad agreement between the two categories of assessors. Results were discussed in terms of the perceptual and articulatory intervening variables with reference to the specific advantages and constraints imposed by evaluating vowel quality on the vowel plane.

14.
PURPOSE: Studies on speech perception training have shown that adult 2nd language learners can learn to perceive non-native consonant contrasts through laboratory training. However, research on perception training for non-native vowels is still scarce, and none of the previous vowel studies trained more than 5 vowels. In the present study, the influence of training set sizes was investigated by training native Japanese listeners to identify American English (AE) vowels. METHOD: Twelve Japanese learners of English were trained for 9 days on either 9 AE monophthongs (fullset training group) or the 3 most difficult vowels (subset training group). Five listeners served as controls and received no training. Performance of listeners was assessed before and after training as well as 3 months after training was completed. RESULTS: Results indicated that (a) fullset training using 9 vowels in the stimulus set improved average identification by 25%; (b) listeners in both training groups generalized improvement to untrained words and tokens spoken by novel speakers; and (c) both groups maintained improvement after 3 months. However, the subset group never improved on untrained vowels. CONCLUSIONS: Training protocols for learning non-native vowels should present a full set of vowels and should not focus only on the more difficult vowels.

15.
Liu C, Jin SH. Hearing Research. 2011;282(1-2):49-55.
The purpose of this study was to evaluate whether there were significant differences in audibility of American English vowels in noise produced by non-native and native speakers. Detection thresholds for 12 English vowels with equalized durations of 170 ms produced by 10 English-, Chinese- and Korean-native speakers were measured for young normal-hearing English-native listeners in the presence of speech-shaped noise presented at 70 dB SPL. Similar patterns of vowel detection thresholds as a function of the vowel category were found for native and non-native speakers, with the highest thresholds for /u/ and /?/ and lowest thresholds for /i/ and /e/. In addition, vowel detection thresholds for non-native speakers were significantly lower and showed greater speaker variability than those for native speakers. Thresholds for vowel detection predicted from an excitation-pattern model corresponded well to behavioral thresholds, implying that vowel detection was primarily determined by the vowel spectrum regardless of speaker language background. Both behavioral and predicted thresholds showed that vowel audibility was similar or even better for non-native speakers than for native speakers, indicating that vowel audibility did not account for non-native speakers' lower-than-native intelligibility in noise. Effects of non-native speakers' English proficiency level on vowel audibility are discussed.

16.
This study determined whether listeners with hearing loss received reduced benefits due to an onset asynchrony between sounds. Seven normal-hearing listeners and 7 listeners with hearing impairment (HI) were presented with 2 synthetic, steady-state vowels. One vowel (the late-arriving vowel) was 250 ms in duration, and the other (the early-arriving vowel) varied in duration between 350 and 550 ms. The vowels had simultaneous offsets, and therefore an onset asynchrony between the 2 vowels ranged between 100 and 300 ms. The early-arriving and late-arriving vowels also had either the same or different fundamental frequencies. Increases in onset asynchrony and differences in fundamental frequency led to better vowel-identification performance for both groups, with listeners with HI benefiting less from onset asynchrony than normal-hearing listeners. The presence of fundamental frequency differences did not influence the benefit received from onset asynchrony for either group. Excitation pattern modeling indicated that the reduced benefit received from onset asynchrony was not easily predicted by the reduced audibility of the vowel sounds for listeners with HI. Therefore, suprathreshold factors such as loss of the cochlear nonlinearity, reduced temporal integration, and the perception of vowel dominance probably play a greater role in the reduced benefit received from onset asynchrony in listeners with HI.

17.
Experiments with simultaneous and time lag dichotic listening conditions were used to test two hypotheses concerning the right ear advantage and lag effect in dichotic listening. One hypothesis is based on the similarity of acoustic spectra, and the other is based on a categorization of speech sounds as being either encoded or not encoded. Natural vowels and consonant-vowel syllables were used to obtain seven different types of speech stimuli: stop vowel syllables, fricative vowel syllables, stop burst noise, fricative noise, stop vowel transitions, fricative vowel transitions and steady state vowels. The presentation conditions were monaural, simultaneous dichotic, and dichotic with interaural time delays of 15, 30, 60, and 90 msec. With monaural presentations, all stimuli were identifiable above chance levels. For the simultaneous dichotic condition, significant right ear advantages occurred for stop vowel syllables, fricative vowel syllables, stop burst noise, and steady state vowels. For the time lag conditions, stop vowel syllables, stop bursts, and fricative noise produced consistent lag effects, but steady state vowels produced consistent lead effects. In general, the results gave stronger support to the hypothesis of acoustic similarity than to the encoding hypothesis in that stop burst noise produced both a right ear advantage and a lag effect whereas consonant-vowel transitions produced neither a right ear advantage nor a lag effect.

18.
Communication mode use in the dyadic conversational speech of adolescent simultaneous communication (SC)-trained hearing-impaired twin boys was investigated. Proportional frequencies of modes and the English structural characteristics of the spoken components of utterances produced in each mode were examined. The results indicated that these adolescents were using an integrated bimodal form of English with a grammatical base that did not vary as a function of the presence or absence of simultaneous signs either in their speech or their partner's speech. Implications of the results are discussed.

19.
This study investigates covariation of perception and production of vowel contrasts in speakers who use cochlear implants and identification of those contrasts by listeners with normal hearing. Formant measures were made of seven vowel pairs whose members are neighboring in acoustic space. The vowels were produced in carrier phrases by 8 postlingually deafened adults, before and after they received their cochlear implants (CI). Improvements in a speaker's production and perception of a given vowel contrast and normally hearing listeners' identification of that contrast in masking noise tended to occur together. Specifically, speakers who produced vowel pairs with reduced contrast in the pre-CI condition (measured by separation in the acoustic vowel space) and who showed improvement in their perception of these contrasts post-CI (measured with a phoneme identification test) were found to have enhanced production contrasts post-CI in many cases. These enhanced production contrasts were associated, in turn, with enhanced masked word recognition, as measured from responses of a group of 10 normally hearing listeners. The results support the view that restoring self-hearing allows a speaker to adjust articulatory routines to ensure sufficient perceptual contrast for listeners.

20.
PURPOSE: This study explored vowel production and adaptation to articulatory constraints in adults with acquired apraxia of speech (AOS) plus aphasia. METHOD: Five adults with acquired AOS plus aphasia and 5 healthy control participants produced the vowels [i], [epsilon], and [ae] in four word-length conditions in unconstrained and bite block conditions. In addition to acoustic and perceptual measures of vowel productions, individually determined idealized vowels based on each participant's best performance were used to assess vowel accuracy and distinctiveness. RESULTS: Findings showed (a) clear separation of vowel formants in speakers with AOS; (b) impaired vowel production in speakers with AOS, shown by perceptual measures of vowel quality and acoustic measures of vowel accuracy and contrastivity; and (c) incomplete compensation to the bite block both for individuals with AOS and for healthy controls. CONCLUSIONS: Although adults with AOS were less accurate overall in vowel production than unimpaired speakers, introduction of a bite block resulted in similar patterns of decreased vowel accuracy for the two groups. Findings suggest that feedback control for vowel production is relatively intact in these individuals with AOS and aphasia. Predominant use of feedback control mechanisms is hypothesized to account for characteristic vowel deficits of the disorder.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号