Similar Literature
20 similar documents found.
1.
Previous research on the visual reception of fingerspelled English suggests that communication rates are limited primarily by constraints on production. Studies of artificially accelerated fingerspelling indicate that reception of fingerspelled sentences is highly accurate for rates up to 2 to 3 times those that can be produced naturally. The current paper reports on the results of a comparable study of the reception of American Sign Language (ASL). Fourteen native deaf ASL signers participated in an experiment in which videotaped productions of isolated ASL signs or ASL sentences were presented at normal playback speed and at speeds of 2, 3, 4, and 6 times normal speed. For isolated signs, identification scores decreased from 95% correct to 46% correct across the range of rates that were tested; for sentences, the ability to identify key signs decreased from 88% to 19% over the range of rates tested. The results indicate a breakdown in processing at around 2.5-3 times the normal rate, as evidenced both by a substantial drop in intelligibility in this region and by a shift in error patterns away from semantic and toward formational errors. These results parallel those obtained in previous studies of the intelligibility of the auditory reception of time-compressed speech and the visual reception of accelerated fingerspelling. Taken together, these results suggest a modality-independent upper limit to language processing.

2.
Experiments with keyboard arrangements of letters show that simple alphabetic letter-key sequences with 4 to 5 letters in a row lead to the most rapid visual search performance. Such arrangements can be used on keyboards operated by the index finger of one hand. Arrangement of letters in words offers a promising alternative because these arrangements can be readily memorized and can result in small interletter distances on the keyboard for frequently occurring letter sequences. Experiments on operation of keyboards show that a space or shift key operated by the left hand (which also holds the communication device) results in faster keyboard operation than when space or shift keys on the front of the keyboard (operated by the right hand) are used. Special problems of the deaf-blind are discussed. Keyboard arrangements are investigated, and matching tactual codes are suggested.
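As a hedged illustration of the layout criterion described above, the sketch below computes a frequency-weighted interletter travel distance for a single-finger keyboard. The grid layout, the toy bigram frequencies, and the function name are assumptions for illustration, not details from the study.

```python
# Sketch: expected single-finger travel distance for a keyboard layout,
# weighted by how often each letter pair occurs in sequence.

# Alphabetic layout with 5 letters per row: letter -> (row, col).
layout = {ch: divmod(i, 5) for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}

# Toy bigram frequencies (assumed values, not real corpus statistics).
bigram_freq = {("t", "h"): 3.56, ("h", "e"): 3.07, ("i", "n"): 2.43}

def expected_distance(layout, bigram_freq):
    """Frequency-weighted Euclidean distance between consecutive letters."""
    total = sum(bigram_freq.values())
    dist = 0.0
    for (a, b), f in bigram_freq.items():
        (r1, c1), (r2, c2) = layout[a], layout[b]
        dist += (f / total) * ((r1 - r2) ** 2 + (c1 - c2) ** 2) ** 0.5
    return dist

print(f"expected travel: {expected_distance(layout, bigram_freq):.2f} key widths")
```

A word-based layout would be compared against the alphabetic one by evaluating the same metric on both; the arrangement with the smaller expected distance favors frequent letter sequences.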

3.
In the Tadoma method of communication, deaf-blind individuals receive speech by placing a hand on the face and neck of the talker and monitoring actions associated with speech production. Previous research has documented the speech perception, speech production, and linguistic abilities of highly experienced users of the Tadoma method. The current study was performed to gain further insight into the cues involved in the perception of speech segments through Tadoma. Small-set segmental identification experiments were conducted in which the subjects' access to various types of articulatory information was systematically varied by imposing limitations on the contact of the hand with the face. Results obtained on 3 deaf-blind, highly experienced users of Tadoma were examined in terms of percent-correct scores, information transfer, and reception of speech features for each of sixteen experimental conditions. The results were generally consistent with expectations based on the speech cues assumed to be available in the various hand positions.
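The "information transfer" measure named above is the standard confusion-matrix statistic from Miller and Nicely (1955). A minimal sketch of that computation follows; the matrix values are invented for illustration.

```python
import numpy as np

def information_transfer(confusions):
    """Estimated mutual information (bits) between stimulus and response,
    computed from a confusion matrix of counts."""
    p = confusions / confusions.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginal probabilities
    pr = p.sum(axis=0, keepdims=True)   # response marginal probabilities
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(p > 0, p / (ps * pr), 1.0)  # log2(1) = 0 for empty cells
    return float((p * np.log2(ratio)).sum())

def relative_information_transfer(confusions):
    """Information transfer normalized by the stimulus entropy."""
    p = confusions / confusions.sum()
    ps = p.sum(axis=1)
    h_stim = float(-(ps[ps > 0] * np.log2(ps[ps > 0])).sum())
    return information_transfer(confusions) / h_stim

# Toy 3x3 confusion matrix (rows: stimuli, columns: responses).
conf = np.array([[18.0, 1.0, 1.0], [2.0, 16.0, 2.0], [0.0, 3.0, 17.0]])
print(f"relative IT = {relative_information_transfer(conf):.2f}")
```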

4.
OBJECTIVE: The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. DESIGN: The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. RESULTS: For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). CONCLUSIONS: The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.

5.
Although results obtained with the Tadoma method of speechreading have set a new standard for tactual speech communication, they are nevertheless inferior to those obtained in the normal auditory domain. Speech reception through Tadoma is comparable to that of normal-hearing subjects listening to speech under adverse conditions corresponding to a speech-to-noise ratio of roughly 0 dB. The goal of the current study was to demonstrate improvements to speech reception through Tadoma through the use of supplementary tactual information, thus leading to a new standard of performance in the tactual domain. Three supplementary tactual displays were investigated: (a) an articulatory-based display of tongue contact with the hard palate; (b) a multichannel display of the short-term speech spectrum; and (c) tactual reception of Cued Speech. The ability of laboratory-trained subjects to discriminate pairs of speech segments that are highly confused through Tadoma was studied for each of these supplementary displays. Generally, discrimination tests were conducted for Tadoma alone, the supplementary display alone, and Tadoma combined with the supplementary tactual display. The results indicated that the tongue-palate contact display was an effective supplement to Tadoma for improving discrimination of consonants, but that neither the tongue-palate contact display nor the short-term spectral display was highly effective in improving vowel discriminability. For both vowel and consonant stimulus pairs, discriminability was nearly perfect for the tactual reception of the manual cues associated with Cued Speech. Further experiments on the identification of speech segments were conducted for Tadoma combined with Cued Speech. The observed data for both discrimination and identification experiments are compared with the predictions of models of integration of information from separate sources.

6.
Speech and language training for deaf children at our clinic is performed using a multisensory method (the Kanazawa Method), which consists of reception and expression training in sign language and fingerspelling as well as auditory training, lip reading, and written language training. We have already reported that acquisition of written language is not dependent on oral language, and that written language is easier than oral language for deaf children to learn. In the present investigation, we analyzed the acquisition of receptive and expressive vocabulary in sign language and fingerspelling. The subjects were two congenitally deaf children with hearing levels greater than 105 dB. Language samples recorded up to the age of 48 months were analyzed. Acquisition of sign language was found to be significantly easier than acquisition of oral language. Early development of expressive nouns, function words, and Wh-question words in sign language was almost equivalent to that of hearing peers, and the sign language then appeared to transfer to oral language. These results suggest that early presentation of sign language together with written and oral language is effective for acquiring communicative attitudes, function words, and interrogative sentences, which are the most difficult areas for the hearing-impaired, and that it serves to promote acquisition of oral language.

7.
OBJECTIVES: The purpose of this study was to examine characteristics of eye gaze behavior, specifically eye fixations, during reception of simultaneous communication (SC). SC was defined as conceptually accurate and semantically based signs and fingerspelling used in conjunction with speech. Specific areas of focus were (1) the pattern of frequency, duration, and location of observers' eye fixations in relation to the critical source of disambiguating information (signs or speech) in SC, and (2) how the pattern of an observer's eye fixations was related to the source of critical information (sign or speech), expectations regarding the location of the critical information after exposure to the stimulus set, observer characteristics, and sender. DESIGN: The investigation used eye tracking technology to monitor eye fixations of observers who watched silent video clips of sentences rendered in SC by three senders. Each sentence contained one of a pair of sign-critical (e.g., "sleeves"/"leaves") or speech-critical (e.g., "invited"/"hired") contrast items designed to depend on information at the hands or mouth, respectively, to resolve its ambiguity. Observers were 20 adults: five faculty/staff with early onset deafness, five faculty/staff with normal hearing, and ten college students with early onset deafness. Faculty and staff were identified by a sign language assessment specialist to be experienced and skillful users of SC. Students, exposed to SC in classroom instruction, were recruited through paper and electronic ads. RESULTS: Generally, observers looked toward the face, regardless of whether signs or speech disambiguated the message, suggesting that eye fixations toward the hands of the sender are not necessary to apprehend essential information to accurately identify an ambiguous part of the message during SC. However, other aspects of eye behavior indicated sensitivity to type of critical contrast. In particular, fixations were shorter during sign-critical items compared to speech-critical items, even after adjusting for stimulus length. In addition, experienced, adult deaf users of SC made more, brief eye fixations than observers who had normal hearing. Finally, differences in eye fixation patterns toward different senders indicate that sender characteristics affect visual processes in SC perception. CONCLUSIONS: This study provides supportive evidence of brief, frequent eye movements by deaf perceivers over small areas of a video display during reception of visuospatial linguistic information. These movements could be used to enhance activation of brain centers responsible for processing motion, consistent with neurophysiological evidence of attentional mechanisms or visual processes unique to perception of a visual language.
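As a hedged illustration of the kind of fixation summary described above (not the study's actual pipeline), one can bin fixations into face and hands regions and report count and mean duration per region. The region boxes and fixation records below are hypothetical.

```python
# Hypothetical areas of interest, (x0, y0, x1, y1) in video pixels.
FACE = (220, 60, 420, 300)
HANDS = (120, 300, 520, 640)

def region(x, y):
    """Classify a fixation location into face, hands, or other."""
    for name, (x0, y0, x1, y1) in (("face", FACE), ("hands", HANDS)):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def summarize(fixations):
    """fixations: iterable of (x, y, duration_ms) tuples."""
    stats = {}
    for x, y, dur in fixations:
        stats.setdefault(region(x, y), []).append(dur)
    return {k: {"count": len(v), "mean_ms": sum(v) / len(v)} for k, v in stats.items()}

print(summarize([(300, 150, 240), (310, 180, 190), (260, 420, 90)]))
```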

8.
9.
This study reports on the linguistic abilities of 3 adult deaf-blind subjects. The subjects perceive spoken language through touch, placing a hand on the face of the speaker and monitoring the speaker's articulatory motions, a method of speechreading known as Tadoma. Two of the subjects, deaf-blind since infancy, acquired language and learned to speak through this tactile system; the third subject has used Tadoma since becoming deaf-blind at age 7. Linguistic knowledge and productive language are analyzed, using standardized tests and several tests constructed for this study. The subjects' language abilities prove to be extensive, comparing favorably in many areas with hearing individuals. The results illustrate a relatively minor effect of limited language exposure on eventual language achievement. The results also demonstrate the adequacy of the tactile sense, in these highly trained Tadoma users, for transmitting information about spoken language sufficient to support the development of language and learning to produce speech.

10.
This study compared the speech-in-noise perception abilities of children with and without diagnosed learning disabilities (LDs) and investigated whether naturally produced clear speech yields perception benefits for these children. A group of children with LDs (n = 63) and a control group of children without LDs (n = 36) were presented with simple English sentences embedded in noise. Factors that varied within participants were speaking style (conversational vs. clear) and signal-to-noise ratio (-4 dB vs. -8 dB); talker (male vs. female) varied between participants. Results indicated that the group of children with LDs had poorer overall sentence-in-noise perception than the control group. Furthermore, both groups had poorer speech perception with decreasing signal-to-noise ratio; however, the children with LDs were more adversely affected by a decreasing signal-to-noise ratio than the control group. Both groups benefited substantially from naturally produced clear speech, and for both groups, the female talker evoked a larger clear speech benefit than the male talker. The clear speech benefit was consistent across groups; required no listener training; and, for a large proportion of the children with LDs, was sufficient to bring their performance within the range of the control group with conversational speech. Moreover, an acoustic comparison of conversational-to-clear speech modifications across the two talkers provided insight into the acoustic-phonetic features of naturally produced clear speech that are most important for promoting intelligibility for this population.
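For readers unfamiliar with how a -4 or -8 dB signal-to-noise ratio is constructed, a minimal sketch follows. The RMS-scaling rule is standard; the synthetic signals are stand-ins for recorded sentences and babble.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that 20*log10(rms(speech)/rms(noise)) == snr_db."""
    noise = noise[: len(speech)]
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    noise = noise * rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + noise

# Toy usage with synthetic signals at 16 kHz.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
noise = rng.standard_normal(16000)
mixed = mix_at_snr(speech, noise, snr_db=-8.0)
```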

11.
In the present interview study on a sample of 13 deaf-blind participants (eight Usher patients and five with other diagnoses), all but one with some remaining visual function and all but two with a pure-tone average (PTA) exceeding 100 dB HL, an instrument was developed to assess discovery and localization abilities (DILO), compensatory use of sensory information, emotional and cognitive aspects of communication, and the preferred use of technical aids. Both qualitative and quantitative data were collected, and it was found that (1) the importance of early discovery of events and persons is rated high, (2) vision ranks higher than other sensory information, and airflow, smell and residual hearing come next in the perceptual world of this sample, (3) cognitive aspects of communication correlate with the importance of discovery and localization, and (4) technical aids dominated by vision and vibratory senses are preferred. It is concluded that even a small remaining visual function could be of significant importance in rehabilitation. Finally, in the deaf-blind group of subjects with some remaining visual function, utilization of remaining vision was felt to be more important than utilization of other sensory modalities.

12.
Objective: To introduce and verify an algorithm designed to administer adaptive speech-in-noise testing to a specified reliability at selectable points on the psychometric function. Design: Speech-in-noise performances were measured using BKB sentences presented in diffuse babble-noise, using morphemic scoring. The target of the algorithm was a test-retest standard deviation of 1.13 dB within the presentation of 32 sentences. Normal-hearing participants completed repeated measures using manual administration targeting 50% correct, and the automated procedure targeting 25%, 50%, and 75% correct. Aided hearing-impaired participants completed testing with the automated procedure targeting 25%, 50%, and 75% correct, repeating measurements at the 50% point three times. Study sample: Twelve normal-hearing and 63 hearing-impaired people who had English as their first language. Results: Relative to the manual procedure, the algorithm produced the same speech reception threshold in noise (p = 0.96) and lower test-retest reliability on normal-hearing listeners. Both groups obtained significantly different results at the three target points (p < 0.04) with observed reliability close to expected. Target accuracy was not reached within 32 sentences for 18% of measurements on hearing-impaired participants. Conclusions: The reliability of the algorithm was verified. A second test is recommended if the target variability is not reached during the first measurement.
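The published algorithm is not reproduced in the abstract; the sketch below shows one generic way an adaptive SNR track can target an arbitrary point on the psychometric function. Every detail here (the step rule, the burn-in, the scoring interface) is an assumption, not the authors' procedure.

```python
def run_track(score_sentence, p_target=0.5, snr_db=0.0, step=2.0, n_sentences=32):
    """Generic adaptive SNR track: present sentences, score the proportion of
    morphemes correct, and nudge the SNR toward the level yielding p_target."""
    levels = []
    for _ in range(n_sentences):
        levels.append(snr_db)
        p_obs = score_sentence(snr_db)        # proportion correct in [0, 1]
        snr_db -= step * (p_obs - p_target)   # above target -> harder (lower SNR)
    tail = levels[8:]                         # discard a burn-in phase
    return sum(tail) / len(tail)              # SRT estimate at p_target
```

A real implementation would additionally track the running standard error of the estimate and stop, or flag the run for retesting, once the target test-retest variability (the 1.13 dB criterion above) is or is not reached.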

13.
This study investigated prosodic variables of syllable stress and intonation contours in contextual speech produced during simultaneous communication (SC). Ten normal-hearing, experienced sign language users were recorded under SC and speech only (SO) conditions speaking a set of sentences containing stressed versus unstressed versions of the same syllables and a set of sentences containing interrogative versus declarative versions of the same words. Results indicated longer sentence durations for SC than SO for all speech materials. Vowel duration and fundamental frequency differences between stressed and unstressed syllables as well as intonation contour differences between declarative and interrogative sentences were essentially the same in both SC and SO conditions. The conclusion that prosodic rules were not violated in SC is consistent with previous research indicating that temporal alterations produced by simultaneous communication do not involve violations of other temporal rules of English speech.

14.
Audiovisual perception of speech in noise and masked written text
OBJECTIVE: The aim of this study was to examine the support obtained from degraded visual information in the comprehension of speech in noise. DESIGN: We presented sentences auditorily (speech reception threshold test), visually (text reception threshold test), and audiovisually. Presenting speech in noise and masked written text enabled the quantification and systematic variation of the amount of information presented in both modalities. Eighteen persons with normal hearing (aged 19 to 31 yr) participated. For half of them a bar pattern masked the text and for the other half random dots masked the text. The text was presented simultaneously or delayed relative to the speech. Using an adaptive procedure, the amount of information required for a correct reproduction of 50% of the sentences was determined for both the unimodal and the audiovisual stimuli. Bimodal support was defined as the difference between the observed bimodal performance and that predicted by an independent channels model. Nonparametric tests were used to evaluate the bimodal support and the effect of delaying the text. RESULTS: Masked text substantially supported the comprehension of speech in noise; the bimodal support ranged from 15% to 25% correct. A negative effect of delaying the text was observed in some conditions for the participants who were presented the text masked by the bar pattern. CONCLUSIONS: The ability of participants to reproduce bimodally presented sentences exceeds the performance as predicted by an independent channels model. This indicates that a relatively small amount of visual information can substantially augment speech comprehension in noise, which supports the use of visual information to improve speech comprehension by participants with hearing impairment, even if the visual information is incomplete.
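The abstract defines bimodal support relative to an independent channels model. A common form of such a model is probability summation over the two channels; the sketch below assumes that form, which may differ in detail from the authors' model, and the observed score is hypothetical.

```python
def independent_channels(p_speech, p_text):
    """Predicted bimodal proportion correct if the auditory and visual
    channels succeed or fail independently (probability summation)."""
    return p_speech + p_text - p_speech * p_text

# At the adaptive 50% points, each unimodal channel scores 0.5:
predicted = independent_channels(0.5, 0.5)   # 0.75
observed = 0.93                              # hypothetical bimodal score
bimodal_support = observed - predicted       # positive support, as in the study
```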

15.
Six 3-year-old language-disordered children were taught the relationship between semantic role and word order through either production or comprehension training. All 6 subjects successfully learned the relationship through production training, as indicated by their responses to a production probe and by their use of word order to express semantic role distinctions in their conversational speech. These subjects never used word order cues to decode semantically reversible sentences on comprehension tests, even after they were using word order appropriately in their conversational speech. Also, none of the subjects were able to learn word order through comprehension training. The results were interpreted to mean that the subjects could learn a word-order rule when taught to say sentences that contrast word order and meaning, but not when taught merely to respond to such sentences. The problem with the latter procedure may be that it requires a mental operation that is beyond the level of cognitive development of children under the age of 4.

16.
The present study is an investigation of complex sentence structures produced by school-age children in ordinary 100-utterance language samples. A total of 15 children with specific language impairment (SLI) and 15 of their classmates with typical language (TL) were the participants. Each child’s conversational sample was coded for several types of complex sentence structures. While a 100-utterance language sample was adequate to yield exemplars of several types of spoken syntactic complexity, findings raise concerns about the content validity of conversational language sampling in the assessment of spoken syntactic complexity. Results also indicated that, although the children with SLI produced fewer complex sentences as well as combined complex sentences than their classmates with TL, they produced some examples of most spoken complex sentence structures in their conversations. Implications for using conversational language sampling to assess complex syntax are discussed.

Learning outcomes

The reader will (a) explain the strengths and weaknesses of language sampling in assessment of spoken syntactic complexity in school-age children, and (b) describe differences between children with SLI and children with TL for spoken syntactic complexity in child–adult conversation, as well as how to account for those differences.


17.
Detailed speech analyses were performed on data from 61 speech-delayed children assessed by both a standard articulation test and a conversational speech sample. Statistically significant differences between the articulation accuracy profiles obtained from the two sampling modes were observed at all linguistic levels examined, including overall accuracy, phonological processes, individual phonemes, manner features, error-type, word position, and allophones. Established sounds were often produced more accurately in conversational speech, whereas emerging sounds were often produced more accurately in response to articulation test stimuli. Error patterns involving word-to-word transitions were available only in the context of continuous speech. A pass-fail analysis indicated that the average subject would receive similar clinical decisions from articulation testing and conversational speech sampling for an average of 71% of consonant sounds. Analyses of demographic, language, and speech variables did not yield any subject characteristics that were significantly associated with concordance rates in the two sampling modes. Discussion considers sources of variance for differences between sampling modes, including processes associated with both the speaker and the transcriber. In comparison to the validity of conversational speech samples for integrated speech, language, and prosodic analyses, articulation tests appear to yield neither typical nor optimal measures of speech performance.
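The pass-fail concordance figure above is simple percent agreement across consonant sounds. A toy sketch, with all decisions invented for illustration:

```python
def concordance(test_decisions, sample_decisions):
    """Proportion of consonants given the same pass/fail decision by
    articulation testing and by conversational speech sampling."""
    pairs = list(zip(test_decisions, sample_decisions))
    return sum(a == b for a, b in pairs) / len(pairs)

# 24 consonants with hypothetical pass (True) / fail (False) calls.
test = [True] * 17 + [False] * 7
conversation = [True] * 15 + [False] * 9
print(f"concordance = {concordance(test, conversation):.0%}")   # 92% here
```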

18.
Previous studies (Picheny, Durlach, & Braida, 1985, 1986) have demonstrated that substantial intelligibility differences exist for hearing-impaired listeners for speech spoken clearly compared to speech spoken conversationally. This paper presents the results of a probe experiment intended to determine the contribution of speaking rate to the intelligibility differences. Clear sentences were processed to have the durational properties of conversational speech, and conversational sentences were processed to have the durational properties of clear speech. Intelligibility testing with hearing-impaired listeners revealed both sets of materials to be degraded after processing. However, the degradation could not be attributed to processing artifacts because reprocessing the materials to restore their original durations produced intelligibility scores close to those observed for the unprocessed materials. We conclude that the simple processing to alter the relative durations of the speech materials was not adequate to assess the contribution of speaking rate to the intelligibility differences; further studies are proposed to address this question.
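A hedged sketch of duration-only processing in the spirit of this experiment, using a phase-vocoder time stretch from librosa as a modern stand-in (the original processing differed, and was applied segment by segment rather than uniformly). File names and the stretch factor are assumptions.

```python
import librosa
import soundfile as sf

# Load a conversational sentence (hypothetical file).
y, sr = librosa.load("conversational_sentence.wav", sr=None)

# rate < 1 lengthens the utterance: impose clear-speech-like durations...
slowed = librosa.effects.time_stretch(y, rate=0.7)
# ...then reprocess to restore the original durations, as in the control check.
restored = librosa.effects.time_stretch(slowed, rate=1 / 0.7)

sf.write("restored_sentence.wav", restored, sr)
```

The control logic mirrors the abstract: if the round-trip version scores close to the unprocessed original, any degradation in the singly processed materials cannot be blamed on processing artifacts alone.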

19.
This research explored the relationship between sentence disruptions and the length and complexity of sentences spoken by children developing grammar. The study was cross-sectional in design and used samples of naturalistic, conversational interaction between 26 typically developing children (ages 2;6 to 4;0) and a primary caregiver. The active, declarative sentences produced by these children were coded for the presence of disruption, length in morphemes and words, and clausal complexity. The results showed that, for the majority of the children, disrupted sentences tended to be longer and more complex than fluent sentences. The magnitude of the differences in length and complexity was positively correlated with the children's grammatical development, as measured by the Index of Productive Syntax. It was also found that differences between the average complexity of disrupted versus fluent sentences increased with grammatical development even when sentence length was held constant. As grammatical development proceeded, disrupted sentences were more apt to be sentences on the "leading-edge" of the child's production capacity. Although these more advanced grammatical structures are part of the child's grammatical competence, the child cannot produce these sentences without an increased risk of processing difficulty. The results are congruent with proposals concerning the incremental and procedural nature of adult sentence production.

20.
This study investigated the acoustical and perceptual characteristics of vowels in speech produced during simultaneous communication (SC). Twelve normal-hearing, experienced sign language users were recorded under SC and speech alone (SA) conditions speaking a set of sentences containing monosyllabic words designed for measurement of vowel duration, formant frequencies, and fundamental frequency in consonant-vowel-consonant (CVC) syllables, and 60 listeners audited the speech samples. Although results indicated longer sentence and vowel durations for SC than SA, the data showed no difference in spectral characteristics of vowels produced during SC versus SA, indicating no degradation of vowel spectrum by rate alteration during SC. Further, no difference was found in listeners' ability to identify vowels produced during SC versus SA, indicating no degradation of vowel perceptual cues during SC. These conclusions are consistent with previous research indicating that temporal alterations produced by SC do not produce degradation of segmental acoustical characteristics of spoken English. LEARNING OUTCOMES: As a result of this activity, the participant will be able to (1) describe simultaneous communication; (2) explain the role of simultaneous communication in communication with children who are deaf; (3) describe vowel acoustics in English speech; (4) discuss methods of measuring vowel perception; (5) specify the acoustic characteristics of vowels produced during simultaneous communication; and (6) specify the ability of listeners to perceive vowels in speech produced during simultaneous communication.
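As an illustration of the acoustic measurements named above (vowel duration, formant frequencies, fundamental frequency), here is a sketch using the praat-parselmouth package. The tooling, file name, and midpoint heuristic are assumptions, not the study's method.

```python
import parselmouth  # pip install praat-parselmouth

snd = parselmouth.Sound("cvc_token.wav")      # hypothetical CVC recording
pitch = snd.to_pitch()
formants = snd.to_formant_burg()

t = snd.duration / 2                          # crude vowel midpoint estimate
f0 = pitch.get_value_at_time(t)               # fundamental frequency (Hz)
f1 = formants.get_value_at_time(1, t)         # first formant (Hz)
f2 = formants.get_value_at_time(2, t)         # second formant (Hz)
print(f"duration={snd.duration * 1000:.0f} ms, F0={f0:.0f}, F1={f1:.0f}, F2={f2:.0f} Hz")
```

In practice the vowel would be segmented first (e.g., from a TextGrid annotation) so that duration and midpoint refer to the vowel itself rather than the whole token.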
