Similar articles
20 similar articles found (search time: 16 ms)
1.
This 3-T fMRI study investigates brain regions similarly and differentially involved in listening to, and covert production of, singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved in these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the subject's tasks were to listen passively to singing of the song lyrics, listen passively to speaking of the song lyrics, covertly sing the visually presented song lyrics, covertly speak the visually presented song lyrics, and rest. The conjunction of passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus-induced auditory processing or of low-level articulatory motor control. Brain regions involved in both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of lobule VI of the posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition, for both the listening and covert production tasks, was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: the orbitofrontal cortex (listening task) and the subcallosal cingulate (covert production task). The results are consistent with the PT mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech.
Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest and the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
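As a rough illustration of this flipped-image laterality test (a generic sketch, not the authors' actual pipeline), a paired t test between each subject's contrast map and its left-right mirror can be written with NumPy/SciPy; the array shape, voxel-selection rule, and the assumption that flipping the x-axis swaps hemispheres are all hypothetical:

```python
import numpy as np
from scipy import stats

def laterality_t_test(contrast_maps):
    """Paired t test of subjects' contrast maps against their left-right
    mirror images, pooled over active voxels.

    contrast_maps: array of shape (n_subjects, x, y, z), assumed spatially
    normalized so that reversing the x-axis swaps hemispheres.
    """
    flipped = contrast_maps[:, ::-1, :, :]        # mirror across the midline
    # restrict to voxels active in the group mean of the original contrast
    active = contrast_maps.mean(axis=0) > 0
    orig_vals = contrast_maps[:, active].mean(axis=1)  # one value per subject
    flip_vals = flipped[:, active].mean(axis=1)
    t, p = stats.ttest_rel(orig_vals, flip_vals)  # paired across subjects
    return t, p
```

A positive t here would indicate stronger activity in the original than in the mirrored hemisphere at the selected voxels, i.e., lateralization toward the side carrying the active cluster.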

2.
Dysarthria in people with Parkinson's disease (PD) has been widely studied. However, a limited number of studies have investigated lingual function during speech production in this population. This study aimed to investigate lingual kinematics during speech production using electromagnetic articulography (AG-200 EMA). The PD group consisted of eight dysarthric speakers with PD and was matched with a group of eight controls. The tongue tip and tongue back movements of all participants during sentence production were recorded by EMA. Results showed that, perceptually, the participants with PD were mildly dysarthric. Kinematic results documented comparable (for alveolar sentence production) and increased (for velar sentence production) range of lingual movement in the PD group when compared to the control group. Lingual movement velocity, acceleration, and deceleration were also increased in the PD group, predominantly for the release phase of consonant production during sentence utterances. The PD group showed longer durations for alveolar consonant production and comparable durations for velar consonant production. The results of the present study suggest the presence of impaired lingual control in individuals with PD. Increased range of articulatory movement, primarily in the release phase of consonant production, may account for articulatory imprecision in this population.
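The kinematic measures reported here (movement range, peak velocity, acceleration, and deceleration) can be derived from a sampled articulator position trace by numerical differentiation. This is a generic sketch, not the study's analysis code; the function name, 1-D trace, and sampling rate are illustrative assumptions:

```python
import numpy as np

def lingual_kinematics(position, fs):
    """Movement range, peak velocity, and peak acceleration/deceleration
    from a 1-D articulator position trace sampled at fs Hz."""
    vel = np.gradient(position, 1.0 / fs)   # first derivative: velocity
    acc = np.gradient(vel, 1.0 / fs)        # second derivative: acceleration
    return {
        "range": position.max() - position.min(),
        "peak_velocity": np.abs(vel).max(),
        "peak_acceleration": acc.max(),     # speeding up
        "peak_deceleration": -acc.min(),    # slowing down
    }
```

Applied to, say, a tongue-back trace during a velar sentence, larger values of `range` and `peak_velocity` in a PD speaker than in a matched control would correspond to the pattern the abstract describes.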

3.
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual), heard but not seen (audio-alone), or seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the number of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.

4.
This paper presents an analysis of connected speech data from six older children (9;05 – 16;03 years) with persisting developmental speech impairments. It uses a combination of perceptual, electropalatographic and acoustic analyses to explore the interplay between articulatory accuracy and prosodic fluency in their speech production, and contrasts their connected speech production with their production of single words. Both normal and atypical word boundary behaviours are identified, and each child presents with a different profile of articulatory and prosodic behaviours which are not observable in their production of single words. The clinical implications of the differences between single word production and connected speech behaviour are discussed.

5.
The primary objective of this article is to study whether an assessment instrument specifically designed to assess speech motor control on word-level productions can add differential diagnostic speech characteristics between people who clutter and people who stutter. It was hypothesized that cluttering is a fluency disorder in which speech motor control at the word level is disturbed at high speech rates, resulting in errors in flow of speech and sequencing. An assessment instrument for speech motor coordination at the word level was developed and validated. In an elicitation procedure, repetitions of complex multisyllabic words at a fast speech rate were obtained from 47 dysfluent participants (mean age 24.3 yrs; SD 10.25; range 14.2–47.4) and 327 controls (mean age 25.56 yrs; SD 8.49; range 14.3–50.1). Speech production was judged on articulatory accuracy, smooth flow (coarticulation, flow and sequencing) and articulatory rate. Results from people who clutter (PWC) and people who stutter (PWS) were compared to normative data based on the control group. PWC produced significantly more flow and sequencing errors than PWS. Further research is needed to study speech motor control in the spontaneous speech of people who clutter.
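Scoring an individual against normative data from a control group, as done here when comparing PWC and PWS to the 327 controls, amounts to standardized scoring. A minimal sketch follows; the cutoff, function names, and example values are hypothetical, not taken from the instrument itself:

```python
from statistics import mean, stdev

def z_score(value, control_values):
    """Standardized score of an individual's measure relative to the
    control group's distribution (assumed roughly normal)."""
    mu, sd = mean(control_values), stdev(control_values)
    return (value - mu) / sd

def is_atypical(value, control_values, cutoff=2.0):
    """Flag a score falling more than `cutoff` SDs from the control mean,
    e.g. an elevated flow/sequencing error count."""
    return abs(z_score(value, control_values)) > cutoff
```

With such a scheme, a PWC whose sequencing-error count sits several standard deviations above the control mean would be flagged even if a PWS with the same overall dysfluency severity is not.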

6.
Children with and without speech, language and/or literacy impairment delete consonants when they name pictures to elicit single words. Consonant deletion seems to be more frequent in long words (words of three or more syllables) than in short words (words of one or two syllables). However, it may be missed in long words because they are not routinely assessed and, even if they are, there is little normative data about them. The study aims were (1) to determine whether a relationship exists between consonant deletion and the number of syllables in words, (2) to delimit variation in the number of children using it, its frequency of occurrence and the words it affects, and (3) to discuss the application of these data to clinical practice. The participants were 283 typically developing children, aged 3;0 to 7;11 years, speaking Australian English with proven normal language, cognition and hearing. They named pictures, yielding 166 selected words that were varied for syllable number, stress and shape and repeatedly sampled all consonants and vowels of Australian English. Almost all participants (95%) used consonant deletion. Whilst a relationship existed between consonant deletion frequency and the number of syllables in words, the syllable effect was interpreted as a proxy for an interaction of segmental and prosodic features that included two or more syllables, sonorant sounds, non-final weak syllables, within-word consonant sequences and/or anterior-posterior articulatory movements. Clinically, two or three deletions of consonants across the affected words may indicate typical behaviour for children up to the age of 7;11 years, but variations outside these tolerances may mark impairment. These results are further evidence to include long words in routine speech assessment.

7.
Sensory-motor interactions between auditory and articulatory representations in the dorsal auditory processing stream are suggested to contribute to speech perception, especially when bottom-up information alone is insufficient for purely auditory perceptual mechanisms to succeed. Here, we hypothesized that the dorsal stream responds more vigorously to auditory syllables when one is engaged in a phonetic identification/repetition task subsequent to perception compared to passive listening, and that this effect is further augmented when the syllables are embedded in noise. To this end, we recorded magnetoencephalography while twenty subjects listened to speech syllables, with and without noise masking, in four conditions: passive perception, overt repetition, covert repetition, and overt imitation. Compared to passive listening, left-hemispheric N100m equivalent current dipole responses were amplified and shifted posteriorly when perception was followed by a covert repetition task. Cortically constrained minimum-norm estimates showed amplified left supramarginal and angular gyri responses in the covert repetition condition at ~100 ms from stimulus onset. Longer-latency responses at ~200 ms were amplified in the covert repetition condition in the left angular gyrus and in all three active conditions in the left premotor cortex, with further enhancements when the syllables were embedded in noise. Phonetic categorization accuracy and magnitude of voice pitch change between overt repetition and imitation conditions correlated with left premotor cortex responses at ~100 and ~200 ms, respectively. Together, these results suggest that dorsal stream involvement in speech perception depends on perceptual task demands and that phonetic categorization performance is influenced by the left premotor cortex.

8.
Book review     
This commentary on Rose's review takes as its departure point her remarks regarding possible functional interactive neurocognitive networks underpinning gesture. Advances in the understanding of motor control pertinent to the topic of gesture in aphasia are discussed, in particular the importance of mirror neurones in gesture perception-production and cross-modal linkages in input and output. It is argued that, far from being simply an adjunct to communication, gesture perception and production (including articulatory, speech gestures) operate within a highly interactive system and form an integral part of comprehending what is happening in the environment and linking this to conceptual and motor representations and responses. Such networks offer fruitful insights into the role of gesture in facilitating word and speech movement retrieval in aphasia.

9.
An fMRI investigation of syllable sequence production
Bohland JW, Guenther FH. NeuroImage 2006, 32(2):821-841
Fluent speech comprises sequences that are composed from a finite alphabet of learned words, syllables, and phonemes. The sequencing of discrete motor behaviors has received much attention in the motor control literature, but relatively little has been focused directly on speech production. In this paper, we investigate the cortical and subcortical regions involved in organizing and enacting sequences of simple speech sounds. Sparse event-triggered functional magnetic resonance imaging (fMRI) was used to measure responses to preparation and overt production of non-lexical three-syllable utterances, parameterized by two factors: syllable complexity and sequence complexity. The comparison of overt production trials to preparation-only trials revealed a network related to the initiation of a speech plan, control of the articulators, and hearing one's own voice. This network included the primary motor and somatosensory cortices, auditory cortical areas, supplementary motor area (SMA), the precentral gyrus of the insula, and portions of the thalamus, basal ganglia, and cerebellum. Additional stimulus complexity led to increased engagement of the basic speech network and recruitment of additional areas known to be involved in sequencing non-speech motor acts. In particular, the left hemisphere inferior frontal sulcus and posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, the SMA and pre-SMA, the basal ganglia, anterior thalamus, and the cerebellum showed increased activity for more complex stimuli. We hypothesize mechanistic roles for the extended speech production network in the organization and execution of sequences of speech sounds.

10.
The ability to internally simulate other persons' actions is important for social interaction. In monkeys, neurons in the premotor cortex are activated both when the monkey performs mouth or hand actions and when it views or listens to actions made by others. Neuronal circuits with similar "mirror-neuron" properties probably exist in the human Broca's area and primary motor cortex. Viewing another person's hand actions also modulates activity in the primary somatosensory cortex SI, suggesting that the SI cortex is related to the human mirror-neuron system. To study the selectivity of the SI activation during action viewing, we stimulated the lower lip (with tactile pulses) and the median nerves (with electric pulses) in eight subjects to activate their SI mouth and hand cortices while the subjects either rested, listened to another person's speech, viewed her articulatory gestures, or executed mouth movements. The 55-ms SI responses to lip stimuli were enhanced by 16% (P<0.01) in the left hemisphere during speech viewing, whereas listening to speech did not modulate these responses. The 35-ms responses to median-nerve stimulation remained stable during speech viewing and listening. The subjects' own mouth movements suppressed responses to lip stimuli bilaterally by 74% (P<0.001), without any effect on responses to median-nerve stimuli. Our findings show that viewing another person's articulatory gestures activates the left SI cortex in a somatotopic manner. The results provide further evidence for the view that SI is involved in "mirroring" of other persons' actions.

11.
This study explored the relationship between phonology and syntax by examining the development of regular past tense endings in relation to the acquisition of the final consonant clusters needed to encode them correctly. Forty children between three and six years old were tested on their mastery of the past tense ending. Four groups of ten subjects, formed on the basis of degree of mastery, were then tested on their ability to produce final consonant clusters in words with and without the past tense ending. It was proposed that articulation of clusters would be poorer in words with the morphological ending and that children with better mastery of the morphological ending would experience fewer articulation difficulties when encoding this form. However, it was found that the presence of the past tense inflection had no significant effect on articulation. Furthermore, children with better mastery of past tense were not significantly better at encoding this form than were those with poor mastery. These results may have occurred because the subjects studied had almost mastered consonant clusters and this structure was therefore more resistant to syntactic influences. Furthermore, increasing syntactic complexity at the single word level did not appear to be significant enough to affect articulation.

12.
Young children's speech is compared to (a) adult-to-adult (A-A) normal speech, (b) adult-to-adult slow speech, and (c) adult-to-child (A-C) speech by measuring the duration and variability of each segment in consonant-vowel-consonant (CVC) words. The results demonstrate that child speech is more similar to A-C speech than to A-A slow speech in that it exhibits a large proportion of long vowel duration in a word. However, child speech differs from A-C speech in its more noticeable lengthening of consonants. In addition, child speech exhibits an inconsistent timing relationship across segments within a word, whereas durational variation in consonants and vowels was correlated in A-A and A-C speech. The results suggest that the temporal patterns of young children are quite different from those of adults, and provide some evidence for a lack of motor control capability and great variance in articulatory coordination.
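The duration-variability and cross-segment timing analyses described above can be sketched with per-segment coefficients of variation and a consonant-vowel correlation across repeated tokens. The token array, its (C, V, C) layout, and the function name are hypothetical, not taken from the study:

```python
import numpy as np

def duration_stats(durations):
    """durations: (n_tokens, 3) array of C, V, C segment durations (ms)
    for repeated productions of one CVC word.

    Returns per-segment coefficients of variation and the Pearson
    correlation between onset-consonant and vowel durations: high
    correlation suggests coordinated (adult-like) timing across segments,
    low correlation suggests independent, less controlled segment timing.
    """
    durations = np.asarray(durations, dtype=float)
    cv = durations.std(axis=0) / durations.mean(axis=0)
    r = np.corrcoef(durations[:, 0], durations[:, 1])[0, 1]
    return cv, r
```

Under this framing, the abstract's adult pattern corresponds to high `r` with modest `cv`, while the child pattern corresponds to low `r` with larger, less consistent `cv` values.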

13.
In this study the effect of phonotactic constraints on word-initial consonant clusters in children with delayed phonological acquisition was explored. Twelve German-speaking children took part (mean age 5;1). The spontaneous speech of all children was characterized by the regular appearance of the error patterns fronting, e.g., Kuh "cow" /ku:/ → [tu:], or stopping, e.g., Schaf "sheep" /ʃa:f/ → [ta:f], which were inappropriate for their chronological age. The children were asked to produce words (picture naming task, word repetition task) with initial consonant clusters in which the application of the error patterns would violate phonotactic sequence constraints. For instance, if fronting applied in /kl-/, e.g., Kleid "dress", it would be realized as the phonotactically illegal consonant cluster /tl-/. The results indicate that phonotactic constraints affect word production in children with delayed phonological development. Surprisingly, we found that children with fronting produced the critical consonants correctly significantly more often in word-initial consonant clusters than in words in which they appeared as singleton onsets. In addition, the results provide evidence for a similar developmental trajectory of acquisition in children with typical development and in children with delayed phonological acquisition.

14.
Traditionally, the left frontal and parietal lobes have been associated with language production while regions in the temporal lobe are seen as crucial for language comprehension. However, recent evidence suggests that the classical language areas constitute an integrated network where each area plays a crucial role in both speech production and perception. We used functional MRI to examine whether observing speech motor movements (without auditory speech) relative to non-speech motor movements preferentially activates the cortical speech areas. Furthermore, we tested whether the activation in these regions was modulated by task difficulty. This dissociates areas that are actively involved in speech perception from regions that show obligatory activation in response to speech movements (e.g. areas that automatically activate in preparation for a motoric response). Specifically, we hypothesized that regions involved in decoding oral speech would show increasing activation with increasing difficulty. We found that speech movements preferentially activate the frontal and temporal language areas. In contrast, non-speech movements preferentially activate the parietal region. Degraded speech stimuli increased both frontal and parietal lobe activity but did not differentially excite the temporal region. These findings suggest that the frontal language area plays a role in visual speech perception and highlight the differential roles of the classical speech and language areas in processing others' motor speech movements.

15.
Imaging speech production using fMRI
Gracco VL, Tremblay P, Pike B. NeuroImage 2005, 26(1):294-301
Human speech is a well-learned, sensorimotor, and ecological behavior ideal for the study of neural processes and brain-behavior relations. With the advent of modern neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), the potential for investigating neural mechanisms of speech motor control, speech motor disorders, and speech motor development has increased. However, a practical issue has limited the application of fMRI to spoken language production and other related behaviors (singing, swallowing). Producing these behaviors during volume acquisition introduces motion-induced signal changes that confound the activation signals of interest. A number of approaches, ranging from signal processing to the use of silent or covert speech, have attempted to remove or prevent the effects of motion-induced artefact. However, these approaches are flawed for a variety of reasons. An alternative approach, which has only recently been applied to study single-word production, uses pauses in volume acquisition during the production of natural speech motion. Here we present some representative data illustrating the problems associated with motion artefacts and some qualitative results acquired from subjects producing short sentences and orofacial nonspeech movements in the scanner. Using pauses or silent intervals in volume acquisition together with block designs, individual subjects show robust activation without motion-induced signal artefact. This approach is an efficient method for studying the neural basis of spoken language production and the effects of speech and language disorders using fMRI.
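The "pauses in volume acquisition" idea can be illustrated with a toy trial scheduler that interleaves silent speaking periods and scan acquisitions so that articulation never overlaps scanning. The trial counts, durations, and function name below are hypothetical, not the authors' protocol:

```python
def sparse_schedule(n_trials, speak_s, acquire_s, gap_s=0.5):
    """Onset times (s) for speech and volume acquisition in a sparse
    design: the scanner is silent while the subject speaks, then one
    volume is acquired, so speech motion never overlaps acquisition."""
    speech_onsets, scan_onsets = [], []
    t = 0.0
    for _ in range(n_trials):
        speech_onsets.append(t)
        t += speak_s + gap_s      # subject speaks during scanner silence
        scan_onsets.append(t)
        t += acquire_s + gap_s    # acquire one volume, then next trial
    return speech_onsets, scan_onsets
```

Because the hemodynamic response peaks several seconds after the utterance, a volume acquired just after the speaking period can still capture speech-related activation while avoiding the motion artefact itself.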

16.
A further study is reported from a program of research exploring the improvement of speech perception by hearing-impaired persons via enhancement of acoustic features of consonants (10,11). Enhancements were applied to certain acoustic segments of consonants, segments known to be useful in consonant perception by normal-hearing persons but often not for persons with severe/profound hearing losses. The consonants were /k/, /t/, /g/, and /d/, located as the final phoneme in /bæC/ words; the voicing feature difference of /k/ versus /g/ and /t/ versus /d/ was the focus of study. The results showed that stop voicing perception improved to at least 90 percent for three-quarters of the listeners when the voiced murmur segments during /d/ and /g/ and the release bursts of /t/ and /k/ were amplified above their natural levels. The audibility of the enhanced segments generally explained differences between the listeners who showed large versus minimal improvements. One training session for stop voicing perception with the cue-enhanced words seemed sufficient to effect maximum performance improvement.

17.
Meschyan G, Hernandez AE. NeuroImage 2006, 29(4):1135-1140
The purpose of the present functional magnetic resonance imaging (fMRI) investigation was to examine how language proficiency and orthographic transparency (letter-sound mapping consistency) modulate neural activity during bilingual single word reading. Spanish-English bilingual participants, more fluent in their second language (L2; English) than their native language (L1; Spanish), were asked to read words in the two languages. Behavioral results showed that participants were significantly slower in reading words in their less proficient language (Spanish) than in their more proficient language (English). fMRI results also revealed that reading words in the less proficient language yielded greater activity in the articulatory motor system, consisting of supplementary motor area/cingulate, insula, and putamen. Together, the behavioral and fMRI results suggest that the less practiced, hence less proficient, language requires greater articulatory motor effort, which results in slower reading rates. Moreover, we found that orthographic transparency also played a neuromodulatory role. More transparent Spanish words yielded greater activity in superior temporal gyrus (STG; BA 22), a region implicated in phonological processing, and orthographically opaque English words yielded greater activity in visual processing and word recoding regions, such as the occipito-parietal border and inferior parietal lobe (IPL; BA 40). Overall, our fMRI results suggest that the articulatory motor system is more plastic, hence, more amenable to change because of greater exposure to the L2. By contrast, we propose that our orthography effect is less plastic, hence, less influenced by frequency of exposure to a language system.

18.
Reactivation of motor brain areas during explicit memory for actions
Recent functional brain imaging studies have shown that sensory-specific brain regions that are activated during perception/encoding of sensory-specific information are reactivated during memory retrieval of the same information. Here we used PET to examine whether verbal retrieval of action phrases is associated with reactivation of motor brain regions if the actions were overtly or covertly performed during encoding. Compared to a verbal condition, encoding by means of overt as well as covert activity was associated with differential activity in regions in contralateral somatosensory and motor cortex. Several of these regions were reactivated during retrieval. Common to both the overt and covert conditions was reactivation of regions in left ventral motor cortex and left inferior parietal cortex. A direct comparison of the overt and covert activity conditions showed that activation and reactivation of left dorsal parietal cortex and right cerebellum was specific to the overt condition. These results support the reactivation hypothesis by showing that verbal-explicit memory of actions involves areas that are engaged during overt and covert motor activity.

19.
Wilson SM, Iacoboni M. NeuroImage 2006, 33(1):316-325
Neural responses to unfamiliar non-native phonemes varying in the extent to which they can be articulated were studied with functional magnetic resonance imaging (fMRI). Both superior temporal (auditory) and precentral (motor) areas were activated by passive speech perception, and both distinguished non-native from native phonemes, with greater signal change in response to non-native phonemes. Furthermore, speech-responsive motor regions and superior temporal sites were functionally connected. However, only in auditory areas did activity covary with the producibility of non-native phonemes. These data suggest that auditory areas are crucial for the transformation from acoustic signal to phonetic code, but the motor system also plays an active role, which may involve the internal generation of candidate phonemic categorizations. These 'motor' categorizations would then be compared to the acoustic input in auditory areas. The data suggest that speech perception is neither purely sensory nor motor, but rather a sensorimotor process.

20.
Tongue reduction surgery (TRS) has been advocated for children who have macroglossia associated with Beckwith-Wiedemann syndrome (BWS) to overcome, or reduce, the secondary effects of macroglossia. There are few reports describing the speech and oral motor characteristics in BWS, and no studies have systematically reported outcomes both pre- and post-operatively. The aims of this retrospective study were therefore: to systematically describe the speech and oral motor characteristics of this population pre- and post-operatively; to ascertain the effect TRS has on speech and oral motor skills; and to discuss the presence of additional factors which may influence speech and oral motor outcomes. Ten children with clinically confirmed BWS were assessed using a variety of standard clinical measures pre-operatively, three months post-operatively, and at long-term follow-up (mean age at follow-up after surgery 4.4 years). Results revealed that, pre-operatively, speech was predominantly characterized by linguolabialization of bilabial consonants and lingual blade production of alveolar and palatoalveolar consonants. These findings suggest that there are distinct articulatory errors caused by macroglossia. These errors were subsequently eliminated by TRS. Normal oral motor skills were present pre-operatively, and functional oral motor skills were found post-operatively, with the exception of one case with co-occurring neurological impairment. Speech impairment unrelated directly to the macroglossia also occurred in the cohort. Assessment measures should take other factors into account when considering the aetiology of speech and oral motor impairment in this population.
