Similar literature
20 similar results found (search time: 31 ms)
1.
Discourse, syntax, and prosody: the brain reveals an immediate interaction   (Cited by: 1; self-citations: 0; citations by others: 1)
Speech is structured into parts by syntactic and prosodic breaks. In locally syntactically ambiguous sentences, the detection of a syntactic break necessarily follows the detection of a corresponding prosodic break, making an investigation of the immediate interplay of syntactic and prosodic information impossible when studying sentences in isolation. This problem can be solved, however, by embedding sentences in a discourse context that induces the expectation of either the presence or the absence of a syntactic break right at a prosodic break. Event-related potentials (ERPs) elicited by acoustically identical sentences were compared across these different contexts. We found in two experiments that the closure positive shift, an ERP component known to be elicited by prosodic breaks, was reduced in size when a prosodic break was aligned with a syntactic break. These results establish that the brain matches prosodic information against syntactic information immediately.

2.
Spoken language comprehension involves the use of different sources of linguistic information such as prosodic, syntactic, lexical, and semantic information. The question, however, of 'when' and 'how' these sources of information are exploited by the language processing system still remains unanswered. In the present study, we used event-related brain potentials (ERPs) to investigate the interaction between prosodic, syntactic, and lexical information during the processing of spoken German sentences. The sentence structure was manipulated by positioning a split particle at the end of the sentences after the occurrence of an inflected verb whose lexical entry does not contain a split particle (e.g., *Sie alarmierte den Detektiv an [*She alerted at the detective]). [According to linguistic convention, incorrect sentences are marked by an asterisk.] The prosodic contour of the verb stems was manipulated such that it either marked the presence of a split particle at a later position in the sentence or not. Participants performed an off-line probe-detection task. ERP data indicate that prosodic information of German inflected verb stems is consulted on-line by the language processing system ('parser') in order to 'predict' the presence of a split particle at a later position in the sentence. An N400 effect was observed for the processing of split particles following verb stems which do not take a particle. However, this effect was only observed when the prosody of the verb stem signaled the presence of a split particle. We argue that the N400 component reflects the high costs associated with the lexical search that the language processing system has to perform when confronted with nonexisting words such as those resulting from the combination of the split particle and the verb stem in the present study. Furthermore, as a general reflection of prosodic processes, a Closure Positive Shift (CPS) was found at intonational phrase boundaries.
In sum, the present findings provide strong evidence that prosodic information is a good 'predictor' of upcoming information during the auditory processing of German sentences.

3.
The present study addresses the question whether accentuation and prosodic phrasing can have a similar function, namely, to group words in a sentence together. Participants listened to locally ambiguous sentences containing object- and subject-control verbs while ERPs were measured. In Experiment 1, these sentences contained a prosodic break, which can create a certain syntactic grouping of words, or no prosodic break. At the disambiguation, an N400 effect occurred when the disambiguation was in conflict with the syntactic grouping created by the break. We found a similar N400 effect without the break, indicating that the break did not strengthen an already existing preference. This pattern held for both object- and subject-control items. In Experiment 2, the same sentences contained a break and a pitch accent on the noun following the break. We argue that the pitch accent indicates a broad focus covering two words [see Gussenhoven, C. On the limits of focus projection in English. In P. Bosch & R. van der Sandt (Eds.), Focus: Linguistic, cognitive, and computational perspectives. Cambridge: Cambridge University Press, 1999], thus grouping these words together. For object-control items, this was semantically possible, which led to a "good-enough" interpretation of the sentence. Therefore, both sentences were interpreted equally well and the N400 effect found in Experiment 1 was absent. In contrast, for subject-control items, a corresponding grouping of the words was impossible, both semantically and syntactically, leading to processing difficulty in the form of an N400 effect and a late positivity. In conclusion, accentuation can group words together on the level of information structure, leading to either a semantically "good-enough" interpretation or a processing problem when such a semantic interpretation is not possible.

4.
Background: Research addressing prosodic deficits in brain-damaged populations has concentrated on the specialised capabilities of the right and the left cerebral hemispheres in processing the global characteristics of prosody. This focus has been of interest in that the acoustic characteristics of fundamental frequency (F0), duration, and intensity within a prosodic structure can convey different linguistic and nonlinguistic information. Much of the research investigating this phenomenon has produced conflicting results. As such, different theories have been proposed in an attempt to provide plausible explanations of the conflicting findings regarding hemispheric specialisation in processing prosodic structures. Aims: The purpose of this study was to examine one of these theories, the functional lateralisation theory, through four experiments that altered the linguistic and nonlinguistic functions across a range of prosodic structures. Methods & Procedures: Three groups of subjects participated in each of the four experiments: (1) eight subjects with left hemisphere damage (LHD), (2) eight subjects with right hemisphere damage (RHD), and (3) eight control subjects. The first experiment addressed the extent to which the processing of lexical stress differences would be lateralised to the left or right hemisphere by requiring listeners to determine the meanings and grammatical assignments of two-syllable words conveyed through stressed or unstressed syllables. In another linguistic condition, the second experiment placed demands on syntactic parsing operations by requiring listeners to parse syntactically ambiguous sentences which were disambiguated through the perception of prosodic boundaries located at syntactic junctures. A third linguistic condition required listeners to determine the categorical assignment of a speaker's intention of making a statement or asking a question conveyed through the prosodic structures.
The fourth experiment was designed to determine hemispheric lateralisation in processing nonlinguistic prosodic structures. In this experiment, listeners were required to determine the emotional state of a speaker conveyed through the prosodic structures in sentences that contained semantic information which was either congruent or incongruent with the emotional content of the prosodic structures. Results: When subjects were asked to identify lexical stress differences (Experiment 1), syntactically ambiguous sentences (Experiment 2), and questions and statements (Experiment 3) conveyed through prosody, the LHD group demonstrated a significantly poorer performance than the control and RHD groups. When asked to identify emotions conveyed through prosody (Experiment 4), the RHD group demonstrated a significantly poorer performance than the control and LHD groups. Conclusion: These findings support the functional lateralisation theory, which proposes a left hemisphere dominance for processing linguistic prosodic structures and a right hemisphere dominance for processing nonlinguistic prosodic structures.

5.
Previous research has implicated a portion of the anterior temporal cortex in sentence-level processing. This region activates more strongly to sentences than to word lists, sentences in an unfamiliar language, and environmental sound sequences. The current study sought to identify the relative contributions of syntactic and prosodic processing to anterior temporal activation. We presented auditory stimuli in which the presence of prosodic and syntactic structure was independently manipulated during functional magnetic resonance imaging (fMRI). Three "structural" conditions included normal sentences, sentences with scrambled word order, and lists of content words. These three classes of stimuli were presented either with sentence prosody or with flat supra-lexical (list-like) prosody. Sentence stimuli activated a portion of the left anterior temporal cortex in the superior temporal sulcus (STS), extending into the middle temporal gyrus, independent of prosody, and to a greater extent than any of the other conditions. An interaction between the structural conditions and prosodic conditions was seen in a more dorsal region of the anterior temporal lobe bilaterally along the superior temporal gyrus (STG). A post-hoc analysis revealed that this region responded either to syntactically structured stimuli or to nonstructured stimuli with sentence-like prosody. The results suggest a parcellation of anterior temporal cortex into 1) an STG region that is both sensitive to the presence of syntactic information and modulated by prosodic manipulations (in nonsyntactic stimuli); and 2) a more inferior left STS/MTG region that is more selective for syntactic structure.

6.
The present study was designed to examine the processing of prosodic and syntactic information in spoken language. The aim was to investigate the long discussed relationship between prosody and syntax in online speech comprehension to reveal direct evidence about whether the two information types are interactive or independent of each other. The method of event-related potentials allowed us to shed light on the precise time course of this relationship. Our experimental manipulation involved two prosodically different positions in German sentences, i.e., the critical noun in penultimate vs. final position. In syntactically correct sentences, a prosodic manipulation of the penultimate word gave rise to a late centroparietal negativity that resembled the classical N400 component. We interpreted the negativity as a correlate of lexical integration costs for the prosodically unexpected sentence-final word. Comparisons with syntactically incorrect sentences revealed that this effect was dependent on the sentences' grammatical correctness. When the prosodic manipulation was realized at the final word, we observed a right anterior negativity followed by a late positivity (P600). The right anterior negativity was present independent of the sentences' grammatical correctness. However, the P600 was not, as a late positivity was present for straightforward prosodic and syntactic violations but increased for the combined violations. This suggests that the right anterior negativity, and not the P600, should be considered as a pure prosodic effect. The combined data moreover suggest an interaction between prosody and syntax in a later time window during sentence comprehension.

7.
In spoken language comprehension, syntactic parsing decisions interact with prosodic phrasing, which is directly affected by phrase length. Here we used ERPs to examine whether a similar effect holds for the on-line processing of written sentences during silent reading, as suggested by theories of "implicit prosody." Ambiguous Korean sentence beginnings with two distinct interpretations were manipulated by increasing the length of sentence-initial subject noun phrases (NPs). As expected, only long NPs triggered an additional prosodic boundary reflected by a closure positive shift (CPS) in ERPs. When sentence materials further downstream disambiguated toward the initially dispreferred interpretation, the resulting P600 component reflecting processing difficulties ("garden path" effects) was smaller in amplitude for sentences with long NPs. Interestingly, additional prosodic revisions required only for the short subject disambiguated condition (the delayed insertion of an implicit prosodic boundary after the subject NP) were reflected by a frontal P600-like positivity, which may be interpreted in terms of a delayed CPS brain response. These data suggest that the subvocally generated prosodic boundary after the long subject NP facilitated the recovery from a garden path, thus primarily supporting one of two competing theoretical frameworks on implicit prosody. Our results underline the prosodic nature of the cognitive processes underlying phrase length effects and contribute cross-linguistic evidence regarding the on-line use of implicit prosody for parsing decisions in silent reading.

8.
In reading, a comma in the wrong place can cause more severe misunderstandings than the lack of a required comma. Here, we used ERPs to demonstrate that a similar effect holds for prosodic boundaries in spoken language. Participants judged the acceptability of temporarily ambiguous English "garden path" sentences whose prosodic boundaries were either in line or in conflict with the actual syntactic structure. Sentences with incongruent boundaries were accepted less often than those with missing boundaries and elicited a stronger on-line brain response in ERPs (N400/P600 components). Our results support the notion that mentally deleting an overt prosodic boundary is more costly than postulating a new one and extend previous findings, suggesting an immediate role of prosody in sentence comprehension. Importantly, our study also provides new details on the profile and temporal dynamics of the closure positive shift (CPS), an ERP component assumed to reflect prosodic phrasing in speech and music in real time. We show that the CPS is reliably elicited at the onset of prosodic boundaries in English sentences and is preceded by negative components. Its early onset distinguishes the speech CPS in adults both from prosodic ERP correlates in infants and from the "music CPS" previously reported for trained musicians.

9.
Prosody-driven sentence processing: an event-related brain potential study   (Cited by: 2; self-citations: 0; citations by others: 2)
Four experiments systematically investigating the brain's response to the perception of sentences containing differing amounts of linguistic information are presented. Spoken language generally provides various levels of information for the interpretation of the incoming speech stream. Here, we focus on the processing of prosodic phrasing, especially on its interplay with phonemic, semantic, and syntactic information. An event-related brain potential (ERP) paradigm was chosen to record the on-line responses to the processing of sentences containing major prosodic boundaries. For the perception of these prosodic boundaries, the so-called closure positive shift (CPS) has been established as a reliable and replicable ERP component. It has mainly been shown to correlate with major intonational phrasing in spoken language. However, to define this component as relying exclusively on the prosodic information in the speech stream, it is necessary to systematically reduce the linguistic content of the stimulus material. This was done by creating quasi-natural sentence material with decreasing semantic, syntactic, and phonemic information (i.e., jabberwocky sentences, in which all content words were replaced by meaningless words; pseudoword sentences, in which all function words and all content words were replaced by meaningless words; and delexicalized sentences, the hummed intonation contour of a sentence with all segmental content removed). The finding that a CPS was identified in all sentence types in correlation with the perception of their major intonational boundaries clearly indicates that this effect is driven purely by prosody.
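As a rough illustration of the epoch averaging that underlies ERP components such as the CPS discussed above, here is a minimal sketch using synthetic data. The sampling rate, epoch window, baseline interval, and injected positivity are illustrative assumptions, not values from the study:

```python
import numpy as np

def average_erp(eeg, onsets, fs=500, tmin=-0.2, tmax=0.8):
    """Average single-channel EEG epochs time-locked to event onsets.

    eeg: 1-D signal; onsets: event positions in samples.
    Returns (times, erp), where erp is the mean across epochs after
    baseline correction to the pre-event interval.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.array([eeg[s - pre:s + post] for s in onsets
                       if s - pre >= 0 and s + post <= len(eeg)])
    # Baseline correction: subtract the mean of the pre-event interval
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)
    times = np.arange(-pre, post) / fs
    return times, epochs.mean(axis=0)

# Synthetic demo: inject a positive deflection 300-500 ms after each "boundary"
fs = 500
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, fs * 60)                    # 60 s of noise
onsets = np.arange(fs * 2, fs * 58, fs * 2)        # one event every 2 s
for s in onsets:
    eeg[s + int(0.3 * fs):s + int(0.5 * fs)] += 2.0
times, erp = average_erp(eeg, onsets, fs=fs)
peak_time = times[np.argmax(erp)]
print(round(peak_time, 2))  # peak should fall within the injected 0.3-0.5 s window
```

Averaging over many epochs suppresses the background noise (by roughly the square root of the epoch count), which is why the injected positivity emerges cleanly even though it is invisible in single trials.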

10.
This study examined the right hemisphere contribution to the production of linguistic prosody, in which acoustic features of prosodic structures in different linguistic contexts were examined together with perceptual judgements. When control and right hemisphere damaged (RHD) subjects were asked to produce lexical stress differences (Experiment 1), prosodic boundaries to denote syntactic constituents (Experiment 2), and questions and statements (Experiment 3) conveyed through prosody, both groups were similar in producing the duration, F0, and amplitude acoustic cues within prosodic structures to convey different linguistic meanings. Listeners were able to perceive the meanings of the productions of both groups in Experiments 1 and 3, but had greater difficulty perceiving the productions of the RHD group in Experiment 2. These findings, which suggest that the right hemisphere has a limited role in the production of linguistic prosody, are discussed in relation to perceptual theories of prosody.

12.
Psycholinguistic theories assume an interaction between prosody and syntax during language processing. Based on studies using mostly off-line methods, it is unclear whether an interaction occurs at later or initial processing stages. Using event-related potentials, the present study provides neurophysiological evidence for a prosody-syntax interaction in initial processing. The sentence material contained purely prosodic and purely syntactic violations as well as combined prosodic-syntactic violations. For the syntax violation, the critical word appeared after a preposition. The suffix of the critical word either indicated a noun fulfilling the syntactic requirements of the preceding preposition or a verb causing a word category violation. For the prosodic manipulation, congruent critical words were normally intonated (signaling sentence continuation) while prosodically incongruent critical words signaled sentence end. For the purely prosodic incongruity, a broadly distributed negativity was observed at the critical word-stem (300-500 msec aligned to word onset). In response to a purely syntactic error, a left temporal negativity was elicited in an early time window (200-400 msec aligned to suffix onset), taken to reflect initial phrase structure building processes. In contrast, in response to the combined prosodic-syntactic violation, an early temporal negativity appeared bilaterally at the suffix in the same time window. Our interpretation is that the process of initial structure building as reflected in the early left anterior negativity recruits additional right hemispheric neural resources when the critical word contains both syntactic and prosodic violations. This suggests the immediate influence of phrasal prosody during the initial parsing stage in speech processing.

13.
Williams syndrome (WS), a neurodevelopmental genetic disorder due to a microdeletion in chromosome 7, is described as displaying an intriguing socio-cognitive phenotype. Deficits in prosody production and comprehension have been consistently reported in behavioral studies. It remains, however, to be clarified which neurobiological processes underlie prosody processing in WS. This study aimed at characterizing the electrophysiological response to neutral, happy, and angry prosody in WS, and examining whether this response was dependent on the semantic content of the utterance. A group of 12 participants (5 female and 7 male), diagnosed with WS, with an age range between 9 and 31 years, was compared with a group of typically developing participants, individually matched for chronological age, gender, and laterality. After inspection of EEG artifacts, data from 9 participants with WS and 10 controls were included in ERP analyses. Participants were presented with neutral, positive, and negative sentences, in two conditions: (1) with intelligible semantic and syntactic information; (2) with unintelligible semantic and syntactic information ('pure prosody' condition). They were asked to decide which emotion underlay the auditory sentence. Atypical event-related potential (ERP) components related to prosodic processing (N100, P200, N300) were observed in WS. In particular, a reduced N100 was observed for prosodic sentences with semantic content; a more positive P200 for sentences with semantic content, in particular for happy and angry intonations; and a reduced N300 for both sentence conditions. These findings suggest abnormalities in early auditory processing, indicating a bottom-up contribution to the impairment in emotional prosody processing and comprehension. Also, at least for N100 and P200, they suggest top-down contributions of semantic processes to the sensory processing of speech.
This study showed, for the first time, that abnormalities in ERP measures of early auditory processing in WS are also present during the processing of emotional vocal information. This may represent a physiological signature of underlying impaired on-line language and socio-emotional processing.

14.
Neural oscillations track linguistic information during speech comprehension (Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (Doelling et al., 2014; Zoefel and VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low-frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backward acoustically matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8–1.1 Hz) and lexical (1.9–2.8 Hz) timescales, suggesting that the delta-band is modulated by lexically driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. 
This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin and Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
SIGNIFICANCE STATEMENT: Biological systems like the brain encode their environment not only by reacting in a series of stimulus-driven responses, but by combining stimulus-driven information with endogenous, internally generated, inferential knowledge and meaning. Understanding language from speech is the human benchmark for this. Much research focuses on the purely stimulus-driven response, but here, we focus on the goal of language behavior: conveying structure and meaning. To that end, we use naturalistic stimuli that contrast acoustic-prosodic and lexical-semantic information to show that, during spoken language comprehension, oscillatory modulations reflect computations related to inferring structure and meaning from the acoustic signal. Our experiment provides the first evidence to date that compositional structure and meaning organize the oscillatory response, above and beyond prosodic and lexical controls.
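The band-specific mutual-information logic described above can be sketched with simulated signals. The filter settings, the histogram-based MI estimator, and the "EEG tracking a 1 Hz rhythm" setup are illustrative assumptions, not the study's actual analysis pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter (second-order sections)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def hist_mi(x, y, bins=16):
    """Histogram-based mutual information (bits) between two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

fs = 200
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
stim = np.sin(2 * np.pi * 1.0 * t)           # a 1 Hz "phrasal" rhythm
eeg = 0.8 * stim + rng.normal(0, 1, t.size)  # a signal tracking it, plus noise

def band_phase(x, lo, hi):
    return np.angle(hilbert(bandpass(x, lo, hi, fs)))

# MI between stimulus and "EEG" phase is high only in the band of the rhythm
mi_phrasal = hist_mi(band_phase(eeg, 0.8, 1.1), band_phase(stim, 0.8, 1.1))
mi_control = hist_mi(band_phase(eeg, 4.0, 8.0), band_phase(stim, 0.8, 1.1))
print(mi_phrasal > mi_control)
```

The key point the sketch illustrates is that MI is computed between band-limited phase series, so tracking shows up only in the frequency band whose timescale matches the linguistic structure.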

15.
Emotional prosody provides important cues for understanding the emotions of others in everyday communication. Asperger's syndrome (AS) is a developmental disorder characterised by pronounced deficits in socio-emotional communication, including difficulties in the domain of prosody processing. We measured pupillary responses as an index of emotional prosodic processing when 15 participants with AS and 19 non-clinical control participants listened to positive, negative, and neutral prosodic sentences. This occurred under a spontaneous and an explicit task instruction. In the explicit processing condition, the AS group and the non-clinical controls showed increased pupil dilations to positively and negatively intoned sentences when judging the valence of the prosodic sentence. This suggests higher processing demands for emotionally arousing information, as the effect was not found in comparison to neutrally intoned sentences. In the spontaneous processing condition, controls also responded with increased pupil dilations to positively intoned sentences, whilst individuals with AS showed increased pupil dilations to negative sentences. The latter result is further supported by diminished ratings of emotionally intense sentences in the AS group compared to healthy controls. Perception and recognition of positively valenced sentences in individuals with AS appears impaired and dependent on the general task set-up. Diminished pupil dilations in spontaneous positive processing conditions as well as reduced positive valence ratings give strong indications of a general negative processing bias for verbal information in adult individuals diagnosed with AS.

16.
Work in theoretical linguistics and psycholinguistics suggests that human linguistic knowledge forms a continuum between individual lexical items and abstract syntactic representations, with most linguistic representations falling between the two extremes and taking the form of lexical items stored together with the syntactic/semantic contexts in which they frequently occur. Neuroimaging evidence further suggests that no brain region is selectively sensitive to only lexical information or only syntactic information. Instead, all the key brain regions that support high-level linguistic processing have been implicated in both lexical and syntactic processing, suggesting that our linguistic knowledge is plausibly represented in a distributed fashion in these brain regions. Given this distributed nature of linguistic representations, multi-voxel pattern analyses (MVPAs) can help uncover important functional properties of the language system. In the current study we use MVPAs to ask two questions: (1) Do language brain regions differ in how robustly they represent lexical vs. syntactic information? and (2) Do any of the language brain regions distinguish between "pure" lexical information (lists of words) and "pure" abstract syntactic information (jabberwocky sentences) in the pattern of activity? We show that lexical information is represented more robustly than syntactic information across many language regions (with no language region showing the opposite pattern), as evidenced by better discrimination between conditions that differ along the lexical dimension (sentences vs. jabberwocky, and word lists vs. nonword lists) than between conditions that differ along the syntactic dimension (sentences vs. word lists, and jabberwocky vs. nonword lists). This result suggests that lexical information may play a more critical role than syntax in the representation of linguistic meaning.
We also show that several language regions reliably discriminate between "pure" lexical information and "pure" abstract syntactic information in their patterns of neural activity.
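The MVPA comparison described above (stronger decoding of a lexical contrast than a syntactic one) can be sketched with simulated voxel patterns. The trial counts, voxel counts, effect sizes, and the use of cross-validated logistic regression as the classifier are all illustrative assumptions, not details of the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50

def simulate(effect_size):
    """Two conditions whose mean multi-voxel patterns differ by effect_size."""
    pattern = rng.normal(0, 1, n_voxels)           # condition-specific pattern
    X = rng.normal(0, 1, (n_trials, n_voxels))     # trial-by-voxel noise
    y = np.repeat([0, 1], n_trials // 2)
    X[y == 1] += effect_size * pattern
    return X, y

# A "lexical" contrast with a larger pattern difference than a "syntactic" one
X_lex, y_lex = simulate(0.5)
X_syn, y_syn = simulate(0.1)
clf = LogisticRegression(max_iter=1000)
acc_lex = cross_val_score(clf, X_lex, y_lex, cv=5).mean()
acc_syn = cross_val_score(clf, X_syn, y_syn, cv=5).mean()
print(acc_lex > acc_syn)
```

Cross-validated decoding accuracy is the usual MVPA readout: if the classifier generalizes to held-out trials above chance, the region's activity pattern carries information about the contrast.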

17.
In dichotic listening, a right ear advantage for linguistic tasks reflects left hemisphere specialization, and a left ear advantage for prosodic tasks reflects right hemisphere specialization. Three experiments used a response hand manipulation with a dichotic listening task to distinguish between direct access (relative specialization) and callosal relay (absolute specialization) explanations of perceptual asymmetries for linguistic and prosodic processing. Experiment 1 found evidence for direct access in linguistic processing and callosal relay in prosodic processing. Direct access for linguistic processing was found to depend on lexical status (Experiment 2) and affective prosody (Experiment 3). Results are interpreted in terms of a dynamic model of hemispheric specialization in which right hemisphere contributions to linguistic processing emerge when stimuli are words, and when they are spoken with affective prosody.

18.
The role of sub-cortical structures such as the striatum in language remains a controversial issue. Based on linguistic claims that language processing implies both recovery of lexical information and application of combinatorial rules, it has been shown that patients with striatal damage have difficulties applying conjugation rules while lexical recovery of irregular forms is broadly spared (e.g., Ullman, M. T., Corkin, S., Coppola, M., Hickok, G., Growdon, J. H., Koroshetz, W. J., et al. (1997). A neural dissociation within language: Evidence that the mental dictionary is part of declarative memory, and that grammatical rules are processed by the procedural system. Journal of Cognitive Neuroscience, 9(2), 266-276). Here we bolstered the striatum-rule hypothesis by investigating lexical abilities and rule application at the phrasal level. Both processing aspects were assessed in a model of striatal dysfunction, namely Huntington's disease (HD). Using a semantic priming task, we compared idiomatic prime sentences involving lexical access to whole phrases (e.g., "Paul has kicked the bucket") with idiom-derived sentences that contained passivation changes involving syntactic movement rules (e.g., "Paul was kicked by the bucket"), word changes (e.g., "Paul has crushed the bucket"), or both. Target words were either idiom-related (e.g., "death"), reflecting lexical access to idiom meanings; word-related (e.g., "bail"), reflecting lexical access to single words; or unrelated (e.g., "table"). HD patients displayed selective abnormalities with passivated sentences, whereas priming was normal with idioms and sentences containing only word changes. We argue that the role of the striatum in sentence processing specifically pertains to the application of syntactic movement rules, whereas it is not involved in canonical rules required for active structures or in lexical processing aspects.
Our findings support the striatum-rule hypothesis but suggest that it should be refined by specifying the particular kinds of language rules that depend on striatal computations.

19.
By means of fMRI measurements, the present study identifies brain regions in left and right peri-sylvian areas that subserve grammatical or prosodic processing. Normal volunteers heard 1) normal sentences; 2) so-called syntactic sentences comprising syntactic, but no lexical-semantic information; and 3) manipulated speech signals comprising only prosodic information, i.e., speech melody. For all conditions, significant blood oxygenation signals were recorded from the supratemporal plane bilaterally. Left hemisphere areas surrounding Heschl's gyrus responded more strongly during the two sentence conditions than to speech melody. This finding suggests that the anterior and posterior portions of the superior temporal region (STR) support lexical-semantic and syntactic aspects of sentence processing. In contrast, the right superior temporal region, especially the planum temporale, responded more strongly to speech melody. Significant brain activation in the fronto-opercular cortices was observed when participants heard pseudo sentences and was strongest during the speech melody condition. In contrast, the fronto-opercular area is not prominently involved in listening to normal sentences. Thus, the functional activation in fronto-opercular regions increases as the grammatical information available in the sentence decreases. Generally, brain responses to speech melody were stronger in right than left hemisphere sites, suggesting a particular role of right cortical areas in the processing of slow prosodic modulations.

20.
Agrammatism is a language disorder characterised by a morphological and/or syntactic deficit in spontaneous speech. Such deficits are usually associated with comprehension disorders (though it is said that this is not always the case), which result in a certain degree of variability in syntactic, lexical, and morpholexical performance. The purpose of this study is to reconsider the nature of comprehension disorders in agrammatism, to test whether Grodzinsky's Trace Deletion Hypothesis (TDH) can be generalised to all agrammatic patients, and to ascertain whether the pattern of impairment observed in agrammatism differs from that present in fluent aphasic patients. Eleven agrammatic patients were tested by means of a sentence comprehension task comprising simple active and passive reversible sentences. The performance of the agrammatic patients was compared to that of 16 fluent aphasic (10 Wernicke's and 6 conduction) and 10 control subjects. The deficits observed in the agrammatic subjects were compatible with the TDH, but there was also impaired processing of pronouns (elements that are also subject to movement) and a mild deficit in the processing of simple active sentences. The fluent aphasic patients showed a similar pattern of impairment. A logistic regression analysis was then applied to each single case separately, in order to study the homogeneity of the patients' performance within each aphasic subgroup. Of the 11 agrammatic patients, 3 did not show comprehension disorders, 5 had a specific deficit for passive movement, 1 a lexical deficit for pronouns only, and 1 a pattern of impairment compatible with Linebarger et al.'s trade-off theory. The last patient showed a deficit for simple active reversible sentences compatible with damage to the mapping of grammatical functions to thematic roles. Similar patterns of impairment were also found in the fluent aphasic sample.
Overall, the results lead to the conclusion that the TDH cannot be generalised to all agrammatic patients, and that the mechanism it invokes is not the only source responsible for agrammatic comprehension disorders; this mechanism also contributes to comprehension disorders in fluent aphasic patients.
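A per-case logistic regression of the kind described above, predicting response accuracy from sentence type for each patient separately, can be sketched with simulated data. The accuracy profiles, trial counts, and the scikit-learn fitting procedure are hypothetical illustrations, not the study's actual analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def simulate_patient(p_active, p_passive, n=40):
    """n active and n passive comprehension trials; 1 = correct response."""
    sent_type = np.repeat([0, 1], n)  # 0 = active, 1 = passive
    p = np.where(sent_type == 0, p_active, p_passive)
    correct = rng.binomial(1, p)
    return sent_type.reshape(-1, 1), correct

# One simulated patient with a passive-specific deficit, one without
profiles = {"passive_deficit": (0.95, 0.55), "spared": (0.95, 0.90)}
coefs = {}
for name, (pa, pp) in profiles.items():
    X, y = simulate_patient(pa, pp)
    model = LogisticRegression().fit(X, y)
    # A strongly negative slope means accuracy drops on passive sentences
    coefs[name] = float(model.coef_[0, 0])
print(coefs["passive_deficit"] < coefs["spared"])
```

Fitting the model separately per patient, rather than pooling the group, is what allows heterogeneous single-case profiles (passive-specific deficits vs. spared comprehension) to be distinguished.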


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号