Similar Literature
20 similar documents found (search time: 109 ms)
1.
Due to inconsistent findings, the role of the two cerebral hemispheres in processing metaphoric language is controversial. The present study examined the possibility that these inconsistent findings may be due, at least partly, to differences in the type (i.e., words vs sentences) or the familiarity of the linguistic material. Previous research has shown stronger activation in the right homologue of Wernicke's area for novel two-word metaphoric expressions than for both literal expressions and unrelated word pairs. In the present study fMRI was used to identify the left (LH) and the right hemisphere (RH) neural networks associated with processing unfamiliar, novel metaphoric sentences taken from poetry, as compared to those involved in processing familiar literal sentences and unfamiliar nonsensical sentences. Across participants, several left-lateralised brain regions showed stronger activation for novel metaphoric sentences than for the nonsensical sentences, although both types of sentence represent unfamiliar linguistic expressions. Moreover, the metaphoric sentences elicited more activation in the left dorsolateral prefrontal cortex and the posterior middle temporal gyri than did both the literal sentences and the nonsensical sentences. The increased activation in these brain regions might reflect the enhanced demand on the episodic and semantic memory systems in order to generate de-novo verbal semantic associations. The involvement of the left posterior middle temporal gyri could reflect extra reliance on classical brain structures devoted to sentence comprehension.

3.
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.

4.
Resting‐state functional magnetic resonance imaging (rsfMRI) is a promising technique for language mapping that does not require task‐execution. This can be an advantage when language mapping is limited by poor task performance, as is common in clinical settings. Previous studies have shown that language maps extracted with rsfMRI spatially match their task‐based homologs, but no study has yet demonstrated the direct participation of the rsfMRI language network in language processes. This demonstration is critically important because spatial similarity can be influenced by the overlap of domain‐general regions that are recruited during task‐execution. Furthermore, it is unclear which processes are captured by the language network: does it map rather low‐level or high‐level (e.g., syntactic and lexico‐semantic) language processes? We first identified the rsfMRI language network and then investigated task‐based responses within its regions when processing stimuli of increasing linguistic content: symbols, pseudowords, words, pseudosentences and sentences. The language network responded only to language stimuli (not to symbols), and higher linguistic content elicited larger brain responses. The left fronto‐parietal, the default mode, and the dorsal attention networks were examined and yet none showed language involvement. These findings demonstrate for the first time that the language network extracted through rsfMRI is able to map language in the brain, including regions subtending higher‐level syntactic and semantic processes.

5.
Working memory (WM) evoked by linguistic cues for allocentric spatial and egocentric spatial aspects of a visual scene was investigated by correlating fMRI BOLD signal (or "activation") with performance on a spatial-relations task. Subjects indicated the relative positions of a person or object (referenced by the personal pronouns "he/she/it") in a previously shown image relative to either themselves (egocentric reference frame) or shifted to a reference frame anchored in another person or object in the image (allocentric reference frame), e.g. "Was he in front of you/her?" Good performers had both shorter response time and more correct responses than poor performers in both tasks. These behavioural variables were entered into a principal component analysis. The first component reflected generalised performance level. We found that the frontal eye fields (FEF), bilaterally, had a higher BOLD response during recall involving allocentric compared to egocentric spatial reference frames, and that this difference was larger in good performers than in poor performers as measured by the first behavioural principal component. The frontal eye fields may be used when subjects move their internal gaze during shifting reference frames in representational space. Analysis of actual eye movements in three subjects revealed no difference between egocentric and allocentric recall tasks where visual stimuli were also absent. Thus, the FEF machinery for directing eye movements may also be involved in changing reference frames within WM.
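The behavioural analysis described in this entry — entering response time and accuracy into a principal component analysis and reading the first component as a generalised performance level — can be sketched roughly as follows. This is a minimal illustration on simulated data; the variable names and the numbers are not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated behavioural data (illustrative only): one row per subject,
# columns = response time (s) and proportion of correct responses.
# Good performers are both fast and accurate, so the columns correlate negatively.
n_subjects = 40
skill = rng.normal(size=n_subjects)                      # latent performance level
rt = 1.5 - 0.3 * skill + rng.normal(0, 0.1, n_subjects)  # response time
accuracy = 0.75 + 0.08 * skill + rng.normal(0, 0.03, n_subjects)

X = np.column_stack([rt, accuracy])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)                # z-score each measure

# PCA via SVD of the standardized data matrix.
U, s, Vt = np.linalg.svd(Xz, full_matrices=False)
pc1_scores = Xz @ Vt[0]                                  # first principal component

# PC1 should capture generalised performance, i.e. load on speed and
# accuracy with opposite signs.
r_acc = np.corrcoef(pc1_scores, accuracy)[0, 1]
r_rt = np.corrcoef(pc1_scores, rt)[0, 1]
explained = s[0] ** 2 / (s ** 2).sum()
print(f"PC1 explains {explained:.0%} of variance")
```

Note that the sign of an eigenvector is arbitrary, so PC1 may load positively on accuracy and negatively on response time or the reverse; either way it orders subjects by generalised performance level.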

6.
The role of sub-cortical structures such as the striatum in language remains a controversial issue. Based on linguistic claims that language processing implies both recovery of lexical information and application of combinatorial rules, it has been shown that patients with striatal damage have difficulties applying conjugation rules while lexical recovery of irregular forms is broadly spared (e.g., Ullman, M. T., Corkin, S., Coppola, M., Hickok, G., Growdon, J. H., Koroshetz, W. J., et al. (1997). A neural dissociation within language: Evidence that the mental dictionary is part of declarative memory, and that grammatical rules are processed by the procedural system. Journal of Cognitive Neuroscience, 9(2), 266-276). Here we bolstered the striatum-rule hypothesis by investigating lexical abilities and rule application at the phrasal level. Both processing aspects were assessed in a model of striatal dysfunction, namely Huntington's disease (HD). Using a semantic priming task we compared idiomatic prime sentences involving lexical access to whole phrases (e.g., "Paul has kicked the bucket") with idiom-derived sentences that contained passivation changes involving syntactic movement rules (e.g., "Paul was kicked by the bucket"), word changes (e.g., "Paul has crushed the bucket"), or both. Target words were either idiom-related (e.g., "death"), reflecting lexical access to idiom meanings, word-related (e.g., "bail"), reflecting lexical access to single words, or unrelated (e.g., "table"). HD patients displayed selective abnormalities with passivated sentences whereas priming was normal with idioms and sentences containing only word changes. We argue that the role of the striatum in sentence processing specifically pertains to the application of syntactic movement rules whereas it is not involved in canonical rules required for active structures or in lexical processing aspects.
Our findings support the striatum-rule hypothesis but suggest that it should be refined by specifying the particular kinds of language rules that depend on striatal computations.

7.
Functional neuroanatomy of three-term relational reasoning (cited by 12; 0 self-citations, 12 external)
Goel V, Dolan RJ. Neuropsychologia 2001, 39(9): 901-909
In a recent study we demonstrated that reasoning with categorical syllogisms engages two dissociable mechanisms. Reasoning involving concrete sentences engaged a left hemisphere linguistic system while formally identical arguments, involving abstract sentences, recruited a parietal spatial network. The involvement of a parietal visuo-spatial system in abstract syllogism reasoning raised the question of whether argument forms involving explicit spatial relations (or relations that can be easily mapped onto spatial relations) are sufficient to engage the parietal system. We addressed this question in an event-related fMRI study of three-term relational reasoning, using sentences with concrete and abstract content. Our findings indicate that both concrete and abstract three-term relational arguments activate a similar bilateral occipital-parietal-frontal network. However, the abstract reasoning condition engendered greater parietal activation than the concrete reasoning condition. We conclude that arguments involving relations that can be easily mapped onto explicit spatial relations engage a visuo-spatial system, irrespective of concrete or abstract content.

9.
Work in theoretical linguistics and psycholinguistics suggests that human linguistic knowledge forms a continuum between individual lexical items and abstract syntactic representations, with most linguistic representations falling between the two extremes and taking the form of lexical items stored together with the syntactic/semantic contexts in which they frequently occur. Neuroimaging evidence further suggests that no brain region is selectively sensitive to only lexical information or only syntactic information. Instead, all the key brain regions that support high-level linguistic processing have been implicated in both lexical and syntactic processing, suggesting that our linguistic knowledge is plausibly represented in a distributed fashion in these brain regions. Given this distributed nature of linguistic representations, multi-voxel pattern analyses (MVPAs) can help uncover important functional properties of the language system. In the current study we use MVPAs to ask two questions: (1) Do language brain regions differ in how robustly they represent lexical vs. syntactic information? and (2) Do any of the language brain regions distinguish between “pure” lexical information (lists of words) and “pure” abstract syntactic information (jabberwocky sentences) in the pattern of activity? We show that lexical information is represented more robustly than syntactic information across many language regions (with no language region showing the opposite pattern), as evidenced by a better discrimination between conditions that differ along the lexical dimension (sentences vs. jabberwocky, and word lists vs. nonword lists) than between conditions that differ along the syntactic dimension (sentences vs. word lists, and jabberwocky vs. nonword lists). This result suggests that lexical information may play a more critical role than syntax in the representation of linguistic meaning.
We also show that several language regions reliably discriminate between “pure” lexical information and “pure” abstract syntactic information in their patterns of neural activity.
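The core logic of an MVPA discrimination analysis like the one in this entry can be sketched with a split-half correlation classifier (in the style of Haxby-type pattern analyses) on simulated voxel patterns. The data, dimensions, and condition labels below are illustrative assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (not real data): multi-voxel response patterns from one
# region, n trials per condition, with a reliable pattern difference between
# two conditions (say, "sentences" vs "word lists") on top of trial noise.
n_voxels, n_trials = 100, 20
pattern_a = rng.normal(size=n_voxels)                     # condition A's mean pattern
pattern_b = pattern_a + rng.normal(size=n_voxels)         # condition B differs

trials_a = pattern_a + rng.normal(0, 1.0, (n_trials, n_voxels))
trials_b = pattern_b + rng.normal(0, 1.0, (n_trials, n_voxels))

def split_half_accuracy(a, b):
    """Correlate each test-half mean pattern with both train-half mean
    patterns; a test pattern is classified correctly if it correlates more
    with its own condition's training pattern than with the other's."""
    half = len(a) // 2
    train_a, test_a = a[:half].mean(0), a[half:].mean(0)
    train_b, test_b = b[:half].mean(0), b[half:].mean(0)
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    hits = (corr(test_a, train_a) > corr(test_a, train_b)) + \
           (corr(test_b, train_b) > corr(test_b, train_a))
    return hits / 2.0

acc = split_half_accuracy(trials_a, trials_b)
print(f"pattern discrimination accuracy: {acc:.2f}")
```

Above-chance discrimination accuracy is the evidence that the region's activity patterns carry information about the distinction; comparing accuracies across condition pairs (lexical vs. syntactic contrasts) is what licenses the "more robustly represented" claim.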

10.
Neural oscillations track linguistic information during speech comprehension (Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (Doelling et al., 2014; Zoefel and VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low-frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backward acoustically matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8–1.1 Hz) and lexical (1.9–2.8 Hz) timescales, suggesting that the delta-band is modulated by lexically driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. 
This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin and Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.

SIGNIFICANCE STATEMENT: Biological systems like the brain encode their environment not only by reacting in a series of stimulus-driven responses, but by combining stimulus-driven information with endogenous, internally generated, inferential knowledge and meaning. Understanding language from speech is the human benchmark for this. Much research focuses on the purely stimulus-driven response, but here, we focus on the goal of language behavior: conveying structure and meaning. To that end, we use naturalistic stimuli that contrast acoustic-prosodic and lexical-semantic information to show that, during spoken language comprehension, oscillatory modulations reflect computations related to inferring structure and meaning from the acoustic signal. Our experiment provides the first evidence to date that compositional structure and meaning organize the oscillatory response, above and beyond prosodic and lexical controls.
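The mutual information (MI) analysis central to this entry can be sketched with a simple histogram MI estimator. A real analysis uses band-limited EEG and speech-derived signals plus permutation statistics; the signals below are synthetic stand-ins and the sampling rate and frequencies are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information (in bits) between two signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Illustrative signals (not real EEG): a slow "phrasal-rate" component that one
# brain signal partially tracks, and another signal that is pure noise.
n = 10_000
stimulus = np.sin(2 * np.pi * np.arange(n) / 500)    # slow rhythmic component
eeg_tracking = stimulus + 0.5 * rng.normal(size=n)   # tracks the stimulus
eeg_unrelated = rng.normal(size=n)                   # does not

mi_tracking = mutual_information(eeg_tracking, stimulus)
mi_unrelated = mutual_information(eeg_unrelated, stimulus)
print(f"MI tracking: {mi_tracking:.3f} bits, unrelated: {mi_unrelated:.3f} bits")
```

Higher MI between the neural signal and the linguistic annotation at a given timescale is the sense in which the delta band "tracks" phrasal and lexical structure; contrasting sentence, jabberwocky, word-list, and backward conditions isolates the linguistic contribution.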

11.
There is much evidence for the existence of multiple memory systems. However, it has been argued that tasks assumed to reflect different memory systems share basic processing components and are mediated by overlapping neural systems. Here we used multivariate analysis of PET-data to analyze similarities and differences in brain activity for multiple tests of working memory, semantic memory, and episodic memory. The results from two experiments revealed between-systems differences, but also between-systems similarities and within-system differences. Specifically, support was obtained for a task-general working-memory network that may underlie active maintenance. Premotor and parietal regions were salient components of this network. A common network was also identified for two episodic tasks, cued recall and recognition, but not for a test of autobiographical memory. This network involved regions in right inferior and polar frontal cortex, and lateral and medial parietal cortex. Several of these regions were also engaged during the working-memory tasks, indicating shared processing for episodic and working memory. Fact retrieval and synonym generation were associated with increased activity in left inferior frontal and middle temporal regions and right cerebellum. This network was also associated with the autobiographical task, but not with living/non-living classification, and may reflect elaborate retrieval of semantic information. Implications of the present results for the classification of memory tasks with respect to systems and/or processes are discussed.

12.
Prosody-driven sentence processing: an event-related brain potential study (cited by 2; 0 self-citations, 2 external)
Four experiments systematically investigating the brain's response to the perception of sentences containing differing amounts of linguistic information are presented. Spoken language generally provides various levels of information for the interpretation of the incoming speech stream. Here, we focus on the processing of prosodic phrasing, especially on its interplay with phonemic, semantic, and syntactic information. An event-related brain potential (ERP) paradigm was chosen to record the on-line responses to the processing of sentences containing major prosodic boundaries. For the perception of these prosodic boundaries, the so-called closure positive shift (CPS) has been manifested as a reliable and replicable ERP component. It has mainly been shown to correlate to major intonational phrasing in spoken language. However, to define this component as exclusively relying on the prosodic information in the speech stream, it is necessary to systematically reduce the linguistic content of the stimulus material. This was done by creating quasi-natural sentence material with decreasing semantic, syntactic, and phonemic information (i.e., jabberwocky sentences, in which all content words were replaced by meaningless words; pseudoword sentences, in which all function and all content words are replaced by meaningless words; and delexicalized sentences, the hummed intonation contour of a sentence with all segmental content removed). The finding that a CPS was identified in all sentence types in correlation to the perception of their major intonational boundaries clearly indicates that this effect is driven purely by prosody.

13.
Language processing inevitably involves working memory (WM) operations, especially for sentences with complex syntactic structures. Evidence has been provided for a neuroanatomical segregation between core syntactic processes and WM, but the dynamic relation between these systems still has to be explored. In the present functional magnetic resonance imaging (fMRI) study, we investigated the network dynamics of regions involved in WM operations which support sentence processing during reading, comparing a set of dynamic causal models (DCM) with different assumptions about the underlying connectional architecture. The DCMs incorporated the core language processing regions (pars opercularis and middle temporal gyrus), WM related regions (inferior frontal sulcus and intraparietal sulcus), and visual word form area (fusiform gyrus). The results indicate a processing hierarchy from the visual to WM to core language systems, and moreover, a clear increase of connectivity between WM regions and language regions as the processing load increases for syntactically complex sentences.

14.
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants' future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults.
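The two graph measures compared between learner groups in this entry, global and local efficiency, have simple definitions: global efficiency is the mean inverse shortest-path length over all node pairs, and local efficiency is that same quantity averaged over each node's neighbourhood subgraph. A minimal pure-Python sketch on a toy unweighted graph (in the actual analysis, nodes are cortical regions and edges come from thresholded inter-regional fMRI correlations; the toy network below is an assumption for illustration):

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """BFS distances from source in an unweighted graph (adjacency dict)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for s in nodes:
        dist = shortest_path_lengths(adj, s)
        total += sum(1.0 / d for t, d in dist.items() if t != s)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """Mean global efficiency of each node's neighbourhood subgraph."""
    effs = []
    for u in adj:
        nbrs = set(adj[u])
        sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
        effs.append(global_efficiency(sub))
    return sum(effs) / len(adj)

# Toy network: a tight triangle (0-1-2) with a chain (2-3-4-5) hanging off it.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4], 4: [3, 5], 5: [4],
}
print(f"global efficiency: {global_efficiency(adj):.3f}")
print(f"local efficiency:  {local_efficiency(adj):.3f}")
```

The "successful learner" profile reported above corresponds to networks scoring higher on the first measure and lower on the second, i.e. better integrated long-range shortcuts at the cost of less dense local clustering.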

15.
Previous research regarding the neural basis of semantic composition has relied heavily on violation paradigms, which often compare implausible sentences that violate world knowledge to plausible sentences that do not violate world knowledge. This comparison is problematic as it may involve extralinguistic operations such as contextual repair and processes that ultimately lead to the rejection of an anomalous sentence, and these processes may not be part of the core language system. Also, it is unclear if violations of world knowledge actually affect the linguistic operations for semantic composition. Here, we compared two types of sentences that were grammatical, plausible, and acceptable and differed only in the number of semantic operations required for comprehension without the confound of implausible sentences. Specifically, we compared complement coercion sentences (the novelist began the book), which require an extra compositional operation to arrive at their meaning, to control sentences (the novelist wrote the book), which do not have this extra compositional operation, and found that the neural response to complement coercion sentences activated Brodmann's area 45 in the left inferior frontal gyrus more than control sentences. Furthermore, the processing of complement coercion recruited different brain regions than more traditional semantic and syntactic violations (the novelist astonished/write the book, respectively), suggesting that coercion processes are a part of the core of the language faculty but do not recruit the wider network of brain regions underlying semantic and syntactic violations.

16.
Spoken language comprehension is known to involve a large left-dominant network of fronto-temporal brain regions, but there is still little consensus about how the syntactic and semantic aspects of language are processed within this network. In an fMRI study, volunteers heard spoken sentences that contained either syntactic or semantic ambiguities as well as carefully matched low-ambiguity sentences. Results showed ambiguity-related responses in the posterior left inferior frontal gyrus (pLIFG) and posterior left middle temporal regions. The pLIFG activations were present for both syntactic and semantic ambiguities suggesting that this region is not specialised for processing either semantic or syntactic information, but instead performs cognitive operations that are required to resolve different types of ambiguity irrespective of their linguistic nature, for example by selecting between possible interpretations or reinterpreting misparsed sentences. Syntactic ambiguities also produced activation in the posterior middle temporal gyrus. These data confirm the functional relationship between these two brain regions and their importance in constructing grammatical representations of spoken language.

17.
We present data from right brain-damaged patients, with and without spatial heminattention, which show the influence of hemispatial deficits on spoken language processing. We explored the findings of a previous study, which used an emphatic stress detection task and suggested spatial transcoding of a spoken active sentence in a 'language line'. This transcoding was impaired in its initial portion (the subject-word) when the neglect syndrome was present. By expanding the original methodology, the present study provides a deeper understanding of the level of spoken language processing involved in the heminattentional bias. To ascertain the role played by syntactic structure, active and passive sentences were compared. Sentences comprised of musical notes and of a sequence of unrelated nouns were also compared to determine whether the bias was manifest with any sequence of events (not only linguistic ones) deployed over time, and with a sequence of linguistic events not embedded in a structured syntactic frame. Results showed that heminattention exerted an influence only when a syntactically structured linguistic input (=sentence with agent of action, action and recipient of action) was processed, and that it did not interfere when a sequence of non-linguistic sounds or unrelated words was presented. Furthermore, when passing from active to passive sentences, the heminattentional bias was inverted, suggesting that heminattention primarily involves the logical subject of the sentence, which has an inverted position in passive sentences. These results strongly suggest that heminattention acts on the spatial transcoding of the deep structure of spoken language.

18.
We used functional magnetic resonance imaging (fMRI) in conjunction with a voxel-based approach to lesion symptom mapping to quantitatively evaluate the similarities and differences between brain areas involved in language and environmental sound comprehension. In general, we found that language and environmental sounds recruit highly overlapping cortical regions, with cross-domain differences being graded rather than absolute. Within language-based regions of interest, we found that in the left hemisphere, language and environmental sound stimuli evoked very similar volumes of activation, whereas in the right hemisphere, there was greater activation for environmental sound stimuli. Finally, lesion symptom maps of aphasic patients based on environmental sounds or linguistic deficits [Saygin, A. P., Dick, F., Wilson, S. W., Dronkers, N. F., & Bates, E. Shared neural resources for processing language and environmental sounds: Evidence from aphasia. Brain, 126, 928-945, 2003] were generally predictive of the extent of blood oxygenation level dependent fMRI activation across these regions for sounds and linguistic stimuli in young healthy subjects.

19.
Gestures are an important part of interpersonal communication, for example by illustrating physical properties of speech contents (e.g., "the ball is round"). The meaning of these so‐called iconic gestures is strongly intertwined with speech. We investigated the neural correlates of the semantic integration for verbal and gestural information. Participants watched short videos of five speech and gesture conditions performed by an actor, including variation of language (familiar German vs. unfamiliar Russian), variation of gesture (iconic vs. unrelated), as well as isolated familiar language, while brain activation was measured using functional magnetic resonance imaging. For familiar speech with either of both gesture types contrasted to Russian speech‐gesture pairs, activation increases were observed at the left temporo‐occipital junction. Apart from this shared location, speech with iconic gestures exclusively engaged left occipital areas, whereas speech with unrelated gestures activated bilateral parietal and posterior temporal regions. Our results demonstrate that the processing of speech with speech‐related versus speech‐unrelated gestures occurs in two distinct but partly overlapping networks. The distinct processing streams (visual versus linguistic/spatial) are interpreted in terms of "auxiliary systems" allowing the integration of speech and gesture in the left temporo‐occipital region. Hum Brain Mapp, 2009. © 2009 Wiley‐Liss, Inc.

20.
In many situations, people can only compute one stimulus-to-response mapping at a time, suggesting that response selection constitutes a "central processing bottleneck" in human information processing. Using fMRI, we tested whether common or distinct brain regions were involved in response selection across visual and auditory inputs, and across spatial and nonspatial mapping rules. We isolated brain regions involved in response selection by comparing two conditions that were identical in perceptual input and motor output, but differed in the complexity of the mapping rule. In the visual-manual task of Experiment 1, four vertical lines were positioned from left to right, and subjects pressed one of four keys to report which line was unique in length. In the auditory-manual task of Experiment 2, four tones were presented in succession, and subjects pressed one of four keys to report which tone was unique in duration. For both visual and auditory tasks, the mapping between target position and key position was either spatially compatible or incompatible. In the verbal task of Experiment 3, subjects used nonspatial mappings that were either compatible ("same" if colors matched; "different" if they mismatched) or incompatible (the opposite). Extensive activation overlap was observed across all three experiments for incompatible versus compatible mapping in bilateral parietal and frontal regions. Our results indicate that common neural substrates are involved in response selection across input modalities and across spatial and nonspatial domains of stimulus-to-response mapping, consistent with behavioral evidence that response selection is a central process.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号