Decoding the information structure underlying the neural representation of concepts
Authors: Leonardo Fernandino, Jia-Qing Tong, Lisa L. Conant, Colin J. Humphries, Jeffrey R. Binder
Institution: Department of Neurology, Medical College of Wisconsin, Milwaukee, WI 53226; Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI 53226; Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI 53226
Abstract: The nature of the representational code underlying conceptual knowledge remains a major unsolved problem in cognitive neuroscience. We assessed the extent to which different representational systems contribute to the instantiation of lexical concepts in high-level, heteromodal cortical areas previously associated with semantic cognition. We found that lexical semantic information can be reliably decoded from a wide range of heteromodal cortical areas in the frontal, parietal, and temporal cortex. In most of these areas, we found a striking advantage for experience-based representational structures (i.e., encoding information about sensory-motor, affective, and other features of phenomenal experience), with little evidence for independent taxonomic or distributional organization. These results were found independently for object and event concepts. Our findings indicate that concept representations in the heteromodal cortex are based, at least in part, on experiential information. They also reveal that, in most heteromodal areas, event concepts have more heterogeneous representations (i.e., they are more easily decodable) than object concepts and that other areas beyond the traditional “semantic hubs” contribute to semantic cognition, particularly the posterior cingulate gyrus and the precuneus.

The capacity for conceptual knowledge is arguably one of the most defining properties of human cognition, and yet it is still unclear how concepts are represented in the brain. Recent developments in functional neuroimaging and computational linguistics have sparked renewed interest in elucidating the information structures and neural circuits underlying concept representation (1–5). Attempts to characterize the representational code for concepts typically involve information structures based on three qualitatively distinct types of information, namely, taxonomic, experiential, and distributional information. As the term implies, a taxonomic information system relies on category membership and intercategory relations. Our tendency to organize objects, events, and experiences into discrete categories has led most authors—dating back at least to Plato (6)—to take taxonomic structure as the central property of conceptual knowledge (7). The taxonomy for concepts is traditionally seen as a hierarchically structured network, with basic-level categories (e.g., “apple,” “orange”) grouped into superordinate categories (e.g., “fruit,” “food”) and subdivided into subordinate categories (e.g., “Gala apple,” “tangerine”) (8). A prominent account in cognitive science maintains that such categories are represented in the mind/brain as purely symbolic entities, whose semantic content and usefulness derive primarily from how they relate to each other (9, 10). Such representations are seen as qualitatively distinct from the sensory-motor processes through which we interact with the world, much like the distinction between software and hardware in digital computers.

An experiential representational system, on the other hand, encodes information about the experiences that led to the formation of particular concepts.
It is motivated by a view, often referred to as embodied, grounded, or situated semantics, in which concepts arise primarily from generalization over particular experiences, as information originating from the various modality-specific systems (e.g., visual, auditory, tactile, motor, affective) is combined and re-encoded into progressively more schematic representations that are stored in memory. Since, in this view, there is a degree of continuity between conceptual and modality-specific systems, concept representations are thought to reflect the structure of the perceptual, affective, and motor processes involved in those experiences (11–14).

Finally, distributional information pertains to statistical patterns of co-occurrence between lexical concepts (i.e., concepts that are widely shared within a population and denoted by a single word) in natural language usage. As is now widely appreciated, these co-occurrence patterns encode a substantial amount of information about word meaning (15–17). Although word co-occurrence patterns primarily encode contextual associations, such as those connecting the words “cow,” “barn,” and “farmer,” semantic similarity information is indirectly encoded, since words with similar meanings tend to appear in similar contexts (e.g., “cow” and “horse,” “pencil” and “pen”). This has led some authors to propose that concepts may be represented in the brain, at least in part, in terms of distributional information (15, 18).

Whether, and to what extent, each of these types of information plays a role in the neural representation of conceptual knowledge is a topic of intense research and debate. A large body of evidence has emerged from behavioral studies, functional neuroimaging experiments, and neuropsychological assessments of patients with semantic deficits, with results typically interpreted in terms of taxonomic (19–24), experiential (13, 25–34), or distributional (2, 3, 5, 35, 36) accounts.
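The distributional idea sketched above — that words with similar meanings occupy similar contexts — can be illustrated with a toy co-occurrence model. The counts below are invented purely for illustration; they are not drawn from any corpus used in the study.

```python
import numpy as np

# Toy co-occurrence counts (rows: target words, columns: context words).
# These numbers are fabricated for illustration only.
contexts = ["milk", "field", "write", "paper"]
counts = {
    "cow":    np.array([8.0, 9.0, 0.0, 1.0]),
    "horse":  np.array([2.0, 9.0, 0.0, 1.0]),
    "pencil": np.array([0.0, 1.0, 7.0, 9.0]),
    "pen":    np.array([1.0, 0.0, 8.0, 9.0]),
}

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words with similar meanings appear in similar contexts, so their
# co-occurrence vectors point in similar directions.
print(cosine(counts["cow"], counts["horse"]))   # high (~0.87)
print(cosine(counts["cow"], counts["pencil"]))  # low (~0.13)
```

Real distributional models work the same way in principle, but over millions of contexts and with reweighting and dimensionality reduction applied to the raw counts.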
However, the extent to which each of these representational systems plays a role in the neural representation of conceptual knowledge remains controversial (23, 37, 38), in part because their representations of common lexical concepts are strongly intercorrelated. Patterns of word co-occurrence in natural language are driven in part by taxonomic and experiential similarities between the concepts to which they refer, and the taxonomy of natural categories is systematically related to the experiential attributes of the exemplars (39–41). Consequently, the empirical evidence currently available is unable to discriminate between these representational systems.

Several computational models of concept representation have been proposed based on these structures. While earlier models relied heavily on hierarchical taxonomic structure (42, 43), more recent proposals have emphasized the role of experiential and/or distributional information (34, 44–46). The model by Chen and colleagues (45), for example, showed that graded taxonomic structure can emerge from statistically coherent covariation found across experiences and exemplars, without explicitly coding such taxonomic information per se. Other models propose that concepts may be formed through the combination of experiential and distributional information (44, 46), suggesting a dual representational code akin to Paivio’s dual coding theory (47).

We investigated the relative contribution of each representational system by deriving quantitative predictions from each system for the similarity structure of a large set of concepts and then using representational similarity analysis (RSA) with high-resolution functional MRI (fMRI) to evaluate those predictions. Unlike the more typical cognitive subtraction technique, RSA focuses on the information structure of the pattern of neural responses to a set of stimuli (48).
For a given stimulus set (e.g., words), RSA assesses how well the representational similarity structure predicted by a model matches the neural similarity structure observed in fMRI activation patterns (Fig. 1). This allowed us to directly compare, in quantitative terms, predictions derived from the three representational systems.

Fig. 1. Representational similarity analysis. (A) An fMRI activation map was generated for each concept presented in the study, and the activation across voxels was reshaped as a vector. (B) The neural RDM (representational dissimilarity matrix) for the stimulus set was generated by computing the dissimilarity between these vectors (1 − correlation) for every pair of concepts. (C) A model-based RDM was computed from each model, and the similarity between each model’s RDM and the neural RDM was evaluated via Spearman correlation. (D) Anatomically defined ROIs. The dashed line indicates the boundary where temporal lobe ROIs were split into anterior and posterior portions (see main text for acronyms). (E) Cortical areas included in the functionally defined semantic network ROI (49).
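The RSA pipeline described in the Fig. 1 caption — pairwise dissimilarities of 1 − correlation, followed by a Spearman correlation between the neural and model RDMs — can be sketched in a few lines. This is a minimal illustration of the general technique with simulated data, not the authors' analysis code; the array shapes and the toy data are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the row vectors (one row per concept)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(neural_patterns, model_features):
    """Spearman correlation between the upper triangles of the neural
    and model-based RDMs (diagonal excluded, as it is trivially zero)."""
    neural_rdm = rdm(neural_patterns)
    model_rdm = rdm(model_features)
    iu = np.triu_indices_from(neural_rdm, k=1)
    rho, _ = spearmanr(neural_rdm[iu], model_rdm[iu])
    return rho

# Simulated example: 10 concepts, each with an 8-dimensional model
# representation, driving activation across 50 voxels.
rng = np.random.default_rng(0)
features = rng.normal(size=(10, 8))                 # model representations
patterns = features @ rng.normal(size=(8, 50))      # simulated voxel patterns
print(rsa_score(patterns, features))
```

Because the analysis operates on similarity structure rather than raw activation levels, the model and the neural data can live in entirely different spaces (feature ratings vs. voxels), which is what allows the three representational systems to be compared on equal footing.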
Keywords: semantic memory; concept representation; lexical semantics; embodied semantics; representational similarity analysis