Optimizing the human learnability of abstract network representations
Authors:William Qian  Christopher W. Lynn  Andrei A. Klishin  Jennifer Stiso  Nicolas H. Christianson  Dani S. Bassett
Abstract:Precisely how humans process relational patterns of information in knowledge, language, music, and society is not well understood. Prior work in the field of statistical learning has demonstrated that humans process such information by building internal models of the underlying network structure. However, these mental maps are often inaccurate due to limitations in human information processing. The existence of such limitations raises clear questions: Given a target network that one wishes for a human to learn, what network should one present to the human? Should one simply present the target network as-is, or should one emphasize certain parts of the network to proactively mitigate expected errors in learning? To investigate these questions, we study the optimization of network learnability in a computational model of human learning. Evaluating an array of synthetic and real-world networks, we find that learnability is enhanced by reinforcing connections within modules or clusters. In contrast, when networks contain significant core–periphery structure, we find that learnability is best optimized by reinforcing peripheral edges between low-degree nodes. Overall, our findings suggest that the accuracy of human network learning can be systematically enhanced by targeted emphasis and de-emphasis of prescribed sectors of information.

From a young age, humans demonstrate the capacity to learn the relationships between concepts (1–3). During the learning process, humans are exposed to discrete chunks of information that combine and interconnect to form cognitive maps that can be represented as complex networks (4–9). These chunks of information often appear in a natural sequential order, such as words in language, notes in music, and abstract concepts in stories and classroom lectures (10–14). Further, these sequences are encoded in the brain as networks, with links between items reflecting observed transitions (see refs. 15–18 for empirical studies and ref. 19 for a recent review). Broadly, the fact that many different types of information exhibit temporal order (and therefore network structure) motivates investigations into the processes that underlie the human learning of transition networks (8, 19, 20).

To understand the network-learning process, recent studies have investigated how humans internally construct abstract representations of associations (21–23). Using a variety of approaches, from computational models to artificial neural networks, such studies have consistently found that the mind builds network representations by integrating information over time. Such integration enables humans to compress exact sequences of experienced events into broader, but less precise, representations of context (24). These mental representations allow learners to make better generalizations about new information, at the cost of accuracy (22). Here, we focus on one particular modeling approach that accounts for the temporal integration and inaccuracies inherent in human learning. In particular, we build upon a maximum-entropy model, which posits that the mind learns a network representation of the world in a manner guided by a tradeoff between accuracy and complexity (21, 25).
Specifically, in order to conserve mental resources, humans tend to reduce the complexity of their representations at the cost of accuracy by allowing for errors during the learning process.

While inaccuracies in human learning can aid flexibility across contexts, they present fundamental obstacles to the human comprehension of transition networks. Thus, a clear question emerges: What strategies should be employed to most effectively communicate the structure of a network to an inaccurate human learner? Prior studies of animal communication and behavior have demonstrated the utility of exaggerating the presentation of certain signals to receivers in order to offset erroneous information processing (26, 27). Similarly, one could imagine that, by emphasizing some features of a network over others, one may be able to correct for errors in human learning. Such targeted modulation of emphasis may be helpful not only in learning a whole network, but also in optimally learning particularly challenging parts of a network. In fact, humans show consistent difficulties in learning certain motifs in networks, such as the connections between modules (21, 28–30). Taken together, these observations suggest that disproportionately weighting specific network features that are difficult to learn may facilitate human network learning.
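In prior maximum-entropy work on graph learning, this tradeoff takes a closed form: the learner's internal estimate of the transition structure is an exponentially weighted average over walks of all lengths, so that longer temporal integration (smaller inverse-temperature β) yields a simpler but blurrier map. A minimal sketch of that form is below; the function name, the β parameterization, and the toy ring network are illustrative choices, not taken from this paper.

```python
import numpy as np

def learned_representation(A, beta):
    """Learner's internal transition estimate under a maximum-entropy
    integration model: A_hat = (1 - e^{-beta}) A (I - e^{-beta} A)^{-1},
    i.e., a geometric average over powers of the true transition matrix A.
    Small beta -> heavy temporal integration (simple, inaccurate map);
    large beta -> A_hat approaches the true transition matrix A."""
    n = A.shape[0]
    eta = np.exp(-beta)  # weight decay on longer walks
    return (1 - eta) * A @ np.linalg.inv(np.eye(n) - eta * A)

# Toy example: a 4-node ring with uniform transition probabilities.
A = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
A_hat = learned_representation(A, beta=0.5)    # blurred internal estimate
A_exact = learned_representation(A, beta=50.0)  # nearly exact recovery of A
```

Note that each row of `A_hat` still sums to one, so the blurred estimate remains a valid transition matrix; the integration merely spreads probability onto nodes reachable in more than one step, which is the kind of systematic learning error the paper's optimization is designed to counteract.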
Keywords: graph learning; maximum entropy; complex networks