Similar literature
A total of 20 similar documents were retrieved.
1.
2.
3.
4.
The COVID-19 pandemic led to lockdowns in countries across the world, changing the lives of billions of people. The United Kingdom’s first national lockdown, for example, restricted people’s ability to socialize and work. The current study examined how changes to socializing and working during this lockdown impacted ongoing thought patterns in daily life. We compared the prevalence of thought patterns between two independent real-world, experience-sampling cohorts, collected before and during lockdown. In both samples, young (18 to 35 y) and older (55+ y) participants completed experience-sampling measures five times daily for 7 d. Dimension reduction was applied to these data to identify common “patterns of thought.” Linear mixed modeling compared the prevalence of each thought pattern 1) before and during lockdown, 2) in different age groups, and 3) across different social and activity contexts. During lockdown, when people were alone, social thinking was reduced, but on the rare occasions when social interactions were possible, we observed a greater increase in social thinking than prelockdown. Furthermore, lockdown was associated with a reduction in future-directed problem solving, but this thought pattern was reinstated when individuals engaged in work. Therefore, our study suggests that the lockdown led to significant changes in ongoing thought patterns in daily life and that these changes were associated with changes to our daily routine that occurred during lockdown.

On March 23, 2020, the United Kingdom entered a nationwide lockdown to curb the spread of COVID-19. This first national lockdown required people to stay at home and not meet with anyone outside their household. Social gatherings were banned, and “nonessential” industries were closed, reducing opportunities for work (1). There were also large economic changes (2), and death rates increased substantially (3). Studies show the lockdown had widespread psychological and behavioral consequences including elevated anxiety and depression levels (4), overall deterioration of mental health (5), changes to diet and physical activity (6–8), high levels of loneliness (9), and increasing suicidal ideation (10). Our study used experience sampling to measure patterns of ongoing thoughts before and during lockdown in the United Kingdom, with the aim of understanding how specific features of the stay-at-home order impacted people’s thinking in daily life, and to use this data to inform contemporary theoretical views on ongoing thought.

Our investigation served three broad goals. First, the lockdown led to changes in opportunities for socializing, and contemporary theories of ongoing thought suggest that social processing is an important influence on our day-to-day thinking (11, 12). For example, previous research indicates that individuals spend a lot of time thinking about other people in daily life (13, 14) or when performing tasks dependent on social cognition in the laboratory (15). Importantly, spontaneous social thoughts decline following periods of solitude and increase following periods of social interaction in the laboratory (11). They can also facilitate socio-emotional adjustment during important life transitions, such as starting university (16). Furthermore, ongoing thought patterns with social features are associated with increased neural responses to social stimuli (in this case, faces) (17). Such evidence suggests that the social environment can shape ongoing thought, leading to the possibility that changes in opportunities for socialization following the stay-at-home order could have changed the expression of social thinking in daily life.

Second, lockdowns also disrupted individuals’ normal working practices, forcing people to reassess their goals. Prior work highlights that ongoing thought content is linked to an individual’s current concerns and self-related goals (18–21) and that experimentally manipulating an individual’s goals can prime ongoing thought to focus on these issues (21–23). In particular, a substantial proportion of ongoing thoughts are future directed (14, 18, 21, 24–26), and this “prospective bias” is thought to support the formation and refinement of personal goals for future behavior (18, 21, 27, 28). Notably, this type of thought is also important in maintaining mental health through associations with improved subsequent mood (24) and reduced suicidal ideation (29, 30). Changes to opportunities for working during the lockdown, therefore, provide a chance to understand whether prospective features of ongoing thought are altered when important external commitments change.

Third, previous work indicates that the contents of thought vary across the life span. For example, during periods of low cognitive demand, younger adults report significantly more future-directed thoughts, while older adults report significantly more past-related thoughts (31). At rest, older adults report more “novel” and present-oriented thoughts compared to younger adults (32).
In daily life, older adults tend to report fewer “off-task” thoughts than younger adults, and their thoughts are rated as more “pleasant,” “interesting,” and “clear” (33). Finally, aging is associated with a decline in daydreaming, particularly a reduction in topics such as the future, fear of failure, or guilt (34). However, the degree to which these age-related changes are explained by lifestyle differences between young and older individuals is unclear. The lockdown may have altered key contextual factors that, under normal circumstances, differ systematically between younger and older adults. For example, increasing age is associated with more interactions with family members and fewer with “peripheral partners” (e.g., coworkers, acquaintances, and strangers) (35), a pattern that may be common in younger people during lockdown. With all this in mind, the lockdown provided an opportunity to examine whether changes to daily life during the lockdown differentially impacted ongoing thought patterns in younger and older individuals.

Our study used an experience-sampling methodology in which people are signaled at random times in their daily lives to obtain multiple reports describing features of their ongoing thoughts and the context in which they occur (e.g., social environment, activity, and location) (36). To examine the contents of people’s thoughts, we used multidimensional experience sampling (MDES) (37). In this method, participants describe their in-the-moment thoughts by rating their thoughts on several dimensions (e.g., temporal focus or relationship to self and others) (38). Dimension reduction techniques can then be applied to use covariation in the responses to different questions to identify “patterns of thought” (37, 39). Previous studies have used MDES to identify common patterns of ongoing thought, varying in both form and content, often with distinct neural correlates (27, 37, 39–43). For example, a pattern of episodic social cognition is associated with increased activity within regions of the ventromedial prefrontal cortex associated with memory and social cognition (41), while a pattern of external task focus is associated with increased activity in the intraparietal sulcus (42). In addition, at rest, visual imagery is associated with stronger interactions between the precuneus and lateral frontotemporal network (44), while detailed task focus is high during working memory tasks (15) and other complex tasks (45) and linked to activity in the default mode network during working memory maintenance (46).

In summary, our study set out to examine whether ongoing thought patterns experienced during lockdown differed from those normally reported in daily life, focusing on the consequences of changes in opportunities for socialization and work. The prelockdown sample was an existing dataset used to provide a baseline for ongoing thought patterns in daily life before lockdown restrictions. In both samples, young (18 to 35 y) and older (55+ y) participants completed surveys five times daily over 7 d. Each sampling point obtained in-the-moment measures of key dimensions of ongoing thought using MDES (37). Participants also provided information regarding the social environment in which the experience occurred. Dimension reduction was applied to both samples’ thought data to identify common patterns of thought. We then used linear mixed modeling (LMM) to explore the prevalence of each thought pattern 1) before and during lockdown, 2) in different age groups, and 3) across social contexts.
In the lockdown sample, participants provided additional information regarding their current activity (e.g., working or leisure activities) and virtual social environment, which we used to explore how specific features of daily life during lockdown corresponded with patterns of thought.
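The analysis pipeline described here (dimension reduction over MDES ratings, then mixed models on the resulting pattern scores) can be illustrated with a minimal Python sketch. This is not the authors' code; the file name, column names (participant, cohort, age_group, alone, q1…q16), and pattern labels are hypothetical placeholders.

```python
# Minimal sketch: PCA over experience-sampling items, then a linear mixed model
# comparing one thought pattern across lockdown cohort, age group, and social context.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import statsmodels.formula.api as smf

df = pd.read_csv("mdes_probes.csv")                     # one row per experience-sampling probe (hypothetical file)
items = [c for c in df.columns if c.startswith("q")]    # the MDES rating scales

# Identify "patterns of thought" as principal components of the item ratings.
z = StandardScaler().fit_transform(df[items])
scores = PCA(n_components=4).fit_transform(z)
pattern_names = ["social", "future_problem_solving", "detailed_task", "episodic"]  # labels assigned after inspecting loadings
for k, name in enumerate(pattern_names):
    df[name] = scores[:, k]

# Probes are nested within people, so use random intercepts per participant.
m = smf.mixedlm("social ~ cohort * age_group * alone",
                data=df, groups=df["participant"]).fit()
print(m.summary())
```

In this sketch the interaction terms play the role of the comparisons reported above, e.g., whether being alone relates differently to social thinking before versus during lockdown.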

5.
There has been growing concern about the role social media plays in political polarization. We investigated whether out-group animosity was particularly successful at generating engagement on two of the largest social media platforms: Facebook and Twitter. Analyzing posts from news media accounts and US congressional members (n = 2,730,215), we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. Language about the out-group was a very strong predictor of “angry” reactions (the most popular reactions across all datasets), and language about the in-group was a strong predictor of “love” reactions, reflecting in-group favoritism and out-group derogation. This out-group effect was not moderated by political orientation or social media platform, but stronger effects were found among political leaders than among news media accounts. In sum, out-group language is the strongest predictor of social media engagement across all relevant predictors measured, suggesting that social media may be creating perverse incentives for content expressing out-group animosity.

According to a recent article in the Wall Street Journal, a Facebook research team warned the company in 2018 that their “algorithms exploit the human brain’s attraction to divisiveness.” This research was allegedly shut down by Facebook executives, and Facebook declined to implement changes proposed by the research team to make the platform less divisive (1). This article is consistent with concerns that social media might be incentivizing the spread of polarizing content. For instance, Twitter CEO Jack Dorsey has expressed concern about the popularity of “dunking” (i.e., mocking or denigrating one’s enemies) on the platform (2). These concerns have become particularly relevant as social media rhetoric appears to have incited real-world violence, such as the recent storming of the US Capitol (3). We sought to investigate whether out-group animosity was associated with increased virality on two of the largest social media platforms: Facebook and Twitter.

A growing body of research has examined the potential role of social media in exacerbating political polarization (4, 5). A large portion of this work has centered on the position that social media sorts us into “echo chambers” or “filter bubbles” that selectively expose people to content that aligns with their preexisting beliefs (6–11). However, some recent scholarship questions whether the “echo chamber” narrative has been exaggerated (12, 13). Some experiments suggest that social media can indeed increase polarization. For example, temporarily deactivating Facebook can reduce polarization on policy issues (14). However, other work suggests that polarization has grown the most among older demographic groups, who are the least likely to use social media (15), albeit the most likely to vote. As such, there is an open debate about the role of social media in political polarization and intergroup conflict.

Other research has examined the features of social media posts that predict “virality” online. Much of the literature focuses on the role of emotion in social media sharing. High-arousal emotions, whether they are positive (e.g., awe) or negative (e.g., anger or outrage), contribute to the sharing of content online (16–20). Tweets expressing moral and emotional content are more likely to be retweeted within online political conversations, especially by members of one’s political in-group (21, 22). On Facebook, posts by politicians that express “indignant disagreement” receive more likes and shares (23), and negative news tends to spread farther on Twitter (24). Moreover, false rumors spread farther and faster on Twitter than true ones, especially in the domain of politics, possibly because they are more likely to express emotions such as surprise and fear (25).

Yet, to our knowledge, little research has investigated how social identity motives contribute to online virality. Group identities are hypersalient on social media, especially in the context of online political or moral discussions (26). For example, an analysis of Twitter accounts found that people are increasingly categorizing themselves by their political identities in their Twitter bios over time, providing a public signal of their social identity (27). Additionally, since sharing behavior is public, it can reflect self-conscious identity presentation (28, 29).
According to social identity theory (30) and self-categorization theory (31), when group identities are highly salient, this can lead individuals to align themselves more with their fellow in-group members, facilitating in-group favoritism and out-group derogation in order to maintain a positive sense of group distinctiveness (32). Thus, messages that fulfill group-based identity motives may receive more engagement online. As an anecdotal example, executives at the website Buzzfeed, which specializes in creating viral content, reportedly noticed that identity-related content contributed to virality and began creating articles appealing to specific group identities (33).

People may process information in a manner that is consistent with their partisan identities, prior beliefs, and motivations, a process known as motivated cognition (34–37). Scholars noted early on that the degree to which individuals identify with their political party “raises a perceptual screen through which the individual tends to see what is favorable to his [or her] partisan orientation” (38). Partisan motivations have been hypothesized to influence online behavior, such as the sharing of true and false news online (39, 40). Accordingly, we suggest that just as people engage in motivated cognition—processing information in a way that supports their beliefs—people may also engage in motivated tweeting (or sharing, liking, or retweeting), selectively interacting with and attending to content that aligns with their partisan identity motivations. There is already evidence suggesting that people selectively follow (41) and retweet (10, 42) in-group members at much higher rates than out-group members.

In polarized political contexts, out-group animosity may be a more successful strategy for expressing one’s partisan identity and generating engaging content than in-group favoritism. Political polarization has been growing rapidly in the United States over the past few decades. Affective polarization, which reflects dislike of people in the opposing political party as compared to one’s own party, has most strikingly increased (43), and ideological polarization may have increased as well (though this is still a topic of debate) (44). This growth in affective polarization is driven primarily by increasing out-party animosity (rather than increasing in-party warmth)—a phenomenon known as “negative partisanship” (45). According to recently released American National Election Studies data, affective polarization grew particularly steeply from 2016 to 2020, reaching its highest point in 40 y. Out-party animosity, more so than in-party warmth, has also become a more powerful predictor of important behaviors, such as voting behavior (46) and the sharing of political fake news (39). When out-party animosity is strong, partisans are motivated to distinguish themselves from the out-party (by, for instance, holding opinions that are distinct from the out-party) (47). While some research suggests that out-group cues might be more powerful than in-group cues (48), there is still debate about the extent to which partisan belief and behavior are driven by in-group favoritism versus out-group derogation (49).
A limitation of prior research is that much of it is based on self-report surveys, and so it remains unknown how expressions of in-group favoritism or out-group animosity play out in a social media context—or whether one might be a more powerful contributor to virality than the other.

We investigated the role that political in-group and out-group language, as well as emotional language, play in predicting online engagement in a large sample of posts from news media accounts and US congressional members (n = 2,730,215). We sought to examine this on both Facebook and Twitter since they are two of the world’s largest and most influential social media companies and constitute around three billion users out of nearly four billion total social media users worldwide (50). Specifically, we were interested in 1) how political in-group and out-group language compared to other established predictors of social media engagement, 2) whether in-group or out-group language was a better predictor of shares and retweets, and 3) whether out-group terms were associated with negative emotions (as measured by the six Facebook “reactions”), and whether in-group terms were associated with positive emotions, reflecting patterns of out-party derogation and in-group favoritism. Finally, 4) we wanted to see if these findings applied to both news sources and political leaders, who often have an outsized influence on social discourse as well as policy change.
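The basic analytic idea, counting in-group and out-group dictionary terms in each post and relating them to engagement, can be sketched as follows. This is an illustration rather than the study's actual pipeline; the tiny word lists, file name, and column names are placeholders, and a negative binomial model is used here simply as a reasonable choice for overdispersed share counts.

```python
# Minimal sketch: count in-group/out-group terms per post, then model share counts.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

OUTGROUP = {"democrat", "liberal", "biden"}       # placeholder dictionary for a Republican account
INGROUP = {"republican", "conservative", "trump"}

def count_terms(text, vocab):
    return sum(tok.strip(".,!?").lower() in vocab for tok in text.split())

posts = pd.read_csv("posts.csv")                  # hypothetical columns: text, shares
posts["outgroup_n"] = posts["text"].apply(count_terms, vocab=OUTGROUP)
posts["ingroup_n"] = posts["text"].apply(count_terms, vocab=INGROUP)

m = smf.glm("shares ~ outgroup_n + ingroup_n",
            data=posts, family=sm.families.NegativeBinomial()).fit()
print(m.summary())
# exp(coefficient) - 1 approximates the percent change in expected shares per added term.
```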

6.
How do shared conventions emerge in complex decentralized social systems? This question engages fields as diverse as linguistics, sociology, and cognitive science. Previous empirical attempts to solve this puzzle all presuppose that formal or informal institutions, such as incentives for global agreement, coordinated leadership, or aggregated information about the population, are needed to facilitate a solution. Evolutionary theories of social conventions, by contrast, hypothesize that such institutions are not necessary in order for social conventions to form. However, empirical tests of this hypothesis have been hindered by the difficulties of evaluating the real-time creation of new collective behaviors in large decentralized populations. Here, we present experimental results—replicated at several scales—that demonstrate the spontaneous creation of universally adopted social conventions and show how simple changes in a population’s network structure can direct the dynamics of norm formation, driving human populations with no ambition for large-scale coordination to rapidly evolve shared social conventions.

Social conventions are the foundation for social and economic life (1–7). However, it remains a central question in the social, behavioral, and cognitive sciences to understand how these patterns of collective behavior can emerge from seemingly arbitrary initial conditions (2–4, 8, 9). Large populations frequently manage to coordinate on shared conventions despite a continuously evolving stream of alternatives to choose from and no a priori differences in the expected value of the options (1, 3, 4, 10). For instance, populations are able to produce linguistic conventions on accepted names for children and pets (11), on common names for colors (12), and on popular terms for novel cultural artifacts, such as referring to junk email as “SPAM” (13, 14). Similarly, economic conventions, such as bartering systems (2), beliefs about fairness (3), and consensus regarding the exchangeability of goods and services (15), emerge with clear and widespread agreement within economic communities yet vary broadly across them (3, 16).

Prominent theories of social conventions suggest that institutional mechanisms—such as centralized authority (14), incentives for collective agreement (15), social leadership (16), or aggregated information (17)—can explain global coordination. However, these theories do not explain whether, or how, it is possible for conventions to emerge when social institutions are not already in place to guide the process. A compelling alternative approach comes from theories of social evolution (2, 18–20). Social evolutionary theories maintain that networks of locally interacting individuals can spontaneously self-organize to produce global coordination (21, 22). Although there is widespread interest in this approach to social norms (6, 7, 14, 18, 23–26), the complexity of the social process has prevented systematic empirical insight into the thesis that these local dynamics are sufficient to explain universally adopted conventions (27, 28).

Several difficulties have limited prior empirical research in this area. The most notable of these limitations is scale.
Although compelling experiments have successfully shown the creation of new social conventions in dyadic and small group interactions (29–31), the results in small group settings can be qualitatively different from the dynamics in larger groups (Model), indicating that small group experiments are insufficient for demonstrating whether or how new conventions endogenously form in larger populations (32, 33). Important progress on this issue has been made using network-based laboratory experiments on larger groups (15, 24). However, this research has been restricted to studying coordination among players presented with two or three options with known payoffs. Natural convention formation, by contrast, is significantly complicated by the capacity of individuals to continuously innovate, which endogenously expands the “ecology” of alternatives under evaluation (23, 29, 31). Moreover, prior experimental studies have typically assumed the existence of either an explicit reward for universal coordination (15) or a mechanism that aggregates and reports the collective state of the population (17, 24), which has made it impossible to evaluate the hypothesis that global coordination is the result of purely local incentives.

More recently, data science approaches to studying norms have addressed many of these issues by analyzing behavior change in large online networks (34). However, these observational studies are limited by familiar problems of identification that arise from the inability to eliminate the confounding influences of institutional mechanisms. As a result, previous empirical research has been unable to identify the collective dynamics through which social conventions can spontaneously emerge (8, 34–36).

We addressed these issues by adopting a web-based experimental approach. We studied the effects of social network structure on the spontaneous evolution of social conventions in populations without any resources to facilitate global coordination (9, 37). Participants in our study were rewarded for coordinating locally; however, they had neither incentives nor information for achieving large-scale agreement. Further, to eliminate any preexisting bias in the evolutionary process, we studied the emergence of arbitrary linguistic conventions, in which none of the options had any a priori value or advantage over the others (3, 23). In particular, we considered the prototypical problem of whether purely local interactions can trigger the emergence of a universal naming convention (38, 39).
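How purely local pairwise coordination can produce a global naming convention is easy to see in a toy simulation of a minimal "naming game." This sketch is illustrative only and is not the authors' experimental protocol: agents only ever see their current partner's choice and receive no population-level information, yet the inventory of names typically collapses toward a single convention.

```python
# Minimal naming-game sketch on a clustered network (illustrative, not the study design).
import random
import networkx as nx

def naming_game(G, rounds=20000, seed=0):
    rng = random.Random(seed)
    inventories = {n: set() for n in G.nodes}
    for _ in range(rounds):
        speaker = rng.choice(list(G.nodes))
        hearer = rng.choice(list(G.neighbors(speaker)))
        if not inventories[speaker]:
            inventories[speaker].add(f"name-{rng.randrange(10**6)}")   # invent a new word
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:
            inventories[speaker] = {word}        # success: both collapse to the shared word
            inventories[hearer] = {word}
        else:
            inventories[hearer].add(word)        # failure: hearer learns the word
    return inventories

G = nx.watts_strogatz_graph(100, 6, 0.1, seed=1)  # a spatially clustered toy network
inv = naming_game(G)
print("distinct names still in circulation:", len(set().union(*inv.values())))
```

Varying the network constructor (e.g., toward more homogeneously mixed graphs) is the kind of structural change that, in the study's terms, can redirect the dynamics of norm formation.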

7.
Despite its theoretical prominence and sound principles, integrated pest management (IPM) continues to suffer from anemic adoption rates in developing countries. To shed light on the reasons, we surveyed the opinions of a large and diverse pool of IPM professionals and practitioners from 96 countries by using structured concept mapping. The first phase of this method elicited 413 open-ended responses on perceived obstacles to IPM. Analysis of responses revealed 51 unique statements on obstacles, the most frequent of which was “insufficient training and technical support to farmers.” Cluster analyses, based on participant opinions, grouped these unique statements into six themes: research weaknesses, outreach weaknesses, IPM weaknesses, farmer weaknesses, pesticide industry interference, and weak adoption incentives. Subsequently, 163 participants rated the obstacles expressed in the 51 unique statements according to importance and remediation difficulty. Respondents from developing countries and high-income countries rated the obstacles differently. As a group, developing-country respondents rated “IPM requires collective action within a farming community” as their top obstacle to IPM adoption. Respondents from high-income countries prioritized instead the “shortage of well-qualified IPM experts and extensionists.” Differential prioritization was also evident among developing-country regions, and when obstacle statements were grouped into themes. Results highlighted the need to improve the participation of stakeholders from developing countries in the IPM adoption debate, and also to situate the debate within specific regional contexts.

Feeding the 9 billion people expected to inhabit Earth by 2050 will present a constant and significant challenge in terms of agricultural pest management (1–3). Despite a 15- to 20-fold increase in pesticide use since the 1960s, global crop losses to pests—arthropods, diseases, and weeds—have remained unsustainably high, even increasing in some cases (4). These losses tend to be highest in developing countries, averaging 40–50%, compared with 25–30% in high-income countries (5). Alarmingly, crop pest problems are projected to increase because of agricultural intensification (4, 6), trade globalization (7), and, potentially, climate change (8).

Since the 1960s, integrated pest management (IPM) has become the dominant crop protection paradigm, being endorsed globally by scientists, policymakers, and international development agencies (2, 9–15). The definitions of IPM are numerous, but all involve the coordinated integration of multiple complementary methods to suppress pests in a safe, cost-effective, and environmentally friendly manner (9, 11). These definitions also recognize IPM as a dynamic process in terms of design, implementation, and evaluation (11). In practice, however, there is a continuum of interpretations of IPM (e.g., refs. 14, 16, 17), but bounded by those that emphasize pesticide management (i.e., “tactical IPM”) and those that emphasize agroecosystem management (i.e., “strategic IPM,” also known as “ecologically based pest management”) (16, 18, 19).
Despite apparently solid conceptual grounding and substantial promotion by the aforementioned groups, IPM has a discouragingly poor adoption record, particularly in developing-country settings (9, 10, 15–23), raising questions over its applicability as it is presently conceived (15, 16, 22, 24).

The possible reasons behind the developing countries’ poor adoption of IPM have been the subject of considerable discussion since the 1980s (9, 15, 16, 22, 25–31), but this debate has been notable for the limited direct involvement from developing-country stakeholders. Most of the literature exploring poor adoption of IPM in the developing world has originated in the developed world (e.g., refs. 15, 16, 22). An international workshop, entitled “IPM in Developing Countries,” was held at the Pontificia Universidad Católica del Ecuador (PUCE) from October 31 to November 3, 2011. Poor IPM adoption spontaneously became a central discussion point, creating an opportunity to address the apparent participation bias in the IPM adoption debate.

It was therefore decided to explore the topic further by eliciting and mapping the opinions of a large and diverse pool of IPM professionals and practitioners from around the world, including many based in developing countries. The objective was to generate and prioritize a broad list of hypotheses to explain poor IPM adoption in developing-country agriculture. We also wanted to explore differences as influenced by respondents’ characteristics, particularly their region of practice. To achieve these objectives, we used structured concept mapping (32), an empirical survey method often used to quantify and give thematic structure to open-ended opinions (33).

We know of only one other similar study that characterizes obstacles to IPM. It was based on the structured responses of 153 experts, all from high-income countries (30). Our survey was designed to progress from unstructured to structured responses, and to reach a much larger and more diverse pool of participants, particularly those from the “Global South.” Considering that the vast majority of farmers live in developing countries (34), it would seem imperative that the voices from this region be heard.
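The clustering step of structured concept mapping, grouping sorted statements into themes, can be sketched in a few lines. This is an illustrative toy with made-up pile sorts, not the study's analysis: statements that participants repeatedly place in the same pile end up close together and are merged into themes by hierarchical clustering.

```python
# Toy sketch of the concept-mapping clustering step (hypothetical sort data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# sorts[p] maps each statement id to the pile participant p put it in.
sorts = [
    {0: 0, 1: 0, 2: 1, 3: 1, 4: 2},
    {0: 0, 1: 1, 2: 1, 3: 1, 4: 2},
    {0: 0, 1: 0, 2: 0, 3: 1, 4: 1},
]
n = 5
co = np.zeros((n, n))
for s in sorts:
    for i in range(n):
        for j in range(n):
            co[i, j] += s[i] == s[j]              # how often two statements share a pile
dist = 1 - co / len(sorts)                         # co-sorting rate -> distance

Z = linkage(dist[np.triu_indices(n, k=1)], method="average")   # condensed distance matrix
themes = fcluster(Z, t=2, criterion="maxclust")
print(themes)                                      # theme label per statement
```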

8.
The remarkable robustness of many social systems has been associated with a peculiar triangular structure in the underlying social networks. Triples of people that have three positive relations (e.g., friendship) between each other are strongly overrepresented. Triples with two negative relations (e.g., enmity) and one positive relation are also overrepresented, and triples with one or three negative relations are drastically suppressed. For almost a century, the mechanism behind these very specific (“balanced”) triad statistics remained elusive. Here, we propose a simple realistic adaptive network model, where agents tend to minimize social tension that arises from dyadic interactions. Both opinions of agents and their signed links (positive or negative relations) are updated in the dynamics. The key aspect of the model resides in the fact that agents only need information about their local neighbors in the network and do not require (often unrealistic) higher-order network information for their relation and opinion updates. We demonstrate the quality of the model on detailed temporal relation data of a society of thousands of players of a massive multiplayer online game where we can observe triangle formation directly. It not only successfully predicts the distribution of triangle types but also explains empirical group size distributions, which are essential for social cohesion. We discuss the details of the phase diagrams behind the model and their parameter dependence, and we comment on to what extent the results might apply universally in societies.

Recognizing the fundamental role of triadic interactions in shaping social structures, Heider (1) introduced the notion of balanced and unbalanced triads. A triad (triangle) of individuals is balanced if it includes zero or two negative links; otherwise, it is unbalanced. Heider (1) hypothesized that social networks have a tendency to reduce the number of unbalanced triangles over time such that balanced triads would dominate in a stationary situation. This theory of “social balance” has been confirmed empirically in many different contexts, such as schools (2), monasteries (3), social media (4), or computer games (5). Social balance theory and its generalizations (6–8) have been studied extensively for more than a half century for their importance in understanding polarization of societies (9), global organization of social networks (10), evolution of the network of international relations (11), opinion formation (12, 13), epidemic spreading (14, 15), government formation (16), and decision-making processes (17).

Following Heider’s intuition (18–41), current approaches toward social balance often account for the effect of triangles on social network formation in one way or another. For example, the models in refs. 22 and 23 consider a reduction of the number of unbalanced triads either in the neighborhood of a node or in the whole network. The latter process sometimes leads to imbalance due to the existence of so-called jammed states (42). In order to reach social balance, individuals can also update their links according to their relations to common neighbors (18–21) or adjust link weights via opinion updates (24, 25) or via a minimization of social stress based on triadic interactions (37–44). These works not only ignore the difficulty of individuals to know the social interactions beyond their direct neighbors in reality; so far, they also have not considered the detailed statistical properties of the over- or underrepresentation of the different types of triads, such as those reported in refs. 4 and 5, with the exception of refs. 43 and 44.

It is generally believed that the similarity of individuals plays a crucial role in the formation of social ties in social networks, something that has been called homophily (45–48). This means that to form a positive or negative tie with another person, people compare only pairwise overlaps in their individual opinions (dyadic interaction). It has also been argued that social link formation takes into account a tendency in people to balance their local interaction networks in the sense that they introduce friends to each other, that they do give up friendships if two mutual friends have negative attitudes toward each other, and that they tend to avoid situations where everyone feels negatively about the others. This is the essence of social balance theory (1). Obviously, link formation following social balance is cognitively much more challenging than homophily-based link formation since in the former, one has to keep in mind the many mutual relations between all your neighbors in a social network. While social balance–driven link formation certainly occurs in the context of close friendships, it is less realistic to assume that this mechanism is at work in social link formation in general. In Fig. 1, we schematically show the situation in a portion of a social network. It is generally hard for node i to know all the relations between his neighbors j, k, and l.

Fig. 1. Schematic view of opinion and link updates in a society.
Every individual has an opinion vector whose components represent (binary) opinions on G = 5 different subjects. Red (blue) links denote positive (negative) relationships. The question marks denote unknown relationships between i’s neighbors. As an agent i flips one of its opinions (red circle), s_i1, from 1 to –1, i can either decrease or increase its individual stress, H(i), depending on the value of the parameter α (Eq. 1). For instance, H(i) would increase if α = 1 but would decrease for α = 0. For high “rationality” values of individuals with respect to social stress, as quantified by β, the latter is more likely to be accepted, resulting in a reduction of the number of unbalanced triads in i’s neighborhood.

Here, assuming that it is generally unrealistic for individuals to know their social networks at the triadic level, we aim to understand the emergence and the concrete statistics of balanced triads on the basis of dyadic or one-to-one interactions. Therefore, we use a classic homophily rule (45, 46) to define a “stress level” between any pair of individuals based on the similarity (or overlap) of their individual opinions. Here, the opinions of an individual i are represented by a vector with G components, s_i, that we show in Fig. 1. Homophily implies that i and j tend to become friends if the overlap (e.g., the scalar product of their opinion vectors) is positive, and they become enemies if the overlap is negative. Such a specification of homophily is often referred to as an attraction–repulsion or assimilation–differentiation rule (49, 50). Assuming that, generally, social relations rearrange such as to minimize individual social stress on average, we will show that balanced triads naturally emerge from purely dyadic homophilic interactions without any explicit selection mechanisms for specific triads. We formulate the opinion–link dynamics leading to social balance within a transparent physics-inspired framework. In particular, we observe a dynamic transition between two different types of balanced steady states that correspond to different compositions of balanced triads.

Explaining the empirical statistics of triangles in social systems is a challenge. Early works considered groups of a few monks in a monastery (3) or a few students in classrooms (51). The studies suffered from limited data and small network sizes. Large-scale studies were first performed in online platforms (4) and in the society of players of the massive multiplayer online game (MMOG) Pardus. Players in Pardus engage in a form of economic life, such as trade and mining, and in social activities, such as communication on a number of channels, forming friendships and enmities (details are in refs. 5, 52, and 53). In the social networks of this game, balanced triads were once more confirmed to be overrepresented compared with what is expected by chance. Similar patterns of triad statistics were also observed in Epinions, Slashdot, and Wikipedia (4). More details on the Pardus society are in Materials and Methods. This dataset gives us the unique possibility to validate the model and compare the predictions with actual triangle statistics and formation of positively connected groups that are foundational to social cohesion.
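The dyadic mechanism can be illustrated with a toy simulation. This is a simplified reading of the idea described above, not the paper's exact model (its Eq. 1 and the α parameter are not reproduced): opinions are binary vectors, signed links follow the sign of the opinion overlap (homophily), single-opinion flips are accepted with a probability controlled by a rationality parameter β, and balanced versus unbalanced triads are counted at the end.

```python
# Toy sketch: dyadic homophilic stress minimization and the resulting triad statistics.
import numpy as np

rng = np.random.default_rng(0)
N, G, beta = 60, 5, 3.0
s = rng.choice([-1, 1], size=(N, G))             # binary opinion vectors

def links(s):
    return np.sign(s @ s.T)                      # J_ij > 0: friends, J_ij < 0: enemies

def stress(i, s, J):
    h = -J[i] * (s @ s[i]) / G                   # dyadic stress terms of agent i
    h[i] = 0.0                                   # exclude the self-interaction
    return h.sum()

for _ in range(20000):                           # single-opinion-flip dynamics
    J = links(s)
    i, g = int(rng.integers(N)), int(rng.integers(G))
    before = stress(i, s, J)
    s[i, g] *= -1                                # propose flipping one opinion
    after = stress(i, s, links(s))
    if rng.random() >= np.exp(-beta * max(after - before, 0.0)):
        s[i, g] *= -1                            # reject stress-increasing flips (Metropolis-style)

J = links(s)                                     # balanced: zero or two negative links in a triangle
balanced = unbalanced = 0
for i in range(N):
    for j in range(i + 1, N):
        for k in range(j + 1, N):
            neg = int(J[i, j] < 0) + int(J[i, k] < 0) + int(J[j, k] < 0)
            balanced += neg in (0, 2)
            unbalanced += neg in (1, 3)
print("balanced:", balanced, "unbalanced:", unbalanced)
```

Even in this stripped-down version, agents need only pairwise overlaps with their neighbors, yet unbalanced triangles become rare, which is the qualitative point of the dyadic explanation.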

9.
Theories of human behavior suggest that individuals attend to the behavior of certain people in their community to understand what is socially normative and adjust their own behavior in response. An experiment tested these theories by randomizing an anticonflict intervention across 56 schools with 24,191 students. After comprehensively measuring every school’s social network, randomly selected seed groups of 20–32 students from randomly selected schools were assigned to an intervention that encouraged their public stance against conflict at school. Compared with control schools, disciplinary reports of student conflict at treatment schools were reduced by 30% over 1 year. The effect was stronger when the seed group contained more “social referent” students who, as network measures reveal, attract more student attention. Network analyses of peer-to-peer influence show that social referents spread perceptions of conflict as less socially normative.

One of the most elusive and important goals in the behavioral sciences is to understand how community-wide patterns of behavior can be changed (1–8). In some cases, social scientists seek to reduce widespread and persistent patterns of negative behavior like corruption or conflict; in others, to promote positive behavior like healthy eating or environmental conservation. Research on changing individual behavior provides many intervention strategies targeted to the psychology of the individual, such as attitudinal persuasion, situational cues, and peer influence (9–12). Another body of research focuses on scaling up behavior change interventions to the community level, studying attempts to reach every individual in a population with mass education or persuasion messaging (13), or with institutional regulation or defaults (14). A third strategy has been to seed a social network with individuals who demonstrate new behaviors, and to rely on processes of social influence to spread the behavior through the channel of structural features of the network (15–18).

The present paper incorporates all three approaches. We implemented a social influence strategy designed to change individual behavior, and we tested whether, as a result, new behaviors and norms are transmitted through a social network and also whether they scale up to shift overall levels of behavior within a community. Specifically, we randomized the selection of students within a comprehensively measured social network to determine the relative power of certain individuals to influence the behavior of others. We randomly assigned the presence of this treatment to some community networks and not others. This approach allowed us to determine whether influence from a small group of influential people is enough to shift a community’s behavioral climate, which we define as a widespread and persistent behavioral pattern across the community.

Our experimental design is motivated by theoretical debates about how social norms emerge and are transmitted within communities (1, 19–23). At the community level, it is believed that social norms, or perceptions of typical or desirable behavior, emerge when they support the survival of the group (24) or because of arbitrary historical precedent (23). Once formed, these informal rules for behavior are transmitted by the survival of those who follow them, or through the punishment of deviants and the social success of followers.
For these reasons, theory suggests that most individual community members strive to understand the social norms of a group and adjust their own behavior accordingly (21, 25). When many individuals in a community perceive a similar norm and adjust their behavior, then a community-wide behavioral pattern may emerge.

Social norms may be explained directly to community members through storytelling or advice, but small-scale experiments and theory suggest that individuals often infer which behaviors are typical and desirable through observation of other community members’ behavior (1, 21, 22). A large literature attempts to identify which community members are effective at transmitting social information across a community (16, 18, 26–28). Theories of norm perception predict that individuals infer community social norms by observing the behavior of community members who have many connections within the community’s social network (29). Sometimes called “social referents” (20), these highly connected community members may be viewed as important sources of normative information, in part because their many connections imply a comparatively greater knowledge of typical or desirable behavioral patterns in the community. In fact, social referents may have many connections for numerous reasons: they may have a higher status, they may be more popular, or they may have a greater capacity for socialization. Social referents may be different on many dimensions, but what they share is a comparatively greater amount of attention from their peers. Theory and evidence point to the prediction, supported by recent experimental evidence (20, 30), that social referents are particularly influential over perceptions of community norms and behavior in their network.

However, despite the large theoretical and empirical literature devoted to ideas about how social norms and behavioral patterns emerge and persist, the central question of which individual-level interventions can shift a community’s behavioral climate remains open. We pose this question in the context of adolescent school conflict, such as verbal and physical aggression, rumor mongering, and social exclusion. Although the term “conflict” lacks a consensus definition (31), we follow other social scientists (32, 33) who define conflict broadly, as characterized by antagonistic relations or interactions, or behavioral opposition, respectively, between two or more social entities. This broad definition includes harassment or antagonism from a high-power or high-status person aimed at a person with lower power or status (i.e., bullying), but also conflict between or among people with relatively balanced levels of social power and status.

Within many middle and secondary schools in the United States, student conflict is part of the schools’ behavioral climate; that is, conflict is widespread and persistent (34, 35). In contrast to claims that conflict is driven by a minority group of student “bullies” (36), evidence suggests a majority of students contribute to conflicts at their school (37), and these conflicts persist over time because of cyclical patterns of offense and retaliation (38).

Student conflict, and in particular bullying, has recently attracted research and policy attention as online social media have brought face-to-face student conflicts into adult view (34, 39). New laws and school policies have been introduced to improve school climate, along with many school programs targeting students’ character and empathy.
However, basic research illustrates that students perceive social constraints on reporting or intervening in peer conflict (40). That is, students may perpetuate and tolerate conflict not because of their personal character or level of empathy, but because they perceive conflict behaviors to be typical or desirable: that is, normative within their school’s social network. In such a context, reporting or intervening in peer conflict could be perceived by peers as deviant.
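Operationally, the "social referent" students described above can be identified from peer-nomination data as the nodes receiving the most attention. The sketch below is a hypothetical illustration with made-up nominations, not the study's measurement instrument.

```python
# Toy sketch: identify social referents as the highest in-degree nodes in a nomination network.
import networkx as nx

# Each edge (a, b) means student a nominated student b as someone they spend time with or watch.
nominations = [("s1", "s2"), ("s3", "s2"), ("s4", "s2"), ("s4", "s5"), ("s1", "s5")]
G = nx.DiGraph(nominations)

top_referents = sorted(G.nodes, key=lambda n: G.in_degree(n), reverse=True)[:2]
print("social referents:", top_referents)   # in the study's terms, roughly the top slice by incoming attention
```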

10.
The precise mechanisms by which the information ecosystem polarizes society remain elusive. Focusing on political sorting in networks, we develop a computational model that examines how social network structure changes when individuals participate in information cascades, evaluate their behavior, and potentially rewire their connections to others as a result. Individuals follow proattitudinal information sources but are more likely to first hear and react to news shared by their social ties and only later evaluate these reactions by direct reference to the coverage of their preferred source. Reactions to news spread through the network via a complex contagion. Following a cascade, individuals who determine that their participation was driven by a subjectively “unimportant” story adjust their social ties to avoid being misled in the future. In our model, this dynamic leads social networks to politically sort when news outlets differentially report on the same topic, even when individuals do not know others’ political identities. Observational follow network data collected on Twitter support this prediction: We find that individuals in more polarized information ecosystems lose cross-ideology social ties at a rate that is higher than predicted by chance. Importantly, our model reveals that these emergent polarized networks are less efficient at diffusing information: Individuals avoid what they believe to be “unimportant” news at the expense of missing out on subjectively “important” news far more frequently. This suggests that “echo chambers”—to the extent that they exist—may not echo so much as silence.

By standard measures, political polarization in the American mass public is at its highest point in nearly 50 y (1). The consequences of this fundamental and growing societal divide are potentially severe: High levels of polarization reduce policy responsiveness and have been associated with decreased social trust (2), acceptance of and dissemination of misinformation (3), democratic erosion (4), and in extreme cases even violence (5). While policy divides have traditionally been thought to drive political polarization, recent research suggests that political identity may play a stronger role (6, 7). Yet people’s political identities may be increasingly less visible to those around them: Many Americans avoid discussing and engaging with politics and profess disdain for partisanship (8), and identification as “independent” from the two major political parties is higher than at any point since the 1950s (9). Taken together, these conflicting patterns complicate simple narratives about the mechanisms underlying polarization. Indeed, how macrolevel divisions relate to the preferences, perceptions, and interpersonal interactions of individuals remains a significant puzzle.

A solution to this puzzle is particularly elusive given that many Americans, increasingly wary of political disagreement, avoid signaling their politics in discussions and self-presentation and thus lack direct information about the political identities of their social connections (10). However, regardless of individuals’ perceptions about each other, the information ecosystem around them—the collection of news sources available to society—reflects, at least to some degree, the structural divides of the political and economic system (11, 12). Traditional accounts of media-driven polarization have emphasized a direct mechanism: Individuals are influenced by the news they consume (13) but also tend to consume news from outlets that align with their politics (14, 15), thereby reinforcing their views and shifting them toward the extremes (16, 17). However, large-scale behavioral studies have offered mixed evidence of these mechanisms (18, 19), including evidence that many people encounter a significant amount of counter-attitudinal information online (20–22). Furthermore, instead of directly tuning into news sources, individuals often look to their immediate social networks to guide their attention to the most important issues (23–27). Therefore, it is warranted to investigate how the information ecosystem may impact society beyond direct influence on individual opinions.

Here, we examine media-driven polarization as a social process (28) and propose a mechanism—information cascades—by which a polarized information ecosystem can indirectly polarize society by causing individuals to self-sort into emergent homogeneous social networks even when they do not know others’ political identities. Information cascades, in which individuals observe and adopt the behavior of others, allow the actions of a few individuals to quickly propagate through a social network (29, 30). Found in social systems ranging from fish schools (31) and insect swarms (32) to economic markets (33) and popular culture (29), information cascades are a widespread social phenomenon that can greatly impact collective behavior such as decision making (34).
Online social media platforms are especially prone to information cascades since the primary affordances of these services involve social networking and information sharing (35–38): For example, users often see and share posts of social connections without ever reading the source material (e.g., a shared news article) (39). In addition to altering beliefs and behavior, information cascades can also affect social organization: For instance, retweet cascades on Twitter lead to bursts of unfollowing and following activity (40) that indicate sudden shifts in social connections as a direct result of information spreading through the social network. While research so far has been agnostic as to the content of the information shared during a cascade, it is plausible that information from partisan news outlets could create substantial changes in networks of individuals.

We therefore propose that the interplay between network-altering cascades and an increasingly polarized information ecosystem could result in politically sorted social networks, even in the absence of partisan cues. While we do not argue that this mechanism is the only driver of political polarization—a complex phenomenon likely influenced by several factors—we do argue that the interplay between information and social organization could be one driver that is currently overlooked in discussions of political polarization. We explore this proposition by developing a general theoretical model. After presenting the model, we use Twitter data to probe some of its predictions. Finally, we use the model to explore how the emergence of politically sorted networks might alter information diffusion.
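The cascade-then-rewire dynamic can be sketched as a toy agent-based simulation. This is an illustration of the general mechanism, not the authors' model code: a threshold ("complex contagion") cascade spreads participation through the network, and afterwards agents who judge the story subjectively unimportant drop the tie to the neighbor that recruited them; the threshold values, network, and rewiring rule are all assumptions made for the sketch.

```python
# Toy sketch: threshold cascades followed by tie rewiring by "misled" participants.
import random
import networkx as nx

rng = random.Random(0)
G = nx.watts_strogatz_graph(200, 8, 0.05, seed=0)
threshold = {n: rng.random() for n in G.nodes}         # each agent's subjective importance threshold

def cascade_and_rewire(G, story_value, seeds, share_frac=0.25):
    shared, recruiter = set(seeds), {}
    changed = True
    while changed:                                     # complex contagion: enough neighbors must share
        changed = False
        for n in list(G.nodes):
            if n in shared:
                continue
            sharing = [m for m in G.neighbors(n) if m in shared]
            if G.degree(n) and len(sharing) / G.degree(n) >= share_frac:
                shared.add(n)
                recruiter[n] = rng.choice(sharing)
                changed = True
    # Evaluation: agents who find the story unimportant rewire away from the tie that misled them.
    for n, r in recruiter.items():
        if story_value < threshold[n] and G.has_edge(n, r):
            G.remove_edge(n, r)
            G.add_edge(n, rng.choice([m for m in G.nodes if m != n and m != r]))
    return len(shared)

for _ in range(50):                                    # repeated cascades gradually reshape the network
    cascade_and_rewire(G, story_value=rng.random(), seeds=rng.sample(list(G.nodes), 5))
print(G.number_of_edges(), "edges after repeated cascades")
```

Adding two partisan "outlets" that assign different story values to the same events, and giving agents outlet-dependent thresholds, is the kind of extension that would let this sketch exhibit the political sorting described above.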

11.
Human culture, biology, and health were shaped dramatically by the onset of agriculture ∼12,000 y B.P. This shift is hypothesized to have resulted in increased individual fitness and population growth as evidenced by archaeological and population genomic data alongside a decline in physiological health as inferred from skeletal remains. Here, we consider osteological and ancient DNA data from the same prehistoric individuals to study human stature variation as a proxy for health across a transition to agriculture. Specifically, we compared “predicted” genetic contributions to height from paleogenomic data and “achieved” adult osteological height estimated from long bone measurements for 167 individuals across Europe spanning the Upper Paleolithic to Iron Age (∼38,000 to 2,400 B.P.). We found that individuals from the Neolithic were shorter than expected (given their individual polygenic height scores) by an average of −3.82 cm relative to individuals from the Upper Paleolithic and Mesolithic (P = 0.040) and −2.21 cm shorter relative to post-Neolithic individuals (P = 0.068), with osteological vs. expected stature steadily increasing across the Copper (+1.95 cm relative to the Neolithic), Bronze (+2.70 cm), and Iron (+3.27 cm) Ages. These results were attenuated when we additionally accounted for genome-wide genetic ancestry variation: for example, with Neolithic individuals −2.82 cm shorter than expected on average relative to pre-Neolithic individuals (P = 0.120). We also incorporated observations of paleopathological indicators of nonspecific stress that can persist from childhood to adulthood in skeletal remains into our model. Overall, our work highlights the potential of integrating disparate datasets to explore proxies of health in prehistory.

The agricultural revolution—beginning ∼12,000 B.P. in the Fertile Crescent zone (1, 2) and then spreading (3–5) or occurring independently (6, 7) across much of the inhabited planet—precipitated profound changes to human subsistence, social systems, and health. Seemingly paradoxically, the agricultural transition may have presented conflicting biological benefits and costs for early farming communities (8, 9). Specifically, demographic reconstructions from archaeological and population genetic records suggest that the agricultural transition led to increased individual fitness and population growth (6, 10–12), likely due in part to new food production and storage capabilities. Yet, bioarchaeological analyses of human skeletal remains from this cultural period suggest simultaneous declines in individual physiological well-being and health, putatively from 1) nutritional deficiency and/or 2) increased pathogen loads as a function of greater human population densities, sedentary lifestyles, and proximity to livestock (9, 13–18).

To date, anthropologists have used two principal approaches to study health across the foraging-to-farming transition in diverse global regions (13, 19, 20). The first approach involves identifying paleopathological indicators of childhood stress that persist into adult skeletal remains. For example, porotic hyperostosis (porous lesions on the cranial vault) and cribra orbitalia (porosity on the orbital roof) reflect a history of bone marrow hypertrophy or hyperplasia resulting from one or more periods of infection, metabolic deficiencies, malnutrition, and/or chronic disease (21–26). Meanwhile, linear enamel hypoplasia (transverse areas of reduced enamel thickness on teeth) occurs in response to similar childhood physiological stressors (e.g., disease, metabolic deficiencies, malnutrition, weaning) that disrupt enamel formation in the developing permanent dentition (27–30). Broadly, these paleopathological indicators of childhood stress tend to be observed at higher rates among individuals from initial farming communities relative to earlier periods, potentially reflecting their overall “poorer” health (14, 31–36).

A second approach uses skeleton-based estimates of achieved adult stature as a proxy for health during childhood growth and development (37–39). Since stature is responsive to the influences of nutrition and disease burden alongside other factors, relatively short “height-for-age” (or “stunting”) has been used as an indicator of poorer health in both living and bioarchaeological contexts (39–43). When studying the past, individual stature can be estimated from long bone measurements and regression equations (44–47). Using these methods, multiple prior studies have reported a general profile of relatively reduced stature for individuals from early agricultural societies in Europe (15, 48–50), North America (51–53), the Levant (16, 32), and Asia (54, 55). For example, estimated average adult mean statures for early farmers are ∼10 cm shorter relative to those for preceding hunter-gatherers in both western Europe (females, −8 cm; males, −14 cm) (49, 50) and the eastern Mediterranean (females, −11 cm; males, −8 cm) (56).
This pattern is not universal, as a few studies do not report such changes (57, 58); the variation could be informative with respect to identifying potential underlying factors (59).
However, in addition to environmental effects like childhood nutrition and disease, inherited genetic variation can have an outsized impact on terminal stature, with ∼80% of the considerable degree of height variation within many modern populations explainable by heritable genetic variation (60–63). Moreover, migration and gene flow likely accompanied many subsistence shifts in human prehistory. For example, there is now substantial paleogenomic evidence of extensive population turnover across prehistoric Europe (64–69). Therefore, from osteological studies alone, we are unable to quantify the extent to which temporal changes in height reflect variation in childhood health vs. changes/differences in the frequencies of alleles associated with height variation.
In this study, we have performed a combined analysis of ancient human paleogenomic and osteological data where both are available from the same n = 167 prehistoric European individuals representing cultural periods from the Upper Paleolithic (∼38,000 B.P.) to the Iron Age (∼2,400 B.P.). This approach allows us to explore whether “health,” as inferred from the per-individual difference between predicted genetic contributions to height and osteological estimates of achieved adult height, changed over the Neolithic cultural shift to agriculture in Europe. When craniodental elements were preserved and available for analysis (n = 98 of the 167 individuals), we also collected porotic hyperostosis, cribra orbitalia, and linear enamel hypoplasia paleopathological data in order to examine whether patterns of variation between osteological height and genetic contributions to height are explained in part by the presence/absence of these indicators of childhood or childhood-inclusive stress.
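The core comparison can be illustrated with a small regression sketch: model achieved osteological stature as a function of a standardized polygenic height score plus cultural-period indicators, so that the period coefficients capture stature deviations not accounted for by genetics. This is only a schematic of the general approach, not the authors' pipeline; the file name, column names, and covariates below are hypothetical.

```python
# Minimal sketch (not the authors' pipeline): regress achieved osteological
# stature on a standardized polygenic height score plus cultural-period
# indicators, so the period coefficients capture stature deviations that the
# genetic prediction does not explain. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("stature_prs.csv")  # columns: stature_cm, prs_z, sex, period, pc1..pc4
df["period"] = pd.Categorical(
    df["period"],
    categories=["PreNeolithic", "Neolithic", "Copper", "Bronze", "Iron"],
)

# Baseline model: polygenic score and sex only.
m1 = smf.ols("stature_cm ~ prs_z + sex + C(period)", data=df).fit()

# Ancestry-adjusted model: adding genome-wide principal components mimics the
# attenuation reported when ancestry variation is additionally accounted for.
m2 = smf.ols("stature_cm ~ prs_z + sex + pc1 + pc2 + pc3 + pc4 + C(period)",
             data=df).fit()

print(m1.params.filter(like="period"))
print(m2.params.filter(like="period"))
```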

12.
Unlike crystalline atomic and ionic solids, texture development due to crystallographically preferred growth in colloidal crystals is less studied. Here we investigate the underlying mechanisms of the texture evolution in an evaporation-induced colloidal assembly process through experiments, modeling, and theoretical analysis. In this widely used approach to obtain large-area colloidal crystals, the colloidal particles are driven to the meniscus via the evaporation of a solvent or matrix precursor solution where they close-pack to form a face-centered cubic colloidal assembly. Via two-dimensional large-area crystallographic mapping, we show that the initial crystal orientation is dominated by the interaction of particles with the meniscus, resulting in the expected coalignment of the close-packed direction with the local meniscus geometry. By combining with crystal structure analysis at a single-particle level, we further reveal that, at the later stage of self-assembly, however, the colloidal crystal undergoes a gradual rotation facilitated by geometrically necessary dislocations (GNDs) and achieves a large-area uniform crystallographic orientation with the close-packed direction perpendicular to the meniscus and parallel to the growth direction. Classical slip analysis, finite element-based mechanical simulation, computational colloidal assembly modeling, and continuum theory unequivocally show that these GNDs result from the tensile stress field along the meniscus direction due to the constrained shrinkage of the colloidal crystal during drying. The generation of GNDs with specific slip systems within individual grains leads to crystallographic rotation to accommodate the mechanical stress. The mechanistic understanding reported here can be utilized to control crystallographic features of colloidal assemblies, and may provide further insights into crystallographically preferred growth in synthetic, biological, and geological crystals.

As an analogy to atomic crystals, colloidal crystals are highly ordered structures formed by colloidal particles with sizes ranging from 100 nm to several micrometers (1–6). In addition to engineering applications such as photonics, sensing, and catalysis (4, 5, 7, 8), colloidal crystals have also been used as model systems to study some fundamental processes in statistical mechanics and mechanical behavior of crystalline solids (9–14). Depending on the nature of interparticle interactions, many equilibrium and nonequilibrium colloidal self-assembly processes have been explored and developed (1, 4). Among them, evaporation-induced colloidal self-assembly presents a number of advantages, such as large-size fabrication, versatility, and cost and time efficiency (3–5, 15–18). In a typical synthesis where a substrate is immersed vertically or at an angle into a colloidal suspension, the colloidal particles are driven to the meniscus by the evaporation-induced fluid flow and subsequently self-assemble to form a colloidal crystal with the face-centered cubic (fcc) lattice structure and the close-packed {111} plane parallel to the substrate (2, 3, 19–23) (see Fig. 1A for a schematic diagram of the synthetic setup).
Fig. 1. Evaporation-induced coassembly of colloidal crystals. (A) Schematic diagram of the evaporation-induced colloidal coassembly process. “G”, “M”, and “N” refer to “growth,” “meniscus,” and “normal” directions, respectively. The reaction solution contains silica matrix precursor (tetraethyl orthosilicate, TEOS) in addition to colloids. (B) Schematic diagram of the crystallographic system and orientations used in this work. (C and D) Optical image (Top Left) and scanning electron micrograph (SEM) (Bottom Left) of a typical large-area colloidal crystal film before (C) and after (D) calcination. (Right) SEM images of select areas (yellow rectangles) at different magnifications. Corresponding fast-Fourier transform (see Inset in Middle in C) shows the single-crystalline nature of the assembled structure. (E) The 3D reconstruction of the colloidal crystal (left) based on FIB tomography data and (right) after particle detection. (F) Top-view SEM image of the colloidal crystal with crystallographic orientations indicated.
While previous research has focused on utilizing the assembled colloidal structures for different applications (4, 5, 7, 8), considerably less effort has been directed toward understanding the self-assembly mechanism itself in this process (17, 24). In particular, despite using the term “colloidal crystals” to highlight the microstructures’ long-range order, an analogy to atomic crystals, little is known regarding the crystallographic evolution of colloidal crystals in relation to the self-assembly process (3, 22, 25). The underlying mechanisms for the puzzling—yet commonly observed—phenomenon of the preferred growth along the close-packed <110> direction in evaporation-induced colloidal crystals are currently not understood (3, 25–29). The <110> growth direction has been observed in a number of processes with a variety of particle chemistries, evaporation rates, and matrix materials (3, 25–28, 30), hinting at a universal underlying mechanism.
This behavior is particularly intriguing as the colloidal particles are expected to close-pack parallel to the meniscus, which should lead to the growth along the <112> direction and perpendicular to the <110> direction (16, 26, 31).
Preferred growth along specific crystallographic orientations, also known as texture development, is commonly observed in crystalline atomic solids in synthetic systems, biominerals, and geological crystals. While current knowledge recognizes mechanisms such as the oriented nucleation that defines the future crystallographic orientation of the growing crystals and competitive growth in atomic crystals (32–34), the underlying principles for texture development in colloidal crystals remain elusive. Previous hypotheses based on orientation-dependent growth speed and solvent flow resistance are inadequate to provide a universal explanation for different evaporation-induced colloidal self-assembly processes (3, 25–29). A better understanding of the crystallographically preferred growth in colloidal self-assembly processes may shed new light on the crystal growth in atomic, ionic, and molecular systems (35–37). Moreover, mechanistic understanding of the self-assembly processes will allow more precise control of the lattice types, crystallography, and defects to improve the performance and functionality of colloidal assembly structures (38–40).
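The geometric claim above is easy to verify numerically: within the close-packed {111} plane of an fcc lattice, a <110>-type close-packed row and a <112>-type direction can be mutually perpendicular, so particles packing parallel to the meniscus would naively be expected to grow along <112>. A minimal check (illustration only):

```python
# Quick geometric check of the orientation argument: within the close-packed
# (111) plane of an fcc crystal, a <110> direction and a <112> direction can be
# mutually perpendicular, so a close-packed row aligned with the meniscus
# implies growth along <112>.
import numpy as np

n111 = np.array([1, 1, 1])        # normal of the close-packed plane
d110 = np.array([1, -1, 0])       # a close-packed <110> direction
d112 = np.array([1, 1, -2])       # a <112> direction

def angle_deg(a, b):
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(angle_deg(d110, d112))      # 90.0 -> perpendicular in-plane directions
print(d110 @ n111, d112 @ n111)   # 0 0  -> both lie in the (111) plane
```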

13.
Human brains flexibly combine the meanings of words to compose structured thoughts. For example, by combining the meanings of “bite,” “dog,” and “man,” we can think about a dog biting a man, or a man biting a dog. Here, in two functional magnetic resonance imaging (fMRI) experiments using multivoxel pattern analysis (MVPA), we identify a region of left mid-superior temporal cortex (lmSTC) that flexibly encodes “who did what to whom” in visually presented sentences. We find that lmSTC represents the current values of abstract semantic variables (“Who did it?” and “To whom was it done?”) in distinct subregions. Experiment 1 first identifies a broad region of lmSTC whose activity patterns (i) facilitate decoding of structure-dependent sentence meaning (“Who did what to whom?”) and (ii) predict affect-related amygdala responses that depend on this information (e.g., “the baby kicked the grandfather” vs. “the grandfather kicked the baby”). Experiment 2 then identifies distinct, but neighboring, subregions of lmSTC whose activity patterns carry information about the identity of the current “agent” (“Who did it?”) and the current “patient” (“To whom was it done?”). These neighboring subregions lie along the upper bank of the superior temporal sulcus and the lateral bank of the superior temporal gyrus, respectively. At a high level, these regions may function like topographically defined data registers, encoding the fluctuating values of abstract semantic variables. This functional architecture, which in key respects resembles that of a classical computer, may play a critical role in enabling humans to flexibly generate complex thoughts.
Yesterday, the world’s tallest woman was serenaded by 30 pink elephants. The previous sentence is false, but perfectly comprehensible, despite the improbability of the situation it describes. It is comprehensible because the human mind can flexibly combine the meanings of individual words (“woman,” “serenade,” “elephants,” etc.) to compose structured thoughts, such as the meaning of the aforementioned sentence (1, 2). How the brain accomplishes this remarkable feat remains a central, but unanswered, question in cognitive science.
Given the vast number of sentences we can understand and produce, it would be implausible for the brain to allocate individual neurons to represent each possible sentence meaning. Instead, it is likely that the brain employs a system for flexibly combining representations of simpler meanings to compose more complex meanings. By “flexibly,” we mean that the same meanings can be combined in many different ways to produce many distinct complex meanings. How the brain flexibly composes complex, structured meanings out of simpler ones is a matter of long-standing debate (3–10).
At the cognitive level, theorists have held that the mind encodes sentence-level meaning by explicitly representing and updating the values of abstract semantic variables (3, 5) in a manner analogous to that of a classical computer. Such semantic variables correspond to basic, recurring questions of meaning such as “Who did it?” and “To whom was it done?” On such a view, the meaning of a simple sentence is partly represented by filling in these variables with representations of the appropriate semantic components. For example, “the dog bit the man” would be built out of the same semantic components as “the man bit the dog,” but with a reversal in the values of the “agent” variable (“Who did it?”) and the “patient” variable (“To whom was it done?”).
Whether and how the human brain does this remains unknown.
Previous research has implicated a network of cortical regions in high-level semantic processing. Many of these regions surround the left sylvian fissure (11–19), including regions of the inferior frontal cortex (13, 14), inferior parietal lobe (12, 20), much of the superior temporal sulcus and gyrus (12, 15, 21), and the anterior temporal lobes (17, 20, 22). Here, we describe two functional magnetic resonance imaging (fMRI) experiments aimed at understanding how the brain (in these regions or elsewhere) flexibly encodes the meanings of sentences involving an agent (“Who did it?”), an action (“What was done?”), and a patient (“To whom was it done?”).
First, experiment 1 aims to identify regions that encode structure-dependent meaning. Here, we search for regions that differentiate between pairs of visually presented sentences, where these sentences convey different meanings using the same words (as in “man bites dog” and “dog bites man”). Experiment 1 identifies a region of left mid-superior temporal cortex (lmSTC) encoding structure-dependent meaning. Experiment 2 then asks how the lmSTC represents structure-dependent meaning. Specifically, we test the long-standing hypothesis that the brain represents and updates the values of abstract semantic variables (3, 5): here, the agent (“Who did it?”) and the patient (“To whom was it done?”). We search for distinct neural populations in lmSTC that encode these variables, analogous to the data registers of a computer (5).
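The flavor of the decoding analyses can be sketched as follows: a cross-validated linear classifier is trained on voxel patterns from an lmSTC region of interest to predict, for example, the identity of the agent on each trial. This is a generic MVPA sketch with placeholder data, not the authors' actual pipeline or preprocessing.

```python
# Generic ROI-based MVPA sketch (not the authors' pipeline). X holds one voxel
# pattern per trial from an lmSTC mask, y codes the agent noun on that trial,
# and runs labels the scan run for leave-one-run-out cross-validation.
# All arrays here are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 96, 200, 8
X = rng.normal(size=(n_trials, n_voxels))      # placeholder voxel patterns
y = rng.integers(0, 4, size=n_trials)          # placeholder agent labels (4 nouns)
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
acc = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print("mean decoding accuracy:", acc.mean())   # compare against 1/4 chance
```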

14.
Sorghum is a drought-tolerant crop with a vital role in the livelihoods of millions of people in marginal areas. We examined genetic structure in this diverse crop in Africa. On the continent-wide scale, we identified three major sorghum populations (Central, Southern, and Northern) that are associated with the distribution of ethnolinguistic groups on the continent. The codistribution of the Central sorghum population and the Nilo-Saharan language family supports a proposed hypothesis about a close and causal relationship between the distribution of sorghum and languages in the region between the Chari and the Nile rivers. The Southern sorghum population is associated with the Bantu languages of the Niger-Congo language family, in agreement with the farming-language codispersal hypothesis as it has been related to the Bantu expansion. The Northern sorghum population is distributed across early Niger-Congo and Afro-Asiatic language family areas with dry agroclimatic conditions. At a finer geographic scale, the genetic substructure within the Central sorghum population is associated with language-group expansions within the Nilo-Saharan language family. A case study of the seed system of the Pari people, a Western-Nilotic ethnolinguistic group, provides a window into the social and cultural factors involved in generating and maintaining the continent-wide diversity patterns. The age-grade system, a cultural institution important for the expansive success of this ethnolinguistic group in the past, plays a central role in the management of sorghum landraces and continues to underpin the resilience of their traditional seed system.Sorghum [Sorghum bicolor (L.) Moench] is a drought-tolerant C4 crop of major importance for food security in Africa (1, 2). The grain crop has played a fundamental role in adaptation to environmental change in the Sahel since the early Holocene, when the Sahara desert was a green homeland for Nilo-Saharan groups pursuing livelihoods based on hunting or herding of cattle and wild grain collecting (3, 4). The earliest archaeological evidence of human sorghum use is dated 9100–8900 B.P., and the seeds were excavated together with cattle bones, lithic artifacts, and pottery from a site close to the current border between Egypt and Sudan (5, 6). The timing of the domestication of cattle and sorghum remains contested due to limited archaeological evidence, but, at some point, the livelihoods in this region transformed from hunting and gathering into agropastoralism. Sorghum cultivation in combination with cattle herding was a successful livelihood adaptation to the dry grassland ecology, and, eventually, as the climate changed and the Sahel moved south, the agropastoral adaptation spread over large parts of the Central African steppes (7).Recent molecular work on sorghum diversity (813) stands on the shoulders of J. R. Harlan and others’ work from the 1960s–1980s. Diversity of sorghum types, varieties, and races has been related to movement of people, disruptive selection, geographic isolation, gene flow from wild to cultivated plants, and recombination of these types in different environments (2, 14, 15). On the basis of morphology, Harlan and de Wet (16) classified sorghum into five basic and 10 intermediary botanical races (16). The race “bicolor” has small elongated grains, and, because of the “primitive” morphology, it is considered the progenitor of more derived races (16, 17). 
The race “guinea” has open panicles well adapted to high rainfall areas, and it is proposed that the “guinea margaritiferum” type from West Africa represents an independent domestication (10, 12). The race “kafir” is associated with the Bantu agricultural tradition, and the race “durra” is considered well-adapted to the dryland agricultural areas along the Arabic trade routes from West Africa to India (14). The fifth race, “caudatum,” is characterized by “turtle-backed” grains, and Stemler et al. (ref. 17, p. 182) proposed that “the distribution of caudatum sorghums and Chari-Nile–speaking peoples coincide so closely that a causal relationship seems probable.” This hypothesis is considered plausible on the basis of historical linguistics, but it remains to be tested by independent evidence (3). The hypothesis is a specific version of the interdisciplinary “farming-language codispersal hypothesis,” which proposes that farming and language families have moved together through population growth and migration (18, 19).
The role of cultural selection and adaptation has been documented in many studies of domestication and translocation of crops (20, 21). The literature on the role of farmers’ management in maintaining and enhancing genetic resources (22–26) is relevant to understanding how patterns of diversity visible at large spatial scales are caused by evolutionary processes operating at finer scales. On-farm management of crop varieties and cultural boundaries influencing the diffusion of seeds, practices, and knowledge are important local-scale explanatory factors behind patterns of regional and continental scale associations between ethnolinguistic groups and crop genetic structure (27–30).
Knowledge on the role of social, cultural, and environmental factors in structuring crop diversity is important to assess the resilience of rural livelihoods in the face of global environmental change. Impact studies project that anthropogenic climate change will negatively affect sorghum yields in Sub-Saharan Africa (31, 32). Such projections pose questions about the availability of appropriate genetic resources and the ability of both breeding programs and local seed systems to develop the required adaptations in a timely manner (33, 34). Insight into local seed systems can contribute to more sustainable development assistance efforts aimed at building resilience in African agriculture in the face of climate change and human insecurity (25, 35).
Here, we present a study of geographic patterns in African sorghum diversity and its associations with the distribution of ethnolinguistic groups. First, we evaluate the proposed farming-language codispersal hypothesis by genotyping sorghum accessions from a continent-wide diversity panel (36). Second, to elucidate the local level mechanisms involved in generating and maintaining this diversity, we present a case study of the sorghum seed system of a group of descendants of the first Nilo-Saharan sorghum cultivators, the Pari people in South Sudan. By comparing accessions collected in 1983 with seeds sampled from the same villages in 2010 and 2013, we assess the resilience of the traditional Pari seed system during a period of civil war and climatic stress. We draw on environmental, linguistic, and anthropological evidence to understand the role of geographic, ecological, historical, and cultural factors in shaping sorghum genetic structure.
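A common way to summarize continent-wide genetic structure of this kind is to reduce a SNP genotype matrix with principal component analysis and then group accessions into a small number of clusters. The sketch below is illustrative only (the input file and the choice of three clusters are assumptions for the example), not the study's actual workflow.

```python
# Illustrative sketch (not the authors' workflow): summarize genetic structure
# in a SNP genotype matrix with PCA and a simple k-means clustering into three
# groups, analogous to the Central/Southern/Northern sorghum populations.
# `genotypes.npy` (accessions x SNPs, coded 0/1/2) is a hypothetical input.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

G = np.load("genotypes.npy").astype(float)

# Center and scale each SNP by its allele-frequency variance.
p = G.mean(axis=0) / 2.0
keep = (p > 0.01) & (p < 0.99)                    # drop (near-)monomorphic SNPs
Z = (G[:, keep] - 2 * p[keep]) / np.sqrt(2 * p[keep] * (1 - p[keep]))

pcs = PCA(n_components=10).fit_transform(Z)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(labels))                        # accessions per inferred population
```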

15.
Recent studies on electronic communication records have shown that human communication has complex temporal structure. We study how communication patterns that involve multiple individuals are affected by attributes such as sex and age. To this end, we represent the communication records as a colored temporal network where node color is used to represent individuals’ attributes, and identify patterns known as temporal motifs. We then construct a null model for the occurrence of temporal motifs that takes into account the interaction frequencies and connectivity between nodes of different colors. This null model allows us to detect significant patterns in call sequences that cannot be observed in a static network that uses interaction frequencies as link weights. We find sex-related differences in communication patterns in a large dataset of mobile phone records and show the existence of temporal homophily, the tendency of similar individuals to participate in communication patterns beyond what would be expected on the basis of their average interaction frequencies. We also show that temporal patterns differ between dense and sparse neighborhoods in the network. Because this result, too, is independent of interaction frequencies, it can be seen as an extension of Granovetter’s hypothesis to temporal networks.
Social networks have been studied since the early 20th century, and their significance to the performance and well-being of individuals is now widely recognized (1, 2). The availability of electronic communication records—data on mobile phone calls, e-mails, tweets, and messages in social networking sites—has, however, created unprecedented opportunities for studying social networks (3, 4), allowing the analysis of human interaction networks at the societal scale (5–7), studying their mesoscale structure (8), and carrying out experiments with tens of millions of subjects (9).
Communication records are typically studied by constructing an “aggregate network” where the nodes correspond to people, edges denote their relations as inferred from the communication data, and tie strengths are accounted for by edge weights representing communication frequency. Although this approach has been immensely successful, it disregards all information contained in the detailed timings of communication events. As an example, individuals who appear highly connected in the aggregate network might only interact with a small number of acquaintances at a time (10).
Human communication has been shown to have rich temporal structure (11–13), and one of the challenges of computational social science is to understand this rich behavior. Although temporal inhomogeneities can be partially attributed to circadian and weekly patterns (12, 14), detailed analysis has shown that they have more fundamental origins (13, 15–17). Human communication is known to be intrinsically bursty (11, 13, 18, 19) and contain strong pairwise correlations of interaction times (13).
“Homophily” refers to the well-documented tendency of individuals to interact with others similar to them with respect to various social and demographic factors (20–22). Because social networks act as conduits of information, homophily limits the information that individuals can receive. Although sex homophily is known to be less strong than homophily by age, race, or education (22), sex-related differences in communication have been documented at least in instant messaging (23), Facebook (24), and the use of both domestic (25) and mobile phones (26).
However, not much is known about patterns involving multiple individuals, or the influence of factors such as sex or age on communication patterns. This is the focus of the present article.
Increased awareness about the importance of temporal information in various empirical datasets has led to the emergence of the concept of “temporal networks,” a general framework for studying time-dependent interactions between nodes (27). Here, we study communication patterns of multiple individuals within this framework. We represent communication records as a “colored” temporal network where node colors are used to refer to individuals’ attributes. We then identify “temporal motifs” in these data to summarize their mesoscale temporal structure (28) and develop a null model that identifies differences between the relative occurrence of node colors in temporal motifs so that these differences are independent of the structure of the aggregate network. This choice of null model assures that all results presented in this article are independent of any previous results obtained by studying static communication networks where link weights correspond to communication frequency.
Using a large dataset of mobile phone calls, we find significant differences in the occurrence of mesoscale communication patterns. We identify “temporal homophily,” overrepresentation of temporal patterns that contain similar nodes beyond that predicted by the structure of the aggregate network. By using event colors in addition to node colors, we also find consistent and robust differences between events occurring in dense and sparse neighborhoods of the aggregate network. Because this result is independent of the aggregate network, it can be seen as a temporal extension of Granovetter’s hypothesis about the correlation of local density and edge weights (29).
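The logic of comparing colored temporal motifs against a frequency-preserving reference can be illustrated with a toy example: count two-call chains (u calls v, then v calls w within a time window), keyed by the participants' attributes, and compare the observed counts with counts obtained after shuffling event times while keeping every caller–callee pair and its call count fixed. This is one simple null model chosen for illustration, not necessarily the construction used in the article.

```python
# Toy sketch of colored temporal motifs with a reference model that keeps the
# aggregate network (pairs and call counts) fixed while shuffling timestamps.
# Events: (time, caller, callee); colors: a node attribute such as sex.
import random
from collections import Counter

events = [(1, "a", "b"), (5, "b", "c"), (9, "a", "b"), (12, "b", "c"), (20, "c", "d")]
color = {"a": "F", "b": "M", "c": "F", "d": "F"}
DT = 10  # maximum gap for two calls to form a "chain" motif u -> v -> w

def chain_motifs(evts):
    """Count 2-event chains u->v followed by v->w within DT, keyed by node colors."""
    evts = sorted(evts)
    counts = Counter()
    for i, (t1, u, v) in enumerate(evts):
        for t2, x, w in evts[i + 1:]:
            if t2 - t1 > DT:
                break
            if x == v:                      # second call is made by the first callee
                counts[(color[u], color[v], color[w])] += 1
    return counts

observed = chain_motifs(events)

# Null: permute timestamps across events, preserving every (caller, callee) pair.
null_totals = Counter()
R = 1000
for _ in range(R):
    times = [t for t, _, _ in events]
    random.shuffle(times)
    shuffled = [(t, u, v) for t, (_, u, v) in zip(times, events)]
    null_totals.update(chain_motifs(shuffled))

for key, obs in observed.items():
    print(key, "observed:", obs, "null mean:", null_totals[key] / R)
```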

16.
Value is a foundational concept in reinforcement learning and economic choice theory. In these frameworks, individuals choose by assigning values to objects and learn by updating values with experience. These theories have been instrumental for revealing influences of probability, risk, and delay on choices. However, they do not explain how values are shaped by intrinsic properties of the choice objects themselves. Here, we investigated how economic value derives from the biologically critical components of foods: their nutrients and sensory qualities. When monkeys chose nutrient-defined liquids, they consistently preferred fat and sugar to low-nutrient alternatives. Rather than maximizing energy indiscriminately, they seemed to assign subjective values to specific nutrients, flexibly trading them against offered reward amounts. Nutrient–value functions accurately modeled these preferences, predicted choices across contexts, and accounted for individual differences. The monkeys’ preferences shifted their daily nutrient balance away from dietary reference points, contrary to ecological foraging models but resembling human suboptimal eating in free-choice situations. To identify the sensory basis of nutrient values, we developed engineering tools that measured food textures on biological surfaces, mimicking oral conditions. Subjective valuations of two key texture parameters—viscosity and sliding friction—explained the monkeys’ fat preferences, suggesting a texture-sensing mechanism for nutrient values. Extended reinforcement learning and choice models identified candidate neuronal mechanisms for nutrient-sensitive decision-making. These findings indicate that nutrients and food textures constitute critical reward components that shape economic values. Our nutrient-choice paradigm represents a promising tool for studying food–reward mechanisms in primates to better understand human-like eating behavior and obesity.

The concept of “value” plays a fundamental role in behavioral theories that formalize learning and decision-making. Economic choice theory examines whether individuals behave as if they assigned subjective values to goods, which are inferred from observable choices (1, 2). In reinforcement learning, values integrate past reward experiences to guide future behavior (3, 4). Although these theories have been critical for revealing how choices depend on factors such as probability, risk, and delay (2, 4, 5), they do not explain how values and preferences are shaped by particular properties of the choice objects themselves. Why do we like chocolate, and why do some individuals like chocolate more than others? In classical economics, one famously does not argue about tastes (6). By contrast, biology conceptualizes choice objects as rewards with well-defined components that benefit survival and reproductive success and endow rewards with value (4). Here we followed this approach to investigate how the biologically critical, intrinsic properties of foods—their nutrients and sensory qualities—influence values inferred from behavioral choices and help explain individual differences in preference.
The reward value of food is commonly thought to derive from its nutrients and sensory properties: sugar and fat make foods attractive because of their sweet taste and rich mouthfeel. Sensory scientists and food engineers seek to uncover rules that link food composition to palatability (7–10). Similarly, ecological foraging theory links animals’ food choices to nutritional quality (11). By contrast, in behavioral and neuroscience experiments, food components are often only manipulated to elicit choice variation but rarely studied in their own right. Here, we aimed to empirically ground the value concept in the constitutive properties of food rewards. We combined a focus on specific nutrients and food qualities with well-controlled repeated-choice paradigms from behavioral neurophysiology and studied the choices of rhesus monkeys (Macaca mulatta) for nutrient-defined liquid rewards.
Like humans, macaques are experts in scrutinizing rewards for sophisticated, value-guided decision-making (4, 12–15). This behavioral complexity, the closeness of the macaque brain’s sensory and reward systems to those of humans (16), and the suitability for single-neuron recordings make macaques an important model for studying food–reward mechanisms with relevance to human eating behavior and obesity (17).
Previous studies in macaques uncovered key reward functions and their neuronal implementations, including the assignment of values to choice options (13, 18–25), reinforcement learning (4, 26) and reward dependence on satiety and thirst (7, 27, 28). Despite these advances, behavioral principles for nutrient rewards in macaques remain largely uncharacterized. The typical diet of these primates includes a broad variety of foods and nutrient compositions (29, 30). Their natural feeding conditions require adaptation to both short-term and seasonal changes in nutrient availability and ecologically diverse habitats (31, 32). Thus, the macaque reward system should be specialized for flexible, nutrient-directed food choices. Accordingly, we manipulated the fat and sugar content of liquid food rewards to study their effects on macaques’ choices. We addressed several aims.
First, we tested whether macaques’ choices were sensitive to the nutrient composition of rewards, consistent with the assignment of subjective values.
In previous studies, macaques showed subjective trade-offs between flavored liquid rewards (12, 13). We hypothesized that nutrients and nutrient-correlated sensory qualities constitute the intrinsic food properties that shape such preferences. We focused on macronutrients (carbohydrates, fats, and proteins), specifically sugar and fat, because of their relevance for human overeating and obesity, and their role in determining sensory food qualities. As nutrients are critical for survival and well-being, nonsated macaques should prefer foods high in nutrient content. In addition, like humans, they may individually prefer specific nutrients and sensory qualities (e.g., valuing isocaloric sweet taste over fat-like texture). Because nutrients are basic building blocks of foods, establishing an animal’s “nutrient–value function” could enable food choice predictions across contexts.
Second, to identify a physical, sensory basis for nutrient preferences, we developed engineering tools to measure nutrient-dependent food textures on biological surfaces that mimicked oral conditions. Although sugar is directly sensed by taste receptors (33), the mechanism for oral fat-sensing remains unclear. While the existence of a “fat taste” in primates is debated (34), substantial evidence points to a somatosensory, oral–texture mechanism (7, 9). Fat-like textures reliably elicit fatty, creamy mouthfeel (8) and activate neural sensory and reward systems in macaques (35) and humans (36, 37). Two distinct texture parameters are implicated in fat-sensing: viscosity and sliding friction, reflecting a food’s thickness and lubricating properties, respectively (38–40). We hypothesized that these parameters mediate the influence of fat content on choices.
Third, we compared the monkeys’ choices to ecologically relevant dietary reference points. In optimal foraging theory (41), animals maximize energy as a common currency for choices (“energy maximization”). Alternatively, animals may balance the intake of different nutrients (“nutrient balancing”) (42–44) or choose food based on the reward value of specific sensory and nutrient components (“nutrient reward”) (7, 45). We evaluated these strategies in a repeated-choice paradigm suited for neurophysiological recordings and derived hypotheses about the neuronal mechanisms for nutrient-sensitive decision-making (e.g., “energy-tracking neurons” versus “nutrient–value neurons”—Discussion).
Finally, based on our behavioral data, we explored in computational simulations how theories of reinforcement learning and economic choice can be extended by a nutrient–value function. Together with recently proposed homeostatic reinforcement learning (46), nutrient-specific model parameters may optimize predictions when choices depend on nutrient composition and homeostatic set-points.
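The idea of a nutrient–value function fitted to choices can be sketched with a simple logistic choice model in which the value of an offer scales with reward amount weighted by fat and sugar terms. The functional form, parameters, and simulated data below are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch of a "nutrient-value function" fitted to binary choices, in the
# spirit of the approach described above (not the authors' exact model). Each
# trial offers two liquids described by (amount, fat, sugar); choices follow a
# logistic rule on the value difference. All data here are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def value(offer, w_fat, w_sugar):
    amount, fat, sugar = offer
    return amount * (1.0 + w_fat * fat + w_sugar * sugar)

def neg_log_lik(params, offers_a, offers_b, choices):
    w_fat, w_sugar, temp = params
    va = np.array([value(o, w_fat, w_sugar) for o in offers_a])
    vb = np.array([value(o, w_fat, w_sugar) for o in offers_b])
    p_a = 1.0 / (1.0 + np.exp(-(va - vb) / max(temp, 1e-6)))
    p = np.where(choices == 1, p_a, 1.0 - p_a)
    return -np.sum(np.log(np.clip(p, 1e-12, 1.0)))

# Simulate an animal that weights fat twice as strongly as sugar.
offers_a = rng.uniform([0.2, 0.0, 0.0], [1.0, 0.1, 0.1], size=(500, 3))
offers_b = rng.uniform([0.2, 0.0, 0.0], [1.0, 0.1, 0.1], size=(500, 3))
true_w_fat, true_w_sugar, true_temp = 8.0, 4.0, 0.05
va = np.array([value(o, true_w_fat, true_w_sugar) for o in offers_a])
vb = np.array([value(o, true_w_fat, true_w_sugar) for o in offers_b])
p_a = 1.0 / (1.0 + np.exp(-(va - vb) / true_temp))
choices = (rng.uniform(size=500) < p_a).astype(int)

fit = minimize(neg_log_lik, x0=[1.0, 1.0, 0.1],
               args=(offers_a, offers_b, choices),
               bounds=[(0, 50), (0, 50), (1e-3, 5)])
print("fitted (w_fat, w_sugar, temperature):", fit.x)
```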

17.
Although open databases are an important resource in the current deep learning (DL) era, they are sometimes used “off label”: Data published for one task are used to train algorithms for a different one. This work aims to highlight that this common practice may lead to biased, overly optimistic results. We demonstrate this phenomenon for inverse problem solvers and show how their biased performance stems from hidden data-processing pipelines. We describe two processing pipelines typical of open-access databases and study their effects on three well-established algorithms developed for MRI reconstruction: compressed sensing, dictionary learning, and DL. Our results demonstrate that all these algorithms yield systematically biased results when they are naively trained on seemingly appropriate data: The normalized rms error improves consistently with the extent of data processing, showing an artificial improvement of 25 to 48% in some cases. Because this phenomenon is not widely known, biased results sometimes are published as state of the art; we refer to that as implicit “data crimes.” This work hence aims to raise awareness regarding naive off-label usage of big data and reveal the vulnerability of modern inverse problem solvers to the resulting bias.

Public databases are an important driving force in the current deep learning (DL) revolution; ImageNet (1) is a well-known example. However, due to the growing availability of open-access data and the general hype around artificial intelligence, databases are sometimes used in an “off-label” manner: Data published for one task are used for different ones. Here we aim to show that such naive and seemingly appropriate usage of open-access data could lead to biased, overly optimistic results.
Biased performance of machine-learning models due to faulty construction of data cohorts or research pipelines recently has been identified for various tasks, including gender classification (2), COVID-19 prediction (3), and natural language processing (4). However, to the best of our knowledge, it has not been studied for inverse problem solvers. We address this gap by highlighting scenarios that lead to biased performance of algorithms developed for image reconstruction from undersampled MRI measurements; the latter is a real-world example of an inverse problem and a current frontier of DL research (5–13).
The MRI measurements are fundamentally acquired in the Fourier domain, which is known as k-space. Sub-Nyquist sampling is commonly applied to shorten the traditionally lengthy MRI scan time, and image reconstruction algorithms are used to recover images from the undersampled data (14–17). Therefore, the development of such algorithms ideally should be done using raw k-space data. However, the development of DL methods requires thousands of examples, and databases containing raw k-space data are scarce. To date, only a few databases offer such data (for example, refs. 18–22), whereas many more offer reconstructed and processed magnetic resonance (MR) images (for example, refs. 23–30). The latter offer images for postreconstruction tasks, such as segmentation and biomarker discovery. Nevertheless, due to their availability, they often are downloaded and used to synthesize “raw” k-space data using the forward Fourier transform; the synthesized data are then used for the development of reconstruction algorithms. We identified that this common approach could lead to undesirable consequences; the underlying cause is that the nonraw MR images are commonly processed using hidden pipelines. These pipelines, which are implemented by commercial scanner software or during database storage, include a full set or a subset of the following steps: image reconstruction, filtering, storage of magnitude data only (i.e., loss of the MRI complex values), lossy compression, and conversion to Digital Imaging and Communications in Medicine (DICOM) or Neuroimaging Informatics Technology Initiative (NIFTI) formats. These reduce the data entropy. We aim to highlight that when modern algorithms are trained and evaluated using such data, they benefit from the data processing and, hence, tend to exhibit overly optimistic results compared to performance on raw, unprocessed data. Because this phenomenon is largely unknown, such biased results are sometimes published as state of the art without reporting the processing pipelines or addressing their effects. To raise community awareness of this growing problem, we coin the term “data crimes” to describe such publications, in reference to the more obvious “inverse crime” scenario (31) described next.
Bias stemming from the underlying data has been recognized previously in a few scenarios related to inverse problems.
The term inverse crime describes a scenario in which an algorithm is tested using simulated data, and the simulation resonates with the algorithm such that it leads to improved results (31–35). Specifically, the authors of ref. 34 described an inverse crime as a situation where the same discrete model is used for simulating k-space measurements and reconstructing an MR image from them. They showed that compared with reconstruction from raw or analytically computed measurements, this leads to reduced ringing artifacts. A second example is evaluation of MRI reconstruction algorithms on real-valued magnitude images. In this case, k-space exhibits conjugate symmetry; hence, it is sufficient to use only about half of it for full image recovery. This symmetry often is leveraged in partial Fourier methods such as Homodyne (15) and projection onto convex sets (36), where additional steps are applied for recovery of the full complex data. However, neglecting the fact that the data are complex valued results in better conditioning due to the lower dimensionality of the inverse problem. This may lead to an obvious advantage when evaluating algorithms on such data as opposed to raw k-space data. However, to the best of our knowledge, inverse crimes have not been studied yet in the context of machine learning or public data usage.
Here we report on two subtle forms of algorithmic bias that have not been described in the literature yet and that are relevant to the current DL era. We show how they arise from two hidden data-processing pipelines that affect many open-access MRI databases: a commercial scanner pipeline and a JPEG data storage pipeline. To demonstrate these scenarios, we took raw MRI data and “spoiled” them with carefully controlled processing steps. We then used the processed datasets for training and evaluation of algorithms from three well-established MRI reconstruction frameworks: compressed sensing (CS) with a wavelet transform (37), dictionary learning (DictL) (38), and DL (39). Our experiments demonstrate that these algorithms yield overly optimistic results when trained and evaluated on processed data.
The main contributions of this work are fivefold. First, we reveal scenarios in which algorithmic bias of inverse problem solvers may arise from off-label usage of open-access databases and analyze them through large-scale statistics. Second, we find that CS, DictL, and DL algorithms are all prone to this form of subtle bias. While recent studies identified stability issues of MRI reconstruction algorithms (5, 40), here we identify a common vulnerability of canonical algorithms to data-related bias. Third, we demonstrate the potentially harmful impact of data crimes by showing that methods trained on processed data but applied to unprocessed data yield lower-quality image reconstruction in real-world scenarios. Fourth, our experiments reveal limited generalization ability of the studied algorithms. Finally, by introducing the concept of data crimes, we hope to raise community awareness of the growing problem of bias stemming from off-label usage of open-access data.
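One ingredient of these data crimes is easy to demonstrate: k-space synthesized from a magnitude-only (e.g., DICOM or NIFTI) image is conjugate symmetric, so roughly half of its samples are redundant and the reconstruction problem becomes better conditioned than for raw complex data. A small, self-contained sketch with synthetic arrays:

```python
# Synthetic demonstration (not the paper's code): a magnitude-only image yields
# conjugate-symmetric "synthesized" k-space, unlike a raw complex-valued image,
# which is one reason algorithms evaluated on such data can look overly good.
import numpy as np

rng = np.random.default_rng(0)
phase = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(64, 64)))
img_complex = rng.random((64, 64)) * phase    # stand-in for a raw complex image
img_magnitude = np.abs(img_complex)           # what DICOM/NIFTI files typically store

def conj_symmetry_error(k):
    """Relative mismatch between k-space and its point-reflected complex conjugate."""
    reflected = np.conj(np.roll(np.flip(k, axis=(0, 1)), shift=(1, 1), axis=(0, 1)))
    return np.linalg.norm(k - reflected) / np.linalg.norm(k)

k_raw = np.fft.fft2(img_complex)              # raw-like k-space
k_synth = np.fft.fft2(img_magnitude)          # "synthesized" k-space from magnitude data

print("raw complex image:   ", conj_symmetry_error(k_raw))    # order 1
print("magnitude-only image:", conj_symmetry_error(k_synth))  # ~1e-16
```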

18.
19.
Across the tree of life, organisms modify their local environment, rendering it more or less hospitable for other species. Despite the ubiquity of these processes, simple models that can be used to develop intuitions about the consequences of widespread habitat modification are lacking. Here, we extend the classic Levins metapopulation model to a setting where each of n species can colonize patches connected by dispersal, and when patches are vacated via local extinction, they retain a “memory” of the previous occupant—modeling habitat modification. While this model can exhibit a wide range of dynamics, we draw several overarching conclusions about the effects of modification and memory. In particular, we find that any number of species may potentially coexist, provided that each is at a disadvantage when colonizing patches vacated by a conspecific. This notion is made precise through a quantitative stability condition, which provides a way to unify and formalize existing conceptual models. We also show that when patch memory facilitates coexistence, it generically induces a positive relationship between diversity and robustness (tolerance of disturbance). Our simple model provides a portable, tractable framework for studying systems where species modify and react to a shared landscape.

Many interactions between species are realized indirectly, through effects on a shared environment. For example, consumers compete indirectly by altering resource availability (1, 2). However, the ways that species affect and are affected by their environment extend far beyond the consumption of resources. Across the tree of life and over a tremendous range of spatial scales, organisms make complex and sometimes substantial changes to the physical and chemical properties of their local environment (3–6). Many species also impact local biotic factors; for example, plant–soil feedbacks are often driven by changes in soil microbiome composition (4, 7–9).
Numerous studies have recognized and discussed the ways that such changes can mediate interactions between species, as well as the obstacles to modeling these complex, indirect interactions (5, 7, 10–12). In some instances, the effects of environmental modification by one species on another can be accounted for implicitly in models of direct interactions (2, 13, 14) or within the well-established framework of resource competition (12, 15). But in many other cases, new modeling approaches are necessary.
Because the range of ecosystems where interactions are driven by environmental modification is wide and varied, many parallel strands of theory have developed for them. Examples include “traditional” ecosystem engineers (16–20), plant–soil feedbacks (4, 7, 21), and chemically mediated interactions between microbes (5, 12). Similar dynamics underlie Janzen–Connell effects, where individuals (e.g., tropical trees) modify their local environment by supporting high densities of natural enemies (8, 22–24), and immune-mediated pathogen competition, where pathogen strains modify their hosts by inducing specific immunity (25–28). These last two examples highlight that environmental modification might be “passive,” in the sense that it is generated by the environment itself.
While each of these systems has attracted careful study, it is difficult to elucidate general principles for the dynamics of environmentally mediated interactions without a simple, shared theoretical framework. Are there generic conditions for the coexistence of many species in these systems? What are typical relationships between diversity and ecosystem productivity or robustness? We especially lack theoretical expectations for high-diversity communities, as most existing models focus on the dynamics of one or two species (4, 7, 16, 17).
To begin answering these questions, we introduce and analyze a flexible model for species interactions mediated by environmental modification. Two essential features of these interactions—which underlie the difficulty integrating them into standard ecological theory—are that environmental modifications are localized in space and persistent in time (10). To capture these aspects, we adopt the metapopulation framework, introduced by Levins (29), which provides a minimal model for population dynamics with distinct local and global scales. Metapopulation models underpin a productive and diverse body of theory in ecology (30, 31), including various extensions to study multispecies communities (32, 33). Here, we adopt the simplest such extension, by assuming zero-sum dynamics and an essentially horizontal community (34, 35).
Our modeling framework accommodates lasting environmental modification by introducing a versatile notion of “patch memory,” in which the state of local sites depends on past occupants.
In line with evidence from a range of systems, we find that patch memory can support the robust coexistence of any number of species, even in an initially homogeneous landscape. We derive quantitative conditions for species’ coexistence and show how they connect to existing conceptual models. Importantly, these conditions apply even as several model assumptions are relaxed. We also investigate an emergent relationship between species diversity and robustness, demonstrating that our modeling framework can provide insight for a variety of systems characterized by localized environmental feedbacks.
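To make the patch-memory idea concrete, the toy simulation below extends a two-species Levins-type model so that vacant patches remember their last occupant and conspecific recolonization is penalized. The functional form and parameter values are illustrative assumptions, not the paper's equations; they simply show the qualitative point that a conspecific disadvantage can stabilize coexistence.

```python
# Toy sketch of a two-species Levins-type metapopulation with "patch memory":
# vacant patches remember their last occupant, and recolonization by the same
# species is penalized. Illustrative assumptions only, not the paper's model.
import numpy as np
from scipy.integrate import solve_ivp

c = np.array([1.0, 0.9])          # colonization rates of species 1 and 2
e = 0.2                           # local extinction rate
# m[i, j]: colonization weight of species i on a vacancy last held by species j
# (column 0 = pristine patches). Diagonal (conspecific) entries are lowest.
m = np.array([[1.0, 0.3, 1.0],
              [1.0, 1.0, 0.3]])

def rhs(t, y):
    p = y[:2]                     # occupied fractions per species
    v = y[2:]                     # vacant fractions: [pristine, memory-1, memory-2]
    col = c * p                   # colonization pressure of each species
    dp = col * (m @ v) - e * p
    dv = np.zeros(3)
    dv[0] = -(col @ m[:, 0]) * v[0]               # pristine patches only get used up
    dv[1] = e * p[0] - (col @ m[:, 1]) * v[1]     # vacancies with memory of species 1
    dv[2] = e * p[1] - (col @ m[:, 2]) * v[2]     # vacancies with memory of species 2
    return np.concatenate([dp, dv])

y0 = [0.05, 0.05, 0.90, 0.0, 0.0]
sol = solve_ivp(rhs, (0, 500), y0)
print("long-run occupancy (species 1, species 2):", sol.y[:2, -1])
```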

20.