Similar Documents
 Found 20 similar documents (search time: 18 ms)
1.
A recent poll showed that most people think of science as technology and engineering—life-saving drugs, computers, space exploration, and so on. This was, in fact, the promise of the founders of modern science in the 17th century. It is less commonly understood that social and behavioral sciences have also produced technologies and engineering that dominate our everyday lives. These include polling, marketing, management, insurance, and public health programs.

2.
The thoughts and behaviors of financial market participants depend upon adopted cultural traits, including information signals, beliefs, strategies, and folk economic models. Financial traits compete to survive in the human population and are modified in the process of being transmitted from one agent to another. These cultural evolutionary processes shape market outcomes, which in turn feed back into the success of competing traits. This evolutionary system is studied in an emerging paradigm, social finance. In this paradigm, social transmission biases determine the evolution of financial traits in the investor population. The paradigm considers an enriched set of cultural traits, both selection on traits and mutation pressure, and market equilibrium at different frequencies. Other key ingredients of the paradigm include psychological bias, social network structure, information asymmetries, and institutional environment.

3.
Human history is written in both our genes and our languages. The extent to which our biological and linguistic histories are congruent has been the subject of considerable debate, with clear examples of both matches and mismatches. To disentangle the patterns of demographic and cultural transmission, we need a global systematic assessment of matches and mismatches. Here, we assemble a genomic database (GeLaTo, or Genes and Languages Together) specifically curated to investigate genetic and linguistic diversity worldwide. We find that most populations in GeLaTo that speak languages of the same language family (i.e., that descend from the same ancestor language) are also genetically highly similar. However, we also identify nearly 20% mismatches in populations genetically close to linguistically unrelated groups. These mismatches, which occur within the time depth of known linguistic relatedness up to about 10,000 y, are scattered around the world, suggesting that they are a regular outcome in human history. Most mismatches result from populations shifting to the language of a neighboring population that is genetically different because of independent demographic histories. In line with the regularity of such shifts, we find that only half of the language families in GeLaTo are genetically more cohesive than expected under spatial autocorrelations. Moreover, the genetic and linguistic divergence times of population pairs match only rarely, with Indo-European standing out as the family with most matches in our sample. Together, our database and findings pave the way for systematically disentangling demographic and cultural history and for quantifying processes of shifts in language and social identities on a global scale.

There are numerous conceptual parallels between the processes of genetic and linguistic evolution (1) (here referred to for simplicity as “genes and languages”). In his book On the Origin of Species, Darwin went a step further and boldly proposed that the parallels were more than just conceptual. Famously, he claimed that “a perfect pedigree of mankind…would afford the best classification of the various languages now spoken throughout the world” (2, p. 422). The pioneering work of Cavalli-Sforza and Sokal in the 1980s appeared to provide substantial support for Darwin’s claim. The critical evidence for this claim was that a global phylogeny of human populations showed some broad matches with a global language tree (3–5). Genetic and linguistic differentiation processes also appeared to mirror each other on a continental scale in Europe (6, 7). Matches of this kind can result from local codiffusion processes and can be amplified by large-scale population expansions. According to the farming/language dispersal hypothesis, migrations fueled by the shift toward agriculture and animal husbandry in the Holocene have given rise to some of the largest language families identifiable today (8, 9). Notable examples of major language family spreads accompanied by substantial demographic expansions include the Bantu migration in sub-Saharan Africa and the Austronesian peopling of the Pacific. In both cases, genetic and phylolinguistic inference support a broad match of genetic and linguistic histories (10, 11).

In line with this research tradition, research on gene–language associations has tended to emphasize matches between genes and languages and to disregard mismatches as exceptions to the norm. However, regional case studies have repeatedly identified instances where languages and genes clearly do not match (12–15). Mismatches arise if a population adopts another language without (or with only minimal) genetic admixture, or if a population assimilates genetically with a neighboring one without changing its language. For example, Hungarian speakers in central Europe have little or no genetic trace associated with the Siberian origin of their language (16), and Damara speakers in southern Africa have no genetic ties to their linguistically related Nama neighbors (17). While populations necessarily retain the genetic makeup of their ancestors, they can shift to other languages at any time, because speakers can learn new languages throughout their lifespan. Some authors have taken a more extreme position, arguing that language shift has been so pervasive in shaping contemporary linguistic diversity that an association between genes and languages is the exception rather than the rule (18).

However, the claims that either matches or mismatches are the norm are premature. Rather than more cherry-picked examples, what is needed is a systematic assessment of matches and mismatches on a global scale. To accomplish this task, we introduce a global database of gene–language associations: GeLaTo (or Genes and Languages Together), a large, high-resolution genomic resource designed for multidisciplinary research on human cultural and linguistic diversity. We use GeLaTo to address the following questions: How frequent are mismatches between genes and languages? Which scenarios can shape match and mismatch profiles? How genetically cohesive are language families? Within language families, do linguistic and genetic histories reflect the same temporal processes?
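A minimal way to make "genetic cohesion of a language family" testable is a permutation test: are populations speaking languages of the same family genetically closer than a random relabeling would suggest? The sketch below uses a toy symmetric genetic-distance matrix and invented family labels; it ignores the spatial autocorrelation the authors control for, and none of the data come from GeLaTo itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def family_cohesion(gen_dist, families):
    """Mean genetic distance among populations of the same language family."""
    same = families[:, None] == families[None, :]
    np.fill_diagonal(same, False)
    return gen_dist[same].mean()

def cohesion_pvalue(gen_dist, families, n_perm=10_000):
    """Permutation test: is within-family genetic distance smaller than chance?"""
    observed = family_cohesion(gen_dist, families)
    null = np.array([
        family_cohesion(gen_dist, rng.permutation(families))
        for _ in range(n_perm)
    ])
    return observed, (null <= observed).mean()

# Toy example: 6 populations with a symmetric FST-like distance matrix
gen_dist = rng.random((6, 6))
gen_dist = (gen_dist + gen_dist.T) / 2
families = np.array(["IE", "IE", "IE", "Bantu", "Bantu", "Uralic"])
obs, p = cohesion_pvalue(gen_dist, families)
print(f"within-family distance={obs:.3f}, p={p:.3f}")
```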

4.
What are the origins of humans’ capacity to represent social relations? We approached this question by studying human infants’ understanding of social dominance as a stable relation. We presented infants with interactions between animated agents in conflict situations. Studies 1 and 2 targeted expectations of stability of social dominance. They revealed that 15-mo-olds (and, to a lesser extent, 12-mo-olds) expect an asymmetric relationship between two agents to remain stable from one conflict to another. To do so, infants need to infer that one of the agents (the dominant) will consistently prevail when her goals conflict with those of the other (the subordinate). Studies 3 and 4 targeted the format of infants’ representation of social dominance. In these studies, we found that 12- and 15-mo-olds did not extend their expectations of dominance to unobserved relationships, even when they could have been established by transitive inference. These results suggest that infants’ expectation of stability originates from their representation of social dominance as a relationship between two agents rather than as an individual property. Infants’ demonstrated understanding of social dominance reflects the cognitive underpinning of humans’ capacity to represent social relations, which may be evolutionarily ancient, and may be shared with nonhuman species.

5.
The social network maintained by a focal individual, or ego, is intrinsically dynamic and typically exhibits some turnover in membership over time as personal circumstances change. However, the consequences of such changes on the distribution of an ego’s network ties are not well understood. Here we use a unique 18-mo dataset that combines mobile phone calls and survey data to track changes in the ego networks and communication patterns of students making the transition from school to university or work. Our analysis reveals that individuals display a distinctive and robust social signature, captured by how interactions are distributed across different alters. Notably, for a given ego, these social signatures tend to persist over time, despite considerable turnover in the identity of alters in the ego network. Thus, as new network members are added, some old network members either are replaced or receive fewer calls, preserving the overall distribution of calls across network members. This is likely to reflect the consequences of finite resources such as the time available for communication, the cognitive and emotional effort required to sustain close relationships, and the ability to make emotional investments.

Social relationships play an important functional role in human society both at the collective level and by providing benefits to individuals. In particular, it appears that having strong and supportive relationships, characterized by closeness and emotional intensity, is essential for health and well-being in both humans and other primates (1, 2). At the same time, there is a higher cost to maintaining closer relationships, reflected in the amount of effort required to maintain a relation at the desired level of emotional closeness. Because of this, the number of emotionally intense relationships is typically small. Moreover, it has been suggested that ego networks, the sets of ties individuals (egos) have to their friends and family (alters), may be subject to more general constraints associated with limits on human abilities to interact with large numbers of alters (3–5). Although there are obvious constraints on the time available for interactions (5–7), additional constraints may also arise through limits on memory capacity (3, 8) or other cognitive abilities (9, 10).

Irrespective of the specific mechanisms that act to constrain ego networks, it is reasonable to ask whether such mechanisms shape these networks in similar ways under different circumstances, giving rise to some characteristic features that persist over time despite network turnover. Here, we explore this question with a detailed analysis of the communication patterns within ego networks in an empirical setting that results in large membership turnover and changes in the closeness of relationships. In particular, we focus on the way that egos divide their communication efforts among alters and how persistent the observed patterns are over time. We call these patterns, which may be expected to vary across individuals, social signatures.

Over the last decade, research on human communication has been given a significant boost by the widespread adoption of new communication technologies. The popularity of communication channels such as mobile phones and online environments has made it possible to capture microlevel quantitative data on human interactions automatically, in a way that circumvents biases inherent in retrospective self-reports (11). However, studies using electronic communication sources typically lack information on the nature of the social relationships (5, 12–15), whereas the challenge in using survey data alone has been that these give detailed information about the nature of the social relationships but lack quantitative information about the actual patterns of communication (16). Further, in surveys the respondent burden of recording communication events with an entire ego network is very high (17), and people’s accuracy in recalling detailed communication events is known to be limited (18).

We combine detailed, autorecorded data from mobile phone call records with survey data. These were collected during a study (19) that tracked changes in the ego networks of 24 students over 18 mo as they made the transition from school to university or work (details in Materials and Methods). These changes in personal circumstances result in a period of flux for the social relationships of the participants, with many alters both leaving and entering their networks. This provides a unique setting for studying network-level structure and its response to major changes in social circumstances. This dataset combines detailed data on communication patterns from mobile phone call records with questionnaire data that explore participants’ own perceptions of the quality of their relationships with all of the members of their network. More importantly, the call record data contain complete time-stamped records of all calls made by egos to alters in their network, rather than just a subset of calls egos make to alters who happened to be on the same mobile network as they are (as has usually been the case in previous work, e.g., ref. 12). The questionnaires that augmented the call records provide information on the networks of participants that includes assessments of emotional closeness, time between face-to-face contact, and the phone numbers of alters. This allowed the call records of alters with several phone numbers (mobile phones, landlines) to be merged, giving a more accurate picture of communication between two individuals than one based on mobile phone calls alone.

These data enable us to uncover changes in the structure of the ego networks of the participants, reflected in their communication behavior. We find a consistent pattern that persists over time even when there is large network turnover. This social signature is consistent with previously observed patterns of social network site use (20, 21) and text messaging (22, 23) in that a high proportion of communication is focused on a small number of alters. A detailed analysis of the social signatures of individual participants reveals individual variation in the exact way their limited communication time is allocated across their network members. Although individual signatures show some response to network turnover, they surprisingly retain much of their distinctive variation over time despite this turnover.
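The social signature itself is a simple object: the fraction of an ego's calls that goes to the 1st-, 2nd-, ..., kth-ranked alter. Below is a minimal sketch with invented call logs; comparing two intervals with the Jensen-Shannon distance is one plausible way to quantify signature persistence. The alter names and counts are made up, and this is not the authors' exact pipeline.

```python
import numpy as np
from collections import Counter
from scipy.spatial.distance import jensenshannon

def social_signature(calls, n_ranks=20):
    """Fraction of an ego's calls going to each rank-ordered alter."""
    counts = np.array(sorted(Counter(calls).values(), reverse=True), dtype=float)
    sig = np.zeros(n_ranks)
    k = min(n_ranks, len(counts))
    sig[:k] = counts[:k]
    return sig / sig.sum()

# Toy call logs for one ego in two consecutive intervals; note the alter
# turnover ("fay" and "gus" replace some earlier contacts).
interval1 = ["anna"] * 40 + ["ben"] * 25 + ["cara"] * 10 + ["dev"] * 5 + ["eli"] * 2
interval2 = ["anna"] * 38 + ["fay"] * 27 + ["ben"] * 9 + ["gus"] * 4 + ["cara"] * 2

s1, s2 = social_signature(interval1), social_signature(interval2)
print("signature distance despite turnover:", jensenshannon(s1, s2))
```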

6.
The emergence of complex cultural practices in simple hunter-gatherer groups poses interesting questions on what drives social complexity and what causes the emergence and disappearance of cultural innovations. Here we analyze the conditions that underlie the emergence of artificial mummification in the Chinchorro culture in the coastal Atacama Desert in northern Chile and southern Peru. We provide empirical and theoretical evidence that artificial mummification appeared during a period of increased coastal freshwater availability and marine productivity, which caused an increase in human population size and accelerated the emergence of cultural innovations, as predicted by recent models of cultural and technological evolution. Under a scenario of increasing population size and extreme aridity (with little or no decomposition of corpses), a simple demographic model shows that dead individuals may have become a significant part of the landscape, creating the conditions for the manipulation of the dead that led to the emergence of complex mortuary practices.
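The demographic argument can be illustrated with a few lines of arithmetic: if corpses do not decompose, the number of visible dead is the running sum of deaths, which tracks population size. The sketch below uses logistic growth and invented parameter values; it illustrates the reasoning, not the paper's actual model.

```python
def visible_dead(years=500, N0=200, K=2000, r=0.02, mortality=0.03, decay=0.0):
    """Accumulated corpses on the landscape under logistic population growth.

    decay=0 mimics extreme Atacama aridity (no decomposition); a humid
    climate would correspond to a decay rate near 1. All values invented."""
    N, dead = float(N0), 0.0
    trajectory = []
    for _ in range(years):
        deaths = mortality * N
        dead = dead * (1.0 - decay) + deaths   # corpses persist when decay is low
        N += r * N * (1 - N / K)               # logistic population growth
        trajectory.append((N, dead))
    return trajectory

traj = visible_dead()
print(f"living={traj[-1][0]:.0f}, visible dead={traj[-1][1]:.0f}")
```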

7.
Background: This study determined the effects of rational emotive occupational health coaching on the management of work stress among academic staff of science and social science education in south east Nigerian universities. Method: A randomized controlled trial design was adopted, with a sample of 63 participants randomized into an intervention group (n = 32) and a control group (n = 31). The Occupational Stress Index and the Perceived Stress Scale were used for data collection. The intervention program was administered for 12 weeks, after which a posttest was administered, followed by a 2-month follow-up measure. Mixed-design repeated-measures analysis of variance was used to determine within-group and between-group effects. Results: The groups did not differ significantly at baseline, and the nonintervention group did not change over time in its management of work stress. The mean stress of the intervention group, however, decreased more over time than that of the control group. Conclusion: Rational emotive occupational health coaching had significant effects on the management of work stress among academic staff of science and social science education.
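For readers who want to reproduce this style of analysis, a mixed-design (split-plot) ANOVA can be run on long-format data with one within-subject factor (time) and one between-subject factor (group). The sketch below simulates data matching the study's design; all numbers are invented, and pingouin's mixed_anova is one convenient implementation, not necessarily the tool the authors used.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Simulated long-format data mirroring the design: 63 subjects, 3 time points
# (baseline, posttest, follow-up), 2 groups. All values are invented.
rows = []
for sid in range(63):
    group = "intervention" if sid < 32 else "control"
    base = rng.normal(70, 8)
    for t, drop in [("baseline", 0), ("posttest", 18), ("followup", 20)]:
        effect = drop if group == "intervention" else 0
        rows.append((sid, group, t, base - effect + rng.normal(0, 5)))
df = pd.DataFrame(rows, columns=["id", "group", "time", "stress"])

# Mixed-design ANOVA: within = time, between = group
print(pg.mixed_anova(data=df, dv="stress", within="time",
                     between="group", subject="id"))
```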

8.
The quest to identify materials with tailored properties is increasingly expanding into high-order composition spaces, with a corresponding combinatorial explosion in the number of candidate materials. A key challenge is to discover regions in composition space where materials have novel properties. Traditional predictive models for material properties are not accurate enough to guide the search. Herein, we use high-throughput measurements of optical properties to identify novel regions in three-cation metal oxide composition spaces by identifying compositions whose optical trends cannot be explained by simple phase mixtures. We screen 376,752 distinct compositions from 108 three-cation oxide systems based on the cation elements Mg, Fe, Co, Ni, Cu, Y, In, Sn, Ce, and Ta. Data models for candidate phase diagrams and three-cation compositions with emergent optical properties guide the discovery of materials with complex phase-dependent properties, as demonstrated by the discovery of a Co-Ta-Sn substitutional alloy oxide with tunable transparency, catalytic activity, and stability in strong acid electrolytes. These results required close coupling of data validation to experiment design to generate a reliable end-to-end high-throughput workflow for accelerating scientific discovery.

Increased incorporation of data science in materials research is anticipated to accelerate discovery of materials with improved properties and combinations thereof for technological applications requiring multifunctional materials (1, 2). Machine learning is one popular approach for building predictive models, but limited materials training data often compromises the prediction accuracy, especially in composition spaces for which no training data are available (3–5). Training data are particularly limited in high-order composition spaces (e.g., oxides with at least three cations), which offer opportunities for tuning multiple properties through formation of a phase, i.e., a crystal structure or substitutional alloy, that contains all three cations. The vast number of potential high-order compositions exceeds current methods of discovery or prediction (6–9), and prediction of substitutional alloy phases and their properties remains a substantial challenge (10, 11).

We develop two data science methods to discover materials in high-order composition spaces. The phase diagram model uses thermodynamic equilibrium assumptions to propose candidate phase diagrams using only optical absorption data. The emergent property model uses the same data to identify compositions whose optical properties cannot be explained by combinations of lower-order compositions of the same elements. The present work additionally describes the design and implementation of the high-throughput workflow that provides data to these models, as well as an example use case for guiding discovery. Our primary finding is that appropriately constructed data science models can make inferences about the phase behavior of complex materials using data that are not traditionally used for phase characterization. These inferences add scientific value to existing datasets and guide materials discovery efforts.

We demonstrate this approach for three-cation oxide systems via high-throughput experiments coupled to automated quality control and modeling of spectral microscopy data. The selected three-cation oxide compositions whose properties appear unique compared to lower-order oxide compositions are then candidates for more expensive and time-consuming structural and functional characterization. This approach is distinct from computational inverse design, wherein a model predicts a material to have a specific property, a promising strategy that is hampered by the dual challenges of computational prediction of experimental properties and computational generation of synthesizable materials (12, 13). Our approach shifts the strategy from identifying materials with a specific property to rapidly screening materials that may be exceptional for any property. By releasing the database of experiments and analyses alongside this work, we aim to accelerate the community’s selection of composition spaces and compositions therein for discovery of materials exhibiting a broad range of properties (14).

Discovering complex phases with desirable properties, whether by experiment or computation, is highly challenging due to the combinatorics of composition spaces. Searching the Materials Project (15) for entries containing oxygen, having an associated Inorganic Crystal Structure Database entry (16), having unique composition and space group, and excluding inert gas and nonmetallic elements (He, Ne, Ar, Kr, Xe, Rn, C, N, F, P, S, Cl, Se, Br, I, and H) yields 755 one-cation oxide entries from 73 cation elements. Applying the same search to two-cation oxides increases the number of identified materials to 4,345, although the corresponding search for three-cation oxides yields only 3,163 materials. While some two-cation oxide phases undoubtedly remain to be discovered, there has been extensive computational exploration of two-cation oxides, making such materials the focus of recent high-throughput computational (17–19) and machine learning–driven (20–23) materials discovery. Higher-order composition spaces enable further tuning of materials properties, but the expense of comprehensively searching combinatorial spaces is clear when considering three-cation oxides. Using the 73 cation elements from the one-cation oxide entries, there are 62,196 (73 choose 3) possible three-cation oxide composition spaces, yet only 2,205 are represented in the Materials Project, leaving over 96% of the composition spaces with no existing data.

The computational exploration of three-cation phases to date has focused on crystal structures where each cation has a unique crystallographic site. The site substitution of multiple elements on a single crystallographic site is a distinguishing feature of metal substitutional alloys, and a metal oxide structure exhibiting such substitutions on cation sites is referred to herein as a substitutional alloy oxide (or “alloy” for brevity). A three-cation oxide can crystallize in a structure observed in the one- or two-cation subspaces, and the composition-tuned decoration of the cation sublattice constitutes an opportunity for tuning properties in the three-cation composition space. Since the site substitution is disordered, large unit cells combined with ensemble averaging over different random site decorations are required to explicitly model substitutional alloys. While approximations for computational modeling of alloys have been developed (24–27), alloys in high-order composition spaces constitute a dramatically underexplored class of materials for discovery efforts. We know from the examples of high-temperature superconductors and catalysis that extremely valuable properties are obtainable via substitutional alloying in high-order composition spaces (8, 28, 29).

We report a high-throughput workflow for discovering candidate compositions for functional properties by coupling high-throughput synthesis and optical characterization with automated data interpretation. Parallel optical screening was recently demonstrated as a proxy for phase behavior in the context of combinatorial thermal processing of individual compositions (30). We extend this approach to high-order composition spaces using inkjet printing (31) to deposit composition-gradient lines of material that are subsequently imaged by a purpose-built hyperspectral microscope that measures optical absorption from the infrared to the ultraviolet (UV). We present a dataset consisting of nine channels of optical absorption data for a series of metal oxide composition samples. Each composition sample is defined by the stoichiometry of its cation elements, with oxygen content driven toward equilibrium by calcination at fixed oxygen pressure. The dataset contains 376,752 distinct compositions from 108 three-cation oxide systems based on the cation elements Mg, Fe, Co, Ni, Cu, Y, In, Sn, Ce, and Ta, of which only the Ce-Cu-Fe oxide system has an entry in the Materials Project.

We present a data science workflow incorporating cross-validation and other quality control measures to establish confidence in the data, enabling subsequent data modeling to predict aspects of the underlying phase behavior. In the present work, we discuss models that 1) predict candidate phase diagrams along with the absorption spectrum of each phase (the “phase diagram model”) and 2) predict the likelihood that a three-cation composition space contains a three-cation phase whose properties are distinct from those of one- or two-cation phases (the “emergent property model”). These complementary prediction models are emblematic of the use of data from high-throughput experiments to make inferences that accelerate resource-intensive experiments.

This implementation of data science–driven analysis of experimental data is complementary to quantum mechanical (32) and machine learning (22, 33) prediction of new phases. Detection of interesting systems and compositions via the modeling of optical data can seed an investigation for new phases and/or for determining whether three-cation compositions exhibit exceptional properties. Our approach builds upon a foundation of combinatorial materials science in which synthesis of composition libraries is coupled to measurement of properties of interest (34–45). While providing a direct route to discovery of a desirable property in a specific composition system, this approach limits exploration of many composition spaces due to both the high relative expense of property measurements and the need to measure every composition library for every property of interest. Modeling phase behavior from optical properties to guide further experiments is illustrated herein for the Co-Ta-Sn oxide composition space, for which X-ray diffraction (XRD) experiments verified the discovery of the (Sn,Co,Ta)O2 rutile substitutional alloy oxide. Furthermore, screening of the alloy compositions for electrocatalysis of the oxygen evolution reaction revealed an optimal combination of activity and stability, enabled by the optical discovery despite the lack of any explicit relationship between the optical and catalytic properties. Although demonstrated for three-cation oxides, the methodology is designed to be implementable in even higher-order composition spaces.
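The core idea of the emergent property model can be sketched compactly: fit each measured spectrum as a non-negative mixture of lower-order endmember spectra and flag compositions with large residuals. The toy below uses scipy's non-negative least squares on invented 9-channel spectra; the published model is considerably more elaborate.

```python
import numpy as np
from scipy.optimize import nnls

def emergence_score(spectrum, endmembers):
    """Residual after fitting a measured absorption spectrum as a
    non-negative mixture of lower-order endmember spectra.

    spectrum: (n_channels,) absorption of a three-cation composition
    endmembers: (n_channels, n_endmembers) spectra of one/two-cation phases
    A large residual suggests a phase not explained by simple mixing."""
    coeffs, residual = nnls(endmembers, spectrum)
    return residual, coeffs

rng = np.random.default_rng(2)
endmembers = rng.random((9, 4))                            # 9 channels, 4 known phases
mixture = endmembers @ np.array([0.5, 0.3, 0.2, 0.0])      # plain phase mixture
emergent = mixture + 0.4 * rng.random(9)                   # extra, unexplained signal

print("mixture residual :", emergence_score(mixture, endmembers)[0])
print("emergent residual:", emergence_score(emergent, endmembers)[0])
```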

9.
There are, in mankind, two kinds of heredity: biological and cultural. Cultural inheritance makes possible for humans what no other organism can accomplish: the cumulative transmission of experience from generation to generation. In turn, cultural inheritance leads to cultural evolution, the prevailing mode of human adaptation. For the last few millennia, humans have been adapting the environments to their genes more often than their genes to the environments. Nevertheless, natural selection persists in modern humans, both as differential mortality and as differential fertility, although its intensity may decrease in the future. More than 2,000 human diseases and abnormalities have a genetic causation. Health care and the increasing feasibility of genetic therapy will, although slowly, augment the future incidence of hereditary ailments. Germ-line gene therapy could halt this increase, but at present, it is not technically feasible. The proposal to enhance the human genetic endowment by genetic cloning of eminent individuals is not warranted. Genomes can be cloned; individuals cannot. In the future, therapeutic cloning will bring enhanced possibilities for organ transplantation, nerve cells and tissue healing, and other health benefits.

10.
Collaboration among researchers is an essential component of the modern scientific enterprise, playing a particularly important role in multidisciplinary research. However, we continue to wrestle with allocating credit to the coauthors of publications with multiple authors, because the relative contribution of each author is difficult to determine. At the same time, the scientific community runs an informal field-dependent credit allocation process that assigns credit in a collective fashion to each work. Here we develop a credit allocation algorithm that captures the coauthors’ contribution to a publication as perceived by the scientific community, reproducing the informal collective credit allocation of science. We validate the method by identifying the authors of Nobel-winning papers that are credited for the discovery, independent of their positions in the author list. The method can also compare the relative impact of researchers working in the same field, even if they did not publish together. The ability to accurately measure the relative credit of researchers could affect many aspects of credit allocation in science, potentially impacting hiring, funding, and promotion decisions.

Reflecting the increasing complexity of modern research, in the last decades, collaboration among researchers became a standard path to discovery (1). Collaboration plays a particularly important role in multidisciplinary research that requires expertise from different scientific fields (2). As the number of coauthors of each publication increases, science’s credit system is under pressure to evolve (3–5). For single-author papers, which were the norm decades ago, credit allocation is simple: the sole author gets all of the credit. This rule, accepted since the birth of science, fails for multiauthor papers (6). The lack of a robust credit allocation system that can account for the discrepancy between researchers’ contributions to a particular body of work and the credit they obtain has prompted some to state that “multiple authorship endangers the author credit system” (7). This situation is particularly acute in multidisciplinary research (8, 9), when communities with different credit allocation traditions collaborate (10). Furthermore, a detailed understanding of the rules underlying credit allocation is crucial for an accurate assessment of each researcher’s scientific impact, affecting hiring, funding, and promotion decisions.

Current approaches to allocating scientific credit fall into three main categories. The first views each author of a multiauthor publication as the sole author (11, 12), resulting in inflated scientific impact for publications with multiple authors. This system is biased toward researchers with multiple collaborations or large teams, customary in experimental particle physics or genomics. The second assumes that all coauthors contribute equally to a publication, allocating fractional credit evenly among them (13, 14). This approach ignores the fact that authors’ contributions are never equal and hence dilutes the credit of the intellectual leader. The third allocates scientific credit according to the order or the role of coauthors, interpreting a message agreed on within the respective discipline (15–17). For example, in biology, typically the first and the last author(s) get the lion’s share of the credit, and in some areas of the physical sciences, the author list reflects a decreasing degree of contribution. An extreme case is offered by experimental particle physics, where the author list is alphabetical, making it impossible to interpret the author contributions without exogenous information. Finally, there is an increasing trend to allocate credit based on the specific contribution of each author (18, 19), specified in the contribution declaration required by some journals (20, 21). However, each of these approaches ignores the most important aspect of credit allocation: notwithstanding the agreed-on order, credit allocation is a collective process (22–24), determined by the scientific community rather than by the coauthors or their order on the paper. This phenomenon is clearly illustrated by the 2012 Nobel Prize in Physics, which was awarded on the basis of discoveries reported in publications whose last authors were the laureates (25, 26), whereas the 2007 Nobel Prize in Physics was awarded to the third author of a nine-author paper (27) and the first author of a five-author publication (28). Clearly, the scientific community operates an informal credit allocation system that may not be obvious to those outside the particular discipline.

The leading hypothesis of this work is that information about the informal credit allocation within science is encoded in the detailed citation pattern of the respective paper and of other papers published by the same authors on the same subject. Indeed, each citing paper expresses its perception of the scientific impact of a paper’s coauthors by citing other contributions by them, conveying implicit information about the perceived contribution of each author. Our goal is to design an algorithm that can capture, in a discipline-independent fashion, the way this informal collective credit allocation mechanism develops.
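A stripped-down version of this hypothesis is easy to state in code: an author of a target paper earns credit whenever papers citing the target also cite that author's other work. The sketch below is a toy with invented paper IDs; it is not the paper's actual credit allocation matrix algorithm.

```python
import numpy as np

def credit_shares(target_authors, citing_refs, author_papers):
    """Toy collective credit allocation: each paper citing the target also
    cites other work; an author of the target earns credit in proportion to
    how often that co-cited work is their own.

    target_authors: list of author ids on the target paper
    citing_refs: list of reference lists, one per paper citing the target
    author_papers: dict mapping author id -> set of that author's paper ids"""
    credit = np.zeros(len(target_authors))
    for refs in citing_refs:
        for i, author in enumerate(target_authors):
            credit[i] += len(set(refs) & author_papers[author])
    total = credit.sum()
    if total == 0:
        return np.full(len(target_authors), 1 / len(target_authors))
    return credit / total

# Hypothetical example: papers P1..P9; the target is coauthored by alice and bob.
author_papers = {"alice": {"P1", "P2"}, "bob": {"P3"}}
citing_refs = [["P1", "P2", "P9"], ["P1", "P3"], ["P2"]]
shares = credit_shares(["alice", "bob"], citing_refs, author_papers)
print(dict(zip(["alice", "bob"], shares)))   # alice 0.8, bob 0.2
```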

11.
12.
13.
Citations are important building blocks for status and success in science. We used a linked dataset of more than 4 million authors and 26 million scientific papers to quantify trends in cumulative citation inequality and concentration at the author level. Our analysis, which spans 15 y and 118 scientific disciplines, suggests that a small stratum of elite scientists accrues increasing citation shares and that citation inequality is on the rise across the natural sciences, medical sciences, and agricultural sciences. The rise in citation concentration has coincided with a general inclination toward more collaboration. While increasing collaboration and full-count publication rates go hand in hand for the top 1% most cited, ordinary scientists are engaging in more and larger collaborations over time, but publishing slightly less. Moreover, fractionalized publication rates are generally on the decline, but the top 1% most cited have seen larger increases in coauthored papers and smaller relative decreases in fractional-count publication rates than scientists in the lower percentiles of the citation distribution. Taken together, these trends have enabled the top 1% to extend its share of fractional- and full-count publications and citations. Further analysis shows that top-cited scientists increasingly reside in high-ranking universities in western Europe and Australasia, while the United States has seen a slight decline in elite concentration. Our findings align with recent evidence suggesting intensified international competition and widening author-level disparities in science.

Science is a highly stratified social system. The distribution of scientific rewards is remarkably uneven, and a relatively small stratum of elite scientists enjoys exceptional privileges in terms of funding, research facilities, professional reputation, and influence (1–5). The so-called Matthew effect, well documented in science (6–15), implies that accomplished scientists receive more rewards than their research alone merits, and recent evidence indicates a widening gap between the “haves” and the “have nots” of science in terms of salary levels (5), research funding (16), and accumulation of scientific awards (17).

Inequality may foster creative competition in the science system (18, 19). However, it can also lead to a dense concentration of resources with diminishing returns on investment (intellectual and fiscal) (16, 20), and to monopolies in the marketplace of ideas (21, 22).

The social processes that sort scientists into more or less prestigious strata are complex and multifaceted (1, 10, 23) and may be changing in response to external pressures such as globalization, the advent of new information technologies, and shifts in university governance models (3). However, a few common characteristics have always separated elite scientists from the rest of us, most notably their scientific output and visibility. Publications and citations are critical building blocks for status and success in science (23, 24), and the scientific elite accounts for a large share of what is published and cited.

In 1926, Lotka observed that the publication frequencies of chemists followed an inverse-square distribution, where the number of authors publishing N papers would be ∼1/N² of the number of authors publishing one paper (25). Building on Lotka’s work, de Solla Price later went on to suggest that 50% of all publications were produced by a mere 6% of all scientists (26). More recent research demonstrates even larger disparities in citation distributions at the author level (2, 6, 11, 27, 28), but variations in citation concentration across disciplinary, institutional, and national boundaries remain uncertain. Further, it is unclear whether the observed inequalities in citation shares have intensified over time.

Advances in author-disambiguation methods (29) allow us to investigate these questions on a global scale. We used a linked dataset of 4,042,612 authors and 25,986,133 articles to examine temporal trends in the concentration of citations at the author level, and differences in the degree of concentration across fields, countries, and institutions.

Publication and citation data were retrieved from Clarivate’s Web of Science (WoS). We limited our focus to disciplines within the medical and health sciences, natural sciences, and agricultural sciences, where journal publication is the primary form of scholarly communication (Materials and Methods). We used a disambiguation algorithm to create publication profiles for all authors with five or more publication entries in WoS. The disambiguated dataset allowed us to measure developments in citation concentration from 2000 onward.

Per-author citation impact was measured using field-normalized citation scores (ncs). The ncs is calculated by dividing the raw per-paper citation score by the average citation count of comparable papers published in the same year and subfield. The ncs was rescaled to account for citation inflation, represented here as nics. We report per-author cumulative citation impact based on full and fractional counting. The full counting gives the sum of nics for all papers published by a scientist. The fractional counting also sums the citations accrued by a scientist across all her papers, but divides each per-article citation score by the number of contributors to that paper. We use citation density plots and Gini coefficients to gauge trends in citation imbalance and concentration.
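The Gini coefficient used here is straightforward to compute from sorted per-author citation scores. A minimal sketch with simulated data follows; the Pareto tail is an illustrative stand-in for an author-level citation distribution, not the paper's data.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = equality, near 1 = concentration)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return (2 * np.sum(ranks * x) - (n + 1) * x.sum()) / (n * x.sum())

rng = np.random.default_rng(3)
equal = np.full(1000, 10.0)
skewed = rng.pareto(1.2, 1000)   # heavy-tailed, like cumulative citation counts
print(f"equal: {gini(equal):.2f}, skewed: {gini(skewed):.2f}")
```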

14.
The capacity to collect fingerprints of individuals in online media has revolutionized the way researchers explore human society. Social systems can be seen as a nonlinear superposition of a multitude of complex social networks, where nodes represent individuals and links capture a variety of different social relations. Much emphasis has been put on the network topology of social interactions; however, the multidimensional nature of these interactions has largely been ignored, mostly because of a lack of data. Here, for the first time, we analyze a complete, multirelational, large social network of a society consisting of the 300,000-odd players of a massive multiplayer online game. We extract networks of six different types of one-to-one interactions between the players. Three of them carry a positive connotation (friendship, communication, trade), three a negative one (enmity, armed aggression, punishment). We first analyze these types of networks as separate entities and find that negative interactions differ from positive interactions by their lower reciprocity, weaker clustering, and fatter-tailed degree distribution. We then explore how the interdependence of different network types determines the organization of the social system. In particular, we study correlations and overlap between different types of links and demonstrate the tendency of individuals to play different roles in different networks. As a demonstration of the power of the approach, we present the first empirical large-scale verification of the long-standing structural balance theory, by focusing on the specific multiplex network of friendship and enmity relations.
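Structural balance theory predicts that triangles whose edge signs multiply to a positive value (a friend of a friend is a friend; an enemy of an enemy is a friend) should be overrepresented. Below is a minimal triangle census on a toy signed graph using networkx; the edge data are invented and this is not the paper's estimator.

```python
import networkx as nx
from itertools import combinations

def balance_ratio(G):
    """Fraction of triangles that are structurally balanced in a signed graph
    (edge attribute 'sign' = +1 or -1). A triangle is balanced when the
    product of its edge signs is positive."""
    balanced = total = 0
    triangles = [t for t in nx.enumerate_all_cliques(G) if len(t) == 3]
    for tri in triangles:
        prod = 1
        for u, v in combinations(tri, 2):
            prod *= G[u][v]["sign"]
        total += 1
        balanced += prod > 0
    return balanced / total if total else float("nan")

G = nx.Graph()
G.add_edge("a", "b", sign=+1)   # friendships
G.add_edge("b", "c", sign=+1)
G.add_edge("a", "c", sign=+1)   # triangle a-b-c: all positive, balanced
G.add_edge("a", "d", sign=+1)
G.add_edge("c", "d", sign=-1)   # triangle a-c-d: +,+,- -> unbalanced
print(balance_ratio(G))          # 0.5
```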

15.
Unlike other species, humans cooperate in large, distantly related groups, a fact that has long presented a puzzle to biologists. The pathway by which adaptations for large-scale cooperation among nonkin evolved in humans remains a subject of vigorous debate. Results from theoretical analyses and agent-based simulations suggest that evolutionary dynamics need not yield homogeneous populations, but can instead generate a polymorphic population that consists of individuals who vary in their degree of cooperativeness. These results resonate with the recent increasing emphasis on the importance of individual differences in understanding and modeling behavior and dynamics in experimental games and decision problems. Here, we report the results of laboratory experiments that complement both theory and simulation results. We find that our subjects fall into three types, an individual's type is stable, and a group's cooperative outcomes can be remarkably well predicted if one knows its type composition. Reciprocal types, who contribute to the public good as a positive function of their beliefs about others' contributions, constitute the majority (63%) of players; cooperators and free-riders are also present in our subject population. Despite substantial behavioral differences, earnings among types are statistically identical. Our results support the view that our human subject population is in a stable, polymorphic equilibrium of types.
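One simple, illustrative way to type players is to regress each subject's contribution on his or her elicited beliefs about others' contributions: a positive slope marks a reciprocal type, a flat high profile a cooperator, and a flat low profile a free-rider. The thresholds below are invented and do not reproduce the paper's classification procedure.

```python
import numpy as np

def classify_player(beliefs, contributions, endowment=20, slope_cut=0.3):
    """Heuristic typing in a public-goods game. Thresholds are illustrative."""
    beliefs = np.asarray(beliefs, dtype=float)
    contributions = np.asarray(contributions, dtype=float)
    slope = np.polyfit(beliefs, contributions, 1)[0]  # contribution vs. belief
    mean_c = contributions.mean()
    if mean_c < 0.1 * endowment:
        return "free-rider"
    if slope >= slope_cut:
        return "reciprocal"
    return "cooperator" if mean_c > 0.6 * endowment else "other"

print(classify_player([2, 5, 10, 15], [1, 4, 9, 14]))  # tracks beliefs -> reciprocal
print(classify_player([2, 5, 10, 15], [0, 1, 0, 1]))   # flat and low -> free-rider
```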

16.
Choosing experiments to accelerate collective discovery
A scientist’s choice of research problem affects his or her personal career trajectory. Scientists’ combined choices affect the direction and efficiency of scientific discovery as a whole. In this paper, we infer preferences that shape problem selection from patterns of published findings and then quantify their efficiency. We represent research problems as links between scientific entities in a knowledge network. We then build a generative model of discovery informed by qualitative research on scientific problem selection. We map salient features from this literature to key network properties: an entity’s importance corresponds to its degree centrality, and a problem’s difficulty corresponds to the network distance it spans. Drawing on millions of papers and patents published over 30 years, we use this model to infer the typical research strategy used to explore chemical relationships in biomedicine. This strategy generates conservative research choices focused on building up knowledge around important molecules. These choices become more conservative over time. The observed strategy is efficient for initial exploration of the network and supports scientific careers that require steady output, but is inefficient for science as a whole. Through supercomputer experiments on a sample of the network, we study thousands of alternatives and identify strategies much more efficient at exploring mature knowledge networks. We find that increased risk-taking and the publication of experimental failures would substantially improve the speed of discovery. We consider institutional shifts in grant making, evaluation, and publication that would help realize these efficiencies.

A scientist’s choice of research problem directly affects his or her career. Indirectly, it affects the scientific community. A prescient choice can result in a high-impact study. This boosts the scientist’s reputation, but it can also create research opportunities across the field. Scientific choices are hard to quantify because of the complexity and dimensionality of the underlying problem space. In formal or computational models, problem spaces are typically encoded as simple choices between a few options (1, 2) or as highly abstract “landscapes” borrowed from evolutionary biology (3–5). The resulting insight about the relationship between research choice and collective efficiency is suggestive, but necessarily qualitative and abstract.

We obtain concrete, quantitative insight by representing the growth of knowledge as an evolving network extracted from the literature (2, 6). Nodes in the network are scientific concepts and edges are the relations between them asserted in publications. For example, molecules—a core concept in chemistry, biology, and medicine—may be linked by physical interaction (7) or shared clinical relevance (8). Variations of this network metaphor for knowledge have appeared in philosophy (9), social studies of science (10–12), artificial intelligence (13), complex systems research (14), and the natural sciences (7, 15, 16). Nevertheless, networks have rarely been used to measure scientific content (2, 11, 17, 18) and never to evaluate the efficiency of scientific problem selection.

In this paper, we build a model of scientific investigation that allows us to measure collective research behavior in a large corpus of scientific texts and then compare this inferred behavior with more and less efficient alternatives. We define an explicit objective function to quantify the efficiency of a research strategy adopted by the scientific community: the total number of experiments performed to discover a given portion of an unknown knowledge graph. Comparing the modal pattern of “real-science” investigations with hypothetical alternatives, we identify strategies that appear much more efficient for scientific discovery. We also demonstrate that the publication of experimental failures would increase the speed of discovery. In this analysis, we do not focus on which strategies tend to receive high citations or scientific prizes, although we illustrate the relationship between these accolades and research strategies (2).

Our model represents science as a growing network of scientific claims that traces the accumulation of observations and experiments (see Figs. S1–S3). Earlier scientific choices influence subsequent exploration of the network (19). The addition of one redundant link is inconsequential for the topology of science. By contrast, a well-placed new link could radically rewire this network (20). Our model incorporates two key features of problem selection, importance and difficulty, which have received repeated attention in qualitative and quantitative investigations of science. We map these features onto two network properties, degree and distance, which are central to foundational models of network formation and search (21–23). First, scientists typically select “important,” central, or well-studied topics on which to anchor their findings and signal their relevance to others’ work (10, 24). Our model uses the degree of a concept in the network of claims (i.e., the number of distinct links in which it participates) as a measure of its importance (see Figs. S4–S6). In assuming that scientists’ research choices are influenced by concept degree, we posit that scientists are influenced by the choices of others, a well-attested choice heuristic (25, 26). Second, scientists introduce novelty into their work by studying understudied topics and by combining ideas and technologies that others are unlikely to connect (17, 20). Henri Poincaré (27) and many since (28) have observed that the most generative combinations are “drawn from domains that are far apart” (ref. 27, p. 24). When the concepts under study are more distant, more effort is required to imagine and coordinate their combinations (29). More risk is involved in testing distant claims, because no similar claims have been successful (30).* We operationalize the “cognitive distance” between concepts using their topological distance in the knowledge network. If two concepts are not mutually reachable through the network (i.e., they are in two distinct components of the network), there is no way a scientist could hypothesize a connection simply by wandering through the literature; conceptual jumps must be made. If two molecules are distant in the network but can reach one another (i.e., they are in the same component), scientists would need to read a range of research articles—likely spread across several journals and subfields—to infer a possible connection (32). Drawing together these insights, we model unlikely combinations as connections between neglected (i.e., low-degree), distant, or disconnected concepts within the network of scientific claims.

[Fig. S1: Chemical examples from the published network, including estradiol–cholesterol (hormone therapy trials), RNA–zinc (zinc fingers), bromodeoxyuridine (adult hippocampal cell division), combined HIV antiretrovirals, and protein kinase inhibitor screens.]
[Fig. S3: (A) Distributions of node degrees for chemical pairs in MEDLINE abstracts and in abstracts by prize-winning scientists, with citation intensity highest for uncommon degree–degree combinations; (B) distributions of network distances at time of linking; prize winners combine disconnected molecules significantly more frequently than others.]
[Fig. S4: Annotated generative model, with the probability of investigating a concept pair as a function of the parameters αμ, αι, β, γ, and δ.]
[Fig. S6: Estimated distance preferences over time; both MEDLINE and patents grow more conservative, and patented chemical pairs span shorter distances than those reported in articles.]
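The generative model's two ingredients, degree (importance) and distance (difficulty), can be sketched as a weight over untested concept pairs. The toy below scores pairs by degree product raised to an exponent, times an exponentially decaying function of network distance, with a flat "jump" weight for disconnected pairs; the parameter values and functional forms are illustrative, not the fitted model from the paper.

```python
import numpy as np
import networkx as nx

def pair_weights(G, alpha=1.0, beta=0.5, delta=0.05):
    """Toy degree/distance choice model: the chance of investigating an
    untested concept pair grows with node degree and decays with network
    distance; disconnected pairs get a flat jump weight delta."""
    nodes = list(G.nodes)
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    weights = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if G.has_edge(u, v):
                continue  # an already-published claim
            d = lengths.get(u, {}).get(v)
            dist_term = delta if d is None else np.exp(-beta * d)
            weights[(u, v)] = (G.degree[u] * G.degree[v]) ** alpha * dist_term
    total = sum(weights.values())
    return {pair: w / total for pair, w in weights.items()}

G = nx.path_graph(5)   # a tiny knowledge network of 5 chained concepts
w = pair_weights(G)
print(max(w, key=w.get), "is the most likely next experiment")
```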

17.
Research about HIV constitutes a global domain of academic knowledge. The patterns that structure this domain reflect inequalities in the production and dissemination of knowledge, as well as broader inequalities in geopolitics. Conventional metrics for assessing the value and impact of academic research reveal that “Northern” research remains dominant, while “Southern” research remains peripheral. Southern theory provides a framework for greater critical engagement with knowledge produced by researchers within the global South. With a focus on HIV social science, we show that investigators working in and from Africa have produced and disseminated knowledge fundamental to the global domain of HIV research, and argue that their epistemological contribution may be understood within the framework of Southern theory. Through repurposing a bibliometric measure of citation count, we constitute a new archive of highly cited social science research. With a focus on South Africa, we situate this archive within changing historical contexts, connecting research findings to developments in medicine, health sciences, and politics. We focus on two key themes in the evolution of HIV knowledge: (1) the significance of context and locality — the “setting” of HIV research; and (2) sex, race, and risk — changing ideas about the social determinants of HIV transmission.

18.
In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers of the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely ever to become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.

A straightforward view of scientific progress would suggest more is better. The more papers published in a field, the greater the rate of scientific progress; the more researchers, the more ground covered. Even if not every article is earth shaking in its impact, each can contribute a metaphorical grain of sand to the sandpile, increasing the probability of an avalanche, wherein the scientific landscape is reconfigured and new paradigms arise to structure inquiry (1, 2). The publication of more papers also increases the probability at least one of them contains an important innovation. A disruptive new idea can destabilize the status quo, siphoning attention from previous work and garnering the lion’s share of new citations (3, 4).Policy reflects this more-is-better view. Scholars are evaluated and rewarded on productivity. Publishing many articles within a set period of time is the surest path to tenure and promotion. Quantity remains the measuring stick at the university (5) and the national levels (6), where comparisons focus on the total number of publications, patents, scientists, and dollars spent.“Quality” is also predominantly judged quantitatively. Citation counts are used to measure the importance of individuals (7), teams (8), and journals (9) within a field. At the paper level, the assumption is that the best and most valuable papers will attract more attention, shaping the research trajectory of the field (10).Here, however, we predict that when the number of papers published each year grows very large, the rapid flow of new papers can force scholarly attention to already well-cited papers and limit attention for less-established papers—even those with novel, useful, and potentially transformative ideas. Rather than causing faster turnover of field paradigms, a deluge of new publications entrenches top-cited papers, precluding new work from rising into the most-cited, commonly known canon of the field.These arguments, supported by our empirical analysis, suggest that the scientific enterprise’s focus on quantity may obstruct fundamental progress. This detrimental effect will intensify as the annual mass of publications in each field continues to grow—which is almost inevitable given the entrenched, interlocking structures motivating publication quantity. Policy measures restructuring the scientific production value chain may be required to allow mass attention to concentrate on promising, novel ideas.This study focuses on the effects of field size: The number of papers published in a field in a given year. Previous studies have found that citation inequality is increasing across a range of disciplines (11), at least partially driven by processes of preferential attachment (12, 13). Papers do not always maintain their citation levels and rankings over the years, however. Disruptive papers can eclipse prior work (4) and natural fluctuations in citation numbers can upset rankings (14). We predict that when fields are large, the dynamics change. The most-cited papers become entrenched, garnering disproportionate shares of future citations. New papers cannot rise into canon by amassing citations through processes of preferential attachment. Newly published papers rarely disrupt established scholarship.Two mechanisms underlie these predictions (15). First, when many papers are published within a short period of time, scholars are forced to resort to heuristics to make continued sense of the field. 
Rather than encountering and considering intriguing new ideas each on its own merits, cognitively overloaded reviewers and readers process new work only in relation to existing exemplars (16–18). A novel idea that does not fit within extant schemas will be less likely to be published, read, or cited. Faced with this dynamic, authors are pushed to frame their work firmly in relation to well-known papers, which serve as "intellectual badges" (19) identifying how the new work is to be understood, and are discouraged from working on too-novel ideas that cannot be easily related to the existing canon. The probabilities of a breakthrough novel idea being produced, published, and widely read all decline, and indeed, the publication of each new paper adds disproportionately to the citations of the already most-cited papers.

Second, if the arrival rate of new ideas is too fast, competition among them may prevent any one from becoming known and accepted field-wide. To see why, consider a sandpile model of idea spread in a field. When sand is dropped on a sandpile slowly, one grain at a time, waiting for movement on the pile to stop before dropping the next grain, the sandpile over time reaches a scale-free critical state in which a single dropped grain can trigger an avalanche over the whole area of the pile (2). But when sand is dropped at a rapid rate, neighboring mini-avalanches interfere with each other, and no individual grain can trigger pile-wide shifts (20). The faster the rate of dropping, the smaller the domain each new grain of sand can affect. Likewise, if the arrival rate of papers is too fast, no new paper can rise into the canon through localized processes of diffusion and preferential attachment.

The arguments above yield six predictions: two each predicting durable dominance of the most-cited papers, entrepreneurial futility for newly published papers, and a decrease in the disruptiveness (3, 4) of newly published papers. Compared to when a field produces few publications each year, when that field produces many new publications each year: 1) new citations will be more likely to cite the most-cited papers rather than less-cited papers; 2) the list of most-cited papers will change little year to year—the canon ossifies; 3) the probability that a new paper eventually becomes canon will drop; 4) new papers that do rise into the ranks of the most cited will not do so through gradual, cumulative processes of diffusion; 5) the proportion of newly published papers developing existing scientific ideas will increase and the proportion disrupting existing ideas will decrease; and 6) the probability of a new paper becoming highly disruptive will decline.
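The rich-get-richer citation flow behind predictions 1–3 can be made concrete with a toy simulation (not the authors' empirical model; every parameter below is hypothetical). Each new reference cites an existing paper with probability proportional to that paper's citations plus one, the classic preferential-attachment rule invoked above. Even this bare-bones version, which omits the attention constraints that produce the field-size effect itself, shows new citations concentrating heavily on the already most-cited papers:

```python
import random
from collections import Counter

def citation_concentration(n_new=5000, refs_per_paper=5, seed=1):
    """Toy preferential attachment: each reference cites an existing paper
    with probability proportional to (citations received + 1).

    Drawing uniformly from `urn` implements that rule, because every paper
    appears once by default plus once per citation it has received.
    Returns the share of all citations held by the top 1% of papers.
    """
    rng = random.Random(seed)
    citations = Counter()
    urn = list(range(100))              # small seed cohort of papers
    next_id = 100
    for _ in range(n_new):
        for _ in range(refs_per_paper):
            target = rng.choice(urn)
            citations[target] += 1
            urn.append(target)          # one extra urn entry per citation
        urn.append(next_id)             # baseline entry for the new paper
        next_id += 1
    total = sum(citations.values())
    top = citations.most_common(next_id // 100)
    return sum(c for _, c in top) / total

print(f"top 1% of papers hold {citation_concentration():.0%} of all citations")
```

In the paper's account, large fields amplify this concentration further because overloaded readers anchor on canonical exemplars; capturing that would require adding an explicit cap on each reader's attention.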
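Predictions 5 and 6 rest on the disruption measure of refs. 3 and 4 (the CD index of Funk and Owen-Smith): a later paper that cites a focal paper while ignoring the focal paper's own references treats it as a new starting point (disruptive), while one that cites both treats it as an increment (consolidating). A minimal sketch of that published formula, applied to a hypothetical toy citation network:

```python
def cd_index(focal_id, focal_refs, later_papers):
    """CD (disruption) index of Funk & Owen-Smith for one focal paper.

    later_papers: {paper_id: set of that paper's references}, restricted to
    papers published after the focal paper. Each later paper scores
    +1 if it cites the focal paper but none of its references (disruptive),
    -1 if it cites the focal paper and its references (consolidating),
     0 if it cites only the focal paper's references (dilutes the index).
    CD is the mean score, ranging from -1 to +1.
    """
    scores = []
    for refs in later_papers.values():
        f = focal_id in refs
        b = bool(refs & focal_refs)
        if f or b:
            scores.append(1 if f and not b else -1 if f and b else 0)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical toy network: the focal paper cites R1 and R2; five later
# papers cite the focal paper and/or those references.
later = {
    "A": {"focal"},             # focal only        -> +1
    "B": {"focal", "R1"},       # focal + reference -> -1
    "C": {"R2"},                # reference only    ->  0
    "D": {"focal"},             # +1
    "E": {"focal", "R2", "X"},  # -1
}
print(cd_index("focal", {"R1", "R2"}, later))  # (1 - 1 + 0 + 1 - 1) / 5 = 0.0
```

Prediction 5 then amounts to the field's mean CD score falling as field size grows, and prediction 6 to the upper tail of CD scores thinning out.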

19.
Evolution and structure of sustainability science
The concepts of sustainable development have experienced extraordinary success since their advent in the 1980s. They are now an integral part of the agenda of governments and corporations, and their goals have become central to the mission of research laboratories and universities worldwide. However, it remains unclear how far the field has progressed as a scientific discipline, especially given its ambitious agenda of integrating theory, applied science, and policy, making it relevant for development globally and generating a new interdisciplinary synthesis across fields. To address these questions, we assembled a corpus of scholarly publications in the field and analyzed its temporal evolution, geographic distribution, disciplinary composition, and collaboration structure. We show that sustainability science has been growing explosively since the late 1980s, when foundational publications in the field increased its pull on new authors and intensified their interactions. The field has an unusual geographic footprint, combining contributions from, and collaborations between, cities and nations at very different levels of development. Its decomposition into traditional disciplines reveals an emphasis on the management of human, social, and ecological systems, seen primarily from an engineering and policy perspective. Finally, we show that the integration of these perspectives has created a new field only in recent years, as judged by the emergence of a giant component of scientific collaboration. These developments demonstrate the existence of a growing scientific field of sustainability science as an unusual, inclusive, and ubiquitous scientific practice, and bode well for its continued impact and longevity.
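The "giant component" criterion in this abstract is a standard network-science diagnostic: a field counts as integrated once most of its authors sit in a single connected cluster of the coauthorship graph. A minimal sketch of that computation, as a generic illustration using networkx rather than the authors' actual pipeline, with a hypothetical edge list:

```python
import networkx as nx

# Hypothetical coauthorship edges: one (author_a, author_b) pair per joint paper.
edges = [
    ("Ann", "Bo"), ("Bo", "Chen"), ("Chen", "Ann"),   # one research cluster
    ("Dev", "Eli"), ("Eli", "Fay"),                   # another cluster
    ("Chen", "Dev"),                                  # a bridging collaboration
    ("Gus", "Hana"),                                  # still-isolated pair
]
G = nx.Graph(edges)

# Largest connected component as a share of all authors.
components = sorted(nx.connected_components(G), key=len, reverse=True)
share = len(components[0]) / G.number_of_nodes()
print(f"largest component spans {share:.0%} of authors")
# A field is typically said to have a "giant component" once this share
# dominates the graph and keeps growing as new papers are added.
```

Tracking this share year by year over the publication corpus is what lets one date the moment a scattered set of communities fuses into one field.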

20.
Markets are central to modern society, so their failures can be devastating. Here, we examine a prominent failure: price bubbles. Bubbles emerge when traders err collectively in pricing, causing a misfit between market prices and the true values of assets. The causes of such collective errors remain elusive. We propose that bubbles are affected by ethnic homogeneity in the market and can be thwarted by diversity. In homogeneous markets, traders place undue confidence in the decisions of others; less likely to scrutinize others' decisions, they are more likely to accept prices that deviate from true values. To test this, we constructed experimental markets in Southeast Asia and North America, where participants traded stocks to earn money. We randomly assigned participants to ethnically homogeneous or diverse markets. We find a marked difference: across markets and locations, market prices fit true values 58% better in diverse markets. The effect is similar across sites, despite sizeable differences in culture and ethnic composition. Specifically, in homogeneous markets, overpricing is higher, as traders are more likely to accept speculative prices, and their pricing errors are more correlated than in diverse markets. In addition, when bubbles burst, homogeneous markets crash more severely. The findings suggest that price bubbles arise not only from individual errors or financial conditions, but also from the social context of decision making. The evidence may inform public discussion on ethnic diversity: it may be beneficial not only for providing variety in perspectives and skills, but also because diversity facilitates friction that enhances deliberation and upends conformity.

In modern society, markets are ubiquitous (1). We rely on them not only to furnish necessities but also to finance businesses, provide healthcare, control pollution, and predict events (2). The market has become such a central social institution because it typically excels in aggregating information and expectations from disparate traders, thereby setting prices and allocating resources better than any individual or government (3). However, markets can go astray, and here we examine a prominent failure of markets: price bubbles (4–6).

Bubbles emerge when traders err collectively in pricing, causing a persistent misfit between the market price and the true value (also known as the "intrinsic" or "fundamental" value) of an asset, such as a stock (7, 8). Bubbles devastate individuals and markets, wreck nations, and destabilize the entire world economy. When a stock market bubble burst in 1929, the Great Depression materialized (6). After its "bubble economy" ruptured in 1990, Japan stagnated for decades. More recently, housing bubbles in the United States and Europe caused a financial crisis that has burdened the global economy since (3, 7).

Price bubbles can wreck people, markets, and nations, but they also present a puzzle. That people occasionally err is unsurprising—psychologists and economists have documented myriad individual biases—but individual errors do not necessitate a bubble. Traders vie for advantage, so if some unwittingly misprice an asset, for example by paying lofty prices, competitors should exploit the error by offering to sell dearly, thereby profiting from others' mistakes (9). At the same time, the sellers also increase supply and depress prices, which should prevent a bubble. In other words, even if some traders err, the market as a whole should still price accurately—markets are thought to be self-correcting (3).
For price bubbles to emerge, pricing errors must be not idiosyncratic but common among traders. Attempting to pinpoint the cause of bubbles, some researchers have designed experimental markets that are ideally suited for accurate decision making. However, even there—with skilled participants who possess complete information about the true values of the stocks traded—bubbles persist (7, 8). Researchers have shown that bubbles are related to financial conditions such as excess cash (10), but also to behavior that exhibits "elements of irrationality" (11). Indeed, bubbles have long been ascribed to collective delusions, implied in terms such as "herd behavior" and "animal spirits" (12–14), but their exact causes remain nebulous. We suggest that price bubbles arise not only from individual errors or financial conditions but also from the social context of decision making.

We draw on studies that have used simulations (15), ethnographic accounts of an arbitrage disaster (9), and qualitative research on the recent financial crisis (16), all of which point to the dangers of homogeneity. We also rely on past research investigating the effects of diversity on the performance of countries and regions, organizations, and teams. Our results suggest that bubbles are affected by a property of the collectivity of market traders: ethnic homogeneity.

Homogeneity and diversity have been studied across the social sciences. A commonly accepted view is that cognitive diversity, an assortment of perspectives and skills, enables the exchange of valuable information, thereby enhancing creativity and problem solving (15, 17). However, when it comes to ethnic diversity, the effects are decidedly mixed. Ethnic diversity has been studied in multiple spheres, including economic growth (18, 19), social capital (20), cities and neighborhoods (21), organizations (17, 22), work teams (23–25), and jury deliberations (26). Some studies find benefits, but others do not. For instance, ethnic diversity in a city or region can summon a multitude of abilities, experiences, and cultures, but it can also bring heterogeneity in preferences and mores, which complicates public policy decisions (18, 27) and may hamper collective action (20). In the workplace, ethnic diversity is associated with greater innovation, but also with increased conflict (28).

Some of the disparity can be explained by the results we report here: ethnic diversity facilitates friction. This friction can increase conflict in some group settings, whether a work team, a community, or a region (29). Conversely, ethnic homogeneity may induce confidence, or instrumental trust (30), in others' decisions (confidence not necessarily in their benevolence or morality, but in the reasonableness of their decisions, as captured in such everyday statements as "I trust his judgment"). However, in modern markets, vigilant skepticism is beneficial; overreliance on others' decisions is risky.

As Portes and Vickstrom (31) note, modern "markets do not run on social capital; they operate instead on the basis of universalistic rules and their embodiment in specific roles." In other words, modern markets rely less on the mechanical solidarity engendered by coethnicity, the "bounded solidarity" (32) embodied, for instance, in the Maghribi traders' coalition (33) or the rotating credit associations of Southeast Asia (34, 35). Instead, modern markets rely on organic solidarity, which turns on heterogeneity, role differentiation, and division of labor (31, 36).
Ethnic homogeneity may be beneficial in some group settings for the same reason it may be detrimental to modern markets: it instills confidence in others' decisions. Confidence in others' decisions matters because, in many situations, people watch others for cues about appropriate behavior (37). When people enter a market, whether to purchase stock, buy a house, or hire an employee, they heed not only the objective features of the good or service—the performance of the company, the number of bedrooms, the years of work experience—but also the behavior of others, attempting to decipher their mindset before deciding how to act (12, 13, 38). In a modern market, where competition is key, undue confidence in others' decisions is counterproductive: it can discourage scrutiny and encourage imitation of others' decisions, ultimately causing bubbles.

In ethnically homogeneous markets, we propose, traders place greater confidence in the actions of others. They are more likely to accept their coethnics' decisions as reasonable, and therefore more likely to act alike. Compared with those in an ethnically diverse market, traders in a homogeneous market are less likely to scrutinize others' behavior. Conversely, in a diverse market, traders are more likely to scrutinize others' behavior and less likely to assume that others' decisions are reasonable.

This proposition is galvanized by a persistent empirical finding across the social sciences: people tend to be more trusting of the perspectives, actions, and intentions of ethnically similar others (21, 39, 40). As intergroup contact theory and social identity theory establish, shared ethnic identity is a broad basis for establishing trust among strangers. Moreover, empirical evidence shows specifically that people surrounded by ethnic peers tend to process information more superficially (26, 41, 42). Such superficial thinking fits with the notion of greater confidence in others' decisions: if one assumes that others' decisions are reasonable, one may exert less effort in scrutinizing them. For instance, ethnically diverse juries consider a wider range of perspectives, deliberate longer, and make fewer inaccurate statements than homogeneous juries (26). Compared with those in homogeneous discussion groups, students who are told they will join diverse discussion groups review the discussion materials more thoroughly beforehand (42) and write more complex postdiscussion essays (41). In markets, where information is incomplete and decisions are uncertain (43), traders may be particularly reliant on ethnicity as a group-level heuristic for establishing confidence in others' decisions. Such superficial information processing can engender conformity, herding, and price bubbles. As the term implies, herding is the outcome not of careful analysis but of observational imitation (14).

Therefore, we propose that, when an offer is made to buy or to sell an asset, traders in homogeneous markets are more likely to accept it than those in diverse markets. Because traders in homogeneous markets place greater confidence in the decisions of their coethnics, they are more likely to accept offers that are further from the true value. This is not an individual idiosyncrasy but a collective phenomenon: pricing errors of traders in homogeneous markets are more likely to be correlated than those of traders in diverse markets.
The culmination of these processes is bigger bubbles. To study the effects of diversity on markets, we created experimental markets in Southeast Asia (study 1) and North America (study 2). We selected these locales purposefully. The ethnic groups in them are distinct and nonoverlapping—Chinese, Malays, and Indians in Southeast Asia, and Whites, Latinos, and African-Americans in North America—thus allowing a broad comparison. We also sought more generalizable results by including participants beyond Western, rich, industrialized, and democratic nations (44).

Realistic trading requires financial skills, so we turned to those likely to possess them. For study 1, in Southeast Asia, we recruited skilled participants, trained in business or finance, for a "stock-trading simulation." We surveyed their demographics in advance and randomly assigned them to markets (trading sessions) so as to create a collectivity of traders that was either ethnically homogeneous or diverse (Fig. 1). In the homogeneous markets, all participants were drawn from the dominant ethnicity in the locale; in the diverse markets, at least one of the participants was an ethnic minority. All traders could view their counterparts and note the ethnicities present in the market.

[Fig. 1. The experiment. Participants were randomly assigned to markets that were ethnically homogeneous or diverse (Left). After they received the information needed to price stocks accurately, we assessed each participant's financial skills individually, using 10 hypothetical market scenarios to establish a baseline of pricing accuracy (Center). Trading in a computerized stock market, each participant was free to buy and sell stocks and/or to make requests to buy ("bid") or offers to sell ("ask"). All trading information was true, public, and anonymous: all participants could see all completed transactions and bid and ask offers (Right; see example in SI Appendix, Fig. S8). The data reflect actual prices in the sixth period of trading in two of the markets of study 1. The experiment did not involve deception.]

When the participants arrived in the trading laboratory, we provided them with all of the information necessary to calculate the stocks' true values accurately, including examples. After they read the instructions (and before actual trading), we assessed each participant's comprehension and financial (pricing) skills. We presented each participant separately with simple market scenarios and asked him or her to declare the prices at which he or she would buy or sell in each scenario. The participants could not see the others' responses. We used the responses to calculate ex-ante pricing accuracy: the extent to which the participants' responses, in aggregate, approximated the true values of the stocks. This measure serves as a baseline of performance. Because the responses were collected individually, and participants could not observe others' responses, social influence was minimal at this stage. Fig. 1 provides a visual overview of the experiment.

Next, participants were allocated cash and stocks and began trading. Much as in a modern stock market, participants observed all of the trading activity on their computer screens. They saw the prices at which others bid to buy and asked to sell. They saw what others ultimately paid and received. As various financial features of the market can affect bubbles (45–47), we controlled for these through the experimental design.
While trading, participants could not see each other or communicate directly. As in modern stock markets, they did not know which trader made a particular bid or offer. Direct social influence was thus curtailed, but herding was possible. When trading ended, the participants received their earnings in cash. We then used the prices at which stocks were bought and sold to calculate ex-post pricing accuracy: the extent to which market prices, on average, approximated the true values of the stocks.

For study 2, a replication in North America, we followed the same protocol. An exact, or direct, replication further suggests that the pattern we observed is general, independent of specific culture or demographics (48). We therefore selected a wholly different site, distinct in culture and encompassing a different mix of ethnicities.
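The trading mechanics described above, with public, anonymous bids and asks that any trader may accept, are those of a continuous double auction. The experiment's actual trading software is not specified in this excerpt, so the following is only a toy sketch of that market structure, assuming price-time priority and single-unit orders:

```python
import heapq

class OrderBook:
    """Toy continuous double auction (not the authors' trading software).

    Bids and asks rest anonymously in the book; an incoming order trades
    immediately against the best opposite-side order when prices cross,
    at the resting order's price.
    """
    def __init__(self):
        self.bids = []      # max-heap via negated price: (-price, seq, trader)
        self.asks = []      # min-heap: (price, seq, trader)
        self.seq = 0        # arrival counter gives price-time priority
        self.trades = []    # (price, buyer, seller)

    def submit(self, side, price, trader):
        self.seq += 1
        if side == "bid":
            if self.asks and self.asks[0][0] <= price:          # crosses best ask
                ask_price, _, seller = heapq.heappop(self.asks)
                self.trades.append((ask_price, trader, seller))
            else:
                heapq.heappush(self.bids, (-price, self.seq, trader))
        else:  # "ask"
            if self.bids and -self.bids[0][0] >= price:          # crosses best bid
                neg_bid, _, buyer = heapq.heappop(self.bids)
                self.trades.append((-neg_bid, buyer, trader))
            else:
                heapq.heappush(self.asks, (price, self.seq, trader))

book = OrderBook()
book.submit("bid", 98, "T1")    # rests: best bid 98
book.submit("ask", 105, "T2")   # rests: best ask 105
book.submit("bid", 106, "T3")   # crosses: trades at 105 with T2
print(book.trades)              # [(105, 'T3', 'T2')]
```

In this structure, "accepting" a speculative price simply means submitting an order that crosses a resting bid or ask, which is the behavior the homogeneity hypothesis predicts will be more frequent among coethnic traders.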
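Both headline measures, ex-ante and ex-post pricing accuracy, boil down to asking how far prices sit from the stocks' known true values, and the correlated-errors claim asks whether traders miss in the same direction. The excerpt does not give the exact formulas, so the deviation measure below (a RAD-style mean relative absolute deviation) and all numbers are assumptions for illustration:

```python
import statistics

def mean_relative_deviation(prices, true_value):
    """Mean |price - true value| / true value; lower means prices fit the
    true value better. (The paper's exact metric is not given in this
    excerpt; this RAD-style measure is a stand-in.)"""
    return statistics.mean(abs(p - true_value) / true_value for p in prices)

TRUE_VALUE = 100.0
homogeneous = [118, 125, 131, 140, 122]   # hypothetical transaction prices
diverse = [104, 98, 107, 101, 95]

for label, prices in (("homogeneous", homogeneous), ("diverse", diverse)):
    dev = mean_relative_deviation(prices, TRUE_VALUE)
    print(f"{label:>11} market: prices {dev:.1%} off true value on average")

# Correlated errors: per-scenario pricing errors for two traders in the
# same market; a high Pearson r means they err together, not idiosyncratically.
trader_a = [12, 18, 25, 31, 15]           # hypothetical errors
trader_b = [10, 20, 22, 35, 12]
print("error correlation:",
      round(statistics.correlation(trader_a, trader_b), 2))  # Python 3.10+
```

The ex-ante version of the measure is computed from the individually elicited scenario prices before trading, and the ex-post version from actual transaction prices, so the difference between them isolates what the social context of trading adds.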
