Similar Documents
20 similar documents found
1.
An inherent risk of algorithmic personalization is disproportionate targeting of individuals from certain groups (or demographic characteristics, such as gender or race), even when the decision maker does not intend to discriminate based on those “protected” attributes. This unintended discrimination is often caused by underlying correlations in the data between protected attributes and other observed characteristics used by the algorithm to create predictions and target individuals optimally. Because these correlations are hidden in high-dimensional data, removing protected attributes from the database does not solve the discrimination problem; instead, removing those attributes often exacerbates the problem by making it undetectable and in some cases, even increases the bias generated by the algorithm. We propose BEAT (bias-eliminating adapted trees) to address these issues. This approach allows decision makers to target individuals based on differences in their predicted behavior—hence, capturing value from personalization—while ensuring a balanced allocation of resources across individuals, guaranteeing both group and individual fairness. Essentially, the method only extracts heterogeneity in the data that is unrelated to protected attributes. To do so, we build on the general random forest (GRF) framework [S. Athey et al., Ann. Stat. 47, 1148–1178 (2019)] and develop a targeting allocation that is “balanced” with respect to protected attributes. We validate BEAT using simulations and an online experiment with N = 3,146 participants. This approach can be applied to any type of allocation decision that is based on prediction algorithms, such as medical treatments, hiring decisions, product recommendations, or dynamic pricing.

In the era of algorithmic personalization, resources are often allocated based on individual-level predictive models. For example, financial institutions allocate loans based on individuals’ expected risk of default, advertisers display ads based on users’ likelihood to respond to the ad, hospitals allocate organs to patients based on their chances to survive, and marketers allocate price discounts based on customers’ propensity to respond to such promotions. The rationale behind these practices is to leverage differences across individuals, such that a desired outcome can be optimized via personalized or targeted interventions. For example, a financial institution would reduce risk of default by approving loans to individuals with the lowest risk of defaulting, advertisers would increase profits when targeting ads to users who are most likely to respond to those ads, and so forth.

There are, however, individual differences that firms may not want to leverage for personalization, as they might lead to disproportionate allocation to a specific group. These individual differences may include gender, race, sexual orientation, or other protected attributes. In fact, several countries have instituted laws against discrimination based on protected attributes in certain domains (e.g., in voting rights, employment, education, and housing). However, discrimination in other domains is lawful but is often still perceived as unfair or unacceptable (1). For example, it is widely accepted that ride-sharing companies set higher prices during peak hours, but these companies were criticized when their prices were found to be systematically higher in non-White neighborhoods compared with White areas (2).

Intuitively, a potentially attractive solution to this broad concern of discrimination based on protected attributes may be to remove the protected attributes from the data and to generate a personalized allocation policy based on the predictions obtained from models trained using only the unprotected attributes. However, such an approach would not solve the problem, as other variables remaining in the dataset may be related to the protected attributes and will therefore still generate bias. Interestingly, as we show in our empirical section, there are cases in which removing protected attributes from the data can actually increase the degree of discrimination on the protected attributes (i.e., a firm that chooses to exclude protected attributes from its database might create a greater imbalance). This finding is particularly relevant today because companies are increasingly announcing plans to stop using protected attributes for fear of engaging in discriminatory practices. In our empirical section, we show the conditions under which this finding applies in practice.

Personalized allocation algorithms typically use data as input to a two-stage model. First, the data are used to predict accurate outcomes based on the observed variables in the data (the “inference” stage). Then, these predictions are used to create an optimal targeting policy with a particular objective function in mind (the “allocation” stage). The (typically unintended) biases in the policies might occur because the protected attributes are often correlated with the predicted outcomes.
Thus, using either the protected attributes themselves or variables that are correlated with those protected attributes in the inference stage may generate a biased allocation policy.*

This biased personalization problem could, in principle, be solved using constrained optimization, focusing on the allocation stage of the algorithm (e.g., refs. 3 and 4). Using this approach, a constraint is added to the optimization problem such that individuals who are allocated to receive treatment (the “targeted” group) are not systematically different in their protected attributes from those who do not receive treatment. Although methods for constrained optimization problems often work well in low dimensions, they are sensitive to the curse of dimensionality (e.g., if there are multiple protected attributes).

Another option would be to focus on the data that are fed to the algorithm and “debias” the data before they are used: that is, transform the unprotected variables such that they become independent of the protected attributes and use the resulting data in the two-stage model (e.g., refs. 5 and 6). While doing so guarantees pairwise independence of each variable from the protected attributes, it is difficult to account for underlying dependencies between the protected attributes and interactions of the different variables (6). Most importantly, while these methods are generally effective at achieving group fairness (statistical parity), they often harm individual fairness (7–9). Finally, debiasing methods require the decision maker to collect protected attributes at all times, both when estimating the optimal policy and when applying that policy to new individuals. A more desirable approach would be to create a mapping between unprotected attributes and policy allocations that not only is fair (both at the group level and at the individual level) but can also be applied without the need to collect protected attributes for new individuals.

In this paper, we depart from those approaches and instead address the potential bias at the inference stage (rather than pre- or postprocessing the data or adding constraints to the allocation). Our focus is to infer an object of interest—“conditional balanced targetability” (CBT)—that measures the adjusted treatment effect predictions, conditional on a set of unprotected variables. Essentially, we create a mapping from unprotected attributes to a continuous targetability score that leads to a balanced allocation of resources with respect to the protected attributes. Previous papers that modified the inference stage (e.g., refs. 10–14) are limited in their applicability because they typically require additional assumptions and restrictions and are limited in the types of classifiers they apply to. The benefits of our approach are noteworthy. First, allocating resources based on CBT scores does, by design, achieve both group and individual fairness. Second, we leverage computationally efficient methods for inference that are easy to implement in practice and also have desirable scalability properties. Third, out-of-sample predictions for CBT do not require protected attributes as an input. In other words, firms or institutions seeking allocation decisions that do not discriminate on protected attributes only need to collect the protected attributes when calibrating the model.
Once the model is estimated, future allocation decisions can be based on (out-of-sample) predictions, which only require the unprotected attributes of the new individuals.

We propose a practical solution whereby the decision maker can leverage the value of personalization without the risk of disproportionately targeting individuals based on protected attributes. The solution, which we name BEAT (bias-eliminating adapted trees), generates individual-level predictions that are independent of any preselected protected attributes. Our approach builds on general random forests (GRFs) (15, 16), which are designed to efficiently estimate heterogeneous outcomes. Our method preserves most of the core elements of GRF, including the use of forests as a type of adaptive nearest neighbor estimator and the use of gradient-based approximations to specify the tree-split point. Importantly, we depart from GRF in how we select the optimal split for partitioning. Rather than using divergence between children nodes as the primary objective of any partition, the BEAT algorithm combines two objectives—heterogeneity in the outcome of interest and homogeneity in the protected attributes—when choosing the optimal split. Essentially, the BEAT method only identifies individual differences in the outcome of interest (e.g., heterogeneity in response to price) that are homogeneously distributed in the protected attributes (e.g., race). As a result, not only will the protected attributes be equally distributed across policy allocations (group fairness), but the method will also ensure that individuals with the same unprotected attributes receive the same allocation (individual fairness).

Using a variety of simulated scenarios, we show that our method exhibits promising empirical performance. Specifically, BEAT reduces the unintended bias while leveraging the value of personalized targeting. Further, BEAT allows the decision maker to quantify the trade-off between performance and discrimination. We also examine the conditions under which the intuitive approach of removing protected attributes from the data alleviates or increases the bias. Finally, we apply our solution to a marketing context in which a firm decides which customers to target with a discount coupon. Using an online sample of N = 3,146 participants, we find strong evidence of relationships between “protected” and “unprotected” attributes in real data. Moreover, applying personalized targeting to these data leads to significant bias against a protected group (in our case, older populations) due to these underlying correlations. Finally, we demonstrate that BEAT mitigates the bias, generating a balanced targeting policy that does not discriminate against individuals based on protected attributes.

Our contribution fits broadly into the vast literature on fairness and algorithmic bias (e.g., refs. 2 and 17–22). Most of this literature has focused on uncovering biases and their causes as well as on conceptualizing the algorithmic bias problem and potential solutions for researchers and practitioners. We complement this literature by providing a practical solution that prevents algorithmic bias caused by underlying correlations. Our work also builds on the growing literature on treatment personalization (e.g., refs. 23–28). This literature has mainly focused on the estimation of heterogeneous treatment effects and designing targeting rules accordingly, but it has largely ignored fairness and discrimination considerations in the allocation of treatment.
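To make the modified split rule concrete, the following is a minimal sketch in Python. It illustrates the idea rather than the authors' implementation (BEAT builds on the GRF framework); the function name, the simple two-child score, and the weight lam are simplifications introduced here. A candidate split scores well when the children diverge in mean outcome but stay similar in protected-attribute composition.

```python
import numpy as np

def beat_split_score(outcome, protected, left_mask, lam=1.0):
    """Score a candidate split in the spirit of BEAT (an illustrative sketch,
    not the authors' GRF-based implementation).

    outcome:   (n,) per-unit outcome or treatment-effect estimates
    protected: (n, p) protected attributes
    left_mask: boolean (n,) mask, True for units sent to the left child
    lam:       weight on the protected-attribute homogeneity objective
    """
    right_mask = ~left_mask
    # Heterogeneity objective: reward divergence in the children's mean
    # outcomes (what a standard causal tree maximizes on its own).
    outcome_gap = (outcome[left_mask].mean() - outcome[right_mask].mean()) ** 2
    # Homogeneity objective: penalize divergence in the children's
    # protected-attribute composition (the source of group imbalance).
    protected_gap = np.sum(
        (protected[left_mask].mean(axis=0) - protected[right_mask].mean(axis=0)) ** 2
    )
    return outcome_gap - lam * protected_gap

# Usage: score a few candidate thresholds on an unprotected covariate x
# and keep the best one.
rng = np.random.default_rng(0)
outcome = rng.normal(size=100)
protected = rng.integers(0, 2, size=(100, 1)).astype(float)
x = rng.normal(size=100)
best_t = max(np.quantile(x, [0.25, 0.5, 0.75]),
             key=lambda t: beat_split_score(outcome, protected, x <= t))
```

With lam = 0 the score reduces to an ordinary heterogeneity-seeking split; increasing lam trades targeting performance for balance, which is the trade-off the method lets the decision maker quantify.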

2.
3.
For decades, public warning messages have been relayed via broadcast information channels, including radio and television; more recently, risk communication channels have expanded to include social media sites, where messages can be easily amplified by user retransmission. This research examines the factors that predict the extent of retransmission for official hazard communications disseminated via Twitter. Using data from events involving five different hazards, we identify three types of attributes—local network properties, message content, and message style—that jointly amplify and/or attenuate the retransmission of official communications under imminent threat. We find that the use of an agreed-upon hashtag and the number of users following an official account positively influence message retransmission, as does message content describing hazard impacts or emphasizing cohesion among users. By contrast, messages directed at individuals, expressing gratitude, or including a URL were less widely disseminated than similar messages without these features. Our findings suggest that some measures commonly taken to convey additional information to the public (e.g., URL inclusion) may come at a cost in terms of message amplification; on the other hand, some types of content not traditionally emphasized in guidance on hazard communication may enhance retransmission rates.

Under conditions of imminent threat, rapid communication of warning information to the public is a primary strategy for decreasing loss of life and increasing public safety by eliciting protective actions from those at risk (1). For decades, public warnings have been relayed via mass media channels, including radio, broadcast television, and sirens (2). With the advent of social computing, warnings have begun to be disseminated via online social networks (OSNs), where messages can be more easily propagated and amplified by the user population (3–6). Risk amplification via message retransmission in this setting is important because it enables a message to reach individuals beyond the sender’s direct contacts, increasing exposure and potentially leading to lifesaving actions (7). Although such transmission occurs offline as well (8–11), OSNs offer the potential for the rapid retransmission of short messages with higher fidelity—and to more persons—than would typically be feasible via other means.

In addition to enabling message diffusion, the clustered structure of most OSNs (12, 13) allows retransmission to expose individuals to the same message multiple times. Multiple exposures to messages have been linked to greater confidence in message veracity (14, 15), which can lead to further sharing (16, 17). Repeated exposures from multiple network ties are often a prerequisite for the spread of information through networks, and are of particular importance for inducing behavioral change (4, 18–20). Under conditions of imminent threat, exposure to a warning message from a trusted source (such as a neighbor, friend, or family member) strongly affects one’s willingness to take protective actions (21, 22). This highlights the need to understand the factors that enhance or suppress the amplification of emergent risk messages within OSNs, with particular attention to the features of the messages themselves. Such an understanding can inform evidence-based strategies to increase message proliferation, thus allowing risk communicators to achieve a higher level of message penetration and/or to increase the number of exposures per person in the impacted population. This research examines message retransmission—commonly referred to as “retweeting”—on Twitter, identifying network, content, and style features that promote the amplification (23) of terse (i.e., content-constrained) messages within five types of hazard events.
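The kind of analysis this implies can be sketched briefly. The example below uses simulated data and hypothetical feature names drawn from the three attribute families described above (network, content, style); the study's actual specification and estimator may differ. Because retweet counts are typically overdispersed, a negative binomial regression is a natural choice.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated message-level data; feature names are hypothetical stand-ins
# for the three attribute families (network, content, style).
rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "log_followers":  rng.normal(9, 2, n),    # local network property
    "has_hashtag":    rng.integers(0, 2, n),  # agreed-upon event hashtag
    "impact_content": rng.integers(0, 2, n),  # message describes hazard impacts
    "has_url":        rng.integers(0, 2, n),  # style feature
})
mu = np.exp(0.4 * df.log_followers + 0.6 * df.has_hashtag
            + 0.5 * df.impact_content - 0.3 * df.has_url - 2.0)
df["retweets"] = rng.poisson(mu)

# Count outcome with overdispersion: fit a negative binomial GLM and read
# the coefficient signs as amplification (+) or attenuation (-).
model = sm.GLM(df["retweets"],
               sm.add_constant(df.drop(columns="retweets")),
               family=sm.families.NegativeBinomial()).fit()
print(model.summary())
```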

4.
Ideological media bias is increasingly central to the study of politics. Yet, past literature often assumes that the ideological bias of any outlet, at least in the short term, is static and exogenous to the political process. We challenge this assumption. We use longitudinal data from the Stanford Cable News Analyzer (2010 to 2021), which reports the screen time of various political actors on cable news, and quantify the partisan leaning of those actors using their past campaign donation behavior. Using one instantiation of media bias—the mean ideology of political actors on a channel, i.e., visibility bias—we examine weekly, within-day, and program-level estimates of media bias. We find that media bias is highly dynamic even in the short term and that the heightened polarization between TV channels over time was mostly driven by the prime-time shows.
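As a reading aid, the visibility-bias measure described here reduces to a screen-time-weighted mean ideology. The sketch below uses hypothetical variable names; the paper's exact construction (e.g., its handling of actors without donation records) may differ.

```python
import numpy as np

def visibility_bias(screen_time, ideology):
    """Screen-time-weighted mean ideology of the actors shown on a channel.

    screen_time: seconds of airtime per political actor
    ideology:    partisan-leaning score per actor (e.g., inferred from campaign
                 donations; negative = left-leaning, positive = right-leaning)
    """
    return float(np.average(np.asarray(ideology, dtype=float),
                            weights=np.asarray(screen_time, dtype=float)))

# Example week: right-leaning actors receive most of the airtime,
# so the channel's visibility bias is positive (right-leaning).
print(visibility_bias([120, 30, 300], [-0.8, -0.2, 0.6]))
```

Computed per week, per daypart, or per program, the same quantity yields the weekly, within-day, and program-level estimates the paper examines.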

5.
6.
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se drives social tie formation, because partisanship is confounded with many other factors. Here, we test the causal effect of shared partisanship on the formation of social ties in a field experiment on Twitter. We created bot accounts that self-identified as people who favored the Democratic or Republican party and that varied in the strength of that identification. We then randomly assigned 842 Twitter users to be followed by one of our accounts. Users were roughly three times more likely to reciprocally follow back bots whose partisanship matched their own, and this was true regardless of the bot’s strength of identification. Interestingly, there was no partisan asymmetry in this preferential follow-back behavior: Democrats and Republicans alike were much more likely to reciprocate follows from copartisans. These results demonstrate a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid field setting and have important implications for political psychology, social media, and the politically polarized state of the American public.

7.
Twitter is a public microblogging platform that overcomes physical limitations and allows unrestricted participation beyond academic silos, enabling interactive discussions. Twitter-based journal clubs have demonstrated growth, sustainability, and worldwide communication, using a hashtag (#) to follow participation. This article describes the first year of #GeriMedJC, a monthly 1-hour live, 23-hour asynchronous Twitter-based complement to the traditional-format geriatric medicine journal club. The Twitter moderator tweets from the handle @GeriMedJC; encourages use of #GeriMedJC; and invites content experts, study authors, and followers to participate in critical appraisal of medical literature. Using the hashtag #GeriMedJC, tweets were categorized according to thematic content, relevance to the journal club, and authorship. The third-party analytical tools Symplur and Twitter Analytics were used for growth and effect metrics (number of followers, participants, tweets, retweets, replies, impressions). Qualitative analysis of follower and participant profiles was used to establish country of origin and occupation. A semistructured interview of postgraduate trainees was conducted to ascertain qualitative aspects of the experience. In the first year, @GeriMedJC grew to 541 followers on six continents. Most followers were physicians (43%), two-thirds of whom were geriatricians. Growth metrics increased over 12 months, with a mean of 121 tweets, 25 participants, and 105,831 impressions per journal club. Tweets were most often related to the article being appraised (87.5%) and ranged in thematic content from clinical practice (29%) to critical appraisal (24%) to medical education (20%). #GeriMedJC is a feasible example of using social media platforms such as Twitter to encourage international and interprofessional appraisal of medical literature.

8.
9.
The precise mechanisms by which the information ecosystem polarizes society remain elusive. Focusing on political sorting in networks, we develop a computational model that examines how social network structure changes when individuals participate in information cascades, evaluate their behavior, and potentially rewire their connections to others as a result. Individuals follow proattitudinal information sources but are more likely to first hear and react to news shared by their social ties and only later evaluate these reactions by direct reference to the coverage of their preferred source. Reactions to news spread through the network via a complex contagion. Following a cascade, individuals who determine that their participation was driven by a subjectively “unimportant” story adjust their social ties to avoid being misled in the future. In our model, this dynamic leads social networks to politically sort when news outlets differentially report on the same topic, even when individuals do not know others’ political identities. Observational follow network data collected on Twitter support this prediction: We find that individuals in more polarized information ecosystems lose cross-ideology social ties at a rate that is higher than predicted by chance. Importantly, our model reveals that these emergent polarized networks are less efficient at diffusing information: Individuals avoid what they believe to be “unimportant” news at the expense of missing out on subjectively “important” news far more frequently. This suggests that “echo chambers”—to the extent that they exist—may not echo so much as silence.

By standard measures, political polarization in the American mass public is at its highest point in nearly 50 y (1). The consequences of this fundamental and growing societal divide are potentially severe: High levels of polarization reduce policy responsiveness and have been associated with decreased social trust (2), acceptance and dissemination of misinformation (3), democratic erosion (4), and in extreme cases even violence (5). While policy divides have traditionally been thought to drive political polarization, recent research suggests that political identity may play a stronger role (6, 7). Yet people’s political identities may be increasingly less visible to those around them: Many Americans avoid discussing and engaging with politics and profess disdain for partisanship (8), and identification as “independent” from the two major political parties is higher than at any point since the 1950s (9). Taken together, these conflicting patterns complicate simple narratives about the mechanisms underlying polarization. Indeed, how macrolevel divisions relate to the preferences, perceptions, and interpersonal interactions of individuals remains a significant puzzle.

A solution to this puzzle is particularly elusive given that many Americans, increasingly wary of political disagreement, avoid signaling their politics in discussions and self-presentation and thus lack direct information about the political identities of their social connections (10). However, regardless of individuals’ perceptions about each other, the information ecosystem around them—the collection of news sources available to society—reflects, at least to some degree, the structural divides of the political and economic system (11, 12). Traditional accounts of media-driven polarization have emphasized a direct mechanism: Individuals are influenced by the news they consume (13) but also tend to consume news from outlets that align with their politics (14, 15), thereby reinforcing their views and shifting them toward the extremes (16, 17). However, large-scale behavioral studies have offered mixed evidence of these mechanisms (18, 19), including evidence that many people encounter a significant amount of counter-attitudinal information online (20–22). Furthermore, instead of directly tuning into news sources, individuals often look to their immediate social networks to guide their attention to the most important issues (23–27). Therefore, it is warranted to investigate how the information ecosystem may impact society beyond direct influence on individual opinions.

Here, we examine media-driven polarization as a social process (28) and propose a mechanism—information cascades—by which a polarized information ecosystem can indirectly polarize society by causing individuals to self-sort into emergent homogeneous social networks even when they do not know others’ political identities. Information cascades, in which individuals observe and adopt the behavior of others, allow the actions of a few individuals to quickly propagate through a social network (29, 30). Found in social systems ranging from fish schools (31) and insect swarms (32) to economic markets (33) and popular culture (29), information cascades are a widespread social phenomenon that can greatly impact collective behavior such as decision making (34). Online social media platforms are especially prone to information cascades since the primary affordances of these services involve social networking and information sharing (35–38): For example, users often see and share posts of social connections without ever reading the source material (e.g., a shared news article) (39). In addition to altering beliefs and behavior, information cascades can also affect social organization: For instance, retweet cascades on Twitter lead to bursts of unfollowing and following activity (40) that indicate sudden shifts in social connections as a direct result of information spreading through the social network. While research so far has been agnostic as to the content of the information shared during a cascade, it is plausible that information from partisan news outlets could create substantial changes in networks of individuals.

We therefore propose that the interplay between network-altering cascades and an increasingly polarized information ecosystem could result in politically sorted social networks, even in the absence of partisan cues. While we do not argue that this mechanism is the only driver of political polarization—a complex phenomenon likely influenced by several factors—we do argue that the interplay between information and social organization could be one driver that is currently overlooked in discussions of political polarization. We explore this proposition by developing a general theoretical model. After presenting the model, we use Twitter data to probe some of its predictions. Finally, we use the model to explore how the emergence of politically sorted networks might alter information diffusion.
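A toy rendition of this dynamic helps fix ideas. The sketch below is a minimal simulation under stated assumptions (a small-world network, a share threshold of two neighbors, arbitrary seeding rates), not the authors' full model: agents share when enough neighbors have shared, then drop the tie that exposed them whenever their own preferred outlet treated the story as unimportant, rewiring to a random stranger without ever observing anyone's identity.

```python
import random
import networkx as nx

def simulate_sorting(n=200, k=8, steps=400, seed=1):
    """Cascade-then-rewire toy model; returns the final fraction of
    cross-identity ties (lower = more politically sorted)."""
    rng = random.Random(seed)
    g = nx.watts_strogatz_graph(n, k, 0.1, seed=seed)
    identity = {i: i % 2 for i in g.nodes}          # two identities, two outlets
    for _ in range(steps):
        story = rng.choice([0, 1])                  # only outlet `story` covers it
        # Seed the cascade among agents whose outlet deems the story important.
        shared = {i for i in g.nodes if identity[i] == story and rng.random() < 0.1}
        source, growing = {}, True
        while growing:                              # complex-contagion spread
            growing = False
            for i in g.nodes:
                if i in shared:
                    continue
                sharers = [j for j in g.neighbors(i) if j in shared]
                if len(sharers) >= 2:               # threshold of two sharing ties
                    shared.add(i)
                    source[i] = rng.choice(sharers) # remember who exposed agent i
                    growing = True
        # Post-cascade evaluation: participants whose own outlet downplayed
        # the story drop the exposing tie and rewire to a random stranger.
        for i, j in source.items():
            if identity[i] != story and g.has_edge(i, j):
                strangers = [m for m in g.nodes if m != i and not g.has_edge(i, m)]
                if strangers:
                    g.remove_edge(i, j)
                    g.add_edge(i, rng.choice(strangers))
    return sum(identity[u] != identity[v] for u, v in g.edges) / g.number_of_edges()

print(simulate_sorting())
```

Because cross-identity ties are disproportionately the ones that expose agents to stories their own outlet downplays, they are dropped at an elevated rate, so the network sorts even though rewiring is identity-blind, mirroring the paper's central prediction.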

10.
11.
12.
Seminars in Hematology, 2017, 54(4): 184–188
The use of social media, and in particular Twitter, for professional purposes among healthcare providers is rapidly increasing across the world. One medical subspecialty that is leading the integration of this new communication platform into daily practice and into information dissemination to the general public is the field of hematology/oncology. A growing body of research demonstrates increasing interest among physicians in learning not only how to use social media to consume educational material, but also how to generate and contribute original content in their areas of interest and expertise. This phenomenon is most visible at the moments when the presentation of new information peaks: major medical conferences. Hematologists/oncologists regularly engage with Twitter, one of the most common forms of social media, during major medical conferences for purposes of debate, discussion, and real-time evaluation of the data being presented. As interest has grown in this area, this article reviews the new norms, practices, and impact of using Twitter at medical conferences, and explores some of the barriers and pitfalls that users are encountering in this emerging field.

13.
Parasitic diseases are widely distributed in China and seriously endanger human health. Loop-mediated isothermal amplification (LAMP) is a novel nucleic acid amplification technique that offers high sensitivity, strong specificity, simplicity, and speed, and it can be used for the rapid diagnosis of parasitic infections. This article reviews the current state of research on LAMP-based detection of parasites.

14.
Objective: To establish a loop-mediated isothermal amplification (LAMP) method for the rapid detection of Brucella in animals. Methods: Four specific primers targeting the conserved region of the Brucella outer membrane protein (OMP31) gene were designed with Primer 4.0. Ladder-like isothermal amplification of DNA was achieved with Bst DNA polymerase (large fragment). After the optimal amplification conditions were determined, the specificity and sensitivity of the assay were tested, its sensitivity was compared with that of conventional PCR, and visual readout of LAMP results was evaluated. Results: The assay was positive for Brucella abortus (B. abortus) 544A and 104M, Brucella melitensis (B. melitensis) Rev-1 and 16M, Brucella suis (B. suis) S2 and 1330S, Brucella canis (B. canis) RM6/66, Brucella ovis (B. ovis) 63/290, and Brucella neotomae (B. neotomae) 5K33, and negative for Yersinia enterocolitica (Y. enterocolitica) O:9, Escherichia coli (E. coli) O157:H7, and Salmonella typhimurium (S. typhimurium) 47729. The minimum DNA concentration detected by LAMP was 8.5 × 10⁻⁸ mg/L, a higher sensitivity than that of conventional PCR. Results could be read either by electrophoresis or by visual inspection. Conclusion: The LAMP assay established in this study is specific and sensitive and requires only simple equipment, making it suitable for rapid detection of Brucella by grassroots veterinary departments.

15.
Enterobacteria are among the pathogens responsible for intestinal diseases. Loop-mediated isothermal amplification (LAMP) is a novel nucleic acid amplification technique that offers high sensitivity, strong specificity, simplicity, and speed, and it can be used for the rapid diagnosis of enterobacterial infections. This article reviews the current state of research on LAMP-based detection of enterobacteria.

16.
Information manipulation is widespread in today’s media environment. Online networks have disrupted the gatekeeping role of traditional media by allowing various actors to influence the public agenda; they have also allowed automated accounts (or bots) to blend with human activity in the flow of information. Here, we assess the impact that bots had on the dissemination of content during two contentious political events that evolved in real time on social media. We focus on events of heightened political tension because they are particularly susceptible to information campaigns designed to mislead or exacerbate conflict. We compare the visibility of bots with human accounts, verified accounts, and mainstream news outlets. Our analyses combine millions of posts from a popular microblogging platform with web-tracking data collected from two different countries and timeframes. We employ tools from network science, natural language processing, and machine learning to analyze the diffusion structure, the content of the messages diffused, and the actors behind those messages as the political events unfolded. We show that verified accounts are significantly more visible than unverified bots in the coverage of the events but also that bots attract more attention than human accounts. Our findings highlight that social media and the web are very different news ecosystems in terms of prevalent news sources and that both humans and bots contribute to generate discrepancy in news visibility with their activity.

Online networks have become an important channel for the distribution of news. Platforms like Twitter have created a public domain in which longstanding gatekeeping roles lose prominence and nontraditional media actors can also shape the agenda, increasing the public representation of voices that would otherwise be ignored (1, 2). The role of online networks is particularly crucial to launch mobilizations, gain traction in collective action efforts, and increase the visibility of political issues (3–8). However, online networks have also created an information ecosystem in which automated accounts (human and software-assisted accounts, such as bots) can hijack communication streams for opportunistic reasons [e.g., to trigger collective attention (9, 10), gain status (11, 12), and monetize public attention (13)] or with malicious intent [e.g., to diffuse disinformation (14, 15) and seed discord (16)]. During contentious political events, such as demonstrations, strikes, or acts of civil disobedience, online networks carry benefits and risks: They can be tools for organization and awareness or tools for disinformation and conflict. However, it is unclear how human, bot, and media accounts interact in the coverage of those events, or whether social media activity increases the visibility of certain sources of information that are not so prominent elsewhere online. Here, we cast light on these information dynamics, and we measure the relevance of unverified bots in the coverage of protest activity, especially as they compare to public interest accounts.

Prior research has documented that a high fraction of active Twitter accounts are bots (17); that bots are responsible for much disinformation during election periods (18, 19); and that bots exacerbate political conflict by targeting social media users with inflammatory content (20). Past research has also looked at the role bots play in the diffusion of false information, showing that they amplify low-credibility content in the early stages of diffusion (21) but also that they do not discriminate between true and false information (i.e., bots accelerate the spread of both) and that, instead, human accounts are more likely to spread false news (22). Following this past work, this paper aims to determine whether bots distort the visibility of legitimate news accounts as defined by their audience reach off-platform. Unlike prior work, we connect Twitter activity with audience data collected from the web to determine whether the social media platform changes the salience that legitimate news sources have elsewhere online and, if so, whether bots are responsible for that distortion. We combine web-tracking data with social media data to characterize the visibility of legitimate news sources and analyze how bots create differences in the news environment. This is a particularly relevant question in the context of noninstitutional forms of political participation (like the protests we analyze here) because of their unpredictable and volatile nature.

The use of the label “bot” often blurs the diversity that the category still contains. This label serves as a shorthand for accounts that can be fully or partially automated, but accounts that exhibit bot-like behavior can have very different goals and levels of human involvement in their operation. Traditional news organizations, for instance, manage many of the accounts usually classified as bots—but these accounts are actually designed to push legitimate news content. Other accounts with bot-like behavior belong to journalists and public figures—many of whom are verified by the platform to let users know that their accounts are authentic and of public interest. Past research does not shed much light on how different types of bots enable exposure to news from legitimate sources, or how the attention they attract compares to the reach of mainstream news—which also generates a large share of social media activity (23–25). More generally, previous work does not address the question of how news visibility on social media relates to other online sources (most prominently, the web), especially in a comparative context where political settings other than the United States are considered. Are bots effective in shifting the focus of attention as it emerges elsewhere online?

This paper addresses these questions by analyzing Twitter and web-tracking data in the context of two contentious political events. The first is the Gilets Jaunes (GJ) or Yellow Vests movement, which erupted in France at the end of 2018 to demand economic justice. The second is the Catalan referendum for independence from Spain, which took place on October 1 (1-O) of 2017 as an act of civil disobedience. These two events were widely covered by mainstream media (nationally and internationally), but they also generated high volumes of social media activity, with Twitter first channeling the news that was coming out of street actions and confrontations with the police. According to journalistic accounts, Twitter helped fuel political feuds by enabling bots to exacerbate conflict (26, 27). Our analyses aim to compare the attention that unverified bot accounts received during these events of intense political mobilization with the visibility of verified and mainstream media accounts, contextualizing that activity within the larger online information environment. In particular, we want to determine whether there is a discrepancy in the reach of news sources across channels (i.e., social media and the web) and, if so, whether bot activity helps explain that discrepancy (for instance, by having bots retweet sources that are less visible on the web). Ultimately, our analyses aim to identify changes in the information environment to which people are exposed depending on the channel they use to access news—a process of particular relevance during fast-evolving political events of heightened social tension.
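The cross-channel comparison at the heart of these questions can be sketched in a few lines. The toy example below uses hypothetical outlet names and column labels and a deliberately simple discrepancy measure (a source's share of Twitter attention minus its share of web audience); the paper's actual metrics are richer and are computed from millions of posts and web-tracking records.

```python
import pandas as pd

# Hypothetical inputs: per-post retweet counts by source, and each source's
# share of off-platform (web) audience from tracking data.
posts = pd.DataFrame({
    "source":   ["outlet_a", "outlet_b", "fringe_site", "fringe_site"],
    "retweets": [120, 300, 900, 450],
})
web_share = pd.Series({"outlet_a": 0.45, "outlet_b": 0.50, "fringe_site": 0.05})

twitter_share = posts.groupby("source")["retweets"].sum()
twitter_share = twitter_share / twitter_share.sum()

# Positive values: the source is more visible on Twitter than its web
# audience would suggest -- the kind of distortion bot activity could drive.
discrepancy = (twitter_share - web_share).sort_values(ascending=False)
print(discrepancy)
```

Splitting the same computation by account type (unverified bot, human, verified, mainstream media) is what lets one ask whether bots are the accounts generating the discrepancy.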

17.
Objective: To establish a loop-mediated isothermal amplification (LAMP) assay for the detection of Giardia lamblia. Methods: Giardia lamblia trophozoites were cultured in vitro and their DNA was extracted. Based on Giardia sequences in GenBank and the principles of LAMP, four specific primers were designed and used to detect G. lamblia DNA, with Cryptosporidium oocyst DNA and Plasmodium DNA as controls and pathogen-free water as the negative control. LAMP products were stained with SYBR Green I (green indicating a positive result, brown a negative one) and analyzed by agarose gel electrophoresis for the characteristic ladder pattern. Results: Tubes containing G. lamblia DNA turned green, whereas those containing Cryptosporidium oocyst DNA, Plasmodium DNA, or the water control remained brown. On electrophoresis, LAMP products from G. lamblia DNA showed the characteristic LAMP ladder bands, whereas Cryptosporidium andersoni DNA, Plasmodium falciparum DNA, and the negative water control yielded no amplification products. Conclusion: A LAMP assay for detecting Giardia lamblia was successfully established.

18.
Whole-genome amplification (WGA) for next-generation sequencing has seen wide applications in biology and medicine when characterization of the genome of a single cell is required. High uniformity and fidelity of WGA are needed to accurately determine genomic variations, such as copy number variations (CNVs) and single-nucleotide variations (SNVs). Prevailing WGA methods have been limited by fluctuation of the amplification yield along the genome, as well as false-positive and -negative errors for SNV identification. Here, we report emulsion WGA (eWGA) to overcome these problems. We divide single-cell genomic DNA into a large number (10⁵) of picoliter aqueous droplets in oil. Containing only a few DNA fragments, each droplet is led to reach saturation of DNA amplification before demulsification such that the differences in amplification gain among the fragments are minimized. We demonstrate the proof of principle of eWGA with multiple displacement amplification (MDA), a popular WGA method. This easy-to-operate approach enables simultaneous detection of CNVs and SNVs in an individual human cell, exhibiting significantly improved amplification evenness and accuracy.

Single-cell sequencing, the characterization of the genomes of individual cells, is highly needed for studying scarce and/or precious cells, which are inaccessible to conventional bulk genome characterization, and for probing genomic variations in a heterogeneous population of cells (1–3). Recently, single-cell genomics has unveiled unprecedented details of various biological processes, such as tumor evolution (4–6), embryonic development (7), and neural somatic mosaicism (8). Single-cell whole-genome amplification (WGA) is required to generate enough replicates of genomic DNA for library preparation in conjunction with current sequencing protocols. Single-cell WGA has been increasingly used in cutting-edge clinical diagnostic applications such as molecular subtyping of single tumor cells (4, 9) and preimplantation genetic screening of in vitro fertilized embryos (10).

An ideal single-cell WGA method should have high uniformity and accuracy across the whole genome. WGA uniformity is critical for copy number variation (CNV) detection, whereas WGA accuracy is essential for avoiding single-nucleotide variation (SNV) detection errors, either false positives or false negatives. The false positives arise from misincorporation of wrong bases in the first few cycles of WGA. In a diploid human cell, the false negatives primarily arise from allelic dropout (ADO), i.e., heterozygous mutations are mistaken for homozygous ones because of the lack of amplification in one of the two alleles (11).

Existing WGA chemistry includes degenerate oligonucleotide-primed PCR (DOP-PCR) (12), multiple displacement amplification (MDA) (13–17), and multiple annealing and looping-based amplification cycles (MALBAC) (4, 18, 19), which have successively achieved genome analysis at the single-cell level. DOP-PCR is based on PCR amplification of the fragments flanked by universal priming sites, and provides high accuracy for detecting CNVs in single cells but has low coverage and high false-positive and false-negative rates for calling SNVs (5). MDA has a much improved coverage but tends to have lower precision/sensitivity in CNV determination due to its variation of the amplification gain along the genome, which is not reproducible from cell to cell (20). By virtue of quasilinear amplification, MALBAC suppresses the random bias of amplification and exhibits reduced ADO rates, yielding low false negatives for SNV detection (2, 11, 18, 19). Notwithstanding its drawbacks, MDA still offers comparable or higher genome coverage than MALBAC, at least for single diploid cells, possibly taking advantage of the randomness (2). In fact, even higher coverage has been obtained for cells with aneuploidy, such as dividing cells (21) and cancer cells (22). MDA’s main advantage is its lower false-positive rate for SNV detection, on account of the use of Phi29, a highly processive polymerase with high fidelity.

Microfluidic devices have been developed for single-cell WGA (16, 20, 23, 24), allowing avoidance of contamination and high-throughput analyses of multiple single cells in parallel. The small total reaction volumes (microliters to nanoliters or picoliters) of microfluidic devices not only improve the efficiency of reactions but also allow significant cost reduction for the enzymes and reagents used. It was reported that the nanoliter volume of a microfluidic device improved uniformity of amplification compared with microliter devices in the WGA of single bacterial cells (20).

Here, we report a method, emulsion whole-genome amplification (eWGA), that uses the small volume of aqueous droplets in oil to improve the WGA chemistry for uniform amplification of a single cell’s genome. By distributing single-cell genomic DNA fragments into a large number (10⁵) of picoliter droplets, the few DNA fragments in each droplet are allowed to reach saturation of DNA amplification. After merging the droplets by demulsification, the differences in amplification gain among the DNA fragments are significantly minimized.

Although this approach can be used with any WGA chemistry, we take MDA as an example and greatly reduce the random bias of amplification by separating the reactions into a large number of emulsion droplets. We carried out detailed comparisons with MDA, MALBAC, and DOP-PCR performed in tube, using single cells from normal diploid human cells and a monoclonal human cancer cell line with inherited CNVs. Our results indicate that eWGA not only offers higher coverage but also enables simultaneous detection of SNVs and CNVs with higher accuracy and finer resolution, outperforming the prevailing single-cell amplification methods in many aspects.
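A small numerical sketch illustrates why saturation in droplets evens out amplification. The parameters below (lognormal per-fragment gains in bulk, an arbitrary saturation ceiling, 10⁵ droplets) are illustrative assumptions rather than values from the paper; the point is that the coefficient of variation of per-fragment gain drops sharply once each droplet amplifies to saturation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_fragments, n_droplets = 10_000, 100_000

# Bulk MDA sketch: per-fragment amplification gain varies over orders of
# magnitude, so a few fast-amplifying fragments dominate the yield.
bulk_gain = rng.lognormal(mean=8.0, sigma=1.5, size=n_fragments)

# eWGA sketch: fragments are scattered at random into picoliter droplets,
# and each droplet can only amplify until its reagents are exhausted.
capacity = 3 * np.exp(8.0)                  # arbitrary saturation ceiling
droplet = rng.integers(0, n_droplets, size=n_fragments)
counts = np.bincount(droplet, minlength=n_droplets)
totals = np.bincount(droplet, weights=bulk_gain, minlength=n_droplets)

# Within a saturated droplet, continued amplification drives every fragment
# toward an equal share of the droplet's capacity, flattening gain differences.
ewga_gain = np.where(totals[droplet] > capacity,
                     capacity / counts[droplet], bulk_gain)

cv = lambda gain: gain.std() / gain.mean()  # coefficient of variation of gain
print(f"bulk CV = {cv(bulk_gain):.2f}, eWGA CV = {cv(ewga_gain):.2f}")
```

After demulsification the fragments are pooled, so the flattened per-fragment gains translate into more even genome coverage, which is what enables simultaneous CNV and SNV detection.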

19.
Objective: To study the diagnostic value of loop-mediated isothermal amplification (LAMP) for tuberculosis in different clinical samples. Methods: LAMP was used to detect IS1081, a gene specific to the Mycobacterium tuberculosis complex, in bronchoalveolar lavage fluid, local puncture fluid, ascites, and cerebrospinal fluid, and the results were compared with those of real-time quantitative PCR. Results: In bronchoalveolar lavage fluid and puncture fluid, where the bacterial load is relatively high, LAMP was less sensitive than real-time quantitative PCR (80% vs 92%). In ascites and cerebrospinal fluid, where the bacterial load is extremely low, LAMP was more sensitive (34.6% vs 15.4%) but slightly less specific (80% vs 93.3%). Across all samples combined, the sensitivity of LAMP did not differ significantly from that of real-time quantitative PCR (56.9% vs 52.9%). Conclusion: LAMP has high sensitivity and specificity and can be used for the diagnosis of tuberculosis.

20.
Applying a microscopic imaging system to parasitology laboratory teaching not only improves teachers' efficiency but also helps stimulate students' interest in learning and strengthen their hands-on skills, giving students a platform on which to learn from one another, identify problems, and progress together.
