Bots are less central than verified accounts during contentious political events
Authors: Sandra González-Bailón, Manlio De Domenico
Affiliation: (a) Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA 19104; (b) Center for Information and Communication Technology, Fondazione Bruno Kessler, 38123 Trento, Italy
Abstract: Information manipulation is widespread in today’s media environment. Online networks have disrupted the gatekeeping role of traditional media by allowing various actors to influence the public agenda; they have also allowed automated accounts (or bots) to blend with human activity in the flow of information. Here, we assess the impact that bots had on the dissemination of content during two contentious political events that evolved in real time on social media. We focus on events of heightened political tension because they are particularly susceptible to information campaigns designed to mislead or exacerbate conflict. We compare the visibility of bots with human accounts, verified accounts, and mainstream news outlets. Our analyses combine millions of posts from a popular microblogging platform with web-tracking data collected from two different countries and timeframes. We employ tools from network science, natural language processing, and machine learning to analyze the diffusion structure, the content of the messages diffused, and the actors behind those messages as the political events unfolded. We show that verified accounts are significantly more visible than unverified bots in the coverage of the events but also that bots attract more attention than human accounts. Our findings highlight that social media and the web are very different news ecosystems in terms of prevalent news sources and that both humans and bots contribute to generating discrepancies in news visibility with their activity.
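To illustrate the kind of diffusion-structure comparison described in the abstract, the following is a minimal sketch (not the authors' actual pipeline) of how one might compare the retweet-network visibility of different account types. The file names (retweets.csv, account_types.csv), the column names, the account-type labels, and the use of in-degree as the visibility measure are all illustrative assumptions.

# Minimal sketch: compare visibility (in-degree in the retweet network) across
# account types. File names, column names, and labels are hypothetical.
import pandas as pd
import networkx as nx

# Hypothetical inputs: one row per retweet (retweeter -> original author),
# and a mapping from account id to type ('bot', 'human', 'verified', 'media').
edges = pd.read_csv("retweets.csv")         # columns: retweeter, author
types = pd.read_csv("account_types.csv")    # columns: account, type

# Build a directed retweet graph: an edge points from the retweeter to the
# author being retweeted, so in-degree counts how often an account is amplified.
G = nx.from_pandas_edgelist(edges, source="retweeter", target="author",
                            create_using=nx.DiGraph())

type_of = dict(zip(types["account"], types["type"]))
indegree = dict(G.in_degree())

# Average visibility per account type: a crude proxy for the centrality
# comparison discussed in the text.
visibility = (
    pd.DataFrame({"account": list(indegree), "in_degree": list(indegree.values())})
      .assign(type=lambda d: d["account"].map(type_of))
      .groupby("type")["in_degree"]
      .mean()
)
print(visibility.sort_values(ascending=False))

The choice of in-degree is only one of many possible centrality measures; any comparison of this kind would also need to control for the number of accounts of each type.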

Online networks have become an important channel for the distribution of news. Platforms like Twitter have created a public domain in which longstanding gatekeeping roles lose prominence and nontraditional media actors can also shape the agenda, increasing the public representation of voices that would otherwise be ignored (1, 2). The role of online networks is particularly crucial to launch mobilizations, gain traction in collective action efforts, and increase the visibility of political issues (3–8). However, online networks have also created an information ecosystem in which automated accounts (human- and software-assisted accounts, such as bots) can hijack communication streams for opportunistic reasons [e.g., to trigger collective attention (9, 10), gain status (11, 12), and monetize public attention (13)] or with malicious intent [e.g., to diffuse disinformation (14, 15) and seed discord (16)]. During contentious political events, such as demonstrations, strikes, or acts of civil disobedience, online networks carry benefits and risks: They can be tools for organization and awareness or tools for disinformation and conflict. However, it is unclear how human, bot, and media accounts interact in the coverage of those events, or whether social media activity increases the visibility of certain sources of information that are not so prominent elsewhere online. Here, we cast light on these information dynamics, and we measure the relevance of unverified bots in the coverage of protest activity, especially as they compare to public interest accounts.

Prior research has documented that a high fraction of active Twitter accounts are bots (17); that bots are responsible for much disinformation during election periods (18, 19); and that bots exacerbate political conflict by targeting social media users with inflammatory content (20). Past research has also looked at the role bots play in the diffusion of false information, showing that they amplify low-credibility content in the early stages of diffusion (21) but also that they do not discriminate between true and false information (i.e., bots accelerate the spread of both) and that, instead, human accounts are more likely to spread false news (22). Following this past work, this paper aims to determine whether bots distort the visibility of legitimate news accounts, as defined by their audience reach off-platform. Unlike prior work, we connect Twitter activity with audience data collected from the web to determine whether the social media platform changes the salience that legitimate news sources have elsewhere online and, if so, whether bots are responsible for that distortion. We combine web-tracking data with social media data to characterize the visibility of legitimate news sources and analyze how bots create differences in the news environment. This question is particularly relevant in the context of noninstitutional forms of political participation (like the protests we analyze here) because of their unpredictable and volatile nature.

The label "bot" often blurs the diversity that the category contains. It serves as shorthand for accounts that can be fully or partially automated, but accounts that exhibit bot-like behavior can have very different goals and levels of human involvement in their operation. Traditional news organizations, for instance, manage many of the accounts usually classified as bots, yet these accounts are designed to push legitimate news content. Other accounts with bot-like behavior belong to journalists and public figures, many of whom are verified by the platform to let users know that their accounts are authentic and of public interest. Past research does not shed much light on how different types of bots enable exposure to news from legitimate sources, or how the attention they attract compares to the reach of mainstream news, which also generates a large share of social media activity (23–25). More generally, previous work does not address how news visibility on social media relates to other online sources (most prominently, the web), especially in a comparative context that considers political settings other than the United States. Are bots effective in shifting the focus of attention as it emerges elsewhere online?

This paper addresses these questions by analyzing Twitter and web-tracking data in the context of two contentious political events. The first is the Gilets Jaunes (GJ) or Yellow Vests movement, which erupted in France at the end of 2018 to demand economic justice. The second is the Catalan referendum for independence from Spain, which took place on October 1, 2017 (1-O), as an act of civil disobedience. These two events were widely covered by mainstream media (nationally and internationally), but they also generated high volumes of social media activity, with Twitter first channeling the news that was coming out of street actions and confrontations with the police. According to journalistic accounts, Twitter helped fuel political feuds by enabling bots to exacerbate conflict (26, 27). Our analyses aim to compare the attention that unverified bot accounts received during these events of intense political mobilization with the visibility of verified and mainstream media accounts, contextualizing that activity within the larger online information environment. In particular, we want to determine whether there is a discrepancy in the reach of news sources across channels (i.e., social media and the web) and, if so, whether bot activity helps explain that discrepancy (for instance, by bots retweeting sources that are less visible on the web). Ultimately, our analyses aim to identify changes in the information environment to which people are exposed depending on the channel they use to access news, a process of particular relevance during fast-evolving political events of heightened social tension.
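To make the cross-channel comparison concrete, the following is a minimal sketch (not the paper's actual measurement) of one way to quantify the discrepancy between a news domain's visibility on Twitter and on the web. The input file domain_reach.csv, its columns, and the use of a Spearman rank correlation are illustrative assumptions.

# Minimal sketch: quantify how differently news domains rank on Twitter versus
# the web. File and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical input: one row per news domain with its share of Twitter
# (re)shares and its share of web visits from the tracking panel.
reach = pd.read_csv("domain_reach.csv")   # columns: domain, twitter_share, web_share

# Rank agreement between the two channels: a low correlation means the two
# channels surface different sources, i.e., a discrepancy in news visibility.
rho, pval = spearmanr(reach["twitter_share"], reach["web_share"])
print(f"Spearman rank correlation across channels: {rho:.2f} (p = {pval:.3f})")

# Domains over-represented on Twitter relative to the web: a simple way to flag
# the sources whose salience the platform (or bot activity) may inflate.
reach["rank_shift"] = (
    reach["web_share"].rank(ascending=False)
    - reach["twitter_share"].rank(ascending=False)
)
print(reach.sort_values("rank_shift", ascending=False).head(10))

A follow-up step, under the same assumptions, could split the Twitter shares by account type (bot, human, verified, media) to test whether bot retweets concentrate on the domains with the largest rank shifts.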
Keywords: social media; computational social science; online networks; information diffusion; political mobilization