“The Rise and Impact of Twitter Bots in Political Elections”
By Eve Cuthbertson
As the 2019 Canadian federal election comes to a close and the 2020 US election heats up, it’s important to discuss the impact of social media on our democracies, and specifically of Twitter bots. Computer scientists at the University of Southern California recently investigated the impact of Twitter bots in the 2016 US election and identified 30,000 likely bot accounts. A Pew Research Center study found that “bots accounted for 66% of tweeted links to sites focused on news and current events.” While automated tweets can simply be a mechanism for spreading news, they can also spread fake news and create the impression of massive support where none truly exists. This article introduces the concept of political bots and considers their impact on democracy.
What is a political bot?
Dubois and McKelvey (2019) define political bots as “automated online agents that are used to intervene in political discourse online” (p. 28). They are found primarily on Twitter and are becoming more sophisticated, making it increasingly difficult to tell whether a message comes from a human or a machine.
How do bots impact democracy?
Dubois and McKelvey (2019) argue that Canada’s democracy has suffered as a result of political bots, through a practice called astroturfing. The Guardian describes astroturfing as “the attempt to create an impression of widespread grassroots support for a policy, individual, or product, where little such support exists.” This is a threat to democracy because it may shift public opinion: people tend to favour an opinion that appears to have broad support. For example, Twitter bots sent messages showing support for Brexit just before the vote (Wright & Field, 2018). In the recent Canadian election, this concept could apply to strategic voting. If you are a Green Party supporter but believe voters in your riding will support either the Liberals or the Conservatives, you may vote Liberal, as they are closer to your ideals than the Conservatives. If that impression of a two-way race were amplified by bots rather than reflecting real support, astroturfing would have swayed your vote.
Further, bots can be used to spread misinformation that largely goes undetected until it is too late. Gorodnichenko, Pham, and Talavera (2018) found that “information about Brexit and the 2016 U.S. Presidential Election is disseminated and absorbed among Twitter users within 50-70 minutes” (p. 3). Humans also play a part in spreading misinformation, as they are likely to engage with tweets that support their opinions and to retweet without verifying accuracy.
What can be done?
Dubois and McKelvey (2019) point out that the platforms themselves (such as Twitter) have the most data and insight into which accounts are automated, and therefore have a duty to communicate this information to users, and remove accounts that spread misinformation. Michele Austin, head of government and public policy for Twitter Canada, was quoted in a CBC article stating, “We have a team dedicated to monitoring inauthentic and spam activity for the Canadian election and it’s something that we will be banning if we see it happen… We do take these issues very, very seriously. The public conversation is never more important than during an election.” (Rogers, 2019).
Other policy interventions they explore include banning bots from platforms, registering bots, or providing guidelines for how political parties should use bots. While banning bots might stop some spread of misinformation, some bots are perfectly harmless, or even helpful. For example, a group in Edmonton, Alberta created @ParityBOT, which detects abusive tweets directed at female political candidates and sends out positive, encouraging messages in response (Cuthbertson et al., 2019). To actively engage against bots, online services such as BotCheck.me, Botometer, and BotSentinel offer Twitter bot detection based on machine-learning analysis.
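Detection services like these train machine-learning classifiers on many account features. As a rough illustration of the kinds of signals involved, here is a minimal rule-based sketch in Python; the feature names and thresholds are illustrative assumptions, not the actual method of any of the services above.

```python
# Hypothetical rule-based bot scoring: counts simple account-level
# signals often discussed in bot-detection research. The thresholds
# below are illustrative assumptions, not any real service's settings.

def bot_signals(account):
    """Count how many bot-like signals an account exhibits (0-3)."""
    signals = 0
    # Extremely high posting volume is a classic automation signal.
    if account["tweets_per_day"] > 100:
        signals += 1
    # A default profile (no photo) is weakly suspicious.
    if not account["has_profile_photo"]:
        signals += 1
    # Following vastly more accounts than follow back suggests
    # mass-follow automation.
    followers = max(account["followers"], 1)  # avoid division by zero
    if account["following"] / followers > 20:
        signals += 1
    return signals

suspect = {"tweets_per_day": 250, "has_profile_photo": False,
           "followers": 12, "following": 4800}
print(bot_signals(suspect))  # → 3 (all three signals fire)
```

Real classifiers such as Botometer weigh far richer features (posting cadence, network structure, language use), but the principle is similar: combine many weak signals into a bot-likelihood score.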
While many interventions are possible, it is most important that voters become aware of Twitter bots, their potential influence, and ways to combat the influence. Determining whether a message comes from a human or machine is now another consideration for critically consuming information online.
Resources
Bienkov, A. (2012, February 8). Astroturfing: what is it and why does it matter? Retrieved October 27, 2019, from https://www.theguardian.com/commentisfree/2012/feb/08/what-is-astroturfing.
BotCheck. (n.d.). Retrieved October 27, 2019, from https://botcheck.me/.
Botometer. (n.d.). Retrieved October 27, 2019, from https://botometer.iuni.iu.edu/#!/
BotSentinel. (n.d.). Retrieved October 27, 2019, from https://botsentinel.com/about
Cuthbertson, L., Cuthbertson, E., Dawson, R., Gordon, A., Kearny, A., & Mathewson, K. (2019). Women, politics and Twitter: Using machine learning to change the discourse. Manuscript submitted for publication.
Dubois, E., & McKelvey, F. (2019). Political Bots: Disrupting Canada’s Democracy. Canadian Journal of Communication, 44(2). https://doi.org/10.22230/cjc.2019v44n2a3511
Gorodnichenko, Y., Pham, T., & Talavera, O. (2018). Social media, sentiment and public opinions: Evidence from #Brexit and #USElection (NBER Working Paper No. 24631). National Bureau of Economic Research.
Kulp, P. (2019, September 6). As 2020 Election Nears, Twitter Bots Have Only Gotten Better at Seeming Human. Retrieved October 27, 2019, from https://www.adweek.com/digital/as-2020-election-nears-twitter-bots-have-only-gotten-better-at-seeming-human/.
Wojcik, S. (2018, April 9). 5 things to know about bots on Twitter. Retrieved October 27, 2019, from https://www.pewresearch.org/fact-tank/2018/04/09/5-things-to-know-about-bots-on-twitter/.
Wright, M., & Field, M. (2018, October 17). Russian trolls sent thousands of pro-Leave messages on day of Brexit referendum, Twitter data reveals. Retrieved October 27, 2019, from https://www.telegraph.co.uk/technology/2018/10/17/russian-iranian-twitter-trolls-sent-10-million-tweets-fake-news/.
Featured Image: chatbot from mohamed_hassan used under Pixabay License.