Algorithms & The Death of Democratic Discourse
by Christian Koch
Our entire digital history has been tracked, and it is still being tracked. Platforms like Google and Facebook collect data on the links we click, the time we spend on a particular page, whether or not we share a piece of content, and even our most seemingly inconsequential choices, all to build a picture of each user’s propensities: a user identity. That personal data is then used to populate our social media and search feeds with content that we are statistically likely to click on. [8]
The reason for aggregating this historical data is to construct algorithms that can process our choices with the “[aim] to predict and modify human behavior as a means to produce revenue and market control”. [16] These algorithms sift through the immense volume of content available online and show users what they are most likely to click on. What we perceive is a streamlined, personalized internet experience. However, this process means that anything assessed as less relevant to a user’s interests is shuffled to the bottom. [8] Ergo, exposure to ideological diversity is limited by user interests, and the potential for democratic discourse is undermined.
This curated content is designed to drive engagement, belief, and action while increasing profits for the publisher. The algorithm responds to the user’s behavior by supplying similar content, which reinforces their engagement with the ideas it contains. This reinforcement becomes a feedback loop: it creates an environment wherein “people share stories based on affective or emotional appeal more than factual accuracy, with the goal of supporting their pre-existing beliefs and signaling their identity to like-minded others”. [9] These feedback loops naturally exclude voices that differ from the group identity, creating fertile ground for misinformation.
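To make this feedback loop concrete, here is a deliberately simplified sketch in Python. It is a toy model, not any platform’s actual code; names like `rank_feed` and the simulated user’s click behavior are invented for illustration. The ranker orders candidate posts by how often the user has clicked similar content before, and because the simulated user mostly clicks what already matches their interests, each click narrows the next feed further.

```python
# Toy illustration of an engagement-maximizing feed ranker (hypothetical,
# not any real platform's system) and the feedback loop it can produce.
import random
from collections import Counter

TOPICS = ["politics-left", "politics-right", "sports", "science", "celebrity"]

def rank_feed(click_history, candidate_posts):
    """Order candidates by how often the user has clicked that topic before."""
    topic_counts = Counter(post["topic"] for post in click_history)
    total = sum(topic_counts.values()) or 1

    def predicted_engagement(post):
        # Stand-in for a real engagement-prediction model.
        return topic_counts[post["topic"]] / total

    return sorted(candidate_posts, key=predicted_engagement, reverse=True)

def simulate(days=30, feed_size=5):
    # The simulated user starts with a single clicked political post...
    click_history = [{"topic": "politics-left"}]
    for _ in range(days):
        candidates = [{"topic": random.choice(TOPICS)} for _ in range(50)]
        feed = rank_feed(click_history, candidates)[:feed_size]
        for post in feed:
            # ...and mostly clicks items matching their existing interests,
            # so each click makes the next day's ranking narrower still.
            if post["topic"] == "politics-left" or random.random() < 0.1:
                click_history.append(post)
    return Counter(post["topic"] for post in click_history)

if __name__ == "__main__":
    print(simulate())
```

Run repeatedly, the click history typically ends up dominated by a single topic: a minimal version of the loop described above.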
Consider the following: our click habits are tracked whenever we use these platforms’ services, and their proprietary algorithms can parse that data into advertising strategies. Combine this with users’ dependence on these platforms for news acquisition [14], and we begin to see the potential for stunted information exchange within feedback loops.
The danger inherent in a system that pre-determines our route to information is made worse by the indifference these platforms show, in their pursuit of profit growth, toward the consequences their algorithms have on discourse. [16] The ability to build environments around users with particular beliefs and habits is certainly a lucrative way to advertise. But how do opposing views come together to discuss the pressing issues of the moment when these environments have separated users from those who do not believe or behave in the same way? Each group experiences its media feed unaware that portions of it may have been curated to influence how its members participate online, in society, or even in an election. [3] And that may well be the most sinister effect of social media and search algorithms: their ability to influence political participation.
One example of this kind of manipulation arose in the U.K. during the referendum on leaving the EU. Companies such as Cambridge Analytica and AggregateIQ purchased harvested personal data and built algorithms that drafted individualized messages loaded with personalized triggers, intended to influence the way citizens would vote. [3] These companies had “found a way of targeting people based on behavioral input” [3] and were manipulating people to act in ways that suited the companies’ interests.
What Can Be Done?
There are two ways for users to protect themselves against manipulation of this kind. The first is increased awareness: awareness that our information is being captured and an understanding of how that data can be used. This strategy can help mitigate the impact of these algorithms by illuminating the mechanisms underpinning the curation process, especially since “the typical user has little or no knowledge of… business operations” within these internet companies and is likely unaware of the “range of personal data that [users] contribute to Google’s servers”. [16] But it doesn’t stop with awareness; it’s important for users to know that they aren’t locked into the curated feedback loops in which they find themselves. [4] Changes to search, click, and buying habits will shift content to match those new propensities.
The second tool in a user’s arsenal is content analysis. Increased awareness offers a lens through which users can analyze the content that populates their feeds. This is especially important considering that “51% of people with online access use social media as a news source” and that Facebook has been found to be the platform most used for that purpose. [14] Groups that rely on social media for news are less likely to use traditional sources such as newspapers and televised news. [5] For this reason, it is imperative that users apply this awareness to assess the choices they make online and how those choices dictate the content visible to them. Furthermore, that content must be analyzed because it has the potential to reinforce already-held beliefs while downplaying content that would otherwise challenge the user; without such scrutiny, users are hindered in their ability to participate in democratic discourse with individuals outside of their feedback loop.
So, the next time you hover over the search bar or scroll through your social media feed, consider the following:
- What kinds of content are being populated for me?
- Does this content reflect a specific view or value judgement?
- Do these views or values align with my own?
- Are other views being represented within this content?
By asking these questions, users can begin to identify the loops they are undoubtedly within. That identification is empowering: once a loop is recognized, the user can make choices that broaden the content visible to them.
References
[1] Blue, Charles (2021). Debunking Misinformation and Confronting Conspiracy Theories. Association for Psychological Science. https://www.psychologicalscience.org/observer/debunking-misinformation
[2] Caplan, R., Hanson, L., & Donovan, J. (2018). Dead reckoning: Navigating content moderation after fake news. Data & Society Research Institute. Retrieved from https://datasociety.net/pubs/oh/DataAndSociety_Dead_Reckoning_2018.pdf.
[3] Cadwalladr, Carole (2017). The Great British Brexit Robbery: How Our Democracy was Hijacked. The Guardian. https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy
[4] Dutton, William H. (2017). Fake news, echo chambers and filter bubbles: Underresearched and overhyped. The Conversation. https://theconversation.com/fake-news-echo-chambers-and-filter-bubbles-underresearched-and-overhyped-76688
[5] Gottfried, J., & Barthel, M. (2015). How millennials’ political news habits differ from those of Gen X and Baby Boomers. Pew Research Center. Retrieved June 26, 2019, from: https://www.pewresearch.org/fact-tank/2015/06/01/political-news-habits-by-generation/
[6] Lazer, D., Baum, M., Grinberg, N., Friedland, L., Joseph, K., & Mattsson, C. (2017). Combating fake news: An agenda for research and action. Shorenstein Center on Media, Politics and Public Policy. Retrieved June 26, 2019, from: https://shorensteincenter.org/combating-fake-news-agenda-for-research.
[7] Legg, H., & Kerwin, J. (2018). The fight against disinformation in the U.S.: A landscape analysis. Shorenstein Center on Media, Politics, and Public Policy. Retrieved June 26, 2019, from: https://shorensteincenter.org/the-fight-against-disinformation-in-the-u-s-a-landscape-analysis/
[8] Luckerson, Victor (2015). Here’s How Facebook’s Newsfeed Actually Works. Time Magazine. https://time.com/collection-post/3950525/facebook-news-feed-algorithm/
[9] Marwick, A. (2018). Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technology Review, 2, 474–512. https://georgetownlawtechreview.org/why-do-people-share-fake-news-a-sociotechnical-model-of-media-effects/GLTR-07-2018/
[10] Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society Research Institute. Retrieved from https://datasociety.net/output/mediamanipulation-and-disinfo-online/
[11] Pew Research Center (2015). Millennials and Baby Boomers: A Generational Divide in Sources Relied on for Political News. https://www.journalism.org/2015/06/01/millennials-political-news/pj_15-06-01_millennialmedia02/
[12] Mikkelson, David (2016). We Have a Bad News Problem, Not a Fake News Problem. Snopes. https://www.snopes.com/news/2016/11/17/we-have-a-bad-news-problem-not-a-fake-news-problem/
[13] Rieh, S.Y. (2014). Credibility Assessment of Online Information in Context. Journal of Information Science Theory and Practice, 2(3), 6-17. https://doi.org/10.1633/JISTAP.2014.2.3.1
[14] Wakefield, Jane (2016). Social media ‘outstrips TV’ as news source for young people. BBC News. https://www.bbc.com/news/uk-36528256
[15] Wineburg, S., McGrew, S., Breakstone, J., & Ortega, T. (2016). Evaluating information: The cornerstone of civic online reasoning. Stanford Digital Repository. Retrieved June 26, 2019, from: http://purl.stanford.edu/fv751yt5934
[16] Zuboff, S. (2015). Big other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5
Written by Christian Koch
Edited by Rachael Bradshaw
Featured image: Hacker image by Darwin Laganzon via Pixabay License