What’s in a Category: Definitions of Authenticity, Transparency, and the Social-Bot
by Nicholas Lindsay Lewis
After Surveillance Capitalism, Shoshana Zuboff’s 2018 magnum opus on data and the platform economy, it is safe to establish two premises: first, that identifying users is how social media platforms monetize their operations; and second, that the medium of social media has encouraged users to communicate and interact with a wider range of digital actors for advice, direction, and self-affirmation. Therein lies both the inherent value and the danger of social media conglomerates and their monitoring of online behavior.
In our daily desire to socialize and stay up to date, we leave behind digital traces. Tracing these identifiable bits of information back to a single device, and to a specific user profile with a particular combination of attributes, has become the lucrative core of web platform development, used not only to sell us products but to peddle influence as well. The markets have created indexes of meta-tags used to cluster individuals into groups and categories.
The Digital Privacy Act, as of 2018, requires domestic and foreign companies doing business in Canada to implement better practices for managing “personal information”, data revealing “identifiable individuals” (PIPEDA, 2015, section 2), or face financial penalties for damages related to breaches. However, relatively little regulation exists outside of libel and harassment case law (McKelvey & Dubois, 2017) for the misuse of these personally identifiable traces to manipulate individuals’ capacity for informed, critical decision-making.
There is no ethical framework governing how data brokers, ad agencies, and platform developers sell control over how we choose to consume products, or ideology. In many ways they have chosen for us by repackaging the coded categories we unwittingly occupy. Getting social media developers and data brokerages to disclose their full range of categorical listings has proven to be a legally exhaustive pursuit, earning them praise for transparency, and the public only small wins, when they release a fraction of their lists.
Often, these companies maintain that their categories are proprietary and essential to their business funding models. Much like data brokers, social networking sites are not interested in being transparent about the full range of categories they use to cluster individuals, for fear of raising red flags with regulators and activists (Crain, 2017). They argue that this surveillance allows them to minimize the gaming of the social media system by disruptive actors, and that through segmentation and differentiation they can better protect the authentic user from the inauthentic.
Yet recent congressional testimony has shown that Facebook is unwilling to fact-check political advertisements (Wood, 2019). With transparency around these coded categories, researchers may be able to understand how the categories are then operationalized against us in the form of social bots.
Bot-masters deploy fleets of social bots that emulate users and become instruments of messaging and advertising (Woolley & Howard, 2016). Companies have long known that their reach is only as strong as their brand reputation. In the age of reviews, comments, likes, and linking through networked platforms, our collective voices have increased in scale. User evaluations and inputs garner traction and help promote various private and public interests. These interactions have a profound effect on how individuals spend and invest time in any given activity, service, or product. The most valuable promotion these companies can rely on is user interactivity, whether through ‘authentic’ or ‘fake’ users.
At one point, bots were messaging games that you could download and chat with via MSN or Skype, largely unsophisticated and amusing. Now bots operate to fill the digital space and to prompt and influence user interaction, to the financial gain of both the platform and the advertisers. More nefariously, some bots operate to disrupt and dissuade collective movements.
This is not to assume that all negative discourse is generated by fake profiles. Trolls, unlike bots, operate as lone actors and may be legitimate users who seek fulfillment in provoking reaction. They may be misleading other users for monetary reward, or they may have a grudge to settle by being purposefully combative or disruptive.
However, the coordinated inauthentic activity that occurred during the April 2019 Alberta election is a sign of something else: the management of public sentiment (Weber, 2019). These coordinated efforts, combining human agents and digital sock puppets, have become a new tool in the arsenal of marketers, political campaigners, and media conglomerates.
Like most oppressive forces that attempt to sway or limit our ability to discern truth from falsehood, or reality from fiction, these actors seek to control and marginalize us in order to better manage us. Through education and organization, dissenters have peacefully and legally fought back against attempts to pacify our concerns and to muddy or manipulate our perceptions. How we educate future users of social media about the platforms’ definitions of “authentic” behavior may be key to how we develop a critical media lens and navigate the activities of social bots with more confidence. More important still, this quest to understand authenticity requires these platforms to be transparent about how they categorize “behavior” and to heed intervention.
In the age of targeted advertising, we have to begin searching for our own ways to push back against the bots and their masters, because social media sites like Facebook would rather attempt this on their own, leaving real users in the dark.
References
Crain, Matthew, “The limits of transparency: Data brokers and commodification” (2017). CUNY Academic Works. https://academicworks.cuny.edu/qc_pubs/169.
McKelvey, Fenwick, & Elizabeth Dubois. “Computational Propaganda in Canada: The Use of Political Bots.” Samuel Woolley and Philip N. Howard, Eds. Working Paper 2017.6. Oxford, UK: Project on Computational Propaganda, 2017. https://blogs.oii.ox.ac.uk/politicalbots/wp-content/uploads/sites/89/2017/06/Comprop-Canada.pdf.
Weber, Bob. “Federal Probe Finds ‘Co-Ordinated’ Social Media Bots In Alberta Election”. Msn.Com, 2019, https://www.msn.com/en-ca/news/politics/federal-probe-finds-co-ordinated-social-media-bots-in-alberta-election/ar-AAGUWfW. Accessed 31 Oct 2019.
Woolley, Samuel, & Philip N. Howard. “Automation, Algorithms, and Politics| Political Communication, Computational Propaganda, and Autonomous Agents — Introduction.” International Journal of Communication [Online], 10 (2016): 9. Web. 4 Oct. 2019 https://ijoc.org/index.php/ijoc/article/view/6298.
Wood, Charlie. “Alexandria Ocasio-Cortez And Elizabeth Warren Got Behind The Facebook Employees Slamming Mark Zuckerberg For Allowing Lies In Political Ads”. Business Insider, 2019, https://www.businessinsider.com/aoc-and-warren-back-facebook-employees-over-political-ads-criticism-2019-10.
Featured Image: chatbot from mohamed_hassan used under Pixabay License.