When we interact with technology, we tend to assume that our tools are neutral, unbiased machines. How can an algorithm, a mathematical sequence, be discriminatory? Yet those who code our online spaces, writing the algorithms that mediate our digital lives, necessarily embed their worldviews and biases into those spaces, whether knowingly or unknowingly, constructing digital landscapes that treat people differently. These embedded biases can harm the public who engage with these technologies on a daily basis. Unfortunately, it is hard to see how power structures and biases operate within our digital spaces. These mechanisms covertly modulate and record our behaviors and experiences, and there are barriers that prevent the public from organizing to respond to the potentially harmful and discriminatory outcomes of algorithmic mechanisms. This blog will look at some cases of algorithmic discrimination and discuss the challenge of creating informed digital citizens capable of comprehending, and feeling invested in, this emerging problem.
Michael Brennan, writing for the Ford Foundation, asks a provocative question: “Can Computers be Racist?” Brennan explains how information systems can embody human flaws like bias, and confirms that algorithmic media such as Google can produce racially discriminatory outcomes. Embedded within Brennan’s article is a video of Harvard professor Latanya Sweeney describing a personal experience: a Google search of her name produced an ad implying that she had a history of arrest. The arrest never happened. A co-worker suggested that Google might be serving this result because her name is affiliated with black familial names, a suggestion she initially disbelieved. Sweeney followed up with a study of 120,000 internet ads across the United States and found that Google’s ads do indeed create an associative link between black-affiliated names and a history of arrest. Another example of technology treating users differently can be seen in a video revealing that HP computers’ facial recognition software could not detect black faces. Algorithmic control exists within a social context and in certain scenarios reproduces negative stereotypes, impacting vulnerable groups most heavily. These cases demonstrate the danger of viewing technological apparatuses as neutral: the guise of objectivity and the imperceptibility of algorithmic control amplify the problem by making it harder to assign responsibility.
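To see how such an association can emerge without any explicit rule about race, consider a minimal sketch (hypothetical names and click logs, not Google’s actual system) of an ad server that optimizes only for click-through rate. If past users clicked arrest-related ads more often alongside certain names, the optimizer learns and amplifies that pattern:

```python
# Hypothetical click logs: (name searched, ad shown, was it clicked?).
# The bias lives in the historical data, not in any explicit rule.
click_log = [
    ("Latanya", "arrest_record", True),
    ("Latanya", "neutral", False),
    ("Latanya", "arrest_record", True),
    ("Kristen", "arrest_record", False),
    ("Kristen", "neutral", True),
    ("Kristen", "neutral", True),
]

def click_rates(log):
    """Estimate the click-through rate for each (name, ad) pair."""
    shown, clicked = {}, {}
    for name, ad, click in log:
        key = (name, ad)
        shown[key] = shown.get(key, 0) + 1
        clicked[key] = clicked.get(key, 0) + int(click)
    return {k: clicked[k] / shown[k] for k in shown}

def choose_ad(name, rates, ads=("arrest_record", "neutral")):
    """Pick whichever ad has the highest estimated CTR for this name."""
    return max(ads, key=lambda ad: rates.get((name, ad), 0.0))

rates = click_rates(click_log)
print(choose_ad("Latanya", rates))  # → arrest_record: the optimizer reproduces the bias
print(choose_ad("Kristen", rates))  # → neutral
```

Nothing in the code mentions race; the discriminatory outcome is inherited from the behavior the system was trained to maximize, which is why responsibility is so hard to assign.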
Physical and virtual space are becoming increasingly indistinct; negative associative links produced by algorithmic media therefore affect individuals’ real lives. We are also seeing an increase in web-connected objects, known as the internet of things (IoT). In her essay “(Un)Ethical Use of Smart Meter Data,” Winter weighs the benefits and negative implications of one manifestation of the IoT, smart grids: the next generation of electrical power grids intended to upgrade and replace aging infrastructure.** Smart grids provide real-time feedback so that customers can make more informed decisions about their energy consumption. To be prudent, however, we must adequately understand what data is being collected about IoT users, and how this might become a social issue, before we become reliant on such devices. For example, will this data be used to create user profiles? Will certain users be given preferential treatment based on their consumption habits? The connectivity of our appliances further breaks down the sense of privacy and security previously associated with the home. Winter asks, “how can we reap the many benefits of technologies like smart grids and smart meters without risking the loss of personal privacy, loss of jobs or housing or government intrusion into one’s home life.”** It is unclear whether the benefits of connectivity can be had without accepting serious consequences, and users need a clearer understanding of what they are giving up. Opening up alternative streams of information around privacy, algorithmic control, and the devices that use it can prevent the mystifying force of advertising from being the dominant voice. A first step in resisting the serious implications of algorithmic control is education: the public must become more aware of how algorithms work.
Algorithmically controlled media is characterized by the personalization of information flow: Facebook’s newsfeed, for example, creates a filter bubble. Personalized media consumption may challenge our ability to feel a sense of belonging to a larger social body with common political concerns. Gilles Deleuze, in his prescient 1992 essay “Postscript on the Societies of Control,” anticipates this social breakdown, going so far as to claim that social bodies can be divided still further, even threatening our sense of being individuals: “Individuals have become ‘dividuals,’ and masses, samples, data, markets or ‘banks.’” Our lives are being compressed into masses of data points that represent and describe who we are in fragmented pieces.
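The filter-bubble dynamic can be sketched in a few lines. This is a toy model with invented topic tags, not Facebook’s actual ranking system: a feed that ranks posts by the user’s past clicks narrows onto a single topic within a few rounds of engagement:

```python
from collections import Counter

# Hypothetical posts tagged by topic; a real feed scores many more signals.
posts = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_right"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "politics_left"},
    {"id": 5, "topic": "sports"},
]

def rank_feed(posts, click_history):
    """Order posts by how often the user clicked that topic before."""
    affinity = Counter(click_history)
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)

history = []
for _ in range(3):
    feed = rank_feed(posts, history)
    top = feed[0]                 # the user engages with whatever ranks first
    history.append(top["topic"])

# After a few rounds the feed converges on one topic: a filter bubble.
print(history)
```

The feedback loop is the point: each click reshapes the ranking, so the “dividual” sees an ever-narrower slice of the social body while believing they see the whole.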
Traditional media sources associated with print culture, such as maps, newspapers, and censuses, have helped create a sense of social cohesion and encouraged participation. For example, “punctual media” like the newspaper contribute to the construction of a sense of national identity by allowing individuals to imagine they exist in a common time with others.* “The date at the top of the newspaper, the single most important emblem on it, provides the essential connection - the steady onward clocking of homogeneous empty time.”* The “continuous temporality” of algorithmic media such as Facebook dissolves this collective sense of time, undermining our ability to form collective responses to events in our environment. Continuous temporality means that the flow of information is continuous, indistinct, and global. New ways of building publics must therefore be imagined and enacted.
How can the public detect and respond to the consequences of algorithmic control? Part of the problem is that the most popular digital social spaces are owned and operated by massive corporations such as Facebook and Twitter. The algorithmic code for these spaces is proprietary: trade secrets that are not exposed to public debate or scrutinized for human rights abuses. We are all affected by algorithms whether or not we are aware of it, and there is a lack of accountability for the discriminatory outcomes of algorithmically controlled spaces. Algorithmic environments threaten our ability to feel a sense of boundaries, or “aggregations,” whether geographical, temporal, or social, and make the goal of social cohesion more slippery. According to McKelvey, algorithmic media creates a social condition in which the body public is “constantly being dissected and re-assembled.” *
In a recent video published on the website “Free Assange,” Noam Chomsky states: “The architects of power in the United States must create a force that can be felt but not seen. Power remains strong when it remains in the dark. Exposed to the sunlight, it begins to evaporate.” What steps can be taken to unmask the power of algorithms and create a tangible collective awareness of this issue? Surely the effort will need to be multifaceted, including extensive collaboration between artists and technical experts who can bring algorithms into the public imagination and translate them into expressive forms that are dynamic, poetic, and affective. All kinds of storytelling, whether visual art, writing, video games, or virtual reality, can be created to unveil the elements that cloak algorithmic control in obscurity. To address these issues, we need to mobilize knowledge of how certain algorithms affect people negatively. How do we get this conversation started? How can we work together to prevent algorithms from becoming a naturalized part of our digitized environment?
*McKelvey, F. (2014). Algorithmic Media Need Democratic Methods: Why Publics Matter. Canadian Journal of Communication, 39(4), 597-613.
**Winter, J. S. (2014). (Un)Ethical Use of Smart Meter Data. In Data and Discrimination (pp. 37-42). Open Technology Institute.