Algorithms and Your Data

Video credit: The Coded Gaze: Unmasking Algorithmic Bias – posted by Joy Buolamwini on YouTube



When you hear terms like “algorithm,” “machine learning,” and “artificial intelligence (AI),” they might seem intimidating and complicated. And while the mathematics behind them can be rather complex, it is easiest to think of an algorithm as “the set of rules a machine (and especially a computer) follows to achieve a particular goal” (Merriam-Webster). Artificial intelligence is the result of machines applying such rules.
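To make that definition concrete, here is a deliberately tiny, hypothetical “algorithm” written in Python: a single explicit rule a computer could follow to decide which post to show first. Real feed algorithms combine enormous numbers of such rules; this sketch only illustrates the idea of rules producing a result.

```python
# A toy "algorithm": one explicit rule a computer follows to reach a goal.
# This hypothetical example picks which post to show first by preferring
# the post with the most likes. Real feed algorithms are far more complex.

def pick_post_to_show(posts):
    """Return the post with the highest like count."""
    return max(posts, key=lambda post: post["likes"])

posts = [
    {"title": "Cat video",    "likes": 120},
    {"title": "News article", "likes": 45},
]

print(pick_post_to_show(posts)["title"])  # Cat video
```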

Algorithms exist all around us, on the internet and in the physical world alike. They have a huge variety of uses, from gathering your data to populate your social media feed with posts you are likely to click on, to physical-world tasks such as the facial recognition software that unlocks your phone.

While algorithms do make the world around us faster and more convenient, it’s important to remember that they – like all technology – are not just neutral sets of numbers working away in the background with no agenda. Algorithms might be rules, but they are rules created by people, each of whom has their own biases that get written into the programming that they create. These biases are not always intentional, but they can have a huge effect on the type of information that we access. And when algorithms are used to make complicated, human-based decisions such as job suitability or prison sentencing, the biases inherent in algorithmic programming can have the serious result of perpetuating existing inequalities.

Popping the Filter Bubble

One important use of algorithms online is to take your data (including demographic information like your age, gender, sexual orientation, and more) and use it to place advertisements on your feed that you are more likely to click on. However, did you know that this same tactic is used for your search results on websites like Google and YouTube too? Depending on the data that Google gathers about you, the links that you see in a Google search result will be different.

Remember, while Google is a search engine, it is also a company that wants to make as much profit as possible. The more you interact with the search results and the ads around them, the more money Google and its affiliate websites make. It makes sense from a profit standpoint to customize search results using algorithms fed by your data.
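As a hedged illustration of that idea, the sketch below re-ranks the same two search results differently for two users based on their click histories. The scoring rule, field names, and numbers are all invented for this example; real search ranking systems are proprietary and vastly more sophisticated.

```python
# Hypothetical sketch of search personalization: the same results re-ranked
# for two users based on their recorded click history. The scoring rule and
# all data here are invented for illustration.

def personalize(results, click_history):
    """Sort results, boosting topics the user has clicked on before."""
    def score(result):
        boost = 2.0 if result["topic"] in click_history else 0.0
        return result["relevance"] + boost
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Climate report", "topic": "science",       "relevance": 1.0},
    {"title": "Celebrity news", "topic": "entertainment", "relevance": 0.9},
]

# Two users, the same query, different histories, different "top" result.
print(personalize(results, {"science"})[0]["title"])        # Climate report
print(personalize(results, {"entertainment"})[0]["title"])  # Celebrity news
```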

However, customizing search results in this way means that you are likely to see only things online that you like and agree with. This makes it all too easy for objectivity to be lost, especially since people tend to use Google uncritically as an information source without thinking very hard about its profit-driven nature. This type of personalization can lead to a phenomenon called “filter bubbles”: “spheres of algorithmically imposed ignorance that mean we don’t know how the content we’re seeing might be biased to please us and protect us from information that challenges our views.” Some researchers believe that the combination of uncritical acceptance of algorithmically influenced search results as fact and a lack of transparency from companies such as Google and Facebook about their practices contributes to issues such as increasing partisanship and the spread of misinformation.

Algorithms and Bias: Facial Recognition and Beyond

Online filter bubbles are not the only places where the dangers of uncritical acceptance of algorithmically produced results can be seen. Algorithms are also commonly used in facial recognition software, which can be used for many tasks, from unlocking your smartphone to police surveillance. Joy Buolamwini, founder of the Algorithmic Justice League, discovered while working on a project as an MIT student that the facial recognition software she was using could recognize the faces of white people with high accuracy but was shockingly bad at registering the presence of Black faces. As she discovered, many facial recognition algorithms are trained on data sets made up mostly of white male faces, making them bad at identifying the faces of BIPOC people and women, with Black women recognized accurately at the lowest rates. Given that facial recognition technologies are used in surveillance by police forces, including the RCMP, this lack of accuracy has the potential to lead to serious harms such as false identification and unjust detainment.
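Audits like Buolamwini’s surface this kind of bias by measuring a model’s accuracy separately for each demographic group rather than only overall. The sketch below shows the basic arithmetic of such an audit; the group labels, predictions, and the gap they produce are entirely invented for illustration.

```python
# Illustrative only: measure a face-detection model's accuracy per group,
# in the spirit of an algorithmic audit. The groups, predictions, and
# numbers below are all invented.

def accuracy_by_group(records):
    """Return the fraction of correct predictions for each group."""
    totals, correct = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (r["predicted"] == r["actual"])
    return {g: correct[g] / totals[g] for g in totals}

audit = [
    {"group": "group A", "predicted": "face",    "actual": "face"},
    {"group": "group A", "predicted": "face",    "actual": "face"},
    {"group": "group A", "predicted": "no face", "actual": "face"},
    {"group": "group B", "predicted": "no face", "actual": "face"},
    {"group": "group B", "predicted": "no face", "actual": "face"},
    {"group": "group B", "predicted": "face",    "actual": "face"},
]

# An overall accuracy of 50% would hide the gap between the two groups.
print(accuracy_by_group(audit))
```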

Algorithmically driven technologies can also be used for complex, human-centered tasks such as recruiting job candidates and risk assessment tools that attempt to determine a person’s likelihood to criminally reoffend. Given what we already know about algorithms, it will likely not surprise you that in both cases the algorithms produced biased results. A recent study found that when algorithms were used to target ads for job recruitment, “broadly targeted ads on Facebook for supermarket cashier positions were shown to an audience of 85% women, while jobs with taxi companies went to an audience that was approximately 75% Black,” exacerbating existing stereotypes and creating knowledge barriers to entry into different types of work. In the criminal justice case, studies found that software used in several U.S. states to determine the likelihood of criminal re-offense was “particularly likely to falsely flag Black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.” While it’s tempting to think of algorithms as neutral technologies that remove human errors like bias from complex decisions such as these, algorithms have actually been shown to perpetuate existing biases.
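The risk-assessment finding rests on comparing false positive rates across groups: of the people who did not go on to reoffend, what fraction did the software flag as high risk? The sketch below computes that rate on invented records; the data, group labels, and the two-to-one disparity they show are illustrative only.

```python
# Hedged sketch of a fairness audit: compare false positive rates
# (people flagged "high risk" who did NOT go on to reoffend) across groups.
# All records below are invented for illustration.

def false_positive_rate(records, group):
    """Among people in `group` who did not reoffend, the share flagged high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(flagged) / len(non_reoffenders)

records = [
    # Group A: 4 people who did not reoffend, 2 wrongly flagged.
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    # Group B: 4 people who did not reoffend, 1 wrongly flagged.
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
]

# Group A is wrongly flagged at twice the rate of group B.
print(false_positive_rate(records, "A"))  # 0.5
print(false_positive_rate(records, "B"))  # 0.25
```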


Think before you ink

Some legal challenges to unregulated algorithm use are now starting to come forward. In Canada in February of 2021, the Office of the Privacy Commissioner ruled that Clearview AI’s facial recognition technology, the same technology used by the RCMP, violated Canadian citizens’ privacy rights. While there are no specific regulations on how facial recognition technologies can be used by Canadian law enforcement, greater regulation is likely forthcoming. Legislation calling for transparency about filter bubbles was introduced in the U.S. Congress in 2019, but the bill did not receive enough support to move forward.

Clearly, a more nuanced and critical conversation around algorithms and their uses is needed in the wider public. Fortunately, these conversations are beginning to happen in both Canada and in other parts of the world. Documentaries like Coded Bias and The Social Dilemma are raising awareness of the biases that can be exacerbated by facial recognition technologies and social media filter bubbles. While algorithmic bias may be a daunting challenge to overcome, engaging with these issues, spreading awareness, and calling for political change by contacting your elected representatives and voting are vitally important ways in which you can actively encourage social and legal change on these issues.


Algorithms are an essential component of many online services, and yet they have the potential to be biased in ways that are extremely harmful to marginalized populations. What do you think can be done to combat algorithmic bias?
