Watch
Video credit: Created by Hina Ovais for the Digital Tattoo Project, UBC (2025), licensed under CC BY 4.0.
Download the video script here.
Download the worksheet to reclaim yourself.
Think
What does AI know about you or think it knows?
In this activity, you’ll create your own AI-shaped reflection. Drag each digital trace, from search history to shopping patterns, onto the body map where you think it belongs: your head, heart, hands, or feet. This isn’t a quiz. There are no right or wrong answers.
You’ve just created a data-shaped version of yourself. But how close is this digital self to who you really are?
Before moving on, consider:
- What does it reveal about your online identity?
- Which placements surprised you the most?
- What feels accurate? What seems incomplete or incorrect?
- Which part of your digital self would you most want to revise, and why?
Explore
Follow the Algorithm: Who’s Really in Control?
It starts with a click. You type in a prompt. Hit “Enter.” And in seconds, a clean block of text or a vivid image appears. It seems effortless. But a lot is happening beneath the surface.
Let’s unpack what’s really happening when AI tools respond.
Step 1: You Provide the Prompt
Sounds easy. But even your prompt is shaped by what the tool encourages you to say.
A 2024 peer-reviewed study that analyzed over 145,000 prompts found that when AI tools offered suggestions, users were more likely to follow them. Dropdowns, autocomplete, and example prompts can steer your thinking, limiting opportunities for creative exploration and for more complex prompts.
It’s worth asking: who’s nudging you, and how much freedom does that leave for your own creativity?
Step 2: The Model Does the Math
Your words are fed into an algorithm trained on massive datasets, which are collections of online text, images, and user behavior gathered at scale.
But those datasets reflect real-world patterns, including unfair ones.
A peer-reviewed study by researchers Buolamwini and Gebru (2018) found that commercial facial recognition systems misclassified darker-skinned women at error rates of up to 35%, compared with less than 1% for lighter-skinned men.
That’s how bias can enter: not through intention, but through what the model learns.
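The idea above can be made concrete with a deliberately simplified sketch. The groups, labels, and counts below are all hypothetical, and the “model” just memorizes the most common pattern per group; real systems are vastly more complex. But the mechanism is the same: when one group is underrepresented (or represented by lower-quality data), the model learns a worse pattern for that group, with no bad intent anywhere in the code.

```python
from collections import Counter

# Hypothetical toy training set: (group, pattern) pairs standing in for
# labeled images. Group "light" is heavily over-represented; group "dark"
# is scarce, and much of its data is low quality -- a cartoon version of
# the imbalance documented in facial-recognition datasets.
training_data = (
    [("light", "correct_features")] * 900   # plenty of clean examples
    + [("dark", "correct_features")] * 40   # few clean examples
    + [("dark", "noisy_features")] * 60     # ...outnumbered by noisy ones
)

# A trivial "model": for each group, learn only the most common pattern.
learned = {}
for group in ("light", "dark"):
    patterns = Counter(pattern for g, pattern in training_data if g == group)
    learned[group] = patterns.most_common(1)[0][0]

print(learned)
# The model ends up with the correct pattern for the majority group
# and the noisy one for the minority group -- learned bias, not intended bias.
```

No one wrote “treat these groups differently” here; the skew was already in the data, and the model simply absorbed it.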
Want to explore how this connects to generative AI? Check out the tutorial Generative AI and Bias.
Step 3: The Output Reflects the System
The output you receive isn’t retrieved from a database. Instead, it’s predicted, one piece at a time, from patterns the model learned in its training data.
That prediction doesn’t just reflect accuracy. It reflects choices about what sounds polite, smart, convincing, or safe. These choices are shaped by the designers, the data, and the rules baked into the system.
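A minimal sketch can show what “guessed from patterns” means. The bigram model below is the simplest possible version of next-word prediction: it counts which word followed which in a tiny made-up corpus, then “generates” by picking the most frequent follower. The corpus and behavior are toy assumptions; real language models use billions of words and far richer statistics, but the core move, predicting rather than looking up, is the same.

```python
from collections import Counter, defaultdict

# A tiny stand-in for a training corpus (real models train on billions of words).
corpus = (
    "the model predicts the next word . "
    "the model learns patterns from data . "
    "the output reflects the data the model has seen ."
).split()

# Count which word follows which: a "bigram" model, the simplest
# version of the next-word prediction that large language models do.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the most frequent next word seen in training -- a guess, not a lookup."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # the word that most often followed "the" in this corpus
```

Notice that the answer is entirely a product of what happened to be in the corpus. Change the training text and the “best” next word changes with it, which is exactly why the data and design choices behind a model shape everything it says.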
A 2025 peer-reviewed study published in Nature analyzed 77 large language models and found that they often favored “ingroup” identities (groups perceived as similar) and showed subtle bias against “outgroups” (those perceived as different), even with neutral prompts.
If you’re wondering what happens inside the model and how it generates the information it displays, explore the Anatomy of an AI Tool infographic.
Step 4: You React
Do you revise the output, check the facts you received, or accept and copy-paste with little thought?
A 2025 peer-reviewed study published in NEJM AI found that people often overtrust AI-generated medical advice, rating it as just as reliable as human doctors, even when it was inaccurate.
That’s ‘automation bias,’ the tendency to rely on AI simply because it sounds confident or is easy to use.
But not everyone leans on AI. Another 2024 study found that many people are ‘algorithm averse,’ revealing an ‘anti-AI bias’: they trust results less simply because an AI generated them.
So the key question here isn’t just if we trust AI but when, why, and how much.
Do you recognize your own patterns?
Take our interactive quizzes, linked below.
Now Try It Yourself
Do you want to know what is shaping the output and shaping you in the process?
Download the worksheet ‘Mirror Audit—AI and You’ to reclaim yourself.
Final Thought
The algorithm is invisible. But its fingerprints are all over your screen. Are you watching closely?
Reflect. Resist the nudge. Reclaim your authorship.
Links
Tutorials & Activities
- Anatomy of an AI Tool (Infographic)
- Generative AI and Bias (Tutorial)
- Your Privacy and AI Tools (Tutorial)
- Are You Offloading Too Much to AI? (Quiz)
- Using Generative AI in Academics (Tutorial)
Studies & Articles
- Social media algorithms determine what we see. But we can’t see them │ The Washington Post (2021)
- We engage with our phones every five minutes, new study shows │ London School of Economics (2020)
- Gender Shades │ Joy Buolamwini & Timnit Gebru (2018)
- Generative Language Models Exhibit Social Identity Biases │ Nature (2025)
- Public Perceptions of AI-Generated Medical Advice │ JMIR Medical Informatics (2024)
- Auditing Radicalization Pathways on YouTube │ FAT* Conference Proceedings (2020)
- People Overtrust AI-Generated Medical Advice Despite Low Accuracy │ NEJM AI (2025)
- The Role of Interface Design on Prompt-Mediated Creativity in Generative AI │ ACM Web Science Conference (2024)
Discuss
Each time you prompt an AI tool to “generate,” you step into a black box. Here, invisible systems guess, filter, and form the response you’ll get. Unfortunately, machines aren’t neutral. Neither are the algorithms they follow. They are built on predictions, design, and data.
- They reflect someone’s values. Are these your values too?
If AI is shaping your habits, your writing, and even your thinking…
- Who’s in charge? You, or the algorithm?
- What’s one habit you’d like to take back from the algorithm?
What do you think? Tell us in the comments below.