Watch
Video credit: Created by Hina Ovais for the Digital Tattoo Project, UBC (2025), licensed under CC BY 4.0.
Download the video script here.
Think
Behind the Terms
What does AI know about you or think it knows?
AI tools in education are marketed using polished, powerful terms.
But what might these terms hide?
Click each term to uncover what’s behind the language and ask yourself:
How might these words obscure power or risk?
These words are powerful, not because they’re precise, but because they sound good.
Reflect:
- Which of these terms have you encountered before in school or tech marketing?
- What new questions might you now ask when you see them?
Explore
Pause. Now think.
You close your laptop. The video’s done. The logos, the contracts, the surveillance claims—it’s a lot to take in. You wonder: Is this really what’s behind the tools I use to learn, teach, or support others every day?
Take a moment to sit with that.
Now let’s walk through five moments of reflection, each grounded in real scholarship, not science fiction.
1. The Illusion of Help
Shoshana Zuboff (2019) explains that digital platforms aren’t simply offering services. They’re built to extract behavioral data, model your actions, and sell predictions about what you’ll do next.
AI tools may feel helpful or even empowering. But underneath the smooth interface may be systems that monitor you silently. When the product is “free,” your data is often the cost.
Ask yourself:
If your behavior is being harvested to train powerful systems—for education, yes, but also for commerce or surveillance—how free is your learning?
2. The Same Cloud for School and War
Your words are fed into algorithms trained on massive datasets: collections of online text, images, and user behavior gathered at scale.
But those datasets reflect real-world patterns, including unfair ones.
A peer-reviewed study by researchers Buolamwini and Gebru (2018) found that commercial facial recognition systems had error rates of up to 35% for darker-skinned women, compared with less than 1% for lighter-skinned men, even on standardized benchmark images.
That’s how bias can enter: not through intention, but through what the model learns.
Want to explore how this connects to generative AI? Check out the tutorial Generative AI and Bias.
3. Whose Knowledge? Whose Power?
Nick Couldry and Ulises Mejias (2019, 2021) show how the global expansion of data systems echoes historical colonial structures. This "data colonialism" appropriates everyday life: speech, clicks, and routines are quantified and commodified.
In this view, AI doesn’t simply process data neutrally. It encodes a worldview: whose voices matter, which languages are supported, and what kinds of knowledge are valued. These decisions shape the landscape of learning, invisibly but powerfully.
Ask yourself:
What does it mean when your knowledge journey is shaped by tools designed elsewhere, by people you’ve never met, for purposes you didn’t choose? What parts of your own knowledge or culture might be left out?
4. Bias in the System and in the Signal
Ruha Benjamin (2019) argues that algorithms and automated systems reflect and reinforce structural inequalities. The code doesn’t just sort data. It sorts people.
When bias is built into surveillance tools, some students may be flagged more often, not because of actual risk but because of how the system interprets language, appearance, or behavior.
Ask yourself:
Could the AI in your school misread you or someone else, based on identity, context, or culture? Do you revise the output, check the facts you received, or accept and copy-paste with little thought?
5. Ethics as Strategy or Substance?
Frank Pasquale (2015) and Seele & Schultz (2022) both caution that talk of AI “ethics” can become a smokescreen. Behind the glossy values statements, there may be very little actual human oversight.
When the rules of powerful systems are hidden, when decisions are automated but unaccountable, it becomes harder to challenge injustice. Ethics becomes branding, not governance.
Links
- Artificial Intelligence and Competition │ Competition Bureau Canada (2023)
- Atlas of AI │ Crawford (2021)
- From Greenwashing to Machinewashing │ Seele & Schultz (2022)
- Multistakeholder Perspectives on Military AI │ Sisson (2023)
- Race After Technology: Abolitionist Tools for the New Jim Code │ Benjamin (2019)
- Surveillance Capitalism and the Challenge of Collective Action │ Zuboff (2019)
- The Black Box Society │ Pasquale (2015)
- The Costs of Connection │ Couldry & Mejias (2019)
- The Decolonial Turn in Data and Technology Research │ Couldry & Mejias (2021)
- Towards AI Ethics’ Institutionalization │ Schultz & Seele (2022)
Discuss
Going Beyond Assumptions
Complete this sentence for yourself, then share and discuss:
After what I’ve seen, I will no longer assume that AI in education is just…
Use your responses as a springboard for discussion:
- What new dimensions or risks came into focus for you?
- Did the examples in the tutorial shift your view of AI tools you already use?
- How might your role as a student, educator, or policymaker shape the way you think about AI’s purpose in education?
- Where should the boundaries be between educational use and other uses of AI infrastructure?
What do you think? Tell us in the comments below.

