Blog

DT Podcast – Episodes 1 & 2: Copyright and Open Access, now available

Image from publicdomainspictures.net and courtesy of user Circe Denyer

The Digital Tattoo Podcast – Ep. 1 & 2: Copyright and Open Access

The Digital Tattoo Podcast Project explores digital identity issues through interviews and investigations in an engaging audio format. Our first topic is copyright and open access at Canadian Universities.

We’ll explore these topics after considering the life of Aaron Swartz, an American political activist and programmer who co-created the website Reddit, helped launch Creative Commons, and faced a federal indictment for illegally downloading academic journal articles.

Swartz was a passionate advocate for open access. He believed that knowledge is the property of everyone and shouldn’t be hidden behind expensive subscription fees and paywalls. In 2013, while facing charges that carried up to a $1 million fine and 35 years in prison for downloading millions of articles from JSTOR, he took his own life.

We’ll ask these questions: Could what happened to Aaron Swartz happen in Canada? How well do we understand the laws around copyright? And what is open access all about? We’ll dive into these issues through interviews with leading experts like Michael Geist, representatives of open access journals like the University of Toronto Medical Journal, legal experts specializing in intellectual property law, and the public lead of Creative Commons Canada.

If you’re confused about how the laws around copyright in Canada work, this podcast is for you.

Listen to episode one here.

Listen to episode two here.

Towards Student Privacy

I have some good news: Tomorrow, I’ll be meeting with the UBC ombudsperson for students to discuss the creation of a student privacy bill of rights.

With the support of the AMS and GSS, the Digital Tattoo project is launching an initiative to create a document that will help protect student data on UBC systems.

This comes following a tweet we received asking about such an initiative, and after our investigation into the ways that UBC’s learning management system collects and analyzes student data.

The focus of this bill of rights is to grant students the ability to make informed choices about their digital identities within the University context. By enabling students to understand the impact of their interactions within digital systems at UBC, they’ll be better positioned to make choices about how they conduct their online lives outside of the University.

UBC, as an educational institution, should extend teaching into all realms of academic life—including how students are treated within our learning management systems.

By allowing students to make informed choices about what information they give away, by educating them about how this information is going to be used, and by providing them with ongoing updates regarding any changes to this agreement, UBC will not only be fulfilling its mandate to educate students, but will also achieve a more ethical means of consent with students by making transparency integral to the process.

In addition, UBC will be able to distinguish itself as a leader amongst universities by implementing a progressive system that respects student privacy, seeks to achieve an ethical means of consent, and provides students with the opportunity to better understand how their digital identities are being constructed.

So the importance of the initiative is three-fold:

  1. Protecting the privacy of students within UBC
  2. Providing an opportunity to educate students about how their digital identities are constructed through their interactions with online systems
  3. Achieving a more ethical means of consent with students by providing information and options

If you also support these measures, please help by sharing our initiative through social media.


A little background

If you’re not convinced by the above or are still a little confused, let’s rewind a bit. What exactly is a learning management system at UBC? That’s the system that hosts all of the electronic learning materials for courses. Currently, it’s called Blackboard Connect, but that’s about to change.

Right now, Blackboard Connect collects a lot of data about students: the pages you visit, the time you spend on those pages, and where you click. It then organizes this data and presents it to instructors and administrators so they can understand how students are learning and using the system. Great, in theory.
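
To make the shape of this collection concrete, here is a hypothetical Python sketch of the kind of clickstream logging and individual-versus-class roll-up an LMS might perform. Connect’s actual code is not public; every class, field, and number below is invented for illustration.

```python
# Hypothetical sketch of LMS-style clickstream collection.
# Not Connect's actual code (which is not public); names are invented.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PageView:
    student_id: str
    page: str
    seconds_on_page: int

@dataclass
class UsageReport:
    views: list = field(default_factory=list)

    def log(self, view: PageView) -> None:
        """Record every page visit -- the raw clickstream."""
        self.views.append(view)

    def time_by_student(self) -> dict:
        """Total seconds of activity per student."""
        totals: dict = {}
        for v in self.views:
            totals[v.student_id] = totals.get(v.student_id, 0) + v.seconds_on_page
        return totals

    def compare_to_class(self, student_id: str) -> float:
        """The individual-versus-class comparison surfaced to instructors."""
        totals = self.time_by_student()
        return totals.get(student_id, 0) / mean(totals.values())

report = UsageReport()
report.log(PageView("s001", "week3_readings", 540))
report.log(PageView("s002", "week3_readings", 60))
print(f"s002 is at {report.compare_to_class('s002'):.0%} of the class average")
```

Note that nothing in this sketch asks the student anything; the logging happens as a side effect of simply using the system, which is exactly the problem.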

But what if instructors are viewing this data and making inferences about students’ participation based on their usage compared to other students in the course? This could easily be done, as Connect presents an individual’s data in comparison to the class. So how might this affect a student without the resources to log into Connect constantly? Not very well.

The current system is far from perfect, and most of what’s collected through Connect goes unused. The organization and interpretation of this data is called Learning Analytics, and UBC’s current LMS isn’t capable of doing anything productive with it, even though it may claim to use this information to identify struggling students and lend assistance. When I asked to be directed to a case of this actually happening, no evidence or examples could be produced.

Okay, so that’s the current system. Massive amounts of data are being collected about students for no clear reason and to no useful purpose. Great. But what about the new system?

Some foreground

Well, it’s not an LMS anymore: UBC now has a Learning Technology Environment. Hurray! It’s called Canvas by Instructure.

With the launch of Canvas, UBC will be ramping up its focus on Learning Analytics and Educational Data Mining. I mean, if you’ve just spent lots of money on a new toy, you might as well put it to use, right?

But here’s where I get concerned. We’ve already been collecting and analyzing huge amounts of student data without any benefit to students. Now, we’re ramping up this collection and analysis without putting the proper safeguards in place that will allow students to control and understand what information they’re giving away, how it’s being stored, and why it’s all happening.

And that’s why UBC, at this crucial juncture in time, needs to create a policy around how student data is being collected, stored, and used. Students should be prompted to opt-in to data collection and informed about what that means. At any time, they should be able to access their data and be allowed to make the decisions about what information they choose to give.
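
To make this concrete, here is a minimal Python sketch of what an opt-in, revocable consent gate could look like at the collection layer. Every function and field name here is hypothetical; this is a sketch of the policy being proposed, not any real UBC system.

```python
# Hypothetical opt-in consent gate for analytics collection.
# All names are invented; this sketches the proposed policy only.
from datetime import datetime

consent_registry = {}  # student_id -> {"opted_in": bool, "updated": datetime}

def record_consent(student_id: str, opted_in: bool) -> None:
    """An explicit, timestamped choice; mere login is NOT consent."""
    consent_registry[student_id] = {"opted_in": opted_in, "updated": datetime.now()}

def collect_event(student_id: str, event: dict, store: list) -> None:
    """Only log analytics events for students who actively opted in."""
    entry = consent_registry.get(student_id)
    if entry and entry["opted_in"]:
        store.append({"student": student_id, **event})
    # otherwise: drop the event; course access continues unaffected

def export_my_data(student_id: str, store: list) -> list:
    """Students can see everything held about them, at any time."""
    return [e for e in store if e["student"] == student_id]

events: list = []
record_consent("s001", opted_in=True)
collect_event("s001", {"page": "syllabus", "seconds": 45}, events)
print(export_my_data("s001", events))
```

The key design point is the default: if a student has made no choice, nothing is collected, and opting out never affects access to course materials.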


Currently, UBC students are consenting to this data collection by simply logging into the system. That’s right: By accessing the course materials that are essential to completing your degree requirements at a public, post-secondary institution, you’re also consenting to giving away your data. If this surprises you, it should be an indication of a fundamental flaw in how UBC is currently going about generating consent.

And it gets worse. Digging into the terms of service, which aren’t readily available on the log-in page, I found that UBC is free to update them at any time without notifying students of the changes; your continued use of the system is consent enough. The logic basically amounts to: because we’re already forcing you to use this system, we can also take your data in the meantime, and if we feel like changing the terms, your continued use counts as agreement. This is the definition of forced consent.

Protecting student privacy through instituting a policy that mandates informed consent at UBC isn’t about stifling the potential benefits of Learning Analytics and Educational Data Mining; it’s about protecting the privacy of students who are bettering themselves by studying at a world-class institution like UBC. This is about leading the way by creating a policy that goes beyond the outdated provincial and federal laws and is forward-looking, progressive, and, most importantly, puts the privacy and safety of students before anything else.

From Student to Teacher: Becoming a Professional

To access the case studies, as well as the resources that can be used to guide decision-making, visit our Case Studies for Student Teachers page.

Before I became a graduate student at UBC, I was a teacher candidate preparing to take on the role of a professional educator. Like all of my peers transitioning into the same professional role, I looked forward to elements of teaching with a great deal of excitement: working with children and youth, turning my experiences and passions into learning opportunities for others, and having a positive impact in my community. It took me longer to realize that the privilege of a teaching role also means placing certain limits and restrictions on my off-duty conduct, including how, when, and where I interacted digitally.

Although I did not feel that any aspect of my digital identity was unfitting of a teacher, I found myself worrying about each of my online posts, pictures, likes, shares, and clicks, not wanting to do anything that someone else could use to call my character into question. Because of my experience—wondering about the right ways to participate digitally, feeling worried about the consequences of every decision I made online, and looking for answers that did not seem to exist—I jumped at the opportunity to work with Digital Tattoo, writing case studies and finding resources that I hope will help current and future teacher candidates as they address the same concerns in their own transition from student to teacher.

The intersection of digital identity and professionalism can be difficult to navigate for all new professionals, but it can be especially challenging for teachers, because we are expected to represent the entire teaching profession both at school and out of school, both offline and online. Due to the nature of our work with young people, the Supreme Court of Canada has ruled that “educators are held to a higher standard than other citizens,” and these standards extend to our digital lives (BC Ministry of Education, 2017).

But what exactly are these “higher standards”? Who decides? And where is the line between acceptable and unacceptable?

It is unsurprising that teacher candidates who are beginning to build professional identities often feel confusion, fear, and frustration when it comes to making decisions about their online interactions. After all, teachers are human and make mistakes, but when public perceptions of our character can impact our jobs, there is pressure to make the right decision every single time.

 

But what are the right decisions?

What makes decision-making even more challenging for teachers is that there isn’t an answer key for that question. As citizens, we have the right to our online lives, but when we seek guidance regarding acceptable participation, we are often met with rules that are vague, incomplete, or unclear. This can lead to some teachers making basic errors in judgment, saying or sharing things online that can result in discipline, while others choose to stay offline because it feels like the simplest way to avoid any wrongdoing. Disengagement, however, does not help teachers build literacy in the digital world. Disconnecting from a tool that our students use every day can lead to a loss in teaching opportunities. It can also mean missing out on the online community-building and professional development that can contribute to a new teacher’s sense of belonging.

 

Recognizing the difficult situation in which teacher candidates find themselves, what can the Digital Tattoo project do to help them navigate this important part of their lives?

Working with the Digital Tattoo project, I have written a series of case studies, informed by teacher candidates and faculty members, to help teacher candidates build confidence in decision-making with regard to their digital identities. The case studies allow teacher candidates to become familiar with the types of decisions they will make as they transition into their professional roles, and to practice using policies and other resources to make informed decisions.

It should be noted that, despite having been written with teacher candidates in mind, these case studies can be used or adapted to explore professionalism and digital identity in any field in which public scrutiny plays a role. I hope that these case studies and the accompanying resources leave emerging professionals feeling a greater sense of control over their digital identities, that they feel better prepared to navigate any potential issues that could result from their digital participation, and that they are more confident making appropriate and effective decisions online.

 

References:

BC Ministry of Education. (2017). Standards: Questions and case studies. Retrieved from https://www.bcteacherregulation.ca/Standards/QuestionsCaseStudiesContents.aspx

The Artist Series Two: iSquare Protocol

Artist Series Two: iSquare Protocol, What is Information?

An interview with Jenna Hartel @ the iSchool, University of Toronto

 

This past spring, an exhibition called REWIRED: ART X BISSELL opened in the iSchool’s Bissell Building, which stems off the main Robarts building. The exhibition features artists whose work focuses on the intersections between people, digital technology, and information. It connects contemporary artists’ responses, including works by Tobias Williams, Adrienne Crossman, Connor Buck, Robbie Sinclair, Jessica Zhou, Tabitha Chan, and Brandon Dalmer, with research, showcasing the qualitative, arts-based research project called the iSquare Protocol, run by Dr. Hartel and her research team. A display case with drawings collected from around the world can be found in the fourth-floor foyer. The iSquare Research Program uses drawing as a way of investigating the concept of information, a word that can be confusing, nebulous, and vague. Dr. Hartel decided that an arts-informed approach would bring great insight to her field of research. The study asks three questions:

 

1) How do people visualize the concept of information?

2) How do visual conceptions of information differ among various populations?

3) How do these images relate to conceptions of information made of words?

 

I was fortunate to participate in this study during my first semester as a graduate student in Foundations of Library Information Studies, taught by Dr. Hartel. As part of this class, students collected iSquare drawings, analyzed them, and responded with writing and our own creative work. (The methodology is explained in greater detail here and in the video above.) As a student with a background in the arts, realizing that drawing could be considered a valid, and even valuable, way of studying a concept was exciting. I had long known from personal experience that drawing and painting were vital ways of processing and communicating information and experience, yet this incredibly rewarding practice tends to be limited to those with training and technical comfort in some artistic medium. I loved that Hartel’s study invites non-professionals to engage in this process of communication and exchange through art and, furthermore, that it was being taken seriously in an academic context.

Our class divided itself into three groups, each studying a different topic. Dr. Hartel had expanded her research, asking students to choose from three possible subjects: “information,” “librarian,” and “internet.” I opted to join team internet. I was curious to see how participants would depict something as massive and hard to grasp as the internet, something I relied upon every day but had little grounded understanding of how it functions. Here are a few of the “internet” iSquare drawings:

This drawing depicts a dark, mysterious cloud with giant limbs encircling the earth. The internet is of massive proportions, powerful and unknowable. Many of the drawings touched on the unknown. The 18-year-old male who made this drawing said: “The internet is this vast, dark void that is continually growing and can never be fully explored yet it controls our entire existence on earth.” The sense of helplessness and the lack of human proportion in this response are unsettling.

Other illustrations focused on a more human scale, but still presented bizarre conceptions of the internet. The 25-year-old female participant explained her illustration this way: “It’s a window like, square shaped thing.” The drawing communicates so much more than the statement. The square window acts as an intermediary structure connecting the mouth of one subject to the forehead of a masked, disguised other. The tree stemming off the central square softens an otherwise disturbing image. Lacking humanizing facial features, the bodies seem to be hooked up to the machinery of the internet; neither the “speaker” nor the “recipient” appears to be active, and neither is directly engaging with the other.

This drawing, more positive than the last, shows the connectivity of the internet. The 53-year-old female participant says: “My drawing is meant to illustrate how the internet connects us to information, ideas, bureaucracies, and other people.” This abstract drawing is active and dynamic, with more fluid and mutual relationships between its varying parts.

Dr. Hartel’s research affirms that the perspectives of broad publics matter. Her approach to researching information has a beautiful social component which, on top of its impressive archive of drawings, left traces that went undocumented: the conversations between researchers and participants about big concepts, initiated by a contemplative drawing exercise. The squares expose visual metaphors and affective responses produced in a short period of time. They reveal attitudes, the ways that individuals relate to systems we rely upon so heavily, systems that often perplex and overwhelm us while having a great bearing on how we live our contemporary lives.

After I interviewed Dr. Hartel for Digital Tattoo, she introduced me to Sandra Weber’s chapter in the Handbook of the Arts in Qualitative Research. Weber outlines how, over the last few decades of the 20th century, qualitative researchers in the social sciences began to pay serious attention to the use of images as a way of enhancing understanding of the human condition. The chapter offers a list of ten good reasons answering the question: “Why use arts-related visual images in research?” These reasons were so good that I wanted to share them here:

  1. Images can be used to capture the ineffable, the hard-to-put-into-words.
  2. Images can make us pay attention to things in new ways.
  3. Images are likely to be memorable, as they elicit emotional as well as intellectual responses.
  4. Images can be used to communicate more holistically, incorporating multiple layers and evoking stories or questions.
  5. Images can enhance empathetic understanding and generalizability.
  6. Through metaphor and symbol, artistic images can carry theory elegantly and eloquently.
  7. Images encourage embodied knowledge.
  8. Images can be more accessible than most forms of academic discourse.
  9. Images can facilitate reflexivity in research design.
  10. Images can provoke action for social justice.

Asking the question “What is the internet?” required participants to consider their relationship to technology in a new way, gaining some thoughtful distance from something as ubiquitous as the internet. To quote Weber once more: “An image can be a multi-layered theoretical statement, simultaneously positing even contradictory propositions for us to consider, pointing to the fuzziness of logic and complex paradoxical nature of particular human experiences.” This potential was realized in the study: people’s drawings of the internet conveyed multiple meanings, positive and negative, concrete and abstract. Opening up broad discussion around what the internet is, what information is, and how we relate to these subjects is a first step in becoming knowledgeable, engaged social participants. Dr. Hartel’s research is an example of how academic work can break down the boundaries between art and research, engaging communities in an inviting, playful way. It builds aesthetic, social knowledge through representations, symbols, and conversations.

Weber, S. (2008). Chapter 5: Using visual images in research. In Handbook of the arts in qualitative research: Perspectives, methodologies, examples, and issues (pp. 1–18). London: Sage.

 

 

The Ethics of Algorithms

 

When we interact with technology, we tend to assume that our tools are neutral, unbiased machines. How can an algorithm, a mathematical sequence, be discriminatory? Those responsible for coding our online spaces, writing the algorithms that mediate our digital lives, necessarily embed their worldviews and biases, whether knowingly or unknowingly, into those spaces, constructing digital landscapes that treat people differently. Embedded biases can have harmful effects on the public who engage with these technologies on a daily basis. Unfortunately, it’s hard to see how power structures and biases operate within our digital spaces. These mechanisms covertly modulate and record our behaviors and experiences, and there are barriers which prevent the public from organizing to respond to the potentially harmful and discriminatory outcomes of algorithmic mechanisms. This post will look at some cases of algorithmic discrimination and discuss the challenge of creating informed digital citizens who are capable of comprehending, and feeling invested in, this emerging problem.

Michael Brennan, writing for the Ford Foundation, asks a provocative question: “Can computers be racist?” Brennan explains how information systems can embody human flaws like bias, and shows that algorithmic media such as Google can produce racially discriminatory outcomes. Embedded within Brennan’s article is a video of Harvard professor Latanya Sweeney speaking about a personal experience in which a Google search of her name produced an ad implying that she had a history of arrest. The arrest never happened, but a co-worker suggested that Google might be serving this result because her name is associated with black familial names, a suggestion she initially disbelieved. Sweeney followed up by conducting a study of 120,000 internet ads across the United States and found that Google’s ads do indeed create an associative link between black-affiliated names and a history of arrest. Another example of how technologies treat users differently can be seen in this video, which shows an HP computer’s facial recognition failing to recognize black faces. Algorithmic control exists within a social context and, in certain scenarios, reproduces negative stereotypes, impacting vulnerable groups more heavily. This case, like many others, demonstrates the danger of viewing technological apparatuses as neutral: the guise of objectivity and the imperceptibility of algorithmic control amplify the problem by making it harder to assign responsibility.
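
For readers curious what an audit like Sweeney’s looks like in practice, here is a minimal Python sketch comparing ad-delivery rates across two name groups. The counts below are invented for illustration; only her published study speaks to the real methodology and figures.

```python
# Minimal sketch of an ad-delivery audit, with made-up counts.
# Sweeney's real study analyzed ~120,000 impressions; these numbers
# and field names are illustrative only.
from math import sqrt

def arrest_ad_rate(arrest_ads: int, total_ads: int) -> float:
    """Fraction of ad impressions that implied an arrest record."""
    return arrest_ads / total_ads

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z-statistic for the difference between two ad-serving rates."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts for black-identifying vs. white-identifying names.
rate_black = arrest_ad_rate(arrest_ads=1500, total_ads=2500)  # 60%
rate_white = arrest_ad_rate(arrest_ads=1200, total_ads=2500)  # 48%

z = two_proportion_z(rate_black, 2500, rate_white, 2500)
print(f"arrest-ad rate (black-identifying names): {rate_black:.0%}")
print(f"arrest-ad rate (white-identifying names): {rate_white:.0%}")
print(f"z = {z:.1f}  (|z| > 1.96 suggests the gap is unlikely to be chance)")
```

The point of an audit like this is that discrimination in ad delivery is measurable from the outside, even when the ranking code itself is secret.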

Physical and virtual space are becoming increasingly indistinct; negative associative links produced by algorithmic media therefore undoubtedly affect individuals’ real lives. We are also seeing an increase in web-connected objects, called the Internet of Things (IoT). In her essay “(Un)Ethical Use of Smart Meter Data,” Winter weighs the benefits and negative implications of one manifestation of the IoT: smart grids, the next generation of electrical power grids intended to upgrade and replace aging infrastructure.** Smart grids provide real-time feedback to customers so they can make more informed decisions about energy consumption. However, privacy concerns around what data is being collected about IoT users, and how this might become a social issue, must be adequately understood before we become reliant on such devices. For example, will this data be used to create user profiles? Will certain users be given preferential treatment based on their consumption habits? The connectivity of our appliances further breaks down the sense of privacy and security previously associated with homes. Winter asks, “how can we reap the many benefits of technologies like smart grids and smart meters without risking the loss of personal privacy, loss of jobs or housing or government intrusion into one’s home life.”** It is unclear if the benefits of connectivity can be had without accepting serious consequences, and users need a clearer understanding of what they are giving up. Opening up alternative streams of information around privacy, algorithmic control, and the devices that use it can prevent the mystifying force of advertising from being the dominant voice. A first step in resisting the serious implications of algorithmic control is education: the public must become more aware of how algorithms work.
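
To make the profiling concern concrete, here is a hypothetical Python sketch showing how even coarse half-hourly meter readings could reveal when a household is home or away. The readings and threshold are invented for illustration; the point is that consumption data alone exposes daily routines.

```python
# Hypothetical sketch: inferring occupancy from smart meter readings
# (kWh per half-hour). All readings and the threshold are invented.
readings = [
    ("06:00", 0.08), ("06:30", 0.41), ("07:00", 0.62),  # morning routine
    ("09:00", 0.05), ("12:00", 0.04), ("15:00", 0.06),  # daytime: likely out
    ("18:00", 0.55), ("19:00", 0.88), ("22:00", 0.30),  # evening at home
]

BASELINE_KWH = 0.10  # fridge and standby load; anything above suggests activity

def infer_occupancy(samples):
    """Label each interval 'home' or 'away' by comparing to the baseline."""
    return [(t, "home" if kwh > BASELINE_KWH else "away") for t, kwh in samples]

for t, label in infer_occupancy(readings):
    print(t, label)
```

If a few lines of code can recover a household’s schedule, it is worth asking who else can run that inference on data collected in the name of efficiency.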

Algorithmically controlled media is characterized by the personalization of information flow: Facebook’s news feed, for example, creates a filter bubble. Personalization of media consumption may challenge our ability to feel a sense of belonging to a larger social body with common political concerns. Gilles Deleuze, in his prescient 1992 essay Postscript on the Societies of Control, anticipates this social breakdown, going so far as to claim that social bodies can be divided even further, threatening our very sense of being individuals: “Individuals have become ‘dividuals,’ and masses, samples, data, markets or ‘banks.’” Our lives are being compressed into masses of data points that represent and describe who we are in fragmented pieces.
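
As a toy illustration of how personalization produces a filter bubble, here is a minimal Python sketch of an engagement-based feed ranker. It is not Facebook’s algorithm, which is proprietary; it simply shows the feedback loop in miniature: engagement raises a topic’s score, which surfaces more of that topic.

```python
# Toy feed ranker: not any real platform's algorithm, just the
# engage -> rank higher -> engage more feedback loop in miniature.
from collections import defaultdict

affinity = defaultdict(float, {"politics_left": 5.0, "cats": 2.0})

posts = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_right"},
    {"id": 3, "topic": "cats"},
]

def rank_feed(posts, affinity):
    """Order posts by the user's accumulated affinity for their topic."""
    return sorted(posts, key=lambda p: affinity[p["topic"]], reverse=True)

def record_click(post, affinity):
    """Each click deepens the affinity, narrowing the next ranking."""
    affinity[post["topic"]] += 1.0

feed = rank_feed(posts, affinity)
record_click(feed[0], affinity)  # user clicks the top item...
print([p["topic"] for p in rank_feed(posts, affinity)])  # ...and the bubble tightens
```

Nothing in the loop ever asks whether the user *should* see something different; unclicked topics simply sink.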

Traditional media associated with print cultures, such as maps, newspapers, and censuses, have had the benefit of aiding in the creation of a sense of social cohesion and encouraging participation. For example, “punctual media” like the newspaper contribute to the construction of a sense of national identity by allowing individuals to imagine they exist in a common time with others.* “The date at the top of the newspaper, the single most important emblem on it, provides the essential connection- the steady onward clocking of homogeneous empty time.”* The “continuous temporality” of algorithmic media such as Facebook dissolves this collective sense of time, conflicting with our ability to form collective responses to events in our environment. Continuous temporality means that the flow of information is continuous, indistinct, and global. New ways of building publics must be imagined and enacted.

How can the public detect, and care about, the consequences of algorithmic control? Part of the problem is that the most popular digital social spaces are owned and operated by massive corporations such as Facebook and Twitter. The algorithmic code for these spaces is proprietary: trade secrets that are not exposed to public debate or scrutinized for human rights abuses. We are all affected by algorithms whether or not we are aware of it, and there is a lack of accountability for the discriminatory outcomes of algorithmically controlled spaces. Algorithmic environments threaten our ability to feel a sense of boundaries, or “aggregations,” whether geographical, temporal, or social, and make the goal of social cohesion more slippery. According to McKelvey, algorithmic media creates a social condition in which the body public is “constantly being dissected and re-assembled.”*

In a recent video published on the website “Free Assange,” Noam Chomsky states: “The architects of power in the United States must create a force that can be felt but not seen. Power remains strong when it remains in the dark. Exposed to the sunlight, it begins to evaporate.” What steps can be taken to unmask the power of algorithms and create a tangible collective awareness of this issue? Surely the effort will need to be multifaceted, including extensive collaboration between the public, artists, and technical experts who can bring algorithms into the public imagination and translate them into expressive forms that are dynamic, poetic, and affective. All kinds of storytelling, whether visual art, writing, video games, or virtual reality, can be created to unveil the elements which cloak algorithmic control in obscurity. To address these issues, we need to mobilize knowledge of how certain algorithms affect people negatively. How do we get this conversation started? How can we work together to prevent algorithms from becoming a naturalized part of our digitized environment?

 

*McKelvey, F. (2014). Algorithmic media need democratic methods: Why publics matter. Canadian Journal of Communication, 39(4), 597–613.

**Winter, J. S. (2014). (Un)ethical use of smart meter data. In Data and discrimination (pp. 37–42). Open Technology Institute.
