Have you thought about the impact of your actions while online? Perhaps you have deleted a post you made in error, or accidentally clicked on an ad while browsing the web. Although these actions were accidental, an algorithm tracking your behaviour may interpret them as intentional choices that characterize your online identity. You may be aware that several popular services, such as Facebook [1] and Google [2], employ behaviour-tracking tools such as cookies [3] and mouse tracking [4] to build logs of user behaviours and descriptions. Descriptions can include elements such as the browser you are using, your IP address [5], and device identifiers such as MAC addresses [6].
A diagram published by the Panoptykon Foundation [7], a Polish not-for-profit organization, frames the dialogue between a user's behaviours and the algorithms that track and generate insights about that user as a process with three distinct layers. As data moves outwards from the user through the layers, it transforms from machine-logged actions into human-readable insights and a persona that grows with each new behaviour logged by surveillance algorithms.
In the most central layer, information is generated as users share data about themselves across a variety of accounts, domains, and networks. As Katarzyna Szymielewicz describes [8], this is the only layer over which users have control. Subsequent layers of a user's data double are generated through inference from observations of the user's online browsing behaviour.
In the second layer, machine-readable aspects of your behaviour are collected. Information such as which content a user ignored, whether they are using a VPN [9], and the speed at which they type is added to the user's persona description. Alone, these observations may seem innocuous; however, Szymielewicz explains that when aggregated across thousands or millions of users, demographic data can be inferred by comparing a user's behaviour to a statistical baseline.
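Szymielewicz's point about statistical baselines can be sketched in a few lines of code. Everything below is invented for illustration, including the numbers, the typing-speed signal, and the labels; real profiling systems combine far richer signals and models, but the principle is the same: an individual measurement only becomes an "inference" when compared against an aggregate.

```python
# Hypothetical sketch of baseline comparison: one user's logged behaviour
# (typing speed) is scored against aggregate statistics from many users,
# and a confident-sounding label is produced from a simple threshold.
# All values and labels are invented for illustration only.
from statistics import mean, stdev

# Logged typing speeds (words per minute) from a pool of tracked users.
baseline_wpm = [35, 42, 55, 61, 48, 39, 72, 66, 58, 44]

def z_score(value, population):
    """How many standard deviations a value sits from the population mean."""
    return (value - mean(population)) / stdev(population)

def infer_label(user_wpm):
    """Turn a single behavioural signal into a demographic-style guess."""
    z = z_score(user_wpm, baseline_wpm)
    if z > 1.0:
        return "likely frequent computer user"    # an inference, not a fact
    if z < -1.0:
        return "likely infrequent computer user"  # also just a guess
    return "no confident inference"

print(infer_label(90))  # far above the baseline, so the system "concludes" something
```

Note that the output reads like a fact about the person, even though it is only a statistical guess about a single signal; this is exactly the gap between a user's real self and their data double that the article describes.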
“Your data are analyzed by various algorithms and compared with other users’ data for meaningful statistical correlations. This layer infers conclusions about not just what we do but who we are based on our behavior and metadata.” – Katarzyna Szymielewicz, Co-founder of Panoptykon Foundation
The third layer consists of the inferences and insights produced by algorithms that process the information shared by the user in the first layer and the observations made in the second. Importantly, these conclusions are generated by data-analysis software and may not be representative of the user's true personality. Depending on the data a user generates, the conclusions can be quite definitive: examples include conclusions about a user's religious beliefs, the existence of mental illness, eating habits, time spent at work, and much more.
So What?
This current model of machine-generated conclusions about users is unsettling because it does not provide users with recourse for incorrect assumptions made about them. Steps are being taken to increase the transparency of personal information, such as legislation like the European GDPR [10], which lets individuals view or delete information about themselves held by social media sites and other private businesses that collect, or have collected, personal information [11]. In Canada, the Personal Information Protection and Electronic Documents Act (PIPEDA) [12] states, in its principles for individual access, that individuals can request the deletion of personal information held by third parties when they successfully demonstrate an inaccuracy or incompleteness. Szymielewicz suggests that companies should stop guessing about us and instead open the doors to a more transparent conversation, giving customers a chance to verify or correct incorrect conclusions, or delete them entirely.
Take a look at the diagram below. Each circle in the model lists only some examples of the types of information that belong in each layer.
Discussion Questions
- Were you aware that observations of your IP address, content consumed, and browsing history may be tracked?
- What do you think are the conclusions made about you by tracking algorithms?
- How do you take control of the information and conclusions inferred from the information you share online?
- Was there anything in the Panoptykon diagram that surprised you or that you did not know before?
- What suggestions do you have for others looking to take control of the conclusions that algorithms make about them?
You can submit an original comment or reply to another reader’s comments in the discussion forum below!
Disclaimer: The views and opinions expressed in this article are those of the author and do not constitute legal or financial advice.
Always do your own research for informed decisions.
Edited by: Elyse Hill
Key Source
- Your digital identity has three layers, and you can only protect one of them [Quartz]
Additional Sources
- Online Behavioral Tracking [Electronic Frontier Foundation]
- Me and My Shadow [Tactical Technology Collective]
- Behavioural tracking: You’re being stalked across the web [Maclean’s]
- Just how much of your personal data is actually online? We take a look [Ottawa Citizen]
- How Companies Turn Your Data Into Money [PC Mag]