Guest Post: Deepfake Impact is Real
By Hossein Torabi
Deepfake technology is growing fast and has already gained popularity on social media. According to Cellan-Jones (2019), the number of deepfake videos online almost doubled in the past nine months, rising from 7,964 in December 2018 to 14,698 in October 2019.
A lot has already been said about the evident dangers this technology brings to our world, such as revenge pornography, fake news, malicious hoaxes and financial fraud. These issues have raised concerns over the online safety and privacy of social media users, and over how effectively such abuses can be detected and punished.
From a technological standpoint, the fight against deepfaking does not look promising, at least in the near future. There have been many efforts to develop algorithms that detect deepfake videos.
A recent example is an open-source dataset released by Google that contains over 3,000 manipulated videos, built to support the development of deepfake detection tools. Google recorded 28 actors speaking and performing common gestures and body movements, then used publicly available algorithms to deepfake their faces (Dufour & Gully, 2019).
Facebook has made similar efforts, applying common face-alteration methods to almost 1,000 compiled YouTube videos (Hao, 2019). The problem with these detection methods is that they can also be used to correct the flaws of the original deepfake algorithm, ultimately working in its favour.
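To make the detection side more concrete, the sketch below shows, in Python, how such a tool might screen a clip: sample frames, crop any detected faces, and average a real-versus-fake classifier's scores. This is a minimal sketch under loose assumptions, not any of the systems mentioned above: the classifier is an untrained ResNet-18 stand-in (a real tool would load weights trained on a dataset such as Google's), and the file name and sampling rate are placeholders.

```python
# Minimal frame-level deepfake screening sketch (illustrative, not a production tool).
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical binary classifier: 2 outputs = [real, fake]. Untrained stand-in.
model = models.resnet18(num_classes=2)
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

# Classic Haar-cascade face detector shipped with OpenCV.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def fake_probability(video_path: str, frame_step: int = 30) -> float:
    """Average the classifier's 'fake' score over faces found in sampled frames."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
                face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    logits = model(preprocess(face).unsqueeze(0))
                scores.append(torch.softmax(logits, dim=1)[0, 1].item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

# Example usage (hypothetical file name):
# print(fake_probability("suspect_clip.mp4"))
```

The catch described above applies here too: whatever signal this classifier learns to flag, a deepfake generator can be trained against it, gradually erasing that signal from the fakes themselves.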
Now the question is: how will this phenomenon affect the future of social media and the safety and security of its users? Today, we might doubt the authenticity of videos we see on social media and ask ourselves: is it real or fake? In a few years, social media will be flooded with deepfaked videos, and that same question may arise for every single video we watch online.
Constant doubt and uncertainty, however, could erode our trust in and comfort with the digital ecosystem we live in. The moment a user stops questioning the authenticity and originality of the videos in their timeline could be a turning point: that is when they give up their trust in online video altogether. It is very likely we would then see a shift towards producing and consuming other types of digital media on social networks.
It would not be a surprise to see text-based content and animations gain more popularity among users.
One might argue that the same kinds of challenges have already surfaced on social media. Fake news, trolls and state-sponsored propaganda have long threatened users, and much of that material is plain text. How different are deepfake videos? In my opinion, there are two main differences. First, it is more difficult to analyse and verify the authenticity of deepfake videos without technical assistance, unlike text-based material, which can be analysed, criticised and weighed up by the human mind. Secondly, because of the visual nature of deepfake videos, users are more vulnerable to revenge pornography, fraud and malicious hoaxes delivered through video than through text.
These concerns and challenges might push users towards more private, two-way communications with people they trust, and result in less exposure of personal videos to the public, which would change the nature and usability of what we call social media today.
Without a strong will to find legal, political and social solutions to address these issues, technology cannot resolve them either.
References
Cellan-Jones, R. (2019, October 7). Deepfake videos ‘double in nine months’.
Dufour, N., & Gully, A. (2019, September 24). Contributing data to deepfake detection research.
Hao, K. (2019, September 25). Google has released a giant database of deepfakes to help fight deepfakes.