Really Fake

By Glen Whelan.

  • With generative technologies on their way to maturity, ‘Fake News’ may soon reach a whole new level of ‘realness’
  • No less than the authenticity and credibility of video and audio footage is at stake
  • Verified identities might help, but come with their own problems

Approximate reading time: 2-3 minutes

As little as five years ago the idea of ‘fake news’ referred to satires like The Daily Show or The Colbert Report. Now, fake news refers to phenomena that are ‘really fake’: i.e., news or events that are fabricated to appear real. Recent examples include those that place a target or puppet – such as Obama or Françoise Madeleine Hardy – under the control of a puppeteer who directs the puppet’s facial expressions, speech, and so on.

Technically faking reality
Whilst we have long been told not to believe everything we see or read, and whilst Photoshop and fashion, and hip-hop and auto-tune, have gone hand in hand for a while now, current developments look like a step change. One key technology behind these changes is Generative Adversarial Networks (GANs). A technique from deep learning (a field in which Facebook, Google, and Microsoft all have a major interest), GANs work by pitting two algorithmic models – a generative model and a discriminative model, its adversary – against each other.

The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles. (Goodfellow et al., 2014: 1)
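The counterfeiter-versus-police game that Goodfellow and colleagues describe can be made concrete with a toy example. The sketch below is an illustrative one-dimensional GAN, not the neural-network setup of the original paper: the ‘real’ data are samples from a Gaussian centred at 4, the generator is a simple affine transform of noise, and the discriminator is logistic regression. Each round, the discriminator learns to tell real from fake, and the generator learns to fool it — over many rounds the fake samples drift towards the real distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Generator: fake = g_w * z + g_b, where z is standard Gaussian noise.
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), probability that x is real.
d_w, d_b = 0.0, 0.0

lr, n, steps = 0.05, 128, 500
for _ in range(steps):
    # --- discriminator ('police') step: push D(real) -> 1, D(fake) -> 0 ---
    real = rng.normal(4.0, 1.0, n)          # real data ~ N(4, 1)
    z = rng.normal(0.0, 1.0, n)
    fake = g_w * z + g_b
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    # gradient ascent on log D(real) + log(1 - D(fake))
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # --- generator ('counterfeiter') step: push D(fake) -> 1 ---
    z = rng.normal(0.0, 1.0, n)
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    grad_x = (1 - p_fake) * d_w             # gradient of log D(fake) w.r.t. fake
    g_w += lr * np.mean(grad_x * z)
    g_b += lr * np.mean(grad_x)

# After training, the generator's output mean should sit near the real mean of 4.
print(f"fake sample mean ≈ {np.mean(g_w * rng.normal(0.0, 1.0, 10_000) + g_b):.2f}")
```

The competition is visible in the loop structure: neither model is trained against a fixed target, only against the other's current behaviour, which is exactly what drives both to improve.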

Although the technology is currently limited, machine learning expert Ian Goodfellow – who has a PhD from the University of Montréal, but now works for Google Brain – suggests that “the generation of YouTube fakes that are very plausible may be possible within three years”. Where all this is heading gives rise to two concerns.

Was it you?
The first concern is that our ability to distinguish between the fake and the real regarding other people is undermined. Once generative technologies hit a certain level of advancement, it will be prima facie impossible to tell whether a given piece of audio or video is a fake generated by a puppeteer or a true piece of documented experience. When one realizes that this does not just apply to the powerful and famous, but to our partners, children and friends as well, the full extent of the problem becomes clear.

The second and related concern is that generative technologies might increase the likelihood of people raising doubts as to whether documented footage or recordings of themselves are genuine. A person filmed engaging in something embarrassing, unsavory, or outright criminal could claim that the footage in question is a fake generation, and not a real documentation of any actual event. Whereas the first concern relates to the ability to identify truths about other people, the second concern relates to people potentially escaping, or avoiding the consequences of, truths about themselves.

To be or not to be (verified)
In light of such concerns, an increased push towards verification should be expected. Verified identities are already central to platforms such as Facebook, Airbnb and Twitter, and Amnesty International is involved in creating verification processes for ‘citizen media’ images or video that are relevant to human rights considerations.

In and of itself, this seems a good thing. But the fact that verification processes need to be organizationally controlled suggests prudence is warranted. It is not, for example, difficult to imagine one of the current tech giants coming to monopolize the world of verified interactions in both our private and public lives. As the threat of fake realities becomes ever more present, then, we should remind ourselves that an increasingly verified existence would likely come with its own, all-encompassing, problems.


Glen Whelan teaches at McGill, is a GRB Fellow at CBS, a Visiting Scholar at York University’s Schulich School of Business, and the social media editor for the Journal of Business Ethics. His research focuses on the moral and political influence of corporations, and high-tech corporations in particular. He is on Twitter @grwhelan.

Pic by EtiAmmos, Fotolia.
