Fake news and the future of the truth

By Jan Michael Bauer

At least since the 2016 U.S. elections, the issue of “fake news” has been frequently debated in public and in the news media. The strategic and targeted distribution of misinformation to undermine political opponents culminated in the conspiracy theory known as “Pizzagate”.

Originating from leaked emails, the story suggested that former presidential candidate Hillary Clinton, along with other high-level Democrats, ran a child-trafficking ring out of a pizzeria in Washington [1]. Despite the absurdity of these claims and the lack of any credible evidence, the owner received multiple death threats and the restaurant was attacked with an assault rifle [2]. Luckily, nobody was injured.

The hunger for likes

Though admittedly an extreme case, this is only one of many fake news stories shared on social media and often echoed among like-minded users. Even though multiple psychological studies emphasize the human tendency to believe information that supports prior beliefs, it remains astonishing that even the most outlandish fakes find believers and are frequently shared. This phenomenon is fueled by many users’ hunger for likes and for the reach of their posts, both of which more extreme content seems to amplify.

These dynamics have given prominence to the recent focus on “fake news”, but the latest technological developments suggest that the future may hold even more dire prospects.

Modern computer software like Photoshop© has allowed realistic manipulation of images for many years. While some faked photos have famously traveled around the internet, I would argue that people have developed a healthy and critical attitude towards digital images, knowing that they can no longer simply trust their own eyes. Increasing processing power and novel algorithms are beginning to enable trained users to alter not only photos but also voice recordings and video material [3]. While not yet perfect, given enough training data these technologies can rearrange and even create new audio and video material that is hard to distinguish from the original.

Thinking a few years ahead, it is not hard to imagine that these methods will keep improving until fakes are ultimately indistinguishable from real footage.

This will allow the creation of fake content about individuals – using their own voice and a realistic video likeness – without their knowledge. While this will certainly trigger a cat-and-mouse game between people creating fake material and others trying to identify it through digital forensics, it will always be easier to create a fake than to detect one. Hence, one might hope that people develop a similar skepticism towards videos and voice recordings as most already have towards images. In any case, the line between what is real and what is fake will inevitably become blurrier as technology advances.

Type 2 error

Currently, the discussion about fake news focuses on what is literally fake news: the spread of information that is not true – like Pizzagate. Borrowing from the language and ideas of statistics, people believing the Pizzagate conspiracy make what is called a Type 1 error: they believe a story to be true even though there is nothing to it.

I, however, would like to draw attention to the second type of error, which has so far been less discussed. A Type 2 error occurs when someone does not believe a story even though it is actually true – in other words, declaring something fake news even though it is real. A few recent cases highlight this problem.

For instance, in 2015 a real video surfaced of the former Greek Minister of Finance Yanis Varoufakis showing “Germany the middle finger”. In the name of satire, however, a German comedian falsely claimed to have created the footage: he presented a fake version in which the Minister only raises a clenched fist, declared it to be the original, and said his team had added the middle finger digitally [4]. This “Varoufake” controversy circulated in the media until an official clarification confirmed that the video with the raised middle finger was in fact real. Resolving the confusion took several days – a long time at the current speed of information on social media.

A more recent example involves Prince Andrew and a sex scandal [5]. Confronted with the accusation of an inappropriate relationship with Virginia Giuffre, who was underage at the time, he claimed not to remember ever meeting her. Responding to a photo showing him with his arm around her, he argued that there was no way to prove the image’s authenticity and suggested that it could have been faked.

Fakes affecting social media and public opinion

While fakes in famous cases might ultimately be identified by experts or in court, it is unlikely that social media and public opinion will remain unaffected by this issue. The mere possibility of fake images, audio, or video might undermine the credibility of real incriminating material and help perpetrators spread doubt about the authenticity of evidence against them.

In 2012, a shaky video surfaced in which Republican candidate Mitt Romney declared 47% of the nation government-dependent and said that his job was not to “worry about these people”. In 2016, a hot-microphone recording surfaced of Donald Trump bragging about sexual assault before leaving a bus. In the latter case, Trump suggested on numerous occasions that the audio might be fake [6], creating doubt at least among some voters, and ultimately won the election.

An increase in such “Type 2 fake news” issues might be even more problematic than the currently discussed Type 1 problems.

If, due to technological progress, the public can no longer trust their own senses to separate truth from fake, the democratic process is certainly in danger. And if at some point even experts struggle to clearly establish the authenticity of evidence, the issue might spread into our courts and the legal system.

When I teach my students about the different error types in statistics, the lecture generally concludes with the lesson that the probabilities of making the two errors are connected: being more skeptical reduces Type 1 errors but increases the probability of making Type 2 errors.
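
To make this trade-off concrete, here is a minimal sketch in Python (invented purely for illustration: the “fakeness score” distributions and thresholds are hypothetical, not part of the lecture or any real fake-news detector). It simulates real and fake stories whose scores overlap, and shows that a more skeptical belief threshold lowers the Type 1 rate while raising the Type 2 rate.

```python
import random

random.seed(1)

# Hypothetical "fakeness scores": real stories tend to score low, fakes high,
# but the two distributions overlap, so no threshold separates them perfectly.
real_scores = [random.gauss(0.35, 0.15) for _ in range(10_000)]
fake_scores = [random.gauss(0.65, 0.15) for _ in range(10_000)]

def error_rates(threshold):
    """Believe a story only if its fakeness score is below `threshold`."""
    type1 = sum(s < threshold for s in fake_scores) / len(fake_scores)   # believed a fake
    type2 = sum(s >= threshold for s in real_scores) / len(real_scores)  # rejected the truth
    return type1, type2

# A lower threshold means more skepticism: fewer Type 1 errors, more Type 2 errors.
for threshold in (0.7, 0.5, 0.3):
    t1, t2 = error_rates(threshold)
    print(f"threshold {threshold}: Type 1 = {t1:.1%}, Type 2 = {t2:.1%}")
```

Tightening the threshold drives the share of believed fakes towards zero, but at the cost of rejecting a growing majority of true stories – exactly the trade-off at stake in the fake-news debate.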

Despite this link, it is ex ante not clear which error causes more harm, and we should be careful that the current emphasis on “fake news” – with its focus on Type 1 errors – does not inadvertently create so much skepticism that it leaves us with many more Type 2 errors. “Pizzagate” is an example of the former; climate change denial of the latter.


References

[1] https://www.nytimes.com/interactive/2016/12/10/business/media/pizzagate.html

[2] https://www.nytimes.com/2016/12/05/business/media/comet-ping-pong-pizza-shooting-fake-news-consequences.html

[3] https://www.youtube.com/watch?v=cQ54GDm1eL0

[4] https://www.euronews.com/2015/03/19/varoufake-when-satire-acts-as-media-watchdog

[5] https://www.mercurynews.com/2019/11/26/cal-forensics-expert-casts-doubt-on-prince-andrews-claim-sex-slave-photo-was-faked/

[6] https://observer.com/2018/09/trump-still-wants-you-to-think-the-access-hollywood-tape-is-fake/


About the author

Jan Bauer is Associate Professor at Copenhagen Business School and part of the Consumer & Behavioural Insights Group at CBS Sustainability. His research interests are in the fields of sustainability, consumer behavior and decision-making.


Last year, the Seminar on Fake News – Digital Transformation Platform took place at Copenhagen Business School. The organizers highlighted: “The problem of Fake News and other problematic online content is one of our time’s most pressing challenges – it is widely believed to have played a major role in the election of Trump and the current situation with Brexit.”

Read more by the same author

Are you choosing what you really want?

Behavioural change in the work environment: a first review on MSC’s sustainable food policy

A Story of Poison, Pork and Consumer Protection

Photo by Christian Gertenbach on Unsplash

Fake news and what it means for discussions about CSR-related issues

By Daniel Lundgaard

There is a saying on online forums that

“About 78% of all statistics shared online are made up to prove a point – including this one.”

This has become particularly relevant lately, as we have seen many discussions about fake news. And while fake news is often discussed in relation to politics, in particular during elections, little attention has been paid to its impact on discussions about CSR-related issues. This blog post therefore elaborates on the rise of fake news and explores how it might have grave implications for CSR discussions.

What is “fake news”?

The increasing relevance of fake news can, in part, be attributed to the rise of a networked society. Here, mass communication technologies and the rise of the post-truth era have created new circumstances where

“objective facts are less influential in shaping public opinion than appeals to emotion and personal belief”

(Oxford Dictionaries).

Fake news is often compared to disinformation, which is described as

“intentional falsehoods spread as news stories or simulated documentary formats to advance political goals.”

(Bennett & Livingston, 2018)

This, along with an increased distrust in news outlets’ ability to disseminate objective information, has caused more and more people to turn to social media as their primary source of information. This is especially true of the younger generation: they grow up in a world defined by more racial, ethnic and political diversity than ever, and consequently distrust news outlets’ ability to disseminate information from a single “objective” point of view (Marchi, 2012).

As a result, the younger generation often prefers information that it knows to be subjective, e.g. from opinionated talk shows or shared by friends. This has created a more polarized news landscape, where people seek out information from social media contexts and news outlets that confirm their views. It has thereby become possible to live a life in which you almost completely avoid serendipitous encounters with conflicting views that would force you to rethink your opinions.

What are the implications for CSR-related discussions?

This development towards a preference for information confirming current beliefs, combined with a fundamental distrust in objective information, is particularly relevant for discussions about CSR-related issues. The main issue is that disinformation is, by definition, intentional – a serious concern given how social media has amplified the impact of intentionally misleading statements. Consequently, we have seen some organizations, in pursuit of economic and sometimes illegitimate goals, exploit this distrust in information and the diminished impact of objective facts to polarize opinions and derail discussions about important issues such as climate change.

As a result, the increased awareness of disinformation has created a context where some companies, instead of adopting more socially responsible practices, attempt to question the legitimacy of the research and the groups trying to prove the ramifications of neglecting these issues, e.g. that climate change is a real and serious problem. This is especially seen in the rise of astroturfing organizations – a term derived from ‘AstroTurf’, a brand of synthetic grass often used on football fields – which describes the practice of masking the sponsors of a message so that it appears to originate from, and be supported by, grassroots participants. The goal of astroturfing is to make a message or an idea (e.g. fake news) appear as something that emerged through legitimate processes, often with the intent to cause confusion and distrust in legitimate information. Companies thereby attempt to derail CSR discussions, as seen, for example, when ExxonMobil allegedly created and funded a think tank to appear independent and legitimate, but with the sole purpose of challenging the consensus that climate change is a serious issue and a result of human action.

What are the implications – and what can be done?

This does, however, present us with a bit of a paradox, as raising awareness about the use of disinformation and shedding light on the existence of astroturfing organizations is not exclusively a positive thing. The challenge is that while a critical attitude towards research or news shared by friends is healthy, increased awareness of astroturfing might also spark distrust in the legitimacy of “real” grassroots movements.

Increased awareness thus not only affects the illegitimate actors but potentially also undermines and casts doubt on all forms of grassroots movements, thereby eroding the very foundation on which some of the movements fighting for CSR are built. Consequently, the key is balance. You need to be critical about what you read online, but the increased awareness of fake news should not discourage you from pursuing collaborative goals. After all,

“The main idea underlying collaborative projects is that the joint effort of many actors leads to a better outcome than any actor could achieve individually”

(Kaplan & Haenlein, 2010)

We therefore need to be aware of the destructive power of disinformation, but also understand that not all ideas and opinions are the product of hidden political agendas – some are, and it is crucial to be able to identify those – while others are still trying to make the world a better and more sustainable place.

Literature

  • Bennett, W. L., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139.
  • Kaplan, A. M., & Haenlein, M. (2010). Users of the world, unite! The challenges and opportunities of Social Media. Business Horizons, 53(1), 59–68.
  • Marchi, R. (2012). With Facebook, blogs, and fake news, teens reject journalistic “objectivity.” Journal of Communication Inquiry, 36(3), 246–262.

Author

Daniel Lundgaard is a PhD Fellow embedded in the Governing Responsible Business research environment and part of CBS Sustainability. His research is mainly focused on the impact of the digital transformation, in particular, the influential dynamics that shape the communicative constitution of public opinion as citizens, politicians, NGOs and corporations engage in a highly fluid negotiation of meaning between millions of actors. Daniel is currently focusing on the influential dynamics shaping this communicative constitution within the field of sustainability and responsible business, in particular, how interactions on social media shape the sustainability agenda and thereby the production of governance for responsible business.


Photo by Elijah O’Donnell on Unsplash