The Age of Information Disorder
Written by: Claire Wardle
Claire Wardle leads the strategic direction and research for First Draft, a global nonprofit that supports journalists, academics and technologists working to address challenges relating to trust and truth in the digital age. She has been a Fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard's Kennedy School, the Research Director at the Tow Center for Digital Journalism at Columbia University's Graduate School of Journalism and head of social media for UNHCR, the United Nations Refugee Agency.
As we all know, lies, rumors and propaganda are not new concepts. Humans have always had the ability to be deceptive, and there are some glorious historical examples of times when fabricated content was used to mislead the public, destabilize governments or send stock markets soaring. What’s new now is the ease with which anyone can create compelling false and misleading content, and the speed with which that content can ricochet around the world.
We’ve always understood that there is complexity in deception. One size does not fit all. For example, a white lie told to keep the peace during a family argument is not the same as a misleading statement by a politician trying to win over more voters. A state-sponsored propaganda campaign is not the same as a conspiracy theory about the moon landing.
Unfortunately, over the past few years, anything that might fall into the categories described here has been labeled “fake news,” a simple term that has taken off globally, often with no need for translation.
I say unfortunately, because the term is woefully inadequate to describe the complexity we’re seeing. Most content that is deceptive in some way does not even masquerade as news. It is memes, videos, images or coordinated activity on Twitter, YouTube, Facebook or Instagram. And most of it isn’t fake; it’s misleading or, more frequently, genuine but used out of context.
The most impactful disinformation is that which has a kernel of truth to it: taking something that is true and mislabeling it, or sharing something as new when actually it’s three years old.
Perhaps most problematic is that the term “fake news” has been weaponized, mostly by politicians and their supporters, to attack the professional news media around the world.
My frustration with the phrase led me to coin the term “information disorder” with my co-author Hossein Derakhshan. In 2017 we wrote a report entitled “Information Disorder,” in which we explored the challenges of the terminology used on this topic. In this chapter, I will explain some of the key definitional aspects of understanding this subject, and of talking about it critically.
7 Types of Information Disorder
Back in 2017, I created the following typology to underscore the different types of information disorder that exist.
Satire or Parody
Understandably, many people have pushed back against my including satire in this typology, and I certainly struggled with the decision. But unfortunately, agents of disinformation deliberately label content as satire to ensure that it will not be “fact-checked,” and as a way of excusing any harm that comes from the content. In an information ecosystem where context and cues, the mental shortcuts (heuristics) we rely on, have been stripped away, satirical content is more likely to confuse the reader. An American might know that The Onion is a satirical site, but did you know that, according to Wikipedia, there are 57 satirical news websites globally? If you don’t know a website is satirical, and its content is speeding past you in a Facebook feed, it’s easy to be fooled.
Recently, Facebook took the decision not to fact-check satire, but those who work in this space know how the satire label is used as a deliberate ploy. In fact, in August 2019, the U.S. debunking organization Snopes wrote a piece about why they fact-check satire. Content purporting to be satire will evade the fact-checkers, and frequently, over time, the original context gets lost: people share and re-share it, not realizing it is satire and believing it to be true.
False Connection
This is old-fashioned clickbait: the technique of drawing readers in with a sensational headline, only for them to find that the headline is disconnected from the actual article or piece of content. While it’s easy for the news media to think of disinformation as a problem caused by bad actors, I argue that it’s important to recognize that poor practices within journalism add to the challenges of information disorder.
Misleading Content
This is something that has always been a problem in journalism and politics. Whether it’s the selection of a partial segment from a quote, creating statistics that support a particular claim but don’t take into account how the data set was created, or cropping a photo to frame an event in a particular way, these types of misleading practices are certainly not new.
False Context
This is the category where we see the most content: It almost always occurs when genuine imagery is re-shared as new. It often happens during a breaking news event when old imagery is re-shared, but it also happens when old news articles are re-shared as new, when the headline still potentially fits with contemporary events.
Imposter Content
This is when the logo of a well-known brand or name is used alongside false content. This tactic is strategic because it plays on the importance of heuristics. One of the most powerful ways we judge content is if it has been created by an organization or person that we already trust. So by taking a trusted news organization’s logo and adding it to a photo or a video, you’re automatically increasing the chance that people will trust the content without checking.
Manipulated Content
This is when genuine content is tampered with or doctored in some way. The video of Nancy Pelosi from May 2019 is an example of this. The Speaker of the U.S. House of Representatives was filmed giving a speech. Just a few hours later, a video emerged of her speaking that made her sound drunk. The video had been slowed down, and by doing so, it made it appear like she was slurring her words. This is a powerful tactic, because it’s based on genuine footage. If people know she gave that speech with that backdrop, it makes them more trusting of the output.
Fabricated Content
This category is for when content is 100% fabricated. This might be making a completely new fake social media account and spreading new content from it. This category includes deepfakes, where artificial intelligence is used to manufacture a video or audio file in which someone is made to say or do something that they never did.
Understanding Intent and Motivation
This typology is useful for explaining the complexity of the polluted information environment, but it doesn’t tackle the question of intent, which is a crucial part of understanding this phenomenon.
To do that, Derakhshan and I created this Venn diagram as a way of explaining the difference between misinformation, disinformation and a third term we created, malinformation. Misinformation and disinformation are both examples of false content. But disinformation is created and shared by people who hope to do harm, whether that’s financial, reputational, political or physical harm. Misinformation is also false, but people who share the content don’t realize it’s false. This is often the case during breaking news events when people share rumors or old photos not realizing that they’re not connected to the events.
Malinformation is genuine information, but the people who share it are trying to cause harm. The leaking of Hillary Clinton’s emails during the 2016 U.S. presidential election is an example of that. So is sharing revenge porn.
These terms matter, as intent is part of how we should understand a particular piece of information. There are three main motivations for creating false and misleading content. The first is political, whether foreign or domestic. It might be a foreign government attempting to interfere with another country’s election, or a domestic campaign engaging in “dirty” tactics to smear an opponent. The second is financial. It is possible to make money from advertising on your site: if you have a sensational, false article with a hyperbolic headline, as long as you can get people to click on your URL, you can make money. People on both sides of the political spectrum have talked about how they created fabricated “news” sites to drive clicks and therefore revenue. Finally, there are social and psychological factors. Some people are motivated simply by the desire to cause trouble and to see what they can get away with: to see if they can fool journalists, to create an event on Facebook that drives people out on the streets to protest, to bully and harass women. Others end up sharing misinformation for no reason other than the desire to present a particular identity. For example, someone who says, “I don’t care if this isn’t true; I just want to underline to my friends on Facebook how much I hate [insert candidate name].”
The Trumpet of Amplification
To truly understand this wider ecosystem, we need to see how intertwined it all is. Too often, someone sees a piece of misleading or false content somewhere and assumes it originated there. Unfortunately, those who are most effective at disinformation understand how to take advantage of the ecosystem’s fragmented nature.
Remember also that if rumors, conspiracy theories or false content weren’t shared, they would do no harm. It’s the sharing that is so damaging. I therefore created this image, which I call the Trumpet of Amplification, as a way of describing how agents of disinformation use coordination to move information through the ecosystem.
Often, content is first posted in spaces like 4chan or Discord (an app used by gamers to communicate). These spaces are anonymous and allow people to post without consequence, and they are frequently used to share specific details about coordination, such as “we’re going to try to get this particular hashtag to trend” or “use this meme to respond to today’s events on Facebook.”
The coordination often then moves into large Twitter DM groups or WhatsApp groups, where nodes within a network spread content to a wider group of people. It might then move into communities on sites like Gab, Reddit or YouTube. From there, the content will often be shared into more mainstream sites like Facebook, Instagram or Twitter.
From there, it will often get picked up by the professional media, either because they don’t realize the provenance of the content and use it in their reporting without sufficient checks, or because they decide to debunk it. Either way, the agents of disinformation count this as a success. Poorly worded headlines that repeat the rumor or misleading claim, and debunks that embed the false content in the story, play into the original plan: to drive amplification, to give the rumor oxygen.
At First Draft, we talk about the concept of the tipping point. For journalists, reporting on falsehoods too early provides additional and potentially damaging oxygen to a rumor. Reporting too late means it has taken hold and there is little that can be done. Working out that tipping point is challenging. It differs by location, topic and platform.
Language matters. This phenomenon is complex, and the words we use make a difference. We already have academic research showing that audiences increasingly equate the description “fake news” with poor reporting practices by the professional media.
It is also crucial to avoid describing everything as disinformation when it might not actually be false content, or when it is being shared unknowingly by people who believe it to be true. Recognizing these distinctions is another key element of understanding what is happening.
We live in an age of information disorder, and it is creating new challenges for journalists, researchers and information professionals. To report or not to report? How to word headlines? How to debunk videos and images effectively? How to know when to debunk? How to measure the tipping point? These are all new challenges for those working in today’s information environment. It’s complicated.