The Lifecycle of Media Manipulation
Written by: Joan Donovan
Dr. Joan Donovan is the Research Director at Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy
In an age when a handful of powerful global tech platforms have disrupted the traditional means by which society is informed, media manipulation and disinformation campaigns challenge all political and social institutions. Hoaxes and fabrications are propagated by a mixed group of political operatives, brands, social movements and unaffiliated “trolls” who have developed and refined new techniques to influence public conversation, wreaking havoc on a local, national and global scale. There’s widespread agreement that media manipulation and disinformation are important problems facing society. But defining, detecting, documenting and debunking disinformation and media manipulation remains difficult, especially as attacks cross professional sectors such as journalism, law and technology. Understanding media manipulation as a patterned activity is therefore an essential first step in working to investigate, expose and mitigate these campaigns.
Defining media manipulation and disinformation
To define media manipulation, we first split the term into its two parts. In its most general form, media is an artifact of communication. Examples include text, images, audio and video in material and digital mediums. When studying media, any such artifact can be used as recorded evidence of an event. Crucially, media is created by individuals for the purpose of communicating. In this way, media conveys some meaning across individuals, but interpreting that meaning is always relational and situated within a context of distribution.
To claim media is manipulated is to go beyond simply saying that media is fashioned by individuals to transmit some intended meaning. The Merriam-Webster dictionary defines “manipulate” as “to change by artful or unfair means so as to serve one’s purpose.” While it can sometimes be difficult to know the exact purpose a single artifact was created to serve, investigators can determine the who, what, where and how of its communication to help establish whether manipulative tactics were used as part of the distribution process. Manipulation tactics can include cloaking one’s identity or the source of the artifact, editing to conceal or change the meaning or context of an artifact, and tricking algorithms by using artificial coordination, such as bots or spamming tools.
In this context, disinformation is a subgenre of media manipulation, and refers to the creation and distribution of intentionally false information for political ends. Technologists, experts, academics, journalists and policymakers must agree on the distinctive category of disinformation because efforts to fight against disinformation require the cooperation of these groups.
For our part, the Technology and Social Change research team (TaSC) at Harvard Kennedy School’s Shorenstein Center is using a case study approach to map the life cycle of media manipulation campaigns. This methodological approach seeks to analyze the order, scale and scope of manipulation campaigns by following media artifacts through space and time, drawing together multiple relationships to sort through the tangled mess. As part of this work, we’ve developed an overview of the life cycle of a media manipulation campaign, which is useful for journalists as they attempt to identify, track and expose media manipulation and disinformation.
The life cycle has five points of action, where the tactics of media manipulators can be documented using qualitative and quantitative methods: (1) campaign planning, (2) seeding across platforms, (3) responses by journalists, activists and others, (4) changes to the information ecosystem, and (5) adjustments by manipulators to the new environment. Note that most manipulation campaigns are not “discovered” in this order. Instead, when researching, look for any one of these points of action and then trace the campaign backward and forward through the life cycle.
Case study: ‘Blow the Whistle’
Let’s examine the social media activity around the whistleblower complaint made about the activity of President Donald Trump related to Ukraine to see how a media manipulation campaign unfolds, and how ethical action by journalists and platforms early in the life cycle can help thwart manipulation efforts.
Planning and Seeding (Stages 1 & 2) — In the conspiracy theory media ecosystem, the whistleblower’s identity was treated as already known, and an alleged name circulated on blogs, Twitter, Facebook, YouTube videos and discussion forums. Importantly, unique names can substitute for keywords and hashtags, functioning as discrete, searchable data points. There was a concerted push to spread the alleged name and the person’s photo. Yet the name remained locked in an online media echo chamber of right-wing and conspiracy accounts and entities. Even with this coordinated effort by conspiracy-themed influencers to push the alleged whistleblower’s name into the mainstream, they were not able to break out of their own filter bubbles. Why was that?
Responses by journalists, activists etc. (Stage 3) — In contrast, leftist and centrist media did not print the name of the alleged whistleblower or amplify claims that he had been outed. Mainstream media outlets refrained from calling attention to the circulation of this person’s name in the social media ecosystem, even though it was a newsworthy story for reporters on the tech and politics beats. Those that did cover it often emphasized how the act of circulating this name was an attempt to manipulate the discussion around the whistleblower’s complaint, and avoided spreading the name. This is due in large part to the ethics of journalism, where reporters have a special duty to protect the anonymity of sources, a duty that extends to whistleblowers.
Changes to the information ecosystem (Stage 4) — While mainstream journalists omitted it, the alleged name of the whistleblower, “Eric Ciaramella,” functioned as a unique keyword. This meant that people who searched for it could pull up a wide variety of content rooted in the conspiracy-influenced point of view. In addition to ethical journalists effectively turning down a story that could have attracted significant traffic, each platform company began actively moderating content that used the alleged whistleblower’s name as a keyword. YouTube and Facebook removed content that used the name, while Twitter prevented it from trending. Google Search, however, continued to allow the name to be queried, returning thousands of links to conspiracy blogs.
Adjustments by manipulators (Stage 5) — Manipulators, aggravated by these attempts to prevent the spread of misinformation, changed their tactics. Instead of pushing content with the alleged whistleblower’s name, they began circulating images of a different white man (with glasses and a beard) who resembled the image they had previously circulated with the name. These new images were coupled with a “deep state” conspiracy narrative that the whistleblower was a friend of establishment Democrats, and therefore had partisan motives. In fact, this was an image of Alexander Soros, the son of billionaire investor and philanthropist George Soros, a frequent target of conspiracy theories.
When that failed to generate media attention, President Trump’s Twitter account, @RealDonaldTrump, retweeted to his 68 million followers an article giving the alleged whistleblower’s name, emphasizing that “The CIA whistleblower is not a real whistleblower!” The original tweet came from @TrumpWarRoom, his campaign’s official, verified account. A cascade of media coverage followed, including from many major mainstream outlets, all of which took pains to remove or obscure the alleged whistleblower’s name. On social media, many people called for the whistleblower to testify in the impeachment hearings, where his name was invoked alongside those of other important potential witnesses, broadening the possibility that others would stumble on it when searching for those names. And thus began a new cycle of media manipulation.
Queries for the name of the whistleblower are on the rise, and conspiracy theories abound on blogs about his personal and professional motivations for informing on Trump’s activities. Journalists reporting on these tweets oscillate between discussing witness intimidation, noting that an act like this can deter future whistleblowers, and indulging a lurid curiosity by reporting on the gossip surrounding Trump’s motive for outing the alleged whistleblower. It is laudable that some media organizations are trying to hold elites to account, but the task is impossible unless the platform companies address how their products have become useful political tools for media manipulation and spreading disinformation.
Documenting the life cycle
Media manipulators attempted to “trade up the chain” by seeding a name and photos on social media so that large, legitimate media outlets would eventually amplify them and platforms would allow them to trend and become easily discoverable. But decisions and actions by platforms and journalists meant the attempt to push the alleged identity of the whistleblower into mainstream consciousness largely failed, until a newsworthy figure pushed the issue. While many media organizations strive to abide by ethical guidelines, social media has become a weapon of the already powerful to set media agendas and drive dangerous conspiracy theories.
Generally speaking, though, this case represents a significant improvement over prior efforts to stop the spread of disinformation, in which journalists amplified disinformation campaigns as they tried to debunk them, and platform companies felt no duty to provide accurate information to audiences. This overall shift is promising, but accountability for elites is still lacking. For journalists and researchers alike, the stakes of detecting, documenting and debunking media manipulation campaigns are high. In this hyperpartisan moment, any claim to name a disinformation campaign may also bring hordes of trolls and unwanted attention. Grappling with the content and the context of disinformation requires us all to document with forensic rigor how campaigns start, change and end — and to recognize that every perceived ending of a campaign may very well be a new beginning.