Data’s Role in the Disinformation War

Journalists use digital tools to fight back

A self-described “college nerd” sat on a porch in Birmingham, Ala., explaining via Zoom how he runs one of the most-followed Twitter feeds on the war in Ukraine. Around 272,000 followers regularly check his account, The Intel Crab.

Justin Peden, 20, is an example of how data is being used to debunk disinformation in today’s high-tech ecosystem. He uses geolocation, satellite imagery, TikTok, Instagram and other sleuthing tools to monitor the deadliest conflict in Europe since World War II.

Scouring the internet for streaming webcams, smartphone videos and still photos to pinpoint Russian troop locations, air bombardments and the destruction of once-peaceful neighborhoods is a routine part of his day. If a Russian commander denies bombing an area, Peden and other war watchers quickly post evidence exposing the falsehood.

“I never dreamed in a million years that what I was doing could end up being so relevant. I just wanted to expose people to what was going on [in Ukraine]. I really am just a regular college kid,” said the University of Alabama at Birmingham junior. His Twitter profile photo is a crab holding a Ukrainian flag.

Open source intelligence has become a potent force in a conflict the United Nations describes as “a grave humanitarian crisis.” Online detectives like Peden use data to break through the fog of war, operating on computers thousands of miles away. Their impact has not gone unnoticed.

“The intelligence gathering, fact-checking, and debunking is happening in real time. The online crowd is also documenting the movement and placement of Russian troops, creating something more than a snapshot of recent history. It is often actionable intelligence,” said veteran science journalist Miles O’Brien during a PBS (Public Broadcasting Service) program in April.

On the air that day, O’Brien singled out Peden as “a highly regarded practitioner in the fast-growing field of open-source intelligence, or OSINT” and noted that his postings on Ukraine are followed “outside and inside the intelligence community.” The Washington Post included him in a story on the “rise of Twitter spies.”

When the Russians invaded on Feb. 24, Peden combed through images on social media, using metadata embedded in still photos and video to pinpoint the time and place they were taken. He learned a valuable lesson along the way.
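Tools for that kind of check are simple to build. Below is a minimal sketch in Python of reading a photo’s EXIF metadata for a timestamp and GPS coordinates, assuming a local file named photo.jpg and the Pillow library; note that most social platforms strip this metadata on upload, so it mainly helps with images obtained directly from a source.

```python
# Minimal sketch: read the capture time and GPS coordinates a camera
# may have embedded in a photo's EXIF metadata.
# "photo.jpg" is a placeholder filename; requires Pillow (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def to_degrees(dms):
    """Convert an EXIF (degrees, minutes, seconds) tuple to a decimal float."""
    d, m, s = dms
    return float(d) + float(m) / 60.0 + float(s) / 3600.0

img = Image.open("photo.jpg")
raw = img._getexif() or {}
exif = {TAGS.get(tag, tag): value for tag, value in raw.items()}

print("Taken:", exif.get("DateTimeOriginal", "no timestamp found"))

gps_raw = exif.get("GPSInfo")
if gps_raw:
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}
    lat = to_degrees(gps["GPSLatitude"])
    lon = to_degrees(gps["GPSLongitude"])
    if gps.get("GPSLatitudeRef") == "S":
        lat = -lat
    if gps.get("GPSLongitudeRef") == "W":
        lon = -lon
    print(f"Coordinates: {lat:.5f}, {lon:.5f}")
else:
    print("No GPS tags (most platforms strip EXIF on upload)")
```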

At one point, he received an image taken from a balcony in the southern port city of Kherson, showing what appeared to be Russian troops on the move. He verified the image and posted the exact coordinates on Twitter.

Suddenly, he realized the tweet might have placed a Ukrainian in danger of being identified by the enemy. When he deleted the post minutes later, it had already been retweeted 100 times. He no longer geolocates content in Russian-occupied areas.

Truth is the first casualty

What is happening now in Ukraine is nothing new. Disinformation has been a factor in conflicts and dictatorships dating back to the Roman Empire. Hitler and Stalin were masters at it. Hence the saying, “The first casualty of war is truth.”

Today, however, there is a major shift in the equation.

With the click of a mouse, anybody can transmit false information to the entire planet, no matter how dangerous, malicious or intimidating. The invasion of Ukraine is a textbook example of how digital untruths fueled a humanitarian crisis and fomented hatred that has led to death and massive destruction.

PBS’s O’Brien noted in a broadcast, “We are seeing a war unfold like never before. What once might have been kept secret is out there for all of us to see. The real secret now? Knowing who to trust and what to believe.”

O’Brien’s comment places journalists at the heart of the debate. Technology enables the spread of falsehoods. Open source intelligence helps set the record straight.

It is important to note that disinformation differs from misinformation in that it not only is false but false as part of a “purposeful effort to mislead, deceive, or confuse.” In short, it is content intended to harm.

Journalists strike back

Historically, media have played a crucial role in debunking falsehoods about major events, from conspiracies about Covid vaccines to climate change, immigration and, most recently, Russia’s invasion of Ukraine. Germany’s Deutsche Welle (DW) is a prime example of how a verification system can expose actors intent on inflicting damage.

In the run-up to the war, DW’s fact-checking team began compiling a file of false claims and propaganda from both sides in the conflict and publishing corrections. The team also made a startling discovery: fakes were being put out under DW’s own name.

In July, they reported that “pro-Russian fabricated posts pretending to be those of the BBC, CNN and DW are fueling the mis- and disinformation war between Russia and Ukraine.” The story cited an example from a Japanese Twitter network. Here is an excerpt:

"It looks like a DW report," a Twitter user comments in Japanese on an alleged DW video about a Ukrainian refugee who is claimed to have raped women in Germany — serious accusations against a man named 'Petro Savchenko'.

The Twitter user writes: `Please share with me the URL of the original video.’ The user seems to doubt the origin of the video — and rightly so. It is not a DW production. It is a fake.”

Among other examples from the DW website: When a Twitter user posted a video purporting to be a live broadcast from Ukraine, a formation of fighter jets could be seen swooping over an urban area. Using reverse image search technology, fact-checkers revealed it was from a 2020 air show near Moscow. Another video allegedly showing fierce air-to-ground combat between Russia and Ukraine was traced to a 2013 computer game.

DW turned to scholars and practitioners for suggestions on how to make fact-checking more effective. The advice is relevant to journalists anywhere in the world. Among the tips:

  • “Emphasize correct information rather than amplifying claims.” Consider using “truth sandwiches”: first state what is true, then introduce the false or misleading statement, then repeat the truth, so the falsehood is not the takeaway.
  • “Provide unambiguous assessments (and avoid confusing labels like ‘mostly false’).”
  • “Avoid drawing false equivalencies between opposing viewpoints.”
  • “Situate fact checks within broader issues – don’t just focus on isolated claims.”
  • “Analyze and explain the strategies behind misinformation – connect fact checks with media and information literacy.”

This list also helps prevent reporters from being duped into spreading false and misleading information. To the deceivers, any amplification of their message in mainstream media is the ultimate success. It gives their lies oxygen and authenticity.

The “Ghost of Kyiv,” a false story about a heroic Ukrainian fighter pilot, made it into the Times of London, a home run for the fakers. A viral video showing the Ghost shooting down a Russian plane was viewed over 1.6 million times on Twitter. The footage actually came from a combat flight simulator video game released in 2008.

Russia’s propaganda model

Gaining a better understanding of how propaganda techniques work to undermine truth is another way to disarm spin masters. A RAND Corporation report on the “Russian Firehose of Falsehood” is a good place to start.

The title refers to a strategy “where a propagandist overwhelms the public by producing a never-ending stream of misinformation and falsehoods.” Even flagrant lies delivered rapidly and continuously, over multiple channels, such as news broadcasts and social media, can be effective in molding public opinion, according to the report.

Published in 2016 at the height of the U.S. presidential election, this analysis provides a road map to how Russia’s disinformation system operates. At the time, Russia was being accused of dirty tricks to influence American voters.

“The report is very much on target for what is going on today. Bucket after bucket of nasty propaganda is being dumped on us,” said social scientist Christopher Paul, the report’s co-author. His research includes counterterrorism, counterinsurgency and cyber warfare.

The report outlines and analyzes four main components of the Russian model:

  • High volume and multi-channel
  • Rapid, continuous, and repetitive
  • Lacks commitment to objective reality
  • Lacks commitment to consistency

The Russians command a powerful arsenal of disinformation tools.

Besides the usual channels, such as social media and satellite imagery, a vast network of internet trolls attacks any views or information that run counter to Vladimir Putin’s narrative. The trolls infiltrate online discussion forums, chat rooms and websites, and maintain thousands of fake accounts on Twitter, Facebook and other platforms.

Their mantra: Repetition works. “Even with preposterous stories and urban legends, those who have heard them multiple times are more likely to believe that they are true,” said the report.

The RAND study offered best practices on how to beat the Russian Firehose of Falsehood, among them: “Don’t direct your flow of information directly back at the falsehood; instead, point your stream at whatever the firehose is aimed at, and try to push that audience in more productive directions.”

Other tips included:

  • Warnings at the time of initial exposure to misinformation
  • Repetition of the refutation or retraction
  • Corrections that provide an alternative story to help fill the gap in understanding when false “facts” are removed

“It all goes back to journalistic standards. All journalists really need to do to turn the screws is to be as professional as possible: double-checking, verifying sources, confirming attribution, using data to be accurate and reliable. The burden of truth, the burden of evidence is much higher,” said Paul, a principal investigator for defense and security-related research projects.

Research by a disinformation team at the Stanford Internet Observatory (SIO) supports that notion and provides fodder for data journalists. Led by scholar Shelby Grossman, the team identifies how to spot disinformation trends in the Russia-Ukraine war and defend against them.

Following is a sample of their findings:

The trend: Old media circulating out of its original context

Grossman saw a video on her TikTok feed of a paratrooper recording himself jumping out of a plane. It appeared he was a Russian soldier invading Ukraine. In fact, the video was from 2015.

How to spot: If something seems suspicious or outrageous, use reverse image searching to verify. Upload a screenshot of the photo or video into the search bar of Google Images or TinEye to check where else it might have appeared.
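Reverse image matching rests on the fact that a re-encoded, resized or re-captioned copy of a frame stays visually close to the original. Here is a minimal sketch of that principle, assuming placeholder file names and the third-party imagehash package; it illustrates the matching idea, not how Google Images or TinEye are actually implemented.

```python
# Sketch: flag near-duplicate frames with a perceptual hash, which
# changes little under re-encoding, resizing or recompression.
# File names are placeholders; requires Pillow and imagehash
# (pip install Pillow imagehash).
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("viral_frame.png"))
archive = imagehash.phash(Image.open("2020_airshow_frame.png"))

# Hamming distance between the two 64-bit hashes; small values
# suggest versions of the same underlying image.
distance = suspect - archive
print(f"Hash distance: {distance}")
if distance <= 10:  # rough threshold for this sketch
    print("Likely the same frame; check the earlier source's date.")
```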

The trend: Hacked accounts

A Belarusian hacking group took over Ukrainian Facebook accounts and posted videos claiming to be of Ukrainian soldiers surrendering.

How to spot: “Sometimes the name of the account is changed, but the handle – the username, often denoted by the @ symbol – isn’t,” advised Grossman. “Just spending 10 seconds looking at an account, in some cases one can realize that something is weird.”
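That ten-second eyeball test can also be roughed out in code. The toy heuristic below, with invented sample records and an arbitrary similarity threshold, flags accounts whose display name shares little text with the fixed @handle; anything it flags would still need human review.

```python
# Toy heuristic for the handle-vs-display-name check described above.
# The account records and the 0.4 threshold are invented for illustration.
from difflib import SequenceMatcher

accounts = [
    {"handle": "@kyiv_daily_news", "display_name": "Kyiv Daily News"},
    {"handle": "@olena_k_1994", "display_name": "Ukrainian Army Press Office"},
]

for acct in accounts:
    # Compare the handle (minus decoration) against the display name.
    similarity = SequenceMatcher(
        None,
        acct["handle"].lstrip("@").replace("_", " ").lower(),
        acct["display_name"].lower(),
    ).ratio()
    if similarity < 0.4:
        print(f"Check {acct['handle']}: display name "
              f"{acct['display_name']!r} does not match the handle")
```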

The trend: Pro-Kremlin narratives

Before the invasion, claims began circulating that the West was fueling hysteria about impending attacks in order to boost President Biden politically.

How to spot: Look for reports out of Russian state-affiliated media. SIO reported that both Facebook and Twitter try to label these accounts, including some that are not commonly known to be connected to the Russian state.

Grossman would like to see platforms be more transparent and proactive. “I think that would be useful and important. It gives people information about the political agenda of the content and might give them pause before sharing,” she said in SIO’s March report.

Veteran policy expert Kevin Sheives has another view. He believes civil society, not government and social media companies, is better suited to fight back against disinformation.

“We are looking for solutions in the wrong place. The campaign against disinformation should have civil society at its core,” said Sheives, associate director of the International Forum for Democratic Studies at the National Endowment for Democracy.

He points out that social media platforms and governments are not designed to prioritize values over business or national interests. That leaves it to journalists, fact-checkers, community groups, and advocates to create counter-disinformation networks.

Countering disinformation networks

“TikTok algorithm directs users to fake news about Ukraine war, study says.” This headline appeared in The Guardian on March 21, 2022.

An investigation conducted by NewsGuard, a website that monitors online disinformation, found that a new TikTok account “can be shown falsehoods about the Ukraine war within minutes of signing up to the app.”

Among NewsGuard’s findings: “At a time when false narratives about the Russia-Ukraine conflict are proliferating online, none of the videos fed to our analysts by TikTok’s algorithm contained any information about the trustworthiness of the source, warnings, fact-checks, or additional information that could empower users with reliable information.”

How NewsGuard did it: Researchers created new accounts on the app and spent 45 minutes scrolling through the For You Page, stopping to view in full any video that looked like it was about the war in Ukraine, according to the report.

Around the same time, the BBC listed several categories of misleading content about the war appearing on TikTok, describing it as “one of the leading platforms for snappy false videos about the war in Ukraine which are reaching millions.”

A TikTok spokesperson noted the company has added more resources to fact-check Russian and Ukrainian content, including local language experts, and beefed up safety and security resources “to detect emerging threats and remove harmful misinformation.”

Since the invasion, many social media platforms and messaging services have taken steps to block state-sponsored or state-affiliated media or add labels to alert users to the source of the information. The jury is out on how well these efforts will work to improve transparency and credibility of information.

The stakes are high. Misinformation and disinformation can have life or death consequences and undermine the democratic way of life. Data journalists are in the thick of this expanding field of digital warfare. Are they up to the challenge as more sophisticated methods of deception sweep the globe?

Resources that can help
