Conversations with Data: #51
Today's information environment has never been more chaotic or easier to manipulate. To help journalists navigate this uncertainty, the European Journalism Centre, supported by Craig Newmark Philanthropies, is releasing the Verification Handbook For Disinformation and Media Manipulation.
This week's podcast features a conversation between BuzzFeed's Craig Silverman, the editor of the handbook, and Dr Claire Wardle, the executive director of First Draft. The pair spoke about how the verification landscape has changed since 2014, along with the tools and techniques journalists need to spot, monitor and debunk disinformation.
What we asked
You contributed to this Verification Handbook and an earlier edition. What was happening in 2014 when the last version came out?
Back then, we weren't really talking about misinformation or even disinformation. I remember writing that original Verification Handbook because lots of newsrooms were struggling with breaking news. They needed the tools to understand how to verify a video or image that emerged on Twitter or Facebook during a breaking news event.
With Hurricane Sandy and the Arab Spring, there was a growing awareness that online content existed that journalists could use in stories. Content could be old or a hoax, but it was nothing like the world that we live in today. While some newsrooms like the BBC and NPR understood this, many relied on other organisations like Reuters or Storyful to verify content. Most newsrooms did not understand how to do this, nor did they view it as a required skill set.
In 2014 newsrooms avoided debunking inaccurate news. How is that different from today's news landscape?
In many ways, if you have a headline with the words "bots", "hacking" or "Russia", it's a very human response to want to click on those kinds of stories. Unfortunately, there is now a kind of business model in debunking misinformation, and it does get a lot of page views. It's important to have a conversation about the long-term consequences of this type of reporting. Does it drive down trust in organisations, in democracy or in the news media itself?
What should journalists be aware of when understanding how humans process and share information?
It doesn't matter who we are, how educated we are, or where we are on the political spectrum -- all of us are vulnerable to sharing misinformation if it taps into our existing world views. When we do these kinds of investigations, we need to be rigorous. We need to use these tools. We need to be fact-based. But when we're trying to understand how this information spreads, we can't just assume that by pushing more facts into the ecosystem, we're going to slow it down. We need to understand why people are sharing it. And sometimes people are sharing it with malicious intent.
So why might people be sharing this?
Let's talk about three motivations for why people do this. The first is financial: people want you to click on scams, or on a website for advertising revenue. The second is political: it might be foreign interference, or, at a domestic level, an attempt to shape the way people think about politicians, issues or policies. The third is social and psychological: some people do this to see if they can get away with it. And that motivation is sometimes the hardest to understand.
At First Draft, you train a lot of newsrooms in verification. Tell us about the principles for investigating disinformation you try to instil in people.
The biggest lesson for everybody who is working in this space is if you go searching for this, you will always find an ongoing disinformation or misinformation campaign. The challenge for those of us who are doing reporting is when do you do that reporting? And I think it's essential that we have journalists monitoring these spaces. It's vital that we understand which conspiracy theories are bubbling up, along with whether or not there's a coordinated attempt to impact trending hashtags. But the challenge is just because you can find it doesn't mean you necessarily have to report it.
What are some of the criteria you have for deciding whether to report on it?
We have five different kinds of criteria that we use to make that distinction. For instance, what kind of engagement is it getting? Is it moving across platforms? Has an influencer shared it? For all journalists, there's this instinctive response to shine a light on it. That's the central paradigm of journalism. But what does that mean when bad actors are deliberately targeting journalists, knowing that that's their inclination? And so when you know that bad actors are hoping that you'll report on that conspiracy, how can you make the decision about whether or not to report it? And if you do report, how do you report it? What's the headline? What's the lead or image?
What are some of the principles and cautions for journalists investigating inauthentic activity and figuring out who is behind it?
That is the $64 million question, because newsrooms want to be able to say in real-time who is behind this. It's also partly because of the reporting about how Russia meddled in the 2016 U.S. election -- it's the go-to explanation. The problem is that people want immediate answers. But the truth is, because anybody can be anyone on the Internet, it is very difficult to say at any given time that this is who is behind a campaign.
Even if you find out that the person behind the campaign is living in a basement in Ohio, you can't necessarily say that there isn't a connection to a state actor. It's very difficult to get to these answers even by talking to the social platforms. They either don't want to give up this information, or they don't have enough information.
What else should journalists be mindful of with these types of investigations?
The other thing journalists have to be careful about is that there is now an industry of these new kinds of data-driven platforms, where reports will show evidence that there was Russian influence in a particular moment. And as a journalist, it's very easy to write that up almost like a press release. But unless the journalist can ask questions about the data, we have to be very careful about these claims. Just as we need to teach journalists how to read academic research, there is a need to teach them how to read this kind of data.
Finally, what other resources can help journalists to question the data?
First Draft recently worked with the Stanford Internet Observatory to create a new website called Attribution News. It takes you through the questions, tips and techniques to understand how to question the data or question sources when they're making these sorts of claims. This resource, combined with the latest Verification Handbook, can help journalists to find, monitor and investigate the data.
Latest on DataJournalism.com
As the COVID-19 crisis deepens, audiences around the world are seeking stories that show the impact of the virus on their daily lives. In our latest Long Read article, journalist and professor Paul Bradshaw explains how journalists can use data to generate relevant story ideas that resonate with their audience.
Our next conversation
In the next few episodes of our Conversations with Data podcast, we'll be speaking with other contributors to the latest edition of the Verification Handbook. The discussions will continue to centre around case studies and advice for spotting misinformation, disinformation and media manipulation.
As always, don’t forget to let us know what you’d like us to feature in our future editions. You can also read all of our past editions here.
Tara from the EJC Data team,
P.S. Are you interested in supporting this newsletter as well? Get in touch to discuss sponsorship opportunities.
If you experience any problems, feel free to contact us at email@example.com