Detecting deepfakes

Conversations with Data: #52



Today's media landscape has never been more chaotic or easier to manipulate. AI-generated fake videos are becoming more prevalent and convincing, while orchestrated campaigns are spreading untruths across news outlets, messaging apps and social platforms.

To better understand this, our latest podcast features a conversation between Craig Silverman, Buzzfeed media editor and editor of the latest Verification Handbook, and Sam Gregory, programme director at WITNESS. An expert in AI-driven mis- and disinformation, Gregory discusses deepfakes and synthetic media, along with the tools and techniques journalists need to detect fakery.

You can listen to our entire 30-minute podcast on Spotify, SoundCloud or Apple Podcasts. Alternatively, read the edited Q&A below between Craig and Sam.

What we asked

What are deepfakes and who is most impacted by them?

Deepfakes are new forms of audiovisual manipulation that allow people to create realistic simulations of someone’s face, voice or actions. They enable people to make it seem like someone said or did something they didn’t. They are getting easier to make, requiring fewer source images to build them, and they are increasingly being commercialised.


The latest version of the Verification Handbook is available for free on DataJournalism.com.

When it comes to fakery, what level of media manipulation are journalists most likely to come across?

At the moment, journalists are most likely to encounter shallow fakes -- lightly edited videos that are mis-contextualised. If they encounter a deepfake -- a manipulation made with artificial intelligence -- it is most likely to be in the form of gender-based violence directed towards them. The biggest category of deepfakes right now is those made to attack women, whether ordinary people, journalists or celebrities.

How should newsrooms approach training journalists in detecting synthetic media?

I think there are good investments that journalists can make now. While it doesn't necessarily make sense to train everyone in your newsroom to spot fakes, thinking about ways to access shared detection tools or shared expertise does. Building a broader understanding of media forensics is becoming increasingly necessary for newsrooms, given that deepfakes are a growing form of media manipulation.

WITNESS works with human rights activists all over the world. What is the big difference between the media environments in the Global North versus the Global South when it comes to media manipulation?

I think one dimension is the context. For instance, from discussions with people based in South Africa, the perception is that the government might well be the source of mis- and disinformation. Another aspect is the lack of resources. If large news organisations in the Global North think they're under-resourced, imagine what it is like for a community media organisation that is the only real source of information documenting a favela in Rio or São Paulo.

The skills gap is another issue. Citizen journalists in the Global South often lack access to OSINT training or an opportunity to build media forensic skills. Fakery also affects local communities differently. For instance, if a rumour spreads on WhatsApp, it is often the people in close proximity to the harm who are most affected and may face direct violence. That is less likely to be the case for journalists reporting in Europe or North America.


Sam Gregory is programme director at WITNESS, an international organisation that trains and supports people using video in their fight for human rights. In the new Verification Handbook, he authored a chapter on deepfakes and emerging manipulation technologies.

If a journalist suspects inauthenticity in a video, what should their step-by-step verification approach be?

At the moment, there aren't any good tools available to do this kind of detection. There is, however, a three-step process worth trying. First, look for obvious signs that something has been manipulated: go through the video frame by frame, check the details and watch for distortions that suggest tampering. Second, run the video through a standard visual verification process. The final step is to reach out to someone with expertise in deepfakes to review it. Right now, that access is a critical need. We've got to work out how to provide journalists with greater access to expertise so they know who to reach out to.
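For the frame-by-frame inspection in the first step, a short script can help by pulling individual stills out of a suspect clip for close review. Below is a minimal sketch using the OpenCV library; the file names and the sampling interval are hypothetical choices for illustration, not a prescribed workflow.

```python
# Minimal sketch: extract frames from a suspect video for manual,
# frame-by-frame inspection. Assumes OpenCV (pip install opencv-python);
# "suspect_clip.mp4" and the output directory are hypothetical names.
import os
import cv2

VIDEO_PATH = "suspect_clip.mp4"   # hypothetical input file
OUT_DIR = "frames"                # where extracted stills are written
EVERY_N = 5                       # sample every 5th frame to keep output manageable

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO_PATH)
index = 0
saved = 0
while True:
    ok, frame = cap.read()        # ok is False once the video ends
    if not ok:
        break
    if index % EVERY_N == 0:
        # Save the frame as a still for close visual review
        # (look for warped edges, blurring around the face, odd blinking).
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{index:06d}.png"), frame)
        saved += 1
    index += 1
cap.release()
print(f"Extracted {saved} of {index} frames to '{OUT_DIR}/'")
```

Sampling every fifth frame keeps the output folder manageable for a first pass; dropping EVERY_N to 1 extracts every frame for a closer look at a suspicious segment.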


Craig Silverman is the editor of the latest Verification Handbook. As Buzzfeed's media editor, he is one of the world's leading experts in misinformation and disinformation.

As media manipulation becomes easier, there is a desire to create tools with built-in verification. How might that be problematic?

There's a great push within professional media to be able to attach an owner, a location and clear data to videos and photos in order to verify and publish them as authentic. However, we've got to find a balance between protecting against manipulation and not exposing the people who do this kind of work.

WITNESS is part of a working group within the Content Authenticity Initiative, founded by Adobe, Twitter and The New York Times, which aims to set the industry standard for digital content attribution. In conversations with platforms, we ask questions like: who might get excluded if you build this authenticity infrastructure? For example, WITNESS works with people who document police violence in volatile situations. For privacy and security reasons, they can't have their real identity linked to every piece of media or share very precise location information on every video. They might want to do that for a single video or photo, but they can't do it consistently. So we are concerned about those kinds of technical and privacy issues.
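To make the privacy concern concrete: much of the identifying information in a photo lives in its embedded EXIF metadata, which can include GPS coordinates and device details. Here is a minimal sketch of stripping that metadata before sharing, using the Pillow imaging library; the file names are hypothetical and this is an illustration of the general technique, not WITNESS's or the initiative's actual workflow.

```python
# Illustrative sketch: strip EXIF metadata (which can include GPS
# coordinates and device identifiers) from a photo before sharing.
# Assumes Pillow (pip install Pillow); file names are hypothetical.
from PIL import Image

original = Image.open("documentation_photo.jpg")  # hypothetical input

# Rebuild the image from raw pixel data only, dropping all metadata.
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))
clean.save("documentation_photo_clean.jpg")

print("Saved a copy without EXIF metadata.")
```

Rebuilding the image from pixel data alone is a blunt but reliable way to drop every metadata field at once; a selective approach could instead preserve non-identifying fields while removing only location data.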

Is there a timescale for a deepfake having a large impact on the world?

I hope it is as far away as possible. I do not wish this problem on us. The time frame for a large-scale deepfake is hard to predict. What is predictable is that the process of creating them is becoming easier and more accessible to more people. That's why it's so important to invest in detection and authenticity infrastructure, and to make these tools accessible and available to journalists.

Latest on DataJournalism.com

From handling financial projections and estimates to navigating company reports, journalist Erik Sherman explains how to frame business stories with data on your side. Read our latest long read article here.


Are you an investigative journalist looking to conduct a cross-border investigation? Applications are now open for the Investigative Journalism for Europe Fund! Grants of up to €6,250 are available for teams of journalists based in the EU and EU candidate countries. Apply today!


Our next conversation

In the next episode of our Conversations with Data podcast, we'll be speaking with Professor Denise Lievesley from Green Templeton College, University of Oxford. The discussion will focus on what data journalists can learn from statisticians and the parallels between the two professions.

As always, don’t forget to let us know what you’d like us to feature in our future editions. You can also read all of our past editions here.

Onwards!

Tara from the EJC Data team,

bringing you DataJournalism.com, supported by Google News Initiative.

P.S. Are you interested in supporting this newsletter as well? Get in touch to discuss sponsorship opportunities.
