5. Verifying and questioning images

Written by: Hannah Guy, Farida Vis, Simon Faulkner

Farida Vis is director of the Visual Social Media Lab and a professor of digital media at Manchester Metropolitan University. Her academic and data journalism work focuses on the spread of misinformation online. She has served on the World Economic Forum’s Global Agenda Council on Social Media (2013-2016) and the Global Future Council for Information and Entertainment (2016-2019) and is a director at Open Data Manchester.

Simon Faulkner is a lecturer in art history and visual culture at Manchester Metropolitan University. His research is concerned with the political uses and meanings of images, with a particular focus on activism and protest movements. He is also the co-director of the Visual Social Media Lab and has a strong interest in the development of methods relevant to the analysis of images circulated on social media.

Hannah Guy is a Ph.D. student at Manchester Metropolitan University, examining the role of images in the spread of disinformation on social media. She is a member of the Visual Social Media Lab, where her current projects explore images shared on Twitter during the emergence of the Black Lives Matter movement, and visual media literacy as a way to combat misinformation in Canadian schools.

Communication on social media is now overwhelmingly visual. Photos and videos are persuasive, compelling and easier to create than ever, and they can trigger powerful emotional responses. As a result, they have become potent vehicles of mis- and disinformation.

To date, the discussion of images within the context of mis- and disinformation has either focused on verification techniques or, more recently, been disproportionately focused on deepfake videos. Before considering deepfakes, as we do in the next chapter, it’s essential to understand the more common low-tech use of misleading photos and videos, especially those shown out of context.

Given the widespread use of visuals in attempts to influence and manipulate public discourse, journalists must be equipped with fundamental image verification knowledge and with the ability to critically question and assess images to understand how and why they are being deployed. This chapter focuses on developing this second set of skills, and uses a framework we developed at the Visual Social Media Lab.

Building on verification

In the Visual Social Media Lab, we focus on understanding the roles online images play within society. While we mainly examine still images, this encompasses a range of different types: photos, composite images, memes, graphic images and screenshots, to name a few. Tackling visual mis- and disinformation requires its own set of strategies. To date, image verification by journalists has focused on establishing whether an image is what it appears to be. In the original “Verification Handbook,” Trushar Barot outlined four core principles for image verification, which remain invaluable. The First Draft Visual Verification Guide is another useful resource that builds on these principles by focusing on five questions for photos and videos:

  1. Are you looking at the original version?
  2. Do you know who captured the photo?
  3. Do you know where the photo was captured?
  4. Do you know when the photo was captured?
  5. Do you know why the photo was captured?

Standard tools that can help with investigating photos and video include InVID, Yandex Image Search, TinEye, Google Image Search and Forensically. These verification methods focus on the origin of the image.
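
For the “when,” “where” and “who” questions, it can also help to inspect an image file’s embedded EXIF metadata directly. The short Python sketch below is our own illustration, not part of the tools listed above; it assumes a locally saved file (the name photo.jpg is hypothetical) and uses the Pillow library. Note that most social media platforms strip EXIF data on upload, so reverse image search with the tools above usually remains necessary.

```python
# Minimal sketch: inspect EXIF metadata with Pillow as a first check on when
# and by whom a photo may have been captured. Assumes a locally saved file;
# most social platforms strip EXIF on upload, so this often comes up empty.
from PIL import Image, ExifTags

def read_exif(path):
    """Return a dict mapping human-readable EXIF tag names to values."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = read_exif("photo.jpg")  # hypothetical file name
    # Tags relevant to the "when" and "who" questions, if present.
    for key in ("DateTime", "Make", "Model", "Artist", "Copyright", "Software"):
        if key in tags:
            print(f"{key}: {tags[key]}")
    # GPS coordinates and the original capture timestamp live in EXIF sub-IFDs
    # and require Exif.get_ifd(); they are omitted here for brevity.
```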

While that approach remains crucial, the strategies and techniques commonly used in mis- and disinformation, and in other forms of media manipulation, mean it is also important to consider how images are used and shared, and by whom, as well as what role journalists play in potentially amplifying problematic images further.

To go beyond standard forms of image verification, we have combined methods from art history with questions designed specifically for mis- and disinformation content. Our framework, “20 Questions for Interrogating Social Media Images,” designed collaboratively with First Draft and journalists, is an additional tool journalists can use when investigating images.

Interrogating social media images

As the title suggests, the framework consists of 20 questions that can be asked of any social media image (still image, video, GIF, etc.), with an additional 14 questions aimed at digging deeper into different aspects of mis- and disinformation. The questions do not have to be asked in a set order, but these five are useful to address first:

  1. What is it?
  2. What does it show?
  3. Who made it?
  4. What did it mean?
  5. What does it mean?

Questions 1 to 3 are similar to established approaches to verification and are concerned with establishing what kind of image it is (a photograph, video, etc.), what it depicts and who made it. However, questions 4 and 5 take us somewhere else. They introduce considerations of meaning that encompass what the image shows, but also cover any meanings produced by the use of the image, including through its misidentification. When thought about together, questions 4 and 5 also allow us to focus on the changing nature of the meaning of images and on the ways that meanings ascribed to images through reuse can be significant in themselves. This doesn’t simply concern what images are made to mean in a new context and how this misidentifies what they show, but also what the effects of such misidentifications are. This approach is no longer about verification, but more akin to the analysis of the meanings of images performed in disciplines such as art history and photo theory.

In the development and early deployment of this framework with journalists, we often heard that they had never thought about images in this much detail. Many said the framework helped them recognize that images are complex forms of communication, and that a clear method is required to question them and their meaning.

Most of the time, you will not need to answer all 20 questions in the framework to get a comprehensive understanding of what’s going on with an image. The questions are there to fall back on. In our own work, we found them particularly useful when dealing with complex high-profile news images and videos that have received significant media attention and scrutiny. To show what this looks like in practice, here are three case studies with high-profile examples from the U.K. and U.S.

Case Study 1: Breaking Point, June 2016

What is it?

The “Breaking Point” image was a poster used by the UK Independence Party (UKIP) as part of its campaign during the 2016 EU referendum. It used a photograph of the refugee crisis taken by the photojournalist Jeff Mitchell in October 2015.

What does it show?

A large queue of Syrian and Afghan refugees being escorted by Slovenian police from the border between Croatia and Slovenia to the Brezice refugee camp. The poster used a cropped version of the photograph and added the text “BREAKING POINT: The EU has failed us all” and “We must break free of the EU and take back control of our borders.” Because the refugees appear to move toward the viewer en masse, the image has a strong visual impact.

Who made it?

The Edinburgh-based advertising firm Family Advertising Ltd., which was employed by UKIP for its Brexit campaign.

What did it mean?

UKIP did not try to misrepresent the content of the photograph, but layered further meaning onto it by adding slogans. Exploiting existing anti-immigrant and racist sentiment, this manipulation aimed to generate further fear of immigration and refugees on the basis of unsubstantiated claims and insinuations about EU border policy.

What does it mean?

In November 2019, in the run-up to the U.K. general election, the campaign organization Leave.EU also used a tightly cropped version of the photograph in an anti-immigration image uploaded to Twitter, making a clear reference back to UKIP’s 2016 poster.

What other questions are useful to ask?

Is the actor official or unofficial? The key actor in creating and distributing the image, UKIP, is an official political party and not the type of actor usually associated with mis- and disinformation.

Is it similar to or related to other images? Some likened the poster to Nazi propaganda; it resonates both with previous anti-migrant imagery and a longer history of U.K. political posters involving queues, including one used by UKIP in May 2016 focused on immigration from the EU.

3 key takeaways:

  • Official political parties and politicians can be actors in the spread of misinformation.
  • Misinformation does not necessarily involve fake images or even the misidentification of what they show. Sometimes images can be used to support a message that misrepresents a wider situation.
  • Some misinformation requires more than verification. There is a need to critically examine how real images are used to manipulate, and what such images do and mean.

Examples of media coverage of this case:

Nigel Farage's anti-migrant poster reported to police (The Guardian)

Brexit: UKIP's 'unethical' anti-immigration poster (Al-Jazeera)

Nigel Farage accused of deploying Nazi-style propaganda as Remain crash poster unveiling with rival vans (The Independent)

Case Study 2: The Westminster Bridge Photograph, March 2017

What is it?

A tweet from a Twitter account that appeared to be operated by a white Texan man and that received significant media attention. The account was later revealed to be run by Russia’s Internet Research Agency and used to spread mis- and disinformation. The tweet shared a photograph from the aftermath of the Westminster Bridge terrorist attack in London (March 22, 2017).

What does it show?

A Muslim woman walking past a group of people and a person lying on the ground who has been injured in the terrorist attack. The tweet’s text has Islamophobic connotations, claiming that the woman is purposefully ignoring the injured person, and it includes an overtly anti-Islamic hashtag.

Who made it?

The Internet Research Agency worker who operated the @SouthLoneStar Twitter account, though it was not known to be an IRA account at the time of the tweet. The picture itself was taken by press photographer Jamie Lorriman.

What did it mean?

In March 2017, this appeared to be a tweet from a right-wing Texan Twitter user who interpreted the photograph as showing that the Muslim woman did not care about the injured person, and who suggested this example spoke to a larger truth about Muslims.

What does it mean?

Today, the tweet stands as evidence of the Internet Research Agency purposefully spreading Islamophobic disinformation in the aftermath of a terrorist attack.

What other questions are useful to ask?

What responses did it receive? This tweet received a significant response from the mainstream media. Dozens of U.K. newspapers reported on it, in some cases more than once. While most of these articles condemned @SouthLoneStar, the coverage also moved the tweet out of the confines of social media and exposed it to a mainstream audience. After the image spread, the woman in the photo spoke out to say that she was distraught over the attack at the time, and that “not only have I been devastated by witnessing the aftermath of a shocking and numbing terror attack, I’ve also had to deal with the shock of finding my picture plastered all over social media by those who could not look beyond my attire, who draw conclusions based on hate and xenophobia.”

Is it similar to or related to other images? The image that circulated most widely was one of seven photographs taken of the woman. Others clearly showed that she was distraught, something few publications picked up on.

How widely and for how long was it circulated? The added mainstream media attention meant the tweet spread widely. Within a few days, however, circulation slowed significantly. It was recirculated in November 2017, when it was revealed that @SouthLoneStar was operated by the Internet Research Agency, but mainstream media coverage of this later circulation was notably smaller than in March.

3 key takeaways:

  • Visual disinformation is not always wholly false and can involve elements that are based on truth. The photograph is real, but its context has been manipulated and falsified, and it relies on the reader/viewer not knowing what the woman was actually thinking in that moment.
  • Journalists should think carefully about bringing further attention to such emotionally fueled, controversial and potentially harmful disinformation by reporting on it, even with positive intentions.
  • More attention could be paid toward correcting disinformation-based news stories and ensuring that the true picture of events is most prominent. The limited coverage in November means that some readers may not have found out that the tweet was Russian disinformation.

Examples of media coverage of this case:

People are making alarming assumptions about this photo of ‘woman in headscarf walking by dying man’ (Mirror)

‘Who is the real monster?’ Internet turns on trolls who criticised ‘indifferent’ Muslim woman seen walking through terror attack (Daily Mail)

British MP calls on Twitter to release Russian ‘troll factory’ tweets (The Guardian)

Case Study 3: Lincoln Memorial confrontation, January 2019

What is it?

A video of a group of students from Covington Catholic High School, who took part in the pro-life March for Life, and an Indigenous man, Nathan Phillips, who was accompanying other Native Americans in the Indigenous Peoples March. Both marches took place in Washington, D.C., on the same day.

What does it show?

A confrontation between one of the Covington Catholic High School students and Phillips. The two demonstrations converged on the plaza in front of the Lincoln Memorial, with a large group of Covington students wearing MAGA hats supposedly facing off against Phillips. This painted a picture of a lone Native American confronted by a crowd of young alt-right bullies.

Who made it?

The video was first uploaded to Instagram by an Indigenous Peoples March participant, where it received nearly 200,000 views. Hours later, it was uploaded to Twitter, where it received 2.5 million views before being deleted by the original account. The video was then reposted across different social media sites, subsequently grabbing mainstream media attention. Within 24 hours, several articles about the video had been published.

What did it mean?

The initial narrative that spread online presented the video as a straightforward face-off between Phillips and the students, in which the students were seen as intentionally taunting and ganging up on Phillips.

What does it mean?

A much longer video of the encounter, which emerged several days after the first video, painted a more complex picture. The memorial was also occupied by a group of Black Hebrew Israelites, who were taunting passersby, including the Covington students and Indigenous Peoples March participants. This led to a heated standoff among all three groups, with Phillips reportedly trying to defuse the situation. This is where the first video begins.

What other questions are useful to ask?

What contextual information do you need to know?

Without the longer video, and without the knowledge that the Black Hebrew Israelites were present and actively fueling the conflict, all context is lost. While the students were recorded saying racist things, what led up to this was more complicated than alt-right teens simply ganging up on an elderly Indigenous man.

Where was it shared on social media?

While the video was originally shared on Instagram by someone who attended the Indigenous Peoples March, that post received limited attention. It was subsequently reuploaded to Twitter and YouTube by other users, which greatly amplified awareness and secured mainstream media attention. The attention therefore came from these reuploads, not from the original Instagram video.

3 key takeaways:

  • When such emotion-laden visuals spread so quickly online, it is easy to lose context and allow a superficial, reactionary online narrative to take control.
  • In retrospect, some journalists argued that the initial articles served to fuel the controversy and further push the incorrect narrative. This suggests that, without proper investigation, mainstream media can unintentionally continue the spread of misinformation.
  • The speed with which the video spread online meant a lot of mainstream media outlets “fell for” the narrative pushed on social media and did not investigate further. Many news sites were forced to retract or correct their articles once true events emerged, and some were sued.

Examples of media coverage of this case:

Native American Vietnam Vet Mocked And Surrounded By MAGA Hat-Wearing Teens (UNILAD)

Outcry after Kentucky students in Maga hats mock Native American veteran (The Guardian)

Fuller video casts new light on Covington Catholic students' encounter with Native American elder (USA Today)

Conclusion

So much of what is shared on social media is visual. Journalists must be equipped with the ability to critically question and assess images in order to understand what they show and the intent behind their use. The speed with which visual misinformation can spread further highlights the need for journalists to proceed with caution and to investigate image-related stories fully before publishing. The “20 Questions for Interrogating Social Media Images” framework is an additional tool journalists can use when investigating images, especially when a story is primarily centered on something visual. Not every question in the framework is relevant to every image, but the five basic questions are a strong starting point and build on basic verification skills, with the aim of producing more accurate and more in-depth reporting.

APPENDIX

Below is the full list of questions from the 20 Questions framework, including 14 prompt questions specifically focused on mis- and disinformation. As we have noted in the chapter, there are five questions that are useful to address first (in bold). The prompt questions relate to either the agent, the message or the interpreter of the mis- and disinformation:

  • AGENT (A) - Who created and distributed the image, and what was their motivation?
  • MESSAGE (M) - What format did the image take, and what are its characteristics?
  • INTERPRETER (I) - How was the message interpreted, and what actions were taken?
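
As a purely illustrative sketch (our own, not part of the published framework), the five starter questions and the three prompt categories above can be encoded as a simple checklist structure, for example to back a lightweight newsroom tool that records a journalist’s notes against each question. All names in the code are hypothetical.

```python
# Hypothetical sketch: the framework's starter questions and Agent / Message /
# Interpreter prompt categories encoded as a simple checklist structure.
from dataclasses import dataclass, field

CATEGORIES = {
    "A": "Agent: who created and distributed the image, and what was their motivation?",
    "M": "Message: what format did the image take, and what are its characteristics?",
    "I": "Interpreter: how was the message interpreted, and what actions were taken?",
}

STARTER_QUESTIONS = [
    "What is it?",
    "What does it show?",
    "Who made it?",
    "What did it mean?",
    "What does it mean?",
]

@dataclass
class ImageInterrogation:
    """Collects a journalist's notes against the framework's questions."""
    image_ref: str                       # URL or archive reference for the image
    answers: dict = field(default_factory=dict)

    def answer(self, question: str, note: str) -> None:
        self.answers[question] = note

    def unanswered(self) -> list:
        return [q for q in STARTER_QUESTIONS if q not in self.answers]

# Example usage (hypothetical):
# case = ImageInterrogation("https://example.org/archived-post")
# case.answer("What is it?", "Poster based on a 2015 photojournalism image")
# print(case.unanswered())
```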


The framework takes inspiration from:

  1. The "Interrogating the work of Art" diagram (Figure 2.4, p.39), in (2014, 5th Edition), Pointon, M. History of Art: A Student’s Handbook, London and New York: Routledge.
  2. "Questions to ask about each element of an example of information disorder" (Figure 7, p. 28), in (2017), Wardle, C. and Derakshan, H., Information Disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report DGI(2017)09.

