
Hard data and soft statistics: a guide to critical reporting

Basic metrics and indices are not always objective reflections of reality

It is generally believed by the layman, the expert, and the journalist alike that numbers are hard and judgements are soft. This means that when we see a number, or a statistic, we treat it as objective, accurate, and incontestable, whereas when we hear that someone considers, believes, or has an opinion, our sceptical mind awakens. But numbers are often far softer than we commonly assume. Basic metrics such as inflation, or debt as a share of GDP, have been shown to change radically after revisions and, at times, have been revealed to be fraudulent. It turns out that, just as we would take one person’s view as an anecdotal observation that needs to be questioned, numbers and statistics should also be subject to serious cross-examination.

Why and how are statistics used?

There are many reasons why numbers and statistics are so widely used, but perhaps the most important one is that they allow us to say something, and make decisions, about things we know nothing about. An important example for finance journalism comes from bond credit ratings -- where countries are rated ‘Triple A’ or not, which in turn allows investors to decide whether they should put their millions in, say, a Cypriot bond or not.

Without the Triple A rating, the investor would have no signal, no information, and no basis whatsoever to make that investment decision. The Triple A rating allows the investor to sort countries they may have no knowledge of, and invest based on this signal of the economy’s creditworthiness. Should Cyprus boast a good rating, for example, investors may buy bonds without knowing whether it is a sovereign state or a dependency of Italy, Greece, Turkey, or Great Britain, let alone what its capital and main export are, or the name of its currency. It further allows the investor to change their mind about the investment when the rating changes -- even if this change only reflects the mood of investors, rather than the economy’s stability.


Financial ratings of European states by Standard & Poor’s, 18 February 2019. Credit: Wikimedia (CC BY-SA 4.0).

The fact that numbers are not always objective reflections, based on accurate readings and observations, is easy to forget. After all, some of the key metaphors we use to report financial numbers are taken from meteorology, where we do indeed have ways of measuring rainfall, temperatures, and wind. These phenomena exist regardless of whether we measure them or not. But that is not true of all things we measure and, for a lot of the numbers that are regularly reported, it is important to remember that the act of measuring does not in itself bring a phenomenon into existence.

Measuring the immeasurable

There is no way of objectively inserting a measurement stick to determine a Triple A rating, and much less is there any way of reading off other subjective phenomena. Take corruption, for example. Corruption is one of those things that is undeniably important and at the forefront of what journalists should report on, but there is no way the level of corruption can be objectively gauged. By its nature, corruption takes place in secret and concealed forms, and it’s possible to argue that the extent to which it is acceptable or not depends a lot on the specific situation, the cultural setting, and variations in law.


futureatlas.com on Flickr (CC BY 2.0).

Yet, we want to measure it, and there are many rankings that purport to give a definitive picture of corruption across countries. To understand the limitations of these indices, let’s take a look at a hypothetical: if you stopped one random European person in the street and asked them “how corrupt is Nigeria?”, and the person responded “oh, very corrupt, I don’t trust African people at all, they don’t follow rules like we do in Europe”, you should, and I think most journalists would, dismiss this as unreportable prejudice. However, if an organisation like Transparency International (TI) calls up 100 people in a survey and asks them to rate Nigeria and Sweden on a corruption scale from 1 to 10, this number and the resultant Corruption Perceptions Index become headline news.

These indices are created to be influential, and the channel by which they gain most of their influence is being re-reported by journalists as an accurate reflection of a country’s socio-political circumstances. The truth is: TI’s Corruption Perceptions Index is just an average of subjective perceptions, one which reflects the prejudices people hold about governance in poor countries. Corruption is not observable, and neither are many other socio-political phenomena, so reporting on them without reporting on how the data is generated should not pass a journalist’s basic fact-checking tests.
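To see how thin the measurement really is, consider what computing such an index involves. The sketch below is purely illustrative -- it is not TI’s actual methodology, and the ratings are invented -- but it shows that a perception index is, at bottom, an average of subjective scores:

```python
# Purely illustrative sketch, not TI's actual methodology.
# Hypothetical survey ratings on a 1 (very corrupt) to 10 (very clean) scale.

def perception_index(ratings):
    # Averaging opinions quantifies the perception, not corruption itself.
    return sum(ratings) / len(ratings)

ratings_nigeria = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]  # invented responses
ratings_sweden = [9, 8, 9, 7, 8, 9, 8, 9, 7, 8]   # invented responses

print(perception_index(ratings_nigeria))  # 2.3
print(perception_index(ratings_sweden))   # 8.2
```

Whatever prejudices the respondents hold pass straight through the arithmetic into the headline number.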

Fact-checking a statistic

Fact-checking a statistic requires more than checking a claim by turning to statistics. In 2014, The Economist wrote a guide on lying with indices, which laid out the dirty tricks used by the compilers of rankings, indices, and other numbers that summarise the world. Journalists can seek help from researchers and institutions that use these numbers in their work, as they normally know their weaknesses, and there is a growing body of critical scholarship that questions the effects of quantification. Simple first questions that journalists might ask themselves:

  • Is this phenomenon objectively observable? There is, for example, a difference between recording rainfall and happiness.
  • Under what conditions was the issue observed? Numbers of rape victims, for example, are very difficult to record accurately, compared with cars crossing a toll bridge.
  • Who made the observation? Consider whether they would have any reason to present a biased measure.

Just as we would subject a witness statement to cross-examination, we should also tear apart a statistic when it is presented to us.

Case study: Fact-checking subjective indices

By Kate Wilkinson, Africa Check

In 2014, Africa Check investigated reports that South Africa’s maths and science education was the ‘worst in the world’.

These regular headlines were based on the findings of the World Economic Forum’s (WEF) Global Information Technology Reports, which rank countries on the quality of their maths and science education. In the latest report, from 2016, South Africa was again ranked last out of 139 countries.

But the WEF does not conduct standardised tests to assess the quality of maths and science education in the countries surveyed. Rather, the rankings are the result of an ‘executive opinion survey’, where unidentified ‘business leaders’ are asked to rate the quality of maths and science education in their country on a scale from 1 (worst) to 7 (best).

The resulting education rankings are not, in fact, an assessment of the quality of education in South Africa, or any of the other countries. Instead, this subjective index reflects the personal opinions of a small group of unidentified people about a topic in which they are not experts.

In fact-checking this statistic, we spoke to leading education experts in South Africa. They were able to critique the ranking and point us to the most recent and reliable education rankings.

For instance, Martin Gustafsson, an economics researcher at the University of Stellenbosch, told us: “There is valuable data in the [WEF] report. For things like business confidence it is useful. But you can’t apply opinions to things like education. It is like asking business experts what they think the HIV rate is”.

For journalists reporting on education levels, looking into other sources provides a more comprehensive view of the country’s performance. For example, standardised, cross-national testing reveals that South Africa does have problems with its maths education, but it performs better than a number of countries. In 2007, the Southern and Eastern Africa Consortium for Monitoring Educational Quality ranked South Africa eighth out of fifteen countries for its maths performance. Mozambique, Uganda, Lesotho, Namibia, Malawi and Zambia all performed worse.

Remember: Make sure to check whether a ranking is measuring the actual phenomenon or simply what people think about it. Always look for additional sources and expert help to corroborate or contextualise a subjective index.

The impact of statistical reporting

Uncritical reporting on subjective indices does, of course, have real consequences. The more frequently soft statistics are re-reported, the more likely we are to fold them into our arsenal of hard statistics, forgetting that ‘corruption’ is not a quantity that can be easily gauged. There are consequences to pretending that you can measure something that you cannot -- it may seriously mislead us into thinking we know something we do not -- and, in the end, this can translate into very bad decisions.

One of these bad decisions was made in 2015, when the UK’s Financial Conduct Authority penalised the Bank of Beirut for failing to establish sufficient controls against money laundering and other financial crimes. As part of this decision, the Authority banned the Bank from taking on customers in ‘high-risk’ corruption jurisdictions. But there is no list of countries that are corrupt or not, nor an objective measurement, so they reached for the second-best thing: TI’s Corruption Perceptions Index.

As commentators pointed out at the time, there is a close to perfect correlation between a country’s GDP per capita and its ranking on the Corruption Perceptions Index. So, in effect, low income countries were denied the banking services available to high income countries, simply because the people surveyed by TI suspected that these lower income countries were more corrupt.
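This is a correlation any journalist can check. The snippet below uses invented figures purely to illustrate the calculation -- real GDP per capita data and CPI scores would have to be sourced from, say, the World Bank and TI:

```python
import math

# Invented figures for illustration; substitute real World Bank and TI data.
gdp_per_capita = [1200, 3400, 8900, 24000, 46000, 58000]  # US$ per person
cpi_score = [18, 27, 41, 56, 74, 85]  # 0 (perceived corrupt) to 100 (clean)

def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of the spreads.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A value near 1 means richer countries are rated 'cleaner' almost by default.
print(round(pearson(gdp_per_capita, cpi_score), 2))
```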


The Center for Global Development mapped the countries which the FCA considered to be ‘high risk’, that is, any country scoring 60 or below on the Index.

Corruption reportage is something we need to focus our attention on, along with other similar ‘unknowns’ in the list of global problems, like trafficking, drugs, and illicit finance. Increasingly, organisations that are involved in promoting political traction on these issues find themselves involved in a numbers game. There is always going to be an incentive to highlight the scale of a problem to help further the cause. And there is no reason to believe that TI, or other organisations such as Walk Free, which publishes a Global Slavery Index, are unaware of the flawed nature of the data. Rather, they take a calculated risk, hoping that the upside, in terms of more influence, is bigger than the downsides of mismeasuring.

To combat superficial reporting, and avoid exacerbating its impacts, journalists need to dig deeper and question clickbait indicators. Strategies include:

  • Contacting researchers and experts in the field -- they’ll be able to comment on the usefulness of subjective indicators, as well as any limitations
  • Looking beyond numbers, by producing investigative stories that look for information which isn’t considered or communicated by an indicator
  • Asking yourself how the story would hold up if you used a different measure or a different data source to frame the narrative.

Case study: Reporting on corruption

By Kate Wilkinson, Africa Check

How much money has South Africa lost due to corruption since democracy started in 1994? A popular and widely shared estimate is R700 billion. This figure has been published by newspapers and tweeted by a prominent trade union leader -- but, as Africa Check’s 2015 fact-check revealed, it’s a thumbsuck: a figure plucked from thin air.

Underpinning these reports, a civil society handbook claimed that ‘damages from corruption’ are usually estimated at ‘between 10% and 25%, and in some cases as high as 40 to 50%’ of the country’s public procurement contracts. But no source was provided to substantiate these estimates.

As years passed, the claim changed. It was then reported that around 20% of the country’s gross domestic product -- not procurement contracts -- was lost every year to corruption.

So, how much has corruption cost South Africa? The frustrating -- and logical -- answer is that we just can’t say for sure.

The country’s treasury has not attempted to calculate an estimate. And while governance experts agree that a large amount of money has been lost, they won't be drawn on an exact number.

With little information available, journalists should be wary of definitive reports on national corruption levels. Instead, it may be possible to piece together a picture of corruption in a country by bringing together different sources.

National surveys are one resource that can be used to shed light on people’s experience of bribery and corruption. For example, in 2016 nearly a third (32.3%) of adults in Nigeria reported paying a bribe to a public official, or being asked to pay one, in the previous year. But, as always, all surveys should be interrogated and corroborated with country-level experts.

Trust and mistrust in official numbers

So far, we’ve looked at how numbers tend to be reported as hard facts; now we’ll move on to the exception: when statistical reporting is linked to foul play. Perhaps a country is accused of skewing incomes to receive aid, or another country is seen to downplay the social impacts of a political crisis. At the other end of the spectrum, overly critical reporting of statistics can also be problematic. The fact that the social world is complex and difficult to understand means that even quantitative phenomena don’t lend themselves to cheap and fast summaries that are easily reportable. To illustrate this point, let’s look at something that should be relatively easy to count -- namely, money. But it turns out that numbers are soft here too.


Measuring economic activity is hard at the best of times, let alone when faced with the challenges presented in low income countries. Credit: Wikimedia.

On 5 November 2010, the Ghana Statistical Service announced new and revised GDP estimates. Overnight, the size of the economy was adjusted upward by over 60%, which meant that previous GDP estimates had missed about US$13 billion worth of economic activity. While this change in GDP was exceptionally large, it turned out to be far from an isolated case.
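A quick back-of-the-envelope check shows what those two headline figures imply. Using the round figures reported above -- an upward revision of roughly 60% that added about US$13 billion -- the old and new estimates can be recovered:

```python
# Sanity-checking the Ghana revision from the two reported figures alone.
gap_usd_bn = 13     # activity the old estimate missed, approx. US$ billions
revision = 0.60     # upward revision of 'over 60%'

old_gdp = gap_usd_bn / revision  # implied old estimate: ~21.7
new_gdp = old_gdp + gap_usd_bn   # implied new estimate: ~34.7
print(round(old_gdp, 1), round(new_gdp, 1))
```

Both implied levels are approximate, because the reported percentage and gap are themselves rounded -- which is exactly the point: even headline GDP figures are estimates built on estimates.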

In 2012, I wrote a summary of this situation, explaining in layman’s terms how a country like Ghana could go from being so poor one day to an aspiring middle-income economy the next. My intent with the piece was to demystify the process and to lay bare the basic discrepancies between global standards of measurement and local challenges of data availability and resources. The simple fact is: it is very demanding and costly to measure a country’s whole economy, particularly in low income countries where very few businesses and individuals report taxes, and only a minority of economic transactions are recorded, with most taking place in the informal, unrecorded economy. Yet, despite these nuances, when the Guardian reprinted the story, they slapped the headline ‘Lies, damn lies and GDP’ on it. As anyone who has read that piece, or my book, will know, I go to some lengths to dispel the belief that there was a hidden political agenda behind this revision.

While critical skills often fail when it comes to how complex realities are converted into simple numbers, data users also manage to maintain an instinctive distrust of any ‘official number’, based simply on distrusting some states and trusting others. A similarly misguided gut reaction exists among scholars, who may never trust a number from Sudan, Ghana or South Africa, but would not hesitate to use the same number if the World Bank recycled it.

Case study: Promoting trust through fact-checking national statistics

By Kate Wilkinson, Africa Check

In 2018, Africa Check investigated claims by US-based news website Quartz that much of Kenya’s borrowing in recent years has originated from China. Further, they reported that the country’s obligations to Beijing run ‘much deeper than many ordinary Kenyans realise’, under the headline: ‘China now owns more than 70% of Kenya’s external debt’.

The size of Kenya’s public debt was a campaign issue in the country’s 2017 elections, and this debt is sometimes conflated with China’s role in Kenya. China financed the standard gauge railway project, the most visible of the government’s economic growth projects. Yet its full cost remains unclear, because the agreement has not been made public.

Inaccurate reporting on the issue breeds mistrust in the Kenyan government and its official treasury data.

The first step in fact-checking this claim was to find out what information it was based on. Quartz said that the source of the information was an article in the Nairobi-based Business Daily newspaper. That article had relied on information from the 2018/19 Kenyan budget statement which showed that Kenya owed China KSh534.1 billion. This, they said, was 72% of the country’s total bilateral debt of KSh741 billion.

But while it may be true for bilateral debt, it’s not true for all of Kenya’s foreign debt.

Next, Africa Check sought out experts to explain and unpack the numbers.

“All bilateral debt [is] external debt but not all external debt [is] bilateral debt,” said Odongo Kodongo, a financial economist and associate professor at Wits University’s business school.

External debt is the total public and private debt that a country owes foreign creditors. It includes multilateral debt, commercial debt, bilateral debt, and guaranteed debt.

Experts used Kenyan treasury documents to estimate that China’s share of Kenya’s external debt was KSh534.07 billion of KSh2.51 trillion as of 31 March 2018. This was equal to 21.3% -- not 70%.
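The arithmetic behind both percentages is easy to reproduce from the treasury figures cited above:

```python
# Figures from the fact-check, in KSh billions, as of 31 March 2018.
debt_to_china = 534.07
bilateral_debt = 741.0   # the denominator behind Quartz's 72% figure
external_debt = 2510.0   # total external debt: KSh2.51 trillion

print(round(100 * debt_to_china / bilateral_debt, 1))  # ~72.1% of bilateral
print(round(100 * debt_to_china / external_debt, 1))   # ~21.3% of external
```

Same numerator, different denominator -- and a headline that overstates China’s share of external debt by a factor of more than three.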

China’s role in Kenya is a controversial issue. This fact-check showed the public that official government data could be relied upon to determine if claims about the countries’ relations were true.

The tendency not to reflect on the softness of statistics, and instead to look to a higher authority, surfaced again in an episode a few years later. On 7 April 2014, the Nigerian National Bureau of Statistics (NBS) declared that its GDP estimates were also being revised upward, to US$510 billion -- an 89% increase on the old estimate. It was controversial: the GDP revision was bigger than expected, and the IMF’s Statistics Department, which had provided technical assistance for the revision, viewed it as incomplete. But the IMF has no authority to endorse or reject statistics, so it could not stand in the way when Nigeria wanted to release the new numbers.
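Again, the headline figures can be sanity-checked in a line of arithmetic: a revised estimate of US$510 billion that represents an 89% increase implies an old estimate of roughly US$270 billion.

```python
# Back-solving the pre-revision estimate from the two reported figures.
new_gdp = 510    # US$ billions, revised estimate
increase = 0.89  # reported 89% upward revision
print(round(new_gdp / (1 + increase)))  # ~270
```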


GDP information is released by the NBS in quarterly reports. This infographic, from the Q4 and full-year 2018 report, shows a substantial increase in the 2014 growth rate.

When Yemi Kale, the head of the NBS, announced the new numbers, he was keen to make sure that the credibility of the statistics was not undermined, well aware that the international media’s trust was low. This meant that the IMF Mission Chief to Nigeria was well prepared for the inevitable question as to whether the Fund ‘endorsed’ the new numbers. When asked, he made a long statement that ended by summarising how the IMF supported “the efforts being made to improve the statistics of the Federation, as a basis of sound decision making. Let me state that we endorse this wholeheartedly and will support Nigeria in this regard”. In the end, this statement was reported as ‘the IMF endorsed the numbers’, despite the broader messages of statistical capacity building that it contained.

So, what would be a more conscientious way to report on this number? Rather than focusing on narratives of trust or distrust around the revision itself, possible stories could have looked at:

  • The challenges of measuring economic activity in low income countries
  • Resourcing limitations at the NBS, which meant benchmark data for the GDP hadn’t been updated properly for more than a quarter of a century
  • The role of the IMF, and its inability to endorse a country’s data.

Conclusion

By digging deeper, and looking beyond simplistic numbers, journalists are able to provide more accurate and comprehensive reports on social issues. When confronted with a number on corruption, unemployment, or the size of the illegal economy, journalists should always think critically about where that number came from. Numbers are much softer than we would like to think, and we trust and mistrust them far more than their accuracy justifies.
