Why understanding margins of error matters in journalism

How to prevent technical misunderstandings


There are many aspects to ‘ethics in data journalism’. Examples such as the gun permit map raise complex questions about privacy and accountability. There are also legal questions, such as the use of hacked data or copyrighted materials. There are problems with visualisations ranging from confusion to deception, such as those documented by Junk Charts. And there are important questions about what is being counted in any particular dataset, and profound questions about what we should be counting.

But another ethical category we should keep an eye on is technical misunderstandings, because these can easily lead to bad conclusions. I want to highlight a straightforward problem that is often ignored: the margin of error in the available data.

For example, the 90% confidence interval on the monthly US Bureau of Labor Statistics (BLS) jobs growth number is +/-105,000. The September 2015 jobs number was 142,000, which news organisations described in terms ranging from ‘disappointing’ to ‘grim’. The October jobs number was 271,000, which was reported as anything from ‘strong’ to ‘stellar’.

But if the underlying jobs growth was constant at 200,000 per month, it is quite likely we would see both of these numbers just from sampling error. While it is true that the number of jobs created most likely increased, there is much less here than meets the eye.
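A minimal sketch of this claim, assuming the survey noise is roughly normal: the published 90% confidence interval of +/-105,000 implies a standard error of about 105,000 / 1.645 ≈ 64,000, and the 200,000 baseline is the hypothetical constant growth rate from the scenario above.

```python
# Rough check of the sampling-error claim, assuming normally distributed
# survey noise. The 90% CI of +/-105,000 implies a standard error of about
# 105,000 / 1.645 ~= 64,000 (1.645 is the two-sided 90% z-value).
from scipy.stats import norm

TRUE_GROWTH = 200_000           # hypothetical constant underlying growth
STD_ERR = 105_000 / 1.645       # standard error implied by the 90% CI

survey = norm(loc=TRUE_GROWTH, scale=STD_ERR)

# How likely is a single month's reading at least as extreme as each headline?
p_low = survey.cdf(142_000)     # P(reading <= September's 142,000) ~= 0.18
p_high = survey.sf(271_000)     # P(reading >= October's 271,000)  ~= 0.13

print(f"P(<= 142,000) = {p_low:.2f}")
print(f"P(>= 271,000) = {p_high:.2f}")
```

Under these assumptions, roughly one month in five would come in at or below September’s number, and about one in seven at or above October’s, even with nothing changing underneath.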

Perhaps +/-105,000 seems like a very wide margin of error, but it’s tiny as a percentage. The monthly BLS survey accurately determines the number of people employed, but the change from month to month -- the number of jobs added -- is very small by comparison. There are about 250 million people in the potential US workforce, so 105,000 is a margin of error of 0.04%. This is a lot of accuracy to ask of a survey, considering that the typical political poll has a margin of error of 2 or 3 percent. Yet even +/-0.04% is enough to make estimates of the monthly change quite unreliable, because the monthly change is typically only about double that.
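The arithmetic is worth spelling out; a quick sketch using the rough figures above:

```python
# The margin-of-error arithmetic from the paragraph above.
workforce = 250_000_000   # rough size of the potential US workforce
margin = 105_000          # 90% CI on the monthly jobs growth number
typical_change = 200_000  # typical monthly jobs growth

print(f"Margin as a share of the workforce: {margin / workforce:.2%}")        # 0.04%
print(f"Margin as a share of the typical change: {margin / typical_change:.0%}")  # ~53%
```

In other words, the error bar on a single month’s change is about half the size of the change itself.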

My suspicion is that jobs growth has actually been fairly stable, at around 200,000 per month, and the swing we saw between September and October 2015 was mostly just survey noise. If we truly want a sense of where the economy is going, it would be best to look at lots of different indicators. There are many other economic statistics that we could use to build up a more accurate picture of employment trends. There are even two completely different monthly BLS surveys that measure employment using different methodologies (the establishment and household surveys).

It doesn’t make sense to get so excited about the wild swings of a single number that we know to be inaccurate.

Curiously, the New York Times (which wrote ‘grim’ and ‘strong’) knows this very well -- they have visualised how the statistical error in the BLS jobs growth numbers affects interpretation in a fantastic piece called ‘How Not to Be Misled by the Jobs Report’. The other desks at the paper do not seem to have gotten the message.

New York Times article by Neil Irwin and Kevin Quealy in 2014 on ‘statistical noise’.

A long list of statisticians has warned against the temptation to over-interpret noisy data. Nate Silver has addressed these particular temptations in journalism. We should listen to them, and be the voice of reason. We could even be fact-checking other people's wild interpretations and unjustified conclusions from data.

Or, if accuracy is really what we’re going for, we should build a model that accounts for noise (it'll be some sort of moving average) and integrates other factors. Either that, or report the results of someone else's model.
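One simple version of the ‘some sort of moving average’ idea, sketched on made-up monthly readings (the numbers below are illustrative, not real BLS data):

```python
# One simple noise-damping model: a trailing moving average over the last
# few monthly readings. The input numbers below are made up for illustration.
def trailing_average(readings, window=6):
    """Average the most recent `window` readings at each point in time."""
    smoothed = []
    for i in range(len(readings)):
        recent = readings[max(0, i - window + 1): i + 1]
        smoothed.append(sum(recent) / len(recent))
    return smoothed

monthly = [215_000, 187_000, 243_000, 142_000, 271_000, 198_000]  # hypothetical
print([round(x) for x in trailing_average(monthly)])
# The smoothed series swings far less than the raw one: the reader sees the
# trend, not the survey noise.
```

A real model would also weight in the other indicators mentioned above, but even this much damps the month-to-month swings considerably.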

If this is impractical, just pull back from strong conclusions. Sometimes telling the truth means saying you don’t know.

This is one example of a technical issue that becomes an ethics issue: ignoring the uncertainty.

But there is a long list of things that can go wrong in data interpretation. There are entire books on how to interpret and write about multivariable linear regression. If we are going to use data to generate headlines, we need to get data interpretation right.
