
Data, diets and disease

Best practices on the public health beat

Drink less coffee to be healthier, suggested a recent article on MSN, citing the fact that coffee can increase anxiety. Two days later, a story on Yahoo recommended the exact opposite. People should drink more coffee, it said, because the beverage may reduce the risk of type 2 diabetes and high blood pressure.

Similarly, it’s been reported that chocolate can improve sleep and prevent depression, but also may lead to an earlier death. A glass of red wine can help you live longer, according to some articles, and increase your risk of cancer, according to others.

These stories, all of which cite scientific studies, embody a common criticism of health journalism: it misrepresents scientific findings, resulting in a nonsensical series of articles claiming some food is healthy one day, and harmful the next. The whiplash in reporting on scientific research is so pervasive that it inspired an entire episode of ‘Last Week Tonight’ with John Oliver.

But the real harm of this type of health reporting is that it can erode public trust in science. It can lead people to believe there are no true findings at all, and fuel movements like climate change denial and anti-vaccination campaigns.

However, if journalists manage to navigate around the minefields of hyperbolic claims, complex medical trial data, and potential conflicts of interest, health reporting presents a real opportunity to provide a public service. Because health affects us all, thoughtful and responsible health stories can empower communities to make important social and policy choices, as well as advocate for their own health.

To deliver such stories, journalists need to learn how to interpret studies responsibly, investigate potential bias and ethical concerns, and provide broader context around new findings. Although it may seem intimidating, even a basic understanding of these concepts can go a long way toward public health journalism that serves the public good.

All studies are not created equal

Academic research of all kinds is often referred to as a study. “New study shows soybean oil can cause genetic changes in the brain,” a headline might say. Or “study finds this medication can reduce the risk of bowel cancer.”

But too often the details of those studies -- what was done, how significant the findings are, and what limitations the study has -- are left out. Like the fact that the study on soybean oil and the study on bowel cancer medication were both done in mice.

That’s a key difference, said James Heathers, a data scientist at Northeastern University in Boston. Although mice studies are a valuable part of medical research, their findings are not always transferable to humans, he told STAT News.

That’s why he created a Twitter account called @justsaysinmice, which retweets articles about scientific research, often with just two words added on: IN MICE. The goal is to help the public, and journalists, recognise the importance of that difference.

As a health journalist, one of the most crucial things to do is understand different types of studies and the benefits and drawbacks of each. Here’s a crash course:

[Image: the medical hierarchy of evidence]

Test tube and animal research is highly preliminary and should be treated as such. Many news publications won’t even report on these types of studies because it’s unclear what human impact, if any, they will have.

Ideas, editorials, and opinions from knowledgeable researchers can be interesting, but are not yet at the threshold of scientific evidence.

A case report is the publication of a single case in a journal or other academic forum, typically because the case is unique and significant.

While such cases can be fascinating, journalists should be careful not to generalise their findings.

For instance, a recently published case report on scurvy diagnosed in an otherwise healthy 3-year-old boy should not be reported as “new study shows all children may be at risk for scurvy.”

In reality, all that’s been found is one instance in which a child without typical risk factors, such as malnutrition or Crohn’s disease, had scurvy. It could be an anomaly rather than a generally applicable phenomenon.

A case-control study compares patients who have a disease (cases) with patients who do not (controls).

It is not an active intervention, but rather a retroactive observation of what was different between the controls and the cases.

Those differences may or may not be meaningful, and this type of study cannot indicate if the differences caused the disease or not.

A cohort study involves following one or more samples of subjects (called cohorts) over time.

Just like a case-control study, there is no intervention here. Instead, it is a long-term observational study.


The Framingham Heart Study, which began in 1948 in Framingham, Mass., is one of the most well-known cohort studies. By observing the participants over decades, seeing who developed heart disease and what their lifestyles were like relative to those who did not, researchers were able to identify several risk factors for heart disease.

The findings of randomised controlled trials are considered some of the most trustworthy. Participants are randomly assigned either to get an intervention (e.g. receive a drug, start a new diet, get a vaccine) or not. The results of the two groups are then compared.

For even greater confidence, some randomised controlled trials are double-blinded, meaning neither the patients nor the researchers know who has received the intervention and who has gotten the placebo. This avoids bias in the reporting and interpreting of the data.


Meta-analyses and systematic reviews are studies where researchers amass data from a number of previously published studies on the same issue and reanalyse the data together. These are considered some of the best forms of evidence since they’re based on multiple experiments. Yet, they rarely generate news.

In addition to these different study methods, journalists should also note that medical studies done on a new drug or therapy are often referred to by a certain phase:

° Phase I -- when a drug is being tested for safety, typically on a small group of human volunteers

° Phase II -- when a drug is being tested for efficacy on a slightly larger group of people

° Phase III -- large-scale testing, often involving randomised and blinded study designs on large populations

° Phase IV -- after a drug has been approved for consumer use by the FDA; often testing its long-term efficacy or comparing it to other drugs on the market

There are no hard and fast rules about which types of studies should or should not be covered, but understanding the differences can help journalists make educated decisions and convey the significance of any particular study appropriately.

Understanding research statistics

After learning what type of design a study is using, reporters have to be able to interpret the findings. While it can be easy to skim past unfamiliar terms or seemingly complex numbers in an academic paper, this data can be a critical tool for evaluating the study’s merit.

While health reporters don’t need to be statisticians, knowing three basic statistics can be helpful:

1. P-value is a statistic that indicates how likely it would be to see a result at least as strong as the one observed if the intervention being tested actually had no effect -- in other words, how easily the finding could be explained by chance.

Generally, the cutoff for p-values is 0.05. A p-value lower than that suggests confidence in the findings, as there is less than a 1 in 20 chance of seeing such a result if the intervention had no real effect. The lower the p-value, the greater confidence one can have in the findings. Conversely, a higher p-value means the findings are less statistically significant and could be coincidental.

However, even a low p-value does not mean the findings should be taken at face value. P-values can be manipulated -- often referred to as p-hacking -- or they can simply be misleading. FiveThirtyEight illustrated this point by conducting a survey in which they gathered demographic data and asked people about their diet. The reporters then went on to run regressions on that data and found statistically significant relationships (meaning p < 0.05) between eating raw tomatoes and being Jewish, and eating cabbage and having an innie belly button.

Obviously one does not cause the other in these cases, no matter what the p-value -- which is why it’s crucial to look at other study statistics too.
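For reporters who want to see this effect for themselves, here is a minimal simulation sketch in Python. It is not based on the FiveThirtyEight survey data; the numbers of respondents and questions are invented for illustration. It simply shows that if you test enough unrelated variables, a few will clear the p < 0.05 bar by chance alone.

```python
# A simulation sketch (invented numbers, not the FiveThirtyEight data): testing
# many unrelated yes/no survey answers against one outcome will produce a few
# "statistically significant" results by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people = 500       # hypothetical number of survey respondents
n_questions = 100    # hypothetical number of unrelated yes/no questions

# Everything is generated independently, so any association is pure coincidence.
outcome = rng.integers(0, 2, size=n_people)                 # e.g. "eats raw tomatoes"
traits = rng.integers(0, 2, size=(n_questions, n_people))   # e.g. other survey answers

false_positives = 0
for trait in traits:
    # 2x2 table of trait vs outcome, then a chi-squared test of independence.
    table = [
        [np.sum((trait == 1) & (outcome == 1)), np.sum((trait == 1) & (outcome == 0))],
        [np.sum((trait == 0) & (outcome == 1)), np.sum((trait == 0) & (outcome == 0))],
    ]
    p_value = stats.chi2_contingency(table)[1]
    if p_value < 0.05:
        false_positives += 1

# At a 0.05 threshold, roughly 5 out of 100 unrelated comparisons are expected
# to look "significant" even though nothing is actually related.
print(f"{false_positives} of {n_questions} unrelated traits came out 'significant'")
```

Running it typically flags a handful of spurious relationships -- exactly the trap that chasing low p-values across many comparisons can create.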

2. Confidence interval is a range of values within which researchers are fairly certain the true value lies.

Many medical studies may have a statement like the following:

“This study reports this result as a relative risk reduction for the development of prostate cancer of 22.8% (95% CI: 15.2-29.8; P < 0.001) for patients taking dutasteride compared to patients taking placebo.”

That means the researchers can be 95 percent confident that the true reduction in the relative risk of developing prostate cancer lies between 15.2% and 29.8% for patients taking the drug versus those taking a placebo.

The smaller the range of the interval, the more confidence one can have in the findings (e.g. 2.46 to 5.63 is better than 21.93 to 132.3).

Also, the lower and upper values of the interval should either both be positive or both be negative. If the range contains zero -- for example, -1.03 to 3.02 -- the finding is not statistically significant. Taking the previous example, if the confidence interval for the relative risk reduction with the drug were -1% to 4%, the true effect could be anything from a slight increase in risk to a modest reduction. Thus, the finding is not significant.
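To make that zero-crossing check concrete, here is a short Python sketch. The patient counts are invented for illustration and have nothing to do with the dutasteride trial quoted above; it computes a 95% confidence interval for the difference in event rates between two groups (a simpler quantity than the relative risk reduction quoted above) and flags whether the interval includes zero.

```python
# Minimal sketch: a 95% confidence interval for a difference in event rates
# between a treatment group and a placebo group, using the normal approximation.
# The numbers below are invented purely for illustration.
import math

events_treatment, n_treatment = 40, 1000   # hypothetical: 4.0% of treated patients had the event
events_placebo, n_placebo = 55, 1000       # hypothetical: 5.5% of placebo patients had the event

p_t = events_treatment / n_treatment
p_p = events_placebo / n_placebo
diff = p_p - p_t                           # absolute risk reduction with treatment

# Standard error of the difference between two proportions
se = math.sqrt(p_t * (1 - p_t) / n_treatment + p_p * (1 - p_p) / n_placebo)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"Risk reduction: {diff:.3f} (95% CI {lower:.3f} to {upper:.3f})")
if lower <= 0 <= upper:
    print("The interval crosses zero, so this result is not statistically significant.")
else:
    print("The interval does not cross zero, so this result is statistically significant.")
```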

3. Sample size -- the number of participants or other type of population in which the study was conducted, often written as n.

The smaller the sample size of a study, the more reason to be cautious of its findings. If a study was only done in white men over the age of 50, the results may not apply to black women in their 20s. Reporters should clearly state in their articles what sample was used and what that means about the generalisability of the results.
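The link between sample size and confidence is easy to demonstrate. In this small Python sketch -- again with invented numbers -- the same observed rate of 20% produces a far wider 95% confidence interval with 25 participants than with 2,500.

```python
# Minimal sketch: the same observed rate (20%) gives a much wider 95%
# confidence interval with 25 participants than with 2,500.
import math

observed_rate = 0.20  # hypothetical proportion of participants with the outcome

for n in (25, 250, 2500):
    se = math.sqrt(observed_rate * (1 - observed_rate) / n)
    lower, upper = observed_rate - 1.96 * se, observed_rate + 1.96 * se
    print(f"n = {n:>5}: 95% CI {lower:.3f} to {upper:.3f} (width {upper - lower:.3f})")
```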


Uncovering conflicts of interest and other ethical concerns

By now, it’s been widely documented that the tobacco industry funded research to undermine the harms of smoking and that the fossil fuel industry hired scientists to discredit climate change. Yet for years the findings from these industry-backed studies were reported as fact, without caveats.

Luckily, today there are many tools to help reporters uncover funding sources and potential conflicts of interest, and avoid repeating the mistakes of the past.

It all starts with the basics: read to the bottom of the study. Often at the end, researchers will list their affiliations with trade organisations or companies, and the source of their funding. Of course, they may not be completely honest, but it’s a good place to start.

Another way to go is to ask the researchers outright: “How is your study funded? Do you have any conflicts of interest or associations that might influence the way the public perceives this study?” Again, they may not respond honestly, but it never hurts to get them on the record.

The best way to uncover ethical concerns is to do some independent sleuthing.

Start by running a Google search of the study authors. It’s surprising how often something so simple will produce interesting information -- a pharmaceutical company’s press release about the author coming to work for them or a conference presentation where the author promoted a particular drug.

Then check the researcher’s LinkedIn profile. Are they consulting for companies in addition to their academic work? Did they spend five years in an industry-funded think tank?

If that doesn’t yield much, there are several databases designed to shed light on this exact issue.

° CMS Open Payments -- This U.S. federal database includes information on industry payments to physicians and teaching hospitals. You can search by doctor, hospital, or company. The data is free to download (see the sketch after this list for one way to search it in bulk).

° ProPublica’s Dollars for Docs - This tool allows reporters to search for payments from pharmaceutical and medical device companies to a variety of doctors and U.S. teaching hospitals for everything from promotional talks to research and consulting. ProPublica updates the database every so often. Using the search tool is free, but downloading the dataset costs $250 for journalists.

° ProPublica’s Dollars for Profs - Similar to Dollars for Docs, but for university researchers. The database allows you to search records from multiple state universities and the National Institutes of Health for outside income and conflicts of interest of professors, researchers, and staff. It’s a limited sample of universities, but a good place to start. The search tool is free, but downloading the entire dataset can cost anywhere from $200 to $3,000 depending on the use.

° Kaiser Health News’ Prescription for Power - If the study involves or is promoted by a patient advocacy group, it’s a good idea to check if the group receives funding from pharmaceutical companies. Although this database hasn’t been updated in a few years, it remains a useful tool.
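For reporters who work with data, the downloadable files behind these databases can also be searched in bulk. The sketch below assumes a downloaded CMS Open Payments “general payments” CSV; the file name, researcher name, and column names are placeholders, and the column names change between years, so check the data dictionary that comes with the download before relying on them.

```python
# Minimal sketch: filtering a downloaded Open Payments "general payments" CSV
# for payments to a particular researcher. The file name, researcher name, and
# column names below are placeholders -- they vary by year, so check the data
# dictionary that ships with the download.
import pandas as pd

CSV_PATH = "open_payments_general_2022.csv"  # hypothetical file name; use your download
LAST_NAME = "SMITH"                          # hypothetical researcher surname

payments = pd.read_csv(CSV_PATH, low_memory=False)

# Assumed column names; adjust to match the year's data dictionary.
rows = payments[payments["Covered_Recipient_Last_Name"].str.upper() == LAST_NAME]

summary = (
    rows.groupby("Applicable_Manufacturer_or_Applicable_GPO_Making_Payment_Name")
        ["Total_Amount_of_Payment_USDollars"]
        .sum()
        .sort_values(ascending=False)
)
print(summary.head(10))  # the companies that paid this name the most
```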


Financial conflicts are not the only ethical concerns that can occur in the research world. Sometimes study authors can manipulate their data or interpret results in a biased manner to increase their apparent significance. For example, researchers might initially set out to test the effect of a drug on blood pressure. But when the results come in, they realise the drug didn’t affect blood pressure the way they hoped. Yet it did lower cholesterol. So then the researchers might change the measured outcome to cholesterol in order to show positive results. This is flawed study design, yet it can be difficult to know unless you’re part of the research process. Fortunately, there’s a tool to help with that too.

ClinicalTrials.gov collects data on publicly and privately funded clinical trials from more than 200 countries. It includes information on the study design, participants, outcomes to be measured, and results. Most importantly, it is updated throughout the lifetime of a study, and historical versions of the information get saved on an archival site.

Reporters simply need to get the NCT Number -- a unique identifier for every clinical trial, found at the top of the study page on ClinicalTrials.gov -- and enter it on the archival site. The tool will then show all previous versions of the study details. It can even highlight changes between any two versions (red for older versions and green for newer versions).

[Screenshot: comparing versions of a study record on the ClinicalTrials.gov archive site. Credit: ClinicalTrials.gov]

If the study design changed from the original proposal to the final paper, that doesn’t necessarily mean there is unethical conduct. But it’s important to ask the study authors about the reason for the adjustments.
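The registry record can also be pulled programmatically. The sketch below assumes ClinicalTrials.gov’s v2 REST API is available at the endpoint shown and returns the study record as JSON -- confirm the details against the site’s current API documentation, and swap in the real NCT number (the one below is a placeholder).

```python
# Minimal sketch: pull the registry record for a trial by its NCT number.
# Assumption: ClinicalTrials.gov's v2 REST API is reachable at the URL below
# and returns JSON -- confirm against the site's current API documentation.
import json
import requests

NCT_ID = "NCT00000000"  # placeholder; replace with the trial's real NCT number

url = f"https://clinicaltrials.gov/api/v2/studies/{NCT_ID}"
response = requests.get(url, timeout=30)
response.raise_for_status()

study = response.json()
# Print the record so you can locate the registered outcome measures and key
# dates, then compare them with what the published paper actually reports.
print(json.dumps(study, indent=2)[:2000])
```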

Another website to keep an eye on is Retraction Watch, which tracks papers that get retracted due to some error. These often don’t get any media attention, but they can reveal important flaws in the scientific system. It’s worth monitoring the website for highly publicised studies that may be retracted, as well as to learn about the types of errors that are occurring and some of the researchers who are making them. That doesn’t mean you should never interview researchers who’ve had a study retracted. After all, people in all fields make mistakes sometimes. But it’s a good reminder to remain cautious.

Questions to ask the study author(s)

After reading through a study and investigating it as best as possible, the next step is to present questions directly to the researchers. They’ve spent the most time with the material and can say the most about it. Many researchers are eager to share their work with the public, and these can be very fruitful conversations.

But it’s key to go in prepared with the right questions. Here are a few -- by no means a complete list, but a place to start:

° What was the original study question? If it changed, when and why?

° Were any types of participants excluded from the study? (e.g. if anyone with heart disease was removed from the study of a high blood pressure medication, then the findings are limited)

° Did people drop out of the study midway through? If so, how many and why?

° What was the intervention being compared with? Was there a control group? Was it being compared to another drug? If so, is that drug representative of what’s currently on the market or is it an older version?

° What are the benefits of the intervention? And what are its potential harms/adverse effects?

° How easily available is the intervention? What does it cost?

° What are the limitations of the study?

° How generalisable are your findings?

° How do your findings fit in with the existing literature in this area?

Providing context for the reporter and the reader

As is probably clear by now, health reporting is complex and nuanced. That means one of the most crucial things reporters can do for readers is to provide context. Yet journalists are not scientists, so first they must seek out the context themselves. Here are a few places to get started:

Find a biostatistician to be a regular, go-to source who can comment on studies. While knowing basic research statistics like p-value and confidence interval is helpful, a biostatistician can get into the weeds and alert journalists to questions they need to ask or concerning study elements they may have missed. Most local universities have statisticians who can act as expert sources.

Read meta-analyses and systematic reviews on the subject. As mentioned earlier, these are papers that take data from several different studies and reanalyse it together, giving a much larger sample size and more confidence in the findings. Meta-analyses and systematic reviews can provide perspective on where the public health field currently stands on any given subject, and whether a new study fits within that picture or is too far off base. The Cochrane Library is a good resource for systematic reviews, as is a search with the keywords “systematic review” on Google Scholar or PubMed.
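One programmatic starting point is NCBI’s E-utilities interface to PubMed. The sketch below runs a simple search for systematic reviews on a topic; the search term is a placeholder, and the output is just a list of PubMed links to follow up on.

```python
# Minimal sketch: search PubMed for systematic reviews on a topic using NCBI's
# E-utilities (esearch). The search term below is a placeholder.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "coffee consumption AND systematic review[Title/Abstract]",  # placeholder query
    "retmode": "json",
    "retmax": 20,
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()

result = response.json().get("esearchresult", {})
print(f"Matches found: {result.get('count')}")
for pmid in result.get("idlist", []):
    # Each ID can be opened at https://pubmed.ncbi.nlm.nih.gov/<PMID>/
    print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
```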

Keep in mind that studies with negative results often don’t get published at all, so that can skew the results that surface. For that reason, it’s sometimes helpful to talk to others in the field.

Have an outside researcher comment on studies as a regular part of the reporting process. As scientists themselves, they’ll be able to spot important findings as well as study flaws better than most reporters. They can be found through local universities, professional associations (e.g. the American College of Cardiology), or the other papers cited in the study of interest.

Public health reporting can be incredibly complex, especially for journalists without a background in medical science. But for diligent and compassionate reporters, it is also an opportunity to provide a public service. They simply need to master the tools to cut through inflated claims and misleading rhetoric. That way, the reporting does not hand undue influence to pharmaceutical companies or individual scientists; instead, readers walk away empowered to make better-informed health decisions.
