October 9, 2017

The titles read like a spammy chain email:

“Can chocolate make the heart grow healthier?”

“Early to bed, early to a healthy BMI?”

And they hint at the point of Boston University’s new podcast — to debunk health claims in studies with faulty methodology or problematic media coverage. It’s basically a way for you to fact-check those stories your mom always sends you.

“I don’t normally go and find that study that I heard on NPR claiming chocolate is good for you or bad for you, but everybody wants to know the answer to those,” said Matthew Fox, a professor in the Departments of Epidemiology and Global Health at BU. “And we thought we have the skills … to be able to pick these studies apart and think about where they hold up and where they don’t.”

The podcast, titled Free Associations and produced by BU’s School of Public Health, launched last week as part of an initiative to develop more outward-facing projects for the college. During the show, hosts Fox, Dr. Chris Gill, an associate professor of global health, and Dr. Don Thea, a professor of global health, weigh the merits of popular studies and discuss best practices for reading academic literature.

Fox said while they don’t have any metrics about downloads or audience insights yet, they’re trying to appeal to a general audience.

“We are going for something that we think a general audience could understand, with a little bit of stuff thrown in for people who might have a little bit more training,” he said. “It took me a little while to warm up to (a podcast), but when I actually thought about it I thought this could be a way to reach a larger audience.”

Poynter caught up with Fox to talk about the podcast, what some common problems are with health studies (we have some tips here) and how podcasting can pique interest in niche topics. This Q-and-A has been shortened for clarity.

Talk a little bit about the foundations for the podcast. How did the idea come about, and what need does it fill?

What we do in public health all the time is we delve into studies and we look to see the quality of the research and we try to draw conclusions based on inevitably flawed studies — because all studies are flawed. But some are better than others … So what I would really categorize (the podcast as doing is) digesting the quality of the latest study that came out.

The origins for this are that this is something we do as part of our job in trying to stay current on the literature and trying to understand what’s going on: We pick up the latest study and get a group of people together and talk through what the potential problems are with the way the study was done or the limitations of the study. We generally enjoy it, so we thought this is something we think potentially has a larger audience.

What’s your distribution strategy like? How many episodes do you put out?

Our plan is to put out an episode every two weeks and to pick up on whatever gets a lot of coverage in the news and seems like it would be worth taking on, so we're monitoring the news aggregators for the big health studies.

That’s a part of what we do. On other segments of the podcast, we take on more general topics. Sometimes those are methods-based topics; we did one on what kinds of health studies the media generally picks up on, and how a journal’s reputation shapes the way people see the quality of the results.

It seems like the show addresses a lot of studies that people are interested in on a basic level, such as eating chocolate and sleeping more. From an audience perspective, how do you think about what kinds of things to cover and for whom you’re covering them?

The way we pitched this initially is we wanted to talk about things on the level of what we thought our incoming students would generally understand — not the students at the end of our program. We have a Master of Public Health program, but there generally aren’t undergraduate programs in public health. So for most students, it’s their introduction to public health and they don’t have a background in reading medical literature or understanding study design and those things. So we wanted to try and use it initially as a way that our own students could listen and get engaged, but also to start thinking more critically about the stuff that they would be reading over the course of their training.

But as we got into it, we realized if we’re pitching at a level that we’re assuming our students haven’t yet had any training, then we’re really pitching it much closer to a general audience. We don’t know yet exactly who would find it completely comprehensible, but my kids came to listen to one of them and they said they could understand most of it. So we are going for something that we think a general audience could understand, with a little bit of stuff thrown in for people who might have a little bit more training.

Were you inspired by any other podcasts? I don’t think I’ve ever listened to one that’s focused on debating the merits of a study while also debunking some of the finer points (besides Gimlet Media-produced Science Vs, which fact-checks health fads).

It comes out of this idea of journal clubs, which are kind of a time-honored tradition in medical schools and schools of public health — of just getting together and trying to find a way to stay current on the literature in the field, because there’s so much going on that it’s hard to read in depth all of the studies that are coming out. And you’re basically taking the opportunity to pool everyone’s ability to think critically about studies.

But as an epidemiologist, people will say all the time, “Oh, you’re in public health, this study came out — should I believe it?” My answer is always: I have no idea … I don’t normally go and find that study that I heard on NPR claiming chocolate is good for you or bad for you, but everybody wants to know the answer to those. I’m in the dark as much as anyone else, so I thought if everyone’s asking this question, presumably everybody is having the same experiences of “I heard about it on the news and I don’t know what to do.” And we thought we have the skills … to be able to pick these studies apart and think about where they hold up and where they don’t.

What are some examples of the biggest falsities or incomplete conclusions you’ve checked so far?

To me, the falsities come in when the findings are translated into the news. Most of the studies that we tend to cover, the biggest problems you get into are around how far you want to push the conclusions that you can draw from the data. But I wouldn’t necessarily call those falsehoods — they’re just one person’s interpretation of the data.

There are three major categories of problems we run into. Anything we do in studying human beings requires us to be able to measure the things that we’re interested in. You go to the chocolate and atrial fibrillation example — in order to study that, you have to have good data on people’s chocolate consumption. And nobody’s out there watching you every day consume chocolate, and people’s memories around how much chocolate they consume are pretty average at best. So your ability to draw strong conclusions is largely dependent on how well you can measure the thing you’re trying to measure.

The second one we run into is what we talk about as the confounding problem. Anything you’re interested in, if you’re not doing an experimental study where you control how much chocolate people consume, then people who consume chocolate tend to consume other things that might be related to whether or not you develop heart disease. So people who consume chocolate may be more likely to consume caffeine, and we know caffeine puts you at risk for atrial fibrillation. So teasing out whether I’m looking at the effects of the chocolate or the effects of the caffeine is tricky.

The third one is it’s often very unclear in a research study exactly who the results apply to. Even if a study is well-designed and comes to sensible conclusions, that doesn’t mean the effects we observe apply to everyone. This is a topic we specifically took on in one of the episodes. Trying to figure out whether the effects of chocolate on atrial fibrillation, if they are true effects, apply to everyone in the world, or just to men or women, is tricky, and we often don’t have enough data to really tease that out in the studies that we look at.
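The confounding problem Fox describes can be made concrete with a quick simulation. The sketch below is illustrative and not from the podcast: in the simulated data, only caffeine raises atrial fibrillation risk, and chocolate has no true effect — yet because caffeine drinkers also eat more chocolate, a crude comparison makes chocolate look harmful. Stratifying by caffeine makes the apparent effect vanish. All of the numbers (population size, risk levels, habit correlations) are invented for the example.

```python
# Hypothetical simulation: caffeine confounds the apparent
# chocolate -> atrial fibrillation association.
import random

random.seed(0)
rows = []
for _ in range(100_000):
    caffeine = random.random() < 0.5                          # half the population drinks coffee
    chocolate = random.random() < (0.7 if caffeine else 0.3)  # correlated habits
    afib = random.random() < (0.10 if caffeine else 0.02)     # only caffeine raises risk
    rows.append((caffeine, chocolate, afib))

def risk_by_chocolate(subset):
    """Return (afib risk among chocolate eaters, risk among non-eaters)."""
    eaters = [a for _, ch, a in subset if ch]
    others = [a for _, ch, a in subset if not ch]
    return sum(eaters) / len(eaters), sum(others) / len(others)

# Crude (unadjusted) comparison: chocolate appears to raise risk.
r1, r0 = risk_by_chocolate(rows)
print(f"crude: chocolate {r1:.3f} vs no chocolate {r0:.3f}")

# Stratify by caffeine: within each stratum the apparent effect vanishes.
for caf in (True, False):
    s1, s0 = risk_by_chocolate([r for r in rows if r[0] == caf])
    print(f"caffeine={caf}: chocolate {s1:.3f} vs no chocolate {s0:.3f}")
```

An observational study that fails to measure or adjust for the caffeine variable would report the crude comparison — which is exactly the kind of result the hosts pick apart on the show.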

What are some of your plans for the future of this podcast, perhaps ways you can expand to new audiences and coverage areas?

We want to take on a broad range of topics. What we find is that the things the media picks up on a lot are things that relate to diet — it’s one of those things that people just really want to know about. So the stuff that hits the news is so often about diet and heart disease, or diet and cancer. And they all generally have the same kinds of issues … but what we don’t want is to be so repetitive as to be saying the exact same thing, just substituting in your new favorite food or your new favorite outcome. So we’re looking for things that allow us to communicate something unique, something that this study did differently that makes it better than the others, or something that they did that is a common trap that studies fall into — really just trying our best to educate people about the way studies go wrong.

Daniel Funke is a staff writer covering online misinformation for PolitiFact. He previously reported for Poynter as a fact-checking reporter and a Google News Lab…