That's an awfully narrow claim, right? Like a baseball player arguing he led the team in 9th inning doubles in the month of August.(1) Coffee has already been touted as a way to lower risk of type 2 diabetes for a while. Before you get too excited about this "filtered" coffee preventing diabetes, we need to remember what they are measuring - numbers, not coffee. This is not a science finding, it is an "exploratory" result. Drinking coffee, filtered in paper or Turkish in a pan, is not going to prevent diabetes any more than a juice cleanse prevents whatever that stuff is claiming to prevent.
Coffee is only good for you if it's a substitute for something bad, like alcohol. That is rarely the case and journalists are often guilty of willfully ignoring science reality to get epidemiology pageviews.
It's just correlation - and the farther people are removed from science, the more popular correlation becomes
They are only talking about a group of people who claim on surveys they did X and then got Y. In common parlance, there is a phrase - 'it's academic' - which means it is just arguing a point that won't have any real-world relevance, like who did a better criticism of a criticism of Proust. In science parlance there is a similar phrase - 'it's just correlation.' "Correlation is not causation" has become such a cliché that we tend to forget what it means but the meaning remains important. Just because we can correlate the rise in autism to the increase in organic food purchases does not make it science.(2)
An exploratory finding means that science might want to take a look at it and scientists will determine how credible the link is. But often today these claims are truly irrelevant, taking 'it's academic' to the next level. Yet media highlight them just about every week so it's important to remember how we got here and to demand science journalism get back to critical thinking.
Bad correlation - it's the Harvard way
Harvard School of Public Health epidemiology groups popularized taking giant tables - rows of foods, columns of diseases or benefits - and writing about whatever among them could be linked with poorly understood "statistical significance." It was not malicious at first, but just as some in the science community rush to study every health fad(3), some saw the attention in the New York Times and began to chase it, rationalizing that the grant money they get for working on something popular with the public will allow them to do important things.
Thanks to Harvard SPH abusing food frequency questionnaire data for decades, you now read strange claims every month about foods being "linked" to causing or preventing cancer. It's not science, it is just bad epidemiology. I like to use Dr. Stan Young's examples because he is an expert in statistics and can demonstrate the problem using anything, from coin flips to Dungeons & Dragons dice. Using their method - lots of diseases, benefits, or harms crossed with lots of foods - he was able to show with statistical significance that coin flips are not random, that there is an increased risk of tails occurring.
The numbers of rows and columns he used came from an actual Harvard paper, and he showed you were guaranteed to get a statistically significant result - even that coin flips will come up tails. Frame it properly - 'coin flips are biased, women and minorities impacted most' or something similarly provocative - and you will be in the Washington Post.
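A quick way to see why a big enough grid guarantees "findings" is to simulate one. The sketch below assumes a 50-food by 20-outcome grid (I don't know the dimensions of the actual Harvard paper Young used, so these are placeholder numbers) in a null world where nothing is linked to anything, then runs a significance test on every cell; roughly five percent come up "significant" by chance alone:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(42)
n_foods, n_outcomes, n_subjects = 50, 20, 500  # assumed grid size

# Null world: every food-outcome "study" is just 500 fair coin flips.
heads = rng.binomial(n_subjects, 0.5, size=(n_foods, n_outcomes))

# Two-sided z-test of each cell's observed rate against the true 50%.
z = np.abs(heads / n_subjects - 0.5) / sqrt(0.25 / n_subjects)
pvals = np.array([erfc(zi / sqrt(2)) for zi in z.ravel()])

sig = int((pvals < 0.05).sum())
print(f"{sig} of {pvals.size} null comparisons reach p < 0.05")
```

Run 1,000 honest tests of nothing and you still expect about 50 "statistically significant" links - which a press office can then frame however it likes.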
Food frequency questionnaires are why the public trusts epidemiology little and food claims even less. The discipline has fallen a long way from the time when it showed that cigarettes, PM10 smog, and alcohol cause cancer. Now it's more of a media hustle, with agenda-driven groups - International Agency for Research on Cancer (IARC) in France and Ramazzini Institute in Italy, plus our own National Institute of Environmental Health Sciences (NIEHS), on the scaremongering side, and Harvard on the Miracle Food side - leading the way.
In this case, the team, led unsurprisingly by a former member of the Harvard School of Public Health, used a proxy for "health" - so already a confounder - which was previously correlated with less type 2 diabetes - another confounder - and only at certain doses - a third confounder. They highlight filtered coffee because that is the only place they found a statistical difference. Boiled coffee (Turkish, etc.) was not correlated with more type 2 diabetes.
Epidemiology papers are only exploratory - they are hoping to create a hypothesis - so it might be unfair to criticize epidemiologists for the hype they get when it is clearly corporate media exaggerating these findings, but IARC and NIEHS not only allow it to happen, they encourage it. Their papers can't really show risk, they can only identify a potential hazard (often at 10,000 times normal doses), yet their media kits and press releases throw the term risk around all of the time.
Though America has the highest adult science literacy in the world, as we have become increasingly advanced we have also become increasingly separated from common sense about chemicals and food. That makes it easy for activist groups to conflate hazard (the potential for harm) and risk (the likelihood of that potential harm happening), leaving the public worried about endocrine disruption even at parts per billion, formaldehyde from their floors, and one gene versus another in vegetable oil.
If a picture is worth a thousand words, here is hazard and risk courtesy of the Campaign for Accuracy in Public Health Research:
How estranged from reason are we today? An alarming number of people will see that trade groups fund CAPHR and dismiss the facts in it, while believing the Natural Resources Defense Council or Center for Food Safety despite their well-known efforts to manipulate the public and line their coffers.
Using IARC's simplistic epidemiology, bacon is as dangerous as plutonium, and we know that is not true. Why don't journalists at the New York Times?
Activist journalists and their zeal to generate pageviews for their corporate bosses don't just ruin the public's trust in science, they make scientists despondent. Why bother doing science when epidemiologists and media are going to ignore it and politicians are going to make decisions based on what polls say?
"Why should I continue to do science?"
Professor Nina Cedergreen from the University of Copenhagen said today, about calls to ban Roundup in Denmark absent any evidence of harm (single quotes because I am not sure how accurate Google Translate is), 'Should we ban it anyway? Where does it leave knowledge-based legislation and principles if we can just ban anything without evidence? Why should I continue to do science and try to assess which chemicals pose a risk and which do not if society is unwilling to respect the knowledge we create?'
These kinds of correlation papers are even more reason to do science. If scientists don't, the world will be overrun by National Institute of Environmental Health Sciences and Ramazzini Institute woo. A world of epidemiological correlation is a world where biology, chemistry, and toxicology don't exist. That isn't good for anyone.
NOTES:
(1) "Mr. Baseball", and I used that twice in the last two weeks so I might as well link to the clip.
(2) Before you dismiss the 'organic food causes autism' link, that thinking is pretty common; activists with an agenda just use other inputs. Pesticides, GMOs, vaccines, fracking - you name it and someone has correlated it to disease outcomes. All it takes is two curves going the same way. Good epidemiologists avoid that, but the bad ones who are out to scare the public about their pet concern from inside the National Institute of Environmental Health Sciences do it all of the time.
(3) Gary Taubes, in conversation, on how that can be a honey pot for the lazy-brained and miscreants:
I used to joke with my friends in the physics community that if you want to cleanse your discipline of the worst scientists in it, every three or four years, you should have someone publish a bogus paper claiming to make some remarkable new discovery - infinite free energy or ESP, or something suitably cosmic like that. Then you have it published in a legitimate journal; it shows up on the front page of the New York Times, and within two months, every bad scientist in the field will be working on it. Then you just take the ones who publish papers claiming to replicate the effect, and you throw them out of the field. A way of cleaning out the bottom of the barrel.