In Sunday’s NY Times Magazine, Gary Taubes argued that epidemiology does not provide a good basis for health decisions because, he claimed, it is often wrong. By “wrong” he meant that experiments were more pessimistic: things that seemed to help based on surveys turned out not to help, or to help much less, when experiments were done. A 2001 BMJ editorial disagrees:

Randomized controlled trials and observational studies are often seen as mutually exclusive, if not opposing, methods of clinical research. Two recent reports, however, identified clinical questions (19 in one report, five in the other) where both randomized trials and observational methods had been used to evaluate the same question, and performed a head to head comparison of them. In contrast to the belief that randomized controlled trials are more reliable estimators of how much a treatment works, both reports found that observational studies did not overestimate the size of the treatment effect compared with their randomized counterparts. . . . The combined results from the two reports indeed show a striking concordance between the estimates obtained with the two research designs. . . . The correlation coefficient between the odds ratio of randomized trials and the odds ratio of observational designs is 0.84 (P<0.001). This represents excellent concordance.
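
To be concrete about what is being compared: each study of a given clinical question, whether randomized or observational, yields an odds ratio, and the two reports paired those estimates question by question. Here is a minimal sketch of that quantity with made-up counts (not data from either report):

```python
# Minimal illustration with made-up counts, not data from the two reports.
# Each study of a question, randomized or observational, reduces to a 2x2 table
# of treatment/exposure vs. outcome; the odds ratio is its cross-product ratio.
def odds_ratio(events_treated, no_events_treated, events_control, no_events_control):
    return (events_treated / no_events_treated) / (events_control / no_events_control)

# One hypothetical clinical question, estimated both ways:
or_randomized    = odds_ratio(30, 170, 45, 155)    # from a randomized trial
or_observational = odds_ratio(140, 860, 210, 790)  # from an observational study

print(or_randomized, or_observational)  # similar values; the two reports paired such
                                        # estimates across 19 and 5 questions respectively
```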

Here is the data:

[Figure: experiment vs. observation]

They should have reported that the slope of a line fitted to the points is close to 1. Unlike the correlation, that slope is relevant to their main question: whether surveys tend to find larger odds ratios than experiments.
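
The distinction matters because a correlation can be nearly perfect even when one design systematically inflates effect sizes. A small sketch with hypothetical numbers (not the actual study data) makes the point:

```python
# Hypothetical numbers only, to illustrate why the slope, not the correlation,
# answers the question "do observational studies report larger effects than trials?"
import numpy as np

rng = np.random.default_rng(0)

# Made-up log odds ratios from randomized trials for 19 clinical questions.
log_or_trials = rng.normal(loc=-0.4, scale=0.5, size=19)

# Observational estimates exaggerated by 30% on the log scale, plus a little noise:
# a systematic bias that a correlation coefficient alone would not reveal.
log_or_observational = 1.3 * log_or_trials + rng.normal(scale=0.05, size=19)

r = np.corrcoef(log_or_trials, log_or_observational)[0, 1]
slope, intercept = np.polyfit(log_or_trials, log_or_observational, 1)

print(f"correlation: {r:.2f}")        # still extremely high
print(f"slope: {slope:.2f}")          # about 1.3, revealing the exaggeration
print(f"intercept: {intercept:.2f}")  # about 0
```

A slope near 1 (and an intercept near 0), which is what the figure suggests, is what actually supports the editorial's conclusion that observational studies do not overestimate treatment effects.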

A later (2005) paper by John Ioannidis, one of the authors of the 2001 paper, claims to explain, in the words of its title, “why most published research findings are false.” The above data suggest that most published research findings in Ioannidis’s area are accurate.