By now you'll be familiar with publication bias: the phenomenon where studies with boring, negative results are less likely to get written up or published. You can estimate the extent of this using a tool such as a funnel plot. The principle is simple: expensive landmark studies are harder to brush under the carpet, but small ones can disappear more easily. So split your studies into "big ones" and "small ones": if the small studies, averaged out together, give a more positive result than the big studies, then maybe some small negative studies have gone missing in action.
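To make that concrete, here is a minimal sketch in Python. All the effect sizes and sample sizes are made up for illustration, and the 100-participant cut-off for "big" is an arbitrary choice, not anything from the article:

```python
# Crude "big vs small studies" check for small-study effects,
# the same asymmetry a funnel plot would show graphically.
import numpy as np

# (effect size, sample size) for a hypothetical set of studies
studies = [(0.45, 20), (0.60, 25), (0.45, 30),      # small studies
           (0.15, 400), (0.10, 350), (0.20, 500)]   # large studies

effects = np.array([e for e, n in studies])
sizes = np.array([n for e, n in studies])

small_mean = effects[sizes < 100].mean()
large_mean = effects[sizes >= 100].mean()

print(f"mean effect, small studies: {small_mean:.2f}")
print(f"mean effect, large studies: {large_mean:.2f}")
# If the small studies average noticeably higher, some small
# negative studies may have gone missing in action.
```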

Professor John Ioannidis took a different approach. He collected a large, representative sample of these anatomical studies, counted up how many positive results they got, and how positive those results were, and then compared this to how many similarly positive results you could plausibly have expected to detect, simply from the sizes of the studies.
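A rough sketch of that kind of comparison (not Ioannidis's actual method or data): estimate each study's statistical power at some assumed true effect size, sum those powers to get the number of positive findings you would plausibly expect, and set that against the number actually reported. Every figure in the snippet below, from the sample sizes to the observed count of positives, is hypothetical.

```python
# Expected vs observed positive findings, given each study's size.
import numpy as np
from scipy.stats import norm

alpha = 0.05
true_effect = 0.3                                   # assumed true standardized effect
per_group_n = np.array([20, 25, 30, 40, 50, 60, 80, 100])  # per-group sample sizes
observed_positive = 6                               # positives actually reported

# Approximate power of a two-sided, two-sample z-test for each study
z_crit = norm.ppf(1 - alpha / 2)
ncp = true_effect * np.sqrt(per_group_n / 2)
power = norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

expected_positive = power.sum()
print(f"expected positive findings: {expected_positive:.1f}")
print(f"observed positive findings: {observed_positive}")
# Many more observed positives than expected suggests missing negative
# results, or analyses tilted towards significance.
```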

The answer was stark: even being generous, there were twice as many positive findings as you could realistically have expected from the amount of data reported on.

From "Researchers don't mean to exaggerate, but lots of things can distort findings", Ben Goldacre, the Guardian.