What is on the mind of all the physicists all over the world right now? Quantum gravity? Global warming? No. It is the same thing that is on the minds of all the other scientists in academia, too. Impact factor (IF)! How can I get my name on a paper in a high-IF journal – that is the question. Publish Or Perish – POP science, popular science.

I came across a new paper (and some older ones) these days that are definitely worth a read if you are interested in the precarious state of science and the fact that it is likely getting worse:

"Nefarious numbers" by Douglas N. Arnold and Kristine Fowler just appeared on the archive:

“We demonstrate that significant manipulation of the impact factor is being carried out by the editors of some journals and that the impact factor gives a very inaccurate view of journal quality, which is poorly correlated with expert opinion.”

They have a (not so) hidden agenda, as they seem to imply that the problem is almost exclusively one of mathematics journals. It feels somewhat as if D. Arnold would be quite happy if IF were flawed the other way around instead, rather than fixed or abandoned. Problems with extreme editors like El Naschie and He may be more acute in the field of mathematics, but the poor correlation between IF and quality certainly holds in most fields (and in some, like philosophy, a high IF guarantees low quality).

Another rather recent (2009) paper about the uselessness of IF, worth a read and by the same author: Douglas N. Arnold: “Integrity Under Attack: The State of Scholarly Publishing” Newsjournal of the Society for Industrial and Applied Mathematics (SIAM News) 42(10) (2009)

And two good papers about plain cheating and the spinning of results, which nowadays seem to be necessities in science:

D. Fanelli: “How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data” PLOS One 4(5) E5438 (2009)

And: I. Boutron, S. Dutton, P. Ravaud, D. G. Altman: “Reporting and Interpretation of Randomized Controlled Trials” JAMA 303(20), 2058-2064 (2010)

“Conclusion: In this representative sample of RCTs published in 2006 with statistically nonsignificant primary outcomes, the reporting and interpretation of findings was frequently inconsistent with the results.”

Their “frequently” is more than 40% of the papers! So, for anybody who is not an expert in a scientific field, picking up a scientific paper, the chances are close to 50-50 that you have rubbish in your hands.
This confirms my personal insight gained from working in the few fields that I know about. So much for the basis on which we can found our insistence that politicians or the public should read more about science and trust the conclusions of scientific journals even if they do not grasp the details, because the details passed peer review. If I may as well flip a coin, why read those boring papers? Cherry picking – why not?

Previous entries in my series on cheating in science:

Golden Rules of Error Analysis

More on the Usual Cheating in Science

POP Physics: The Free Nanotech Scientific Journal Article Generator