In Adventures in Ethics and Science, Janet Stemwedel asks some questions about peer review — its purpose and its effect — prompted by strong online criticism of a peer-reviewed paper that was published with at least some significant review comments ignored.

One particularly interesting statement that Janet makes is in the second sentence of this paragraph:

As Bora was the "editor" of the paper rather than an official referee of the paper, it’s not clear whether the journal editors overseeing the fate of this submission actually forwarded Bora’s critiques onto the author, or if they did forward the critiques to the author but indicated that they wouldn’t count. Myself, if I were the author of the manuscript, I think I’d want more prepublication feedback, not less, on the theory that this would help me produce a stronger paper.

Now, there are certainly some authors — perhaps many, and perhaps some of them are even graduate students — who, like Janet, are eager for critical feedback from peer reviewers, the better to improve their papers, to make them faster, higher, stronger. Most of the authors I know, though, do not ride in that bobsled.

No, what I see, mostly, are authors who look at peer review as, to move the Olympics analogy to the summer games, a set of hurdles to jump and bars to clear. Far from looking forward to suggestions for improvement, they are hoping for minimal required changes to get the paper published. Reviews that point out experiments that should have been done, data points that are missing, and analyses that are flawed or incomplete are decidedly not happily received, and will usually evoke not thanks to a thorough reviewer, but unkind epithets for a picky jerk.

To be sure, this partly comes from the fact that knowing what reviewers will expect has already steered the paper, making it better than it would have been without the “threat” of failure in peer review. I’ve heard many authors note that they’ll have to do such-and-such experiment, include data on this or that, clarify the explanation of the methodology, beef up the evaluation section, or review more related work, lest the review process require it later (or, worse, reject the paper outright).

I agree with the sentiments of commenter number 5 to Janet’s post:

I have always operated under the impression that publication in a peer-reviewed journal constitutes an endorsement that the paper in question is reasonable, complete, and methodologically sound.

Peer review is doing quality control, making sure there are no obvious problems, holes, inconsistencies, or the like. It is not, though, endorsing the analysis and conclusions — the opinions of the researchers.

And that’s an important point. Research will often show particular patterns and correlations, but interpreting those and concluding cause and effect from those correlations is a tricky process. It’s fair for authors to include what they think they see, and it’s fair for reviewers to call the authors on it if they disagree. But I would not like to see a paper with clean methodology and solid results be rejected for this reason. The accuracy (or not) of the authors’ interpretation is what the community discussion of the published paper should be hashing out.

This all gets tricky when a paper has mixed reviews. Ideally, if we have three reviews they’ll all hover somewhere in the same area, perhaps giving somewhat differing recommendations, but basically agreeing on the quality-control aspects. But things are sometimes less than ideal, and it’s not uncommon to have one reviewer who loves the paper and one who hates it — one who gives a recommendation to publish immediately with no changes, and one who just votes to reject, or to re-review only after major changes are made.

It’s usually up to the editor to resolve things at that point, and the editor’s prerogative may be to accept the paper despite the dissenter’s serious — and, perhaps, quite valid — objections. That appears to be what happened with the paper that prompted this discussion (though, oddly, one of the strongest dissenters seems to have been the editor, so maybe none of the official reviewers registered a strong objection — which seems unlikely, because the methodology in the paper is seriously flawed, to the point of being entirely non-scientific). Different journals have different rules about the role of an issue’s editor, and the extent to which the editor can override one or more reviews.

The whole point of peer review is, after all, not to put the decision into the hands of a single person.