    How Reliable Is Peer Review?
    By News Staff | May 21st 2010
    Peer review is universally used to ensure the quality of scientific research, but the process may not be as reliable as people assume. A new study in PLoS ONE suggests that reviewers' recommendations may not be much more reliable than a coin toss.

    "Peer review provides an important filtering function with the goal of insuring that only the highest quality research is published," said William Tierney, M.D., a Regenstrief Institute investigator and study co-author. "Yet the results of our analysis suggest that reviewers agree on the disposition of manuscripts – accept or reject – at a rate barely exceeding what would be expected by chance. Nevertheless, editors' decisions appear to be significantly influenced by reviewer recommendations."

    During the study period, the editors of the Journal of General Internal Medicine (JGIM) sent 2,264 submitted manuscripts out for external review, each to two or three reviewers. These manuscripts received a total of 5,881 reviews from 2,916 reviewers. Twenty-eight percent of all reviews recommended rejection.

    However, the journal's overall rejection rate was much higher: 48 percent. When all of a manuscript's reviewers agreed on rejection (which occurred for only 7 percent of manuscripts), the rejection rate was 88 percent; even when all reviewers agreed that the manuscript should be accepted (48 percent of manuscripts), the rejection rate was still 20 percent.
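    As a rough illustration of the "chance" baseline Tierney mentions (the independence model here is an assumption for illustration, not the paper's actual analysis): if 28 percent of reviews recommend rejection and two reviewers decided independently, they would agree about 60 percent of the time by luck alone. The Python sketch below works through the arithmetic; the 65 percent observed-agreement figure is hypothetical.

        # Back-of-the-envelope model of the "chance" baseline, assuming two
        # independent reviewers and a binary recommendation (reject vs. not).
        # The 28% rejection rate comes from the article; independence and the
        # 65% observed-agreement figure below are illustrative assumptions.
        p_reject = 0.28
        p_accept = 1 - p_reject

        # Two independent reviewers agree by chance when both say reject or
        # both say accept.
        p_chance = p_reject**2 + p_accept**2
        print(f"Agreement expected by chance alone: {p_chance:.2f}")  # ~0.60

        def cohens_kappa(p_observed: float, p_chance: float) -> float:
            """Chance-corrected agreement: 0 means no better than chance."""
            return (p_observed - p_chance) / (1 - p_chance)

        # "Barely exceeding chance" means observed agreement sits only a
        # little above the baseline, so kappa stays near zero.
        print(f"Kappa at 65% observed agreement: {cohens_kappa(0.65, p_chance):.2f}")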

    "We need to better understand and improve the reliability of the peer-review process while helping editors, who make the ultimate publish or not publish decision, recognize the limitations of reviewers' recommendations," said Dr. Tierney, who served as JGIM co-editor-in-chief from 2004-2009.

    "Published research is becoming a more and more significant factor in scientific dialogue. Physicians and other researchers are no longer the only readers of medical studies. Patients and their families and friends now regularly access medical literature. This makes the review process even more important."


    Citation: Kravitz et al., 'Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?', PLoS ONE 5(4): e10072. doi:10.1371/journal.pone.0010072

    Comments

    Gerhard Adam

    In general, I think the concept of peer review is overrated. Any acceptance or rejection of an article ultimately comes down to whether individuals are convinced by the evidence, and it likely reflects their own personal opinions and biases (even the most objective individual will still be influenced by their own ideas and "theories").

    Until an experiment is repeated or a theory tested, what is actually published is little more than an idea, good or bad.  While it may be useful to have a consensus of opinions to help weed out the truly terrible ideas, it should be clear that even the most preposterous "theories" will still find a way to be heard.

    As a result, information presented to the public doesn't really require peer review, since that is simply an "argument from authority" position (presuming the general public isn't capable of verifying the data or information provided). Anyone engaged in research is likely already pursuing their own ideas and may well use published information to bolster or refute their work, but in the end what is published is largely immaterial until a sufficient consensus of experiments makes a more permanent determination.

    In the end, peer review is simply the first round in which an idea can be criticized and potential problems exposed. However, no scientist alive is so free of career pressures, livelihood concerns, and pet ideas as to ever provide a truly objective assessment. It therefore matters little what process is used, be it objective evaluation or the toss of a coin; what matters is only that ideas are subjected to some degree of scrutiny and that there is follow-up to ensure repeatability. Publication simply exposes ideas to a much broader range of individuals than any other mechanism can, but it doesn't make an article any more true until it is repeatedly confirmed.

    Mundus vult decipi ("the world wants to be deceived")

    It is probably better to use a scale of ratings for peer review, such as 1 to 5, worst to best. From past experience with reviewed items and reviewer scorings, one can estimate the probabilities of these ratings. If you know that one reviewer has given a specific rating to an item under review, you can likewise estimate the probabilities of other reviewers giving each alternative rating, again from past experience. You can then estimate the information in a rating as the difference between the average uncertainty (entropy) of the a priori distribution and that of the a posteriori distribution.

    This approach, which I have used in the past, does away with the assumption that reviewers define "ground truth" and gives a basis for determining how many reviews to seek per item to be reviewed and how much credence to give to their reviews.
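    A minimal numeric sketch of the entropy calculation this comment describes, assuming a made-up joint distribution over two reviewers' 1-to-5 ratings (in a real setting the probabilities would be estimated from past review data, as described above):

        import numpy as np

        # Sketch of the commenter's information estimate, using a hypothetical
        # joint distribution over two reviewers' 1-5 ratings. In practice these
        # probabilities would come from past reviews, not this toy model.
        ratings = np.arange(1, 6)
        # Hypothetical joint: reviewers tend toward similar ratings.
        joint = np.exp(-np.abs(ratings[:, None] - ratings[None, :]))
        joint /= joint.sum()

        def entropy(p):
            """Shannon entropy in bits, skipping zero-probability cells."""
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        # A priori uncertainty about reviewer B's rating.
        h_prior = entropy(joint.sum(axis=0))

        # Average (a posteriori) uncertainty about B's rating once A's rating
        # is known: weight each conditional entropy by P(A = i).
        p_a = joint.sum(axis=1)
        h_post = sum(p_a[i] * entropy(joint[i] / p_a[i]) for i in range(5))

        # Information carried by one rating = drop in average uncertainty
        # (the mutual information between the two reviewers' ratings).
        print(f"Prior entropy:          {h_prior:.3f} bits")
        print(f"Posterior entropy:      {h_post:.3f} bits")
        print(f"Information per rating: {h_prior - h_post:.3f} bits")

    The "information per rating" figure is the mutual information between the two reviewers' ratings; when it is near zero, a second review tells an editor almost nothing beyond the first, which gives one handle on how many reviews are worth soliciting.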