    Your Thoughts On Peer Review Needed
    By Tommaso Dorigo | June 21st 2010

    On July 4th I will speak at ESOF 2010 in Torino (Italy) on the topic of "What's up with peer review: The future of peer review in policy, research and public debates", on a panel which includes Philip Campbell, editor-in-chief of Nature (the magazine, not the bitch), and Adrian Mulligan from Elsevier.

    As you might imagine, the topic is broad and spans several levels. Each of us will have 8 minutes to make a few points, and then a debate moderated by Tracey Brown (from Sense About Science, the organizer of the session) will ensue.

    Here is the "abstract" for the session:

    What is the future of peer review? What does it do for science and what
    does the scientific community want it to do? Should it detect fraud and
    misconduct? Does it illuminate good ideas or shut them down? Does it
    help journalists report the status and quality of research? Why do some
    researchers do their bit while others make excuses? And why are all these
    questions important not just to journal editors, but to policy makers
    and the public? In September 2009 Sense About Science, in association
    with Elsevier, published the latest results from a worldwide survey of
    100,000 scientists' preoccupations and preconceptions as both authors
    and reviewers of scientific papers. The survey explores whether
    researchers' attitudes to peer review are changing and whether there is
    a gap between their perception of peer review and the reality of what
    it can do. These insights will provide the baseline for discussions on
    how the system needs to evolve to cope with the challenges it faces,
    such as the expansion of the international research community, the
    issue of fraud, the development of open access, and the role peer
    review plays in science policy and in public debates about the quality
    of science. In this session a panel will respond to these latest
    results and discuss the future of peer review and what the
    international community can do to address the challenges facing it.

    Now, these are interesting topics, on which I am sure some of you have quite definite ideas and opinions. Can I ask you to contribute by offering, in the thread below, your own thoughts and proposals for what I should report at the session? Your help is appreciated. To guide the discussion, here is a short list of issues I would like to collect thoughts on:

    • Fraud detection: does it really work when peer review is done by scholars who often see their reviewing duties as a hindrance to more interesting obligations?

    • Loosely connected to this issue is the (arguably broader) topic of crackpottism, anti-crackpottism, and the degree to which we want "true science" to be sheltered from non-scientific attitudes. Blogs have beaten this topic to death, but in connection with the session above there might be some ideas worth bringing up.

    • Do we need peer review when a preprint carries the signatures of hundreds, or even thousands, of distinguished scientists collaborating in huge experiments? Imagine a single reviewer rejecting a paper with 2500 signatures (this happens). Is it not ridiculous? What do you think?

    Comments

    One suggestion that takes care of most problems in peer review: Publishing by protocol.
    1) Bad science is never conducted because the scientist knows ahead of time that it will not be published.
    2) Publication bias is eliminated because publication is not contingent on significance.
    3) Collecting hundreds of measures and only reporting a few is not possible (see the sketch after this list).
    4) Takes the 'con' out of econometrics, because data analyses must be specified ahead of time rather than allowing selective reporting of significant regressions.
    5) Makes for better science because promising research can be refined by back-and-forth between reviewers and authors before the research is conducted.
    6) There is no incentive to 'game the system', only to be an honest scientist.
    7) Makes the reviewer's job more interesting, as the reviewer can make suggestions to the author to improve the quality and importance of the research rather than merely being a passive gatekeeper of publication.
    8) Tracks important methodological details precisely, allowing for replication.
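
    To illustrate points 3 and 4, here is a minimal simulation (a toy sketch, purely illustrative: the "measures" are pure noise) showing how collecting many measures and reporting only the "significant" ones manufactures findings out of nothing:

        # Toy sketch: 100 null "measures", each tested against zero.
        # Without a pre-registered protocol, an author could report
        # only the handful that come out "significant" by chance.
        import random
        import statistics

        random.seed(1)
        N_MEASURES, N_SUBJECTS = 100, 30

        false_positives = 0
        for _ in range(N_MEASURES):
            sample = [random.gauss(0.0, 1.0) for _ in range(N_SUBJECTS)]
            mean = statistics.mean(sample)
            sem = statistics.stdev(sample) / N_SUBJECTS ** 0.5
            if abs(mean / sem) > 1.96:   # crude two-sided z-test at the 5% level
                false_positives += 1

        print(false_positives, "of", N_MEASURES, "null measures look significant")
        # Expect roughly 5 spurious "findings" on average (5% of 100).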

    -Alex

    Hfarmer
    Fraud detection: does it really work when peer review is done by scholars who often see their reviewing duties as a hindrance to more interesting obligations?

    No, it does not work in peer review. We see this in the case of Jan Hendrik Schön: his papers were peer reviewed and got into prestigious journals. Only after other researchers, who were not the reviewers, really looked at them was his fraud made clear as day. Peer review is not automatically good for fraud detection.

    Loosely connected to this issue is the (arguably broader) topic of crackpottism, anti-crackpottism, and the degree to which we want "true science" to be sheltered from non-scientific attitudes. Blogs have beaten this topic to death, but in connection with the session above there might be some ideas worth bringing up.


    Peer review is of limited use as an anti-crackpot measure. Its main limitation here is that reviewers are often the very people who came up with the currently accepted models or results - models they will tolerate only minor modifications to, and no serious questioning of the foundations of, as "true science". It is human nature to want to be right and to protect one's position. When some young whippersnapper comes along with results showing that the well-established models of the reviewer's generation are fatally flawed, that reviewer will sometimes brand those strange new ideas as crackpot.


    Do we need peer review when a preprint carries the signatures of hundreds, or even thousands, of distinguished scientists collaborating in huge experiments? Imagine a single reviewer rejecting a paper with 2500 signatures (this happens). Is it not ridiculous? What do you think?


    On the face of it, I would say that peer review is not needed at that point. The thing with papers like that is: how many of the people listed actually had a real role in the research? I know some people who were summer interns at Argonne and Fermilab who got publication credit on such papers. Personally, if I were a peer reviewer I would be a real a-hole and ask that the author list be cleansed of such names, just so I had a way of getting back at the world for not having had such opportunities. (^_^)


    As long as paper-based journals are used, peer review needs to be used. So OK, here is what I would propose: why not open up peer review to the scientific community as a whole? Any paper worthy of being sent for review would be put online and subjected to review by interested scientists for, say, six months. If it got good marks overall and positive comments, then and only then would it be committed to paper. Oh, I'll bet I know the real reason such a thing would never be done: it wouldn't be exclusive enough for a lot of people.
    Science advances as much by mistakes as by plans.
    lumidek
    The primary purpose of peer review (in the natural sciences), whenever it works and is useful (which is not always), is to eliminate completely wrong papers, and to suppress completely wrong and unsubstantiated statements in papers....

    This extra filter may increase the average quality of the published papers. But it's never a universal recipe for success and quality. Bad and worthless papers may still get published while the peer review may also kill some papers that contain valuable stuff. There's no perfect recipe to eliminate these two kinds of mistakes.

    In some fields and some periods of time, the community could have worked without any peer review. However, it's clear that ultimately too much rubbish gets into the system and needs to be removed, or disfavored relative to the stuff that is more promising or more valuable - or at least that has a higher chance of satisfying these conditions.

    How much work should be put into peer review, what fraction of the papers should be suppressed etc. are quantities that depend on the discipline and context. Some values may make the community more efficient than others. It's silly to imagine that there exists a predetermined universal answer to the "optimum rate" or "optimum amount of time spent with peer review".

    When peer review becomes a tool of a clique to protect some predetermined views, instead of judging papers by their content and quality, peer review may become counterproductive rather than helpful. That's the case of climate science and maybe several other disciplines. Obviously, in such cases, there's no easy "internal" solution of this problem because the bulk of the community has become corrupt. The only workable solutions involve interventions from outside.

    Referees have no business imposing predetermined ideological criteria - or even their own fixed ideas of how unusual papers should be - on their judgements. Referees shouldn't systematically promote mavericks, and they shouldn't systematically fight against mavericks either. Only particular statements - their validity, justification, and level of interest for other researchers - should play a role.

    Peer review is just one method for the community or a journal to try to increase the quality of its scientific production. It may be replaced by review from a designated editor - who is supposed to be competent (and who may have competitors at other journals) - and in many other ways. It's absurd to imagine that some particular tradition of how peer review works is a necessary prerequisite for science. After all, ideal scientists don't need any peer review. They can do science well by themselves.

    Concerning peer review of Fermilab/CERN papers with thousands of authors: of course a single reviewer may be right while the thousands (well, the small core that actually led the paper) are wrong. In fact, I think that such a situation is very frequent - almost comparable to 50% if you choose smart enough referees. On the other hand, sociologically speaking, it is clear that the large collaboration could feel irritated. In this sense, it makes little sense to try to "peer review away" papers written by large collaborations that are effectively thought to have a [near] monopoly.

    There can still be critical feedback and competition - and that is important for science to go on - but it is more reasonable if such competition takes place after publication rather than before it.

    Mathematicians have incredibly high standards of peer review. I was completely unfamiliar with them when I received my first math-journal article to review. It's a different culture, linked to their generally higher standards of rigor. The impact of the papers may still be pretty small, but the reliability of the proofs etc. should be guaranteed to a higher degree than in physics. The faster-moving a discipline, the harder it is to imagine introducing a similar system - especially because the referees would be shocked by how much work is expected of them "for free".
    Hfarmer
    I find myself agreeing with the illustrious Dr. Motl.   That's the last sign before the Apocalypse. 
    Science advances as much by mistakes as by plans.
    Hank
    Peer review works well, just not the "5 standard deviations" well that some might like.

    These are both multi-billion-dollar corporations that insist quality costs money for what they do, but not for what the scientists do. If they really cared about peer review they would improve the people doing it by paying them - when people are paid, there is accountability, and everyone will accept it. It's no difficulty at all to pay people and maintain anonymity, the same way they do it now without paying them.
    dorigo
    My hearty thanks to all of you for your thoughts, which I need to digest. I will get back to you with some commentary.

    Hank, I have one quick one though. I think you are right on the money ;-) and I think I have a solution. Let me add, a brilliant one. So brilliant I think it is impossible that it has not been thought of before:
    - peer reviewers do it for the improvement of science, but often they do it as a lower priority because they are busy;
    - sloppy reviews make the system ineffective, and they should be avoided;
    - paying reviewers is a hard option to implement because publishers do not like it; authors do not want to pay for the review either;
    - peer reviewers would like some recognition; they could use it in their curricula, just as much as they use their publication lists.

    My solution: if the reviewer interacts positively with the authors and accepts the paper for publication, he gets mentioned in the paper below the author list: "article reviewed by Dr. Lubos Motl, Harvard University". If he rejects the paper, or if he does not want to be mentioned, he remains anonymous.

    I will bounce this idea off the editors of Nature and Elsevier during the panel discussion.

    Cheers,
    T.
    lumidek
    I don't really think that reviewers should get "automatic credit" just for being reviewers. Whether they deserve to be praised or chastised would have to be determined by another criterion - for example, by other meta-reviewers. ...

    Of course, you may choose the latter to be the authors themselves. But that's also risky, because in that case the reviewers would be specifically encouraged to flatter the authors, which is another unwelcome bias.

    Also, most review jobs are at least an order of magnitude smaller tasks than co-authoring the paper itself. In some special circumstances a referee may do enough useful work to be a co-author, if not a lead author, of the paper :-) - but if that's so, it should probably be determined by the original authors, or (s)he should write his or her own paper.

    Moreover, most reviewers would likely refuse to review unless their anonymity were preserved. I would not be sure about myself in a similar situation.
    Fred Phillips
    Not much use arguing the absolute virtue of peer review; we must ask, what is the alternative? To let a financially interested corporation judge the science? A government functionary? An autocratic editor-in-chief?

    Then if we agree that imperfect peer review beats the alternatives, how can we ensure it happens? Perhaps rules for reviewers need to be like jury duty in the US - a matter of law that you must serve if called, and your employer cannot dock you for the time spent.

    Tommaso, if only because it's so current, read the June 13 article "We Must Stop the Avalanche of Low-Quality Research" by Mark Bauerlein, Mohamed Gad-el-Hak, Wayne Grody, Bill McKelvey, and Stanley W. Trimble, in the Chronicle of Higher Education, before you go to Torino.
    Hank
    Not much use arguing the absolute virtue of peer review; we must ask, what is the alternative? To let a financially interested corporation judge the science? A government functionary? An autocratic editor-in-chief?
    That is the case now. There is no small independent publisher that is prestigious in publishing; they are all million- and billion-dollar corporations. Government functionaries decide what gets funded based on studies they may not fully understand, and editors-in-chief have the final say on what goes out in journals.

    In those 3 examples, the people are paid. The only ones not paid, in currency of any kind, are the reviewers doing the bulk of the post-research work. I don't think it means a sacrifice in anonymity, as Lubos is concerned about - these corporations keep reviewers anonymous now (mostly - it isn't as if most researchers don't know the style of their peers, so they often have a clue, and requests to add certain citations are a giveaway) - so it is just a question of how you get better people wanting to do it, and not just as a function of having some time to kill.
    Fred Phillips
    Yes, but...

    Elsevier never, ever second-guesses me (as editor) about what gets published in the journal. In turn it's extremely rare that I second-guess area editors and reviewers.

    And next month I'm on a government review panel for research proposals, with other academics, and the agency will fund what we tell 'em to fund, up to the budget they've specified. (At least, that's what they say they're gonna do.) We get paid a token amount and have expenses covered. But no pressure is exerted on us to favor this proposal or that one.
    I think every paper, no matter how crazy, should be published online. Then there should be a process of open peer review from public comments and ratings. The reviews would have to be reviewed too, of course. A score would be built up from the ratings, gauging the value and correctness of the paper according to the various criteria the reviewers judge.

    I think people would need encouragement to do reviews. That could be handled in various ways. For example, reviewers could be given credits for good, valuable reviews, with the credits helping the reviewer's own papers attract more reviewers. Being a highly rated reviewer would also be a good mark on a CV.

    The advantage of this system is that if a paper that originally seemed too crazy to be right is eventually found to be important, it will always be available, and someone can come back to it and give it a better review later.

    lumidek
    Dear PhilG,...

    surely you're joking when you talk about "peer reviews from public comments and ratings". 99.999% of the public has no idea about science. Its ratings would be noise or worse, and it would kill pretty much all valuable science. Instead, pure junk would thrive, for completely irrational reasons, and I don't have to tell you which authors already existing in today's institutionalized science may be considered junk. There would be many more of them.

    You can't fix this fundamental problem by adding even more reviewers, and reviewers of reviewers, assuming they're mostly from the public. They still have no idea and they won't have any idea, regardless of whether they're called authors, reviewers, reviewers of reviewers, or reviewers (of reviewers) to the power of 137.

    A necessary prerequisite for science to work is *isolation* from pressure by the public - by the bulk of it that is not interested in science and that does not respect (or is unable to respect) its rules. Rating by all readers in the style of PageRank is an interesting idea, but be sure that by the moment most "votes" on whether a paper in a discipline spreads (or is accessible) are submitted by the general public, science will be dead.

    Cheers
    LM
    I didn't mean "public" in the sense that the public would be doing the reviewing rather than scientists. I meant it in the sense that the reviewing would be publicly open for anyone to see. However, the system I describe would be open for anyone to contribute to, both as authors and as reviewers. The idea is that contributions from people who do not know what they are talking about would be rated low, so that people looking for research endorsed by the knowledgeable scientific community can filter them out. Others may use different criteria to decide what they want to read. This principle would apply to the reviews as well as the papers.

    The concept is that people who are professional scientists would naturally get good ratings as reviewers and authors because of the quality of their work, not because of their academic position or qualifications. Professional scientists would do most of the reviewing because they would care about the positive effect it has on their careers.

    lumidek
    Dear PhilG, I think that this popularity contest of yours would be (even) less meritocratic than appraisal by "academic positions or qualifications". The latter is not perfect, but in scientific disciplines that work, it actually mostly boils down to the decisions of people who know what they're talking about. In your regime, it would not....

    You just don't seem to appreciate how extremely selective the appraisal has to be kept in order for the rating to remain meaningful in advanced disciplines.

    If you consider e.g. a paper on topological string theory, there are roughly 20 people in the world who have ~50% qualification to judge a new paper and only 5 people who can actually look at the new papers as peers (or superiors). Chances are that you won't force all (and not even most) of them to reveal their rating.

    So even if you dilute the votes of others by a factor of 1,000 or more, your rating will be dominated by noise and by unscientific criteria and interests. It's pretty much noise even if you only ask the community of string theorists (about a topological string theory article). Another important observation you're missing is that there is no "universal expertise". People have different expertise in different things. If your algorithms don't properly reflect this, they will be useless or worse.

    Experimental particle physics may be a bit less selective, but it's still selective enough to make your inclusive algorithms completely unrealistic. A D0 paper may have thousands of authors, but only 50 of them really understand the cutting-edge problems of the paper at an active level - well enough to evaluate and/or improve the paper - and there are perhaps 50 others in the world. You will only get a few of them to vote. Almost everyone else contributes noise.

    The scientists who actually do meaningful cutting-edge science will simply ignore any website whose author list or reviewer list is dominated by the public. And they do so for a very good reason. It's just not possible to do science in the populist way you are imagining.
    Lubos, your points are all valid, and I agree that it looks like it would be very difficult to get such a system to work. Part of the problem is that academic scientists might not like making their reviews public, but at least they would be accountable for them.

    However, I think such a system could still work. With Google's PageRank, the weight increases rapidly for important pages, and the votes of a small number of good pages can easily swamp the voice of less important ones. If the user interface were right, such a system would work for science reviews. Just look at how mathoverflow.net has worked out. If someone had described such a site before it existed, I would have been very skeptical about its success. Now it has a high quality of discussion, and some great mathematicians take part despite its openness.

    What I described would be something on a much grander scale, and I think it would be difficult to make work, but if some clever people worked out the right rules it might just fly. At present a lot of academics complain about problems with peer review - its cost and its shortcomings - yet they are not bold enough to try out alternatives that are not just more of the same thing. To make the idea concrete, here is a minimal sketch of the kind of reputation-weighted rating I have in mind (hypothetical data and names; a toy, not a real system).
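
    A reviewer's vote counts in proportion to the standing their own work has earned, and scores and reputations are iterated until they reinforce each other, PageRank-style:

        # Toy model: ratings[reviewer][paper] = score in [0, 10].
        # Nobody rates their own paper.
        ratings = {
            "alice": {"p2": 3, "p3": 7},
            "bob":   {"p1": 8, "p3": 6},
            "carol": {"p1": 9, "p2": 2},
        }
        author = {"p1": "alice", "p2": "bob", "p3": "carol"}

        reputation = {r: 1.0 for r in ratings}   # start everyone equal
        for _ in range(50):                      # iterate toward a fixed point
            # a paper's score is the reputation-weighted mean of its ratings
            score = {}
            for p in author:
                num = sum(reputation[r] * rv[p] for r, rv in ratings.items() if p in rv)
                den = sum(reputation[r] for r, rv in ratings.items() if p in rv)
                score[p] = num / den if den else 0.0
            # a reviewer's reputation follows the score of their own paper
            reputation = {author[p]: max(score[p], 0.1) for p in author}

        print(score)       # converged paper scores
        print(reputation)  # converged reviewer weights

    The point of the iteration is that a highly rated author's opinion counts for more, which is how a small number of experts could outweigh a large crowd of casual raters.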

    lumidek
    Dear Phil, don't get me wrong: PageRank-like algorithms are things of the future, and I am convinced that at some point they will play a role in publishing, too....

    If you want to create such a system, and I am sure that you have the potential and the IT skills needed for that, you must just abandon your particular biases and activist goals for a moment. Your goal should be to tune a system that gives researchers what they actually want - a reasonable ensemble of a readable number of papers that they really care about and find important. It's not important whether there are a few bad ones in it; it's more important that the ensemble not be completely flooded with nonsense, and that the elimination of good papers be unlikely.

    If a candidate algorithm produces something completely different for hep-th than what gets into the arXiv, that is a signal of a flaw in your candidate algorithm, not a flaw in the system behind the arXiv.
    dorigo
    Lubos, I agree that we have to avoid noise in the rating of papers, but there are one or two interesting ideas in what PhilG writes. Maybe all it takes to improve his model is to have reviewers' votes count with a weight proportional to their h-index, or something like that.

    As for publishing online, that is however already in effect. People can publish on viXra if they want, or on their private web pages - such outlets are a dime a dozen. I do not think that gems go undiscovered this way.

    Cheers,
    T.
    lumidek
    Dear Tommaso, 
    I agree that these are good ideas - he's not the only one to have them. But even the rough ideas of how it would work are completely unrealistic. In particular, proportionality is surely not enough to fix this systematic problem. Against a weight of 90 (Witten's h-index, or so) you will easily find 10,000 people with h = 1 who could almost completely eliminate Witten's perspective. The noise would still win.

    If you said "h-index squared", which scales like the number of citations (because "h" measures the size of a maximum triangle you can fit into the distribution), it would be more reasonable. But a random ad hoc formula doesn't really offer a systematic solution to the problem of counting.
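
    To put numbers on this - a toy comparison, with purely illustrative figures: one expert with h = 90 (Witten-like) versus 10,000 casual voters with h = 1:

        def h_index(citations):
            """Largest h such that h papers have at least h citations each."""
            ranked = sorted(citations, reverse=True)
            return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

        print(h_index([100, 50, 30, 2, 1]))   # -> 3

        expert_h, crowd_h, crowd_size = 90, 1, 10_000
        for label, weight in [("linear  w = h  ", lambda h: h),
                              ("squared w = h^2", lambda h: h ** 2)]:
            print(label, ": expert", weight(expert_h),
                  "vs crowd", crowd_size * weight(crowd_h))
        # linear : expert   90 vs crowd 10000 -> the crowd drowns him out
        # squared: expert 8100 vs crowd 10000 -> comparable, but still no cure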

    I don't think that gems get "discovered" by appearing on a random website. Relevant people don't read, and can't read, all the websites in the world - or vixra.org, for that matter. There's just way too much random information in the modern world - much more than in the 17th century - and it needs to be filtered and classified. The huge amount of information is the reason why categorization is damn important. It must be very "elitist" and selective for science, especially its advanced disciplines, to work.

    The rating may be personalized, after all. People could have their own computer-assisted rating schemes. But when you talk about objective criteria to sort by, it's clear that scientific achievements have to be acknowledged at least proportionally to the number of citations. Also, the system should probably protect itself against people who vote a lot but don't have much to offer.

    Cheers
    LM
    Hfarmer
    I get the gist of what you are saying here. The problem is that the world is changing, and the traditional ways won't work precisely because of the amount of information there is to filter. The word you keyed on is PEER. How can things like an open peer review system ensure that the relevant peers will be the reviewers?
    Traditional peer review is S L O W. It can take months, or better than a year in some cases. In a world with more people, and hence more scientists in every discipline, that means far more to be filtered. If something does not change, there could be an insurmountable backlog of unreviewed and unpublished papers at some point.

    Now, as for getting actual peers to review papers, we know how to do that. The question is: will enough such people be willing to give up anonymity? You doubt that. I do not. As time passes, a generation of new scientists who are used to having basically no privacy in the internet age will be far more willing to expose themselves to criticism of their reviews. Instead of each paper getting a sort of cloak-and-dagger review, it could get a comment thread or forum in which only recognized scientists in the relevant field and subfield could comment. Sort of like the people who can endorse papers on arXiv, but their selection and status would be more transparent.

    However, radical changes like that are probably best left for the next decade or two.

    Science advances as much by mistakes as by plans.
    What about peer review WITHIN large collaborations?

    Does the insistence on consensus inhibit publication of non-consensus ideas by collaboration members?

    If collaboration consensus rules had been applied to the commission studying the Challenger space-shuttle disaster, would Feynman's ideas have been allowed to be published in the final report? In fact, they were only allowed to appear as an appendix to the Rogers report, and had to be entitled "Personal observations..." of Feynman - but maybe collaboration consensus rules are even more restrictive than the rules of the Rogers report.

    Tony

    Tommaso, yes, we can already all publish before review. Yet the peer-review system is still based on the principle that you can only "publish" if and when you pass review, even if it does not happen like that in practice. That is the anachronistic anomaly that needs to be sorted out.

    I hope you find the perfect solution in time for your talk!

    logicman
    If an animal, plant or geographical feature is being studied because it is vanishing, the publication of a paper after the thing has gone forever is merely of historical value.

    In a rapidly changing world we need rapid peer review.

    Tommaso, pardon my dropping in late, but I think that double-blind review would be very helpful to the "unknowns" who have talent. The American Journal of Physics, for example, allows (or used to allow) this choice. DBR allows a submitter to ask for his or her name and affiliation to be withheld, to avoid prejudicial attitudes in the reviewers (which may be subconscious, no matter how hard the reviewer tries to be objective). There was a study, a sort of "sting" (I forget the details), in which previously accepted scientific papers by well-known workers were resubmitted as if from unknown or unaffiliated writers. Many of them were rejected, which shows the bias is real. There are talented unorthodox investigators out there, like Carl Brannen, an ethanol-plant engineer last I checked. He has made various theoretical advances in particle physics and has been published in peer-reviewed journals.

    dorigo
    Hi Neil,

    if the study shows that a double-blind approach rejected papers previously accepted, that means it is an even stricter criterion; it does not automatically show itself to be a better way to handle peer review, IMO.

    I also think that the option of going double blind betrays that the author is unknown: if he or she were not unknown, double-blind review would not be chosen. So it's either double blind for all, or for none. For all, it may still work in some fields of research, but not across the board...

    These are my first thoughts on this matter.

    Best,
    T.
    Dear Tommaso and others,

    Here is something new on this front: the Peerage of Science community. I have not looked very thoroughly into it yet, but it seems like an interesting endeavour. Decide for yourselves:

    http://www.peerageofscience.org/public/etusivu.php

    dorigo
    Thank you for the link, Sakari!
    Cheers,
    T.