User:Daniel Mietchen/Sandbox/Peer review: What is it good for





<img style="max-width: 800px; float: left; margin-top: 10px; margin-bottom: 10px; margin-right: 10px;" src="http://ways.org/files/3157622458_e601ac31f9_o-250.png" /> The following is a resampling of <a href="http://cameronneylon.net/blog/peer-review-what-is-it-good-for/">a blog post by Cameron Neylon</a> in which he discussed the rational basis behind the current scholarly peer review system. While he focused on manuscript peer review and rarely mentioned the review of grant proposals, I took his text as a basis and replaced phrases that referred solely to the publishing process with comparable references to the research funding process, as detailed in ### add link to wiki diff ### this list of changes. Note that the same modifications have been applied to Cameron's text and to text he quoted, so these quotes should not be regarded as quotes in this new context but rather as verbal illustrations. Similarly, the simple find-and-replace routine has produced a text that does not necessarily represent Cameron's views, nor mine — it is merely a thought experiment to stimulate debate on the matter. Like the original, this text is licensed <a href="http://creativecommons.org/publicdomain/zero/1.0/">CC0</a>.

Image adapted from <a href="http://www.flickr.com/photos/33895652@N04/3157622458">Gideon Burton</a>


The first week in February hasn’t been a really good week for peer review. In the same week that <a class="zem_slink" title="The Lancet" rel="homepage" href="http://www.thelancet.com/">the Lancet</a> fully <a href="http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2810%2960175-7/fulltext">retracted the original Wakefield MMR article</a> (while keeping the retraction behind a login screen – way to go there on public understanding of science), the <a href="http://news.bbc.co.uk/2/hi/science/nature/8490291.stm">mainstream media went to town</a> on the report of <a href="http://eurostemcell.org/commentanalysis/peer-review">14 stem cell scientists writing an open letter</a> making the claim that peer review in that area was being dominated by a small group of people blocking the publication of innovative work. I don’t have the information to actually comment on the substance of either issue, but I do want to reflect on what this tells us about the state of peer review.

There remains much reverence for the traditional process of peer review. I may be over-interpreting the tenor of <a href="http://www3.interscience.wiley.com/cgi-bin/fulltext/123248917/HTMLSTART">Andrew Morrison’s editorial</a> in <a class="zem_slink" title="BioEssays" rel="homepage" href="http://eu.wiley.com/WileyCDA/WileyTitle/productCd-BIES.html">BioEssays</a>, but it seems to me that he is saying, as many others have over the years, “if we could just have the rigour of traditional peer review with the ease of funding of the web then all our problems would be solved”. Scientists worship at the altar of peer review, and I use that metaphor deliberately because it is rarely if ever questioned. Somehow the process of peer review is supposed to sprinkle some sort of magical dust over a text which makes it “scientific” or “worthy”, yet while we quibble over details of managing the process, or complain that we don’t get paid for it, rarely is the fundamental basis on which we decide whether science is funded or formally published examined in detail.

There is a good reason for this. THE EMPEROR HAS NO CLOTHES! [sorry, had to get that off my chest]. The evidence that peer review as traditionally practiced is of any value at all is equivocal at best (<a href="http://www.sciencemag.org/cgi/content/abstract/214/4523/881">Science 214, 881; 1981</a>, <a href="http://jis.sagepub.com/cgi/content/abstract/30/1/2">J Clinical Epidemiology 50, 1189; 1998</a>, <a href="http://brain.oxfordjournals.org/cgi/content/full/123/9/1964">Brain 123, 1964; 2000</a>, <a href="http://clrcraldidcotgbr.library.ingentaconnect.com/content/alpsp/lp/2009/00000022/00000002/art00007">Learned Publishing 22, 117; 2009</a>). It’s not even really negative. That would at least be useful. There are a few studies that suggest peer review is somewhat better than throwing dice, and a bunch that say it is much the same. Perhaps the best we might say is that it is at its best at dealing with narrow technical questions, and at its worst at determining “importance”. Which, for anyone who has tried to get published in a top journal or written a grant proposal, ought to be deeply troubling. Professional editorial decisions may in fact be more reliable, something that Philip Campbell hints at in his response to questions about the open letter [<a href="http://news.bbc.co.uk/1/hi/sci/tech/8490291.stm">BBC article</a>]:

Our funding managers [...] have always used their own judgement in what we fund. We have not infrequently overruled two or even three sceptical referees and funded a proposal.

But there is perhaps an even more important procedural issue around peer review. Whatever value it might have, we largely throw away. Few funders make referees’ reports available, and virtually none track the changes made in response to referees’ comments, which would enable a reader to make their own judgement as to whether a proposal was improved or made worse. Referees get no public credit for good work, and no public opprobrium for poor or even malicious work. And in most cases a proposal rejected by one funder starts completely afresh when submitted to a new funder, the work of the previous referees simply thrown out of the window.

Much of the commentary around the open letter has suggested that the peer review process should be made public. But only for funded grants. This goes nowhere near far enough. One of the key points where we lose value is in the transfer from one funder to another [note added in resampling: there are way fewer funders for any given field than there are journals]. The authors lose out because they’ve lost their priority date (in the worst case giving the malicious referees the chance to get their grant in first). The referees miss out because their work is rendered worthless. Even the funders are losing an opportunity to demonstrate the high standards they apply in terms of quality and rigor – and indeed the high expectations they have of their referees.

We never ask what the cost of not funding a proposal is, or what the cost of delaying funding could be. <a href="http://www.eric-weinstein.net/">Eric Weinstein</a> provides the most sophisticated view of this that I have come across, and I recommend watching <a href="http://pirsa.org/08090036/">his talk at Science in the 21st Century</a> from a few years back. There is a direct cost to rejecting proposals, both in the time of referees and the time of funders, as well as the time required for authors to reformat and resubmit. But the bigger problem is the <a class="zem_slink" title="Opportunity cost" rel="wikipedia" href="http://en.wikipedia.org/wiki/Opportunity_cost">opportunity cost</a> – how much research that might have been useful, or even important, is never performed (or performed way too late)? And how much is research held back by delays in funding? How many studies not done, how many leads not followed up, and perhaps most importantly how many follow-up projects not re-funded, or only funded once the carefully built-up expertise in the form of research workers is lost?

Rejecting a research proposal is like gambling in a game where you can only win. There are no real downside risks for either editors or referees in rejecting proposals. There are downsides, as described above, and those carry real costs, but those costs are never borne by the people who make or contribute to the decision. It’s as though it were a futures market where you can only lose if you go long, never if you go short on a stock. In Eric’s terminology, those costs need to be carried: referees and editors who “go short” on a paper or grant should be required to unwind their position if they get it wrong. This is the only way we can price the downside risks into the process. If we want public peer review [note added in resampling: open peer review generally refers to non-anonymous peer review, while public peer review refers to reviews being made in public, regardless of anonymity], indeed if we want peer review in its traditional form, along with the caveats, costs and problems, then the most important advance would be to have it for unfunded grants.

Funders need to acknowledge the proposals they’ve rejected, along with dates of submission. Ideally, all referees’ reports should be made public, or at least re-usable by the authors. If full publication of either the submitted form of the proposal or the referees’ reports is not acceptable, then funders could publish a hash of the submitted document and reports against a local key, enabling the authors to demonstrate the submission date and the provenance of referees’ reports as they take them to another funder.
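To make the hashing idea concrete (this sketch is not part of Cameron's post), here is a minimal Python example, assuming a hypothetical secret key held by the funder: the funder publishes only an HMAC-SHA256 digest computed over the submitted document and its submission date, and the author can later ask the funder to verify that digest against the document they take elsewhere. The key value, document contents and function names are invented for illustration.

<pre>
import hmac
import hashlib

def commitment_digest(document_bytes: bytes, submission_date: str, funder_key: bytes) -> str:
    """Keyed digest binding a proposal (or a referee report) to a submission date."""
    message = submission_date.encode("utf-8") + b"\x00" + document_bytes
    return hmac.new(funder_key, message, hashlib.sha256).hexdigest()

def verify_commitment(document_bytes: bytes, submission_date: str,
                      funder_key: bytes, published_digest: str) -> bool:
    """Check the document an author presents against a digest the funder published earlier."""
    expected = commitment_digest(document_bytes, submission_date, funder_key)
    return hmac.compare_digest(expected, published_digest)

if __name__ == "__main__":
    funder_key = b"funder-local-secret"                  # hypothetical key held privately by the funder
    proposal = b"Full text of the submitted proposal."   # placeholder for the real document bytes
    submitted_on = "2010-02-05"                          # hypothetical submission date

    digest = commitment_digest(proposal, submitted_on, funder_key)
    print("published digest:", digest)                   # only this digest would be made public
    print("verifies:", verify_commitment(proposal, submitted_on, funder_key, digest))
</pre>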

In my view referees need to be held accountable for the quality of their work. If we value this work, we should also value and publicly laud good examples. And conversely, poor work should be criticised. Every scientist has received reviews that are, if not malicious, then incompetent. And even if we struggle to admit it to others, we can usually tell the difference between reviews that are critical but constructive (if sometimes brutal) and those that are nonsense. Most of us would even admit that we don’t always do as good a job as we would like. After all, why should we work hard at it? No credit, no consequences, why would you bother? It might be argued that if you put poor work in, you can’t expect good work back out when your own papers and grants get refereed. This again may be true, but only in the long run, and only if there are active and public pressures to raise quality. None of which I have seen.

Traditional peer review is <a href="http://www.rin.ac.uk/our-work/communicating-and-disseminating-research/activities-costs-and-funding-flows-scholarly-commu">hideously expensive</a>. And currently there is little or no pressure on its contributors or managers to provide good value for money. It is also unsustainable at its current level. My solution to this is to radically cut the number of proposals subjected to pre-funding peer review, probably by 90–95%, leaving the rest to be funded on a <a href="http://ways.org/en/blogs/2009/apr/09/research_grant_systems_that_encourage_innovation">baseline grant</a>, science prize or random basis (with public evaluation once the research has been performed). But the whole industry is addicted to traditional peer-reviewed publications, from the funders who can’t quite figure out how else to measure research outputs, to the researchers and their institutions who need them for promotion, to the publishers (both OA and toll access) and metrics providers who both feed the addiction and feed off it. Similarly, science funders are addicted to ranking proposals based on non-public peer review, rather than based on public peer review or an equivalent of the editorial decisions referred to in the mis-quote from Philip Campbell above.

So that leaves those who hold the purse strings, the funders, with a responsibility to pursue a value for money agenda. A good place to start would be a serious critical analysis of the costs and benefits of peer review.

Cameron's addition after the fact: [It was p]ointed out in the comments that there are other posts/papers I should have referred to where people have raised similar ideas and issues. In particular, <a href="http://network.nature.com/people/mfenner/blog/2009/07/13/the-value-of-peer-review">Martin Fenner’s post at Nature Network</a>. The comments are particularly good as an expert analysis of the usefulness of the kind of “value for money” critique I have made. Also a <a href="http://arxiv.org/abs/0911.0344">paper on the arXiv</a> by Stefano Allesina. Feel free to mention others and I will add them here.