Editorial Peer Reviewers’ Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?


Peer review is central to how we evaluate science and therefore how journal papers, grants and jobs are awarded. Peer review is done in many different ways and has changed dramatically in the last 25 years. But its purpose is still twofold: to improve the quality of research by providing feedback, and to evaluate that quality. The evaluation serves as a filter both for limited resources (e.g. grants or jobs; publication in a journal is no longer a limited resource) and for other researchers, who need to focus on the most relevant work in their field.

It is therefore surprising that relatively little research has been done on peer review itself. Most discussions focus on its shortcomings, and the arguments are often based on personal experience and/or interests. Good research on peer review can help to improve the process. Last week such a paper was published in PLoS ONE.

Flickr image by Gideon Burton.

Richard Kravitz and colleagues looked at the recommendations of peer reviewers and how they influenced the editorial decision to publish or reject a paper. The study examined 6,213 manuscripts received between 2004 and 2008 at the Journal of General Internal Medicine (JGIM), where four of the authors were current or former editors-in-chief.

At JGIM, submitted manuscripts were first screened by an editor-in-chief and a deputy editor. Most manuscripts were rejected at this step; 2,264 (36%) were sent out for peer review. 2,916 reviewers wrote a total of 5,581 reviews (1–4 per manuscript), each including comments and a recommendation. Eventually, 43% of the peer-reviewed manuscripts were accepted for publication.

Overall, all reviewers agreed in just over half of the manuscripts (54.6%); moreover, editors did not follow these recommendations in another 10% of manuscripts:

Table 1. Likelihood of Initial Decision to Reject in Relation to Reviewer Agreement.

The inter-reviewer agreement was only slightly higher than would have been expected by chance, and lower than the agreement among recommendations made by the same reviewer for several manuscripts. In contrast, there was little correlation between editorial decisions for different manuscripts handled by the same editor.
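The paper doesn't spell out its statistic here, but "agreement beyond chance" is commonly quantified with a chance-corrected index. As a hypothetical illustration (made-up data, not the study's), Cohen's kappa for two reviewers' reject/accept recommendations can be computed like this:

```python
# Hypothetical illustration: chance-corrected agreement between two
# reviewers' summary recommendations, measured with Cohen's kappa.
# kappa = (p_observed - p_expected) / (1 - p_expected)

def cohens_kappa(a, b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    categories = set(a) | set(b)
    # Observed agreement: fraction of items both raters labelled the same.
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement, from each rater's marginal label rates.
    p_expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Ten made-up manuscripts, two reviewers' recommendations each.
rev1 = ["reject", "accept", "reject", "accept", "reject",
        "reject", "accept", "reject", "accept", "reject"]
rev2 = ["reject", "accept", "accept", "accept", "reject",
        "reject", "reject", "reject", "accept", "accept"]

print(round(cohens_kappa(rev1, rev2), 2))  # 0.7 raw agreement, kappa 0.4
```

Note how a raw agreement of 70% shrinks to a kappa of 0.4 once chance agreement is subtracted, which is why agreement "slightly higher than chance" can coexist with seemingly decent raw agreement figures.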

The authors write in the discussion:

If reviewers cannot regularly agree on whether to recommend rejection or further consideration, the marginal contribution of such summative recommendations may be small, and worse, they may distract from reviewers' primary contribution, which is to improve the reporting – and ultimately the performance – of science.

The authors consider several ways to improve reviewer recommendations: using more reviewers per manuscript, providing better training for reviewers, or dropping recommendations altogether and asking reviewers to focus instead on evaluating the strengths and weaknesses of manuscripts. Some journals already use this latter approach.

Several studies have shown that most rejected manuscripts are eventually published somewhere else. One important reason is that publication space in journals is no longer as scarce as it was before electronic publishing became widespread. This means that the ultimate decision on whether or not something will be published in a peer-reviewed journal rests with the authors, not the editors or reviewers. Reviewers should keep this in mind.

Kravitz, R., Franks, P., Feldman, M., Gerrity, M., Byrne, C., & Tierney, W. (2010). Editorial Peer Reviewers' Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care? PLoS ONE, 5 (4) DOI: 10.1371/journal.pone.0010072

Further reading:
* Questioning the Value of Recommendations in Peer Review (Michael Long)
* Scrap peer review and beware of top journals (Richard Smith)
* Peer review: What is it good for? (Cameron Neylon)
* Peer Review VI (Sabine Hossenfelder)
* The value of peer review (me)


5 Responses to Editorial Peer Reviewers’ Recommendations at a General Medical Journal: Are They Reliable and Do Editors Care?

  1. Matt Brown says:

    Interesting. However, the study is only on one journal. Can it really tell us anything about peer review beyond this one title, given the variety of editorial practices?

  2. Maxine Clarke says:

    It’s interesting that many of these studies of peer review are done by medical journals. As Matt says, medical journals represent but one discipline. I haven’t read the paper referred to in your excellent post, Martin, but different journals have different editorial practices in how they deal with referee reports (whether or not the reports differ). One type of “difference” that can occur between peer reviewers of the same paper is when they comment on different aspects of it, e.g. the statistics supporting the conclusions, the technological innovation, or the degree of scientific advance. So editors have to weigh the advice they get, as peer reviewers are not homogeneous. (This may have been covered in the paper.)

  3. Martin Fenner says:

    Matt, references 5-8 and 13-15 of the paper also looked at peer review and editorial decision making (all medical journals). There are also at least two systematic reviews of the peer review process:
    * Effects of Editorial Peer Review: A Systematic Review. JAMA 2002. doi:10.1001/jama.287.21.2784 (http://dx.doi.org/10.1001/jama.287.21.2784)
    * Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Reviews 2008. doi:10.1002/14651858.MR000016.pub3 (http://dx.doi.org/10.1002/14651858.MR000016.pub3)
    The authors of the second review conclude:
    “At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research. However, the methodological problems in studying peer review are many and complex. At present, the absence of evidence on efficacy and effectiveness cannot be interpreted as evidence of their absence. A large, well-funded programme of research on the effects of editorial peer review should be urgently launched.”
    Maxine, this is a very good argument. Unfortunately the authors didn’t look at the reasons for reviewer disagreement. But they discuss the possibility that comments were more important for editorial decisions than reviewers’ summary recommendations.

  4. On Peer Review

    (This is a guest blog I wrote for the Research Information Network.) I’m a fan of peer review. There, I’ve said it. And I’m not saying it in the way that Sir Winston Churchill famously spoke of democracy; ‘the worst…

  5. Nathaniel Marshall says:

    The British Medical Journal ran an experiment in open peer review a few years ago. They decided to keep the identities of reviewers and authors open to each other, but closed to the outside world. When you review for them, you sign your name to the report, and you can also read the other reviewers’ comments later (which is what I love about their system). I think it works very well for them, but then I don’t think it would work as well for journals that get lower-quality material submitted to them.