

PLOS BLOGS Absolutely Maybe

Trials At Last & Even More Questions: Milestones in Journal Peer Review Research, Part 2 (1990–2018)

 

 

Cartoon of editors talking

 

 

The scientific community invests a huge amount of time into peer review at journals, and it’s critical to what we end up reading, and to scientists’ careers. Yet, as we saw in part 1, the practice spread from the 1940s without being led by, or followed by, rigorous research. Things have slowly improved, and we have seen more trials since then, as well as some systematic reviews at last. After all this time, we are getting closer to some answers, but the questions keep growing, too.

 

 

 

The first known randomized trial of blinded journal peer review

 

“The effects of blinding on the quality of peer review: a randomized trial” (1990)

 

Robert McNutt and colleagues randomized 127 consecutive manuscripts submitted to the Journal of General Internal Medicine. Each went to 2 reviewers, one of whom got a version with the authors' names and institutions removed. Blinding was successful 73% of the time, and a single editor rated the quality of the peer reviews, blinded to whether or not the peer reviewer had been blinded. The blinded reviews were rated slightly higher (3.5 versus 3.1 on a 5-point scale).

If you want to know how the evidence on this is panning out, I maintain a post on that here.

 

Cartoon of peer reviewer hundreds of years ago

 

On the early history of editorial peer review

 

“Peer review in 18th-century scientific journalism” (1990)

 

David Kronick tracked the antecedents of editorial peer review back to the 17th century – before the first journal:

In the broadest sense of the term, peer review can be said to have existed ever since people began to identify and communicate what they thought was new knowledge. That is because peer review (whether it occurs before or after publication) is an essential and integral part of the process of consensus building and is inherent and necessary to the growth of scientific knowledge. But even in the narrower sense of prepublication review, which we are considering today, the practice came into existence long before the Royal Society of London decided to take over fiscal responsibility for the Philosophical Transactions. For example, the Royal Society of Edinburgh, in the preface to the first volume of its Medical Essays and Observations, published in 1731, clearly stated the society’s editorial policy and objectives. It described a process that antedated that of the Royal Society of London by at least 20 years and closely resembles some of our forms of peer review today.

 

A randomized trial of open peer review that started a journal down the path to switching to open

 

“Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial” (1999)

 

It’s a big standout. In 2014, the BMJ moved to publishing pre-publication histories and open peer review reports with all its research. The roots of that decision lie in this 1999 randomized trial by Susan van Rooyen and colleagues, followed by another in 2010. This scale of evidence-based journal practice remains a unicorn. Since the evidence didn’t support staying opaque, the journal opted for “increased accountability, fairness, and transparency”.

 

 

The first Cochrane systematic review of peer review

 

“Editorial peer-review for improving the quality of reports of biomedical studies” (2001)

 

This is the first systematic review of peer review I could find. Authored by Tom Jefferson and colleagues, it grew out of work for the book Peer Review in Health Sciences, published in 1999. What they found:

The well‐researched practice of concealing the identities of peer‐reviewers or authors, while laborious and expensive, appears to have little effect on the outcome of the quality assessment process (9 studies). Checklists and other standardisation media have little reliable evidence to support their use (2 studies). There is no evidence that referees’ training has any effect on the quality of the outcome (2 studies). Electronic communication media do not appear to have an effect on quality (2 studies). On the basis of one study little can be said about the ability of the peer‐review process to detect bias against unconventional drugs. Validity of peer‐review was tested by only one small study in a specialist area. Editorial peer‐review appears to make papers more readable and improve the general quality of reporting (2 studies), but the evidence for this may be of limited generalisability…

At present there is little empirical evidence to support the use of editorial peer‐review as a mechanism to ensure quality of biomedical research, despite its widespread use and costs.

This review has been updated twice, but not since 2007. There’s been no dramatic change in the evidence or conclusions.

 

On gender bias, and 2 elephants in the room

 

“Gender bias in the refereeing process?” (2002 – PDF)

 

This paper by Tom Tregenza wasn’t particularly influential, as far as I can tell. But it zeroes in on a couple of weaknesses of the research in this field: the selection bias (and perhaps publication bias) in which journals’ experiences get published – and the possibility that evidence at the journal level obscures the impact of bias by individual editors.

Tregenza invited 24 primary research journals in ecology to provide data, but only 7 editors from 5 journals agreed to participate. And although the numbers of manuscripts for some of those editors are small, the results are enough to underscore why data at the individual editor level could be vital.

As I’ve written before:

There would have been at least 16,000 active science and technology journals in the Scopus database in 2011. The experience of a small proportion of the journals in 2 areas is not enough to sustain a conclusion about the presence or absence of gender bias in science journals. Publication bias is a major concern as well. Journals whose editors have high levels of concern about gender fairness may be more likely to analyze their performance. Journals whose editors identify recent poor gender performance are presumably unlikely to want to broadcast this failing.

 

Gender bias in peer review, solved by double-blind peer review?

 

“Double-blind review favours increased representation of female authors” (2008 – PDF)

 

The studies that have found signs of gender bias in peer review have been very influential, despite not offering strong experimental evidence: there’s some confirmation bias in how often they are drawn on in arguments, and as the rationale for blinded peer review. (I discuss this more here.)

 

 

 

Breaking out of black-box, pre-publication-only peer review

 

“Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation” (2012)

 

Ulrich Pöschl reported on the experience of the Copernicus journals, which opened up peer review from 2001, making it interactive and continuing it after publication; by 2010 they were publishing more than 2,000 papers a year. He also posed 10 questions that need to be addressed.

 

 

 

 

Evaluation of open peer review reports

 

“Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study” (2014)

 

One of the advantages of open peer review is the chance it gives people completely independent of journals to study what is otherwise a black box. Sally Hopewell and colleagues analyzed the peer review of all 93 reports of randomized trials published in BMC medical journals in 2012:

Peer reviewers fail to detect important deficiencies in reporting of the methods and results of randomised trials. The number of these changes requested by peer reviewers was relatively small. Although most had a positive impact, some were inappropriate and could have a negative impact on reporting in the final publication.

 

 

A systematic review on training for peer review – and more unknowns

 

“A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology” (2015)

 

James Galipeau and colleagues studied several questions in this systematic review, including whether training for authors, peer reviewers, and editors can improve the quality of journal peer review. Important questions, but not a lot of evidence to give us answers:

Included studies were generally small and inconclusive regarding the effects of training of authors, peer reviewers, and editors on educational outcomes related to improving the quality of health research. Studies were also of questionable validity and susceptible to misinterpretation because of their risk of bias.

 

Another systematic review – and getting closer to some answers

 

“Impact of interventions to improve the quality of peer review of biomedical journals: a systematic review and meta-analysis” (2016)

 

This systematic review by Rachel Bruce and colleagues is the most recent we have at this point – although again, it is restricted to biomedical journals.

As compared with the standard peer review process, training did not improve the quality of the peer review report and use of a checklist did not improve the quality of the final manuscript. Adding a statistical peer review improved the quality of the final manuscript (standardized mean difference (SMD), 0.58; 95 % CI, 0.19 to 0.98). Open peer review improved the quality of the peer review report (SMD, 0.14; 95 % CI, 0.05 to 0.24), did not affect the time peer reviewers spent on the peer review (mean difference, 0.18; 95 % CI, –0.06 to 0.43), and decreased the rate of rejection (odds ratio, 0.56; 95 % CI, 0.33 to 0.94). Blinded peer review did not affect the quality of the peer review report or rejection rate.

Another key point here? Although there has been an increase in the number of randomized trials – they found 25 altogether – it’s still nowhere near enough. And only 7 of them had been published since 2004.

 

Cartoon of person fiercely scribbling

 

 

The types of open peer review defined

 

“What is open peer review? A systematic review” (2017)

 

Tony Ross-Hellauer’s review showed

…exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature reviewed.

“Open peer review”, he concluded, is an umbrella term. Here are the 7 aspects of openness Ross-Hellauer identified (direct quote):

  1. Open identities: Authors and reviewers are aware of each other’s identity

  2. Open reports: Review reports are published alongside the relevant article.

  3. Open participation: The wider community are able to contribute to the review process.

  4. Open interaction: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.

  5. Open pre-review manuscripts: Manuscripts are made immediately available (e.g., via pre-print servers like arXiv) in advance of any formal peer review procedures.

  6. Open final-version commenting: Review or commenting on final “version of record” publications.

  7. Open platforms (“decoupled review”): Review is facilitated by a different organizational entity than the venue of publication.

 

Overview of the evolution of peer review across disciplines

 

“A multi-disciplinary perspective on emergent and future innovations in peer review” (2017)

 

Jonathan Tennant and colleagues take a sweeping look at the past and potential for peer review post-internet, with a view to radical reform, including issues like incentives and giving academic credit for peer reviewing.

 

 

Doing the peer review after preprint publication

 

“f1000: our experiences with preprints followed by formal post-publication peer review” (2018)

 

It’s a model that’s growing, especially as funders embrace it for the research they fund (see my post here). Rebecca Lawrence and Vitek Tracz report some data on f1000’s experience over 5 years, and the Wellcome Trust’s shorter experience. They’re achieving 75% and 86% success, respectively, in getting papers to the peer-reviewed stage, within 53 and 35 days.

 

Editors and their own conflicts of interest

 

“Getting more light into the dark room of editorial conflicts of interest” (2018 – PDF)

 

Ana Marušić and Rafael Dal-Ré conclude that:

[T]he transparency of disclosure of editorial CoI has not improved across journals from a range of disciplines and influence in the scientific community in the last 12 years, despite greater awareness and the published evidence about the problem.

 

 

 

We’re left with far more questions than answers about peer review. We know a lot about comparatively few journals, really, and mostly in biomedicine. The opaqueness of what goes on at journals serves them well, but it doesn’t serve authors or readers. At least the pace of research has picked up. I think there’s enough now to justify regular roundup posts of what’s new, so stay tuned! My prediction: peer review will become more open and collaborative, just as other aspects of science have.

 

 

~~~~

 

Cartoon of a bird out on a limb

All my posts on peer review are here. Some key posts:

Signing Critical Peer Reviews and the Fear of Retaliation

The Fractured Logic of Blinded Peer Review in Journals

Weighing Up Anonymity and Openness in Publication Peer Review

Flying Flak and Avoiding “Ad Hominem” Response

 

Disclosures: In the 1990s, I was the founding lead (“Coordinating”) editor for the Cochrane Collaboration’s reviews on consumers and communication. I have served on the ethics committee for the BMJ, participated in organizing special issues of the BMJ, attended conferences funded by the BMJ, and I am a contributor to BMJ Blogs. I contributed a chapter to a book on peer review. I was an associate editor for Clinical Trials, academic editor of PLOS Medicine, and am a member of the PLOS human ethics advisory committee. These days, I only peer review for open access journals.

 

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)

 

 
