
Is It Time for Pre-Publication Peer Review to Die?

 

Pre-publication peer review seems to be as old as science communication itself. The idea is simple: qualified experts will examine your work to see if it passes muster for publication. Reviewers can spot potential problems with the experimental design or your interpretation of the data before the work is presented to the masses.

That sounds great on paper, but the way we perform pre-publication peer review has spawned significant problems that have been shrugged off for too long. I’ve lampooned the process before by imagining what it would be like if major motion picture franchises like Star Wars or The Avengers went through it. Surprisingly few studies support the efficacy of pre-publication peer review, and the pitfalls described below should rattle our confidence that this system is the best we can do.

Pre-publication peer review is far from fair and objective. There is clearly a subjective element to reviewing a scientific paper. Reviewers and editors are not always qualified or free of bias. Senior investigators may be given a “free pass” because of their reputation in the field, while work from new investigators is usually subjected to much closer scrutiny. Gender bias is another shameful problem plaguing the review process. Findings that challenge dogma can be exceedingly difficult to publish, especially if they contradict discoveries made by one of the reviewers. Even Edward Jenner’s 1797 paper describing his smallpox vaccine was rejected for publication (Jenner published his findings independently in a booklet the following year). In addition, negative results rarely get published. Knowing the outcome of an attempted experiment or clinical trial is valuable information, yet results from thousands of clinical trials remain unpublished.

Reviewers and editors must also forecast how significant the findings might be, as this is the major criterion for publication in the elite journals. Who wrote the paper and where they work inappropriately factor into many of these decisions. A famous study performed in 1982 showed how inconsistent journal decisions can be: the authors resubmitted papers that had already been published in prestigious journals after changing the author names and affiliations. The vast majority of these resubmitted papers were rejected by the very journals that had accepted them previously.

Pre-publication peer review is far from efficient. If you’re lucky, you will get comments back on your initial submission in three to four weeks; however, it can take much longer. You will likely need to conduct additional experiments to address a reviewer’s concerns; some of these truly improve the study, but others are frivolous and will not alter the conclusions. Review of the revised paper can take a few more weeks. In ideal circumstances, when your reviewers are timely and not unnecessarily demanding, your study could be published within a few months. But as many scientists will lament, this is rarely the case. Reports of papers taking nine months to more than a year to publish are not uncommon. And that’s if the paper is accepted; if it is rejected, the process starts all over again.

Because academics are stretched so thin these days, it is also getting harder and harder to recruit good reviewers, which further delays the process or pairs the paper with a less-than-qualified evaluator. Serving as a journal reviewer is a thankless job: this service is (usually) anonymous and unpaid, it doesn’t facilitate career advancement, and it siphons off valuable time that could be spent on your own research program. Some researchers have expressed such frustration with for-profit journals that they refuse to review manuscripts for them. As Dr. Scott Aaronson has stated, “I got tired of giving free labor to these very rich for-profit companies.” To put it simply: we pay to publish our papers, we pay to read published papers, and we volunteer vast quantities of time without pay to critically evaluate papers for journals. An outsider analyzing this situation would call us suckers. The point is that unless things change, an increasing number of scientists are going to rebel against this system, depriving authors of qualified peer reviewers.

Attempts to address some of the problems associated with pre-publication peer review have been made, including blinding reviewers to the identity of the authors, making the process transparent, and providing training on how to review a paper. By and large, these efforts have made little difference in the consistency and quality of reviews.

Many of these shortcomings may be remedied by post-publication peer review, which involves posting your paper on an online forum, such as bioRxiv, F1000Research, or PubPeer, where others can read and comment on it. Importantly, one does not need to abandon pre-publication peer review; scientists can (and should) have their work reviewed by colleagues before posting to such sites. The crucial difference is that they can do this on their own terms, without a journal assigning reviewers of potentially questionable proficiency or commitment to a timely turnaround. When the authors are satisfied with their paper, it can be published on a public server and further critiqued (and improved) as a living document through post-publication comments.

Key advantages offered by post-publication peer review include the acceleration of science reporting, free access to all who wish to read and comment on the study, no publication fees, interactive discussion threads, and the ability to edit or update the study with new data. There are no biased gatekeepers preventing the world from seeing dogma-shattering work. The entire process is open and transparent.

As appealing as that may sound, post-publication peer review will surely generate new challenges. For example, should comments about papers be anonymous? If so, this provides fertile ground for trolls. If not, some people may be reluctant to criticize the work. Another issue is information overload. How are we to filter the signal from the noise? How would the press distinguish a high-quality study from a low-quality one? New metrics may need to be devised that give visitors a snapshot view of how much “buzz” a paper is creating. Similarly, comments may need a ranking system to sort the wheat from the chaff. Publishing outside the realm of journals will also present challenges to university administrations tasked with evaluating faculty productivity for promotion and tenure. But that is their problem, and it should not be the reason we remain mired in the quicksand of an antiquated system of scientific publishing.
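To make the metric idea concrete, here is a minimal, purely hypothetical sketch in Python of what a “buzz” score might look like. The signals (page visits, downloads, comments, citations) and their weights are illustrative assumptions only; they do not describe any existing altmetrics or comment-ranking system.

```python
# Hypothetical sketch only: these signal names and weights are illustrative
# assumptions, not the metric of any real platform or altmetrics provider.
from dataclasses import dataclass

@dataclass
class PaperActivity:
    page_visits: int
    downloads: int
    comments: int
    citations: int

# Weight each signal by how much effort the action implies; costlier actions
# (commenting, citing) count more than passive ones (visiting the page).
WEIGHTS = {"page_visits": 0.1, "downloads": 0.5, "comments": 2.0, "citations": 5.0}

def buzz_score(activity: PaperActivity) -> float:
    """Return a single weighted sum summarizing attention to a posted paper."""
    return (
        WEIGHTS["page_visits"] * activity.page_visits
        + WEIGHTS["downloads"] * activity.downloads
        + WEIGHTS["comments"] * activity.comments
        + WEIGHTS["citations"] * activity.citations
    )

# Example: a preprint with modest readership but an active comment thread.
print(buzz_score(PaperActivity(page_visits=1200, downloads=300, comments=15, citations=2)))
# -> 310.0
```

A real platform would also need to guard against gaming (bot-driven views, for instance) and normalize for field and time since posting, but even a crude score like this could help readers triage an ever-growing stream of posted papers.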

Some of these ideas are already being implemented, though in ways that resemble dipping a toe into a chilly pool rather than taking the plunge. Alternative metrics (altmetrics) are being adopted by numerous journals, reporting the number of page visits, tweets, and downloads as a means to highlight interest in a paper. People are free to comment on papers posted to preprint servers. Innovative models for scientific publishing are also being tested, such as mSphereDirect, in which the authors choose their reviewers, address their feedback, and submit the final version of the paper along with the reviewer critiques.*

Moving to a new model of scientific publication will not come without some growing pains. But at least our growth will no longer be stunted.

 

Continue the conversation on Twitter! Follow Bill @wjsullivan and use #PeerReview

*The author serves as an Editor at mSphere.

