

Guest blog by Richard Smith: More evidence on why we need radical reform of science publishing

PLoS Medicine invited Richard Smith, former editor of the BMJ and current board member of PLoS, to discuss an essay published this week by Neal Young, John Ioannidis, and Omar Al-Ubaydli that argues that the current system of publication in biomedical research provides a distorted view of the reality of scientific data.

More evidence on why we need radical reform of science publishing, by Richard Smith

Ask scientists whether they’d prefer an all-expenses-paid fortnight in the best hotel in St Tropez, a Ferrari, a Cézanne painting, or the publication of one of their original papers in Nature – and most, I’d bet, would go for Nature. Getting published in one of the few elite journals is a very big deal for researchers, but, argues a stimulating paper published in PLoS Medicine (1), the fact that it is so important is distorting science. And I think that the authors are right.

Neal Young, John Ioannidis, and Omar Al-Ubaydli, unusually for a scientific publication, use economic concepts to make their case, and by doing so they illustrate the value of crossing disciplinary boundaries. Their argument is built around “the winner’s curse.” Imagine many firms competing for a television franchise. Each will try to work out the value of the franchise, and inevitably there will be a range of bids. If the franchise is simply awarded to the highest bidder then there’s a high chance that that bid is too high, meaning that the winner will lose money — hence “the winner’s curse.” Those who run such bids often recognise the problem and discount the highest bid or go for a lower one.

This phenomenon operates in science publishing because the elite journals, which accept only a fraction of the papers submitted to them, go for the “best” and are thus likely to be publishing papers that suffer from the winner’s curse — for example, papers reporting dramatic results that are a considerable distance from the “true” results. They are exciting outliers — and so very attractive to the elite journals. The articles that the high-impact journals publish are bound to be atypical and will present a distorted view of science, leading to false conclusions and “misallocation of resources.”
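The statistical core of this argument is easy to see in a toy simulation. The sketch below is my illustration, not taken from the paper: the true effect, the noise level, the number of studies, and the “publish the top 5%” rule are all arbitrary assumptions. It shows that if many unbiased studies estimate the same modest effect and only the most dramatic estimates get published, the published estimates will systematically exaggerate the truth.

```python
import random
import statistics

# Minimal sketch of the "winner's curse" in publication (illustrative only;
# TRUE_EFFECT, NOISE_SD, N_STUDIES, and TOP_FRACTION are arbitrary assumptions,
# not values from Young, Ioannidis, and Al-Ubaydli's paper).

random.seed(1)

TRUE_EFFECT = 0.20      # the real (modest) effect every study is estimating
NOISE_SD = 0.15         # sampling error in each study's estimate
N_STUDIES = 1000        # studies "competing" for publication
TOP_FRACTION = 0.05     # an elite journal publishes only the most dramatic 5%

# Each study produces an unbiased but noisy estimate of the same true effect.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# The "winners": the most dramatic estimates, analogous to the highest bids.
winners = sorted(estimates, reverse=True)[: int(N_STUDIES * TOP_FRACTION)]

print(f"True effect:                {TRUE_EFFECT:.2f}")
print(f"Mean of all studies:        {statistics.mean(estimates):.2f}")
print(f"Mean of 'published' top 5%: {statistics.mean(winners):.2f}")
# The selected studies overestimate the true effect even though no
# individual study is biased — selection alone produces the distortion.
```

Running this, the mean across all studies sits close to the true effect, while the mean of the “published” top slice is well above it, which is the exaggeration the authors describe.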

The authors have some empirical evidence to support their argument. A study of the 49 most highly cited papers on medical interventions published in high-profile journals between 1990 and 2004 showed that a quarter of the randomised trials and five of six non-randomised studies had been contradicted or found to be exaggerated by 2005. (2) We know too that “positive” drug trials are much more likely to be published than “negative” trials, although we don’t know how much of this is the result of conscious manipulation by authors and sponsors and how much the result of “the winner’s curse.” (3-5)

Most scientists read a few high-profile journals — and so are fed a systematically distorted view of the evidence. It’s also these journals that are most widely reported in the media and fed to policy makers, thus increasing the impact of the distortion.

The hope of many is, of course, that the elite journals are selecting “the best” research — hence providing a way of coping with information overload. But we know from good evidence that peer review is a deeply flawed system and that it’s very hard to know what will be important in the long term. So readers of Nature, Science, and the New England Journal of Medicine are not reading “the best” but the “systematically distorted.”

What might we do about this problem? Young and others suggest a range of options, including preferring publication of negative over positive results — the publishing equivalent of discounting the highest bid. It’s hard to see, however, how building such an explicit bias into the system would be helpful. Better might be for editors to pay no attention to whether the results are positive or negative and instead to concentrate simply on the importance of the question being asked and the rigour of the methods. We tried to do this when I was editor of the BMJ, but I’m not sure how successful we were. Inevitably you are excited by an unusual result, and the winner’s curse can surely operate not only in relation to whether the results are positive or negative but also in relation to the “importance” of the question.

For me this paper simply adds to the growing evidence and argument that we need radical reform of how we publish science. I foresee rapid publication of studies, including full datasets and the software used to manipulate them, without prepublication peer review, onto a large open-access database that can be searched and mined. Instead of a few studies receiving disproportionate attention, we will depend more on systematic reviews that are updated rapidly (and perhaps automatically) as new results appear.

References

1. Young NS, Ioannidis JPA, Al-Ubaydli O (2008) Why current publication practices may distort science. PLoS Med 5(10): e201. doi:10.1371/journal.pmed.0050201

2. Ioannidis JPA (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294: 218-228.

3. Lee K, Bacchetti P, Sim I (2008) Publication of clinical trials supporting successful new drug applications: a literature analysis. PLoS Med 5(9): e191. doi:10.1371/journal.pmed.0050191

4. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008) Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 358: 252-260.

5. Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003) Evidence b(i)ased medicine—selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 326: 1171-1173. doi:10.1136/bmj.326.7400.1171
