
Bias, Conflicts, Spin: The 8th Olympiad of Research on Science & Publishing Begins

 

Once every 4 years, editors, publishers, and meta-researchers assemble in Chicago for the Peer Review Congress – an intense researchfest about “enhancing the quality and credibility of science”. I’m live-blogging day 1 – with the latest entries at the top. (Plus some notes and thoughts from the pre-Congress workshop on predatory journals.) The keynote addresses are going to be live-streamed. Abstracts for today are online here.

Last session of the day was on history: Aileen Fyfe on “refereeing” a hundred years ago…

 

Cartoon of peer reviewer hundreds of years ago

 

She’s talking about the Royal Society’s Philosophical Transactions. This is what the refereeing records look like:

 

 

Early on, they decided to move to a “Committee of Papers” instead of a single editor. Referees began in 1832 – the journal had begun in 1665. And all these people had to be Fellows of the Royal Society:

It was a closed club, with very specific characteristics…They socialized to share certain polite, scholarly behaviors…[But] there was something of a lack of diversity at the Royal Society! They were male, based in Britain, and at least 40 years old.

Even the occasional woman wasn’t admitted to the Society until 1945. Commercial enterprises then entered journal publishing at scale in the 1950s and 1960s.

And that’s a wrap – we went about our own socializing! See you tomorrow!

 

Earlier, a very caffeinated group gathered for the Data Sharing Session…

Would you share your data? Sara Tannenbaum from Yale School of Medicine and colleagues surveyed authors of clinical trials in 3 high-impact medical journals (58% response rate). Their answer to this question was basically, “it depends”:

 

 

They concluded that many authors were not adequately prepared for the work of sharing data, and that many trialists:

…are unwilling to share when it conflicts with their own research interests.

Kay Dickersin tweeted:

Investigators may leave the door open for further study. Reality is that they won’t get to it. I don’t think this indicates conflict though.

Fiona Godlee, on journal editors’ decision not to require data sharing at publication: she now thinks they were too influenced by clinical trialists and didn’t take patients’ concerns into account enough. “Trialists’ voices are very loud.”

Next up, Joseph Ross on the YODA data-sharing project.

 

 

Ross reports that 91% of requests have been approved. As of June 2017, they have had 73 proposals related to 159 trials – “Usually to look at new secondary aims” or for meta-analyses. They have 17 requests to validate published results.

People underestimate how much work is involved in re-using data too:

No one’s going to use it if we haven’t succeeded in our work.

Doug Altman asked, why would a request be disapproved? Ross said, “We have a very low bar”. Data cannot be intended for commercial use or litigation.

ClinicalTrials.gov added an option for sharing individual patient data (IPD). Deborah Zarin reported on who answered “yes” to the question about sharing IPD across more than 35,000 trials in 2016/17: only 11% said yes – and “We have evidence that many of them didn’t know what they were saying!”

Well, that’s pretty grim! If we want to see IPD sharing, Zarin said, we need “widespread changes in the clinical research community”.

In good news, Zarin reminded us that people can now upload trial protocols and data plans to ClinicalTrials.gov.

 

After lunch, we were onto the Integrity and Misconduct session:

Daniele Fanelli and David Moher looked at what happens to meta-analyses if you remove retracted studies. They had a very small sample, though – only 15 meta-analyses in the end, too small to show much. In small bodies of literature, removing retracted studies that affected the data did make a difference – but there wasn’t a difference overall (which isn’t surprising, given how small the sample was).

Next, on to problems with images, in the American Physiological Society’s 7 journals. Christina Bennett reported on a quality control process – introduced in 2009 – that checks the image modifications in every article accepted for publication. A few articles a year get rejected as a result, and many images have to be investigated and replaced. They can’t tell, though, whether images have been re-used from a prior publication. The number of post-publication errata has gone down:

 

 

A semi-automatic tool (called seek & blastn) for fact-checking nucleotide sequences in publications was the subject of the next study, presented by Cyril Labbé. It’s early days for this (here’s the prototype). We don’t know yet how much such errors (or worse) are affecting the literature.
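Seek & blastn’s internals weren’t spelled out in the session, but the core idea – take a nucleotide sequence as printed in a paper, BLAST it, and compare what it actually matches against what the paper claims – can be sketched with Biopython. This is a minimal illustration under my own assumptions, not the tool’s actual code; the example sequence and gene label are hypothetical, and it needs network access to NCBI:

```python
# Minimal sketch: check a claimed nucleotide sequence against NCBI's nt
# database and flag a mismatch with the paper's claimed identity.
# Hypothetical illustration - not the seek & blastn pipeline itself.
from Bio.Blast import NCBIWWW, NCBIXML

claimed_sequence = "CGCAAATGGGCGGTAGGCGTG"  # sequence as printed in a paper (hypothetical)
claimed_identity = "TP53"                   # gene the paper says it targets (hypothetical)

# Submit the sequence to NCBI's online blastn service.
result_handle = NCBIWWW.qblast("blastn", "nt", claimed_sequence)
record = NCBIXML.read(result_handle)

if not record.alignments:
    print("No match found - the printed sequence may be garbled.")
else:
    top_hit = record.alignments[0]
    print("Top hit:", top_hit.title)
    # If the claimed gene never appears in the top hit's description,
    # the paper's sequence and its stated identity may not agree.
    if claimed_identity.lower() not in top_hit.title.lower():
        print(f"Possible mismatch: paper claims {claimed_identity}.")
```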

In discussion time, Constance Zou remarked on what this has to say about our priorities: “We have someone to check the commas, but not the gene sequences”.

Next we were on to the Research Integrity Group at BioMed Central: what comes at journals? Stephanie Broughton reported on all the queries that came to the group from 2015 to 2016. Over a thousand came their way – a very small proportion of the articles they publish – and about two-thirds arrived before an article was published:

 

 

 

The reporting bias session, with more on spin, was up after morning coffee:

Gaelen Adam (Brown Center for Evidence Synthesis) and others in the AHRQ family tackled trials (pre-)registered in ClinicalTrials.gov. As so many have before them, they found that only a minority of trials were registered there (24%) – but you can find studies there that you don’t find any other way. Adam had a plea: please, journals, please – include a properly formatted trial registry number in the abstract.

Hear, hear! I’ve discussed that here – with a cartoon for this:

 

Cartoon of computer and robot wishing researchers would be user-friendly

 

This is the case that got Steve Woloshin and colleagues to do their study on interim reporting of clinical trial results:

 

 

In that case, the early interim results were highly positive and high profile – with later reports getting progressively less attention while showing less benefit, too. Was this typical? It turns out it’s not, but it does happen. Out of 158 interim trial results, they found 90 with subsequent publications. About 15% of the time, the conclusions were different. We need titles that end with “: unplanned interim analysis”, Woloshin said – also noting that he was himself now reporting an interim analysis! Which raises the problem, doesn’t it, of all the interim analyses reported at conferences?

Ivan Oransky from Retraction Watch asked, what about the 68 that didn’t have a subsequent publication? An important part of the whole picture here.

Back to spin: this time Mona Ghannad, looking at spin – “inflated and selective reporting” – and facilitators of spin in biomarker studies. Roughly a third of the studies had no spin, a third had only 1 instance of it, and a third had more than that. The most common forms of spin were incorrect presentation of results, and a mismatch between the people studied and the people who would be affected in practice.

Facilitators of spin? Ghannad pointed to a lack of clearly pre-specified statistical tests, and to sample size calculations going unstated. Pre-specified, registered protocols could help here.

 

In the discussion, Steve Goodman from METRICS raised the question of the use of the word “spin”, including:

Things like falsification and mis-statement of results – you have a whole lot of things that are frankly errors…really misrepresentations of what they studied, and sometimes the results themselves.

We need to consider whether we put too much into the spin barrel.

What is regarded as spin in the spin literature?

Quinn Grundy and her colleagues did a systematic review (now published) and found 35 studies of spin – some of which themselves had signs of spin! Mostly the studies are in clinical research – and commercial sponsorship isn’t associated with spin. We need to widen what we look at around spin, she said:

Very little is known about the contextual factors that contribute to spin and what could be done about it.

One example she pointed to: universities and journals are encouraging spin by rewarding any social or other media attention. We need nuanced checklists for spin, Grundy said.

But one thing all of this is missing, it seems to me: when is the study itself spin?

 

The session on conflicts of interest and peer review included Nature and more non-biomedical journals joining this community – a real landmark! Also worth noting: an all-female panel of early career researchers – from a double-blind selection process!

Quinn Grundy and her colleagues studied 1,650 biomedical publications, in which they found “130 different ways to state authors have no conflicts of interest” – statements with very different meanings. It’s a bit of a trap, really: you have to parse the words very carefully. Altogether, 64% of papers didn’t reveal a conflict of interest – but that doesn’t mean there were none.

Camilla Hansen reported results of a systematic review on conflict of interest disclosures in systematic reviews. They found 9 studies, covering 983 reviews of drugs and 15 of medical devices. The conclusion?

It remains unclear whether financial conflicts of interest have an impact on results.

It’s hard, isn’t it?, when studies are relying on disclosed conflicts of interest only.

Up next, Elisa De Ranieri reported on Nature‘s study of referee bias in single- versus double-blind peer review: 128,454 primary research submissions across 2 years (March 2015 – February 2017). Authors opted in to double-blind or single-blind review; editors weren’t blinded. Only 12% opted for double-blind review: there was no gender difference, but opting in was more common at prestigious journals and among authors from prestigious institutions. There were more rejections in double-blind peer review. (Note that gender was determined only by a program, so there’s not a high degree of accuracy here.)

They couldn’t tell from this study whether there was bias from peer reviewers towards double-blind peer review itself – or from the editors, come to that. Hard to imagine they didn’t have strong feelings about whether Nature should be doing this, right? The editors were rougher on the papers choosing double-blind peer review than the peer reviewers were! And authors from China and India were more likely to opt for double-blind peer review – the numbers were too small to know whether that had an impact on the results they saw.

The study was based on the gender of corresponding authors, for example, and we don’t know if they were the ones making the decision for or against double-blind peer review. (You can check out the trials on this in my post here.) Fiona Godlee, editor of The BMJ, asked if Nature would consider open review: “small steps”, said De Ranieri – revealing peer reviewers’ names with publications, yes.

 

Jory Lerback from the American Geophysical Union reported on gender bias at their journals across 5 years. Women are represented among authors at a similar rate to their membership in the society, but they are under-represented as peer reviewers:

 

 

The situation is beginning to improve: they are urging people to think of women in particular when they propose peer reviewers, because that’s where an important part of the problem lies. Interestingly, the overall rate at which people turn down invitations to review is declining for them, too.

 

Up first: the awesome Lisa Bero got the Congress started. She talked about bias and conflicts: are we looking in the right place? “We believe in peer review – but we’re still not quite sure what it is”.

 

Cartoon of editor who wants peer reviewers without conflicts of interest about peer review

 

Here’s Bero’s slide on some of the problems and some of the solutions:

 

 

But one of the problems is that many of these solutions are mostly applied to clinical trials – and even then not as well as they should be. She said,

Provision of more information has made little headway on conflicts of interest or spin… It has led to even more problems. The bottom line is that peer review has gotten harder and there is more to do.

Social media offers authors “even more opportunities to spin your paper”. Bero defined spin as: “Biased presentation or interpretation to make results appear more favorable.” Some solutions, which again have been applied more to clinical trials:

 

 

Getting rid of discussion sections doesn’t get rid of all opportunities for spin by any means. One option people are trying is to have multiple discussion sections, and not only by the authors. There, however, Bero warned of the potential for a megaphone effect for biased or conflicted views.

 

 

Pre-Workshop on 9th September: “Predatory Journals”

This was lively! It was organized by Ottawa’s Centre for Journalology, which has just made a big splash with another study on “predatory journals” in Nature, and has published a study on evidence-based criteria for identifying dodgy journals.

No, there is no clear definition of what is, and what isn’t, predatory – making it impossible for either white lists or black lists to be comprehensive or fair. There are “overt scams” – the province of regulatory agencies that deal with fraud. But then there is a spectrum from the fraudulent to the best, said Ana Marušić. Peush Sahni from WAME (the World Association of Medical Editors) said that as long as people have to have a lot of publications for career advancement, the problem will persist: it’s not just that people are getting caught by scams.

Jason Roberts said it’s clear that a market exists for rapid, cheap publishing – and the authors are coming from everywhere, not only from unsuspecting early career researchers in low-income countries, which has been the stereotype.

With open or public access mandates at major funders like the NIH, these manuscripts will appear in PubMed/PMC – the articles will be indexed, even though the journals they are in are not. (That seems to be confusing a lot of people.)

At Ottawa, they are dealing with this, among other problems, by having a publications officer, Kelly Cobey, to help authors. Institutions can obviously do a lot more to educate and support their people.

It’s not a simple problem, is it? Martin Paul Eve and Ernesto Priego have written a thought-provoking paper challenging assumptions about the politics here – of imposing what’s “legitimate”, and why. It seems to me preprints are going to be a large part of the solution in the future, in terms of ensuring that rapid, cheap dissemination methods exist where people can find permanently archived literature.

There was concern among some people about the massive outbreak of journals enabled by the internet, and fear that it will be the death of verifiable knowledge and expertise. That’s what we always fear when there is a new medium – before we’ve found the ways we eventually do find to manage it. My favorite reminder of this is the anguish that followed the invention of the Gutenberg press. Here’s what Erasmus said, in his 1508 Adages:

[Y]ou may print anything. Is there anywhere on earth exempt from these swarms of new books? Even if, taken out one at a time, they offered something worth knowing, the very mass of them would be an impediment to learning…

We’ll adjust.

 

Holbein portrait of Erasmus

 

Continued on day 2…

 

~~~~

The portrait of Erasmus is by Hans Holbein (the Younger), the Louvre via Wikimedia Commons.

Cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)

 

* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

 

 
