

Innovations in Peer Review and Scientific Publishing

 

Over 500 science journal editors, publishers, and meta-researchers are gathered in Chicago for the 8th Peer Review Congress (#PRC8), a once-every-4-years researchfest about “enhancing the quality and credibility of science”. I’m live-blogging – you can catch up on what happened on day 1 here and day 2 here. Abstracts are online here. (Summaries of sessions are added as each finishes – the most recent is at the top.)

 

Final session: Post-publication matters…

Ana Marušić and colleagues looked at articles indexed in MEDLINE, Web of Science, and Scopus that had been tagged as corrected and republished: 29 articles in 24 biomedical journals in 2015 and 2016. When articles were retracted and replaced, that status wasn't shown at PubMed, and CrossMark didn't include the information for all of them.

For readers, Marušić concluded, there's large variation both at journals and in indexing – so much so that,

This diminishes the credibility and transparency of the research and publication system.

I had a conflict of interest here, because I’m working on this problem in my day job. I asked if she had a preferred way of seeing this improved. Marušić liked the process of interlinking at PubMed. But everyone has to get together, she said, and think of the poor readers:

We still live in the paper age and we are not using what digital technology can offer us.

Ivan Oransky, from Retraction Watch, asked what she thought should happen with DOIs: should there be 1 or 2? Marušić said she wasn't sure, but would welcome consistency. Annette Flanagin from JAMA favors the single DOI, to reduce the stigma that can be associated with "retraction".

Melissa Vaught presented on who's commenting at PubMed Commons, the commenting system in the biomedical literature database, PubMed. (Major conflict of interest disclosure: I'm a co-author of this one, and it's part of my day job. That gives me a major conflict on the next presentation as well.)

Comments appear directly below abstracts in PubMed; the system has been available since 2013. Comments can't be anonymous, and they are moderated after they're posted. Only authors of publications indexed in PubMed are eligible to comment.

At the end of 2016, there were 4,372 PubMed records with 5,483 comments: 268 comments had been removed by moderators, and a similar number were removed by members themselves. This is the typical profile of a comment:

[Figure: profile of a typical PubMed Commons comment]

(A summary of this previous study is available here.)

In the last half of 2016, 21% of comments had multiple authors. Women were under-represented (21%), and most commenters came from English-speaking countries – 85% altogether were from Europe and North America.

The last presentation slot went to Liz Wager and Emma Veitch's study of commenting on PubPeer, presented by Mario Malicki. Both authors categorized all comments at PubPeer on publications in 3 medical journals – BMC Medicine, BMJ, and the Lancet – from the first in 2013 until December 2016.

There were 344 comments on 99 publications, 60% of which were signed comments imported from PubMed Commons; 94% of the non-PubMed comments were anonymous. Journal editors were unaware of the comments made at PubPeer, although the same concerns had generally reached them anyway. There were 5 author replies. The authors judged 7% of the comments to be ones "that might require journal action".

And that was a wrap for the 8th Peer Review Congress!

After lunch, heading down the home straight: pre-publication and moving towards post-publication.

Stylianos Serghiou was up first, looking at what happens to preprints in bioRxiv. Serghiou and John Ioannidis looked at 7,750 unique preprints and 3,628 publications, downloaded in January 2017. Within a year, 36% of the preprints had been published in a journal. They also wanted to know how much attention the publication of a "preprinted" article gets, according to Altmetric scores. They found that previously "preprinted" papers tended to have higher Altmetric scores than ones that had never been on bioRxiv, driven by Twitter.

Jessica Polka, from ASAPbio, raised the question that was probably in the minds of all of us who publish and tweet: there may be a relationship between scientists on Twitter and pro-bioRxiv sentiment, leading to more tweets about their papers. Serghiou hadn’t looked at who was doing the tweeting that was driving up the Altmetric scores.

Up next, Ramya Ramaswami from the New England Journal of Medicine (NEJM). Here's how much attention 338 articles on clinical trials from 2012 to 2015 got:

[Figure: attention received by 338 NEJM clinical trial articles, 2012–2015]

In the discussion, Ivan Oransky pointed out that while other journals issue press releases on selected articles, NEJM releases whole issues under embargo to journalists – and that might make a difference to media coverage.

Lauren Maggio asked whether people had access to the articles or were hitting paywalls, and what impact that had. Ramaswami replied that they didn't look at that issue.

Then we were back to meta-analysis with Matthew Page. He and colleagues looked to see what proportion of meta-analyses were, in theory, reproducible.

[Figure: from Matthew Page's presentation on the reproducibility of meta-analyses]

Using Cochrane review software ensures all of this data is downloadable, so the data were available for 100% of the Cochrane meta-analyses; for non-Cochrane meta-analyses, it was about half. The other steps involved in ending up with a meta-analytic result weren't looked at: it's an easy bet that reproducibility would take a steep dive if you went there!

 

The re-caffeinated audience was back for another session on editor-side innovations at journals.

Malcolm MacLeod and colleagues looked at a new policy implementing a checklist for in vivo research manuscripts at Nature Publishing Group (NPG) – you can read about the background here. This one had an important impact on adherence to the checklist – it involved a lot of commitment from authors and editors.

MacLeod’s group found that in the NPG journals, the rate of adherence was 0/204 before the checklist, and 31/190 afterwards. For the non-NPG journals, it was 1/164 before and 1/89 afterwards.
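For a sense of how strong that before/after contrast is, here's a quick back-of-envelope check, using the counts as reported above – a sketch only, not the analysis from MacLeod's study:

```python
# Back-of-envelope contrast of checklist adherence before vs after the
# NPG policy, using the counts reported in the talk.
from scipy.stats import fisher_exact

after = [31, 190 - 31]   # [adherent, non-adherent] after the checklist
before = [0, 204]        # [adherent, non-adherent] before the checklist

odds_ratio, p_value = fisher_exact([after, before])
print(f"Adherence: {0/204:.0%} before vs {31/190:.0%} after")
print(f"Fisher's exact test p-value: {p_value:.1e}")  # vanishingly small
```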

MacLeod said that although this was an important improvement (absolutely!), it hasn't solved the problem either. (The preprint of this study is going to be out soon.) The study was independent of NPG, funded by the Arnold Foundation.

Striking here is the effort: the effort made at the journal, and the effort in doing a solid study. Imagine how much work all that rigorous assessment of hundreds of articles was!

MacLeod stressed this as a problem for stakeholders across science to solve: “To lay this at the door of the journal is unfair”. He also pointed out you can’t expect seeing a checklist like this for the first time to have an impact on everything: if you didn’t randomize, it’s too late now! But you might randomize in the next experiment you do, and that’s how we’ll get progress.

So energizing to see these good news stories! Meanwhile, the preprint is already online at bioRxiv: MacLeod said he looked forward to someone having already re-analyzed the data by the time he sat down!

Next up was Elizabeth Seiver from PLOS. (Disclosure: I blog at PLOS etc.) She and colleague Helen Atkins looked at the issue of signing peer reviews at 3 PLOS journals. Here's the breakdown of authors' preferences for seeing peer reviews signed:

[Figure: authors' preferences for signed peer review at 3 PLOS journals]

The study covered 451,306 reviews, but only 8% were signed. Overall, 48% of authors preferred signed reviews – but only 16% reported that they sign reviews when they do them!

There really is a distinct authors’ perspective, isn’t there?

Next up was authors' ORCID iDs, their unique author identifiers. (Here's mine to see what one looks like.) Most people haven't signed up for one yet. Alice Meadows presented. She and her colleagues found that the largest shares of articles connected to ORCIDs are in clinical medicine (14%), technology and other applied sciences (13%), and biology (12%). It's more of a European thing, given funder encouragement there: 42% are from Europe.

Some organizations are adding ORCIDs for peer review done at their journals too. (The journal has to do it; you can't do it yourself.)
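An aside for the technically curious: an ORCID iD is 16 characters, with the last character a checksum computed with the ISO 7064 MOD 11-2 algorithm. Here's a minimal sketch of validating one (the helper name is my own):

```python
# Validate the final checksum character of an ORCID iD (ISO 7064 MOD 11-2).
def orcid_checksum_ok(orcid: str) -> bool:   # hypothetical helper name
    chars = orcid.replace("-", "")
    total = 0
    for digit in chars[:-1]:                 # everything but the check char
        total = (total + int(digit)) * 2
    result = (12 - total % 11) % 11
    check = "X" if result == 10 else str(result)  # a value of 10 is written 'X'
    return chars[-1] == check

print(orcid_checksum_ok("0000-0002-7299-680X"))  # True
```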

Last in this session: Sara Schroter from BMJ, with a study of including patients as peer reviewers. (Disclosure: I was keen to see this one – back when I was a consumer advocate, I wrote a chapter in a BMJ book on "Non-peer review". That was 2003.)

This was an internal evaluation, coming out of their 2014 Patient Partnership Strategy. They want patients involved in research, co-production of articles, inviting content by patients, and having patients on decision-making committees. And patients in peer review. That's what Schroter reported today.

BMJ receives about 3,500 manuscripts a year, rejecting 80% without peer review. In 2016, 359/647 manuscripts (55%) had a patient review. Patient reviewers' behavior was similar to that of other reviewers, and their recommendations about acceptance of manuscripts were similar, too. They get a year's free subscription to the journal (where research is open, but not the rest) – and they would like more access to the literature in their area.

20% of the patients were concerned about doing open review, and anonymity for patients is supported. One of the 7 editors surveyed said the reviews were frequently helpful; the other 6 said "occasionally". Four of the editors supported adopting it; the other 3 said they were unsure. Editors are finding it hard to find the right reviewers – and the time to do it.

What did patient reviewers want other journals to know? Basically, “Nothing about us without us”.

 

 

There was discussion from the floor about conflicts of interest (COI), given how much industry support there is for patient groups. Schroter said patient reviewers get the same COI questions as everyone else. Speaking as a former consumer advocate: people in that area don't tend to regard the funding of their organization as a personal conflict – but it is.

Several of the questions were around representativeness and perspective. All that stuff that no one asks of clinicians or any other stakeholder, eh?

(On the issue of content analysis of consumer input – here’s a paper we published about input on evidence-based health information that reported on the results of studies.)

 

The next session: peer review models.

Simon Harris from the Institute of Physics (IOP) reported on offering an option for double-blind peer review at 2 journals. Most authors didn't choose it (80%), and those who did were more likely to be rejected and rated as poor quality, and were more likely to be from India (25% versus a 20% average). The authors do the work of anonymizing the manuscripts.

It's the worst of all worlds, isn't it, creating what could be a stigma for authors – a marker for "these authors come from an institution that isn't prestigious".

Trish Groves tackled this in the discussion. There’s a problem with even really good work from India and other lower-income countries getting accepted for publication. Double-blind peer reviewing isn’t the solution here, she argued, it’s who is doing the peer reviewing:

We should all be bending over backwards finding more peer reviewers from these regions and then we can solve the problem.

Next up was Sarah Parks, from Publons. Publons provides a way for peer reviews to be shared, primarily behind the scenes among journals. Parks and her colleagues looked at open peer review at Publons and at the open access journals of PeerJ. They found that peer reviewers who share their peer reviews often want them open, but journal policy doesn't allow it. Only 1.7% of the 474,036 peer reviews were open.

Theo Bloom pointed out that yesterday, CrossRef announced DOIs for a category of peer reviews. COPE's policy is that peer reviewers have to honor the policy of the journal for which they did the peer review. I still don't understand what the rationale is for peer reviewers not having copyright of what they wrote, so I can't explain this.

Maria Kowalczuk and colleagues from SpringerOpen looked at acceptance rates for peer review invitations under different peer review models. They had data for almost 1.5 million invitations across about 500 journals. Most used single-blind peer review, but there were groups offering open or double-blind review too.

About 50% of invitations are accepted, with a lower rate (42%) for open peer review. That did mean open peer review journals had to send out more invitations, she said:

But this difference was not staggering…This wouldn’t break a journal.

 

 

Openness is one of the issues potential reviewers take into account, she said, but it's obviously not the only one. Open peer review is more established in clinical medicine journals.

From the floor, someone pointed out this did represent an important burden: it could cost a couple of weeks to keep inviting more peer reviewers. Kowalczuk said journals can invite a larger number of peer reviewers right from the start to increase the chances, without increasing the length of the peer review process.
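That point is easy to see with a little expected-value arithmetic: if each invitation is accepted independently with probability p, securing k reviewers takes about k/p invitations on average. A rough sketch – my own simplification, not the study's model:

```python
# Expected invitations needed to secure k reviewers, if each invitation
# is accepted independently with probability p (a simplification).
def expected_invitations(k: int, acceptance_rate: float) -> float:
    return k / acceptance_rate

for model, rate in [("single-blind (~50%)", 0.50), ("open (~42%)", 0.42)]:
    print(f"{model}: ~{expected_invitations(2, rate):.1f} invitations for 2 reviewers")
# single-blind: ~4.0, open: ~4.8 -- a real difference, but not a staggering
# one, especially if the extra invitations go out in parallel from the start.
```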

Ad break: here’s my post on the evidence (before today) on open peer review. (I’ll be writing again on this later after absorbing all this new information and discussion.)

Scott Kubner from ALiEM.com, an emergency medicine blog, looked at open peer review for the blog, including inline links highlighting peer review comments. Their approach was based on Mayer's cognitive theory of multimedia learning. They found that people clicked more, but didn't spend more time reading the posts. It's going to take us time, isn't it, to get used to new styles of reading?

Kubner said there were tweet storms, sometimes around particular reviewers' comments, and there seemed to be potential for people to learn from that. They are particularly concerned with finding out whether it's an aid, or whether it does nothing but raise the cognitive load.

 

First keynote of the day: Harlan Krumholz on preprints.

If our aim is not publications, but scholarship for impact, for us, it’s never about the paper…It’s about the work that can be done, the lives that can be improved…

 

 

Preprints get your work out there quickly, get you feedback so you can improve it, and archive the work in an enduring way, all at the same time, Krumholz argued. He said that every journal has improved his work. Preprints, he said, "augment the ecosystem". The goals?

How do we easily and rapidly archive and share information with other scientists to accelerate research, enhance collaboration, reduce waste, increase transparency.

It’s well established in other fields, Krumholz pointed out. For many,

If we wait for publication, we’ll be years behind in our field.

We’ve always had pre-publication sharing – “medical meetings can be headlines”. Preprints, he said, are about “transparency in our discussions in science”.

What about the arguments against? That there’s too much information, and too much that’s wrong: well, Krumholz argued, that’s always been the case. We need better filters, not less information.

What about the danger of the public being misinformed? He argued,

It needs to be framed as science for scientists.

Krumholz suggested these risk mitigation strategies for clinical research in preprints:

  • High-level screening.
  • IRB/ethics approval or exemption necessary.
  • Clinical trials must be registered first.
  • ORCID – unique researcher identifier – for the corresponding author.
  • Preprints should be labelled/watermarked.

They are proposing MedArXiv for clinical research, but they want to get more collaboration before rolling it out, “So we’re fighting disease, not each other”. They would like feedback: medarxiv@yale.edu

Krumholz asked:

Is there any difference between presenting at a meeting or posting on a preprint server?

In the discussion, Steve Goodman answered:

Yes, there is. One could argue that the very structure and sociology of meetings represent that mitigation strategy…Sometimes the release at conference does harm.

Goodman argued strongly for a structured evaluation approach to MedArXiv: start narrowly, with a registered protocol for studying it, monitoring for harms as well as benefits. That would be great, I reckon – something I raised, too, when I wrote about preprints (here).

 

[Cartoon: a symposium on the future of scientific publishing]

 

Finally, Deborah Zarin asked if there would be commenting functionality at MedArXiv: yes, said Krumholz.

 

 

Go back to day 1 or day 2

 

~~~~

The slide “this is true” presented by Harlan Krumholz: image credit is Emmy van Deurzen.

Cartoons are my own (CC-NC-ND-SA license). (More cartoons at Statistically Funny and on Tumblr.)

 

* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

 
