
What’s Open, What’s Data? What’s Proof, What’s Spin?

 

Cartoon: Does it work? It depends what you mean by does, it, and work

 

My last post, on the use of open science badges for articles, set off a flurry of debate. There were some issues I had touched on that clearly could use more discussion – especially the parts where I used jargon with specific meaning in my neck of the science woods. So here’s an FAQ!

 

Can you really say an article has open data if the article itself is behind a paywall?

 

I really want to start here with the questions: what’s open, and what’s data? There are degrees of open – it’s not just whether you can get your hands on it without someone paying (here’s a guide). So that’s one issue.

Funder open data mandates tend to go wide open – everything created and used that you need in order to understand and use the work. (Except, of course, data that can never be open – for privacy reasons, for example.)

When you start to award “open badges” to closed access articles, then “data” gets narrowed down into artificial definitions for expediency, not scientific robustness. The article that is being sold can’t be redundant, can it? It’s still badged as a research article – not an opinion piece with zero extra substance accompanying a fully open study.

When you really need to make sense of a study or use data, or verify claims, you need “data” in its widest “information” sense: qualitative, quantitative, visual.

I was a consumer advocate for a long time, without access to institutional subscriptions. Even now, with fabulous literature access at work, most days there is something I really want to read that I can’t get hold of – usually because it’s outside “biomedicine”. You should want everyone who’s interested to get access – even out of self-interest. They might point out something important you never would have noticed, for example.

“Open badges” on closed articles are generally only going to be sorta-kinda-open. There could even be nothing useful for most purposes in there, yet it could still be badged “open”.

 

 

 

I say “co-interventions”, you say “confounding variables”: why does that matter?

 

A confounding variable is an outside factor that might influence whatever you’re looking at. The potential for unknown variables in human beings and complex systems is pretty much limitless.

A co-intervention is something done alongside the intervention being studied, with the intention of having an effect. It’s not indirect at all, and it’s about the whole study – unless you have control groups that don’t get the intervention.

One of the reasons you have control groups is to control who gets what intervention. The idea is to isolate the intervention you’re studying, and then known and unknown confounding variables are theoretically evenly distributed among the intervention and control groups.
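As a minimal sketch of that idea (a hypothetical simulation, not data from any of these studies): randomly allocating people to groups balances a confounding trait by chance alone, whereas a co-intervention applied to everyone leaves no group without it.

```python
# A minimal simulation sketch (hypothetical, not from the post) of the
# point about control groups: a confounding trait ends up roughly
# evenly distributed between randomized groups by chance alone.
import random
from statistics import mean

random.seed(1)

# Hypothetical population: each person carries a confounding trait
# (say, motivation) drawn from the same distribution.
population = [random.gauss(50, 10) for _ in range(10_000)]

# Randomly allocate half to the intervention arm, half to control.
random.shuffle(population)
intervention, control = population[:5_000], population[5_000:]

print(f"confounder mean, intervention arm: {mean(intervention):.2f}")
print(f"confounder mean, control arm:      {mean(control):.2f}")

# The two means come out close: randomization balances known and
# unknown confounders. A co-intervention applied to everyone at a
# journal has no such balancing -- there is no group without it.
```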

In the badges study, the only intervention the authors were interested in was the badges initiative they had developed for a journal. But at the journal, the badges initiative was one of a package of 5 interventions it introduced at the same time. Prospective authors faced the whole package, which was intended to influence journal staff, reviewers, and authors. There was no study of just offering badges in the context of business as usual otherwise.

 

You only need to report variables you think might be plausible explanations for your observations – or else where would it ever end?

 

This was an argument given for not reporting that there were 4 co-interventions at the journal, explicitly intended to improve the reproducibility of research published by the journal. That the co-interventions could have affected ability or willingness to share data or materials was considered by the authors to be implausible.

There were only 4 intentional, clearly specified co-interventions. Reporting them and citing a source or sources for readers to follow up was not an onerous task with no clear boundary. It was the bare minimum for openness about what was being studied. Openness isn’t only for numbers or methods of analysis.

Adequate reporting of interventions is arguably the most critical bit of information. You need to know enough about what “it” is, if you want to do it, advocate it, or avoid it, don’t you?

In clinical research, there is a reporting initiative to try to improve this: TIDieR, the Template for Intervention Description and Replication. (Disclosure: I was involved in early related work by this group, who are friends and colleagues.)

So that’s the theoretical side of it. Why do I believe the co-interventions are critical? I concluded that the effects observed were likely because the changed editorial policies and practices created a form of “natural selection”: the journal wanted to publish more reproducible research and was ratcheting up its requirements. The number of articles they published dropped a lot – which didn’t happen at the comparator journals (indeed one had a substantial increase, overtaking the intervention journal’s productivity).

Thus, it seems likely they were rejecting/repelling the kind of author unable/unwilling to share data – and attracting others who were able/willing. The co-interventions included “new statistics” and various data disclosures. If doing these things is associated with also being able/willing to share, then the journal was enriching its author pool with authors willing to share data, with or without badges. And the whole process seemed to be very effortful.

Changing where authors who are going to be open publish – possibly encouraging them to go to a closed access journal – does not necessarily increase openness. (There’s some data on characteristics of open scientists, too, in one of the papers that contributed to the interventions at Psychological Science.)

Here’s how Nick Brown expressed some hypotheses on Twitter:

  1. Strong perception on the part of authors that editor is supporter of open data and may give preference to manuscripts that have it.
  2. Self-selection of authors who were already prepared to share their data and see opportunity to get publication in high-impact journal.

John Sakaluk tweeted a paper by David Giofrè and colleagues that studied some of the co-interventions. (Thanks, John!)

The Giofrè group reported data on compliance with some of the new editorial guidelines – although, they report, they were more lenient about what constituted sharing. So the data aren’t directly comparable. But it’s something that bridges the co-interventions.

Compliance with the guidelines was low for some of them. For some, there was a large increase, as there was with sharing. So I picked out those, and the year 2015 (when the impact of the package was at its highest for that time period – it dropped after that, then rose again), and then split the articles into those with data shared and those without. And yes, there is some correlation, for what it’s worth: reproducibility practices are not completely disassociated from each other here. Rejecting/repelling people for other reproducibility issues may reduce the non-sharer proportion of a pool of authors. Which parts of the package do what is impossible to know.
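Roughly, that tabulation has the shape sketched below – a minimal sketch, with a hypothetical file name and column names, and a Fisher’s exact test standing in for “some correlation”. The actual figures are in the table that follows, not generated by this code.

```python
# A sketch of the cross-tabulation described above. The file name and
# column names are hypothetical; the real per-article figures are in
# the table and linked data below.
import pandas as pd
from scipy.stats import fisher_exact

# One row per 2015 article: whether it complied with a given guideline,
# and whether its data were shared (both True/False).
articles = pd.read_csv("psych_science_2015_articles.csv")

crosstab = pd.crosstab(articles["guideline_compliant"], articles["data_shared"])
print(crosstab)

# Fisher's exact test on the 2x2 table: is guideline compliance
# associated with data sharing?
odds_ratio, p_value = fisher_exact(crosstab)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```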

 

Table - link to data in text

 

Can an observational study ever “prove” anything, or can we only be sure if there’s a controlled trial?

 

Yes, we can be confident about the impact of interventions without controlled trials. You don’t need controlled trials for truly dramatic effects of plausible interventions. We don’t need a controlled trial of smoking to know it can cause lung cancer.

We can learn a lot from solid methods in observational studies. The Giofrè paper is a good example of an observational study tackling complexity and coming up with useful information, with conclusions calibrated to the uncertainty. The paper has some of the same problems, but the authors didn’t claim the effects of Psychological Science’s intervention package were due to any single intervention:

To sum up, we cannot assess the extent that observed changes were caused by guideline changes, but it seems that changes in guidelines may be useful although not sufficient. Changing guidelines may be effective for some practices but rather less so for others. Substantial innovation in science practice seems likely to require multiple strategies for change, including, in particular, the coordinated efforts of journal editors, reviewers and authors.

 

What’s “research spin”?

 

In clinical research, research spin is itself a subject of study, with measurement tools being developed. Here’s how I summarized it in a post critiquing another study:

Research spin is when findings are made to look stronger or more positive than is justified by the study. You can do it by distracting people from negative results or limitations by getting them to focus on the outcomes you want – or even completely leaving out results that mess up the message you want to send. You can use words to exaggerate claims beyond what data support – or to minimize inconvenient results.

You can get an introduction to this field of study by checking out Isabelle Boutron’s papers. It’s part of the broader field of transparency and reporting bias. There’s no magic bullet here either, but in health research a lot of progress has been made on setting standards for reporting studies. Check out the EQUATOR Network if you’re interested in reading more about this. (Disclosure: I’m a co-author of one of the EQUATOR reporting guidelines.)

Reliable, reproducible research, accessible to everyone who could use it or contribute to making it better: that’s a lot to aim for, isn’t it? In the debate since my post, John Borghi put it in this nutshell:

I don’t have strong feelings about badges, just an aversion to accepting seemingly simple solutions to systemic problems.

One useful rule of thumb here is an old classic: if it looks too good to be true, it probably is.

 

Follow-up post on 30 December 2019 – Open Badges Redux: A Few Years On, How’s the Evidence Looking?

 

Photo: open heart tattoo

 

~~~~

 

Photo of badges on a bag

 

The original post:

Bias in Open Science Advocacy: The Case of Article Badges for Data Sharing

Absolutely Maybe posts tagged open science.

 

 

Disclosures: My day job is with a public agency that maintains major literature and data repositories (National Center for Biotechnology Information at the National Library of Medicine/National Institutes of Health). I have had a long relationship with PLOS, including several years as an academic editor of PLOS Medicine, and serving in the human ethics advisory group of PLOS One, as well as blogging here at its Blog Network. I am a user of the Open Science Framework. Specific other disclosures are included in the text.

 

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)

 

* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

 
