I get linked into a lot of thought-provoking debates about blinded peer review because of a post I wrote a couple of years ago. It’s kept me thinking and digging into research about bias in journals. And that’s shifting my position. The case for “blinding” to make journal peer review fair seems less and less plausible to me in the long run. It even seems antithetical to ultimately reducing the problems it’s a band-aid solution for.
There’s a winding chain of logic about this intervention – with strong, weak, and broken links. We need to look at whose interests are served by concealing editorial participants and peer reviews, too, and the implications of that lack of transparency for social and scientific progress. That’s a lot to untangle in one blog post! But let’s give it a go.
How do people’s social and personal biases influence which manuscripts get accepted? And how could that be affected by blinding authors and peer reviewers from each other, and from readers? We are making a trade-off here, juggling:
- the extent to which revealing identities on its own contributes to prejudiced decisions;
- lack of accountability for what peer reviewers write, including the ability for conflicts of interest to remain undetected; and
- our ability to independently monitor and study peer review at journals.
Double-blind peer review puts its chips on the first of those to reduce socially prejudiced decisions and protect peer reviewers from the retribution of vindictive authors. The other two, though, are also strategies for preventing bias and/or tackling it at a social level. Blinding doesn’t just protect peer reviewers from retribution: it protects the vindictive, by concealing evidence of critical explanatory events and by hiding track records of bad behavior.
So does hiding identities work? And how much social and personal prejudice could it prevent when it does? Let’s break that down.
“Blinding” is a misnomer: it’s more a process of obscuring. The effort journals put into concealing authors’ identities, in particular, varies a lot – and it’s harder to pull off in smaller scientific circles, too. One review concluded that on average, blinding fails about half the time, with more prolific or higher-profile authors much easier to spot [PDF]. Here’s what I found in comparison trials of blinding peer reviewers:
The rate of failure of blinding in the trials was high: average failure rates ranged from 46% to 73% (although in 1 journal within one of the trials it was only 10%).
But guessing actual identity isn’t the whole picture, is it? Prejudice and unfair dismissal of people’s work aren’t pinned only to people’s names or the institutions they come from. Deciding what you think and say about a manuscript is a very complex process [PDF]: blinding here can’t be expected to have the same power as it does in a clinical trial, where it ensures the person measuring participants’ blood pressure doesn’t know whether they got an experimental treatment.
Say you have a prejudice against people from certain countries, or newcomers, or feminists. There could be markers of authors being from that group which no amount of redacting citation lists and “our previous work has shown” can remove – including jargon or equipment that they use, writing style, quality of graphics, who they’re studying, and more. Entire subfields can be markers for gender, for example.
It’s hard to unpick it all. I think that’s part of why blinding author names and institutions hasn’t been shown to make a real difference. (My original post remains up-to-date on this: I last checked for comparison studies while preparing this post.)
Another possible contributor to the lack of impact might be that social bias towards authors might not be what dooms a manuscript all that often. I’ve looked at this for gender bias (here and here), and peer reviewer bias against female authors might not be as big an issue for submitted manuscripts at journals as it is in other, more personal, areas (like applications for grants, fellowships, and jobs – or even individuals invited to write journal commentaries).
There’s a lot of bias of all kinds in the publications about editorial bias, too! And not enough systematic review and critique of those publications, either. This field is a cherry-picking feast.
Then there’s the elephant in the room: the editor. “Triple-blind” reviewing isn’t feasible – certainly not all the way to the final point of decision-making. One editor with a strong personal prejudice can do far more damage than a peer reviewer – even if only because they handle more manuscripts. From a modeling study:
With a small fraction (10%) of biased editors, the quality of accepted papers declines 11%, which indicates that effects of editorial biased behavior is worse than that of biased reviewers (7%).
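As a rough intuition pump for why that direction of result makes sense, here’s a toy Monte Carlo sketch. This is my own construction with assumed parameters (a 0.5 score boost for favored authors, three reviewers per paper), not the cited study’s actual model: a biased editor decides alone, so their bias lands at full strength, while a biased reviewer’s score gets averaged with two unbiased ones.

```python
import random

def simulate(biased_role, n_papers=200_000, frac_biased=0.10, boost=0.5):
    """Toy model (assumed parameters, not the cited study's model).
    Papers have latent quality in [0, 1]; half come from a 'favored'
    group. A biased gatekeeper inflates favored-group papers' perceived
    quality by `boost`, waving weak papers through. An editor decides
    alone; a reviewer is one of three whose scores are averaged."""
    rng = random.Random(0)  # fixed seed for a reproducible run
    accepted = []
    for _ in range(n_papers):
        quality = rng.random()
        favored = rng.random() < 0.5

        def score_once():
            biased = rng.random() < frac_biased
            return quality + (boost if biased and favored else 0.0)

        if biased_role == "editor":
            score = score_once()  # one gatekeeper, decision is final
        elif biased_role == "reviewer":
            # bias is diluted: one skewed score averaged with two fair ones
            score = sum(score_once() for _ in range(3)) / 3
        else:
            score = quality  # no bias anywhere
        if score > 0.5:
            accepted.append(quality)
    return sum(accepted) / len(accepted)

none_q = simulate("none")
reviewer_q = simulate("reviewer")
editor_q = simulate("editor")
print(f"mean accepted quality: unbiased={none_q:.3f} "
      f"reviewer-biased={reviewer_q:.3f} editor-biased={editor_q:.3f}")
```

In this sketch the same share of biased individuals (10%) drags accepted-paper quality down further when they sit in the editor’s chair than in a reviewer’s, purely because nothing averages their judgment away – the qualitative pattern the study reports.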
Journals don’t often check for bias among their editors – or at least, if they do, they don’t report it or talk about how. It didn’t come up as an issue important to monitor in a recent consensus from editors about editorial core competencies, either. (You can see what I said about that: I was a peer reviewer for a journal with open review.)
To help get “peer review” into perspective, I’ve charted out the stages within a single journal process, and who could be the source of prejudice along the way:
1. The journal’s reputation, presentation, policy, process and/or representatives attract or deter submissions, selectively.
   Potential for bias: Editors. Author choices could skew the profile of submissions (for example, if early career researchers prefer to submit to journals with double-blind peer review, or authors believe a certain journal doesn’t publish work by authors like them).
2. Opinion pieces are invited, or researchers are invited to submit specific work.
   Potential for bias: Editors.
3. Manuscript rejected without being sent for peer review. (Elsevier, for example, reports that 30-50% of submissions to its journals are rejected without peer review. The journal Academic Medicine desk rejects 65%.)
   Potential for bias: Editors.
4. Peer reviewers with definite or likely biases or conflicts of interest are chosen deliberately; or the journal has a socially or intellectually biased peer reviewer pool; or authors/peers suggest biased peer reviewers.
   Potential for bias: Editors.
5. Biased peer reviews are submitted and accepted.
   Potential for bias: Peer reviewers, editors.
6. The journal decides to accept or reject the manuscript.
   Potential for bias: Editors.
7. The journal’s policy and/or process enables appeal, selectively – and willingness, awareness, and power to appeal are unevenly distributed. Final journal decision to accept or reject the manuscript.
   Potential for bias: Editors, authors.
The power here, on balance, lies with editors. They might be the principal beneficiaries of hidden editorial processes, too. From the outside, at most journals, all people can monitor is what gets published, plus an annual “thank you” list of peer reviewers. How often manuscripts from non-Euro-American author groups get peer reviewed by non-Euro-American peers, for example, is something we can’t see – and therefore can’t criticize.
It’s easier for a journal to offer double-blind peer review than to make itself vulnerable to serious scrutiny. And it has become a sign of fairness to many people. For some, that’s because they believe it’s an effective intervention despite the lack of proof. For others, it’s a conviction that if double-blind peer review only ever stops an occasional author or peer reviewer from being disadvantaged, that’s reason enough to justify the policy (here’s a well-articulated recent example). Behind that lies a belief that it can do no harm: a belief unsupported by evidence.
That’s enough support, and enough vested interests, to make sure double-blind peer review stays around for a while, and maybe even grows. It’s clashing with other drivers, though: preprints making it even harder to hide whose paper it is, the push to gain credit for peer review, more public criticism of publications, and more meta-research on publishing practices.
Open peer review and collaborative peer review will grow, too. The historical trajectory of progress in science has been towards more openness and collaboration, and I think that will continue. The culture is likely to shift slowly, in ways that reduce the problems blinded peer review is meant to prevent. As Stebbing and Sanders wrote recently, in the context of post-publication peer review in clinical research:
The more frequently critiques of the literature, or, for example, of clinical medicine, are provided in the open, and the greater number of people who are engaged in this activity in public, the less likely it is that any individual can be successfully targeted for their honest attempts to correct the scientific corpus or to reveal inappropriate medical practice.
[Update 10 March 2018] Added Academic Medicine‘s 2018 desk rejection study.
* The thoughts Hilda Bastian expresses here at Absolutely Maybe are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.