
The Systematic Review is Dead! Long Live [insert preferred sweeping claim]!

 

No, of course the systematic review/meta-analysis hasn’t died. But I kind of wish the flag-waving for the claim that its current form should die would die down.

Before we go any further: I am not waving the flag for systematic reviews being so special that nothing about them should ever change. And I don’t think anyone would argue any more that just because something is, or calls itself, a systematic review, it’s necessarily good evidence. Nor is every systematic method a good way to review evidence. Some are truly egregious, and the straws people grasp at to make evidence synthesis easier can be, well, quite breathtaking. Exhibit A:

 

[Image: snippet from a systematic review’s methods, saying the authors would choose only one trial for each drug to keep the tables manageable]

 

The genre of dissing systematic reviews is getting truly extreme, though. In my last post, I tackled John Ioannidis’ claim that only 3% of systematic reviews are both “decent” and “clinically useful”. To be very clear, he at least was only arguing for a gradual transition to a new way of tackling evidence in health care: moving away from retrospective meta-analysis, and toward doing primary research in the context of prospective meta-analysis.

That prescription is a case in point for the problems with the other genre, though. It is not really a workable solution at scale, and that set of methods might not resolve issues as neatly in practice as it could in theory. Just dig into the controversies around data on adverse effects of statins that have engulfed the prospective meta-analysis of cholesterol-lowering treatment. (You could start here or here.) Whatever we do will generate a new set of problems to address, and perhaps not be as effective an intervention as theory suggested.

Something about the challenges we face does seem to encourage extremist solutions, though. For a while at least, you would hear a strong case being argued against ever using published clinical trial reports at all, which could be a poster child for throwing out the evidence baby with the bathwater. (People have been more circumspect in what’s made it to print, like writing that the literature should “probably” be ignored [PDF].)

This month, there was a new entrant to the genre: Richard Horton, the editor of The Lancet, weighing in to signal boost Ian Roberts’ claim that current systematic review practice is “an anachronistic religion”.

 

[Cartoon: person fiercely scribbling]

 

This case rests on two pillars. The first is an apparent Damascus Road experience Roberts had because of the impact of a prolific serial fraudster in his subject area. There is no way I want to diminish the importance of research integrity to systematic reviews: it’s the major focus of my doctoral dissertation, after all (coming soon!). That said, shaping your perspective on the issue based on one of the few corners where there is a catastrophically large number of fraudulent trials is going to distort the picture. Fraudulent trials are very rare events. To proceed as if this means we should no longer take evidence on trust is overkill.

I’ve argued for years that one of the biggest threats to our ability to keep up with evidence is escalating the demands on systematic reviewers without very good reason. Going as far as treating every trial as guilty until proven innocent would be just that kind of disastrous intervention.

Roberts’ second pillar is on more solid ground: the time spent reviewing massive quantities of low-quality trials. It’s a colossal waste of money, he says, and Horton quotes him as saying, “Nobody wants to hear this story”. I think the bigger worry is people over-reacting to the extreme position that emerges.

The cumulative effect of systematic review/meta-analysis bashing is to discredit something that, while obviously too often deeply flawed, still makes a really important contribution in an imperfect world that will always be imperfect, despite some people’s fervent beliefs that they have found the one true way forward.

Roberts’ preferred sweeping claim, according to Horton, is to declare a kind of war on “poor-quality trials, often underpowered and from single centres”. The message that came through for me was: no one should fund those trials, journals shouldn’t publish them, and systematic reviewers shouldn’t rely on them (or maybe not even include them). (Roberts and colleague Katherine Ker wrote about this in 2015, too.)

I certainly wish funders and everyone else would devote every fiber of their beings to improving the quality of trials. But in my opinion, the rest of what’s being argued would be dangerous, and in some respects (like not publishing trials, and ignoring them), unethical. It would doom a lot of us to being victims of the perfect being the enemy of the good.

I see the argument about not wasting time on lousy evidence, and I think that’s a valuable discussion to have. But there’s a problem with the theory of the case. Judging the reliability of evidence isn’t something that has surgical precision. Reading about the poor inter-rater reliability of methods for deciding what’s poor quality in a trial makes me wince (for example, here). And even then we’re talking about probabilities. Being high quality doesn’t guarantee a trial’s answers are “right”, and being low quality doesn’t guarantee a trial’s answers are “wrong”.

[Cartoon: road sign]

Even if it were crystal clear, what’s to become of all the questions where large, high-quality, multi-center trials are never, ever going to happen? And that will be most health and social care questions. Even if it were feasible, the cost is out of the question. As societies we also want great justice systems, social welfare support, schools, roads… There is only so much money, and so many people, available for running optimal clinical research. People who make their livings, or a big chunk of them, out of doing clinical research have important expertise, but at some point serious conflicts of interest come into play when they are arguing for more funding for what they do.

Back in 1996, Iain Chalmers and Adrian Grant wrote a hard-hitting editorial about what it was like in one critical area of health care before a landmark clinical trial changed everything [PDF]. What they said could have applied to any number of other clinical questions:

For 70 years, the proponents of various drugs and drug cocktails have hurled disdainful abuse at each other from separate mountain tops, secure in the knowledge that no strong evidence existed that could undermine any one of their multitude of conflicting opinions.

If we were to disregard high-quality conventional systematic reviews, or dismiss too much of the evidence, we would be back there.

Systematic reviews have to evolve, and they do. Different types will be reasonable ways to tackle different questions and bodies of evidence. One size doesn’t fit all any more, and there won’t be one type of systematic review to rule them all in the future either. We need a slew of solutions to tackle the complicated problem of having vast numbers of clinical trials and other types of primary studies. Neither discrediting conventional systematic reviews nor proposing radical changes in methodology without strong evidence is a solution.

 

[Comic title: Goldilocks and the Three Reviews]

[Comic panel: “This review is too complicated”]

[Comic panel: “This review was too simple”]

[Comic panel: “This review was just right”]

 

~~~~

 

Goldilocks and the Three Reviews originally appeared in a 2013 Statistically Funny blog post. I remain now, as then, grateful to the Wikipedians who put together the article on Goldilocks and the Three Bears. That article pointed me to the fascinating discussion of “the rule of three” and the hold this number has on our imaginations.

 

The review with the snippet about selecting just one trial for each drug is this one.

 

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)

 

 

Discussion
  1. This is a problem I have been thinking about since the mid-eighties, when I first thought the solution was much easier: https://www.jameslindlibrary.org/orourke-k-detsky-as-1989/ (or see this take on it: https://statmodeling.stat.columbia.edu/2012/02/12/meta-analysis-game-theory-and-incentives-to-do-replicable-research/).

    Now, I believe the problems raised in “Redefining the ‘E’ in EBM” are real, but of course, rather than banning systematic reviews, they need to be strengthened.

    One productive way to do this would be to randomly audit a sample of clinical trials, similar to income tax audits (not so much to catch errors as to provide strong incentives not to make them).

    Who should do this, and how, will be challenging, but the University of Toronto does randomly audit its faculty for compliance with ethics requirements.

    I believe most of the problems result from unintentional sloppiness and methodological misunderstanding.

    But no one knows that, or the prevalence of “too low quality” studies in the various fields.
