It can be a long, unrewarding slog to unpack the problems in a published paper. David Allison and colleagues talked about 18 months of their experience trying to get the record on 25 papers corrected, and why they gave up. It took too much time, and was often futile, they wrote:
Many journal editors and staff members seemed unprepared or ill-equipped to investigate, take action or even respond. Too often, the process spiralled through layers of ineffective e-mails among authors, editors and unidentified journal representatives, often without any public statement added to the original article. Some journals that acknowledged mistakes required a substantial fee to publish our letters: we were asked to spend our research dollars on correcting other people’s errors.
All that doesn’t even take into account the work of identifying and writing up the problems in other people’s work. Doing that rigorously takes serious time. Over the last few months I’ve spent an inordinate amount of it analyzing the validity of claims in a critique of a Cochrane review, and then being responsive about what I wrote, as well as about the associated turmoil erupting in the Cochrane Collaboration.
That turmoil made this situation unusual, and not in a good way. It’s unusual in a very good way for another reason, though. Not only is the journal moving towards correcting the critique, it’s published its current thinking and is inviting public comments. And that’s fantastic. I’ve been very critical of the role played by the journal, BMJ Evidence Based Medicine (BMJ EBM), and the harm caused by publishing this without enough diligence can’t be undone now (see my previous post). But excellent post-publication management, and doing it without dragging it out over years, is a valuable corrective.
This post is my response to the journal’s review of the critique. If you want to dig into the backstory, here are some key places:
- You can catch up on the Cochrane review here, and the critique here.
- My critique was posted here on 25 August.
- Cochrane’s editors in chief responded on 3 September [PDF].
- The original response by the BMJ EBM editors came on 12 September here.
- The critique’s authors responded to the Cochrane response here on 17 September.
- The BMJ EBM editors’ review and call for feedback was posted here on 16 October.
The editors’ review of issues and proposed response is broken down into 7 questions. Here’s my response to each.
1. Did the Cochrane HPV review miss “nearly half” of the eligible trials, as reported by the BMJ EBM article?
It’s great that the table subsequently provided by the critique’s authors, Jørgensen and colleagues, is proposed as a correction [PDF]. However, it is not a table of 20 eligible trials, of which 4 had already been included in the Cochrane review, leaving 16 missing trials, as described by the editors. It’s a table of 21 trials. Of the 21, only 11 are categorized as “eligible”, and 3 of those were not missed by the Cochrane review: that leaves only 8 missing trials, plus 3 challenges to Cochrane’s non-inclusions. Of those 3, 2 were excluded by Cochrane because the trial publications didn’t report female participants separately, although the data were available elsewhere.
The third trial was the only large trial, which makes it critical: it was excluded as a Phase IV trial in accordance with the Cochrane review’s inclusion criteria. Jørgensen and co said the CSR describes it as a Phase III/IV trial (see here). The NCT register and the trial publication both call it a Phase IV study, and so does the manufacturer registry.
The other 10 trials are categorized by Jørgensen and colleagues as “possibly eligible”. That’s a critically different designation. In fact, it’s a bit misleading – several of them are trials that are included in the review, but whose data were missing, without explanation, from the adverse events analyses. That’s very important, but it’s not the same as missing a trial.
It would be correct to speak of at most 11 trials regarded by Jørgensen and colleagues as “eligible” and “missing”. And then to speak of missing data separately.
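For what it’s worth, the counting above can be put in a few lines (a minimal sketch; the category labels are Jørgensen and colleagues’ own, and the counts are as I read their table – these remain disputed figures, not settled facts):

```python
# Counts as I read the Jørgensen et al. table (disputed figures, not settled facts).
table_trials = 21        # rows in the table, not 20 as described by the editors
eligible = 11            # trials the table categorizes as "eligible"
possibly_eligible = 10   # a critically different category
not_missed = 3           # eligible trials that were not missed by the Cochrane review
missing = eligible - not_missed

assert eligible + possibly_eligible == table_trials
print(f"genuinely missing eligible trials: {missing}")  # 8, not 16
```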
2. Was the BMJ EBM article calculation of the number of missing participants in the 20 “missing trials” correct?
I agree it’s important to note that the original version of the critique didn’t exclude male participants. However, it’s also critical to note here that 20,515 participants come from 1 trial (the vaccine strategy effectiveness trial which was excluded by Cochrane as a Phase IV trial).
The correction proposed by the BMJ EBM editors is this:
The number of randomised participants could be assessed for 42 of the 46 trials, and we found an additional 25,550 females (and possibly up to 30,195 for the Cochrane HPV Review’s serious adverse events meta-analyses) who are eligible for the Cochrane HPV Review’s meta-analyses. With nearly half of the trials and possibly half of the participants missing, the Cochrane authors’ conclusion, ‘that the risk of reporting bias may be small’, was inappropriate.
The 25,550 females come from the 11 trials (obviously mostly from the disputed vaccine strategy trial). And those 11 eligible trials don’t make the total 42 to 46 trials: they make the total 37 by my count. (The Cochrane response accepted 34 as eligible altogether.) Obviously, 11 out of 37 isn’t “nearly half” – and nor is 11 out of 42 or 46. It would be better to reconcile all these numbers consistently, and then report a percentage.
It’s also critical to point out that the Cochrane review included 73,428 participants: if the vaccine strategy trial remains excluded, only a very small percentage of participants would be added. Even if all of these participants were added, bringing the total to around 100,000, the proportion wouldn’t be remotely close to half.
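To make the arithmetic explicit (a minimal sketch using the figures above; the trial totals remain disputed, so the percentages are illustrative):

```python
# The "nearly half" claims, checked against the numbers discussed above.
eligible_missing_trials = 11
total_trials = 37                      # my count; the critique implies 42 to 46
print(f"trials: {eligible_missing_trials / total_trials:.0%}")   # ~30%, not nearly half

included_participants = 73_428         # participants in the Cochrane review
additional_females = 25_550            # per the proposed correction
total = included_participants + additional_females
print(f"participants: {additional_females / total:.0%}")         # ~26%, not nearly half
```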
3. Did the Cochrane HPV Review authors mistakenly use the term placebo to describe active comparators, as reported by the BMJ EBM article?
I agree that this is a matter of opinion. But a correction is still required, because this is what the article says:
The Cochrane authors mistakenly used the term placebo to describe the active comparators. They acknowledged that ‘The comparison of the risks of adverse events was compromised by the use of different products (adjuvants and hepatitis vaccines) administered to participants in the control group’. Nevertheless, this statement can easily be overlooked, as it comes after 7500 words about other issues in the discussion and under the heading ‘Potential biases in the review process’. Active comparators was not a bias in the review process but a bias in the design of the HPV vaccine trials.
Their definition of placebo comparator is spelled out in the abstract and the methods section, not only 7,500 words into the review. However, I agree with the subsequent criticisms that the use of just “placebo” in the plain language summary and summary of findings is misleading. I think this requires a correction, to state that although the term was defined at the outset, the use of the term in parts of the review is misleading.
4. Did the BMJ EBM article substantially overstate its criticisms?
I disagree with the editors’ “no” answer here – especially since it doesn’t establish that the review’s fairly cautious conclusions would be different. But I agree that the article doesn’t require a correction to its conclusions on these grounds: the critique’s authors’ opinions remain their opinions, even if based on problematic data.
5. Were the serious adverse events rates accurately reported by the authors of the BMJ EBM article?
I’m happy that the editors agree with the point I made here about Jørgensen & co’s error, derived from comparing the number of events to the number of women experiencing those events. However, just correcting that final sentence as proposed by the editors doesn’t solve the key problem here. The faulty calculation was the basis for the main issue from the point of view of relying on the review’s conclusions. I’ve added bold and italics to the quote below to show what I mean:
The Cochrane authors reported that they made a ‘Particular effort’ to assess serious adverse events and performed a sensitivity analysis that gave them ‘confidence that published and registry or website-sourced data are similar for the same study’. This seems unlikely. As an example, the PATRICIA trial publication only included two thirds (1400/2028) of the serious adverse events listed on ClinicalTrials.gov. The Cochrane authors included 701 vs 699 serious adverse events (1400) from the PATRICIA trial publication (see the Cochrane reviews’ ‘Figure 10, Analysis 7.6.2’) and 835 vs 829 serious adverse events from its ClinicalTrials.gov entry (see ‘Comparison 7, Analysis 6: 7.6.2’; both analyses were called ‘7.6.2’). We found 1046 vs 982 serious adverse events (2028) when we summarised the data from ClinicalTrials.gov (see ‘Results: Serious Adverse Events’).
The example is wrong. Therefore no basis in data is given for the challenge to the validity of the Cochrane reviewers’ conclusion. What’s more, this goes directly to the reliability of the trial publications, and the reporting bias issue. That entire paragraph is therefore a serious problem, invalidated as it is by this error.
6. Were all trials that were included in the Cochrane HPV review funded by the HPV vaccine manufacturers, as reported by the BMJ EBM article?
I disagree with the editors here. There is no way that an average reader would understand the nuance implied by this suggestion:
Therefore, all included trials were funded or sponsored by the HPV vaccine manufacturers.
That one of the trials was publicly funded is critical information to disclose. The Cochrane reviewers were not in error here, and it’s a critical public trust issue. This is what they said:
All the trials, except one (CVT (ph3,2v)), were funded by the vaccine manufacturers. However, vaccine efficacy and adverse effects were not different in trials funded by manufacturers and the one trial conducted with public resources.
This is what the critique said:
They stated that, ‘All but one of the trials was funded by the vaccine manufacturers’, which is not correct.
But the review is correct. That paragraph needs revision – and the nature of the sponsorship role for FDA submission should be explained.
7. Did the Cochrane HPV review comply with Cochrane’s COI policy?
The statement about the Cochrane COI policy is misleading, because it does not communicate the nuance of the policy. Indeed, that the policy is nuanced and not absolute is a cause for criticism of it – notably by 2 of the authors of this critique of the Cochrane HPV review.
On the same day the BMJ EBM editors published their proposed response to correcting the critique, the BMJ reported that the Cochrane funding arbiters had looked at the various claims made in the critique and the subsequent response to Cochrane – and concluded there was no breach. The information in the BMJ EBM article is wrong. If they want to argue that there is not enough distance between the lead author and manufacturer funding, that would be fair enough. But the situation should be described accurately, and it shouldn’t be claimed that the review does not comply with the COI policy.
Please also note: Markowitz is not an author of the Cochrane review (although she was an author of the protocol), and there’s no information that she was paid by industry for giving that talk – indeed, as a CDC employee, she shouldn’t have been.
I started this post with a reference to a paper by Allison and colleagues. I’ll end it with a quote from an interview with Ben Goldacre about his team’s work on trying to systematically approach errors in trials:
This is a phenomenally laborious process. Not a week goes by that we don’t curse the day we set out to do this.
Even tackling the problems in a single article, like this one in BMJ EBM, has left me feeling like that from time to time! The process of critiquing accurately is painstaking, and then being responsible and responsive to the consequences of this public critique has been onerous. But when the record gets corrected as it should – as it appears will happen here – then somehow the burden seems to shrink.
People argue all the time that we need more incentives if people are to go through the effort of post-publication peer review. To me, getting the scientific record corrected is the most powerful incentive. Thank you, Carl Heneghan and Igho Onakpoya (editors of BMJ EBM).
This is the 6th in a series of posts about the unfolding events that began with the publication of a critique of the Cochrane review of clinical trials of the HPV vaccine to prevent cervical cancer. The first critiqued that critique. The second looked at what we know about whether the vaccine is working as would be expected from the trials. The third goes into the crisis that unfolded at the Cochrane Collaboration, and the responses to the critique. The fourth discussed extremism and anti-industry bias. And the fifth discussed journals’ responsibilities in vaccine debates.
Disclosures: I led the development of a fact sheet and evaluation of evidence on HPV vaccine for consumers in 2009 for Germany’s national evidence agency, the Institute for Quality and Efficiency in Healthcare (IQWiG), where I was the head of the health information department. We based our advice on this 2007 systematic review including 6 trials with 40,323 women, and an assessment of those trials. The findings were similar to those of the 2018 Cochrane review. I have no financial or other professional conflicts of interest in relation to the HPV vaccine. My personal interest in understanding the evidence about the HPV vaccine is as a grandmother (of a boy and a girl).
I am one of the members of the founding group of the Cochrane Collaboration and was the coordinating editor of a Cochrane review group for 7 years, and coordinator of its Consumer Network for many years. I am no longer a member, although I occasionally contribute peer review on methods. I often butt heads with the Cochrane Collaboration (most recently as a co-signatory to this letter in the BMJ). I have butted heads on the subject of bias with authors of the Copenhagen critique.
I was invited to speak at Evidence Live, and my participation was supported by the organizers, a partnership between the BMJ and the Centre for Evidence-Based Medicine (CEBM) at the University of Oxford’s Nuffield Department of Primary Care Health Sciences – the director of the CEBM is the editor of BMJ EBM. Between 2011 and 2018, I worked on PubMed projects at the National Center for Biotechnology Information (NCBI), which is part of the US National Institutes of Health. I am currently working towards a PhD on some factors affecting the validity of systematic reviews.