Are meta-analyses done by promoters of psychological treatments as tainted as those done by Pharma?

We would not waste time with a meta-analysis from Pfizer claiming the superiority of its antidepressant, particularly when it is a meta-analysis of trials mostly done by Pfizer. Bah, just another advertisement. What, the review was published? And without a declaration of conflict of interest? We should be outraged and doubt the integrity of the review process.

If we were not distrustful already, Ben Goldacre’s Bad Pharma taught us to distrust a drug company’s evaluation of its own product. Most randomized drug trials are industry funded. In these trials, the sponsor’s drug almost always triumphs. Overall, industry funded placebo-controlled trials of psychotropic medication are five times more likely to get positive results than those free of such funding.

My colleagues and I have successfully pushed for formalizing what was previously informal and inconsistent: in conducting a meta-analysis, the source of funding for an RCT should routinely be noted in the evaluation using the Cochrane Collaboration risk of bias criteria. Unless this risk of bias is flagged, authors of meta-analyses are themselves at risk of unknowingly laundering studies tainted by conflict of interest and coming up with seemingly squeaky-clean effect sizes for the products of industry.

Of course, those effect sizes will be smaller if industry-funded trials are excluded. And maybe the meta-analysis would come to a verdict of “insufficient evidence” if they were.

My colleagues and I then took aim at the Cochrane Collaboration itself. We pointed out that this flagging had been done only inconsistently in past Cochrane reviews. Shame on them.

They were impressed, set about fixing things, and then gave us the Bill Silverman Award. Apparently the Cochrane Collaboration is exceptionally big on people pointing out when it is wrong, and so it reserves a special award for whoever does it best in any given year.

Bill Silverman was a founding member of the Cochrane Collaboration and pointed out that lots of people were making supposedly evidence-based statements that were wrong. That is why some sort of effort like the Cochrane Collaboration was needed. Silverman was a certified troublemaker. Our getting the award certifies us as having made trouble. I am taking that as a license to make some more.

Meta-analyses everywhere, but not a critical thought…

What is accepted as necessary for drug trials is routinely ignored for trials of psychological treatments and for the meta-analyses integrating their results. Investigator allegiance has been identified as one of the strongest predictors of outcome, regardless of the treatment being evaluated. But this does not get translated into enforcement of disclosure of conflicts of interest by authors, or into heightened skepticism among readers.

Yet, we routinely accept claims of superiority of psychological treatments made by those who profit from such advertisements. Meta-analyses allow them to make even bigger claims than single trials. And journals accept such claims and pass them on without conflict of interest statements. We seldom see any protesting letters to the editor. Must we conclude that no one is bothered enough to write?

Meta-analyses of psychological treatments with undisclosed conflicts of interest are endemic. We already know investigator allegiance is a better predictor of the outcome of a trial than whatever is being tested. But this embarrassment is explained away in terms of investigators’ enthusiasm for their treatment. More likely, results are spurious, inflated, or spun. There is a high risk of bias associated with investigators having a dog in the fight, and great potential for throwing the match with flexible rules of data selection, analysis, and interpretation (DS, A, and I). And then there is hypothesizing after results are known (HARKing).
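The inflation from flexible analysis is not hypothetical; it is simple arithmetic. Under the null hypothesis a p-value is uniformly distributed, so an investigator who measures several outcomes and highlights whichever one comes out “significant” drives the false-positive rate well above the nominal 5%. A quick simulation sketch (generic numbers, assuming independent outcomes; it does not model any particular trial):

```python
# Sketch: under the null, p-values are uniform on [0, 1]. If an
# investigator tests k independent outcomes and reports a "finding"
# whenever any one p-value falls below alpha, the chance of at least
# one false positive is 1 - (1 - alpha)**k, not alpha.
import random

def flexible_false_positive_rate(k_outcomes, n_sims=100_000, alpha=0.05, seed=1):
    """Monte Carlo rate of >= 1 'significant' result among k null outcomes."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_sims)
        if min(rng.random() for _ in range(k_outcomes)) < alpha
    )
    return hits / n_sims

print(flexible_false_positive_rate(1))   # close to the nominal 0.05
print(flexible_false_positive_rate(5))   # close to 1 - 0.95**5, about 0.23
```

With five outcomes to pick from, nearly a quarter of “trials” of a completely ineffective treatment will yield something significant to put in the abstract.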

It is scandalous enough that investigators can promote their psychological products by doing their own clinical trials, but they can go further: they can do meta-analyses. After all, everybody knows that a meta-analysis is a higher form of evidence, a stronger claim, than an individual clinical trial.

Because meta-analyses are considered the highest form of evidence for interventions, they provide an important opportunity for promoters to brand their treatments as evidence-supported. Such branding is potentially worth millions in terms of winning contracts from governments for dissemination and implementation, consulting, training, and sale of materials associated with the treatment. Readers need to be informed of the potential conflicts of interest of the authors of meta-analyses in order to make independent evaluations of claims.

And the requirement of disclosure ought to apply to reviewers who act as gatekeepers for what gets into the journals. Haven’t thought of that? I think you soon will. Read on.

This will be the first of two posts concerning the unreliability of a meta-analysis done by authors benefiting from the perception of their psychological treatment, Triple P Parenting Programs (3P) as “evidence-supported.” By the end of the second post, you will see how much work it takes to determine just how unreliable the meta-analysis is and how inflated an estimate it provides of the efficacy and effectiveness of 3P. Maybe you will learn to look for a few clues in examining other meta-analyses. But at least you will have learned to be skeptical.

No conflict of interest statement was included with the article. Furthermore, the bulk of the participants came from studies in which investigators received financial benefits from a positive outcome, very often including one or more authors of this particular meta-analysis.

Nothing was said in the article, but an elaborate statement was made in the preregistration of the meta-analysis at the International Prospective Register of Systematic Reviews (PROSPERO), CRD42012003402.


Many journals, like PLOS ONE, routinely remind editors and reviewers to consult the preregistration and be alert to any discrepancies between it and the published article; such discrepancies are common and need to be clarified by the authors. Apparently this was not done.

It is not that Clinical Psychology Review lacks a policy concerning conflict of interest.

All authors are requested to disclose any actual or potential conflict of interest including any financial, personal or other relationships with other people or organizations within three years of beginning the submitted work that could inappropriately influence, or be perceived to influence, their work. See also http://www.elsevier.com/conflictsofinterest. Further information and an example of a Conflict of Interest form can be found at: http://help.elsevier.com/app/answers/detail/a_id/286/p/7923.

Interlude: What we learn from the example of 3P, we can apply elsewhere

In the early days of the promotion of acceptance and commitment therapy (ACT), small-scale studies produced positive results at a statistically improbable rate because of DS, A, and I and HARKing. Promoters of ACT then conducted a meta-analysis that was cited in claims made to Time Magazine that ACT was superior to established treatments. Yet, in the short time since I blogged about this, it has become apparent that ACT is not superior to other credible, structured therapies. Promoters’ praise of ACT continues to depend disproportionately on small, methodologically inadequate studies that, very often, they themselves conducted.

But promoters of ACT really do not need to worry. They are off doing workshops and selling merchandise. In the workshops, they often show dramatic interventions, like crying with patients, which have not been demonstrated to be integral to any efficacy of ACT but are sure crowd-pleasers.

And their anecdotes and testimonials often involve applications of ACT that are off label, i.e., not supported by studies of similar patients with similar complaints. They have gotten the point that workshops are not primarily about “how to” and “when to” but about drama and entertainment. They have learned from Dr. Phil.

But no one’s going to revoke their uber alles evidence-supported branding.

In a future post, I will show the same phenomenon is beginning to be seen with mindfulness therapy. There was already an opening shot at this in my secondary blog.

I am sure there are other examples. What is different about 3P is the literature is huge and the issue of conflict of interest has now been so squarely placed on the table.

In this and the next post, I will show meta-analysis as self-promotion on a grand scale. And I will begin my analysis with a juicy tale of misbehavior by editors, reviewers, and authors.

I hope the story will anger those interested in the integrity of the literature concerning psychological treatments, be they promoters, policymakers, taxpayers footing the bill for treatment, or consumers who are often receiving treatment because it is mandated.

Why you should not be reading meta-analyses…unless …

Analyses of altmetrics suggest that when most people find an interesting study with an internet search, they get only as far as the abstract, not downloading and reading the article. But chances are they form opinions about those articles based solely on reading the abstract.

In the case of meta-analyses, forming an independent opinion can take multiple readings and the application of critical skills guided by a healthy dose of skepticism. And maybe going back to the original studies.

If you do not have the time, the skills, and the skepticism, you should not be reading meta-analyses. And you should not be accepting what you find in abstracts.

Should you stop reading meta-analyses?

Suspend judgment. Let me lay out for you a critical analysis and see if you would be willing to undertake it yourself. Or simply walk away, asking “WTF, why bother?”

3P

Triple P Parenting is described by its developers

The Triple P – Positive Parenting Program is one of the most effective evidence-based parenting programs in the world, backed up by more than 30 years of ongoing research. Triple P gives parents simple and practical strategies to help them confidently manage their children’s behaviour, prevent problems developing and build strong, healthy relationships. Triple P is currently used in 25 countries and has been shown to work across cultures, socio-economic groups and in all kinds of family structures.

They proudly proclaim its branding as evidence-supported:

No other parenting program in the world has an evidence base as extensive as that of Triple P. It is number one on the United Nations’ ranking of parenting programs, based on the extent of its evidence base.

An ugly and incredible story

Along came Phil Wilson, MD, PhD. He tried to express skepticism but got slapped down.

He sent off a manuscript describing a meta-analysis to Clinical Psychology Review. The manuscript noted how little evidence there was for the efficacy of 3P that was not tainted by investigator conflict of interest.

It is reasonable to presume that someone associated with the promoters of 3P was invited to review his manuscript.

  • The author’s employer was contacted and informed that Wilson had written a manuscript critical of 3P.
  • The manuscript was savaged.
  • Promoters of 3P sent Wilson some studies that had been published after the period covered by his meta-analysis.
  • The journal refused Wilson’s request for ascertainment of whether reviewers had disclosed conflicts of interest.

Wilson filed a formal complaint with the Committee on Publication Ethics (COPE). You can read it here and the letter from COPE to Clinical Psychology Review here.

Wilson’s rejected manuscript was nonetheless published in BMC Medicine. I praised it in a blog post, but noted that it had been insufficiently tough on the quality and quantity of the studies cited in the widely touted branding of 3P as evidence-supported. With Linda Kwakkenbos, I then reworked the blog post into an article that appeared in BMC Medicine.

Wilson’s meta-analysis

The meta-analysis is well reasoned and carefully conducted, but scathing in its conclusion:

In volunteer populations over the short term, mothers generally report that Triple P group interventions are better than no intervention, but there is concern about these results given the high risk of bias, poor reporting and potential conflicts of interest. We found no convincing evidence that Triple P interventions work across the whole population or that any benefits are long-term. Given the substantial cost implications, commissioners should apply to parenting programs the standards used in assessing pharmaceutical interventions.

My re-evaluation

My title says it all: Triple P-Positive Parenting programs: the folly of basing social policy on underpowered flawed studies.

My abstract

Wilson et al. provided a valuable systematic and meta-analytic review of the Triple P-Positive Parenting program in which they identified substantial problems in the quality of available evidence. Their review largely escaped unscathed after Sanders et al.’s critical commentary. However, both of these sources overlook the most serious problem with the Triple P literature, namely, the over-reliance on positive but substantially underpowered trials. Such trials are particularly susceptible to risks of bias and investigator manipulation of apparent results. We offer a justification for the criterion of no fewer than 35 participants in either the intervention or control group. Applying this criterion, 19 of the 23 trials identified by Wilson et al. were eliminated. A number of these trials were so small that it would be statistically improbable that they would detect an effect even if it were present. We argued that clinicians and policymakers implementing Triple P programs incorporate evaluations to ensure that goals are being met and resources are not being squandered.

You can read the open access article, but here is the crux of my critique

Many of the trials evaluating Triple P were quite small, with eight trials having less than 20 participants (9 to 18) in the smallest group. This is grossly inadequate to achieve the benefits of randomization and such trials are extremely vulnerable to reclassification or loss to follow-up or missing data from one or two participants. Moreover, we are given no indication how the investigators settled on an intervention or control group this small. Certainly it could not have been decided on the basis of an a priori power analysis, raising concerns of data snooping [14] having occurred. The consistently positive findings reported in the abstracts of such small studies raise further suspicions that investigators have manipulated results by hypothesizing after the results are known (harking) [15], cherry-picking and other inappropriate strategies for handling and reporting data [16]. Such small trials are statistically quite unlikely to detect even a moderate-sized effect, and that so many nonetheless get significant findings attests to a publication bias or obligatory replication [17] being enforced at some points in the publication process.
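The claim that such small trials are statistically unlikely to detect even a moderate effect is easy to check. The sketch below is generic power-analysis arithmetic for a two-sided, two-sample t-test (assuming scipy is available); it uses no data from any Triple P trial:

```python
# Power of a two-sided, two-sample t-test with equal group sizes,
# computed from the noncentral t distribution. Cohen's d = 0.5 is the
# conventional "moderate" effect size used for illustration here.
from math import sqrt
from scipy import stats

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test with equal group sizes."""
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)           # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    # Probability the noncentral t statistic lands in the rejection region
    return (1 - stats.nct.cdf(t_crit, df, ncp)
            + stats.nct.cdf(-t_crit, df, ncp))

for n in (9, 18, 35):
    print(f"n = {n:2d} per group: power = {two_sample_power(0.5, n):.2f}")
```

For a moderate effect of d = 0.5, groups of 9 yield power well under 20%, and even the minimum of 35 per group that we proposed gives only a bit over 50%, which is why a run of consistently “significant” small trials should arouse suspicion.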

In response, promoters of 3P circulated on the Internet a manuscript labeled as being under review at Monographs of the Society for the Study of Child Development. The manuscript is here. You can see that it explicitly cited Wilson’s meta-analysis and my commentary, but our main points are not recognizable.

Captured from Google Scholar


 

Many journals, including those of the American Psychological Association, expressly forbid circulating a manuscript labeled as being under review at that particular journal. Perhaps that contributed to the rejection of the manuscript. But it was resubmitted to Clinical Psychology Review and published – without any conflict of interest statement.

Here is the abstract of the abortive Monographs submission.

This systematic review and meta-analysis examined the effects of the multilevel Triple P-Positive Parenting Program system on a broad range of child, parent and family outcomes. Multiple search strategies identified 116 eligible studies conducted over a 33-year period, with 101 studies comprising 16,099 families analyzed quantitatively. Effect sizes for controlled and uncontrolled studies were combined using a random effects model for seven different outcomes. Moderator analyses were conducted using structural equation modeling. Risk of bias within and across studies was assessed. Significant short-term effects were found for: children’s social, emotional and behavioral outcomes (d = 0.473); parenting practices (d = 0.578); parenting satisfaction and efficacy (d = 0.519), parental adjustment (d = 0.340); parental relationship (d = 0.225) and child observational data (d = 0.501). Significant effects were found for all outcomes at long-term including parent observational data (d = 0.249). Separate analyses on available father data found significant effects on most outcomes. Moderator analyses found that study approach, study power, Triple P level, and severity of initial child problems produced significant effects in multiple moderator models when controlling for other significant moderators. Several putative moderators did not have significant effects after controlling for other significant moderators, including country, child developmental disability, child age, design, methodological quality, attrition, length of follow-up, publication status, and developer involvement. The positive results for each level of the Triple P system provided empirical support for a blending of universal and targeted parenting interventions to promote child, parent and family wellbeing.

It is instructive to compare the above abstract to what was subsequently published in Clinical Psychology Review. Presumably, there should be some differences arising from the manuscript having been peer-reviewed. The original manuscript was submitted as a monograph, and so its length had to be cut to conform to the limits of Clinical Psychology Review. But if you compare the two, some obvious flaws in the original manuscript were retained in the published version. Obviously, peer review was deficient in not noticing them, and otherwise left little mark on the final, published version.

Unfortunately, the subsequent Clinical Psychology Review article is behind a paywall, but you can write to the senior author at matts@psy.uq.edu.au and request a PDF. Here is the abstract:

This systematic review and meta-analysis examined the effects of the multilevel Triple P-Positive Parenting Program system on a broad range of child, parent and family outcomes. Multiple search strategies identified 116 eligible studies conducted over a 33-year period, with 101 studies comprising 16,099 families analyzed quantitatively. Moderator analyses were conducted using structural equation modeling. Risk of bias within and across studies was assessed. Significant short-term effects were found for: children’s social, emotional and behavioral outcomes (d = 0.473); parenting practices (d = 0.578); parenting satisfaction and efficacy (d = 0.519), parental adjustment (d = 0.340); parental relationship (d = 0.225) and child observational data (d = 0.501). Significant effects were found for all outcomes at long-term including parent observational data (d = 0.249). Moderator analyses found that study approach, study power, Triple P level, and severity of initial child problems produced significant effects in multiple moderator models when controlling for other significant moderators. Several putative moderators did not have significant effects after controlling for other significant moderators. The positive results for each level of the Triple P system provide empirical support for a blending of universal and targeted parenting interventions to promote child, parent and family wellbeing.

Impressive, hey? Certainly the effect sizes are, and they contrast sharply with Wilson’s. If you had not been sensitized by my blog post, would you be inclined simply to accept the conveyed conclusion that the meta-analysis produces resounding support for 3P?

You would never know it from the abstract, but effect sizes from nonrandomized studies were combined with effect sizes from RCTs. And RCTs with head-to-head comparisons between 3P and other active treatments had the comparison groups dropped, so that these studies were no longer analyzed as RCTs.
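To see how a random-effects meta-analysis can be tilted by small, noisy, positive studies, here is a minimal sketch of the standard DerSimonian-Laird estimator that such analyses typically use. The effect sizes and variances below are invented for illustration; they are not taken from the Triple P literature:

```python
# Minimal DerSimonian-Laird random-effects pooling, pure Python.
# Inputs: study-level effect sizes and their sampling variances.
from math import sqrt

def dersimonian_laird(effects, variances):
    """Pool study-level effect sizes under a random-effects model."""
    w = [1.0 / v for v in variances]                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = 1.0 / sqrt(sum(w_star))
    return pooled, se, tau2

# Three hypothetical trials: two small trials with big effects and one
# larger trial with a modest effect. The big-but-noisy studies pull the
# pooled estimate well above what the most precise study shows.
effects = [0.80, 0.70, 0.25]
variances = [0.10, 0.12, 0.02]   # small n -> large sampling variance
print(dersimonian_laird(effects, variances))
```

Random-effects weighting shrinks the advantage of large, precise studies, so a handful of small trials with inflated effects can drag the pooled estimate far from the best-powered evidence.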

I do hope that you form your own opinion by obtaining a copy of the full article. You can also compare it to the PROSPERO registration.

Within 10 days of today, May 20, 2014, I will be posting my critique and we can compare notes.

A much briefer version of this blog post was previewed at my secondary blog. It was narrowly focused on the scandalous events concerning Wilson’s manuscript and not the larger context of how we view meta-analyses conducted by authors who have vested financial interest.

Creative Commons License
This work, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 3.0 Unported License.


8 Responses to Are meta-analyses done by promoters of psychological treatments as tainted as those done by Pharma?

  1. John Peters says:

    We’ve been saying this about CBT and ME for over 25 years and we continue to be mocked for pointing up the glaring flaws in the studies.

    Patients with ME experience a physical illness. There are measurable physical symptoms. It is generally accepted that ME starts with a virus, i.e., has a physical cause. Yet the illness is still considered ‘psychological’.

    Why? Because it is asserted that CBT is an effective treatment. One prominent UK figure has built a career on his understanding and treatment of ‘ME’, yet in over 25 years the effectiveness of his work has never been subjected to study.

    The biggest study so far (PACE trial published 2011, Lancet, and follow up) is riddled with all the same errors you list here.

    So why does no one speak up on our behalf? Where have all the people interested in evidence-based medicine and the pursuit of science been for the last 30 years? Where are you now?

    @johnthejack

    • Discussant says:

      Good question. There’s a growing patient safety movement in medicine that recognizes the need for robust evidence-based treatment, and the physical, psychological, and financial harm that comes from foregoing this. Why do mental health standards continue to stay so far behind? The Research Domain Criteria (RDoC) project may help eventually, but who is protecting consumers from evidence-free psychotherapies today?

    • biophile.pr says:

      Not just for CBT and ME. The next Cochrane systematic review of the (highly controversial) ‘graded exercise therapy’ for ‘chronic fatigue syndrome’ is to be conducted by a team consisting almost entirely of staunchly pro-GET advocates or GET sympathizers: some with declared financial conflicts of interest, some who conducted most of the studies that will probably be reviewed, and most with a significant stake in the outcome of the controversy, since a significant proportion of their careers and reputations has been invested in a safely positive outcome. “What could possibly go wrong?”, as the saying goes.

      http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD011040/full

      • Dan Clarke says:

        Thank you to Prof Coyne for this long and interesting post.

        I was also left thinking of the PACE trial and the planned Cochrane review for CFS. I previously left this response at the BMJ which details some of my concerns with the way results from PACE were presented, and the way in which GET is commonly assessed for CFS: http://www.bmj.com/content/347/bmj.f5963/rr/674255

        Also, the protocol for the planned Cochrane review did not seem to mention some previously declared COIs involving the insurance industry, and certainly made no mention of the fact that many of those involved had built their careers upon claims about the efficacy of CBT and GET for CFS.

  2. Pingback: Markierungen 05/22/2014 | Snippets

  3. Anon O'Malley says:

    Large parts of mainstream psychosocial and educational research are living in a fantasy world, claiming the cloak of science without accepting its parameters. In US education research, the What Works Consortium is a good example. It both a) explicitly refuses to evaluate treatment fidelity in deciding what counts as good evidence, and b) gives equal weight to randomized controlled studies and to collections of case studies. In autism interventions, the “gold standard” is ABA, which is not even an intervention and specifies no particulars about content; it is closer to a technique or approach. Any intervention that claims to incorporate ABA is deemed to be evidence-based. This is a fraud, the equivalent of claiming that any drug created using the principles of chemistry should be approved by the FDA. Lovaas and others rightly complain that specific interventions and their specific content should be evaluated separately.

  4. Pingback: Sweetheart relationship between Triple P Parenting and the journal Prevention Science? | Quick Thoughts

  5. Pingback: What We Need to Do to Redeem Psychotherapy Research | Quick Thoughts
