Salvaging psychotherapy research: a manifesto

NOTE: Additional documentation and supplementary links and commentary are available at What We Need to Do to Redeem Psychotherapy Research.

Fueling Change in Psychotherapy Research with Greater Scrutiny and Public Accountability

John Ioannidis’s declarations that most positive findings are false and that most breakthrough discoveries are exaggerated or fail to replicate apply as much to psychotherapy as they do to biomedicine.

We should take a few tips from Ben Goldacre’s Bad Pharma and clean up the psychotherapy literature, paralleling what is being accomplished with pharmaceutical trials. Sure, much remains to be done to ensure the quality and transparency of drug studies and to get all of the data into public view. But the psychotherapy literature lags far behind and is far less reliable than the pharmaceutical literature.

As it now stands, the psychotherapy literature does not provide a dependable guide to policy makers, clinicians, and consumers attempting to assess the relative costs and benefits of choosing a particular therapy over others. If such stakeholders uncritically depend upon the psychotherapy literature to evaluate the evidence-supported status of treatments, they will be confused or misled.

Psychotherapy research is scandalously bad.

Many RCTs are underpowered, yet consistently obtain positive results by redefining the primary outcomes after results are known. The typical RCT is a small, methodologically flawed study conducted by investigators with strong allegiances to one of the treatments being evaluated. Which treatment is preferred by investigators is a better predictor of the outcome of the trial than the specific treatment being evaluated.

Many positive findings are created by spinning a combination of confirmatory bias; flexible rules of design, data analysis, and reporting; and significance chasing.

Many studies considered positive, including those that become highly cited, are basically null trials in which results for the primary outcome are ignored and post hoc analyses of secondary outcomes and subgroup analyses are emphasized instead: cherry picking. Spin starts in abstracts, where the results reported are almost always positive.

The bulk of psychotherapy RCTs involve comparisons between a single active treatment and an inactive or neutral control group such as a wait list, no treatment, or “routine care,” which is typically left undefined and in which exposure to treatment of adequate quality and intensity is not assured. At best these studies can tell us whether a treatment is better than doing nothing at all, or better than patients enrolling in a trial, expecting treatment, and not getting it (nocebo).

Meta-silliness?

Meta-analyses of psychotherapy often do not qualify conclusions by grade of evidence, ignore clinical and statistical heterogeneity, inadequately address investigator allegiance, downplay the domination by small trials with statistically improbable rates of positive findings, and ignore the extent to which positive effect sizes occur mainly in comparisons between active and inactive treatments.

Meta-analyses of psychotherapies are strongly biased toward concluding that treatments work, especially when conducted by those who have undeclared conflicts of interest, including developers and promoters of treatments that stand to gain financially from their branding as “evidence-supported.”

Overall, meta-analyses depend too heavily on underpowered, flawed studies conducted by investigators with strong allegiances to a particular treatment or to finding that psychotherapy is in general efficacious. When controls are introduced for risk of bias or investigator allegiance, effects greatly diminish or even disappear.

Conflicts of interest associated with authors having substantial financial benefits at stake are rarely disclosed in the studies that are reviewed or the meta-analyses themselves.

Designations of Treatments as Evidence-Supported

There are low thresholds for professional groups such as the American Psychological Association Division 12 or governmental organizations such as the US Substance Abuse and Mental Health Services Administration (SAMHSA) declaring treatments to be “evidence-supported.” Seldom are any treatments deemed ineffective or harmful by these groups.

Professional groups have conflicts of interest in wanting their members to be able to claim the treatments they practice are evidence-supported, while not wanting to restrict practitioner choice with labels of treatment as ineffective. Other sources of evaluation like SAMHSA depend heavily and uncritically on what promoters of particular psychotherapies submit in applications for “evidence supported status.”

“Everybody has won, and all must have prizes.” Chapter 3 of Lewis Carroll’s Alice’s Adventures in Wonderland

The possibility that there are no consistent differences among standardized, credible treatments across clinical problems is routinely ridiculed as the “dodo bird verdict” and rejected without systematic consideration of the literature for particular clinical problems. Yes, some studies find differences between two active, credible treatments in the absence of clear investigator allegiance, but these are unusual.

The Scam of Continuing Education Credit

Requirements that therapists obtain continuing education credit are intended to protect consumers from outdated, ineffective treatments, but there is inadequate oversight of the scientific quality of what is offered. Bogus treatments are promoted with pseudoscientific claims. Organizations like the American Psychological Association (APA) prohibit groups of their members from making statements protesting the quality of what is being offered, and the APA continues to allow CE credit for bogus and unproven treatments like thought field therapy and somatic experiencing.

Providing opportunities for continuing education credit is a lucrative business for both accrediting agencies and sponsors. In the competitive world of workshops and trainings, entertainment value trumps evidence. Training in delivery of manualized evidence-supported treatments has little appeal when alternative trainings emphasize patient testimonials and dramatic displays of sudden therapeutic gain in carefully edited videotapes, often with actors rather than actual patients.

Branding treatments as evidence supported is used to advertise workshops and trainings in which the particular crowd-pleasing interventions that are presented are not evidence supported.

Those who attend Acceptance and Commitment Therapy (ACT) workshops may see videotapes in which the presenter cries with patients, recalling his own childhood. They should ask themselves: “Entertaining, moving perhaps, but is this an evidence-supported technique?”

Psychotherapies with some support from evidence are advocated for conditions for which there is no evidence for their efficacy. What would be disallowed as “off label applications” for pharmaceuticals is routinely accepted in psychotherapy workshops.

We Know We Can Do Better

Psychotherapy research has achieved considerable sophistication in design, analyses, and strategies to compensate for missing data and elucidate mechanisms of change.

Psychotherapy research lags behind pharmaceutical research, but it nonetheless has CONSORT recommendations and requirements for trial preregistration, including specification of primary outcomes; completion of CONSORT checklists to ensure basic details of trials are reported; and preregistration of meta-analyses and systematic reviews at sites like PROSPERO, as well as completion of the PRISMA checklist for adequacy of reporting of meta-analyses and systematic reviews.

Declarations of conflicts of interest are rare, and exposure of authors who routinely fail to disclose conflicts of interest is even rarer.

Departures from preregistered protocols in published reports of RCTs are common, and there is little checking of discrepancies between abstracts and the results that were actually obtained or promised in preregistration. Adherence to these requirements is inconsistent and incomplete. There is little likelihood that noncompliant authors will be held accountable, and a high incentive to report positive findings if a study is to be published in a prestigious journal such as the APA’s Journal of Consulting and Clinical Psychology (JCCP). Examining the abstracts of papers published in JCCP gives the impression that trials are almost always positive, even when seriously underpowered.

Psychotherapy research is conducted and evaluated within a club, a mutual admiration society in which members are careful not to disparage others’ results or enforce standards that they themselves might want relaxed when it comes to publishing their own research. There are rivalries between tribes like psychodynamic therapy and cognitive behavior therapy, but criticism is suppressed within each tribe, and strenuous efforts are made to create the appearance that members of the tribe only do what works.

Reform from Without

Journals and their editors have often resisted changes such as adoption of CONSORT, structured abstracts, and preregistration of trials. The Communications and Publications Board of the American Psychological Association made APA one of the last major holdout publishers to endorse CONSORT, and initially provided an escape clause that CONSORT applied only to articles explicitly labeled as randomized trials. The board also blocked a push by the Editor of Health Psychology for structured abstracts that reliably reported the details needed to evaluate what had actually been done in trials and what results were obtained. In both instances, the board was most concerned about the implications for the major outlet for clinical trials among its journals, Journal of Consulting and Clinical Psychology.

Although generally not outlets for psychotherapy trials, the journals of the Association for Psychological Science (APS) show signs of being even worse offenders in ignoring standards and in their commitment to confirmatory bias. For instance, it takes a reader a great deal of probing to discover that a high-profile paper by Barbara Fredrickson in Psychological Science was actually a randomized trial, and further detective work to discover that it was a null trial. There is no sign that a CONSORT checklist was ever filed for the study. And despite Fredrickson using the spun Psychological Science trial report to promote her workshops, no conflict of interest is declared.

The new APS journal Clinical Psychological Science shows signs of even more selective publication and confirmatory bias than the APA journals, producing newsworthy articles to the exclusion of null and modest findings. There will undoubtedly be a struggle between APS and APA clinical journals for top position in the hierarchy, publishing only papers that are attention grabbing, even if flawed, while leaving the publication of negative trials and failed replications to journals considered less prestigious.

If there is to be reform, pressures must come from outside the field of psychotherapy, from those without vested interest in promoting particular treatments or the treatments offered by members of professional organizations. Pressures must come from skeptical external review by consumers and policymakers equipped to understand the games that psychotherapy researchers play in creating the appearance that all treatments work, but the dodo bird is dead.

Specific journals are reluctant to publish criticism of their publishing practices. If we cannot at first get our concerns published in the offending journals, we can rely on blogs and Twitter to call out editors and demand explanations of lapses in peer review and quality control.

We need to raise stakeholders’ levels of skepticism, disseminate critical appraisal skills widely, and provide for their application in evaluating exaggerated claims and methodological flaws in articles published in prestigious, high-impact journals. Bad science in the evaluation of psychotherapy must be recognized as the current norm, not an anomaly.

We could get far by enforcing rules that we already have.

We need to continually expose journals’ failures to enforce rules about preregistration, disclosure of conflicts of interest, and discrepancies between published clinical trials and their preregistration.

There are too many blatant examples of investigators failing to deliver what they promised in preregistration, registering after trials have started to accrue patients, and reviewers apparently never checking whether the primary outcomes and analyses promised in trial registration are actually delivered.

Editors should

  • Require an explicit statement of whether the trial has been registered and where.
  • Insist that reviewers consult trial registration, including modifications, and comment on any deviation.
  • Explicitly label registration dated after patient accrual has started.

CONSORT for abstracts should be disseminated and enforced. A lot of hype and misrepresentation in the media starts with authors’ own spin in the abstract. Editors should insist that the main analyses for the preregistered primary outcome be presented in the abstract and highlighted in any interpretation of results.

Underpowered, exploratory pilot and feasibility studies should no longer be passed off as RCTs when they achieve positive results. An orderly sequence of treatment development should occur before conducting what are essentially phase 3 randomized trials.

Here, as elsewhere in reforming psychotherapy research, there is something to be learned from drug trials. A process of intervention development that establishes the feasibility and basic parameters of clinical trials needs to precede phase 3 randomized trials, but pilot studies cannot be expected to serve as phase 3 trials or to provide effect sizes for the purposes of demonstrating efficacy or comparison to other treatments.

Use of wait list, no treatment, and ill-defined routine care should be discouraged as control groups. For clinical conditions for which there are well-established treatments, head-to-head comparisons should be conducted, as well as comparisons including control groups that might elucidate mechanism. A key example of the latter would be structured, supportive therapy that controls for attention and positive expectation. There is little to be gained by further accumulation of studies in which the efficacy of the preferred treatment is assured by comparison to a lame control group that lacks any conceivable element of effective care.

Evaluations of treatment effects should take into account the prior probabilities suggested by the larger literature concerning comparisons between two active, credible treatments. The well-studied depression treatment literature suggests some parameters: effect sizes associated with a treatment are greatly reduced when comparisons are restricted to credible, active treatments and better quality studies, and when controls are introduced for investigator allegiance. It is unlikely that initial claims about a breakthrough treatment exceeding the efficacy of existing treatments will be sustained in larger studies conducted by investigators independent of developers and promoters.

Disclosure of conflict of interest should be enforced and nondisclosure identified in correction statements and further penalized. Investigator allegiance should be considered in assessing risk of bias.

Developers of treatments and persons with significant financial gain from a treatment being declared “evidence-supported” should be discouraged from conducting meta-analyses of their own treatments.

Trials should be conducted with sample sizes adequate to detect at least moderate effects. When positive findings from underpowered studies are published, readers should scrutinize the literature for similarly underpowered trials that achieve similarly positive effects.
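As a back-of-the-envelope illustration (a sketch added here, not part of the original argument), the standard normal-approximation formula for a two-arm comparison of means shows just how large “adequately powered” actually is, and why trials with 20 or 30 patients per arm cannot credibly detect moderate effects:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients needed per arm in a two-arm trial to detect
    a standardized effect size d (Cohen's d) at two-sided alpha with
    the given power, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A "moderate" effect of d = 0.5 already requires ~63 patients per arm;
# a small-to-moderate d = 0.3 requires ~175 per arm.
print(n_per_arm(0.5))  # 63
print(n_per_arm(0.3))  # 175
```

The formula slightly understates the t-test requirement for small samples, but it makes the point: a positive finding from a 25-per-arm trial of a moderate effect is more likely spin than signal.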

Meta-analyses of psychotherapy should incorporate techniques for detecting p-hacking and excess significance, to evaluate whether the pattern of significant findings exceeds what is statistically plausible.
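One such technique, the Ioannidis–Trikalinos test of excess significance, can be sketched in a few lines (the numbers below are hypothetical, chosen only to illustrate the logic): if each trial’s chance of a positive result equals its statistical power, the number of positive trials in a literature should follow a binomial distribution, and far more positives than power predicts is evidence of bias.

```python
from math import comb

def excess_significance_p(n_trials: int, n_positive: int, mean_power: float) -> float:
    """Binomial tail probability of observing at least n_positive
    significant results among n_trials, if each trial's probability
    of a positive result equals its estimated power."""
    return sum(
        comb(n_trials, k) * mean_power**k * (1 - mean_power)**(n_trials - k)
        for k in range(n_positive, n_trials + 1)
    )

# Hypothetical literature: 18 of 20 small trials report positive results,
# but the trials' average power to detect the claimed effect is only 0.5.
p = excess_significance_p(20, 18, 0.5)
print(f"{p:.6f}")  # far below 0.05: too many positives to be credible
```

A tiny tail probability here does not identify which trials are spun; it says the literature as a whole reports more successes than its own power could deliver.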

Adverse events and harms should routinely be reported, including lost opportunity costs such as failure to obtain more effective treatment.

We need to shift the culture of doing and reporting psychotherapy research. We need to shift away from praising exaggerated claims about treatments and the faux evidence generated to promote opportunities for therapists and their professional organizations. It is far more praiseworthy to provide robust, sustainable, even if more modest, claims and to call out hype and hokum in ways that preserve the credibility of psychotherapy.


The alternative is to continue protecting psychotherapy research from stringent criticism and enforcement of standards for conducting and reporting research. We can simply allow the branding of psychotherapies as “evidence supported” to fall into appropriate disrepute.

Creative Commons License
This work, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 3.0 Unported License.


13 Responses to Salvaging psychotherapy research: a manifesto

  1. Discussant says:

    Thanks for bringing transparency to this issue. The burden of proof is on proponents of these psychotherapies to show that they are making good use of their clients’ time and money, and that the risk-benefit ratio is favorable. In the absence of having met this burden, they ought to stop practicing, or at a bare minimum clearly inform all of their clients of the weak status of the evidence and indeterminate risk as part of a transparent informed consent process.

    • Oh, wow, this is truly awful, coming on the heels of the discrediting of Seligman’s happiness formula and Fredrickson’s positivity ratio. Readers, do check this out.

      • Todd Kashdan says:

        James, that anonymous person is a troll who has been following me around for over a decade. Here are his amazon reviews- http://www.amazon.com/gp/cdp/member-reviews/A3R3CCL9VW1GQ6/ref=cm_cr_pr_auth_rev?ie=UTF8&sort_by=MostRecentReview

        I will repeat my reply to you on facebook.
        They turned a conversation about research findings into a cute formula. Not the first or last time someone altered what I said to serve them. I don’t believe in exact formulas to describe a person as I study within and between person variability.

        The comparisons with Seligman and Fredrickson are wrong. Here’s why: it’s not a journal article, it’s not a book, it’s not even a magazine, it’s a press release.

        I told them it’s good to use your five senses to appreciate your surroundings every day. This was transformed into “Mx16” in a formula, what they refer to as Mindfulness x the average # of hours a person is awake.

        They made a cute formula. It’s not literal. Not worth the energy being expended. Aberrant behavior? This is the rhetoric you get from anonymous trolls. I don’t trust anonymous cynics (when there are no whistleblower concerns). Neither should you.

        • Anonymous says:

          Let’s focus on the content of the ideas discussed, not the messengers. How robust are the experiments in positive psychology? Are the claims replicable? Are they subject to the research biases and flaws Coyne discusses in this article? (Name-calling, e.g., calling someone a ‘troll’, and ad hominem attacks on characters are not helpful.)

      • anonymous says:

        James – Todd argues that because it’s a press release it doesn’t matter. The problem is that tens of thousands of people have read this misinformation. Far more than read journals or books. So it matters even more.

        In relation to being misquoted Todd forgets to mention that he was happy to retweet the magic happiness formula and discount it as “harmless fun” on the positive psychology listserve. The responsible action would have been to demand the article be corrected. Or to insist that his name be removed from the article. I wonder if he pursued any of these actions.

        So yes you are right this is truly awful.

    • Discussant says:

      Lives have been ruined and people have even been killed from non-science-based therapies such as attachment therapy, repressed memory therapy, gay conversion therapy, etc. When a therapy is not grounded in robust replicable scientific evidence, the outcome is a crap-shoot: you’re recklessly experimenting with human lives with potentially profound and devastating consequences.

      “If you won’t take responsibility when things go badly, you give up the right to take credit when things go well.”

  2. anonymous says:

    Positive psychology seems to be at the centre of this aberrant behaviour. Check this one out. No research supports the claim:

    http://www.dailymail.co.uk/health/article-2378821/The-formula-happy-life-Stay-curious-live-moment-look-health.html

  3. anonymous says:

    James – the Amazon review that Todd quotes seems to be a factual review of ACT for weight loss. It seems to reinforce all the points you make about psychology research running amok.

    It seems like this troll has taken up the baton with your call for independent consumer reviews and obviously Todd isn’t comfortable with this.

  4. Pingback: Playing dice with human lives: intervening without evidence | TryTherapyFree

  5. John Tucker says:

    James, I agree in part, but think the problem is intrinsic to meta analysis and not something you will fix by polishing around the edges. As you note, every meta analysis conducted by a drug manufacturer inevitably concludes that the manufacturer’s own drug is superior to the competitor’s. And we have dozens of pharma critics who can be counted upon to reliably churn out meta analyses in which every drug they study is worthless or harmful.

    The problem is in the intrinsically non-transparent nature of the process. The authors pick through 10 to 50 trials for each one that they consider appropriate to include in their analysis. The average reader is unable to critique the process by which trials were included or excluded without repeating the entire analysis from the ground up. This makes meta analysis the favored tool of those who wish to write editorials wrapped in a veneer of objective science.

    Away with the meta analysis! A standard systematic review forces the author to explicitly state their opinions of individual trials, as well as their reasoning for arriving at that opinion. Meta analysis allows the author to skip over these steps, cherry picking without needing to reveal or defend the details of what evidence they put weight on. It can’t be saved.

  6. Pingback: Een kritiek op de multidisciplinaire, diagnostische centra voor CVS — Life With ME/CFS

  7. Pingback: The new, multidisciplinary, diagnostic centres for CFS in Belgium — Life With ME/CFS

  8. Pingback: What We Need to Do to Redeem Psychotherapy Research | Quick Thoughts
