Are meta-analyses conducted by professional organizations more trustworthy?



Updated April 18, 2014 (See below)

A well-done meta-analysis is the new gold standard for evaluating psychotherapies. Meta-analyses can overcome the limitations of any single randomized controlled trial (RCT) by systematically integrating results across studies and by identifying and contrasting outliers. Meta-analyses have the potential to resolve the inevitable contradictions in findings among trials. But meta-analyses are constrained by the quality and quantity of available studies. Their validity also depends on adherence to established standards of conduct and reporting, as well as on the willingness of those doing a meta-analysis to concede the limits of the available evidence and refrain from going beyond it.
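For readers who have never looked under the hood, the core machinery is simple: each study's effect size is weighted by the inverse of its variance, so more precise studies count for more. Here is a minimal sketch in Python, using invented numbers purely for illustration, not data from any study discussed in this post:

```python
import numpy as np

# Invented standardized mean differences (Hedges' g) and their variances
# from five hypothetical trials -- illustrative values, not real data.
effects = np.array([0.40, 0.25, 0.60, 0.10, 0.35])
variances = np.array([0.04, 0.02, 0.09, 0.03, 0.05])

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so larger, more precise studies dominate the pooled estimate.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled g = {pooled:.2f}, 95% CI "
      f"[{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
```

Everything that follows in this post is about what goes into those effect sizes and weights, which is exactly where a biased meta-analysis goes wrong.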

Yet meta-analytic malpractice is widespread. Authors with agendas strive to make their point more strongly than the evidence warrants. I have shown how meta-analysis was misused to claim that long-term psychoanalytic psychotherapy (LTPP) is more effective than briefer alternatives. And then there is the claim of a radical American anti-abortionist, made via a meta-analysis in the British Journal of Psychiatry, that abortion accounts for much of the psychiatric disturbance among women of childbearing age.


Meta-analyses often seem to have intimidating statistical complexity and bewildering graphic displays of results, such as funnel plots. What are consumers to do when they have neither the time nor the ability to interpret findings for themselves? Should they take particular reassurance from a meta-analysis having been commissioned by a professional organization? Does being associated with a professional organization certify a meta-analysis as valid?

That is the question I take up in this blog post. The article I will be discussing is available here.

Hart, S. L., Hoyt, M. A., Diefenbach, M., Anderson, D. R., Kilbourn, K. M., Craft, L. L., … & Stanton, A. L. (2012). Meta-analysis of efficacy of interventions for elevated depressive symptoms in adults diagnosed with cancer. Journal of the National Cancer Institute, 104(13), 990-1004.

In the abstract, the authors declare:

 Our findings suggest that psychological and pharmacologic approaches can be targeted productively toward cancer patients with elevated depressive symptoms. Research is needed to maximize effectiveness, accessibility, and integration into clinical care of interventions for depressed cancer patients.

Translation: The evidence for the efficacy of psychological interventions for cancer patients with elevated depressive symptoms is impressive enough to justify dissemination of these treatments and integration into routine cancer care. Let’s get on with the rollout.

The authors did a systematic search, identifying

  • 7700 potentially relevant studies, narrowed down to
  • 350 full-text articles that they reviewed, from which they selected
  • 14 trials (described in 15 published reports) for further analysis, of which
  • 4 studies lacked the data for calculating effect sizes, even after attempts to contact the authors, leaving
  • 10 studies at first included, but
  • 1 then had to be excluded as an extreme outlier in its claimed effect size, leaving
  • 9 studies to be entered into the meta-analysis, one of which yielded 2 effect sizes.

The final effect sizes entered into the meta-analysis were 6 that the authors considered psychotherapeutic, drawn from 5 different studies, and 4 pharmacologic comparisons. I will concentrate on the 6 psychotherapy effect sizes. You can find links to the abstracts or the actual studies here.

Why were the authors left with so few studies? They had opened their article claiming over 500 unique trials of psychosocial interventions for cancer patients since 2005, of which 63% were RCTs. But most evaluations of psychosocial interventions do not recruit patients with sufficient psychological distress or depressive symptoms to register an improvement. Where does that leave claims that psychological interventions are evidence-based and effective? The literature is exceedingly mixed as to whether psychosocial interventions benefit cancer patients, at least those coming to clinical trials. So, the authors were left to make their case with the few studies that recruited patients on the basis of heightened depressive symptoms.

Independently evaluating the evidence

Three of the 6 effect sizes classified as psychotherapeutic (including the 2 contributing most of the patients to the meta-analysis) should have been excluded.

The three studies (1, 2, 3) evaluated collaborative care for depression, which involves a substantial reorganization of systems of care, not just providing psychotherapy. Patients assigned to the intervention groups of each of these studies received more medication and better monitoring. In the largest study, the low-income patients assigned to the control group had to pay out of pocket for care, whereas care was free for patients assigned to the intervention group. Not surprisingly, patients assigned to the intervention group got more and better care, including medication management. There was also a lot more support and encouragement offered to the patients in the intervention conditions. In these three studies, improvement specifically due to psychotherapy cannot be separated from improvement due to everything else.

I have done a number of meta-analyses and systematic reviews of collaborative care for depression. I do not consider such wholesale systemic interventions to be psychotherapy, nor am I aware of other articles in which collaborative care has been treated as such.

Eliminating the collaborative care studies leaves effect sizes from only 2 small studies (4, 5).

One (4) contributed 2 effect sizes, based on comparisons of 29 patients receiving cognitive behavior therapy (CBT) and 23 receiving supportive therapy to the same 26-patient no-treatment control group. There were several problems in the way this study was handled.

  • The authors of the meta-analysis considered the supportive therapy group an intervention, but supportive therapy is almost always considered a comparison/control group in psychotherapy studies.
  • The supportive therapy had better outcomes than CBT. If the supportive therapy were re-classified as a control comparison group, the CBT would have had a negative effect size, not the positive one that was entered into the meta-analysis.
  • Including two effect sizes from the same trial violates the standard assumption of meta-analysis that all of the effect sizes being entered are independent.

Basically, the authors of the meta-analysis counted the no-treatment control group twice in what was already a small number of effect sizes. Doing so strengthened their case that the evidence for psychotherapeutic intervention for depressive symptoms among cancer patients is strong.
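A toy calculation makes the problem concrete. When two comparisons share one control group, entering them as independent effect sizes implicitly counts the control patients twice and overstates the precision of the pooled estimate. In the sketch below, only the group sizes come from the study; the effect sizes are invented, and the split-control correction is the standard remedy recommended by the Cochrane Handbook:

```python
def smd_variance(n_tx, n_ctl, d):
    """Approximate variance of a standardized mean difference."""
    return (n_tx + n_ctl) / (n_tx * n_ctl) + d**2 / (2 * (n_tx + n_ctl))

n_ctl = 26                 # the single no-treatment control group
d_cbt, n_cbt = 0.30, 29    # hypothetical CBT effect size
d_sup, n_sup = 0.50, 23    # hypothetical supportive-therapy effect size

# Naive approach: treat the two comparisons as independent, which
# implicitly counts the 26 control patients twice.
w_naive = (1 / smd_variance(n_cbt, n_ctl, d_cbt)
           + 1 / smd_variance(n_sup, n_ctl, d_sup))

# Cochrane-style remedy: split the shared control group between the
# two comparisons so each uses only half of the control patients.
w_split = (1 / smd_variance(n_cbt, n_ctl / 2, d_cbt)
           + 1 / smd_variance(n_sup, n_ctl / 2, d_sup))

print(f"Total weight, control counted twice: {w_naive:.1f}")
print(f"Total weight, control split honestly: {w_split:.1f}")
```

The naive approach hands the study roughly half again as much weight in the meta-analysis as it has honestly earned.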

The final study (5) involved 45 patients randomly assigned to either problem-solving therapy or a waitlist control, but results for only 37 patients were available for analysis. The study had a high risk of bias because analyses were not intent-to-treat. It was also seriously underpowered, with less than a 50% probability of detecting a positive effect even if one were present, as the sketch below illustrates.
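The power claim is easy to check. Assuming a moderate true effect of d = 0.5 (an assumption for illustration; the study cannot tell us the true effect) and the 37 analyzable patients split 19 versus 18, a standard two-sample power calculation lands around 30%:

```python
from statsmodels.stats.power import TTestIndPower

# Assume a "medium" true effect of d = 0.5 and the 37 analyzable
# patients split 19 vs 18 between therapy and waitlist control.
power = TTestIndPower().power(effect_size=0.5, nobs1=19,
                              ratio=18 / 19, alpha=0.05)
print(f"Power to detect d = 0.5 with 37 patients: {power:.2f}")  # ~0.31
```

On that assumption, the trial had roughly a one-in-three chance of detecting the very effect it set out to find.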

Null findings are likely with such a small study, and had the authors reported them, the study would probably not have been published, because being too small to detect anything but a null effect is a reasonable criticism. So we are more likely to find positive results from such small studies in the published literature, but those results will probably not be replicated in larger studies.

Once we eliminate the three interventions misclassified as psychotherapy and address the waitlist control group counted twice as a comparator, we are left with only two small studies. Many authorities suggest this is insufficient for a meta-analysis, and it certainly cannot serve as the basis for the sweeping conclusions these authors wish to draw.

How the authors interpreted their findings

The authors declare that they find psychotherapeutic interventions to be

reliably superior in reducing depressive symptoms relative to control conditions.

They offer reassurance that they checked for publication bias. They should have noted that tests for publication bias are low-powered and not meaningful with such small numbers of studies.
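For those curious what such a check involves: the most common test, Egger's regression, regresses each study's standardized effect on its precision and asks whether the intercept differs from zero. With a handful of studies, the intercept's confidence interval is enormous, so a "negative" result is nearly guaranteed whatever the truth. A sketch with simulated, invented study data:

```python
import numpy as np
import statsmodels.api as sm

# Six invented effect sizes and standard errors, standing in for a
# small meta-analysis like the one under discussion.
d = np.array([0.45, 0.30, 0.62, 0.15, 0.50, 0.38])
se = np.array([0.20, 0.15, 0.30, 0.12, 0.25, 0.18])

# Egger's test: regress the standardized effect (d/se) on precision
# (1/se); a non-zero intercept signals funnel-plot asymmetry.
fit = sm.OLS(d / se, sm.add_constant(1 / se)).fit()
print(f"Egger intercept = {fit.params[0]:.2f}, p = {fit.pvalues[0]:.2f}")
# With only six studies, the intercept's confidence interval is so wide
# that failing to "detect" bias is almost inevitable.
```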

They then offer a startling conclusion, without citation or further explanation:

The fail-safe N (the number of unpublished studies reporting statistically nonsignificant results needed to reduce the observed effect to statistical nonsignificance) of 106 confirms the relative stability of the observed effect size.

What?! Suppose we accept the authors' claim that they have five psychotherapeutic intervention effect sizes, not the two that I claim. How can they assert that there would have to be 106 null studies hiding in desk drawers to unseat their conclusion? Note that they had already excluded five studies from consideration, four because they could not obtain basic data from them, and one because the effects claimed for problem-solving therapy were too strong to be credible. So, this is a trimmed-down group of studies.

In another of my blog posts I indicated that clinical epidemiologists, as well as the esteemed Cochrane Collaboration, reject the validity of fail-safe N, and I summarized some good arguments against it. But just think about it: on the face of it, do you believe the results are so strong that it would take over 100 negative studies to change our assessment? This is a nonsensical bluff intended to create false confidence in the authors' conclusion.
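It is worth seeing just how cheaply a large fail-safe N is manufactured. Rosenthal's formula depends only on the sum of the studies' z scores, not on their size or quality, so a few modestly significant small studies generate a big number. A sketch with hypothetical z values:

```python
def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: how many zero-effect (z = 0) studies
    would pull the combined one-tailed z below the p = .05 threshold."""
    s = sum(z_scores)
    return s**2 / z_crit**2 - len(z_scores)

# Five hypothetical studies, each only modestly significant.
z = [2.2, 2.8, 2.5, 3.0, 2.4]
print(f"Fail-safe N = {failsafe_n(z):.0f}")  # about 57 from just 5 studies
```

Five modest trials produce a fail-safe N in the dozens, without the statistic ever asking how large or how sound those trials were.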

The authors perform a number of subgroup analyses that they claim show CBT to be superior to problem-solving therapy (PST). But the subgroup analyses are inappropriate. For CBT, they take the effect sizes from two small studies in which the intervention and control groups differed only in whether they received the therapy. For PST, they take the effect sizes from the very different large collaborative care interventions that involved changing whole systems of care. Patients assigned to those intervention groups got a lot more than just psychotherapy.

There is no basis for making such comparisons. The collaborative care studies, as I noted, involved not only providing PST to some of the patients, but also medication management and free treatment, when the low-income patients in the control condition had to pay for care and so received little of it. There are just too many confounds here. Recall from my previous blog posts that effect sizes do not characterize a treatment, but rather a treatment in comparison to a control condition. The effect sizes that the authors cite are invalid for PST, and the conditions of the collaborative care studies versus the small CBT studies are just too different.

https://www.youtube.com/watch?v=R-sLX5UZaxk

Is you is or is you ain’t a meta-analysis organized by the Society of Behavioral Medicine?

 The authors wish to acknowledge the Society of Behavioral Medicine and its Evidence-Based Behavioral Medicine Committee, which organized the authorship group…Society of Behavioral Medicine, however, has not commissioned, supervised, sanctioned, approved, overseen, reviewed, or exercised editorial control over the publication’s content. Accordingly, any views of the authors set forth herein are solely those of the authors and not of the Society of Behavioral Medicine.

Let’s examine this denial in the context of other information. The authors included a recent President of SBM and other members of the leadership of the organization, including one person who would soon be put forward as a presidential candidate.

The Spring/Summer 2012 SBM newsletter states:

 The Collaboration between the EBBM SIG and the EBBM Committee (Chair: Paul B. Jacobsen, PhD) provided peer review throughout the planning process. At least two publications in high impact journals have already resulted from the work.

One of the articles to which the newsletter refers is the meta-analysis of interventions for depressive symptoms. I was a member of the EBBM Committee at the time this was written. This and the earlier meta-analyses were inside jobs done by the SBM leadership. A number of the authors are advocates of screening for distress. Naysayers and skeptics on the EBBM Committee were excluded.

The committee neither openly solicited authors for this meta-analysis in its meetings nor discussed its progress. When I asked David Mohr, one of the eventual authors, why the article was not being discussed in the meetings, he said that the discussions were being held by telephone.

Notably missing from the author list of this meta-analysis is Paul Jacobsen, who headed the EBBM Committee during its writing. He has published meta-analyses and is arguably more of an expert on psychosocial intervention in cancer care than almost any of the authors. Why was he not among them? He is given credit only for offering “suggestions regarding the conceptualization and analysis” and for providing “peer review.”

It would have been exceedingly awkward if Jacobsen had been listed as an author. His CV notes that he was the recipient of $10 million from Pfizer to develop means of assuring quality of care by oncologists. So, he would have had to declare a conflict of interest on a meta-analysis from SBM evaluating psychotherapy and antidepressants for cancer patients. That would not have looked good.

Just before the article was submitted for publication, I received a request from one of the authors asking my permission to be mentioned in the acknowledgments. I was taken aback, because I had never seen the manuscript, and I refused.

I know, as Yogi Berra would say, we’re heading for déjà vu all over again. In earlier blog posts (1, 2) I criticized a flawed meta-analysis done by this group concerning psychosocial interventions for pain. When I described that meta-analysis as “commissioned” by SBM, I immediately got a call from the SBM president asking me for a correction. I responded by posting a link to an email by one of the authors describing that meta-analysis, as well as this one, as “organized” by SBM.

So, we are asked to believe the article does not represent the views of SBM, only the authors, but these authors were hand-picked and include some of the leadership of SBM. Did the authors take off their hats as members of the governance of SBM during the writing of the paper?

The authors are not a group of graduate students who downloaded some free meta-analysis software. There were strong political considerations in their selection, but as a group they have experience with meta-analyses. The Journal of the National Cancer Institute (JNCI) is not some mysterious fly-by-night journal that is not indexed in ISI Web of Science. To the contrary, it is a respected, high-impact journal (JIF = 14.3).

As with the meta-analysis of long-term psychoanalytic psychotherapy, with its accompanying editorial in JAMA, followed by publication of a clone in the British Journal of Psychiatry, we have to ask: did the authors have privileged access to publishing in JNCI with minimal peer review? Could just anyone have gotten such a meta-analysis accepted there? After all, there are basic, serious misclassifications of the studies that provided most of the patients included in the meta-analysis of psychotherapeutic intervention. There are patently inappropriate comparisons of different therapies delivered in very different studies, some without the basis of random assignment. I speak for myself, not PLOS One, but if, as an Academic Editor, I had received such a flawed manuscript, I would have recommended sending it back to the authors immediately, without its going out for review.

Imagine that this meta-analysis were written/organized/commissioned/supported by pharmaceutical companies

What we have is an exceedingly flawed meta-analysis that reaches a seemingly foregone conclusion promoting the dissemination and implementation of services by the members of the organization from which it came. The authors rely on an exceedingly small number of studies, bolstered by the recruitment of some that are highly inappropriate for addressing the question of whether psychotherapy improves depressive symptoms among cancer patients. Yet the authors’ conclusions are a sweeping endorsement of psychotherapy in this context, unqualified by any restrictions. It is a classic use of meta-analysis for marketing purposes, for branding the services being offered, not for scientific evaluation. We will see more of these in future blog posts.

If the pharmaceutical industry had been involved, the risk of bias would have been obvious and skepticism would have been high.

But we are talking about a professional organization, not the pharmaceutical industry. We can see that the meta-analysis was flawed, but we should also consider whether that is because it was written with a conflict of interest.

There are now ample demonstrations that practice guidelines produced by professional organizations often serve their members’ interests at the expense of evidence. Formal standards have been established for evaluating the process by which these organizations produce guidelines. When applied to particular guidelines, the process and outcome often come up short.

So, we need to be just as skeptical about meta-analyses produced by professional organizations as we are about those produced by the pharmaceutical industry. No, Virginia, we cannot relax our guard just because a meta-analysis has been done by a professional organization.

If this example does not convince you, please check out a critique of another one written/organized/commissioned/supported by the same group (1, 2).

UPDATE (April 18, 2014)

An alert reader scrutinized the meta-analysis after reading my blog post and found something quite interesting in the article’s table of effect sizes. What you see there is that every comparison worked out extraordinarily well, too well.

The problem is, of course, that these comparisons are inappropriate, as discussed in the blog post. The comparisons hinge upon studies being misclassified as psychotherapy when they were actually complex collaborative care interventions, as well as upon comparisons of problem-solving therapy with cognitive behavior therapy when the patients receiving problem-solving therapy were not randomized to it. Rather, they were randomized to a condition in which the intervention patients got a combination of free medication, careful medication management, and the option of problem-solving therapy, whereas the control group patients had to pay for treatment and received substantially less care of any kind. This is clearly meta-analytic malpractice of the highest order.

See my discussion of an exchange of letters with the authors here. Go and comment yourself about this study at PubMed Commons here.


Are meta-analyses conducted by professional organizations more trustworthy? by PLOS Blogs Network, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 4.0 International License.
