Understanding Lack of Access to Mental Healthcare in the US: 3 Lessons from the Gus Deeds Story

From “60 Minutes,” CBS Television, January 26th, 2014

Creigh Deeds: There’s just a lack of equity in the way we as a society, and certainly as a government and insurance industry, medical industry, with the way we look at mental health issues.

Scott Pelley: Don’t want to fund it. Don’t want to talk about it. Don’t want to see it.

Sen. Creigh Deeds on 60 Minutes

Creigh Deeds: Absolutely. That– that’s exactly right. But the reality is, it’s everywhere.

If inadequate access to mental healthcare in the US is a disease, and I would argue that it can certainly be seen that way in terms of the toll it has taken on American society, then medical school did next to nothing to prepare me to understand its causes, or to deal with them. After 15 years of treating thousands of patients with psychiatric disorders, I have long struggled to understand and concisely articulate the confluence of factors that determines whether my patients do (or do not) have access to mental healthcare.

Recently, whilst watching 60 Minutes, all that changed. From the story of a young man named Gus Deeds, a clear and concise picture emerged of cause and effect, depicting the factors that largely determine whether a patient in need of mental healthcare is likely to receive that care.

In this segment, Scott Pelley interviewed Virginia State Senator Creigh Deeds about his son Gus, who was 24 years old and had been living with serious mental illness. His struggle culminated last November in a tragic ending. The Deeds' predicament with their son was echoed by other family members of mentally ill children and adults who were also interviewed for the segment.

I was deeply saddened and perturbed by the story, and although I had never met any of the people involved and had no inside knowledge of the situation, Senator Deeds's narrative was all too familiar to my ears as a litany of causes for an avoidable tragedy: inadequate mental health resources, resistance to care by the patient, additional obstacles presented by insurance companies, and fragmented treatment options.

As I watched the interview, my head reverberated with all the questions I had asked myself when attempting to provide care for patients with serious mental illness.

These were the types of questions that plagued me during the earlier days of my career. Why am I not able to stop them from falling through the cracks in the system? Why do I have to spend so much time persuading insurance companies to pay for their basic care? What am I doing wrong? What can I do better? Why does the opinion of their loved ones not seem to count?

The causes behind inadequate access to mental health care in the US must be described with a terminology not taught in medical school. They hail from different worlds than the one in which I was trained: the worlds of law, healthcare policy, sociology and the insurance industry.

Gus Deeds and Creigh Deeds, 2009

If this situation is going to change, the Gus Deeds story provides a tragic, teachable moment for all Americans.

Here are 3 key lessons we can all learn from what happened to the Deeds family.

#1. Despite reforms, mental healthcare services are inadequate or nonexistent for large segments of American society

Access to mental healthcare starts with the premise that, if services are available in adequate supply, then the opportunity to obtain care exists and a population may 'have access' to services. Unfortunately, this assumption of adequate supply cannot be made with regard to services provided by mental health professionals. There is a shortage of mental health professionals in the United States, and the situation is particularly dire in rural and underserved parts of the nation. Add to this the fact that funding for community resources such as inpatient psychiatric beds and long-term behavioral health facilities has been shrinking for decades, and it is not hard to imagine why the issue of access has become problematic for many who are in urgent need of psychiatric attention.

#2. Because of stigma and denial surrounding mental illness, patients who most need care don't always seek it

Stigma can be societal and manifest as discrimination towards people with mental health problems. A response from one of the other parents interviewed by Pelley says it all: when Pelley asked her what the difference was between being the mother of a child who has mental illness and the mother of a child who might have heart disease or cancer, she answered with one word. Sympathy. Predisposing factors such as patient race, age, and health beliefs also influence an individual's decision to access mental healthcare. Specifically, in the case of those living with serious mental illness, it is not uncommon for patients to deny that they are ill and, therefore, to conclude that they do not need help or medical treatment, i.e., they choose not to access mental healthcare. This denial adds a layer of complexity to interactions between mental health professionals and the patients they serve because, unlike with many other illnesses, our patients may hide or not fully disclose essential aspects of their symptoms for fear of the consequences of such disclosures.

Another layer of complication is that federal and state laws surrounding the involuntary hospitalization of individuals with mental illness, whilst designed to protect patients' rights, often leave loved ones and mental health professionals who understand the patient and their illness with no voice and minimal sway over decisions that get made in courts. This situation emphasizes why it is so important that mental health professionals have the time to carefully evaluate patients and to provide them with the continuity of care they need, so that they can eventually develop a trusting relationship with their patient. Often, it is through this trust that some aspects of denial can be challenged to ensure a better outcome for the patient. And this brings us to the next lesson.

#3. Current insurance policies create barriers to patient access and encourage providers to offer reductionist mental health care services

The issues surrounding access to mental healthcare are further compounded by discriminatory, and often illegal, barriers to mental health and addiction services imposed by the health insurance industry. One of the most persistent debates in the psychiatric community since the advent of managed care has concerned such insurance company policies and procedures.

Professional organizations have argued (successfully) that such policies appear designed to encourage psychiatrists to provide services that are reductionistic (as they are less time-consuming and hence less expensive) and to discourage approaches or treatments that take more time, preserve continuity of care, and build trust between the patient and the professional caring for them. Americans with mental health disorders have been routinely discriminated against when they are required to pay higher copayments, allowed fewer doctor visits or days in the hospital, or made to pay higher deductibles than those that apply to other medical illnesses.

Whilst the signing of the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008 has been viewed as breakthrough legislation to combat this discrimination, it is important to note that the Act does not require employers to offer mental health or substance use disorder benefits, only that IF they are offered they must be offered on par with medical/surgical benefits. From 2014, under the Affordable Care Act, new individual and small-group plans inside and outside the insurance marketplaces will be required to offer such coverage. Whether barriers will hamper the effective implementation of these requirements remains to be seen.

If a picture (in this case, a 20-minute segment of TV news reporting) is worth a thousand words (in this case, a 5,000-word research paper or sober policy document), then this 60 Minutes segment is that picture. I would encourage anyone with an interest in mental healthcare to watch it.

http://www.cbsnews.com/news/mentally-ill-youth-in-crisis/


Psychoanalysts claim long-term psychoanalytic psychotherapy more effective than shorter therapies.

Part one: Psychoanalytic psychotherapy über alles?

One might conclude from a quick internet search that the benefits of long-term psychoanalytic or psychodynamic psychotherapy (LTPP) versus shorter treatments have already been shown: LTPP has a superiority that justifies the greater commitment of time and money. A meta-analysis in the Journal of the American Medical Association (JAMA) makes that claim [PDF available here.]. It is accompanied by a glowing editorial. The authors of the JAMA paper, Leichsenring and Rabung, have also repeated their claims in redundant articles published in other journals. And their claims have been echoed enthusiastically by proponents of LTPP around the world.

Read on and you’ll

  • Romp with me through the many problems that apparently slipped by unnoticed in the publication of a meta-analysis in a high impact journal.
  • Have your confidence shaken that publication in one of the highest impact medical journals is any reassurance of the trustworthiness of evidence and freedom from bias.
  • Get tips on how to spot bad meta-analyses shaped for marketing and propaganda purposes.
  • Find a brief distraction in Bambi Meets Godzilla, the animated 91-second video that has become a metaphor for confrontations between the practice of psychotherapy and the demand for evidence of cost-effectiveness.
  • Be left grappling with the broader issue of just how much politics and personal connections determine whether manuscripts get published in high impact journals and with accompanying editorials.

The JAMA meta-analysis drew on 11 RCTs and 12 non-RCT naturalistic, observational case series studies of long-term psychodynamic psychotherapy. The authors concluded that LTPP is superior to shorter-term psychotherapies. For their assessment to be unseated, they claim, there would have to be 921 negative studies left unpublished.

With regard to overall effectiveness, a between-group effect size of 1.8 (95% confidence interval [CI], 0.7-3.4) indicated that after treatment with LTPP patients with complex mental disorders on average were better off than 96% of the patients in the comparison groups (P=.002).
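For readers wondering where the "better off than 96%" figure comes from: under a normal-distribution assumption, a between-group effect size d translates into Cohen's U3, the proportion of the comparison group scoring below the average treated patient. A minimal sketch (my illustration of the arithmetic, not the authors' calculation):

```python
# Cohen's U3: the share of the comparison group falling below the
# average treated patient, assuming normally distributed outcomes.
# U3 = Phi(d), the standard normal CDF evaluated at the effect size d.
from statistics import NormalDist

d = 1.8  # between-group effect size claimed in the JAMA meta-analysis
u3 = NormalDist().cdf(d)
print(f"U3 for d = {d}: {u3:.1%}")  # ~96.4%, i.e., the quoted 96%
```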

The accompanying editorial entitled “Psychodynamic psychotherapy and research evidence: Bambi survives Godzilla?” praised the study’s

thorough search methods, including the requirement for reliable outcome measures, and their careful assessments of heterogeneity and lack of evidence for publication bias are strengths of the study.

And

Does this new meta-analysis mean that LTPP has survived the Godzilla of the demand for empirical demonstration of its efficacy? The answer is a qualified yes. The meta-analysis was carefully performed and yielded a within-group effect size of 0.96 (95% confidence interval [CI], 0.87-1.05) for pretreatment-posttreatment overall outcomes, which would be considered a large effect.

The title of the editorial refers to a classic 1982 article by Morris Parloff, former head of the NIMH Psychotherapy and Behavioral Intervention Section in the Clinical Research Branch (1972-1980). Parloff correctly anticipated that if third-party reimbursement were sought for psychotherapy, policy makers would demand evidence that particular therapies worked for particular problems. The encounter between the existing evidence and policy makers was "Bambi meets Godzilla."

Parloff’s title was in turn inspired by an animation classic of the same name. For a quick digression, you can watch the complete 91-second cartoon here.

The JAMA meta-analysis claimed that LTPP was superior to shorter psychotherapies by probably one of the largest margins ever seen in the comparative psychotherapy literature. If you are new to the field or simply unfamiliar with the relevant literature, you may be reluctant to dispute conclusions that appeared in a prestigious, high-impact journal (JIF=30) and, not only that, were endorsed by the editor of JAMA.

Peter Kramer, author of Listening to Prozac, blogged about this meta-analysis and suggested readers should be swayed by its sheer authority:

 [R]esults are suggestive, and they are the best we have. And then, there’s the brand label: JAMA. The study passed a rigorous peer review. There is no comparably prestigious study denying that long-term therapy is the treatment of choice.

You truly disappoint me, Peter. I hope you will agree by the end of my long-read analysis and critique that your judgment was, uh, hasty.

We don't have to defer to the authority of publication in JAMA, and we should not. We can decide for ourselves, making use of objective standards developed and validated by people who have no dog in this fight.

Intense criticism of these claims.

As could be expected, letters to the editor poured into JAMA concerning the meta-analysis.

JAMA limits the number and length of letters it will publish in response to an article. These restrictions prompted some authors to organize and take their lengthier critiques elsewhere. I was honored to join a group comprising Aaron T. Beck, Brett Thombs, Sunil Bhar, Monica Pignotti, Marielle Bassel, and Lisa Jewett.

You can find our extended critique here. I also recommend a splendid, incisive critique by Julia Littell and Aron Shlonsky that applied the validated AMSTAR checklist for evaluating meta-analyses. The JAMA article plus their brief critique makes a great reading assignment for teaching purposes.

If you are interested only in a summary evaluation of the JAMA meta-analysis and are willing to trust my authoritative opinion (No, distrust all authority when it comes to evaluating evidence!), here it is–

  • The meta-analysis was undertaken and reported in flagrant violation of the usual rules for conducting and reporting a meta-analysis.
  • There were arbitrary decisions made about which studies to include and what constituted a control condition.
  • Different rules were involved in selecting studies of LTPP versus comparison psychotherapies, giving an advantage to LTPP.
  • Results from randomized controlled trials were integrated with results from poor-quality naturalistic, observational studies in unconventional ways that strongly favored LTPP.
  • For some analyses, effect sizes from LTPP studies were calculated in an unconventional manner. Any benefits of these LTPP conditions being evaluated as part of a randomized trial were eliminated. This maneuver seriously inflated the estimates of effect sizes in favor of LTPP. Results are not comparable to what would be obtained with more conventional methods.
  • Calculation of some effect sizes involved further inexplicable voodoo statistics, so that a set of studies in which no effect size was greater than 1.4 produced an overall effect size of 6.9. Duh!
  • In the end, a bizarre meta-analysis compared 1053 patients assigned to LTPP to 257 patients assigned to a control condition, only 36 of whom were receiving an evidence-based therapy for their condition. Yet, sweeping generalizations were made for advantages that should be expected for LTPP patients over those receiving any shorter psychotherapies.
  • The effect size comparing LTPP to control conditions would not generalize to any credible psychotherapy, much less one that was evidence-based.

Please read on if you would like a more in-depth analysis. I hope you will. I want to thoroughly disturb your sense that you can trust an article you find in high impact journals simply because it apparently survived peer review there. And aside from encouraging a healthy skepticism, I will leave you with tips for what to look for in other articles.

A flagrant violation of established rules for conducting and reporting meta-analyses.

Meta-analyses are supposed to be conducted in an orderly, pre-specified, transparent, and readily replicable fashion, much like a randomized trial. There are established standards for reporting meta-analyses, such as PRISMA for evaluating health care interventions and MOOSE for observational studies as well as AMSTAR, a brief checklist for evaluating the adequacy of the conduct of a meta-analysis.

Increasingly, journals also require published preregistration of plans for meta-analyses, much like preregistration of the design of a randomized trial. My experience has been that JAMA encourages submission of plans for meta-analyses for preapproval. My colleagues and I have always submitted our plans. There is no indication that this was done in the case of this meta-analysis of LTPP.

Judged by the usual standards, this meta-analysis was seriously deficient.

Leichsenring and Rabung formulated an unconventionally broad research question. Systematic review objectives typically define the patient population, intervention, control treatment, outcomes, and study designs of interest. These authors defined the patient population broadly as a group of adults with mental disorders. The sole criterion that outcomes should have fulfilled is that they were reliable. Control treatments were not defined at all. Thus, analyzed diagnoses, outcomes, and control groups showed a very large clinical heterogeneity. Although the reviewers tried to account for this through subgroup analyses, their method of building clusters of heterogeneous disorders and outcomes still allowed for considerable variation. Thus, it is unclear for what disorder, for what outcome variables, and in comparison with which control groups the evidence was shown.

Trick question: how many RCTs were entered into the meta-analysis?

Probably 90% of the casual readers of this article will give the wrong answer of 11, rather than the correct answer of 8. A number of commentators on this article got this question wrong. Understandably.

The abstract clearly says 11 RCTs, and that statement is repeated in the text and figures. To find out what was actually done, you have to pay attention to the superscripts in Table 1 and read carefully the last paragraph on page 1554. You can eventually figure out that for the largest RCT, Knekt et al, 2008, the control conditions were dropped. The next largest RCT, Vinnars et al, 2005, was a comparison between two LTPPs, and these were dealt with as if they came from separate trials. The same was true for the Høglend et al study, which compared LTPP with and without allowing the psychoanalyst to make transference interpretations. Because there was no difference in treatment efficacy, conventionally calculated effect sizes for this trial would be quite small.

Finally, the control group of Huber and Klug was dropped. So, Leichsenring and Rabung started with 12 comparisons in 11 RCTs, but extracted and integrated data from 4 of these groups in ways that did not preserve the benefits of their having come from an RCT.

Leichsenring and Rabung eliminated the largest comparisons in RCTs and were left with 8 pitifully small, underresourced trials, each of which had less than a 50% chance of detecting a moderate-sized effect, even if it were there. Yet, these trials obtained positive results at a statistically improbable rate. There is clearly wild publication bias going on here.

The table below summarizes the eight surviving trials.

[Table: summary of the eight surviving RCTs]

Leichsenring and Rabung were interested in making sweeping statements about the value of LTPP over shorter treatments, but these 8 RCTs are all they were left to work with. Perhaps the best solution would have been to simply declare a "failed meta-analysis." That's not such a bad thing. It is simply an acknowledgment that the literature does not yet have enough high-quality studies with large enough samples to draw conclusions.

Instead, Leichsenring and Rabung engaged in all sorts of mischief that is quite misleading to the unwary reader. They started by salvaging the LTPP groups from the excluded RCTs and quantifying effect sizes in ways that would confuse readers expecting something more conventional. They then threw in the uncontrolled case series studies, which represent a much lower quality of evidence than an RCT that had been preserved intact.

Stacking the deck in favor of LTPP

The overall methodological quality of the LTPP studies that were included in the meta-analysis was quite poor, particularly the naturalistic studies that just involved collecting uncontrolled case series of patients. Investigators in these studies were free to selectively add or drop cases without reporting what they had done, or to keep the case series going, adding more cases or extending the length of treatment or follow-up as needed to obtain an appearance of an overall positive effect.

One study lost 45% of the patients originally recruited to follow-up. Its results do not even generalize back to the patients who entered the study. Results from the many studies that were biased by substantial loss to follow-up were simply combined with the fewer studies in which all patients who had been recruited were retained for analysis of outcome.

There were no such naturalistic observational studies of psychotherapies other than LTPP included for comparison, so only LTPP had the benefit of effect sizes exaggerated by bad data. As we will see, given the odd way in which effect sizes were calculated, the lack of such studies represented a serious bias in favor of LTPP.

Randomized trials in which LTPP was provided for less than a year were excluded from the meta-analysis. Among other studies, the 10-month trial of psychoanalytic psychotherapy versus cognitive behavior therapy for anorexia would be excluded because the therapy did not go a year. That is unfortunate, because this particular study is of much better methodological quality than any of the others included in the meta-analysis.

There are no evidence-based criteria for setting a duration of psychotherapy of at least a year ahead of time. Why did these authors settle on this requirement for inclusion of studies? I strongly suspect that the authors were being responsive to the need for evidence to justify a full year of insurance coverage for psychoanalytic psychotherapy.

You can probably get psychoanalysts to agree to keep patients in treatment for over a year, but many other therapists and their patients would object to committing ahead of time to such a length of treatment.  There are of course many other randomized trials of psychotherapies out there, but most do not involve providing a year of treatment. One could ask the very reasonable question ‘Do these trials of shorter treatments provide comparable or better outcomes than a year of LTPP?’ but apparently these authors were not really interested in that question.

There were some seemingly arbitrary reclassifications of studies as being LTPP.

One study supportive of LTPP had previously been classified in a meta-analysis by Leichsenring and Rabung as involving short-term psychodynamic psychotherapy (STPP), but was reclassified in the present meta-analysis as involving long-term psychodynamic psychotherapy.

And there was arbitrary exclusion of relevant studies. Conventional standards for conducting and reporting meta-analyses suggest providing a list of excluded studies, but that was missing.

The exclusion of the study by Giesen-Bloo et al that favored schema-focused therapy over LTPP appears arbitrary. The original article defined patients in treatment for three years as completers and presented effect sizes on that basis.

Compared to what?

Overall, control conditions included in the meta analysis did not adequately represent shorter-term therapies and blurred the distinction between these therapies and no treatment at all. What went on in these control conditions cannot be generalized to what would go on in credible psychotherapies.

The designation "shorter-term methods of psychotherapy" included five treatments that did not constitute formal psychotherapy as it is generally understood. These treatments consisted of a waitlist control condition, nutritional counseling, standard psychiatric care, low-contact routine treatment, and treatment as usual in the community.

And

In only two studies, LTPP was compared to an empirically supported treatment, as defined by Chambless and Hollon [29], that is, DBT for borderline personality disorder [10], and family therapy for anorexia nervosa [7]. In two other studies, LTPP was compared to cognitive therapy (CT) [8] and short-term psychodynamic psychotherapy (STPP) [3], which are established as efficacious for some disorders, but not yet validated for the disorder being treated (i.e., cluster C personality disorders, "neurosis"). In a fifth study [5], LTPP was compared to "cognitive orientation therapy", an unvalidated treatment. In these original studies, statistical superiority of LTPP over control conditions was found only when control conditions involved either no psychotherapy, or an unvalidated treatment. Studies that compared LTPP to an empirically supported (e.g., DBT, family therapy) or established treatment (e.g., STPP, CT) found that LTPP was equally or less effective than these treatments despite a substantially longer treatment period.

So, Leichsenring and Rabung kept the 1053 LTPP patients in their analyses, but by a complex process of elimination reduced the number of comparison patients to 257. Of these 257 comparison patients, only 36 were receiving treatment that was evidence-based for their condition: 17 receiving dialectical behavior therapy for borderline personality disorder and 19 receiving a family therapy validated for anorexia.

Comparison/control patients came from a variety of conditions, including no formal treatment. Aggregate estimates of outcomes would not apply to patients assigned to any of these particular conditions. Leichsenring and Rabung cannot generalize beyond the odd lot of patients they assembled, but doing so was their intention.  Their efforts could only serve to put an illusory glow on LTPP.

A mixed bag of patients

The abstract stated that patients had "complex disorders," but the term was never defined and was inconsistently applied. It is difficult to see how it applies to patients in one study having "typically presented outpatient complaints concerning anxiety, depression, low self-esteem, and difficulties with interpersonal relationships" (p. 269). The judgment that patients required a year of treatment seems, again, theoretically rather than empirically driven.

Across the eight studies, LTPP was compared to other interventions for a total of nine types of mental health problems, including “neurosis” [3], “self-defeating personality disorder” [8] and anorexia nervosa [5,7] (Table 3). This is akin to asking whether one type of medication is superior to another for all types of physical illnesses [30].

Unconventional calculation of effect sizes

[This section is going to be a bit technical, but worth the read for those who are interested in acquiring some background on this important topic.]

As I have spelled out in an earlier blog post, psychotherapies do not have effect sizes, but comparisons do. Randomized trials facilitate comparisons between a psychotherapy and comparison/control conditions. When you calculate a conventional between-group effect size, it takes advantage of randomization and controls for background factors, like placebo or nonspecific effects. So, you focus on what change went on in a particular therapy, relative to what occurred in patients who didn’t receive it.

In another past blog post, I discussed my colleagues' and my comparison of psychotherapies to pill placebo conditions. The between-group effect sizes took into account the difference between the change that went on in the psychotherapy and pill placebo conditions, not just the change that went on in psychotherapy. We wanted an estimate of the effects of psychotherapy, above and beyond any benefits of the support and positive expectations that went with being in a clinical trial. Of course, the effect sizes that we observed were lower than what we would have seen in a comparison between psychotherapy and no treatment.

That is not what Leichsenring and Rabung did. They calculated within-group effect sizes for LTPP that ignored what went on in the comparison/control group and the rest of the trial. Any nonspecific effect gets attributed to LTPP, including the substantial improvement over time that would naturally occur without treatment. These effect sizes were then integrated with calculations from naturalistic, case series studies in which there was no control over patients lost or simply left out of the case series. Confused yet? Again, there were no such naturalistic, case series studies included from other comparison/control therapies. So the advantage was entirely with LTPP. If LTPP did not look better under these odd circumstances, could it ever?

In my last blog post, I reviewed a recent Lancet article reporting an RCT comparing cognitive behavioral therapy to focal psychodynamic therapy for anorexia. Neither therapy did particularly well in increasing patients' weight, either in absolute terms or in comparison to enhanced routine care. And the article reported within-group and between-group effect sizes, allowing a striking demonstration of how different they are. The within-group effect size for weight gain for the focal psychodynamic therapy was a seemingly impressive 1.6, p < .001. But the more appropriate between-group effect size for comparing focal psychodynamic therapy to treatment as usual was a wimpy, nonsignificant .13, p < .48 (!).
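To make the distinction concrete, here is a minimal sketch with made-up numbers (not data from any of these trials) showing how a large within-group effect size can coexist with a negligible between-group effect size:

```python
# Within-group d standardizes pre-post change in one arm; between-group
# d asks whether the treatment arm changed more than the control arm.
import math

def within_group_d(pre_mean, post_mean, pre_sd):
    """Pre-post change standardized by the baseline SD."""
    return (post_mean - pre_mean) / pre_sd

def between_group_d(change_tx, change_ctrl, pooled_sd):
    """Difference in change scores standardized by the pooled SD."""
    return (change_tx - change_ctrl) / pooled_sd

# Hypothetical weight data (kg): both groups improve substantially.
tx_pre, tx_post, tx_sd = 46.0, 50.5, 3.0
ctrl_pre, ctrl_post, ctrl_sd = 46.0, 50.1, 3.0
pooled_sd = math.sqrt((tx_sd ** 2 + ctrl_sd ** 2) / 2)

print(round(within_group_d(tx_pre, tx_post, tx_sd), 2))   # 1.5 -- looks huge
print(round(between_group_d(tx_post - tx_pre,
                            ctrl_post - ctrl_pre,
                            pooled_sd), 2))               # 0.13 -- trivial
```

The within-group number claims credit for everything that happened to the treated patients, including natural recovery and nonspecific support; the between-group number isolates what the treatment added over the control condition.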

[Now, we are going to get really technical. Skim or jump down the next section if you do not want to deal with this.]

There are some bizarre calculations by Leichsenring and Rabung that are difficult to explain or replicate, but these calculations gave a clear bias to LTPP. My colleagues' and my best guess was that:

Leichsenring and Rabung apparently used a conversion formula intended for conversions of between-group point biserial correlations to standardized difference effect sizes in an attempt to convert their correlations of group and within-group pre-post effect sizes into deviation-based effect sizes. As a result, even though none of the eight studies reported an overall standardized mean difference > 1.45 [2, see figure 2 on p. 1558], the authors reported a combined effect size of 1.8. Similarly, these methods generated an implausible between-group effect size of 6.9, equivalent to 93% of variance explained, for personality functioning based on 4 studies [3,5,6,26][1], none of which reported an effect size > approximately 2.

In order to figure out what had been done, my colleagues and I generated 10 hypothetical studies in which

the pre-post effect size for the treatment group was 0.01 larger than the effect size for the control group. In the tenth study, the effect sizes were equal. Despite negligible differences in pre-post treatment effects, the method employed by Leichsenring and Rabung [2] generates a correlation between pre-post effect size and group of 0.996 and an unreasonably large deviation-based effect size of 21.2. Thus, rather than realistic estimates of the comparative effects of LTPP, Leichsenring and Rabung based their meta-analysis on grossly incorrect calculations.
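To see why this maneuver misfires so badly, consider the standard formula for converting a point-biserial correlation r to Cohen's d, d = 2r/√(1 − r²). The sketch below is my reconstruction of the mechanism, not the authors' actual code: whenever treatment pre-post effect sizes exceed the control's consistently, even by trivial amounts, the correlation between group and effect size approaches 1, and the converted d explodes.

```python
# The conversion formula at issue: d = 2r / sqrt(1 - r^2). It is meant
# for between-group point-biserial correlations and blows up as r -> 1.
import math

def r_to_d(r):
    """Convert a point-biserial correlation r to Cohen's d."""
    return 2 * r / math.sqrt(1 - r ** 2)

for r in (0.30, 0.70, 0.90, 0.996):
    print(f"r = {r:5.3f}  ->  d = {r_to_d(r):5.1f}")
# r = 0.300  ->  d =   0.6
# r = 0.700  ->  d =   2.0
# r = 0.900  ->  d =   4.1
# r = 0.996  ->  d =  22.3  (close to the 21.2 quoted above; the exact
#                            value depends on sample-size corrections)
```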

It is simply mind-blowing that the editor and reviewers at JAMA let these numbers get by. The numbers are so provocative that they should have invited skepticism.

Almost 1000 new studies needed to reverse the claims of this meta-analysis?

One of the many outrageous claims made in the meta-analysis was that the number of nonsignificant, unpublished, or missing studies needed to move the meta-analysis to nonsignificance was "921 for overall outcome, 535 for target problems, 623 for general symptoms, and 358 for social functioning."

What? 921 studies? That is more than the number of control patients included in the meta-analysis! The claim is a testimony to how badly distorted this meta-analysis has become.

Leichsenring and Rabung were attempting to bolster their claims using Rosenthal's failsafe N, which, among meta-analysis methodologists, is considered inaccurate and misleading. The Cochrane Collaboration recommends against its use. Moritz Heene does an excellent job explaining what is wrong with failsafe N. He notes that among the many problems of relying on failsafe N as a check on bias are:

  • Estimates are not influenced by evidence of bias in the available data.
  • Heterogeneity among the studies that are available and those that might be lurking in desk drawers is ignored.
  • Choice of zero for the average effect of the unpublished studies is arbitrary, almost certainly biased.
  • Allowing for unpublished negative studies substantially reduces failsafe N.

The results of the trial described in my recent blog post comparing psychoanalysis to CBT for anorexia certainly contradict the assumption that there are no negative trials being missed.

I know, many meta-analyses of psychological interventions bring in the failsafe N to bolster claims that their estimates of effect sizes are so robust that we cannot expect any negative studies lurking somewhere to change the overall results. Despite this being a common practice in psychology, failsafe N is uniformly rejected in other fields, notably clinical epidemiology, as providing an inflated, unrealistic estimate of the robustness of findings.
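For readers who have never met failsafe N: the calculation itself is trivially easy to make look impressive, which is part of the problem. A sketch using the textbook Rosenthal formula, N_fs = (Σz)²/z_α² − k, with made-up inputs (z_α = 1.645 for one-tailed p = .05):

```python
# Rosenthal's failsafe N: how many null studies, averaging exactly zero
# effect, would be needed to drag the combined result to nonsignificance.
# It ignores bias and heterogeneity, and the zero-effect assumption is
# arbitrary -- allowing negative unpublished studies shrinks N sharply.
k = 23        # hypothetical number of included studies
mean_z = 3.0  # hypothetical average z-score across those studies
z_alpha = 1.645

n_failsafe = (k * mean_z / z_alpha) ** 2 - k
print(round(n_failsafe))  # ~1736: huge numbers come cheap
```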

Coming up in my next blog:

  • Leichsenring and Rabung respond to critics, dodging basic criticisms and condemning those who reject their claims as bringing in biases of their own.
  • Leichsenring and Rabung renew their claims in another meta-analysis in British Journal of Psychiatry, for which 10 of the 11 studies were already included in the JAMA meta-analysis.
  • The long-term psychodynamic/psychoanalytic community responds approvingly and echoes Leichsenring and Rabung's assessment of skeptics.
  • The important question of whether long-term psychoanalytic psychotherapy is better than shorter-term therapies gets an independent evaluation by another group, which included the world-class meta-analyst and systematic reviewer, John Ioannidis.


Mind the Brain Podcast Episode 04 – Octopus Consciousness

For this edition of my neuroscience podcast series, I chat with David Edelman, who is a professor of neuroscience at Bennington College. David is well known for his work in establishing a theoretical framework for the study of consciousness in animals. He is interested in understanding the neural correlates of consciousness in animals, and whether they even have a form of consciousness that can be studied experimentally. His current work focuses on octopuses, which might seem strange, but in fact octopuses show a wide range of very interesting behaviors, including exploration and learning and memory. They also have a very sophisticated nervous system for an invertebrate, including a relatively large brain as well as a parallel and distributed nervous system. In fact, they actually have more neurons in their tentacles than in their central nervous system, which allows for a huge range of adaptive behaviors, including camouflage and mimicry, all of which suggests a certain type of awareness of their environment.

In this podcast, we’ll discuss the idea of using humans as a benchmark for consciousness, and then trying to demonstrate similar behavioral, cognitive, and neuroanatomical features of animals that might be related to a conscious state. In addition to octopuses, birds also show a number of behavioral and neural features that are very intriguing, including tool use and social learning. I’ll ask David about these features of the bird brain, as well as whether birds and octopuses are capable of “deliberative” actions and have a true awareness of their environments. We’ll end our discussion with a note on what types of experiments David is doing to establish the case that octopuses are indeed conscious, and how much farther we still need to go.

You can listen to and download the podcast here.

If you're interested in learning more, you can read some of David's scholarly articles here and here.


Cognitive behavior and psychodynamic therapy no better than routine care for anorexia.

Putting a positive spin on an ambitious, multisite trial doomed from the start.

I announced in my last blog post that this one would be about bad meta-analyses of weak data used to secure insurance reimbursement for long-term psychotherapy. But that is postponed so that I can give timely coverage to the report in Lancet of results of the Anorexia Nervosa Treatment of OutPatients (ANTOP) randomized clinical trial (RCT). The trial, proclaimed the largest ever of its kind, compared cognitive behavior therapy, focal psychodynamic therapy, and "optimized" routine care for the treatment of anorexia.

This post is an apt sequel to my last one. I had expressed a lot of enthusiasm for an RCT comparing cognitive behavior therapy (CBT) to psychoanalytic therapy for bulimia. I was impressed with its design and execution and its balance of competing investigator allegiances. The article's reporting was transparent, substantially reducing risk of bias and allowing a clear message. You will not see me very often being so positive about a piece of research in this blog, although I did note some limitations.

Hands down, CBT did better than psychoanalytic therapy in reducing binging and purging, despite there being only five months of CBT and two years of psychoanalysis. This difference seems to be a matter of psychoanalysis doing quite poorly, not of CBT doing so well.

However, on my Facebook wall, Ioana Cristea, a known contrarian and evidence-based skeptic like myself, posted a comment about my blog:

Did you see there’s also a recent very similar Lancet study for anorexia? With different results, of course.

She was referring to

Zipfel, Stephan, Beate Wild, Gaby Groß, Hans-Christoph Friederich, Martin Teufel, Dieter Schellberg, Katrin E. Giel et al. Focal psychodynamic therapy, cognitive behaviour therapy, and optimised treatment as usual in outpatients with anorexia nervosa (ANTOP study): randomised controlled trial. The Lancet (2013).

The abstract of the Lancet article is available here, but the full text is behind a pay wall. Fortunately, the registered trial protocol for the study is available open access here. You can at least get the details of what the authors said they were going to do, ahead of doing it.

For an exceedingly quick read, try the press release for the trial here, entitled

Largest therapy trial worldwide: Psychotherapy treats anorexia effectively.

Or see an example of thoroughly uncritical churnalling of this press release in the media here.

What we are told about anorexia


Media portrayals of anorexia often show the extreme self-starvation associated with the severe disorder, but this study recruited women with mild to moderate anorexia.

The introduction of the ANTOP article states

  • Anorexia nervosa is associated with serious medical morbidity and pronounced psychosocial comorbidity.
  • It has the highest mortality rate of all mental disorders, and relapse happens frequently.
  • The course of illness is very often chronic, particularly if left untreated.

A sobering accompanying editorial in Lancet stated

The evidence base for anorexia nervosa treatment is meagre [1-3] considering the extent to which this disorder erodes quality of life and takes far too many lives prematurely [4]. But clinical trials for anorexia nervosa are difficult to conduct, attributable partly to some patients' deep ambivalence about recovery, the challenging task of offering a treatment designed to remove symptoms that patients desperately cling to, the fairly low prevalence of the disorder, and high dropout rates. The combination of high dropout and low treatment acceptability has led some researchers to suggest that we pause large-scale clinical trials for anorexia nervosa until we resolve these fundamental obstacles.

What the authors claim this study found.

The press release states

"Overall, the two new types of therapy demonstrated advantages compared to the optimized therapy as usual," said Prof. Zipfel. "At the end of our study, focal psychodynamic therapy proved to be the most successful method, while the specific cognitive behavior therapy resulted in more rapid weight gain."

And the abstract

At the end of treatment, BMI [body mass index] had increased in all study groups (focal psychodynamic therapy 0·73 kg/m², enhanced cognitive behavior therapy 0·93 kg/m², optimised treatment as usual 0·69 kg/m²); no differences were noted between groups (mean difference between focal psychodynamic therapy and enhanced cognitive behaviour therapy –0·45, 95% CI –0·96 to 0·07; focal psychodynamic therapy vs optimised treatment as usual –0·14, –0·68 to 0·39; enhanced cognitive behaviour therapy vs optimised treatment as usual –0·30, –0·22 to 0·83). At 12-month follow-up, the mean gain in BMI had risen further (1·64 kg/m², 1·30 kg/m², and 1·22 kg/m², respectively), but no differences between groups were recorded (0·10, –0·56 to 0·76; 0·25, –0·45 to 0·95; 0·15, –0·54 to 0·83, respectively). No serious adverse events attributable to weight loss or trial participation were recorded.

How can we understand results presented in terms of changes in BMI?

You can find out more about BMI [body mass index] here and you can calculate your own here. But note that BMI is a controversial measure, does not directly assess body fat, and is not particularly accurate for people who are large- or small-framed or fit or athletic.

These patients had to have been quite underweight to be diagnosed with anorexia, so how much weight did they gain as a result of treatment? The authors should have given us the results in numbers that make sense to most people.

The young adult women in the study averaged 46.7 kg, or 102.7 pounds, at the beginning of the study. I had to do some calculations to translate the changes in BMI reported by these authors, assuming that they were an average height of 5'6", like other German women.

Four months after beginning the 10 month treatment, the women had gained an average of 5 pounds and at 12 months after the end of treatment (so 22 months after beginning treatment), they had gained another 3 pounds.
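For anyone who wants to check the arithmetic: BMI is weight in kilograms divided by height in meters squared, so a change in BMI converts back to weight via Δkg = ΔBMI × height². A sketch, with the 1.68 m (5'6") height being my assumption rather than reported trial data:

```python
# Convert the reported BMI changes into pounds gained.
# BMI = kg / m^2, so delta_kg = delta_BMI * height_m^2.
KG_TO_LB = 2.2046
height_m = 1.68  # assumed average height (5'6"), not reported trial data

def bmi_change_to_lb(delta_bmi, height_m=height_m):
    """Translate a change in BMI into a change in weight (pounds)."""
    return delta_bmi * height_m ** 2 * KG_TO_LB

# End-of-treatment BMI gains reported in the abstract:
for label, delta in [("focal psychodynamic", 0.73),
                     ("enhanced CBT", 0.93),
                     ("optimised TAU", 0.69)]:
    print(f"{label}: {bmi_change_to_lb(delta):.1f} lb")
# Roughly 4.3-5.8 lb, consistent with the ~5 pounds noted above.
```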

On average, the women participating in the trial were still underweight 22 months after the trial’s start and would have still qualified for entering the trial, at least according to the weight criterion.

How the authors explain their results.

Optimised treatment as usual, combining psychotherapy and structured care from a family doctor, should be regarded as solid baseline treatment for adult outpatients with anorexia nervosa. Focal psychodynamic therapy proved advantageous in terms of recovery at 12-month follow-up, and enhanced cognitive behaviour therapy was more effective with respect to speed of weight gain and improvements in eating disorder psychopathology. Long-term outcome data will be helpful to further adapt and improve these novel manual-based treatment approaches.

My assessment after reading this article numerous times and consulting supplementary material:

  • Anorexia was treated with two therapies, each compared to an unusual control condition termed “optimized” treatment as usual. When the study was over and even in follow-up, anorexia won and the treatments lost.
  • In interpreting these results, note that the study involved a sample of young women with mostly only mild to moderate anorexia. Only a little more than half had full syndrome anorexia.
  • In post hoc “exploratory analyses,” the authors emphasized a single measure at a single time point that favored focal psychodynamic therapy, despite null findings with most other standard measures at all time points.
  • The authors expressed their outcomes in within-group effect sizes. This is an unusual approach that exaggerates results, particularly when comparisons are made to the between-group effect sizes reported for other studies.
  • Put another way, results of the trial were very likely spun, starting with the abstract, and continuing in the results and press release.
  • The study demonstrates the difficulty of treating anorexia and of evaluating its treatment. Only modest increases in body weight were obtained despite intensive treatment. Interpretation of what happened is complicated by high rates of dropping out of therapy and loss to follow-up, and by the necessity of inpatient stays and other supplementary treatment.
  • The optimized routine care condition involved ill-described, uncontrolled psychotherapeutic and medical interventions. Little sense can be made of this clinical trial except that availability of manualized treatment proved no better (or no worse), and none of the treatments, including routine care, did particularly well.
  • The study is best understood as testing the effectiveness of treating anorexia in some highly unusual circumstances in Germany, not as an efficacy trial testing the strength of the two treatments. Results are not generalizable to either of the psychotherapies administered by themselves in other contexts.
  • The study probably demonstrates that meaningful RCTs of the treatment of anorexia cannot be conducted in Germany with generalizable results.
  • Maybe this trial is just another demonstration that we do not know enough to undertake a randomized study of the treatment of anorexia that would yield readily interpretable findings.

Sad, sad, sad. So you can stop here if all you wanted was my evaluation. Or you can continue reading to find out how I arrived at it and whether you agree.

Outcomes for the trial: why am I so unimpressed?

On average, the women were still underweight at follow-up, despite having had only mild to moderate anorexia at the start of the study. The sample was quite heterogeneous at baseline. We don't know how much of the modest weight gain, and of the minority of women who were considered "fully recovered," represents small improvements in women starting with higher BMI and milder, subsyndromal anorexia at baseline.

Any discussion of outcomes has to take into account the substantial number of women not completing treatment and lost to follow up.

Missing data can be estimated with fancy imputation techniques. But these are not magic, and they involve assumptions that cannot be tested when patients are lost to follow-up in such small treatment groups. And yet, we need some way to account for all patients initially entering a clinical trial (termed an intent-to-treat analysis) for valid, generalizable results. So, we cannot ignore these problems and simply concentrate on the women completing treatment and remaining available.

And then there is the issue of nonstudy treatment, including inpatient stays. The study has no way of taking them into account, other than reporting them. Inpatient stays could have occurred for different reasons across the three conditions. We cannot determine if the inpatient stays contributed to the results that were observed or maybe interfered with the outpatient treatment. But here too, we cannot simply ignore this factor.

We certainly cannot assume that failures to complete treatment, loss to follow-up, and the necessity of inpatient stays were randomly distributed between groups. We cannot convincingly rule out that some combination of these factors was decisive for the results that were obtained.

The spinning of the trial in favor of focal psychodynamic treatment.

The preregistration of the trial listed BMI at the end of treatment as the primary outcome. That means the investigators staked any claims about the trial on this outcome at this time point. There were no overall differences.

The preregistration also listed numerous secondary outcomes: the Morgan-Russell criteria; general axis I psychopathology (SCID I); eating disorder specific psychopathology (SIAB-Ex; Eating Disorder Inventory-2); severity of depressive comorbidity (PHQ-9); and quality of life according to the SF-36. Not all of these outcomes are reported in the article, and for the ones that are reported, almost all are not significantly different at any time point.

The authors' failure to designate one or two of these variables a priori (ahead of time) sets them up to pick the best after the fact, i.e., hypothesizing after results are known, or HARKing. We do not actually know what was done, but there is a high risk of bias.

We should in general be highly skeptical about post hoc exploratory analyses of variables that were not pre-designated as outcomes for a clinical trial, in either primary or secondary analyses.

In table 3 of their article, the investigators present within-group effect sizes that portray the manualized treatments as doing impressively well.

[Table 3: within-group effect sizes from the ANTOP article]

Yet, as I will discuss in forthcoming blogs, within-group effect sizes are highly misleading compared to the usually reported between-group effect sizes. Within-group effect sizes attribute all changes that occurred in a particular group to the effects of the intervention. That includes claiming credit for nonspecific effects common across conditions, as well as any improvement due to positive expectations or to patients bouncing back after having enrolled in the study at a particularly bad time.

The conventional strategy is to provide between-group effect sizes comparing a treatment to what was obtained in the other groups. This preserves the effects of randomization and makes use of what can be learned from comparison/control conditions. Treatments do not have effect sizes, but comparisons of treatments do.

As an example, we do not pay much attention to the within-group effect size for antidepressants in a particular study, because these numbers do not take into account how the antidepressants did relative to a pill placebo condition. Presumably the pill placebo is chemically inert, but it is provided with the same attention from clinicians, positive expectations, and support that come with the antidepressant. Once these factors shared by both the antidepressant and pill placebo conditions are taken into account, the effect size for antidepressant decreases.

Take a look at weight gain by the end of the 12-month follow-up among patients receiving focal psychodynamic therapy. In Table 3, the within-group effect size for focal psychodynamic therapy is a whopping 1.6, p < .001. But the more appropriate between-group effect size for comparing focal psychodynamic therapy to treatment as usual, shown in Table 2, is a wimpy, nonsignificant .13, p < .48 (!)

An extraordinary “optimized” treatment as usual.

Descriptions in the preregistered study protocol, press releases, and methods section of the article do not do justice to the “optimized” treatment as usual. The method section did not rouse particular concern from me. It described patients assigned to the treatment as usual being provided with a list of psychotherapists specializing in the treatment of eating disorders and their family physicians assuming an active role in monitoring and providing actual treatment. This does not sound particularly unusual for a comparison/control group. After all, it would be unethical to leave women with such a threatening, serious disorder on a waiting list just to allow a comparison.

But then I came across this shocker of a description of the optimized routine care condition in the discussion section:

Under close guidance from their family doctor—eg, regular weight monitoring and essential blood testing—and with close supervision of their respective study centre, patients allocated optimised treatment as usual were able to choose their favourite treatment approach and setting (intensity, inpatient, day patient, or outpatient treatment) and their therapist, in accordance with German national treatment guidelines for anorexia nervosa.11 Moreover, comparisons of applied dosage and intensity of treatment showed that all patients— irrespective of treatment allocation—averaged a similar number of outpatient sessions over the course of the treatment and follow-up periods (about 40 sessions). These data partly reflect an important achievement of the German health-care system: that access to psychotherapy treatment is covered by insurance. However, patients allocated optimised treatment as usual needed additional inpatient treatment more frequently (41%) than either those assigned focal psychodynamic therapy (23%) or enhanced cognitive behaviour therapy (35%).

OMG! I have never seen such intensive treatment-as-usual in a clinical trial. I doubt anything like this treatment would be available elsewhere in the world as standard care.

This description raises a number of disturbing questions about the trial:

Why would any German women with anorexia enroll in the clinical trial? Although a desire to contribute to science is sometimes a factor, the main reasons patients enter clinical trials are that they think they will get better treatment and perhaps that they can get a preferred treatment which they cannot get elsewhere. But, if this is the situation of routine care in Germany, why would eligible women not just remain in routine care without the complications of being in a clinical trial?

At one point, the authors claim that 1% of the population has a diagnosis of anorexia. That represents a lot of women. Yet, they were only able to randomize 242 patients, despite a massive two-year effort to recruit patients involving 10 German departments of psychotherapy and psychosomatic medicine. It appears that a very small minority of the available patients were recruited, raising questions about the representativeness of the sample.

Patients had little incentive to remain in the clinical trial rather than dropping out. Dropping out of the clinical trial would still give them access to free treatment–without the hassle of remaining in the trial.

In a more typical trial, patients assigned to treatment as usual are provided with a list of referrals. Often few bother to complete a referral or remain in treatment, and so we can assume that the treatment-as-usual condition usually represents minimal treatment, providing a suitable comparison with a positive outcome for more active, free treatment. In the United States, patients enrolling in clinical trials often either do not have health insurance or can find only providers who will not accept what health insurance they have for the treatment they want. Patients in the United States enter a clinical trial just to get the possibility of treatment, very different circumstances than in Germany.

Overall, no matter what condition patients were assigned to, all received about the same amount of outpatient psychotherapy, about 40 sessions. How could these authors have expected to find a substantial difference between the two manualized treatments and this intensity of routine care? Differences between groups of the magnitude they assumed in calculating sample sizes would, under these conditions, be truly extraordinary.

A lot of attention and support is provided in 40 sessions of such psychotherapy, making it difficult to detect the specific effects provided by the manualized therapies, above and beyond the attention and support they provide.

In short, the manualized treatments were doomed to null findings in comparison to treatment as usual. The only thing really unexpected about this trial is that all three conditions did so poorly.

What is a comparison/control group supposed to accomplish, anyway?

Investigators undertaking randomized controlled trials of psychotherapies know about the necessity of comparison/control groups, but they generally give less thought to the implications of their choice of a comparison/control group.

Most evidence-based treatments earned their status by proving superior in a clinical trial to a control group such as a waitlist or no treatment at all. Such comparisons provide the backbone of claims of evidence-based treatments, but are not particularly informative. It may simply be that many manualized, structured treatments are no better than other active treatments in which patients have similar intensity of treatment, positive expectations, and attention and support.

Some investigators, however, are less interested in establishing the efficacy of treatments than in demonstrating the effectiveness of particular treatments over what is already being done in the community. Effectiveness studies typically find smaller effects than those obtained in straw-man efficacy comparisons between active treatments and weak control groups.

But even if their intention is to conduct an effectiveness study, investigators need to better describe the nature of treatment as usual if they are to make reasonable generalizations to other clinical and health system contexts.

We know that the optimized treatment as usual was exceptionally intensive, but we have no idea from the published article what it entailed, except that it involved lots of treatment, as much as was provided in the active treatment conditions. It may even be that some of the women assigned to optimized treatment as usual obtained therapists who provided much the same treatment as the manualized conditions.

Again, if all of the conditions had done well in terms of improved patient outcomes, then we could have concluded that introducing manualized treatment does not accomplish much, at least in Germany. But my assessment is that none of the three conditions did particularly well.

The optimized treatment as usual is intensive but not evidence-based. In my last blog post, we saw a situation in which less treatment proved better than more. Maybe the availability of intensive and extensive treatment discourages women from taking responsibility for their health-threatening condition. They do not improve, simply because they can always get more treatment. That is only a hypothesis, but Germany is spending lots of money assuming that it is incorrect.

Why Germany may not be the best place to do a clinical trial for treatment of anorexia.

Germany may not be an appropriate place to do a clinical trial of treatment for anorexia for a number of reasons:

  • The ready availability of free, intensive treatment prevents recruitment of a large, representative sample of women with anorexia to a potentially burdensome clinical trial.
  • There is less incentive for women to remain in the study once they are enrolled because they can always drop out and get the same intensity of treatment elsewhere.
  • The control/comparison group of “optimized” treatment as usual complied with the extensive requirements of the German national treatment guidelines for anorexia nervosa. But these standards are not evidence-based and appear to have produced mediocre outcomes in at least this trial.
  • Treatment as usual that is available to everyone is not necessarily effective, but its intensity precludes detecting incremental improvements obtained by less intensive but focused treatments.

Prasad and Ioannidis have recently called attention to the pervasiveness of non-evidence-based medical treatments and practice guidelines that are neither cost-effective nor shown to ensure good patient outcomes or avoid unnecessary risks. They propose de-implementing such unproven practices, but acknowledge the likelihood that cultural values, vested interests, and politics will interfere with efforts to subject established but unproven practices to empirical test.

Surely that would be the case in any effort to de-implement guidelines for the treatment of anorexia in Germany.

The potentially life-threatening nature of anorexia may discourage any temporary suspension of treatment guidelines until evidence can be obtained. But we need only look to the example of similarly life-threatening cancers, where improved treatments came about only when investigators were able to suspend well-established but unproven treatments and conduct randomized trials.

It would be unethical to assign women with anorexia to a waitlist control or no treatment when free treatment is readily available in the community. So there may be no option but to use treatment as usual as a control condition.

If so, a finding of no differences between groups is almost guaranteed. And given the poor performance of routine care observed in this study, such results would not represent the familiar Dodo Bird Verdict for comparisons between psychotherapies, in which all of the treatments are winners and all get prizes.

Why it may be premature to conduct randomized trials of treatment of anorexia.

This may well be, as the investigators proclaim in their press release, the largest ever RCT of treatment for anorexia. But it is very difficult to make sense of it, other than to conclude that none of the treatments, including treatment as usual, had particularly impressive results.

For me, this study highlights the enormous barriers to conducting a well-controlled RCT for anorexia with patients representative of those who would seek treatment in real-world clinical contexts.

There are unsolved issues of patient dropout and retention for follow-up that seriously threaten the integrity of any results. We just do not know how to recruit a representative sample of patients with anorexia and keep them in therapy and around for follow-up.

Maybe we should ask women with anorexia what they think. Maybe we could enlist some of them to assist in the design of a randomized trial, or at least of a treatment for which investigators could retain sufficient numbers of patients to conduct a randomized trial.

I am not sure how we would otherwise get this understanding without involving women with anorexia in the design of treatment in future clinical trials.

There are unsolved issues of medical surveillance and co-treatment confounding. Anorexia poses physical health problems, including the threats associated with sudden weight loss. But we do not have evidence-based protocols in place for standardizing surveillance and clinical decision-making.

Before we undertake massive randomized trials such as ANTOP, we need to obtain information to set basic parameters from nonrandomized but nonetheless informative small-scale studies. Evidently the investigators in this study could not even realistically estimate effect sizes in order to set sample sizes.

Well, having presumably made it through this long read, what do you think?



When Less is More: Cognitive Behavior Therapy vs Psychoanalysis for Bulimia

The American Journal of Psychiatry published a noteworthy report of a randomized clinical trial (RCT) comparing cognitive behavior therapy to psychoanalytic therapy for bulimia.

Twenty sessions of cognitive behavior therapy over 5 months reduced binge eating and purging better than 2 years of weekly psychoanalytic psychotherapy. This was true for assessments both at five months (42% versus 6%), marking the end of the cognitive behavior therapy (CBT), and at two years (45% versus 16%), marking the end of the psychoanalytic psychotherapy. Overall, psychoanalytic psychotherapy did not do well, despite the much greater intensity of treatment.
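
To translate those percentages into more clinical terms, here is a quick calculation based only on the five-month proportions quoted above; the arithmetic is mine, not the authors':

```python
# Quick arithmetic on the reported 5-month remission proportions.
p_cbt, p_ltpp = 0.42, 0.06

risk_difference = p_cbt - p_ltpp                     # 0.36
nnt = 1 / risk_difference                            # number needed to treat
odds_ratio = (p_cbt / (1 - p_cbt)) / (p_ltpp / (1 - p_ltpp))

print(f"Risk difference: {risk_difference:.2f}")
print(f"NNT: {nnt:.1f}")        # ~2.8: one extra remission per ~3 patients given CBT
print(f"Odds ratio: {odds_ratio:.1f}")   # ~11
```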

If that’s all you needed to know, you can stop reading here. But continue on if you are interested in finding out more about the good conduct and reporting of clinical trials and what is special about this trial. The next blog post will then put this trial into the context of a larger struggle to secure insurance funding of long-term psychoanalytic or psychodynamic psychotherapy (LTPP) on the basis of exceedingly weak and limited evidence.

What was done in this trial.

The trial was conducted in a university clinic with patients recruited through advertisements and referrals.

Seventy patients with bulimia nervosa received either 2 years of weekly psychoanalytic psychotherapy or 20 sessions of CBT over 5 months. The main outcome measure was the Eating Disorder Examination interview, which was administered blind to treatment condition at baseline, after 5 months, and after 2 years. The primary outcome analyses were conducted using logistic regression analysis.
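
For readers unfamiliar with this kind of analysis, here is a minimal sketch of a logistic regression of a binary remission outcome on treatment arm. The data are simulated for illustration only, with per-arm sizes and remission probabilities approximated from the figures quoted above; this is not the authors' code or data:

```python
# Minimal sketch of a logistic regression of remission on treatment arm.
# Simulated data for illustration only; not the trial's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_per_arm = 35                                # ~70 patients split into two arms
arm = np.repeat([0, 1], n_per_arm)            # 0 = psychoanalytic, 1 = CBT
p_remit = np.where(arm == 1, 0.45, 0.16)      # approximate 2-year remission rates
remitted = rng.binomial(1, p_remit)           # simulated binary outcome

X = sm.add_constant(arm)                      # intercept + treatment indicator
fit = sm.Logit(remitted, X).fit(disp=False)
print(fit.summary())
# The exponentiated treatment coefficient estimates the odds ratio for remission.
print("Estimated odds ratio:", np.exp(fit.params[1]))
```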

Experienced therapists were monitored for their adherence to the model of therapy to which they were assigned.

What was found.

Aside from the primary outcome of reported binge eating and purging, the secondary outcome was improvement in general psychopathology, assessed with standardized measures including self-reported anxiety and depressive symptoms. At five months the outcome was better for CBT, but the difference was no longer statistically significant at 2 years.

How the authors explain their results.

CBT is a symptom-focused treatment that is designed to produce a rapid reduction in the frequency of binge eating (10), a change that is highly predictive of the patients’ eventual response (32). In contrast, the psychoanalytic psychotherapy tested in this trial was designed as a nondirective therapy with no specific behavioral procedures directed at the control of binge eating. The more indirect approach to symptoms in psychoanalytic psychotherapy may be insufficient, because binge eating and purging can both be viewed as maladaptive coping strategies that provide an immediate, albeit short-term, relief from negative emotions (5–7). Accordingly, to enable the patient to let go of the symptoms, a directive approach providing concrete alternative problem-solving techniques may be needed.

What was so impressive about this trial.

This trial was designed and reported in ways that substantially reduced risk of bias:

  • Patients randomized to the two therapies were equivalent on key baseline variables.
  • Both treatments were manualized.
  • Authors of this article included developers of both manuals.
  • Both treatments were administered by experienced therapists supervised for their fidelity to the model and manual to which they were assigned.
  • Outcomes were evaluated by raters who were blind to treatment assignment.
  • Analyses were intent to treat, i.e., conducted with all patients who were originally randomized.
  • Timing of outcome assessments was tailored to the end of both treatments: assessments were conducted at five months, when the CBT ended, and again at two years, when the psychoanalytic psychotherapy ended.

Not perfect.

Limited funding for the study left it underpowered to detect the size of differences that are typically found between two established treatments. If there had been only a small difference between the two treatments—usual for comparisons of two credible treatments—the effect would not have been detected as statistically significant.
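
A rough calculation illustrates the point; the figure of roughly 35 patients per arm follows from the 70 randomized, but the calculation is mine, not the authors':

```python
# Smallest standardized difference this trial could detect at 80% power
# (rough sketch with ~35 patients per arm; my calculation, not the authors').
from statsmodels.stats.power import TTestIndPower

mde = TTestIndPower().solve_power(nobs1=35, alpha=0.05, power=0.80)
print(f"Minimum detectable effect with n = 35/arm: d = {mde:.2f}")  # ~0.68
```

An effect of d ~0.68 is large for a comparison of two active treatments; small but clinically meaningful differences would simply be missed.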

There was no neutral comparison/control condition. When a trial is underpowered to detect the expected small differences between two active, credible treatments, it’s good to have the fallback of comparing each of the treatments to the comparison/control condition.

For instance, if the difference between CBT and psychoanalytic psychotherapy was too small to achieve significance, secondary comparisons could still be made to determine whether either or both were superior to the comparison/control condition. Otherwise, a finding of no differences between the two treatments would leave us undecided whether they were equally good or equally bad, compared to the change that would occur in the absence of treatment.

The trial was apparently not preregistered. We cannot independently verify whether the final sample was what was originally intended or whether the analyses were as planned before the data were available.

With neither therapy did most patients stop binge eating and purging, and the considerably larger number of sessions of psychoanalytic psychotherapy apparently did not improve outcomes relative to cognitive behavior therapy.

And it is only one study.

The trial is noteworthy for a number of reasons.

It is a rare head-to-head comparison of psychoanalytic therapy to another credible psychotherapy under conditions that proponents of both treatments would agree are fair.

Proponents of psychoanalytic psychotherapy and of mainstream empirically based therapies have great difficulty agreeing on how to conduct a study in terms of characterizing patients, the length of treatment, and the selection of outcomes. The divide is great. Proponents from one camp can typically object to the other’s recruitment and diagnosis of patients, the manner in which psychotherapy is implemented, and the length of time for which it is delivered.

Proponents of psychoanalytic psychotherapy often question whether the randomized control trial is even appropriate for its evaluation. For instance:

The thesis advanced here is that the privileged status this movement accords such research as against in-depth case studies is unwarranted epistemologically and is potentially damaging both to the development of our understanding of the analytic process itself and to the quality of our clinical work. In a nonobjectivist hermeneutic paradigm best suited to psychoanalysis, the analyst embraces the existential uncertainty that accompanies the realization that there are multiple good ways to be, in the moment and more generally in life, and that the choices he or she makes are always influenced by culture, by sociopolitical mind-set, by personal values, by countertransference, and by other factors in ways that are never fully known.

The comparative psychotherapy literature consists of a few studies in which a reader can readily predict the outcome of comparisons of psychoanalytic psychotherapy versus other treatments simply by looking at investigator allegiance. Yup, which therapy will produce the largest effect is better predicted by which treatment the investigator advocates than by the particular brand of therapy.

It is highly unusual to find that any credible psychotherapy has a substantial advantage over another.

Credible psychotherapies typically acquire their evidence-based status in randomized trials in which they are compared to a wait list, no treatment, or routine care that may amount to simply inadequate care or no care at all. The evidence mustered to show that particular psychotherapies are effective does not typically address whether they are effective relative to other credible treatments.

There are also few instances of one credible treatment besting another in a head-to-head comparison. Where this does occur, it is typically due to an investigator allegiance effect.

It is just the kind of comparison that is needed to address the important question of whether investing time and money in longer-term psychoanalytic psychotherapy leads to substantially better outcomes than shorter psychotherapies.

In this single, modest, even if well-done trial, far fewer sessions of CBT produced greater change than considerably more sessions of LTPP. In calculating cost-effectiveness, the lower cost of CBT must be weighed together with its greater efficacy. In this instance, less and cheaper is better.

As we will see in my next blog post, the question of whether the added expense of LTPP can be justified has been the subject of meta-analyses that were wretchedly done, with almost no quality evidence. Those meta-analyses are nonetheless heavily promoted by those who seek to secure insurance coverage of LTPP.
