Psychoanalysts claim long-term psychoanalytic psychotherapy is more effective than shorter therapies.

Part one: Psychoanalytic psychotherapy uber alles?

One might conclude from a quick internet search that the benefit of long-term psychoanalytic or psychodynamic psychotherapy (LTPP) versus shorter treatments has already been shown: LTPP has a superiority that justifies the greater commitment of time and money. A meta-analysis in the Journal of the American Medical Association (JAMA) makes that claim [PDF available here]. It is accompanied by a glowing editorial. The authors of the JAMA paper, Leichsenring and Rabung, have also repeated their claims in redundant articles published in other journals. And their claims have been echoed enthusiastically by proponents of LTPP around the world.

Read on and you’ll

  • Romp with me through the many problems that apparently slipped by unnoticed in the publication of a meta-analysis in a high impact journal.
  • Have your confidence shaken that publication in one of the highest impact medical journals is any reassurance of the trustworthiness of evidence and freedom from bias.
  • Get tips on how to spot bad meta-analyses shaped for marketing and propaganda purposes.
  • Find a brief distraction in Bambi meets Godzilla, the animated 91-second video that has become a metaphor for confrontations between the practice of psychotherapy and the demand for evidence of cost-effectiveness.
  • Be left grappling with the broader issue of just how much politics and personal connections determine whether manuscripts get published in high impact journals and with accompanying editorials.

The JAMA meta-analysis drew on 11 RCTs and 12 non-RCT naturalistic, observational case series studies of long-term psychodynamic psychotherapy. The authors concluded that LTPP is superior to shorter-term psychotherapies. For their assessment to be unseated, they claim, there would have to be 921 negative studies left unpublished.

With regard to overall effectiveness, a between-group effect size of 1.8 (95% confidence interval [CI], 0.7-3.4) indicated that after treatment with LTPP patients with complex mental disorders on average were better off than 96% of the patients in the comparison groups (P=.002).
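Where does that "96%" come from? It is simply the standard normal translation of a standardized mean difference, known as Cohen's U3: the proportion of the comparison group expected to score below the average LTPP patient, assuming normally distributed outcomes with equal variances. A minimal sketch of the arithmetic (my illustration, using only the Python standard library):

```python
from statistics import NormalDist

def u3(d: float) -> float:
    """Cohen's U3: proportion of the comparison group expected to fall
    below the average treated patient, assuming normal outcomes with
    equal variance in both groups."""
    return NormalDist().cdf(d)

print(f"{u3(1.8):.0%}")    # 96%, the headline claim
# Running the CI bounds through the same translation shows how little
# the point estimate is pinned down:
print(f"{u3(0.7):.0%}")    # 76% at the lower bound
print(f"{u3(3.4):.2%}")    # 99.97% at the upper bound
```

Note how wide that confidence interval is: it spans everything from a modest advantage to a nearly unheard-of one. Keep that in mind for what follows.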

The accompanying editorial, entitled “Psychodynamic psychotherapy and research evidence: Bambi survives Godzilla?”, asserted that the authors’

thorough search methods, including the requirement for reliable outcome measures, and their careful assessments of heterogeneity and lack of evidence for publication bias are strengths of the study.

And

Does this new meta-analysis mean that LTPP has survived the Godzilla of the demand for empirical demonstration of its efficacy? The answer is a qualified yes. The meta-analysis was carefully performed and yielded a within-group effect size of 0.96 (95% confidence interval [CI], 0.87-1.05) for pretreatment-posttreatment overall outcomes, which would be considered a large effect.

The title of the editorial refers to a classic 1982 article by Morris Parloff, former head of the NIMH Psychotherapy and Behavioral Intervention Section in the Clinical Research Branch (1972-1980). Parloff correctly anticipated that if third party reimbursement were sought for psychotherapy, policy makers would demand evidence that particular therapies worked for particular problems. The encounter between the existing evidence and policy makers was “Bambi meets Godzilla.”

Parloff’s title was in turn inspired by an animation classic of the same name. For a quick digression, you can watch the complete 91-second cartoon here.

The JAMA meta-analysis claimed that LTPP was superior to shorter psychotherapies by probably one of the largest margins ever seen in the comparative psychotherapy literature. If you are new to the field or simply unfamiliar with the relevant literature, you may be reluctant to dispute conclusions that appeared in a prestigious, high impact journal (JIF=30) and, not only that, were endorsed by the editor of JAMA.

Peter Kramer, author of Listening to Prozac, blogged about this meta-analysis and suggested readers should be swayed by its sheer authority:

 [R]esults are suggestive, and they are the best we have. And then, there’s the brand label: JAMA. The study passed a rigorous peer review. There is no comparably prestigious study denying that long-term therapy is the treatment of choice.

You truly disappoint me, Peter. I hope you agree by the end of my longread analysis and critique that your judgment was, uh, hasty.

We don’t have to defer to the authority of publication in JAMA, and we should not. We can decide for ourselves, making use of objective standards developed and validated by people who have no dog in this fight.

Intense criticism of these claims.

As could be expected, letters to the editor poured into JAMA concerning the meta-analysis.

JAMA limits the number and length of letters that will be published in response to an article. These restrictions prompted some authors to organize and take their more lengthy critiques elsewhere. I was honored to join a group comprising Aaron T. Beck, Brett Thombs, Sunil Bhar, Monica Pignotti, Marielle Bassel, and Lisa Jewett.

You can find our extended critique here. I also recommend a splendid, incisive critique by Julia Littell and Aron Shlonsky that applied the validated AMSTAR checklist for evaluating meta-analyses. The JAMA article plus their brief critique makes a great reading assignment for teaching purposes.

If you are interested only in a summary evaluation of the JAMA meta-analysis and are willing to trust my authoritative opinion (No, distrust all authority when it comes to evaluating evidence!), here it is:

  • The meta-analysis was undertaken and reported in flagrant violation of the usual rules for conducting and reporting a meta-analysis.
  • There were arbitrary decisions made about which studies to include and what constituted a control condition.
  • Different rules were involved in selecting studies of LTPP versus comparison psychotherapies, giving an advantage to LTPP.
  • Results from randomized controlled trials were integrated with results from poor-quality naturalistic, observational studies in unconventional ways that strongly favored LTPP.
  • For some analyses, effect sizes from LTPP studies were calculated in an unconventional manner. Any benefit of these LTPP conditions having been evaluated as part of a randomized trial was eliminated. This maneuver seriously inflated the estimates of effect sizes in favor of LTPP. Results are not comparable to what would be obtained with more conventional methods.
  • Calculation of some effect sizes involved further inexplicable voodoo statistics, so that a set of studies in which no effect size was greater than 1.4 produced an overall effect size of 6.9. Duh!
  • In the end, a bizarre meta-analysis compared 1053 patients assigned to LTPP to 257 patients assigned to a control condition, only 36 of whom were receiving an evidence-based therapy for their condition. Yet, sweeping generalizations were made about the advantages to be expected for LTPP patients over those receiving any shorter psychotherapy.
  • The effect size comparing LTPP to control conditions would not generalize to any credible psychotherapy, much less one that was evidence-based.

Please read on if you would like a more in-depth analysis. I hope you will. I want to thoroughly disturb your sense that you can trust an article you find in high impact journals simply because it apparently survived peer review there. And aside from encouraging a healthy skepticism, I will leave you with tips for what to look for in other articles.

A flagrant violation of established rules for conducting and reporting meta-analyses.

Meta-analyses are supposed to be conducted in an orderly, pre-specified, transparent, and readily replicable fashion, much like a randomized trial. There are established standards for reporting meta-analyses, such as PRISMA for evaluating health care interventions and MOOSE for observational studies as well as AMSTAR, a brief checklist for evaluating the adequacy of the conduct of a meta-analysis.

Increasingly, journals also require published preregistration of plans for meta-analyses, much like preregistration of the design of a randomized trial. My experience has been that JAMA encourages submission of plans for meta-analyses for preapproval. My colleagues and I have always submitted our plans. There is no indication that this was done in the case of this meta-analysis of LTPP.

Judged by the usual standards, this meta-analysis was seriously deficient.

Leichsenring and Rabung formulated an unconventionally broad research question. Systematic review objectives typically define the patient population, intervention, control treatment, outcomes, and study designs of interest. These authors defined the patient population broadly as adults with mental disorders. The sole criterion for outcomes was that they be measured reliably. Control treatments were not defined at all. As a result, the analyzed diagnoses, outcomes, and control groups showed very large clinical heterogeneity. Although the reviewers tried to account for this through subgroup analyses, their method of building clusters of heterogeneous disorders and outcomes still allowed for considerable variation. In the end, it is unclear for what disorder, for what outcome variables, and in comparison with which control groups the evidence was shown.

Trick question: how many RCTs were entered into the meta-analysis?

Probably 90% of the casual readers of this article will give the wrong answer of 11, rather than the correct answer of 8. A number of commentators on this article got this question wrong. Understandably.

The abstract clearly says 11 RCTs, and that statement is repeated in the text and figures. To find out what was actually done, you have to pay attention to the superscripts in Table 1 and read carefully the last paragraph on page 1554. You can eventually figure out that for the largest RCT, Knekt et al, 2008, the control conditions were dropped. The next largest RCT, Vinnars et al, 2005, was a comparison between two LTPPs, and these were dealt with as if they came from separate trials. The same was true for the Høglend et al study, which compared LTPP with and without allowing the psychoanalyst to make transference interpretations. Because there was no difference in treatment efficacy, conventionally calculated effect sizes for this trial would be quite small.

Finally, the control group of Huber and Klug was dropped. So, Leichsenring and Rabung started with 12 comparisons in 11 RCTs, but extracted and integrated data from 4 of these groups in ways that did not preserve the benefits of their having come from an RCT.

Leichsenring and Rabung eliminated the largest comparisons in RCTs and were left with 8 pitifully small, under-resourced trials, each of which had less than a 50% chance of detecting a moderate-sized effect, even if it were there. Yet these trials obtained positive results at a statistically improbable rate. There is clearly wild publication bias going on here.
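To get a feel for how improbable that is, consider a back-of-the-envelope calculation (my illustration; I am assuming 50% power per trial as an upper bound, and the figures are illustrative rather than the trials' actual counts). If each trial has at most a 50% chance of detecting a true effect, the probability that most or all of eight independent trials would nonetheless come up significant is tiny:

```python
from math import comb

def prob_at_least(k: int, n: int = 8, power: float = 0.5) -> float:
    """P(at least k of n independent trials reach significance),
    assuming each trial has the stated power and a true effect exists."""
    return sum(comb(n, j) * power**j * (1 - power)**(n - j)
               for j in range(k, n + 1))

print(f"{prob_at_least(8):.4f}")  # all 8 significant: 0.0039
print(f"{prob_at_least(7):.4f}")  # 7 or more:         0.0352
```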

The table below summarizes the eight surviving trials.

[Table: summary of the eight RCTs that survived as intact comparisons]

Leichsenring and Rabung were interested in making sweeping statements about the value of LTPP over shorter treatments, but these 8 RCTs are all they had left to work with. Perhaps the best solution would have been to simply declare a “failed meta-analysis.” That is not such a bad thing. It is simply an acknowledgment that the literature does not yet have enough high-quality studies with large enough samples to draw conclusions.

Instead, Leichsenring and Rabung engaged in all sorts of mischief that is quite misleading to the unwary reader. They started by salvaging the LTPP groups from the excluded RCTs and quantifying effect sizes in ways that would confuse readers expecting something more conventional. They then threw in the uncontrolled case series studies, which represent a much lower quality of evidence than an RCT that had been preserved intact.

Stacking the deck in favor of LTPP

The overall methodological quality of the LTPP studies that were included in the meta-analysis was quite poor, particularly the naturalistic studies that just involved collecting uncontrolled case series of patients. Investigators in these studies were free to selectively add or drop cases without reporting what they had done, or to keep the case series going, adding more cases or extending the length of treatment or follow-up as needed to obtain an appearance of an overall positive effect.

One study lost 45% of the patients originally recruited to follow-up. Its results do not even generalize back to the patients who entered the study. Results from the many studies biased by substantial loss to follow-up were simply combined with the few studies in which all patients who had been recruited were retained for analysis of outcome.

There were no such naturalistic observational studies of psychotherapies other than LTPP included for comparison, so only LTPP had the benefit of these exaggerated, biased data. As we will see, given the odd way in which effect sizes were calculated, the lack of such studies represented a serious bias in favor of LTPP.

Randomized trials in which LTPP was provided for less than a year were excluded from the meta-analysis. Among other studies, the 10-month trial of psychodynamic psychotherapy versus cognitive behavior therapy for anorexia was excluded because the therapy did not last a year. That is unfortunate, because this particular study is of much better methodological quality than any of the others included in the meta-analysis.

There are no evidence-based criteria for setting the duration of psychotherapy at a year or more ahead of time. Why did these authors settle on this requirement for inclusion of studies? I strongly suspect that the authors were being responsive to the need for evidence to justify a full year of insurance coverage for psychoanalytic psychotherapy.

You can probably get psychoanalysts to agree to keep patients in treatment for over a year, but many other therapists and their patients would object to committing ahead of time to such a length of treatment. There are of course many other randomized trials of psychotherapies out there, but most do not involve providing a year of treatment. One could ask the very reasonable question ‘Do these trials of shorter treatments provide comparable or better outcomes than a year of LTPP?’ but apparently these authors were not really interested in that question.

There were some seemingly arbitrary reclassifications of studies as being LTPP.

One study supportive of LTPP had previously been classified in a meta-analysis by Leichsenring and Rabung as involving short-term psychodynamic psychotherapy (STPP), but was reclassified in the present meta-analysis as involving long-term psychodynamic psychotherapy.

And there was arbitrary exclusion of relevant studies. Conventional standards for conducting and reporting meta-analyses call for providing a list of excluded studies, but that was missing.

The exclusion of the study by Giesen-Bloo et al that favored schema-focused therapy over LTPP appears arbitrary. The original article defined patients in treatment for three years as completers and presented effect sizes on that basis.

Compared to what?

Overall, control conditions included in the meta analysis did not adequately represent shorter-term therapies and blurred the distinction between these therapies and no treatment at all. What went on in these control conditions cannot be generalized to what would go on in credible psychotherapies.

the designation of ‘shorter-term methods of psychotherapy’ included five treatments that did not constitute formal psychotherapy as it is generally understood. These treatments consisted of a waitlist control condition, nutritional counseling, standard psychiatric care, low-contact routine treatment, and treatment as usual in the community.

And

In only two studies, LTPP was compared to an empirically supported treatment, as defined by Chambless and Hollon [29]; that is, DBT for borderline personality disorder [10] and family therapy for anorexia nervosa [7]. In two other studies, LTPP was compared to cognitive therapy (CT) [8] and short-term psychodynamic psychotherapy (STPP) [3], which are established as efficacious for some disorders, but not yet validated for the disorder being treated (i.e., cluster C personality disorders, “neurosis”). In a fifth study [5], LTPP was compared to “cognitive orientation therapy”, an unvalidated treatment. In these original studies, statistical superiority of LTPP over control conditions was found only when control conditions involved either no psychotherapy or an unvalidated treatment. Studies that compared LTPP to an empirically supported (e.g., DBT, family therapy) or established treatment (e.g., STPP, CT) found that LTPP was equally or less effective than these treatments despite a substantially longer treatment period.

So, Leichsenring and Rabung kept the 1053 LTPP patients in their analyses, but by a complex process of elimination reduced the number of comparison patients to 257. Of these 257 comparison patients, only 36 were receiving treatment that was evidence based for their condition: 17 receiving dialectical behavior therapy for borderline personality disorder and 19 receiving a family therapy validated for anorexia.

Comparison/control patients came from a variety of conditions, including no formal treatment. Aggregate estimates of outcomes would not apply to patients assigned to any of these particular conditions. Leichsenring and Rabung cannot generalize beyond the odd lot of patients they assembled, but doing so was their intention.  Their efforts could only serve to put an illusory glow on LTPP.

A mixed bag of patients

The abstract stated that patients had “complex disorders,” but the term was never defined and was inconsistently applied. It is difficult to see how it applies to patients in one study who “typically presented outpatient complaints concerning anxiety, depression, low self-esteem, and difficulties with interpersonal relationships” (p. 269). The judgment that patients required a year of treatment seems, again, theoretically rather than empirically driven.

Across the eight studies, LTPP was compared to other interventions for a total of nine types of mental health problems, including “neurosis” [3], “self-defeating personality disorder” [8] and anorexia nervosa [5,7] (Table 3). This is akin to asking whether one type of medication is superior to another for all types of physical illnesses [30].

Unconventional calculation of effect sizes

[This section is going to be a bit technical, but worth the read for those who are interested in acquiring some background on this important topic.]

As I have spelled out in an earlier blog post, psychotherapies do not have effect sizes, but comparisons do. Randomized trials facilitate comparisons between a psychotherapy and comparison/control conditions. A conventional between-group effect size takes advantage of randomization and controls for background factors, like placebo or nonspecific effects. So you focus on the change that went on in a particular therapy relative to what occurred in patients who did not receive it.

In another past blog post, I discussed the comparison my colleagues and I made between psychotherapies and pill placebo conditions. The between-group effect sizes took into account the difference between the change that went on in the psychotherapy and pill placebo conditions, not just the change that went on in psychotherapy. We wanted an estimate of the effects of psychotherapy above and beyond any benefits of the support and positive expectations that went with being in a clinical trial. Of course, the effect sizes that we observed were lower than what we would have seen in a comparison between psychotherapy and no treatment.

That is not what Leichsenring and Rabung did. They calculated within-group effect sizes for LTPP that ignored what went on in the comparison/control group and the rest of the trial. Any nonspecific effects get attributed to LTPP, including the substantial improvement over time that would naturally occur without treatment. These effect sizes were then integrated with calculations from naturalistic, case series studies in which there was no control over patients lost or simply left out of the case series. Confused yet? Again, there were no such naturalistic, case series studies included from other comparison/control therapies. So the advantage was entirely with LTPP. If LTPP did not look better under these odd circumstances, could it ever?

In my last blog post, I reviewed a recent Lancet article reporting an RCT comparing cognitive behavioral therapy to focal psychodynamic therapy for anorexia. Neither therapy did particularly well in increasing patients’ weight, either in absolute terms or in comparison to enhanced routine care. And the article reported both within-group and between-group effect sizes, allowing a striking demonstration of how different they are. The within-group effect size for weight gain for the focal psychodynamic therapy was a seemingly impressive 1.6, p < .001. But the more appropriate between-group effect size comparing focal psychodynamic therapy to treatment as usual was a wimpy, nonsignificant .13, p = .48 (!).
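To make the contrast concrete, here is a minimal sketch of both calculations. The summary statistics are hypothetical, chosen only to reproduce the pattern the Lancet trial reported (within-group d of 1.6, between-group d of .13); they are not the trial’s actual data:

```python
import math

# Hypothetical (mean change, SD of change, n) for each arm:
tx_change, tx_sd, tx_n = 1.60, 1.0, 80     # focal psychodynamic therapy
ctl_change, ctl_sd, ctl_n = 1.47, 1.0, 80  # enhanced treatment as usual

# Within-group d: change in the treated arm alone, ignoring the control
within_d = tx_change / tx_sd

# Between-group d: difference in change across arms over the pooled SD
pooled_sd = math.sqrt(((tx_n - 1) * tx_sd**2 + (ctl_n - 1) * ctl_sd**2)
                      / (tx_n + ctl_n - 2))
between_d = (tx_change - ctl_change) / pooled_sd

print(f"within-group d  = {within_d:.2f}")   # 1.60, looks impressive
print(f"between-group d = {between_d:.2f}")  # 0.13, the real comparison
```

The within-group number credits the therapy with everything: natural course of illness, regression to the mean, and nonspecific support. The between-group number is what randomization buys you.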

[Now, we are going to get really technical. Skim or jump down the next section if you do not want to deal with this.]

There are some bizarre calculations by Leichsenring and Rabung that are difficult to explain or replicate, but these calculations gave a clear bias to LTPP. My colleagues’ and my best guess was that:

Leichsenring and Rabung apparently used a conversion formula intended for conversions of between-group point biserial correlations to standardized difference effect sizes in an attempt to convert their correlations of group and within-group pre-post effect sizes into deviation-based effect sizes. As a result, even though none of the eight studies reported an overall standardized mean difference > 1.45 [2, see figure 2 on p. 1558], the authors reported a combined effect size of 1.8. Similarly, these methods generated an implausible between-group effect size of 6.9, equivalent to 93% of variance explained, for personality functioning based on 4 studies [3,5,6,26][1], none of which reported an effect size > approximately 2.

In order to figure out what had been done, my colleagues and I generated 10 hypothetical studies in which

the pre-post effect size for the treatment group was 0.01 larger than the effect size for the control group. In the tenth study, the effect sizes were equal. Despite negligible differences in pre-post treatment effects, the method employed by Leichsenring and Rabung [2] generates a correlation between pre-post effect size and group of 0.996 and an unreasonably large deviation-based effect size of 21.2. Thus, rather than realistic estimates of the comparative effects of LTPP, Leichsenring and Rabung based their meta-analysis on grossly incorrect calculations.
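Our best reconstruction is that near-perfect correlations like that 0.996 were fed into the standard formula for converting a between-group point-biserial correlation r into a standardized mean difference, d = 2r/√(1 − r²) (equal group sizes assumed). The formula is legitimate for the correlations it was designed for, but it explodes as r approaches 1, which is exactly what happens when it is applied to correlations between group and within-group pre-post effect sizes. A minimal sketch:

```python
import math

def r_to_d(r: float) -> float:
    """Standard conversion of a between-group point-biserial correlation
    to a standardized mean difference d (equal group sizes assumed)."""
    return 2 * r / math.sqrt(1 - r**2)

# Near-perfect correlations, like the 0.996 their method produces from
# trivial 0.01 differences in pre-post effect sizes, blow up into
# absurd d values:
for r in (0.30, 0.50, 0.90, 0.996):
    print(f"r = {r:5.3f}  ->  d = {r_to_d(r):6.2f}")
# r = 0.996 yields d of roughly 22, the same order of magnitude as the
# implausible 21.2 and 6.9 figures discussed above.
```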

It is simply mind-blowing that the editor and reviewers at JAMA let these numbers get by. The numbers are so provocative that they should have invited skepticism.

Almost 1000 new studies needed to reverse the claims of this meta-analysis?

One of the many outrageous claims made in the meta-analysis was that the number of nonsignificant, unpublished, or missing studies needed to move the meta-analysis to nonsignificance was “921 for overall outcome, 535 for target problems, 623 for general symptoms, and 358 for social functioning.”

What? 921 studies? That is more than the number of control patients included in the meta-analysis! The claim is a testimony to how badly distorted this meta-analysis has become.

Leichsenring and Rabung were attempting to bolster their claims using Rosenthal’s failsafe N, which, among meta-analysis methodologists, is considered inaccurate and misleading. The Cochrane Collaboration recommends against its use. Moritz Heene does an excellent job explaining what is wrong with failsafe N (a quick sketch of the calculation itself follows the list below). He notes that among the many problems of relying on failsafe N as a check on bias are:

  • Estimates are not influenced by evidence of bias in the available data.
  • Heterogeneity among the studies that are available and those that might be lurking in desk drawers is ignored.
  • Choice of zero for the average effect of the unpublished studies is arbitrary and almost certainly biased.
  • Allowing for unpublished negative studies substantially reduces failsafe N.
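For the curious, here is the promised sketch of Rosenthal’s calculation. It sums the one-tailed z values of the k included studies and asks how many hidden studies averaging exactly z = 0 would be needed to dilute the combined result below significance. The z values below are invented for illustration:

```python
def failsafe_n(z_values: list[float], z_alpha: float = 1.645) -> float:
    """Rosenthal's failsafe N: number of unpublished studies with an
    average z of exactly zero needed to drag the combined one-tailed
    result above p = .05."""
    k = len(z_values)
    return (sum(z_values) ** 2) / (z_alpha ** 2) - k

# A handful of modestly significant studies yields a huge N, precisely
# because every hidden study is assumed to be exactly null:
print(round(failsafe_n([2.0] * 8)))  # 87
print(round(failsafe_n([3.0] * 8)))  # 205
```

Let the hidden studies average even a slightly negative effect and the required N collapses, which is Heene’s point.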

The results of the trial described in my recent blog post comparing psychoanalysis to CBT for anorexia certainly contradict the assumption that there are no negative trials out there being missed.

I know, many meta-analyses of psychological interventions bring in the failsafe N to bolster claims that estimates of effect sizes are so robust that we cannot expect any negative studies lurking somewhere to change the overall results. Despite this being a common practice in psychology, failsafe N is uniformly rejected in other fields, notably clinical epidemiology, as providing an inflated, unrealistic estimate of the robustness of findings.

Coming up in my next blog:

  • Leichsenring and Rabung respond to critics, dodging basic criticisms and claiming that those who reject their conclusions are bringing in biases of their own.
  • Leichsenring and Rabung renew their claims in another meta-analysis in the British Journal of Psychiatry, for which 10 of the 11 studies were already included in the JAMA meta-analysis.
  • The long-term psychodynamic/psychoanalytic community responds approvingly and echoes Leichsenring and Rabung’s assessment of skeptics.
  • The important question of whether long-term psychoanalytic psychotherapy is better than shorter term therapies gets an independent evaluation by another group, which included the world-class meta-analyst and systematic reviewer, John Ioannidis.

This work, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 3.0 Unported License.


15 Responses to Psychoanalysts claim long-term psychoanalytic psychotherapy is more effective than shorter therapies.

  1. Discussant says:

    Why are psychoanalytic/psychodynamic types so defensive about their work, and why do they seem more unwilling to openly grapple with questions, challenges, and critiques of their work than researchers in other fields? How does this impact their ability to behave ethically/transparently in accordance with informed consent, warning clients of potential risks/adverse events, “first do no harm,” and other principles of healthcare ethics?

    • SF says:

      It appears you are mistaken about the supposed “defensiveness” of psychodynamic practitioners. If one were to attend a psychoanalytic conference, they would find harsh critiques of Freud and many other clinicians in the hope of advancing the theory. Psychoanalytic clinicians believe that human nature is much more complex and intricate than cognitive behaviorists like to believe. Why are CBT clinicians so adamant to claim theirs is the one true therapy? In the lab, CBT appears to work (of course in relation to “treatment as usual”) but in real life those who claim they use ESTs are found to utilize psychodynamic techniques more often than they would like. Psychodynamic clinicians have one goal: to appreciate the individual’s subjective world. CBT does not do this, and I as well as many other clinicians are plagued by angry clients and quick relapse when CBT techniques are used. And of course you will ask the question “well how do you know you were not creating a self-fulfilling prophecy?” It was Freud who said that therapeutic techniques can change from individual to individual, and I, like many other analytic clinicians, believe this. So many analytic clinicians enthusiastically use behavioral and cognitive techniques (I myself am a strong believer in mindfulness meditation) with the mindset that therapy is NOT one size fits all. As a clinician who has been trained in CBT and other “cutting edge” ESTs, I am surprised to see the reactions of fellow therapists who say that those techniques just do not always work, asking me how they can help the client understand the unconscious.

      • Discussant says:

        SF, I have no doubt that one would find harsh critiques of people at a psychoanalytic conference, but I’m wondering about ideas, theories, research, and evidence.

        I see blanket statements about trying to understand the unconscious or a person’s subjective world, but without accompanying explanation of what conditions that is indicated for, what populations/cultures it’s indicated for, what are the effects, by what mechanisms does it achieve those effects, and how can it be known to achieve those effects (and to be worth the large costs and risks) if not by well-designed studies with active control groups that control for expectation effects, demand characteristics, and other biases? Where do such interventions sit with respect to science (including neuroscience)?

        Further, in the absence of well-designed studies and credible evidence, how are the ethical issues around taking clients into these unknown circumstances with unknown probable outcomes (and thus potentially harmful outcomes) handled?

        • Mark C. Vail says:

          ISTDP is a psychoanalytically based treatment that goes further in terms of transparency than any other psychotherapy approach of which I am aware. Sessions are videotaped, analyzed, presented, and made the subject of considerable study and scrutiny (please see the substantial body of research produced by Dr. Allan Abbass). The ISTDP wiki is a good resource for many of the questions regarding mechanisms of action and effects raised by the last discussant.

          • Well, Abbass apparently keeps bad company

            Abbass, A. A., Rabung, S., Leichsenring, F., Refseth, J. S., & Midgley, N. (2013). Psychodynamic psychotherapy for children and adolescents: A meta-analysis of short-term psychodynamic models (vol 52, pg 863, 2013). Journal of the American Academy of Child and Adolescent Psychiatry, 52(11), 1241.


    • First, if you checked the literature, you would see that I am not a cognitive behavior therapist, and I think that many CBT therapists and researchers would agree that I have been the source of some of the harshest, most sustained, and best documented critiques of CBT.

      But I have a great deal of difficulty suffering foolishness, and this meta-analysis of long-term psychodynamic therapy represents supreme foolishness, breaking all of the fundamental rules of conducting and reporting a meta-analysis. As I will document in a forthcoming blog post, the psychoanalytic and psychodynamic community has uncritically embraced this meta-analysis and is quite hostile to skeptics. Furthermore, efforts to independently replicate its results reveal absolutely no evidence of the superiority of long-term psychodynamic psychotherapy over shorter therapies.

  3. Matti Heino says:

    Many thanks for your post.

    As a social psychology student following your blog, I’ve started to identify in myself an emerging emotional pattern I now call Morally Induced Methodological Rage (MIMR). It has many of the features of “Cartesian indignation”, as well as the energizing action tendency of diving much deeper into methodological literature than obliged by my degree.

    I would love to see your thoughts put into a book called something like “Methodological Krav Maga: Self-defence methods for aspiring researchers and consumers of scientific literature”.

    Keep up the good work!

    ps. you write that they excluded a “10 month trial of psychedelic psychotherapy versus cognitive behavior therapy for anorexia” – what would Freud say :p

  4. MK says:

    I’m pretty sure you really don’t need to go past the photo of the Freud puppet at the top of the page and the offensive title of this section of the article “Psychodynamic Psychotherapy Uber Alles” to discern profound bias and cruel disregard for feelings on Dr. Coyne’s part given that the authors of the meta-analysis he has critiqued are German and of course most of the innovators of the psychoanalytic method were western and central european Jews who were actively persecuted and, in some cases, killed by the Nazis. That is just the pinnacle of insensitivity, Dr. Coyne. I will be filing a complaint with PLOS One about this. Such an important topic with such real-world implications for the real lives of people looking for effective ways to improve their lives and the lives of their loved ones dragged into such muck.

    • Dr. Krass, your comments are truly offensive and uncalled for.

      My post deals with the real-world implications of psychoanalysts using junk statistics to secure insurance coverage for long-term treatment that they have not succeeded in showing is cost-effective.

      If you had bothered to Google uber alles, you would have discovered http://www.urbandictionary.com/define.php?term=uber%20alles

      “uber alles: uber alles (correctly written in German ‘ueber alles’) has nothing to do with the Nazis, but was a line of a poem written in 1841 which was used for the German National Anthem. It does not translate as ‘above all’ (that would be ‘ueber allen’) but rather ‘more than anything else’, as in ‘ich liebe Dich ueber alles in der Welt’ (I love you more than anything else in the world). A misleading translation was purposely chosen by the Allies during the second world war for propaganda purposes. Deutschland uber alles”

  5. Discussant says:

    Data can be cherry-picked, subject to biases (selection bias, confirmation bias, publication bias, etc.), unequal handling of outliers, and all kinds of statistical manipulations to get any conclusion a researcher wants. And those with opposing views can do the same thing. That’s why we need robust standards — a clear sense of where the burden of proof lies (innocent until proven guilty in law; ineffective and unsafe until proven effective and safe in healthcare), and scientific methods for acquiring knowledge that systematically control for biases or other corruptions.

    This becomes an ethical issue when treatments/interventions for human beings are involved. If studies of a treatment fail to meet the standards of reliable replicable research, then the probable outcomes of that treatment are unknown (and potentially harmful). It is an ethical obligation to disclose the lack of reliable evidence for a treatment to any consumer who is preparing to invest time and money, and risk adverse events.

    That’s why Coyne’s and other work that helps shine a light on poor research quality is so important. There are human lives at stake.

  6. Mark C. Vail says:

    James,

    You sidestepped the issue. The question was about transparency; I gave you a source to follow up and you chose to ignore it. Is this about true intellectual curiosity or dogmatism?

    Regards,

    Mark C. Vail

    • When I looked him up on Google Scholar, the first article I encountered was a meta-analysis with Leichsenring and Rabung that relied on the same misleading within-subject effect sizes, and so I did not pursue it further. Do you think they should be used?

