Much ado about a modest and misrepresented study of CBT for schizophrenia: Part 2

We all hope that an alternative can be found to treating persons with schizophrenia with only modestly effective antipsychotic medication that has serious side effects. Persons with schizophrenia and their families deserve an alternative.

That is why reports of this study in the media were greeted with such uncritical enthusiasm. That hope may even have been the motive for the results of the study getting hyped and distorted, from exaggerated claims attributed to the authors to further amplification in the media.

But at the present time, CBT has not yet been shown to provide an effective alternative, and it has not yet been shown to have an effect equivalent to antipsychotic medication. The results of this trial do not at all change this unfortunate situation. And to promote CBT as if it had been shown to be an effective alternative would be premature and inaccurate, if not a cruel hoax.

This is the second of two posts about a quite significant, but not earth shattering Lancet study of CBT for persons with unmedicated schizophrenia.

In this continuation of my previous post at Mind the Brain, I will discuss

  • The missed opportunity the investigators had to make a more meaningful comparison between CBT and supportive counseling or befriending.
  • The thorny problem of separating effects of CBT from effects of the medication that most patients remaining in follow-up at the end were receiving in both conditions.
  • The investigators’ bad decision to include the final follow-up assessment in calculating effect sizes for CBT, a point at which results would have to be made up for most participants.
  • Lancet’s failure to enforce preregistration and how the investigative group may have exploited this in putting a spin on this trial.
  • The inappropriate voodoo statistics used to put a spin on this trial.
  • How applying the investigators’ own standards would lead us to conclude that this trial did not produce usable results.
  • What we can learn from this important but modest exploratory study, and what more the investigators need to tell us about it in order to learn all we can.

At my secondary blog, Quick Thoughts, I distributed blame for the distorted media coverage of this study. There is a lot to go around. I faulted the authors for exaggerations in their abstract and inaccurate statements to the media. I also criticized media that parroted other inaccurate media coverage without journalists bothering to actually read the article. Worse, a competition seemed to develop to see who could generate the most outrageous headlines. In the end, both Science and The Guardian owe consumers an apology for their distorted headlines. Click here to access this blog post and the interesting comments that it generated.

Aside from its misleading title, the Guardian story ran a photo that better represented male erectile dysfunction than schizophrenia.

Why didn’t the investigators avoid an unresolvable interpretive mess by providing a more suitable control/comparison condition?

In my previous PLOS Mind the Brain post, I detailed how a messy composite of very different treatment settings was used for treatment as usual (TAU). This ruled out meaningful comparisons with the intervention group.

This investigator group should have provided structured supportive counseling as a conventional comparison/control condition. Or maybe befriending.

The sober-minded accompanying editorial in Lancet said as much.

Such a comparison group would have ensured uniform compensation for any inadequacies in the support available in the background TAU. Such a control group would also address the question whether any effects of CBT are due to nonspecific factors shared with supportive counseling. It would have allowed an opportunity to test whether the added training and expense of CBT is warranted by advantages over the presumably simpler and less expensive supportive counseling.

Why wasn’t this condition included? Tony Morrison’s interview with Lancet is quite revealing of the investigative group’s thinking that led to rejecting a supportive counseling comparison/control condition. You can find the interview here, and if you go to it, listen to what Professor Morrison is saying about halfway into the 12 minute interview.

What follows is a rough paraphrase of what was said. You can compare it to the actual audiotape and decide whether I am introducing any inaccuracies.

The interviewer stated that he understood that patients often cannot tell the difference between supportive counseling and CBT, and conceded he struggled with this as well.

Tony Morrison agreed that there were good reasons why people sometimes struggle to distinguish the two because there is a reasonable amount of overlap.

The core components of a supportive counseling approach are also required elements of CBT: developing a warm and trusting relationship with someone, being empathic, and being nonjudgmental. CBT is also a talking therapy in a collaborative relationship.

The difference is that there are more specific elements to CBT than are represented within a supportive counseling approach.

One of those elements is that CBT is based on a cognitive behavioral model that assumes it is not the psychotic experiences that people have that are problematic, but people’s ways of responding to these experiences. If people have a distressing explanation for hearing voices, that is bound to be associated with considerable distress. CBT helps patients develop more accurate and less distressing appraisals.

The interviewer noted there was no placebo given in this trial and the effects of placebo can be quite large. What were the implications of not using placebo?

Tony Morrison agreed that the interviewer was quite right that the effects of placebo are quite large in trials of treatment of people with schizophrenia.

He stated that it was difficult to do a placebo-controlled randomized trial with respect to psychological therapies because the standard approach to a comparison group, other than treatment as usual, would be something like supportive counseling or befriending, both of which might be viewed as having active ingredients like a good supportive relationship.

Not difficult to do a placebo-controlled randomized trial in this sense, Tony, but perhaps difficult to show that CBT has any advantage.

So, it sounds like a nonspecific supportive counseling approach was rejected as a comparison/control group because it would reduce the possibility of finding significant effects for CBT. It is a pity that a quite mixed set of TAU conditions was chosen instead.

This decision introduced an uninterpretable mess of complexity, including fundamental differences in the treatment as usual depending on participants’ personal characteristics, with the likelihood that some participants would simply be discharged from some forms of TAU.

Ignoring is not enough: what was to be done with the patients receiving medication?

When the follow-up was over, most remaining participants in both groups (10 out of 17) had received medication. That leaves us uncertain whether any effects of participants receiving CBT can be distinguished from effects of taking medication.

This confounding of psychotherapy and medication cannot readily be corrected in such a small sample.

We know from other studies that nonadherence is a serious problem with antipsychotic medication. The investigative group emphasized this as the rationale for the trial. But we do not know which participants in each group received antipsychotic medication.

For instance, in the control condition, were patients receiving medication found mainly in the richly resourced early intervention sites? Or were they from the poorer traditional community sites where support for adherence would be lower because contact is lower? And, importantly, how did it come about that patients supposedly receiving only CBT began taking this medication? Was it under different circumstances and with different support for adherence than when medication was given in the TAU?

With a considerably larger sample, multivariate modeling methods would allow a sensitivity analysis, with participants in both groups subdivided between those receiving and those not receiving antipsychotics. That would not undo the fact that the patients who ended up taking medication had not been randomized to doing so. But it would nonetheless have been informative. Yet, with only 17 patients remaining in each group at the last follow-up, such methods would be indecisive and even inappropriate.

It is a plausible hypothesis that patients enrolled in a trial offering CBT without antipsychotic medication would, if they later accepted the medication, be more adherent. But this hypothesis cannot be tested in the study, nor explored within the limited information that was provided to readers.

A decision not to follow some of the patients after the end of the intervention.

The investigators indicated that limited resources prevented following many of the patients beyond the intervention. If so, the investigators should have simply ended the main analyses at the point beyond which follow-up became highly selective. I challenge anyone to find a precedent in the literature where investigators stopped follow-up, but then averaged outcomes across all assessment periods, including one where most participants were not even being followed!

The most reliable and revealing approach to this problem would be to calculate effect sizes for the last point at which an effort was made to follow all participants, the end of the intervention. That would be consistent with the primary analysis for almost all other trials in the psychotherapy and pharmacological literatures. It would also avoid problems in estimating effect sizes for the last follow-up, when the data for most patients would have to be made up.

But if that were done, it would have been obvious that this was a null trial with no significant effects for CBT.

The failure of Lancet to enforce preregistration for this trial.

Preregistration of the design of trials, including pre-specification of outcomes, came about because of the vast evidence that many investigators do not report key aspects of their original design and do not report key outcomes if they are not favorable to the intervention. Ben Goldacre has taught us not to trust pharmaceutical drug trials that are not preregistered, and neither should we trust psychotherapy trials that are not.

Preregistration, if it is uniformly enforced, provides a safeguard of the integrity of results of trials, reducing the possibility of investigators redefining primary outcomes after results are known.

Preregistration is now a requirement for publication in many journals, including Lancet.

Guidelines for Lancet state:

We require the registration of all interventional trials, whether early or late phase, in a primary register that participates in WHO’s International Clinical Trial Registry Platform (see Lancet 2007; 369: 1909-11). We also encourage full public disclosure of the minimum 20-item trial registration dataset at the time of registration and before recruitment of the first participant (see Lancet 2006; 367: 1631-35).

This trial was not truly preregistered. The required registration of this trial (http://www.controlled-trials.com/ISRCTN29607432/) occurred after recruitment had already started. The “preregistration” appeared at the official website on October 21, 2010, yet recruitment had started on February 15, 2010. Presumably a lot could be learned in that time period, and adjustments made in the protocol. We are just not told.

Even then, the preregistration failed to commit the investigators to which primary outcome, evaluated at which time point, would serve as the main evaluation for the trial. That too defeats the purpose of preregistration.

You pays yer money and you takes yer choice.

The overall PANSS score is designated in the registration as the primary outcome, but no particular time point is selected, allowing the investigators some wiggle room. How much? The PANSS was assessed six times, and then there is the overall mean, which the authors preferred, for a total of seven assessments to choose from.

 

The preregistration also indicates that an effect size of .8 is expected. That is quite unrealistic and unprecedented in the existing literature, including meta-analyses. Claiming such a large effect size justifies having a smaller sample. That means that the trial was highly underpowered from the get-go in terms of being able to generate reliable effect sizes.
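To see what that assumed effect size buys, here is a rough back-of-the-envelope power calculation of my own, a sketch using standard two-sample formulas. The more modest effect size of 0.4 is an illustrative assumption, not a figure taken from the trial or its registration.

```python
# Back-of-the-envelope power calculations. The 0.8 figure comes from the
# trial registration; the 0.4 alternative is an assumed, more modest effect
# size used only for illustration.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# Participants per group needed for 80% power at alpha = .05, two-sided
n_for_d08 = power_calc.solve_power(effect_size=0.8, alpha=0.05, power=0.80)
n_for_d04 = power_calc.solve_power(effect_size=0.4, alpha=0.05, power=0.80)
print(f"d = 0.8 requires roughly {n_for_d08:.0f} participants per group")  # ~26
print(f"d = 0.4 requires roughly {n_for_d04:.0f} participants per group")  # ~99

# Power actually available with the 17 per group left at the final follow-up
power_at_17 = power_calc.solve_power(effect_size=0.4, nobs1=17, alpha=0.05)
print(f"Power to detect d = 0.4 with 17 per group: {power_at_17:.2f}")  # ~0.20
```

Assuming a large effect makes the planned sample look adequate; assuming anything more realistic makes plain just how underpowered the trial was.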

Mumbo-jumbo, voodoo statistics used to create the appearance of significant effects.

In order to average outcomes across all assessment points, multivariate statistics were used to invent data for the majority of patients who were no longer being followed at the last assessment. Recall that this was when most patients already had been lost to follow-up. It is also when the largest differences appeared between the intervention and control group. Those differences seem to have been due to inexplicable deterioration in the minority of control patients still around to be assessed. The between-group differences were thus due to the control group looking bad, not any impressive results for the intervention group.

Multivariate statistics cannot work magic with a small sample and so few participants remaining at the last follow-up.

The Lancet article reported a seemingly sophisticated plan for data analysis:

Covariates included site, sex, age, and the baseline value of the relevant outcome measure. Use of these models allowed for analysis of all available data, in the assumption that data were missing at random, conditional on adjustment for centre, age, sex, and baseline scores. The missing-at-random assumption seems to be the most realistic, in view of the planned variation in maximum follow-up times and the many other factors likely to affect drop-out; additionally, the assumption is routinely used in analyses of data from longitudinal trials.


Surely you jest! was an expression favored by Chatsworth T. Osborne, Jr., millionaire dilettante in The Many Loves of Dobie Gillis

Surely, you jest.

Anyone smart enough to write this kind of text is smart enough to know that it is an absurd plan for analyzing data in which many patients will not be followed after the end of intervention, and with such a small sample size to begin with. It preserves the illusion of a required intent-to-treat analysis in the face of most participants having been lost to follow-up.

Sure, multilevel analysis allows compensation for the loss of some participants from follow-up, but it requires a much larger sample. Furthermore, the necessary assumption that the participants who were not available are missing at random is neither plausible nor testable within such a small sample. Again, one can test the assumption that data are missing at random, but that requires a starting sample size at least four or five times as large. Surely the statistician for this project knew that.

And then there is the issue of including control for the four covariates in analyses for which there are only 17 participants per group at the final data point being analyzed.
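To make concrete what the quoted analysis plan amounts to, here is a minimal sketch of that kind of mixed model fit to simulated data. Everything below is invented for illustration: the numbers are random, the variable names are my own, and nothing here reproduces the investigators’ actual code or dataset. The point is simply that the model must estimate an intercept, a treatment effect, a time trend, and four covariate effects, plus a between-person variance, from 74 people whose later assessments are mostly missing.

```python
# A sketch of a mixed model of repeated PANSS assessments with baseline,
# age, sex, and site as covariates, fit to "all available data" under a
# missing-at-random assumption. Simulated data only; not the trial's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_arm = 37
assessment_months = [3, 6, 9, 12, 15, 18]

rows = []
for arm in (0, 1):  # 0 = TAU alone, 1 = CBT + TAU
    for i in range(n_per_arm):
        subject = f"{arm}_{i}"
        baseline = rng.normal(72, 15)
        age = int(rng.integers(16, 66))
        sex = int(rng.integers(0, 2))
        site = int(rng.integers(0, 2))
        for month in assessment_months:
            # Crude stand-in for the heavy loss to follow-up late in the trial
            if month >= 12 and rng.random() < 0.55:
                continue
            panss = baseline - 8 - 2 * arm + rng.normal(0, 12)
            rows.append(dict(subject=subject, group=arm, month=month, age=age,
                             sex=sex, site=site, baseline=baseline, panss=panss))

long_data = pd.DataFrame(rows)

# Random intercept per participant; fixed effects for group, time, covariates
model = smf.mixedlm("panss ~ group + month + age + sex + site + baseline",
                    data=long_data, groups="subject")
fit = model.fit()
print(fit.summary())  # note how wide the confidence interval on 'group' is
```

The model itself is unexceptional; the problem is that with so little observed data at the later assessments, the estimated treatment effect leans almost entirely on the untestable missing-at-random assumption rather than on what was actually measured.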

As I noted in my last blog post, from the start there were such huge differences among participants that summary statistics based on all of them could not meaningfully be applied to individual participants or subgroups.

  • Participants in the control group came from very different settings, and which setting they came from was associated with their personal characteristics. Particular participants came from particular settings and got treated differently.
  • Most participants were no longer around at the final assessment, but we do not know how that is related to personal characteristics.
  • Most participants who were around in both the intervention and control group had accepted medication, and this is not random.
  • There was a strange deterioration going on in the control group.

Yup, the investigators are asking us to believe that being lost to follow-up was random, and so the participants who had been lost could be treated as interchangeable with those still around, without affecting the results.

With only 17 participants per group, we cannot even assess whether the intended effects of randomization had occurred, in terms of equalizing baseline differences between intervention and control groups. We do not even have the statistical power to detect whether baseline differences between the two groups might determine differences in the smaller numbers still available.

We know from lots of other clinical trials that when you start with as few patients as this trial did, uncontrolled baseline differences can still prove more potent than any intervention. That is why many clinical trialists refuse to accept any studies with fewer than 35 to 50 participants per cell.

Overall, this is sheer voodoo, statistical malpractice that should have been caught by the reviewers at Lancet. But it does make for putting an impressive spin on an otherwise null trial.

Classic voodoo rock music, among the best ever rock according to Rolling Stone, available here to accompany your rereading of the results of the Lancet paper.

Judging the trial by the investigators’ own standards

A meta-analysis by the last author, Paul Hutton, and colleagues argued that trials of antipsychotic medication with more than 20% attrition do not produce usable data. Hutton cites some authorities:

Medical epidemiologists and CONSORT statement authors Kenneth Schulz and David Grimes, writing in The Lancet in 2002, stated: “a trial would be unlikely to successfully withstand challenges to its validity with losses of more than 20%” [Sackett et al., 2000].

But then, in what would be a damning critique of the present study, Hutton et al. declare:

Although more sophisticated ways of dealing with missing continuous data exist, all require data normally unavailable to review authors (e.g. individual data or summary data for completers only). No approach is likely to produce credible results when more than half the summary outcome data are missing.

Ok, Paul, fair is fair: are the results of your Lancet trial not credible? You seem to have to concede this. What do you make of Tony Morrison’s claims to the media?
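Applying those thresholds to this trial requires nothing more than the figures already quoted in this post. The arithmetic below is my own quick check, not a calculation taken from the paper.

```python
# Attrition arithmetic based on the numbers cited in this post:
# 74 randomized (37 per arm), 17 per arm still assessed at the final follow-up.
randomized_per_arm = 37
remaining_per_arm = 17
attrition = 1 - remaining_per_arm / randomized_per_arm
print(f"Loss to follow-up at the final assessment: {attrition:.0%}")  # 54%
# Well past the 20% threshold quoted from Schulz and Grimes, and past the
# point at which Hutton et al. say more than half the summary outcome data
# are missing and no approach produces credible results.
```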

 

Then there is the YouTube presentation from 2012 from first author Tony Morrison himself.  He similarly argued that if a trial retains only half of the participants who initially enrolled, most of the resulting data are made up and results are not credible.

The 2012 presentation by Morrison also dismisses any mean differences between active medication and a placebo of  less than 15 points as clinically insignificant.

Okay, if we accept these criteria, what do we say about a between-group difference in a CBT trial claimed to be only 6.5 points after loss of most of the participants enrolled in the study, putting aside for a moment the objection that even this is an exaggerated estimate?

We should have known ahead of time what can and cannot be learned from a small, underpowered exploratory study.

In a pair of now classic methodological papers [1,2], esteemed clinical trialist Helena Kraemer and her colleagues defended the value, indeed the necessity of small preliminary exploratory, feasibility studies before conducting larger clinical trials.

A pilot study can be used to evaluate the feasibility of recruitment, randomization, retention, assessment procedures, new methods, and implementation of the novel intervention. A pilot study is not a hypothesis testing study. Safety, efficacy and effectiveness are not evaluated in a pilot. Contrary to tradition, a pilot study does not provide a meaningful effect size estimate for planning subsequent studies due to the imprecision inherent in data from small samples. Feasibility results do not necessarily generalize beyond the inclusion and exclusion criteria of the pilot design.

However, they warned against accepting effect sizes from such trials because they are underpowered. It would be unfair to judge the efficacy of an intervention based on negative findings from a grossly underpowered trial. Yet it would be just as unacceptable to judge an intervention favorably on the basis of unexpected positive findings when the sample size was small. Such positive findings typically do not replicate and can easily be the result of chance, unmeasured baseline differences between intervention and control groups, and flexible rules of analysis and interpretation by investigators. And Kraemer and her colleagues did not even deal with small clinical trials in which most participants were no longer available for follow-up.

Even if this trial cannot tell us much about the efficacy of CBT for persons with unmedicated schizophrenia, we can still learn a lot, particularly if the investigators give us information that they had promised in their preregistration.

The preregistration promised information important for evaluating the trial that is not delivered in the Lancet paper. Importantly, we are given no information from the log that recorded all the treatments received by the control group and the intervention group.

This information is essential to evaluating whether group differences are really due to receiving or not receiving the intervention as intended, or are influenced by uncontrolled treatment, including medication, or by selective dropout. This information could shed light on when and how patients accepted antipsychotic medication and whether the conditions were different between groups.

We cannot torture the data from this study to reveal whether or not CBT was efficacious. But we can learn much about what would need to be done differently in a larger trial if results were to be interpretable. Given what we learned from this trial, what would have to be done about participants deciding to take antipsychotic medication after randomization? Surely we could not refuse them that option.

More information would expose just how difficult it is to find suitable participants and community settings and enroll them in such a study. Certainly the investigators should have learned not to rely on such diverse settings as they did in the present study.

Hopefully in the next trial, investigators will have the courage to really test whether CBT has clinically significant advantages over supportive therapy or befriending. There would be risk to investigator egos and bragging rights, but that would be compensated by the prospect of producing results that persons with schizophrenia and their families, as well as clinicians and policymakers, could believe.

 


The Ten Commandments of Good Psychiatry: Perspectives of a Fundamentalist Psychiatrist

Last month I was invited to give a lecture at an annual conference for Physician Assistants.  The title of my talk was, “Treating Depression in Primary Care.”


Physician Assistants typically work in a primary care setting, which is where many mental health disorders are diagnosed and treated.  Roughly 60% of psychotropic medications in the United States are prescribed by a primary care provider.

Despite these statistics, many primary care providers receive a limited amount of formal education in mental health and psychiatric conditions during their training.  This issue was recently highlighted in a New York Times article that described the lengths a psychiatrist and ADHD expert is taking in order to ensure that primary care providers, who diagnose and treat the majority of cases of ADHD, are better equipped to provide such care.

Training and education aside, the problems with accessing mental healthcare in a timely and sustained fashion leave many primary care providers with no choice but to treat mental illness themselves, especially those who serve in rural and underserved communities of our country. (I previously blogged about issues surrounding access to mental healthcare here.)

For these reasons, I was excited to have the opportunity to talk to and interface with this group of professionals and to disseminate up-to-date information on treating clinical depression in primary care.

 

As I prepared the PowerPoint slides, I was overwhelmed by urges to emphasize the fundamentals of good psychiatric practice to my audience. I felt a need to do this, in part, because of my observation that in our fragmented healthcare system, where we are all being required to do more with less, these fundamentals are increasingly getting pushed aside.

 

So I decided to spend a good portion of my talk on the fundamentals that have proven to be essential to me during my 15 years of psychiatric practice.  Over these years I have treated thousands and thousands of patients, seen pretty much every psychiatric diagnosis, and treated people of all genders, ethnicities, religions, and ages.  I have treated patients on two different continents and in two different U.S. states.  I have treated patients in a variety of settings: emergency room, inpatient, outpatient, residential and day programs, private clinics, academic medical centers, county hospitals, the VA, and the British National Health Service.

This immersion and clinical experience has led me to the following observation:  When things go awry in the care of a person with mental illness, the majority of the time it is because one or more of these fundamental principles was neglected or overlooked.

 

It dawned on me that I have become a fundamentalist when it comes to my psychiatric practice.

 

For the researchers amongst you, it is akin to the assumptions we make about data before we use a particular statistical test.  If these assumptions are incorrect, then the test and subsequent results mean very little and, worse, can mislead us about what is actually going on.

These fundamentals (which, at their core, are a tribute to the biopsychosocial model that was propagated by Engel) are separate from the fundamental principles of care that should be employed when diagnosing mental health disorders and advising patients regarding specific medical treatment decisions – those are whole other topics for a whole other blog.

In case you are starting to think that this blog is just me waxing lyrical on my personal viewpoints, I would refer you to the American Psychiatric Association guidelines for the treatment of mental health disorders.  Each guideline devotes several pages to these very principles and, again, emphasizes the importance of providing psychiatric care within these fundamental parameters.

 

My fear is that the current medical climate is encouraging the practice of skipping right to the “treatment part” of such guidelines and just glossing over these fundamentals. 

 

So here are the principles that I shared with my audience during the lecture, described as Ten Commandments and listed in no particular order of importance.  Some of the language is specific to the treatment of depression, but the principles are applicable to all mental health disorders.

 

#1. Thou Shalt Always Aim To Establish And Maintain A Therapeutic Alliance

Perhaps one the biggest challenges to physicians practicing in a 21st century medical environment is preserving relationships with our patients.  Many of us operate in settings where we are pushed for time, have to do more with less, and are bombarded by a constant stream of interruptions that have us focusing more on computer screens, pagers, voicemails, and instant messages than on the patient that sits in front of us.

This is not only frustrating for us (most people I know became healthcare professionals because of a capacity to care deeply for the plight of other human beings, not because of a desire to be stuck in front of a screen, or phone, or to do paperwork) but it is wrong for our patients.  Such an environment inhibits trust, rapport building, and the development of what, in my field, we call alliance.

Therapeutic alliance is a crucial fundamental of good psychiatric practice; it promotes collaboration, trust, and mutual respect.  It can take years to build, with false starts and setbacks, but the provider’s commitment to maintaining it must be unwavering.  Any factors or situations that interfere with our ability to maintain an alliance with our patients interfere with our patients’ inclination to fully disclose what is on their minds, share their fears and darkest thoughts freely, and be truthful in their communication with us.

Our job as the treating clinician is to preserve the sanctity of the relationship between doctor and patient and push back on external factors that impinge on it.  More than just touchy-feely medicine, it is the very foundation upon which good psychiatric care is practiced.

 

#2. Thou Shalt Always Do A Complete Psychiatric Assessment


Anyone treating a mental health disorder can only do so after they have done a thorough psychiatric assessment; when time is of the essence, this can be the first thing that gets short shrift.  At minimum the following areas have to be touched on (and can be done in an efficient way with practice):

  • History of the present illness and current symptoms
  • Past psychiatric history
  • Substance use
  • Relevant social, occupational, and family history
  • Physical examination and appropriate diagnostic tests to rule out physical causes for depressive symptoms

 

#3. Thou Shalt Always Do A Thorough Evaluation For Safety 

 

Any clinician who treats patients living with mental illness has to do the following, not only on the initial evaluation but on an ongoing basis:

 

  • Make specific inquiries about suicidal thoughts, intent, plans, means, and behaviors
  • Identify psychiatric symptoms or general medical conditions that might increase the likelihood of acting on suicidal ideas
  • Assess past and, particularly, recent suicidal behavior
  • Assess for potential protective factors that can serve to decrease the chances that the patient will harm themselves or others
  • Identify any family history of suicide or mental illness
  • Have a good sense of the patient’s level of self-care, hydration, and nutrition
  • Evaluate the patient’s level of impulsivity and potential risk to others, including any history of violence
  • Assess the impact of depression on the patient’s ability to care for their dependents

 

#4. Thou Shalt Always Identify the Appropriate Treatment Setting 


The patient’s treatment needs and symptom severity should determine what setting they are treated in, from outpatient care with a primary care physician to hospitalization in a specialized psychiatric unit. 

 

Measures such as hospitalization should be considered for patients who pose a serious threat of harm to themselves or others.  Unfortunately, because of the lack of mental health parity and inadequate access to mental healthcare for many, health care professionals are often put in the very difficult position of caring for those with mental illness in a setting that is not optimal for comprehensive care.  Whilst this is inevitable at times, the clinician has to remain watchful that these circumstances do not interfere with the patient’s clinical progress.

 

#5. Thou Shalt Focus On The Patient’s Functional Impairment And Quality Of Life

Mental illness impacts many spheres of a person’s life, including work, school, family, and social relationships.  Any treatment interventions should be aimed at maximizing the patient’s level of functioning within these spheres and focus on enhancing their quality of life.

 

 

#6. Thou Shalt Coordinate The Patient’s Care With Other Clinicians

American healthcare is famous for being fragmented.  With so many different providers, healthcare systems, and insurance companies involved, talking to each other can become a low priority for the clinicians involved in a patient’s care.  This lack of communication, however, can have disastrous consequences for patient outcomes.

 

#7. Thou Shalt Monitor The Patient’s Psychiatric Status

The patient’s response to treatment should be carefully monitored.  Patients who are on psychiatric medication need ongoing assessment for adherence, symptom control, and side effects.  This is even more important if a patient is new to medication, this is their first episode of mental illness, they have clinical factors that place them at high risk for suicide, or they are not improving clinically.  Ongoing care can be spaced out once the patient is stable, but until that time comes they need to be monitored with sufficient regularity.

 

#8. Thou Shalt Integrate Measurements Into Psychiatric Management

An invaluable option for the busy clinician is to integrate clinician and/or patient-administered questionnaires into initial and ongoing evaluations of patients with mental health disorders.

 

 

#9. Thou Shalt Evaluate A Patient’s Treatment Adherence

Assume and acknowledge that the patient will have potential barriers to treatment adherence, and collaborate with the patient (and if possible, the family) to minimize the impact of such barriers.

The clinician should encourage patients to articulate any fears or concerns about treatment or its side effects and offer patients a realistic notion of what can be expected during different phases of treatment.

 

 

#10. Thou Shalt Provide Education To The Patient And Their Family

Education! Education! Education!  The clinician has to spend time clarifying common misperceptions about antidepressants; emphasizing the need for a full course of treatment; and promoting the benefits of healthy behaviors like exercise, sleep hygiene, and nutrition on mental health.  Family and others involved in the patient’s day-to-day life may also benefit from education about mental illness and its effects on functioning and treatment.

 

 I believe each of us should be a fundamentalist when it comes to providing mental health care.  No matter the treatment setting or level of training of the provider, we cannot adequately care for our patients when these Ten Commandments are forgotten or ignored.


Much ado about very little: Lancet study of cognitive therapy for persons with unmedicated schizophrenia

Once again I interrupt my planned sequence of blog posts to cover a controversial study. I had intended to continue a discussion of claims about long-term psychodynamic psychotherapy being superior to shorter forms of therapy. Among other things, I would have told how some psychoanalysts have complained that I am part of a conspiracy of cognitive behavior therapists intent on discrediting legitimate claims about the effectiveness of psychodynamic psychotherapy. I would enter my plea that I am not and have never been a cognitive behavior therapist [evidence here, here, and here], despite being proud of having written an article with Aaron T. Beck and of my cordial relationship with this rare scholar who remained so magnanimous in the face of my criticism of his work.

All that will have to wait, but please read on. You will see me saying things that, within the context of the uproar concerning a recent study of CBT in Lancet, might be misconstrued as evidence that I am part of another plot, this time to discredit CBT.

Oh well, you will just have to judge the consistency of the standards that I apply to what is claimed to be “evidence,” whether the claims come from psychoanalysts, cognitive behavior therapists, or sources entirely different.

BBC News:

Therapies offer little benefit

 

The story was prompted by a meta-analysis of 50 trials that found unimpressive outcomes of CBT with patients with schizophrenia.

CBT did have a small benefit in treating delusions and hallucinations – which is what the therapy was originally developed to target.

But the researchers said even this small effect disappeared when only studies using ‘blind testing’ were taken into account – this is where researchers do not know which group of patients are receiving the therapy.

Yet in February, the headline was

BBC News:

as effective as drugs

 

 

It was prompted by a report of a single randomized trial in Lancet with loss to follow-up so great that only 17 participants per group were left from an initial total randomization of 74. That is fewer than the number of authors on the Lancet paper.

From the BBC story:

Prof Tony Morrison, director of the psychosis research unit at Greater Manchester West Mental Health Foundation Trust, said: “We found cognitive behavioural therapy did reduce symptoms and it also improved personal and social function and we demonstrated very comprehensively it is a safe and effective therapy.”

It worked in 46% of patients, approximately the same as for antipsychotics – although a head-to-head study directly comparing the two therapies has not been made.

Obviously, somebody at BBC does not listen to the Bayesian R&B group, Honey Cone, who could have told them

One monkey don’t stop no show [Youtube video here].

 

 

You don’t revise expectations derived from 50 trials on the basis of a single trial crippled by high loss to follow-up. If we agree that a particular finding is a priori unlikely, based on past research, then the outcome of a small-sample study is never convincing.

But then soon afterwards the headline disappeared, replaced by

BBC News:

talk therapy moderately effective

 

 

What should consumers think, especially those who are facing difficult choices about whether they or their family members should accept antipsychotic medication with modest efficacy and obvious side effects?

The change in the BBC headline reflected post-publication peer review, an intense debate in the social media concerning the results and significance of this trial. The exchanges on Twitter, Facebook, and blogs were polarized, often fueled by commentators, even recognized experts, who demonstrated they were unfamiliar with the article in Lancet beyond its press coverage and abstract.

An understated post at Mental Elf followed by a post at Keith Laws’s Dystopia started the bleed in the credibility of the trial that became a hemorrhage.

I uploaded a preliminary assessment of the trial at my secondary blog, Quick Thoughts. You can read here about the US$500 wager I offered, modeled after the $50L bet that an author of the Lancet study, Paul Hutton, had publicly made and lost. I subsequently withdrew the bet because there were no takers. And you can read here about how a troll hacked my blog post and spread hostile comments about me across other blog sites before being blocked. It just goes to show the passion aroused by this study.

Meanwhile, the crowdsourced review of the Lancet paper on social media proceeded inefficiently, with often inaccurate claims being made about the study by both supporters and critics. But the debate nonetheless gradually uncovered and amplified serious concerns about the article.

BBC reacted with the changed headline and Lancet reacted by inviting blogger Keith Laws to submit a letter to the editor. A number of us banded together to write some letters after some intensely probing back-channel exchanges.

I continued to reread the Lancet article and its press coverage. I listened to an audio tape interview with Tony Morrison, the lead author of the Lancet paper. I viewed a revealing YouTube video of his 2012 keynote presentation at the British Psychological Society Division of Clinical Psychology.  I also benefited from extended discussions with Keith Laws, Peter McKenna, Sameer Jauhar, and especially Henry Strick van Linschoten. If you have time, scroll down and read the astute comments that Henry made at Mental Elf. You will see just how much he is the inspiration for some of the ideas expressed in this blog post.

But in fairness to Henry and everyone else whom I consulted in writing this post, I have sole responsibility for any inaccuracies in what follows:

 My Take on the Trial and its Reporting

The abstract on which so many relied for their understanding of the study is misleading.

At the end of the intervention, there were no differences in terms of primary outcome between CBT and the control group.

But the trial really did not produce usable data concerning the efficacy of cognitive behavior therapy for patients with unmedicated schizophrenia because of:

  • An unusually mixed group of patients participating in the study.
  • An inappropriately constructed control group that does not represent conditions in routine care nor allow meaningful comparisons with an active intervention.
  • Substantial loss to follow-up from an already small exploratory study.
  • A decision of the investigator team to abort long-term follow-up but proceed with data analysis as if this decision had not been made.
  • A substantial number of patients in both the intervention and control group receiving antipsychotic medication, including those in the control group who showed the greatest improvement.

An elaborated “preregistration” of the trial, which should have dictated crucial features of the design and explicitly planned analyses, did not do so, and actually occurred after data collection had begun.

At various times and places, the investigators have made comments about other trials, which if applied to their Lancet study, would require that they concur with the decision that the results are not usable.

The study’s investigator team owes a few things to the professional community and the lay public.

But don’t accept my word without considering whether I actually can substantiate my points. Please read on.

Here is the full report of the actual study and here is a link to the brief formal registration.

The abstract states:

Findings. 74 individuals were randomly assigned to receive either cognitive therapy plus treatment as usual (n=37), or treatment as usual alone (n=37). Mean PANSS total scores were consistently lower in the cognitive therapy group than in the treatment as usual group, with an estimated between-group effect size of −6·52 (95% CI −10·79 to −2·25; p=0·003). We recorded eight serious adverse events: two in patients in the cognitive therapy group (one attempted overdose and one patient presenting risk to others, both after therapy), and six in those in the treatment as usual group (two deaths, both of which were deemed unrelated to trial participation or mental health; three compulsory admissions to hospital for treatment under the mental health act; and one attempted overdose).

You can find a description of the primary outcome, the PANSS (Positive and Negative Syndrome Scale) here.

Here again are links to the full report of the actual study, the extended registration of the trial, and the brief formal registration. The table of outcomes from the Lancet paper is reproduced below; please click on it to enlarge.

Table of outcomes by group and assessment point, from the Lancet paper.

Pay particular attention to

  • The shrinking number of patients actually followed up across the study, shown at the bottom.
  • The fluctuation in the mean outcomes across the course of the study, particularly the marked deterioration that occurred in the control group between 12 and 18 months.
  • And the large standard deviation in outcomes for the control group at the end of the study.

A misleading abstract

What is said in abstracts is crucially important because many people form opinions about the content of an article solely on the basis of the abstract.

When abstracts appear in electronic bibliographies like PubMed, only a small minority of those who view the abstract actually proceed to view the full article.

Most persons alerted to an article by media coverage cannot access anything more than the abstract of the article because of a pay wall.

There are often discrepancies between findings as they are reported in abstracts and what is actually contained in the results sections. CONSORT has developed standards for what is reported in abstracts, but the standards are unfortunately usually ignored.

It has been shown that the exaggerations in media coverage can often be traced to hype and otherwise misleading claims in abstracts.

The abstract of the Lancet article is misleading in a number of crucial ways.

While the abstract states that 74 participants were randomized, it does not indicate the small minority (17 in each group) who were available for the last assessment. That is the more important figure for interpreting findings.

The abstract describes results in terms of a “between group effect size.” Most readers would assume that such a term refers to a standardized mean difference between-group effect size. If that were so, this 6.52 would be an extraordinarily large effect size. In the psychotherapy literature, it would be comparable only to the effect (6.9) claimed for long-term psychodynamic therapy in a meta-analysis, which has been sharply criticized as exaggerated and miscalculated.

Readers would normally expect “mean group differences” to be described as exactly that, not as “effect sizes.”
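For readers who want to see why the distinction matters, here is a rough conversion of the reported raw difference into a standardized effect size. The 6.52 comes from the abstract; the standard deviation of 16 is an assumed, purely illustrative value for PANSS totals in a sample like this, not a figure taken from the paper.

```python
# Raw mean difference versus standardized effect size (Cohen's d).
# The SD below is an assumption for illustration only.
mean_difference = 6.52   # reported "between-group effect size" on the PANSS
assumed_sd = 16.0        # illustrative pooled SD, not from the paper
cohens_d = mean_difference / assumed_sd
print(f"Under this assumption, d = {cohens_d:.2f}")  # roughly 0.41
```

Whatever the true pooled standard deviation, a 6.52-point difference on the PANSS is a raw mean difference of modest size, not an effect size of 6.52 in the standardized sense that most readers would assume.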

Readers would also expect that in an abstract for a randomized trial, outcomes would be reported for the end of the intervention. Earlier outcomes might be unfair because the intervention had not yet been delivered in its full intensity and later would be unfair because of a possible decay in effect once participants are no longer being exposed to the intervention. That is, unless authors knew their results before deciding what to report.

As Keith Laws noted in his blog, the differences between the intervention and control group immediately after the end of the intervention were not significant.

PANSS total:

CBT group showed a reduction from 70.24 to 57.95, a change of 12.29.

TAU group showed a reduction from 73.27 to 63.26, a change of 10.01.

So, although you would never know it from the abstract, at the end of the nine-month intervention there were no significant differences between CBT and treatment as usual. In interpreting this nonsignificant difference, it might be useful to know that in other contexts the lead author of the Lancet study has declared that differences between treatments of less than 15 points on the PANSS are clinically insignificant.
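The arithmetic on these end-of-treatment figures takes only a couple of lines; this is my own calculation using the numbers quoted above.

```python
# End-of-treatment changes, using the figures quoted from Keith Laws's post.
cbt_change = 70.24 - 57.95   # 12.29-point reduction in the CBT group
tau_change = 73.27 - 63.26   # 10.01-point reduction in the TAU group
between_group_difference = cbt_change - tau_change
print(f"Between-group difference in change: {between_group_difference:.2f}")  # 2.28
# A 2.28-point difference on the PANSS, far short of the 15-point difference
# the lead author has elsewhere described as the threshold for clinical
# significance.
```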

It is exceedingly odd for authors to calculate an overall effect size for primary outcomes by averaging from three months into the intervention until nine months later. I cannot find reports of other studies where this has been done.

It is also misleading to make such calculations when there is so much loss to follow-up for the later assessments that outcomes have to be estimated for most patients. To take a point made by the lead author of the study in another context, that would mean that data for most patients are being made up. I will come back later to that.

Basically we are dealing with an abstract spun to portray this as a strongly positive trial, when there were no significant differences between groups at the end of the intervention. The misleading comments to the BBC that the trial showed CBT to be “very comprehensively” effective are rooted in a confusing portrayal of the results of the study in the abstract.

The unusual mixture of participants in the study.

As with a lot of things about this study, we are not given sufficient details about the participants who enrolled. But we are told enough to see that it is an unusually mixed sample. Important things that might be said about some participants would not apply to others.

Eligible participants aged 16–65 years were in contact with mental health services, and either met International Classification of Diseases–tenth revision (ICD-10) criteria for schizophrenia, schizoaffective disorder, or delusional disorder, or met entry criteria for an early intervention for psychosis service (operationally defined with the Positive and Negative Syndrome Scale [PANSS]) to allow for diagnostic uncertainty in early phases of psychosis.

To understand what this means, you need to know something about schizophrenia and also about the early intervention for psychosis services in the UK.

Schizophrenia is “a chronic, severe, and disabling mental disorder characterized by deficits in thought processes, perceptions, and emotional responsiveness,” with a peak age of onset from 20 to 24 years of age. Recipients of early intervention for psychosis services in the UK often do not yet have a diagnosis and are not yet receiving medication. Such persons would be overrepresented in the younger age range of the sample, which goes down to 16 years of age. You have to consult the study protocol to learn that 59% of the sample came from these specialized services.

On the other hand, participants at the other end of the age spectrum for the study, which goes up to 65 years of age, would likely have had a diagnosis of schizophrenia for decades and to have received medication for much of the time since diagnosis. If they are eligible for a study requiring that they not have been taking antipsychotic medications for the past six months, they are going to be unusual and quite different from the younger participants in the study who might never have been offered antipsychotic medication.

We are not told enough about how participants of different ages fared in this study or even whether there was differential drop out.

We do not know how many of these participants did not have a diagnosis of schizophrenia or schizophrenia spectrum disorder, how many had never had medication, or, among those who had been on medication, how many stopped because of horrific experiences.

We do know from the mean baseline PANSS scores that this is a sample with only mild to moderate psychotic symptoms, as was noted in the accompanying editorial in Lancet.

With a 59:41 split between very different recruitment settings, this is particularly important information. Actually, measures of central tendency like means or medians can become misleading in the context of such a mixed group of participants sharply divided on some related characteristics. Just think about the typical participant in a physical anthropology study being characterized as female with a mean penis length of 4.4 cm. Of course, that would be ridiculous and misleading, but that is what happens when you try to characterize the typical person in such a mixed sample with single figures.

The bottom line is that the mean or modal participant in this study does not represent the typical person with a diagnosis of schizophrenia or schizophrenia spectrum disorder in the community. It is not just a matter of the sample being unrepresentative, but of it being heterogeneous in a way that begs for breaking down both outcomes and loss to follow-up by age, previous experience with antipsychotic medication, diagnosis, and so on. That must be done for any meaningful generalizations to persons in the community or comparisons to results obtained for participants in other studies, most notably comparisons to persons with schizophrenia receiving neuroleptics, which the authors of the study seem so eager to talk about.

All of the missing information about which I am complaining is readily available to the authors and could be made available to readers, especially when controversial claims were going to be made about the outcome of the study.

Compared to what? The control group does not allow meaningful comparisons

The study compared participants randomized to CBT combined with treatment as usual (CT + TAU) versus TAU alone.

The study was conducted at two sites, one in Manchester and one in Newcastle. Participants at both sites randomized to the comparison/control group of TAU were in one of two very different settings, which provided very different experiences.

Treatment as usual was variable across both sites, although both were chosen partly because these regions had some comprehensive early intervention services. In practice, participants within these services received regular care-coordination and psychosocial interventions, including the offer of family interventions, whereas individuals from other community-based services often received little more than irregular contact with care coordinators, and many of these participants were discharged by their clinical teams during the trial for non-attendance or continued reluctance to accept medicine.

Although the assignment to CBT versus TAU was random, the assignment to one or the other type of TAU was not. Participants being treated in the comprehensive early intervention services were quite different from those treated in other community-based services in terms of age, likelihood of a diagnosis of schizophrenia, previous exposure to antipsychotic medication, and other important features. If you knew something about these characteristics of particular patients, you probably could predict quite well which type of control condition they were receiving, starting with younger patients being more likely to get the enhanced TAU and older patients being more likely to be getting care in conventional settings, where they might be discharged and get no care if they refused medication.

We are told very little in the Lancet article about the experiences of control/comparison participants in comprehensive early intervention services versus other kinds of community-based services.

We do not know about differences in outcomes or even about the nature of services received.

We do not know how many of the patients assigned to conventional community-based services were either immediately or later discharged for refusal of medication.

We do not even know if there was differential drop out between the two types of treatment settings or how the minority of participants still available for follow-up at the end of the study may have differed according to what kind of TAU they were receiving.

Tony Morrison told an interviewer from Lancet that the early intervention services offered a variety of treatments including supportive counseling, family therapy, and even CBT. Although there was blinding of outcome assessments for this trial, it is highly unlikely that there was blinding within treatment settings as to whether participants were assigned to CBT + TAU or TAU alone. We do not know how being assigned to CBT affected receipt of other services.

Again, these complexities add up to a very confusing picture and make group differences exceedingly difficult to interpret.

The between-group effect sizes for CBT were calculated for differences found between participants assigned to CBT + TAU versus those assigned to TAU alone. The goal is to be able to make statements about the advantage of adding CBT to TAU that can be generalized beyond the study. Yet the outcomes recorded for TAU are a composite of outcomes for what could be considered an enhanced TAU combined with what could be considered inadequate TAU or even no TAU, because of the likelihood that participants refusing medication would simply be discharged.

Recall the discussion we had about the fictional composite participant in the study and how generalizations about this composite might not adequately characterize individual participants. Well, things just got even more complicated. We are now talking about a composite control/comparison TAU that is associated with participant characteristics in nonrandom ways.

Overall outcomes for TAU might not adequately characterize outcomes for participants with specific important characteristics.

Overall outcomes for TAU in this particular study would not adequately generalize to TAU in the general community.

Recall that Manchester and Newcastle were chosen as sites for the study because of the availability of special, enhanced comprehensive early intervention services.

Investigators and consumers of their reports of clinical trials need to consider what is accomplished by selection of a TAU as the comparison/control condition.

In this particular study, TAU sometimes involved exposure to a rich set of services. If we compared patients in this context alone, we might be able to make some statements about whether adding CBT has an advantage. However, other participants were assigned to a TAU that actually represented no treatment or quite inadequate treatment. In this context, we might only be learning about whether the nonspecific factors associated with CBT or any credible treatment, such as a continuing relationship, positive expectations, and accountability, made a difference in outcome. Apparent effects of CBT may simply be due to the correction of deficiencies in the TAU that could have been accomplished by a number of less intensive and presumably less expensive means.

At some point, interpretation of the differences found in the study between the intervention and control/comparison groups involves a great deal of speculation and assumptions, many of which cannot be tested, certainly not within the limits of the information provided in the Lancet article.

Why did the investigators not anticipate this unresolvable interpretive mess and avoid it by providing a more suitable control comparison condition?

I will start with this important question in my next blog post. The authors attempted to stack the deck in favor of finding a superiority of CBT by choosing a particular comparison/control condition, but their efforts ultimately proved self-defeating.

But for now, I think I am progressing in building a case that

  • The abstract of the Lancet article is missing vital basic information and is otherwise misleading.
  • Claims that this study produced decisive information about the value of providing CBT to unmedicated persons with schizophrenia are premature and exaggerated.
  • Readers of the Lancet article are denied information that is crucial in understanding what went on in this trial and its implications for messages to the community and for future research.

to be continued

I welcome your comments in the interim.


Mental Health Commitment Laws: Making the Case for U.S. Civil Commitment Reform

Earlier this month, I posted a blog titled Understanding Lack of Access to Mental Healthcare in the US: 3 Lessons from the Gus Deeds Story. In that post, I highlighted how current mental health commitment laws were one of the barriers to accessing mental health care:

 

“….federal and state laws, surrounding the involuntary hospitalization of individuals with mental illness, whilst designed to protect patient’s rights, often leave loved ones and mental health professionals who understand the patient and their illness with no voice, and minimal sway and influence over decisions that get made in courts.”

 

In follow-up to this point, I am blogging today on a recent report published by the Treatment Advocacy Center (a national nonprofit organization) titled Mental Health Commitment Laws: A Survey of the States.

The results of this report validate the experience of those of us who work in the mental health field.  For many of us, it often seems like an uphill battle to get much needed mental health services for patients living with serious mental illness, because our hands are tied by restrictive laws (or restrictive interpretations of such laws).

 

Background:

The deinstitutionalization movement of the 1960s brought a national trend to reform civil commitment laws and a shift of focus to the person's "dangerousness to self or others" as the sole basis for civil commitment (i.e., involuntary treatment or hospitalization).

By the late 1970s, the results of deinstitutionalization were becoming more apparent, and many had started to wonder if perhaps the pendulum had swung too far. Though community integration had improved the lives of some, a large number of desperately ill people had been abandoned to the streets or to the prison system.

This is why it is so important to re-evaluate the laws surrounding civil commitment and change the very reductionistic and rigid focus on imminent risk of violence or suicide as the only grounds that warrant hospital commitment. Another approach has been to minimize the need for involuntary hospitalizations via the use of less intrusive modalities, such as court-ordered outpatient treatment (i.e., AOT, assisted outpatient treatment).

 

Results of the TAC report:

The TAC report analyzed the quality and use of laws that each state had enacted to meet the needs of people with severe mental illness who cannot recognize their own need for treatment. The report graded each state on the quality of the civil commitment laws that determined who received court-ordered treatment for mental illness, under what conditions, and for how long. States also received grades on their use of treatment laws based on a survey of mental health officials.

No state earned a grade of “A” on the use of its civil commitment laws. Seventeen states earned a cumulative grade of “D” or “F” for the quality of their laws, and only 14 states earned a grade of “B” or better for the quality of their civil commitment laws. Twenty-seven states provide court-ordered hospital treatment only to people at risk of violence or suicide, even though most of these states have laws that allow treatment under additional circumstances.

 

The report ended with these recommendations:

 

“The deplorable conditions under which more than one million men and women with the most severe mental illness live in America will not end until states universally recognize and implement involuntary commitment as an indispensable tool in promoting recovery among individuals too ill to seek treatment. To that end, the Treatment Advocacy Center recommends:

· Universal adoption of need-for-treatment standards to provide a legally viable means of intervening in psychiatric deterioration prior to the onset of dangerousness or grave disability

· Enactment of AOT laws by the five states that have not yet passed them – Connecticut, Maryland, Massachusetts, New Mexico and Tennessee

· Universal adoption of emergency hospitalization standards that create no additional barriers to treatment

· Provision of sufficient inpatient psychiatric treatment beds for individuals in need of treatment to meet the standard of 50 beds per 100,000 in population”

 

I could not agree more.

 

To view this report in its entirety, follow this link.


4 Reasons Why Having a Valentine is Good for Your Mental Health

 

It’s that time of year again. Our screens bombard us with images of bouquets of red roses, strawberries dipped in chocolate and French perfumes. Overnight, stores and shop fronts are filled with pink and red window displays heralding the arrival of Valentine’s Day: a celebration of love, romance and friendship.

I have to be honest and declare that I have never been a huge fan of Valentine’s Day. I have always found February 14th to be a bit crass, forced and over-commercialized, but clearly I am in a minority as far as my lack of enthusiasm for this day goes. Some 180 million Valentine’s Day cards are exchanged annually in the U.S. alone, and this day has been celebrated and observed, in various forms, for centuries. From its Western and Christian origins, Valentine’s Day celebrations have now spread all over the world, as far afield as China, Singapore, South Korea, India and Iran.

The ever expanding popularity of this holiday got me thinking about the relationship between love and one’s mental health.  It turns out, there is some scientific basis for understanding what many of us know, intuitively, to be true.  Here are 4 concrete reasons why it’s worth spending some time on February 14th to mark the occasion of Valentine’s Day, not only with our loved ones at home, but in our schools and communities too.

 

#1 Marriage Reduces Symptoms of Depression for Men and Women

It has long been reported that marriage may affect many aspects of mental health, but the most rigorous research comes from the depression literature, which suggests that marriage reduces depressive symptoms for both men and women. Of note, just as getting married decreases depressive symptoms, getting divorced increases them, and those depressive symptoms appear to be long-lasting, remaining elevated for years after the divorce.

A more recent Norwegian survey examining levels of psychological well-being among married, cohabiting and single people appears to affirm these earlier findings. In this study the researchers found that, overall, living with a partner (married or cohabiting) was associated with higher psychological well-being than being single. Moreover, living single after a divorce was experienced as particularly negative.

 

#2 Being Sexually Active Has a Beneficial Impact on One’s Neurochemistry

Oxytocin is a mammalian neurohypophysial hormone (secreted by the posterior pituitary gland) that acts primarily as a neuromodulator in the brain. Plasma oxytocin levels increase during sexual arousal in both women and men and are significantly higher (than baseline) during orgasm/ejaculation.

Elevated levels of oxytocin have long been associated with better mental health, with recent studies suggesting a relationship between elevated oxytocin levels and feelings of interpersonal trust, emotional connection, greater satisfaction with life, and less anxiety and depression.

 

In the United States, at least, Valentine’s Day has progressed beyond romantic love to encompass love of family and the more platonic love of friends and community. The last two reasons highlight the importance of these types of love in our society.

 

#3 Patient-Caregiver Relationship May Directly Influence Progression of Alzheimer’s Disease

A study led by Johns Hopkins and Utah State University researchers suggested that a close relationship between patients with Alzheimer’s disease and their caregivers gave those patients a marked edge over those without such a close caregiver relationship, manifested in the patient retaining mind and brain function over time. This beneficial effect of emotional intimacy, which the researchers observed among participants, was on par with that of some of the medications typically used to treat the dementia.

 

#4 Children Who Feel Loved by Family and Caregivers Are Psychologically More Resilient


Young people’s sense of connection to their parents and other family members is the most consistent protective factor across all health outcomes, including reducing the likelihood that they will engage in violent behavior. Furthermore, research shows that simply developing relationships with caring adults protects “at-risk” youth against becoming involved in violence. The school environment, too, exerts considerable influence on the psychological well-being of young people. Students who feel they are a part of their school are more emotionally healthy and less inclined towards drug and alcohol abuse or suicidal thoughts and attempts.

