U.S. Mental Health Policy Comes of Age

I am delighted to offer Mind the Brain readers a guest blog written by Keith Humphreys, Ph.D.  Dr. Humphreys is a Professor and the Section Director for Mental Health Policy in the Department of Psychiatry and Behavioral Sciences at Stanford University. He is also a Senior Research Career Scientist at the VA Health Services Research Center in Palo Alto and an Honorary Professor of Psychiatry at the Institute of Psychiatry, King’s College, London. His research addresses the prevention and treatment of addictive disorders, the formation of public policy, and the extent to which subjects in medical research differ from patients seen in everyday clinical practice.

Dr. Humphreys has been extensively involved in the formation of public policy, having served as a member of the White House Commission on Drug Free Communities, the VA National Mental Health Task Force, and the National Advisory Council of the U.S. Substance Abuse and Mental Health Services Administration. During the Obama Administration, he spent a sabbatical year as Senior Policy Advisor at the White House Office of National Drug Control Policy. He has also testified on numerous occasions in Parliament and advises multiple government agencies in the U.K.

Follow Professor Humphreys on Twitter @KeithNHumphreys.

 

U.S. Mental Health Policy Comes of Age

 

From the very first days of the U.S. health insurance system, the stigma of mental illness was formally codified into benefit design.  Both public and private insurers provided inferior coverage for mental health care, if they even provided it at all.  For decades, this was not remotely controversial.  Labor unions were quite happy to trade “mental for dental” when negotiating fringe benefits for their workers, politicians suffered no electoral consequences for passing insurance legislation that discriminated against people with mental illness, and families who needed better health benefits for mentally ill loved ones were typically too ashamed to speak up.  But thanks to brave advocates, inspiring bipartisan political leadership and cultural changes in perceptions of mental illness, dramatically improved mental health insurance coverage has at last arrived on the American scene.

 

Three laws have transformed the landscape of mental health insurance policy in the span of only a half-decade.

 

Medicare

In 2008, a sweeping reform of Medicare passed which righted an injustice that had plagued the program since its inception.  Medicare originally covered outpatient mental health and addiction treatment at a far lower rate (50%) than other outpatient care (80%).  The backbreaking 50% outpatient co-pay effectively prevented most enrollees from accessing outpatient mental health care.  The 2008 law phased this payment disparity out over time, eliminating it entirely as of January 1, 2014.  Medicare now covers 80% of outpatient mental health care costs, which is good news for its 50 million current enrollees and the 150,000 new enrollees it gains each month.

 

Attribution: Titomuerte at en.wikipedia; Photographer Mischa Fisher, Mental Health Parity Rally, March 5th 2008.


Also in 2008, Congress passed and President G.W. Bush signed the Wellstone-Domenici Mental Health Parity and Addiction Equity Act. Named for its two leading Senatorial advocates (interestingly, the proudly liberal Paul Wellstone and the staunchly conservative Pete Domenici), the law requires companies with more than 50 employees, as well as Medicaid managed care plans, to make their offered mental health benefits comparable to those for other illnesses.  These parity protections apply to over 100 million Americans.

 

Image Credit: Pete Souza


The 2010 Affordable Care Act aka “Obamacare” went even further. It extended parity protections to individuals who receive insurance from small businesses and to those who purchase it in the individual market (e.g., on a state or federal health insurance exchange).  It also defines mental health as an essential health care benefit that all plans it regulates must offer.  Last but not least, the law of course also provides insurance to the uninsured population, which has a high rate of psychiatric disorders.  Over 60 million Americans will receive improved mental health insurance coverage because of the provisions of the Affordable Care Act.

 

Although enormous work remains necessary to implement these laws, they together bring the U.S. closer than it has ever been to providing mental health treatment on demand.  Coping with mental illness is never going to be easy, but at least mental health policy is now directed at making the process easier rather than harder.


The Latest and Greatest in Treatment for PTSD: Magic Bullets and Cutting Edge Innovation

I am frequently asked to talk about PTSD to professional audiences and, without exception, I get a post-talk question asking about my experience with some experimental intervention that someone read about in a newsmagazine or heard about on TV.

Internally, I always groan.

Having just spent 60-90 minutes poring over carefully crafted PowerPoint slides about the evidence base for PTSD treatments and what best practices consist of, why am I always confronted with a zealous audience member who is obsessed with the new, the innovative, or the magic bullet?

In the interest of full disclosure, I should share that my viewpoint is that of a health services researcher.  I approach PTSD treatment with a basic belief that we already have pretty good treatments, and that the obstacles to better outcomes for PTSD lie more in how we implement those treatments, the limitations of the systems that provide care, massive problems of access to care (i.e., those who need care the most simply can't get it, for a myriad of reasons), and healthcare disparities (an individual's outcomes for PTSD are more likely linked to their zip code than to their genes or neurotransmitters).

In short, I usually maintain a healthy skepticism toward experimental or magic-bullet treatments for PTSD, which often get a lot of media attention and can be very seductive to the brain of a researcher or clinician who spends their days trying to help individuals living with PTSD.

 

Still, today I am curbing my skepticism and with much enthusiasm am writing about some of the hottest ideas for innovation in the treatment of PTSD.

 

Please note: MANY of these approaches are still considered EXPERIMENTAL, and I am listing them in no particular order of importance.

1. Mind-Body Practices for PTSD

Image Credit: Cornelius383


Mind-body practices are increasingly used to reduce PTSD symptoms.  Yoga, tai chi, mindfulness-based stress reduction, meditation, and deep breathing are some examples.  About 16 rigorous studies have been done to date, most with small sample sizes.  While early findings suggest such practices can have a beneficial impact on symptoms like intrusive memories, avoidance, and heightened emotional arousal, there is insufficient evidence to support their use as standalone treatments, though they can be recommended as adjunctive treatment.

 

2. Cervical Sympathetic Blockade and Stellate Ganglion Block for PTSD 

In 2008, reports began to emerge that a minimally invasive manipulation of sympathetic nerve tissue relieved anxiety in patients with PTSD.  The procedure consists of injecting a local anesthetic into sympathetic cervical nerve tissue at the C6 level, and patients apparently reported immediate relief.  In 2012, a case series was published in which treatment-resistant veterans with PTSD received a stellate ganglion block and were assessed with the CAPS (Clinician-Administered PTSD Scale) before and after the intervention. Afterward, 5 of 9 patients experienced significant improvement, though the benefits diminished over time and were not universal.  Controlled trials are currently underway to investigate this intervention further.

 

3. Virtual Reality Exposure Therapy

Virtual reality exposure therapy uses real-time computer graphics, body-tracking devices, visual displays, and other sensory input devices to give the patient the experience of being immersed in a virtual environment. It is an enhanced version of the imaginal exposure typically used as part of trauma-focused psychotherapies. In 2001, an open clinical trial of virtual reality exposure therapy yielded promising results. It is currently being studied under controlled conditions.

 

4. D-Cycloserine

D-Cycloserine is a partial agonist at the NMDA receptor (a brain receptor that plays an essential role in learning and memory). It has been used to treat social phobia and panic disorder and to enhance the effects of psychological therapies for those disorders.  Preliminary data suggest it can be a useful adjunct to evidence-based psychotherapies for patients living with severe PTSD.

 

5. Ketamine

Ketamine is a non-barbiturate anesthetic and NMDA-receptor antagonist that is typically administered intravenously.  It has been used for years in patients with severe burns, and it was in this use that its dissociative properties became apparent.  Retrospective studies show that those who received ketamine after a traumatic event were less likely to develop PTSD.  It has been postulated that ketamine may disrupt the process by which traumatic memories are laid down. A 2014 JAMA study reported an RCT demonstrating a rapid reduction in symptom severity following ketamine infusion in patients with chronic PTSD.

 

6. Increasing the Intensity of Treatments

Experimenting with treatment packaging, British researchers compressed trauma-focused psychotherapies for PTSD into a seven-day intensive treatment.  This worked as well as treatment as usual (the same therapy delivered once a week over 12 weeks), and the intensive format was postulated to be more efficient and convenient; it was also associated with faster improvement in symptoms and lower dropout rates.

 

7. Memantine

Memantine is a noncompetitive NMDA antagonist that is thought to protect neurons against glutamatergic destruction and hence to prevent the hypothesized neurodegeneration in the hippocampus that contributes to the memory problems related to PTSD.  In a small 2007 open-label trial, memantine was associated with some encouraging outcomes.  Double-blind, placebo-controlled trials are pending.


On bitopertin and sharing data from clinical trials

While modern antipsychotics do an OK job of improving the positive symptoms of schizophrenia – such as auditory hallucinations or fixed delusional ideas – there is a lot left to be desired in terms of how these medications – or any psychotropics, for that matter – work against negative symptoms. The negative symptoms of schizophrenia, including deficits in a variety of mental functions such as willpower and the ability to enjoy things, express emotions, or associate with others, have been an elusive target from a treatment perspective. In other words, available medications for schizophrenia might help a patient deal with “the voices,” but, because they do not also improve that patient’s negative symptoms, his ability to return to a meaningful and productive life is not substantially helped.

A promising new medication for negative symptoms — almost

Not surprisingly, any intervention that promises to improve negative symptoms makes the news. This was exactly what happened with bitopertin, a new Roche medication aimed at negative symptoms, which generated enough enthusiasm during its preliminary testing (Phase I and II clinical trials) to move to Phase III trials.

So, one wonders, why didn’t a brand-new bitopertin study (ClinicalTrials.gov identifier NCT00616798), with a solid design and impressive action on negative symptoms, get any attention – from the media, psychiatrist gurus, patient advocates, anyone? The study, which just came out in the flagship psychiatric journal JAMA Psychiatry, generated no press or media coverage that I am aware of.

Actually, this rather surprising silent treatment is not an accident.

It turns out that this study was preempted by a press announcement Roche made this past January stating that bitopertin did not live up to its promise in Phase III testing.  Meaning that… the just-published JAMA Psychiatry study — the very study which generated enough enthusiasm to justify the seemingly well-deserved promotion bitopertin received further down the pipeline — is old, outdated news.

To summarize: on one hand, great positive results from a just published study, which, as it turns out, are actually quite old. On the other hand, negative results in Phase III testing, according to a news conference, but no publications to date.

The result: bitopertin is at the center of what is essentially a skewed and likely misleading state of affairs – at least from the point of view of official scientific validation via publication in peer reviewed journals.

How did we get here?

It seems that this bitopertin situation is the predictable result of a series of unfortunate events including:

  1. delays in reporting the NCT00616798 study results: according to the official ClinicalTrials.gov data, the study was completed in February 2010, more than 3 1/2 years before submission for publication (original submission date August 30, 2013)
  2. a lengthy peer review process, just shy of half a year: original submission August 30, 2013; final revision received and accepted January 24, 2014
  3. no public sharing of data for any of the relevant studies that have been completed.

There is a fairly straightforward solution to prevent such complications: opening up clinical trial data to the public. Europe is already pursuing an initiative to make reporting of all clinical trials data mandatory. This is an important step forward. But real progress will not occur unless and until patient-level data are made available to the general public as close as possible to the date of data collection completion. Adrian Preda, M.D.


Claire Underwood From Netflix’s House of Cards: Narcissistic Personality Disorder?

Last month I used the character of Frank Underwood as a “case study” to illustrate the misunderstood psychiatric diagnosis of Antisocial Personality Disorder, and many of you asked: Well, what about his wife, Claire?

Good question!  You asked, and today I will do my best to answer.

 

SPOILER ALERT: For those of you who have not been on a streaming binge and watched all of Season 2 yet, consider yourself warned. 

 

Image: Netflix


Clinical lore would certainly suggest that Claire herself must have a personality disorder of some kind – a sort of fatal attraction, in which a couple is drawn together because something in their personality patterns is complementary and reciprocal.

She does appear to have mastered the art of turning a blind eye to Frank’s more antisocial exploits.  She is a highly intelligent woman, and she must have some inkling that her husband may be involved in the deaths of Zoe Barnes and Peter Russo.  But if she has an inkling, she does not show it.

Claire, from what we know, does not engage in outright antisocial behavior.  Unlike Frank, she has not murdered anyone and we have not seen her engage in very reckless or impulsive outbursts.

However, she rarely shows emotion—her smiles seem fake, her laugh empty, her expressions bland.  She is more restrained and guarded than Frank, and she does not reveal her inner thoughts to the viewer the way Frank does, so it is much harder to know what could be going on in her mind.

Still, I think I have seen enough to venture forth with an assertion that she may have a Narcissistic Personality Disorder.

 

What is Narcissistic Personality Disorder?

 

A pervasive pattern of grandiosity, need for admiration, and lack of empathy, beginning by early adulthood and present in a variety of contexts, as indicated by five (or more) of nine criteria.

 

Below are the five criteria that I think apply to Claire:

 

1) Has a sense of entitlement (i.e. unreasonable expectations of especially favorable treatment or automatic compliance with his or her expectations)

 

Image: Netflix


She expected Galloway to take the blame for the leaked photos and eventually claim it was all a “publicity stunt,” thus ruining his own reputation and image.  She expressed no regret that her ex-lover was cornered into doing this on her behalf, and no remorse that it almost ruined his life and his relationship with his fiancée. She felt entitled to this act because she is “special” and expects that people will “fall on their swords” for her.

 

2) Is interpersonally exploitative (i.e. takes advantage of others to achieve his or her own ends)

 

Claire manipulates the first lady, Tricia Walker, into believing that Christina (a White House aide) is interested in the president. She pretends to be a friend, wangles her way into becoming the first lady’s confidante, and persuades her to enter couples therapy with the president.  All of this is actually part of an elaborate plan to help Frank take the president down so that he can become president and she (Claire) can usurp Tricia as first lady.

Another example: Claire is pressured by the media into revealing that she once had an abortion, but she lies and states that the unborn child was a result of rape (presumably to save political face).  Again, she shows no remorse about her lie and instead profits from it, gaining much sympathy and public support.

 

3) Lacks empathy: is unwilling to recognize or identify with the feelings and needs of others

 

Image: Netflix


This was best seen in the way Claire dealt with her former employee Gillian Cole’s threat of a lawsuit – she pulled a few strings and threatened the life of Gillian’s unborn baby.  In addition to the obvious lack of empathy, there was the simmering rage she harbored toward Gillian for daring to cross her.  Again, entitlement, narcissistic rage, and a lack of empathy would explain the evil threat she made, to Gillian’s face, about the baby.

 

4) Is often envious of others or believes that others are envious of him or her

 

I think part of the reason Claire was so angry at Gillian was that, deep down, she was envious of her pregnancy.  We know that, in parallel, Claire is consulting a doctor about becoming pregnant and is told that her chances are slim.  This is such a narcissistic injury to Claire that she directs her rage at Gillian.  I don’t think she was even consciously aware of how envious she was of Gillian for being pregnant.

Another example would be the look on her face when Galloway indicates he is madly in love with his fiancée and wishes to make a life with her.  For a second her face darkens – a flash of jealous rage – which then gives way to indifference and almost pleasure at his eventual public humiliation.

 

5) Shows arrogant, haughty behaviors or attitudes 

 

Image: Netflix


Correct me if I am wrong, but Claire just does not appear to be that warm or genuine and has an almost untouchable air about her. Furthermore, we only ever see her with people who work for her (i.e. have less power than her) or with people more powerful than her (i.e. whose power she wants for herself). Other than Frank, where are her equals? Her oldest friends and colleagues? Her family? People who might not be influenced by her title or power?

 

One last comment – in Season 2 Claire certainly comes across as more ruthless and power-hungry than the Claire of Season 1. Whether she is now showing her true colors and dropping her facade, or just becoming more lost in Frank’s world and hence looking more like him, is unclear to me…

 

I suppose we will find out in Season 3!


Are meta-analyses conducted by professional organizations more trustworthy?

gold standard

Updated April 18, 2014 (See below)

A well-done meta-analysis is the new gold standard for evaluating psychotherapies. Meta-analyses can overcome the limitations of any single randomized controlled trial (RCT) by systematically integrating results across studies and identifying and contrasting outliers. They have the potential to resolve inevitable contradictions in findings among trials. But meta-analyses are constrained by the quality and quantity of available studies. Their validity also depends on adherence to established standards in conduct and reporting, as well as the willingness of those doing a meta-analysis to concede the limits of the available evidence and refrain from going beyond it.

Yet meta-analytic malpractice is widespread. Authors with agendas strive to make their point more strongly than the evidence warrants. I have shown how meta-analysis was misused to claim that long-term psychoanalytic psychotherapy (LTPP) is more effective than briefer alternatives. And then there are the claims of a radical American antiabortionist, made via a meta-analysis in the British Journal of Psychiatry, that abortion accounts for much of the psychiatric disturbance among women of childbearing age.

Funnel Plot


Meta-analyses often come with intimidating statistical complexity and bewildering graphic displays of results. What are consumers to do when they have neither the time nor the ability to interpret the findings for themselves? Is there particular reassurance in a meta-analysis having been commissioned by a professional organization? Does association with a professional organization certify a meta-analysis as valid?

That is the question I am going to take up in this blog post. The article I am going to be discussing is available here.

Hart, S. L., Hoyt, M. A., Diefenbach, M., Anderson, D. R., Kilbourn, K. M., Craft, L. L., … & Stanton, A. L. (2012). Meta-analysis of efficacy of interventions for elevated depressive symptoms in adults diagnosed with cancer. Journal of the National Cancer Institute, 104(13), 990-1004.

In the abstract, the authors declare

 Our findings suggest that psychological and pharmacologic approaches can be targeted productively toward cancer patients with elevated depressive symptoms. Research is needed to maximize effectiveness, accessibility, and integration into clinical care of interventions for depressed cancer patients.

Translation: The evidence for the efficacy of psychological interventions for cancer patients with elevated depressive symptoms is impressive enough to justify dissemination of these treatments and integration into routine cancer care. Let’s get on with the rollout.

The authors did a systematic search, identifying

  • 7700 potentially relevant studies, narrowed down to
  • 350 full-text articles that they reviewed, from which they selected
  • 14 trials from 15 published reports for further analysis;
  • 4 studies lacked the data for calculating effect sizes, even after attempts to contact the authors, so
  • 10 studies were at first included, but
  • 1 then had to be excluded as an extreme outlier in its claimed effect size, leaving
  • 9 studies to be entered into the meta-analysis, one of them yielding 2 effect sizes.

The final effect sizes entered into the meta-analysis were 6 for what the authors considered psychotherapy, drawn from 5 different studies, plus 4 pharmacologic comparisons. I will concentrate on the 6 psychotherapy effect sizes from the five studies. You can find links to their abstracts or the actual studies here.

Why were the authors left with so few studies? They opened their article by claiming over 500 unique trials of psychosocial interventions for cancer patients since 2005, of which 63% were RCTs. But most evaluations of psychosocial interventions do not recruit patients with sufficient psychological distress or depressive symptoms to register an improvement. Where does that leave claims that psychological interventions are evidence-based and effective? The literature is exceedingly mixed as to whether psychosocial interventions benefit cancer patients, at least those coming to clinical trials. So the authors were left to make their case with the few studies that recruited patients on the basis of heightened depressive symptoms.

Independently evaluating the evidence

Three of the 6 effect sizes classified as psychotherapeutic—including the 2 contributing most of the patients to the meta-analysis—should have been excluded.

The three studies (1,2,3) evaluated collaborative care for depression, which involves substantial reorganization of systems of care, not just providing psychotherapy.  Patients assigned to the intervention groups of each of these studies received more medication and better monitoring.  In the largest study, the low-income patients assigned to the control group had to pay for care out of pocket, whereas care was free for patients assigned to the intervention group.  Not surprisingly, patients in the intervention group got more and better care, including medication management.  There was also a lot more support and encouragement offered to patients in the intervention conditions.  In these three studies, improvement specifically due to psychotherapy, and not something else, cannot be separated out.

I have done a number of meta-analyses and systematic reviews of collaborative care for depression. I do not consider such wholesale systemic interventions to be psychotherapy, nor am I aware of other articles in which collaborative care has been treated as such.

Eliminating the collaborative care studies leaves effect sizes from only 2 small studies (4, 5).

One (4) contributed 2 effect sizes, based on comparisons of 29 patients receiving cognitive behavior therapy (CBT) and 23 receiving supportive therapy to the same 26-patient no-treatment control group. There were problems in the way this study was handled.

  • The authors of the meta-analysis considered the supportive therapy group an intervention, but supportive therapy is almost always treated as a comparison/control condition in psychotherapy studies.
  • The supportive therapy had better outcomes than CBT. If supportive therapy were reclassified as a control comparison, CBT would have had a negative effect size, not the positive one entered into the meta-analysis.
  • Including two effect sizes from the same study violates the standard assumption that all effect sizes entered into a meta-analysis are independent.

Basically, the authors of the meta-analysis counted the waitlist control group twice in what was already a small number of effect sizes. Doing so strengthened their case that the evidence for psychotherapeutic intervention for depressive symptoms among cancer patients is strong.

The final study (5) randomly assigned 45 patients to either problem-solving therapy or a waitlist control, but results for only 37 patients were available for analysis. The study had a high risk of bias because the analyses were not intent-to-treat.  It was also seriously underpowered, with less than a 50% probability of detecting a positive effect even if one were present.
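To make the power point concrete, here is a back-of-the-envelope calculation using a normal approximation to the two-sample t-test. The "medium" true effect size (d = 0.5) and the 19/18 split of the 37 analyzed patients are my illustrative assumptions, not figures taken from the study:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n1, n2 = 19, 18      # assumed split of the 37 patients analyzed
d = 0.5              # assumed "medium" true effect size (Cohen's d)
z_crit = 1.96        # two-sided alpha = .05 critical value

# Noncentrality parameter for a two-sample comparison
ncp = d * math.sqrt(n1 * n2 / (n1 + n2))

# Approximate power: probability the test statistic clears the critical value
power = normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)
print(round(power, 2))  # about 0.33, well under the 50% mark
```

Even granting a generously assumed medium effect, approximate power is only about a third, consistent with the point that a null result was the most likely outcome of this trial.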

Null findings are likely with such a small study, and had the authors reported null findings, the study would probably not have been published, because being too small to detect anything but a null effect is a reasonable criticism.  So we are more likely to find positive results from such small studies in the published literature, but they will probably not be replicated in larger studies.

Once we eliminate the three interventions misclassified as psychotherapy and account for the waitlist control group of one study being counted twice as a comparator, we are left with only two small studies. Many authorities suggest this is insufficient for a meta-analysis, and it certainly cannot serve as the basis for the sweeping conclusions these authors wish to draw.

How the authors interpreted their findings

The authors declare that they find psychotherapeutic interventions to be

reliably superior in reducing depressive symptoms relative to control conditions.

They offer reassurance that they have checked for publication bias. They should have noted that tests for publication bias are low-powered and not meaningful with such small numbers of studies.

They then suddenly offer a startling conclusion, without citation or further explanation:

The fail-safe N (the number of unpublished studies reporting statistically nonsignificant results needed to reduce the observed effect to statistical nonsignificance) of 106 confirms the relative stability of the observed effect size.

What?!  Suppose we accept the authors’ claim that they have five psychotherapeutic intervention effect sizes, not the two that I claim. How can they assert that there would have to be 106 null studies hiding in desk drawers to unseat their conclusion? Note that they had already excluded five studies from consideration, four because they could not obtain basic data from them, and one because the effects claimed for problem-solving therapy were too strong to be credible. So this is an already trimmed-down group of studies.

In another of my blog posts I indicated that clinical epidemiologists, as well as the esteemed Cochrane Collaboration, reject the validity of the fail-safe N, and I summarize some good arguments against it there.  But just think about it: on the face of it, do you believe the results are so strong that it would take over 100 negative studies to change our assessment? This is a nonsensical bluff intended to create false confidence in the authors’ conclusion.
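To see how readily the fail-safe N produces impressive-looking numbers, here is Rosenthal's formula in a few lines. The z-values below are made up for illustration; they are not taken from this meta-analysis:

```python
def failsafe_n(z_scores):
    """Rosenthal's fail-safe N: the number of unpublished null (z = 0)
    studies needed to pull the combined one-tailed result below
    significance (combined z under 1.645; note 1.645**2 = 2.706)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    return (z_sum ** 2) / 2.706 - k

# Five hypothetical, modestly positive trials (illustrative values only)
zs = [2.1, 1.8, 2.5, 1.6, 2.0]
print(round(failsafe_n(zs)))  # about 32 "file-drawer" studies from just 5 trials
```

Because the formula assumes the hidden studies average exactly z = 0, rather than possibly being negative, it systematically overstates how secure a result is, which is one of the standard arguments against it.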

The authors perform a number of subgroup analyses that they claim show CBT to be superior to problem-solving therapy (PST). But the subgroup analyses are inappropriate. For CBT, they take the effect sizes from two small studies in which the intervention and control groups differed simply in whether patients received the therapy. For PST, they take the effect sizes from the very different, large collaborative care interventions that involved changing whole systems of care, in which patients assigned to the intervention group got a lot more than just psychotherapy.

There is no basis for making such comparisons. The collaborative care studies, as I noted, involved not only providing PST to some of the patients, but also medication management and free treatment, while patients in the control condition – who were low income – had to pay for care and so received little. There are just too many confounds here. Recall from my previous blog posts that effect sizes do not characterize a treatment but rather a treatment in comparison to a control condition. The effect sizes the authors cite are invalid for PST, and the conditions of the collaborative care studies versus the small CBT studies are just too different.

https://www.youtube.com/watch?v=R-sLX5UZaxk


Is you or ain’t you a meta-analysis organized by the Society of Behavioral Medicine?

 The authors wish to acknowledge the Society of Behavioral Medicine and its Evidence-Based Behavioral Medicine Committee, which organized the authorship group…Society of Behavioral Medicine, however, has not commissioned, supervised, sanctioned, approved, overseen, reviewed, or exercised editorial control over the publication’s content. Accordingly, any views of the authors set forth herein are solely those of the authors and not of the Society of Behavioral Medicine.

Let’s examine this denial in the context of other information. The authors included a recent president of SBM and other members of the leadership of the organization, including one person who would soon be put forward as a presidential candidate.

The Spring/Summer 2012 SBM newsletter states:

 The Collaboration between the EBBM SIG and the EBBM Committee (Chair: Paul B. Jacobsen, PhD) provided peer review throughout the planning process. At least two publications in high impact journals have already resulted from the work.

One of the articles to which the newsletter refers is the meta-analysis of interventions for depressive symptoms. I was a member of the EBBM Committee while this meta-analysis was being written. This and the earlier meta-analyses were inside jobs done by the SBM leadership. A number of the authors are advocates for screening for distress. Naysayers and skeptics on the EBBM Committee were excluded.

The committee neither openly solicited authors for this meta-analysis in its meetings nor discussed its progress. When I asked David Mohr, one of the eventual authors, why the article was not being discussed in the meetings, he said that the discussions were being held by telephone.

Notably missing from the authors of this meta-analysis is Paul Jacobsen, who chaired the EBBM Committee during its writing. He has published meta-analyses himself and is arguably more of an expert on psychosocial intervention in cancer care than almost any of the authors. Why was he not among them? He is given credit only for offering “suggestions regarding the conceptualization and analysis” and for providing “peer review.”

It would have been exceedingly awkward if Jacobsen had been listed as an author. His CV notes that he received $10 million from Pfizer to develop means of assuring the quality of care provided by oncologists. He would therefore have had to declare a conflict of interest on a meta-analysis from SBM evaluating psychotherapy and antidepressants for cancer patients. That would not have looked good.

Just before the article was submitted for publication, I received a request from one of the authors asking my permission to be mentioned in the acknowledgments. I was taken aback because I had never seen the manuscript, and I refused.

I know, as Yogi Berra would say, we’re heading for déjà vu all over again. In earlier blog posts (1, 2) I criticized a flawed meta-analysis done by this group concerning psychosocial interventions for pain. When I described that meta-analysis as “commissioned” by SBM, I immediately got a call from the president asking me for a correction. I responded by posting a link to an email from one of the authors describing that meta-analysis, as well as this one, as “organized” by SBM.

So, we are asked to believe the article represents the views not of SBM but only of the authors – yet these authors were hand-picked and include some of the leadership of SBM. Did the authors take off their hats as members of the governance of SBM during the writing of the paper?

The authors are not a group of graduate students who downloaded some free meta-analysis software. There were strong political considerations in their selection, but as a group they have experience with meta-analyses. Nor is the Journal of the National Cancer Institute (JNCI) some mysterious fly-by-night journal that is not indexed in ISI Web of Science. To the contrary, it is a respected, high-impact journal (JIF = 14.3).

As with the meta-analysis of long-term psychoanalytic psychotherapy with its accompanying editorial in JAMA, followed by publication of a clone in British Journal of Psychiatry, we have to ask whether the authors had privileged access to publishing in JNCI with minimal peer review. Could just anyone have gotten such a meta-analysis accepted there? After all, there are basic, serious misclassifications of the studies that provided most of the patients included in the meta-analysis for psychotherapeutic intervention. There are patently inappropriate comparisons of different therapies delivered in very different studies, some without the basis of random assignment. I speak for myself, not PLOS One, but if, as an Academic Editor, I had received such a flawed manuscript, I would have recommended sending it back to the authors immediately, without it going out for review.

Imagine that this meta-analysis were written/organized/commissioned/supported by pharmaceutical companies

What we have is an exceedingly flawed meta-analysis that reaches a seemingly foregone conclusion promoting the dissemination and implementation of services by members of the very organization from which it came. The authors rely on an exceedingly small number of studies, bolstered by the recruitment of some that are highly inappropriate for addressing the question of whether psychotherapy improves depressive symptoms among cancer patients. Yet the authors’ conclusions are a sweeping endorsement of psychotherapy in this context, unqualified by any restrictions. It is a classic use of meta-analysis for marketing purposes – the branding of services being offered – not scientific evaluation. We will see more of these in future blog posts.

If the pharmaceutical industry had been involved, the risk of bias would have been obvious and skepticism would have been high.

But we are talking about a professional organization, not the pharmaceutical industry. We can see that the meta-analysis is flawed, but we should also ask whether it is flawed precisely because it was written under a conflict of interest.

There are now ample demonstrations that practice guidelines produced by professional organizations often serve their members’ interests at the expense of evidence. Formal standards have been established for evaluating the process by which these organizations produce guidelines. When applied to particular guidelines, both the process and the outcome often come up short.

So, we need to be just as skeptical about meta-analyses produced by professional organizations as we are about those produced by the pharmaceutical industry. No, Virginia, we cannot relax our guard just because a meta-analysis has been done by a professional organization.

If this example does not convince you, please check out a critique of another one written/organized/commissioned/supported by the same group (1, 2).

UPDATE (April 18, 2014)

An alert reader scrutinized the meta-analysis after reading my blog post and found something quite interesting in the table to the left (click on the table to enlarge). What you see is that every comparison worked out extraordinarily well – too well.

The problem, of course, is that these comparisons are inappropriate, as discussed in the blog. The comparisons hinge upon studies being misclassified as psychotherapy when they were actually complex collaborative care interventions, as well as upon comparisons of problem-solving therapy with cognitive behavior therapy when the patients receiving problem-solving therapy were not randomized to it. Rather, they were randomized to a condition in which intervention patients got a combination of free medication, careful medication management, and the option of problem-solving therapy, whereas control patients had to pay for treatment and received substantially less care of any kind. This is clearly meta-analysis malpractice of the highest order.

See my discussion of an exchange of letters with the authors here. Go and comment yourself about this study at PubMed Commons here.
