Did the FDA antidepressant warnings increase suicidality?

A new study just published in the BMJ makes this very point.

Changes in antidepressant use by young people and suicidal behavior after FDA warnings and media coverage: quasi-experimental study

The title makes two claims:

  1. That the US Food and Drug Administration warnings about a possible increased risk of suicidality with antidepressants (advisories beginning in 2003, followed by the black box warning of October 2004) led to a change in antidepressant prescriptions.
  2. Further, that there were changes in suicidal behavior following the said US FDA warnings.

While the title does not indicate the direction of the changes, the study conclusions carry little ambiguity:

Safety warnings about antidepressants and widespread media coverage decreased antidepressant use, and there were simultaneous increases in suicide attempts among young people.

This is obviously a big claim, with far-reaching implications: from primary care physicians who might consider changing their antidepressant prescription practices, to policy makers deciding on guidelines about antidepressant approvals and reimbursement.

In this post we will discuss some of the study limitations which, for unclear reasons, seem to have been all but ignored in the over-excited welcome the study received in the mainstream media.

Our goal here is not to discuss subtle academic limitations but rather obvious ones that could have been picked up even by a casual yet critical reader of the paper. We will then cursorily survey the media presentations of the paper and assess the quality of their coverage in terms of balance and fair criticism.

Let’s start with some of the study’s most overt shortcomings:

  1. The title is not entirely informative. The study reviews US data, yet the title seems to indicate worldwide findings.
  2. The study uses a quasi-experimental design, assessing changes in outcomes after the FDA warnings while controlling for pre-existing trends. However, the reliability of those controls is not entirely convincing (see the sketch after this list).
  3. Not much is known about the relationship (if any) between the individuals entered into the study because they were prescribed antidepressants and the individuals who took psychotropic overdoses.
  4. The study uses psychotropic drug poisoning data as a proxy for suicide, but is that a good proxy?
  5. The study does not address the possibility that suicide has been on the rise for reasons having nothing to do with antidepressant prescriptions (such as the economic crisis of recent years).
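
To make limitation #2 concrete, here is a minimal sketch (in Python, with entirely made-up numbers; nothing here comes from the BMJ paper) of the kind of segmented regression that typically sits underneath such a quasi-experimental design. The "effect" of the warning is carried entirely by the level-change and slope-change coefficients, which are only as trustworthy as the assumption that the pre-warning trend would have continued unchanged:

    import numpy as np
    import statsmodels.api as sm

    # Simulated quarterly antidepressant-use rates (hypothetical numbers).
    rng = np.random.default_rng(0)
    quarters = np.arange(24)                  # quarters 0..23; warning at quarter 12
    warning = (quarters >= 12).astype(float)  # 1 after the FDA warning
    time_after = np.where(quarters >= 12, quarters - 12, 0)

    # A rising pre-warning trend, then a drop in level and slope afterward.
    rate = (10 + 0.5 * quarters - 2.0 * warning - 0.3 * time_after
            + rng.normal(0, 0.5, quarters.size))

    # Segmented regression: pre-existing trend, level change, slope change.
    X = sm.add_constant(np.column_stack([quarters, warning, time_after]))
    fit = sm.OLS(rate, X).fit()
    print(fit.params)  # [intercept, pre-trend, level change, slope change]

    # Any "decline caused by the warning" is measured against the projected
    # continuation of the pre-warning trend; if that projection is wrong,
    # so is the estimated effect.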

Big media's enthusiastic welcome


First the Washington Post (Dennis) reports:

As a result [of the FDA warnings] antidepressant prescriptions fell sharply for adolescents age 10 to 17 and for young adults age 18 to 29. At the same time, researchers found that the number of suicide attempts rose by more than 20 percent in adolescents and by more than a third in young adults.

Comment from a non-study affiliated expert? YES

How many limitations are discussed? NONE

According to NBC News (Raymond):

New research finds the warning backfired, causing an increase in suicide attempts by teens and young adults. After the FDA advisories and final black box warning that was issued in October 2004 and the media coverage surrounding this issue, the use of antidepressants in young people dropped by up to 31 percent.

Comment from a non-study affiliated expert? NO

How many limitations are discussed? NONE

Reuters (Seaman) reports that:

Antidepressant use decreased by 31 percent among adolescents, about 24 percent among young adults and about 15 percent among adults after the warnings were issued. At the same time, there were increases in the number of adolescents and young adults receiving medical attention for overdosing on psychiatric medicines, which the authors say is an accurate way to measure suicide attempts. Those poisonings increased by about 22 percent among adolescents and about 34 percent among young adults after the warnings. That translates to two additional poisonings per 100,000 adolescents and four more poisonings per 100,000 young adults, the researchers write.

Comment from a non-study affiliated expert? YES

How many limitations are discussed? ONE (limitation #5)

According to USA TODAY (Painter):

Warnings that antidepressant medications might prompt suicidal thinking in some young people may have backfired, resulting in more suicide attempts, new research suggests.

Comment from a non-study affiliated expert? YES

How many limitations are discussed? NONE

Forbes (DiSalvo) reports:

Antidepressant use fell 31 percent among adolescents and 24 percent among young adults after the FDA warnings, according to the study. Suicide attempts increased by almost 22 percent among adolescents and 33 percent among young adults in the same time period. Suicide attempts tracked in the study were largely the result of drug overdoses.

Comment from a non-study affiliated expert? NO

How many limitations are discussed? NONE

NPR (Stein) also reports on the story:

Antidepressant use nationally fell 31 percent among adolescents and 24 percent among young adults, the researchers reported. Suicide attempts increased by almost 22 percent among adolescents and 33 percent among young adults.

Comment from a non-study affiliated expert? YES

How many limitations are discussed? ONE (limitation #5)

Finally, the Boston Globe (Freyer) concludes that

instead of declining as hoped, suicide attempts over the next six years showed a “small but meaningful” uptick among people ages 10 to 29, according to a study published Wednesday in the journal BMJ. That increase followed a substantial drop in the use of antidepressants.

Comment from a non-study affiliated expert? YES

How many limitations are discussed? NONE

A few observations


To summarize: big media's portrayal of the situation is straightforward. The FDA black box warnings were followed by an almost 30% decrease in antidepressant use and an almost 30% increase in suicide attempts among adolescents and young adults. The conclusion, while not stated directly, is implicit: antidepressants decrease suicide in this population.

The issue is that this conclusion is at odds with what the study really found:

1. The decrease in antidepressant prescriptions is not absolute but relative to a forward projection of increasing antidepressant use over the studied period. All is good if the projection is accurate, but there is no way of actually assessing how correct these sorts of predictions really are.

2. There is no absolute increase in suicides either. The reported increase is not in suicide but in psychiatric drug poisonings, which might be an entirely different beast than suicide. Even if we accept the authors' decision to use psychotropic poisonings as a proxy for suicide, the reported increase is again not absolute but relative to a predicted trajectory, of unknown accuracy, for psychotropic poisonings (a toy illustration follows point 3 below).

3. Or maybe the suicide rate was on the rise anyway, for reasons that have nothing to do with the FDA warnings or antidepressant prescriptions. As discussed above, the economic crisis of the last few years might have independently contributed to this state of affairs.
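
A toy numeric illustration of point 2 (hypothetical numbers only, not taken from the BMJ study): if the model projects a falling poisoning rate, a perfectly flat observed rate registers as a growing relative "increase".

    # Hypothetical poisonings per 100,000: a projection that assumes decline,
    # versus observed rates that are simply flat.
    projected = [40, 38, 36, 34]
    observed = [40, 40, 40, 40]

    for p, o in zip(projected, observed):
        print(f"relative 'increase': {100 * (o - p) / p:.1f}%")

    # Output climbs from 0.0% to 17.6% even though nothing changed in
    # absolute terms; the "increase" lives entirely in the counterfactual.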

Which leads us to an interesting conclusion

Somewhat paradoxically, it turns out that a study which tongue-in-cheek points to the media's uncritical coverage of medical news as a possible contributor to a public health issue has received almost universal and equally uncritical acclaim from the same media it rightfully criticizes.

In a past analysis of medical news reporting I stated that

[...] my hope is that members of the media who cover [medical] debate[s] will realize that “first do no harm” is not only the duty of physicians; it is also the responsibility of anyone trusted with giving health information to the public at large.

More than two years later, I find that this conclusion still stands.

For my Storify discussion of this story, I'll see you here.

 


The Vital Importance of Integrating Primary Care and Mental Health Care: How the VA Does It

For the better part of the last two decades I have practiced psychiatry in a variety of American healthcare systems, and over these years I have, on numerous occasions, heard psychiatric services referred to in a manner implying (often subtly) that such services are not medical care. These references come not only from patients but also from nurses and doctors (myself included).

 

“Yes Mr. Jones you need to follow up with your regular medical doctor about that issue.”

Or

“Dr. Jain, I went to see my medical doctor and he told me my blood pressure was high.”

 

Yes, I have been guilty of propagating this false dichotomy myself, and I too end up colluding with the societal misperception that psychiatric care is somehow NOT medical care but something separate and distinct from other medical services. I think I do it because, on a day-to-day basis, when I am busy in clinic it is easier to collude than to get into a debate about semantics.

 

Still, in today's blog I want to highlight the fact that this artificial distinction between physical and mental health perpetuates much of the stigma and misperception that we, as a society, have toward mental illness. But, most importantly, I want to convey my belief that when mental and physical well-being are separated, healthcare becomes poor in quality.

 

When I did my medical school training in Great Britain, every single medical student was required to complete a three-month (minimum) rotation in psychiatry; furthermore, psychiatry was one of the specialties that had to be passed in the clinical exams at finals before your MD would be granted. Why, you might ask, should an ENT surgeon, dermatologist, or ER physician-to-be need to spend so much time training in psychiatry?

Firstly, the majority of British medical students become primary care doctors. The system is set up that way: there are relatively few spots for specialty training (e.g. cardiology or plastic surgery), and much more emphasis is placed on medical students becoming primary care doctors. This is based on the premise that that is what the country needs, so that is what medical schools should provide.

It's well known that a significant percentage of patients seen in primary care have a mental illness or disorder as their primary problem. In fact, most prescriptions for psychotropic medications are written in primary care, hence it is logical that every British medical graduate be well versed and adept in diagnosing, treating, and managing psychiatric disorders.

Secondly, if one looks at mental illness from a sheer epidemiological point of view, no physician can afford not to be well trained in the fundamentals of psychiatric practice. I, as a psychiatrist, may or may not, in my career, treat a patient who also develops a testicular tumor, needs bypass surgery, or has a fractured hip; but, epidemiologically speaking, my colleagues in urology, cardiology, and orthopedics WILL treat patients who have comorbid depression or anxiety, or even severe mental illness such as bipolar disorder or psychosis. For this reason, it makes sense that these providers have some awareness and understanding of such disorders.

My experience with US healthcare is different; the business side of US medicine tends to favor medical specialties that are procedure-based or that generate flashy technologies that can be promoted to attract more market share. Unfortunately, psychiatry often fares poorly when it comes to such business strategies. Mental illness can be chronic, take time to treat, and there is rarely a quick fix or magic cure. Moreover, mental illness can be associated with a downward drift: someone becomes psychotic, they lose their job, and then their health insurance, and so on.

 

In the U.S. healthcare business the specialty of psychiatry is often not given a seat at the table.

 

I think this, in part, explains the nonsensical divide between "medical" and "psychiatric" that we often have in healthcare. Of course, it is a fallacy. Mental health and physical health are intricately linked on every level, from the cellular level to the more macro perspective of how human beings navigate their day-to-day lives. In my view, a sophisticated healthcare system should reflect this intricate relationship and integrate primary and psychiatric care, i.e. get rid of this false distinction by physically placing both services in one clinic, side by side.

One American healthcare system has been a leader in integrating primary care and mental health care: the Veterans Health Administration (VA). Unlike many other US healthcare systems (which place more emphasis on treating individuals), the VA is charged with taking care of a population, i.e. veterans. This mission guides where the VA places emphasis and resources: whatever issues are prevalent in this population become priorities.

 

The VA aims to meet the needs of the population it is serving, and hence gives psychiatry a seat at the table.

 

With more than 1,000 outpatient clinics, the VA is the largest health care system in the United States, and it has a very clear sense of its population. Over the past 15 years, the VA has not only participated in some of the biggest studies of integrated care, but has made a commitment to provide patient-centered integrated care to its population.

For the last two years I have been in the role of medical director of the Primary Care-Behavioral Health Team at the VA Palo Alto Health Care System, and I spend most of my days right here at the interface between physical well-being and mental health.  Contrary to some of my previous experiences in healthcare, the last two years have taught me the following:

 

  • Our colleagues in primary care place very high value on psychiatric and psychological consultation from colleagues.

 

  • The clinical work is very rewarding and in many ways bypasses a lot of the frustrations we often feel as physicians working in fragmented health care systems.

 

  • As a specialist, your experience and knowledge can add enormous benefit in making health care more streamlined and patient-centric. There are many opportunities for psychiatrists to act as educators to both colleagues and patients about common misperceptions surrounding mental disorders and mental health care.

 

  • Being a consultant for and working closely with a team of professionals from various specialty backgrounds helps your own career development. It prevents you from getting rusty in areas of medicine other than psychiatry and keeps you on the cutting edge of how health care systems are evolving to meet the needs and demands of all stakeholders.

 

Integrated care is the way of the future, and I feel fortunate that I work in a system that is at the cutting edge of such innovation.

 

For more information about the integrated model at the VA Palo Alto Health Care System, please follow this link.


Salvaging psychotherapy research: a manifesto

NOTE: Additional documentation and supplementary links and commentary are available at What We Need to Do to Redeem Psychotherapy Research.

Fueling Change in Psychotherapy Research with Greater Scrutiny and Public Accountability

John Ioannidis's declarations that most positive findings are false and that most breakthrough discoveries are exaggerated or fail to replicate apply as much to psychotherapy as they do to biomedicine.

We should take a few tips from Ben Goldacre's Bad Pharma and clean up the psychotherapy literature, paralleling what is being accomplished with pharmaceutical trials. Sure, much remains to be done to ensure the quality and transparency of drug studies and to get all of the data into public view. But the psychotherapy literature lags far behind and is far less reliable than the pharmaceutical literature.

As it now stands, the psychotherapy literature does not provide a dependable guide to policy makers, clinicians, and consumers attempting to assess the relative costs and benefits of choosing a particular therapy over others. If such stakeholders uncritically depend upon the psychotherapy literature to evaluate the evidence-supported status of treatments, they will be confused or misled.

Psychotherapy research is scandalously bad.

Many RCTs are underpowered, yet consistently obtain positive results by redefining the primary outcomes after results are known. The typical RCT is a small, methodologically flawed study conducted by investigators with strong allegiances to one of the treatments being evaluated. Which treatment is preferred by investigators is a better predictor of the outcome of the trial than the specific treatment being evaluated.

Many positive findings are created by spinning a combination of confirmatory bias, flexible rules of design, data analysis, and reporting, plus significance chasing.

Many studies considered positive, including those that become highly cited, are basically cherry-picked null trials in which results for the primary outcome are ignored and post-hoc analyses of secondary outcomes and subgroups are emphasized. The spin starts in abstracts, and the results reported there are almost always positive.

The bulk of psychotherapy RCTs involve comparisons between a single active treatment and an inactive or neutral control group such as a wait list, no treatment, or "routine care," which is typically left undefined and in which exposure to treatment of adequate quality and intensity is not assured. At best these studies can tell us whether a treatment is better than doing nothing at all, or better than patients enrolling in a trial, expecting treatment, and not getting it (nocebo).

Meta-silliness?

Meta-analyses of psychotherapy often do not qualify conclusions by grade of evidence, ignore clinical and statistical heterogeneity, inadequately address investigator allegiance, downplay the domination by small trials with statistically improbable rates of positive findings, and ignore the extent to which positive effect sizes occur mainly in comparisons between active and inactive treatments.

Meta-analyses of psychotherapies are strongly biased toward concluding that treatments work, especially when conducted by those who have undeclared conflicts of interest, including developers and promoters of treatments that stand to gain financially from their branding as “evidence-supported.”

Overall, meta-analyses depend too heavily on underpowered, flawed studies conducted by investigators with strong allegiances to a particular treatment or to finding that psychotherapy is in general efficacious. When controls are introduced for risk of bias or investigator allegiance, effects greatly diminish or even disappear.
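
A minimal sketch of why this matters, using made-up numbers rather than any real meta-analysis: inverse-variance pooling with and without the small, high risk-of-bias trials.

    import numpy as np

    # Hypothetical per-study effect sizes (d), variances, and bias flags:
    # three small allegiance-driven trials, two larger independent ones.
    d = np.array([0.70, 0.65, 0.80, 0.15, 0.10])
    var = np.array([0.08, 0.09, 0.10, 0.02, 0.02])
    high_rob = np.array([True, True, True, False, False])

    def pooled(d, var):
        w = 1 / var  # fixed-effect inverse-variance weights
        return (w * d).sum() / w.sum()

    print(f"all studies:      d = {pooled(d, var):.2f}")
    print(f"low risk of bias: d = {pooled(d[~high_rob], var[~high_rob]):.2f}")
    # The pooled effect shrinks by half or more once the small,
    # high risk-of-bias trials are excluded.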

Conflicts of interest associated with authors having substantial financial benefits at stake are rarely disclosed in the studies that are reviewed or the meta-analyses themselves.

Designations of Treatments as Evidence-Supported

There are low thresholds for professional groups such as the American Psychological Association Division 12 or governmental organizations such as the US Substance Abuse and Mental Health Services Administration (SAMHSA) declaring treatments to be “evidence-supported.” Seldom are any treatments deemed ineffective or harmful by these groups.

Professional groups have conflicts of interest in wanting their members to be able to claim the treatments they practice are evidence-supported, while not wanting to restrict practitioner choice with labels of treatment as ineffective. Other sources of evaluation like SAMHSA depend heavily and uncritically on what promoters of particular psychotherapies submit in applications for “evidence supported status.”

"Everybody has won, and all must have prizes." Chapter 3 of Lewis Carroll's Alice's Adventures in Wonderland

“Everybody has won, and all must have prizes.” Chapter 3 of Lewis Carroll’s Alice’s Adventures in Wonderland

The possibility that there are no consistent differences among standardized, credible treatments across clinical problems is routinely ridiculed as the “dodo bird verdict” and rejected without systematic consideration of the literature for particular clinical problems. Yes, some studies find differences between two active, credible treatments in the absence of clear investigator allegiance, but these are unusual.

The Scam of Continuing Education Credit

Requirements that therapists obtain continuing education credit are intended to protect consumers from outdated, ineffective treatments. Yet there is inadequate oversight of the scientific quality of what is offered, and bogus treatments are promoted with pseudoscientific claims. Organizations like the American Psychological Association (APA) prohibit groups of their members from making statements protesting the quality of what is being offered, and the APA continues to allow CE credit for bogus and unproven treatments like thought field therapy and somatic experiencing.

Providing opportunities for continuing education credit is a lucrative business for both accrediting agencies and sponsors. In the competitive world of workshops and trainings, entertainment value trumps evidence. Training in delivery of manualized evidence-supported treatments has little appeal when alternative trainings emphasize patient testimonials and dramatic displays of sudden therapeutic gain in carefully edited videotapes, often with actors rather than actual patients.

The "evidence supported" branding is used to advertise workshops and trainings in which the particular crowd-pleasing interventions presented are not themselves evidence supported.

Those who attend Acceptance and Commitment Therapy (ACT) workshops may see videotapes where the presenter cries with patients, recalling his own childhood. They should ask themselves: "Entertaining, moving perhaps, but is this an evidence-supported technique?"

Psychotherapies with some support from evidence are advocated for conditions for which there is no evidence for their efficacy. What would be disallowed as “off label applications” for pharmaceuticals is routinely accepted in psychotherapy workshops.

We Know We Can Do Better

Psychotherapy research has achieved considerable sophistication in design, analyses, and strategies to compensate for missing data and elucidate mechanisms of change.

Psychotherapy research lags behind pharmaceutical research, but nonetheless has CONSORT recommendations and requirements for trial preregistration, including specification of primary outcomes; completion of CONSORT checklists to ensure basic details of trials are reported; and preregistration of meta-analyses and systematic reviews at sites like PROSPERO, as well as completion of the PRISMA checklist for adequacy of reporting of meta-analyses and systematic reviews.

Declarations of conflicts of interest are rare, and exposure of authors who routinely fail to disclose conflicts of interest is even rarer.

Departures from preregistered protocols in published reports of RCTs are common, and there is little checking of abstracts for discrepancies from the results that were actually obtained or promised in preregistration. Adherence to these requirements is inconsistent and incomplete. There is little likelihood that noncompliant authors will be held accountable, and a high incentive to report positive findings if a study is to be published in a prestigious journal such as the APA's Journal of Consulting and Clinical Psychology (JCCP). Examining the abstracts of papers published in JCCP gives the impression that trials are almost always positive, even when seriously underpowered.

Psychotherapy research is conducted and evaluated within a club, a mutual admiration society in which members are careful not to disparage others' results or enforce standards that they themselves might want relaxed when it comes to publishing their own research. There are rivalries between tribes like psychodynamic therapy and cognitive behavior therapy, but criticism is suppressed within the tribes, and strenuous efforts are made to create the appearance that members of the tribes only do what works.

Reform from Without

Journals and their editors have often resisted changes such as adoption of CONSORT, structured abstracts, and preregistration of trials. The Communications and Publications Board of the American Psychological Association made APA one of the last major holdout publishers to endorse CONSORT, and initially provided an escape clause stating that CONSORT applied only to articles explicitly labeled as randomized trials. The board also blocked a push by the editor of Health Psychology for structured abstracts that would reliably report the details needed to evaluate what had actually been done in trials and what results were obtained. In both instances, the board was most concerned about the implications for the major outlet for clinical trials among its journals, Journal of Consulting and Clinical Psychology.

Although generally not an outlet for psychotherapy trials, the journals of the Association for Psychological Science (APS) show signs of being even worse offenders in terms of ignoring standards and committing to confirmatory bias. For instance, it takes a reader a great deal of probing to discover that a high-profile paper by Barbara Fredrickson in Psychological Science was actually a randomized trial, and further detective work to discover that it was a null trial. There is no sign that a CONSORT checklist was ever filed for the study. And despite Fredrickson using the spun Psychological Science trial report to promote her workshops, there is no conflict of interest declared.

The new APS journal Clinical Psychological Science shows signs of even more selective publication and confirmatory bias than the APA journals, producing newsworthy articles to the exclusion of null and modest findings. There will undoubtedly be a struggle between APS and APA clinical journals for top position in a hierarchy that publishes only papers that are attention-grabbing, even if flawed, while leaving the publication of negative trials and failed replications to journals considered less prestigious.

If there is to be reform, pressure must come from outside the field of psychotherapy, from those without a vested interest in promoting particular treatments or the treatments offered by members of professional organizations. Pressure must come from skeptical external review by consumers and policymakers equipped to understand the games that psychotherapy researchers play in creating the appearance that all treatments work; the dodo bird is dead.

Specific journals are reluctant to publish criticism of their own publishing practices. If we cannot at first get our concerns published in the offending journals, we can rely on blogs and Twitter to call out editors and demand explanations for lapses in peer review and the upholding of quality.

We need to raise stakeholders' levels of skepticism, disseminate critical appraisal skills widely, and provide for their application in evaluating exaggerated claims and methodological flaws in articles published in prestigious, high-impact journals. Bad science in the evaluation of psychotherapy must be recognized as the current norm, not an anomaly.

We could get far by enforcing rules that we already have.

We need to continually expose journals’ failures to enforce rules about preregistration, disclosure of conflicts of interest, and discrepancies between published clinical trials and their preregistration.

There are too many blatant examples of investigators failing to deliver what they promised in the preregistration, registering after trials have started to accrue patients, and reviewers apparently not ever checking if the primary outcomes and analyses promised in trial registration are actually delivered.

Editors should

  • Require an explicit statement of whether the trial has been registered and where.
  • Insist that reviewers consult trial registration, including modifications, and comment on any deviation.
  • Explicitly label registration dated after patient accrual has started.

CONSORT for abstracts should be disseminated and enforced. A lot of hype and misrepresentation in the media starts with authors' own spin in the abstract. Editors should insist that the main analyses for the preregistered primary outcome be presented in the abstract and highlighted in any interpretation of results.

No more should underpowered, exploratory pilot feasibility studies be passed off as RCTs when they achieve positive results. An orderly sequence of treatment development should occur before conducting what are essentially phase 3 randomized trials.

Here, as elsewhere in reforming psychotherapy research, there is something to be learned from drug trials. A process of intervention development that establishes the feasibility and basic parameters of clinical trials needs to precede phase 3 randomized trials; such preliminary studies cannot be expected to stand in for phase 3 trials or to provide effect sizes for the purposes of demonstrating efficacy or comparing treatments.

Use of wait list, no treatment, and ill-defined routine care as control groups should be discouraged. For clinical conditions for which there are well-established treatments, head-to-head comparisons should be conducted, along with control groups that might elucidate mechanism. A key example of the latter would be structured, supportive therapy that controls for attention and positive expectation. There is little to be gained by further accumulation of studies in which the efficacy of the preferred treatment is assured by comparison to a lame control group that lacks any conceivable element of effective care.

Evaluations of treatment effects should take into account the prior probabilities suggested by the larger literature on comparisons between two active, credible treatments. The well-studied depression treatment literature suggests some parameters: effect sizes associated with a treatment are greatly reduced when comparisons are restricted to credible, active treatments and better-quality studies, and when controls are introduced for investigator allegiance. It is unlikely that initial claims about a breakthrough treatment exceeding the efficacy of existing treatments will be sustained in larger studies conducted by investigators independent of developers and promoters.

Disclosure of conflict of interest should be enforced and nondisclosure identified in correction statements and further penalized. Investigator allegiance should be considered in assessing risk of bias.

Developers of treatments and persons with significant financial gain from a treatment being declared “evidence-supported” should be discouraged from conducting meta-analyses of their own treatments.

Trials should be conducted with sample sizes adequate to detect at least moderate effects. When positive findings from underpowered studies are published, readers should scrutinize the literature for similarly underpowered trials that achieve similarly positive effects.
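
As a rough benchmark (a sketch, not a prescription), here is what "adequate to detect a moderate effect" implies for a two-arm trial, computed with statsmodels:

    from statsmodels.stats.power import TTestIndPower

    # Per-arm sample size needed to detect a moderate effect (d = 0.5)
    # with 80% power at a two-sided alpha of .05.
    n_per_arm = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                            power=0.8,
                                            alternative='two-sided')
    print(f"n per arm: {n_per_arm:.0f}")  # roughly 64 per arm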

Meta-analyses of psychotherapy should incorporate techniques for detecting p-hacking, to evaluate whether the pattern of significant findings exceeds what is probabilistically plausible.
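
One way to implement such a check, sketched here in the spirit of the p-curve binomial test (the manifesto does not prescribe a specific tool, so treat this as an illustration): among genuinely significant results, p-values should pile up well below .025; a heap just under .05 is the signature of significance chasing.

    from scipy.stats import binomtest

    # Significant p-values harvested from a hypothetical literature.
    p_values = [0.041, 0.048, 0.032, 0.046, 0.038, 0.044, 0.029, 0.049]

    # With a true effect, most significant p-values fall below .025.
    low = sum(p < 0.025 for p in p_values)
    result = binomtest(low, n=len(p_values), p=0.5, alternative='less')
    print(f"{low}/{len(p_values)} below .025, p = {result.pvalue:.4f}")
    # Here 0/8 fall below .025 (binomial p ~= .004): the distribution is
    # left-skewed, which is what p-hacking, not a real effect, produces.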

Adverse events and harms should routinely be reported, including lost opportunity costs such as failure to obtain more effective treatment.

We need to shift the culture of doing and reporting psychotherapy research: away from praising exaggerated claims about treatments and the faux evidence generated to promote opportunities for therapists and their professional organizations. It is much more praiseworthy to make robust, sustainable, even if more modest, claims and to call out hype and hokum in ways that preserve the credibility of psychotherapy.


The alternative is to continue protecting psychotherapy research from stringent criticism and enforcement of standards for conducting and reporting research, and to simply allow the branding of psychotherapies as "evidence supported" to fall into appropriate disrepute.


Critical analysis of a meta-analysis of a treatment by authors with financial interests at stake


Update June 2, 2014. This blog has been updated to respond to information provided in comments on the blog, as well as my examination of the membership of the International Scientific Advisory Committee for Triple P Parenting that these comments prompted.

The case appears stronger than first thought that this is a thoroughly flawed meta-analysis conducted by persons with substantial but undeclared financial interests in portraying Triple P Parenting as "evidence-supported" and effective. Moreover, they rely heavily on their own studies and those conducted by others with undeclared conflicts of interest. The risk of bias is high as a result of the inclusion of nonrandomized trials in the meta-analysis, likely selective exclusion of contrary data, likely spinning of the data that is presented, and reliance on small, methodologically flawed studies whose authors have undeclared conflicts of interest. But don't take my word for it. Read on and see if you are persuaded too.

I recommend clinicians, policymakers, and patients not make decisions about Triple P Parenting on the basis of results of this meta-analysis.

This meta-analysis should serve as a wake-up call for greater enforcement of existing safeguards on generating and integrating evidence concerning the efficacy of psychological interventions, as well as a heightened suspicion that what is observed here is more widespread in the literature and true of evaluations of other treatments.

Do you have the time and expertise to interpret a meta-analysis of a psychological treatment? Suppose you don’t, but you are a policymaker, researcher, clinician, or potential patient. You have to decide whether the treatment is worthwhile. Can you rely on the abstract in a prestigious clinical psychology journal?

What if the authors had substantial financial gains to be made from advertising their treatment as “evidence-supported”?

In this blog post I am going to provide a critical analysis of a meta-analysis of Triple P Parenting Programs (3P) published by its promoters. The authors have a lucrative arrangement with their university [updated] for sharing profits from the dissemination and implementation of 3P products. As is typical for articles authored by those benefiting financially from 3P, these authors declare no conflict of interest.

You can read this post with a number of different aims.

(1) You can adjust your level of suspicion when you encounter an abstract of a meta-analysis by authors with a conflict of interest. Maybe you should be even more suspicious when the authors don't inform you and you find out on your own.

(2) You can decide how much credence to give to claims in meta-analyses simply because they appear in prestigious journals.

(3) You can decide whether you have adequate skills to independently evaluate claims in meta-analysis.

(4) Or you can simply read the post to pick up some tips and tricks for scrutinizing a meta-analysis.

You will see how much work I had to do. Decide whether it would have been worth you doing it. This article appears in a prestigious, peer-reviewed journal. You’d think “Surely the reviewers would have caught obvious misstatements or simple hanky-panky and either rejected the manuscript or demanded revision.” But, if you have been reading my posts, you probably are no longer surprised by gross lapses in the oversight provided by peer review.

This post will identify serious shortcomings in a particular meta-analysis. However, this article made it through peer review. After my analysis, I will encourage you to consider how this article reflects on some serious problems in the literature concerning psychological treatment. Maybe there is even a bigger message here.

The Clinical Psychology Review article I will be critiquing is behind a paywall. You can request a PDF from the senior author at matts@psy.uq.edu.au. You can also consult an earlier version of this meta-analysis placed on the web with a label indicating it was under review at another journal. The manuscript was longer than the published article, but most of the differences between the two, other than shortening, are cosmetic. Robin Kok has now highlighted the manuscript to indicate overlap with the article. Thanks, Robin.

Does this abstract fairly represent the conduct and conclusions of the meta-analysis?

This systematic review and meta-analysis examined the effects of the multilevel Triple P-Positive Parenting Program system on a broad range of child, parent and family outcomes. Multiple search strategies identified 116 eligible studies conducted over a 33-year period, with 101 studies comprising 16,099 families analyzed quantitatively. Moderator analyses were conducted using structural equation modeling. Risk of bias within and across studies was assessed. Significant short-term effects were found for: children's social, emotional and behavioral outcomes (d = 0.473); parenting practices (d = 0.578); parenting satisfaction and efficacy (d = 0.519), parental adjustment (d = 0.340); parental relationship (d = 0.225) and child observational data (d = 0.501). Significant effects were found for all outcomes at long-term including parent observational data (d = 0.249). Moderator analyses found that study approach, study power, Triple P level, and severity of initial child problems produced significant effects in multiple moderator models when controlling for other significant moderators. Several putative moderators did not have significant effects after controlling for other significant moderators. The positive results for each level of the Triple P system provide empirical support for a blending of universal and targeted parenting interventions to promote child, parent and family wellbeing.

On the face of it, wow! This abstract seems to provide a solid endorsement of 3P based on a well done meta-analysis. Note the impressive terms: “multiple search strategies,”… “moderator analyses”… “Structural equation modeling,”… “risk of bias.”

I have to admit that I was almost taken. But then I noticed a few things.

The effect sizes were unusually high for psychological interventions, particularly for treatments delivered

  • in the community, with low intensity,
  • by professionals with typically low levels of training,
  • to populations that are mandated to treatment and are often socially disadvantaged and unprepared to do what is required to benefit from treatment.

The abstract indicates moderator analyses were done. Most people familiar with moderator analyses would be puzzled why results for the individual moderators were not reported except where they turned up in a multivariate analysis. That is not relevant to the question of whether there was a moderator effect for a particular variable.

The abstract mentions attention to "risk of bias" but does not report what was found. It also leaves you wondering how promoters can assess the bias of studies evaluating a psychological treatment when they are the investigators in a substantial number of the trials. Of course, they would say they conduct and report their intervention studies very well. Might they have a risk of bias in giving themselves high marks?

So, as I started to read the meta-analysis, there is a penalty flag on the playing field. Actually, a lot of them.

I had to read this article a number of times. I had to make a number of forays into the original literature to evaluate the claims that were being made. Read on and discover what I found, but here are some teasers:

Two of the largest trials of 3P ever done were excluded, despite claims of comprehensiveness. Both have been interpreted as negative trials.

One of the largest trials ever done involved one of the authors of the meta-analysis. A description of the trial was published while the trial was ongoing. Comparison of this description to the final report suggests highly selective and distorted reporting of basic outcomes and misrepresentation of the basic features of the design. Ouch, a credibility problem!

Very few of the other evaluation studies were pre-registered, so we do not know whether they have such a high risk of bias. But we now have evidence that it is unwise to assume they don't.

The meta-analysis itself was preregistered in PROSPERO (CRD42012003402). Preregistration is supposed to pin down which analyses will and will not be reported in the published meta-analysis. Yet in this meta-analysis the authors failed to report what was promised and provided analyses other than those promised in answering key questions. This raises concerns about hypothesizing after the results are known (HARKing). Ouch!

The authors claim impressive effects for a variety of outcomes. But these supposedly different outcomes are actually highly intercorrelated. These variables could not all have been the primary outcomes in a given trial. People who do clinical trials and meta-analyses worry about this sort of selective reporting of outcomes from a larger pool of variables.

There is a high likelihood that the original trials that went into the meta-analysis selectively reported positive outcomes from a larger pool of candidate variables. Because these trials do not typically have registered protocols, we do not know which was the designated primary outcome, and so we have to be particularly careful in accepting multiple outcomes from the same trial. Yet the authors of this meta-analysis are basically bragging about doing multiple, non-independent re-analyses of data from the same trials. This is not cool. In fact, it is very fishy.

The abstract implies a single meta-analysis was done. Actually, the big population-level studies were analyzed separately. That is contrary to what was promised in the protocol. We should have been told at the outset that this would be done, especially because the authors also claim to be testing for any bias from combining overly small studies (at least one with only 6 patients) with larger ones, and these population-level studies are the largest. So the authors cannot fully evaluate the very bias that they themselves have identified as a threat to the validity of the findings.

The meta-analyses included lower quality, nonrandomized trials, lumping them together with the higher quality evidence of randomized trials. Bad move. Adding poor quality data does not strengthen conclusions from stronger studies.

Inclusion of nonrandomized trials also negates any risk of bias assessment of the randomized trials. Judged by the same standards applied to the randomized trials, the nonrandomized trials all have a high risk of bias. It is no wonder that the authors did not disclose the results of the risk of bias assessment noted in the abstract. Such an assessment makes little sense when nonrandomized trials that would score poorly have been included, and it makes no sense to examine the risk of bias in the randomized trials without applying the same standards to the nonrandomized trials with which they are being integrated.

Combining effect sizes from nonrandomized trials with those of randomized trials involves some voodoo statistics that were consistently biased in the direction of producing better outcomes for 3P. Basically, the strategies used to construct effect sizes from nonrandomized trials exaggerate the effects of 3P.

The bottom line is that “Hey, Houston, we’ve got a problem.” Or two or three.

Introduction: is this meta-analysis the biggest and best ever?

In the introduction, the authors are not shy in their praise for their 3P. They generously cite themselves as the authority for some of the claims. Think about how it would look if, instead of citing themselves, they had used the first person and simply said "we think" and "we conclude" rather than using a citation that usually implies someone else thinking or finding the same. Most importantly, they claim that this meta-analysis of 3P is the most comprehensive one ever, and they criticize past meta-analyses for being incomplete.

a compilation of systematic review and meta-analysis, based on more than double the number of studies included in any prior meta-analysis of triple P or other parenting interventions, provides a timely opportunity to examine the impact of a single, theoretically-integrated system of parenting support on the full range of child, parent and family outcome variables

But it is not a single meta-analysis; it is a couple of meta-analyses with some inexplicable and arbitrary lumping and splitting of studies, some of quite poor quality, that past reviewers would have left out.

This meta-analysis includes more studies than recent past meta-analyses because it accepts low-quality data from nonrandomized trials and combines data from huge trials with studies having as few as six patients. It may be bigger, but it is unlikely to give a better answer as to whether 3P works, because the authors have done unusual things to accommodate poor quality data.

Authorities on how to do a meta-analysis agree that it should represent a synthesis of the evidence, with careful attention to the quality of that evidence. That involves distinguishing between conclusions based on the best evidence and those dependent on introducing poor quality evidence.

The introduction misrepresents other authors as agreeing with the strategy used in this meta-analysis. In particular, if I were Helena Kraemer, I would be very upset that the authors suggest I endorsed a strategy of 'the more studies, the better, even if it means integrating data from poor quality studies,' or any number of other things they imply Helena Kraemer would agree with:

all possible evaluation designs were included to provide the most comprehensive review of triple P evidence, and to avoid exclusion and publication bias (Sica, 2006; Kraemer et al. 1998)…. To provide the most comprehensive meta-analytic assessment of triple P studies, inclusion-based approach was adopted (Kraemer et al. 1998).

It would help if readers knew that this meta-analysis was published in response to another meta-analysis that disputed the level and quality of evidence that 3P is effective. In a story I have recounted elsewhere, promoters of 3P (or someone in direct contact with them) blocked publication of this other meta-analysis in Clinical Psychology Review and tried to intimidate its author, Philip Wilson, MD, PhD. His work nonetheless got published elsewhere and, amplified by my commentary on it, set off a lot of reevaluation of the validity of the evidence for 3P. It stimulated discussion about undisclosed conflicts of interest and caused at least one journal editor to announce greater vigilance about nondisclosure.

It is interesting to compare what Wilson says about 3P with how the authors portray his work in this paper. If you did not know better, you would conclude that Wilson’s most important finding was a moderate effect for 3P, but that he was limited by considering too few studies and too narrow a range of outcomes.

Actually, Wilson made well-reasoned and well-described decisions to focus on primary outcomes, not secondary ones, and only on randomized trials. He concluded:

In volunteer populations over the short term, mothers generally report that Triple P group interventions are better than no intervention, but there is concern about these results given the high risk of bias, poor reporting and potential conflicts of interest. We found no convincing evidence that Triple P interventions work across the whole population or that any benefits are long-term. Given the substantial cost implications, commissioners should apply to parenting programs the standards used in assessing pharmaceutical interventions.

In the manuscript that was posted on the web, these authors cite my paper without indicating its findings, and they cite my criticism of them without fairly portraying what I actually said. In the published meta-analysis, they dropped that citation. They instead cite my earlier paper, in which they say I "claimed" that clinical trials with fewer than 35 participants per condition have less than a 50% probability of detecting a moderate effect even if one is present.

No, if these authors had bothered to consult power analysis tables they would see that this is not merely my "claim"; it is what every power analysis table shows. And my point was that if most small published trials are positive, as they usually are, there must be a publication bias, with negative trials missing.
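
For the skeptical reader, the arithmetic is easy to check (a sketch using statsmodels; any power calculator gives the same answer):

    from statsmodels.stats.power import TTestIndPower

    # Power of a two-arm trial, 35 per arm, to detect a moderate effect
    # (d = 0.5) at a two-sided alpha of .05.
    power = TTestIndPower().power(effect_size=0.5, nobs1=35,
                                  alpha=0.05, alternative='two-sided')
    print(f"power: {power:.2f}")  # about 0.54; below 35 per arm,
                                  # power falls under 50%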

What I say in the abstract of my critique of 3P is:

Applying this [at least 35 participants in the smallest cell] criterion, 19 of the 23 trials identified by Wilson et al. were eliminated. A number of these trials were so small that it would be statistically improbable that they would detect an effect even if it were present. We argued that clinicians and policymakers implementing Triple P programs incorporate evaluations to ensure that goals are being met and resources are not being squandered.

So, most of the available evidence for 3P from randomized trials comes from small trials with a high risk of bias, including trials conducted by people with financial interests at stake. The 3P promoters are trying to refute this finding by introducing nonrandomized trials, largely done with their own involvement.

Promoters of 3P have also come under withering criticism from Professor Manuel Eisner of Cambridge University. Reading this article, you would think his main gripe was that developers of 3P are sometimes involved in studies and that needs to be taken into account as a moderator variable.

I invite you to compare the summary in the article to what Professor Eisner says here and here.

The authors set out to counter Professor Eisner by introducing 3P "developer involvement" as a moderator variable. The problem is that Eisner is complaining about undisclosed conflicts of interest. We know there is a lot of developer involvement by consulting the authorship of papers, but we do not know to what extent conflict of interest exists beyond that, i.e., persons doing trials who derive financial benefit from promoting 3P products. Yet conflict of interest statements almost never accompany trials of 3P, even when developers are involved.

I sent a couple of emails to Nina Heinrichs, a German investigator who has been involved in a number of studies, conducted a key meta-analysis, and spent considerable time with 3P developers in Australia. I asked her directly if she met criteria for needing to disclose a conflict of interest. She has not yet replied (I will correct this statement if she responds and indicates she does meet criteria for a conflict of interest). [Update: Dr. Heinrichs is listed as a member of the Triple P Parenting International Scientific Advisory Committee, as are a number of other authors of 3P intervention trials who fail to disclose conflicts of interest. This meets criteria for having a conflict to declare. But these trials were not coded as positive for developer involvement in this meta-analysis, so this designation is not a proxy for conflict of interest.] So, we have a moderator variable that cannot be independently checked, but about which there is considerable suspicion. And it does not correspond to the undisclosed conflicts of interest about which critics of 3P complain.

In the introduction we see the authors using a recognizable tactic: misciting key sources and ignoring others in order to suggest there is some consensus behind their assessment of 3P and their decisions in conducting the meta-analysis. Basically, they are claiming unfounded authority from the literature by selective and distorted citation, attributing to sources agreement that is not there and misrepresenting what actually is there. Steven Greenberg has provided an excellent set of tools for detecting when this is being done. I encourage you to study his work and acquire these tools.

We will see more distorted citation in this meta-analysis article as we proceed into the method section. Note, however, that use of this powerful technique and having access to the information it provides requires going back to original sources and looking for differences between what is said in them and what is said about them in the article. This takes time that you might not want to commit, but it is vital to get at some issues.

Methods: Explaining what was done and why

I will try not to bore you with technical details, nor get you lost, as I almost did, in the authors' complex and contradictory description of how they decided to do the meta-analysis. But I will highlight a few things so that you can get a flavor. I am sure that if you go back you can find more such things.

The methods section starts with reassurance that the protocol for the meta-analysis was preregistered, suggesting that we should relax and assume the protocol was followed. Actually, the published paper deviates from the protocol by treating randomized trials with active control groups as nonrandomized and by keeping the population-level interventions separate.

Those of you who have been following my blog posts may recall my saying again and again that interventions do not have effect sizes; only comparisons do. When these authors take the data for the intervention condition out of randomized trials, they destroy the benefit of the design and use statistics that exaggerate the effects of the treatment.

The authors make a spirited defense of including nonrandomized trials. They justify the voodoo statistics they used to calculate effect sizes from these trials without control groups. They cite some classic work by Scott Morris and Richard DeShon as the basis for what they do, but I am quite confident that these authorities would be annoyed at being invoked as justification.

Basically, if you use data for intervention groups without a comparison, you overestimate the effect of the intervention, because you take all the change that occurs and attribute it to the intervention rather than to the passage of time or nonspecific factors. If there is any hope of getting around this, you have to have an estimate of what change would be experienced in the absence of intervention, and you have to test the assumptions behind that estimate.

With the population served by 3P, this is a particular problem, because many of these families come into treatment at a time of crisis, and it is only with a control group that you can gauge how much their problems would decline anyway without treatment. Similarly, if you take an intervention group and drop its comparator, you do not get to see what is specific to the intervention.
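
Here is a small simulation of that inflation (all numbers invented; this is my illustration, not the meta-analysis authors' method). Families enter at a crisis peak and improve 0.4 SD with no treatment at all, while the treatment itself adds only a 0.2 SD benefit:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200

    pre = rng.normal(0.0, 1, n)          # problem scores at enrollment
    post_ctrl = rng.normal(-0.4, 1, n)   # follow-up with NO treatment
    post_treat = rng.normal(-0.6, 1, n)  # follow-up with treatment

    # Uncontrolled pre-post "effect size" credits ALL change to treatment.
    d_prepost = (pre.mean() - post_treat.mean()) / pre.std(ddof=1)

    # A control group isolates the treatment's actual contribution.
    d_between = (post_ctrl.mean() - post_treat.mean()) / post_treat.std(ddof=1)

    print(f"pre-post d: {d_prepost:.2f}")  # close to 0.6: inflated
    print(f"between  d: {d_between:.2f}")  # close to 0.2: the real effect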

We already know that much of the evidence for the efficacy of psychological interventions involves comparing them to inactive control groups like waiting lists or treatment as usual that amounts to either no care or thoroughly inadequate care. The authors of this meta-analysis suppressed information that would have allowed us to examine whether that was the case for 3P.

This is all very bad form and serves to inflate the effect sizes obtained by the authors.

The methods section seems to give an impressive account of thoughtful selection criteria and thorough searches. It indicates that published as well as unpublished papers will be included in the meta-analysis.

The preregistration does not indicate that population-level intervention trials will be kept separate, but that is what is done. If you do not look carefully, you will not find the two brief disclosures of this in the method section.

The methods section indicates that seven different outcomes will be considered. This may sound comprehensive, but it presents a number of problems. Clinical trials have only one or two or so declared primary outcomes and need to be evaluated on the basis of whether there are changes in those outcomes. Many evaluations of psychological treatment, and especially of 3P, administer a battery of potential outcome variables and leave the choice of which to report until after the outcomes have been analyzed. That is why there is such a push to preregister protocols for trials and commit investigators to one or two outcomes. The image below depicts the child outcome measures used across 3P trials. Despite many of these trials being conducted by the same investigators, note the wide variety of outcomes. Without preregistration, we do not know if all of the outcomes are reported in a particular trial or whether the one that is emphasized was originally primary.

[Image: battery of child outcome measures used across 3P trials, from the supplement]

Seven outcome categories are a lot, especially when five depend on parent self-report. So, we are dealing with multiple outcomes from the same trial, often from the same respondent, often highly intercorrelated, with no independent validation as to whether these outcomes were designated primary in advance or picked from a larger array after analyses had begun. There is a high risk of confirmatory bias.

Preferable would be the strategy adopted by Wilson and others of centering on one outcome, child adjustment, and evaluating 3P on that basis. Even here, though, we run into a problem. Child outcomes can be social, emotional, or behavioral, and we do not know which was primary.

You may not be bothered by all of this, but consider the possibility of someone picking the outcomes that make 3P look best and using that as a basis for committing millions of dollars to funding it, when it may not be effective.
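If you want to see how quickly this multiplicity bites, here is a hypothetical simulation; the outcome count, sample size, and correlation are assumptions for illustration, not values from the 3P literature. Take seven intercorrelated outcomes, a treatment with no effect whatsoever, and a simple t-test on each:

```python
# Chance of at least one "significant" outcome among seven correlated
# measures when the treatment truly does nothing. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_arm, n_outcomes, r, n_sims = 50, 7, 0.5, 2000

cov = np.full((n_outcomes, n_outcomes), r)  # pairwise correlation r
np.fill_diagonal(cov, 1.0)

hits = 0
for _ in range(n_sims):
    a = rng.multivariate_normal(np.zeros(n_outcomes), cov, n_per_arm)
    b = rng.multivariate_normal(np.zeros(n_outcomes), cov, n_per_arm)
    pvals = stats.ttest_ind(a, b).pvalue   # one t-test per outcome
    hits += (pvals < 0.05).any()

print(f"P(>=1 'significant' outcome | no true effect) ~ {hits / n_sims:.2f}")
# Expect something around 0.2 here -- roughly four times the nominal 5%
# error rate of a single preregistered primary outcome.
```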

Art Garfunkel’s Mr Shuck ‘N Jive http://tinyurl.com/k7wbwo4


Results: shuckin and jivin’

One of the first things you should do when you get the results of a meta-analysis is to check the analysis of heterogeneity to determine whether this group of studies can be integrated to produce a valid summary effect size. The authors report two measures, I² and Q, both of which indicate considerable heterogeneity, particularly for the very important child outcomes. This should be like a warning light flashing on your dashboard: you should pull over and see what is wrong. All that the authors do is recompute these measures for the individual levels 1–5 of the intervention. Because a number of these levels have only a few trials, the measures of heterogeneity look better, but they are useless because of low power. The problem is considered solved, except that the authors have missed an important opportunity to throw the hood up on their meta-analysis and see what is going wrong.
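For readers who want to check such warning lights themselves, Q and I² are easy to compute from per-study effect sizes and variances. Here is a sketch with invented inputs; the 50% threshold in the comment is the usual rule of thumb, and note that with only a handful of trials Q has little power to detect anything:

```python
# Fixed-effect heterogeneity check: Cochran's Q and Higgins' I^2.
# Effect sizes and variances below are invented for illustration.
import numpy as np
from scipy import stats

d = np.array([0.15, 0.35, 0.50, 0.80, 1.10])  # per-study effect sizes
v = np.array([0.04, 0.05, 0.03, 0.06, 0.05])  # per-study variances

w = 1.0 / v                                   # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)

Q = np.sum(w * (d - d_pooled) ** 2)           # Cochran's Q
df = len(d) - 1
I2 = max(0.0, (Q - df) / Q) * 100             # Higgins' I^2, in percent

print(f"Q = {Q:.1f}, df = {df}, p = {stats.chi2.sf(Q, df):.3f}")
print(f"I^2 = {I2:.0f}%")  # values above ~50% suggest substantial heterogeneity
```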

The authors also conduct moderator analyses and find that study design (nonrandomized versus randomized), investigator involvement, and having a small number of participants all strongly contribute to more positive findings. Again, the authors should have been greatly troubled by this. Instead, they do two things to get rid of this potential embarrassment. First, they emphasize not whether individual moderator variables have a significant impact, but whether any impact survives inclusion in a multiple regression equation with the other moderator variables. Second, they emphasize that even when they look at conditions in which one of these moderator variables puts 3P at a disadvantage, effects are still significant.

These tactics suggest they are really interested in making the case that 3P is effective, not in examining all the relevant data and potentially rethinking some of the decisions that artificially improve the appearance of 3P being effective.
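To see why folding correlated moderators into one regression can make each look unimportant, consider this hypothetical sketch; the moderator names and numbers are invented. Two study features that travel together, say small samples and developer involvement, each predict larger effects on their own, but entered jointly they split the same variance and their standard errors balloon:

```python
# Hypothetical: collinear moderators can look "non-significant" together
# even though study quality clearly drives effect size. Invented data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
k = 40                                                # number of trials
small_n = rng.integers(0, 2, k).astype(float)         # 1 = small trial
developer = small_n.copy()
flip = rng.choice(k, 4, replace=False)
developer[flip] = 1 - developer[flip]                 # ~90% overlap with small_n
d = 0.3 + 0.2 * small_n + 0.2 * developer + rng.normal(0, 0.25, k)

for name, x in [("small_n", small_n), ("developer", developer)]:
    fit = sm.OLS(d, sm.add_constant(x)).fit()
    print(f"{name} alone:    p = {fit.pvalues[1]:.4f}")

X = sm.add_constant(np.column_stack([small_n, developer]))
print("entered together:", np.round(sm.OLS(d, X).fit().pvalues[1:], 4))
# Expect each moderator to look clearly significant alone but to show much
# larger p-values jointly -- not because the effects vanished, but because
# the two flags carry nearly the same information.
```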

The authors separate out what they identify as the three large-scale population trials of 3P. These are extremely expensive trials that adopt a public health approach to reducing child behavior problems. All three were done by the authors, and all are appraised in glowing terms.

The problem is that we have an independent check on the accuracy of what is reported in the one American trial. As I have detailed elsewhere, an earlier paper, published while the trial was ongoing, gives specifics of its design, including primary outcomes. It does not agree with what was reported in the article providing data for the meta-analysis.

Prinz, R. J., Sanders, M. R., Shapiro, C. J., Whitaker, D. J., & Lutzker, J. R. (2009). Population-based prevention of child maltreatment: The US Triple P system population trial. Prevention Science, 10(1), 1-12.

Then there is the problem of the two large trials absent from the meta-analysis, both of which have been interpreted as negative. One of them is published and you can find it here.

Little, M., Berry, V., Morpeth, L., Blower, S., Axford, N., Taylor, R., … & Tobin, K. (2012). The impact of three evidence-based programmes delivered in public systems in Birmingham, UK. International Journal of Conflict and Violence, 6(2), 260-272.

The other is unpublished. It was conducted by investigators who have published other trials, and it could readily have been identified either by contacting them or from the considerable press coverage it received. [Update: One of the authors is a member of the International Scientific Advisory Committee for Triple P Parenting]

Schönenberger, M., Schmid, H., Fäh, B., Bodenmann, G., Lattmann, U. P., Cina, A., et al. (2006). Projektbericht “Eltern und Schule stärken Kinder” (ESSKI): Ein Projekt zur Förderung der Gesundheit bei Lehrpersonen, Kindern und Eltern und zur Prävention von Stress, Aggression und Sucht – Ergebnisse eines mehrdimensionalen Forschungs- und Entwicklungsprojekts im Bereich psychosoziale Gesundheit in Schule und Elternhaus [Project report “Parents and School Strengthen Children” (ESSKI): A project to promote the health of teachers, children, and parents and to prevent stress, aggression, and addiction – results of a multidimensional research and development project on psychosocial health in school and at home].

Both the media release and the research report can be downloaded from the website of the ESSKI project (http://www.esski.ch/downloads). Subsequent analyses have cast doubt [Updated] on whether there were positive findings, and the trial has remained unpublished.

Taken together, we have

  • Serious doubts about the validity of reports of the only US study.
  • No way of independently checking the validity of the two other studies conducted by the authors of the meta-analysis.
  • The unexplained absence of two negative trials.

Update 6/2/2014 7:51 am The authors bolster the case for the strength of their findings by bringing in the so-called failsafe N to argue that hundreds of unpublished studies would have to be sitting out there in desk drawers to reverse their conclusions.

Orwin’s failsafe N was as follows for each outcome: child SEB outcomes = 246, parenting practices = 332, parenting satisfaction and efficacy = 285, parental adjustment = 174, parental relationship = 79, child observations = 76. It is highly unlikely that such large numbers of studies with null results exist, indicating the robustness of the findings to publication bias. For parent observations, Orwin’s failsafe N could not be computed as the overall effect size was below 0.10, the smallest meaningful effect size.
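For those unfamiliar with the statistic, Orwin's failsafe N is nothing more than simple arithmetic: the number of unpublished studies averaging zero effect that would be needed to drag the pooled effect size below some minimal criterion. A sketch, with invented inputs rather than the meta-analysis's actual values:

```python
# Orwin's failsafe N: how many effect-zero studies would pull the mean
# effect size below a criterion value. Inputs here are invented.
def orwin_failsafe_n(k, es_observed, es_criterion=0.10):
    """k observed studies with mean effect es_observed; returns the number
    of null (es = 0) studies needed to reduce the mean to es_criterion."""
    return k * (es_observed - es_criterion) / es_criterion

print(orwin_failsafe_n(k=50, es_observed=0.35))  # 125.0 hypothetical studies
```

The formula itself is trivial; the problem, as discussed next, is what it is taken to prove.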

Bringing in failsafe N in an evaluation of an effect size is still a common tactic in psychology, but the practice is widely condemned, and the Cochrane Collaboration specifically recommends against it as producing unreliable and invalid results.

I have discussed this in other blog posts, but let me point to some key objections. The first and most devastating is that these analyses have not provided an estimate of effect sizes in well-done, adequately sized trials. Rather, they produced a biased estimate based on poor-quality studies that should not have been included, among them nonrandomized studies. Second, the authors have not dispensed with the heterogeneity they found in the published studies, and so cannot generalize to whatever studies remain unpublished. Third, they assume that the unpublished studies contain only null findings, whereas some of them could actually have demonstrated that 3P has a negative effect, particularly in comparison to the active control groups that were suppressed in this meta-analysis.

I am confident that clinical epidemiologists and those doing meta-analyses with biomedical data would reject out of hand this argument for the strength of 3P effects from failsafe N.

Conclusion

Despite all these objections, the article ends with a glowing conclusion, suitable for dissemination to funding agencies and for posting on websites promoting 3P.

The evolution of a blended system of parenting support involving both universal and targeted elements has been built on a solid foundation of ongoing research and development, and the testing of individual components comprising the intervention. The present findings highlight the value of an integrated multilevel system of evidence-based parenting programs and raise the real prospect that a substantially greater number of children and parents can grow up in nurturing family environments that promote children’s development capabilities throughout their lives.

I suggest that you now go back to the abstract and reevaluate whether you accept what it says.

The problems with this meta-analysis are serious but reflect larger problems in the evaluation of psychological treatments

I have identified serious problems with this meta-analysis, but we need to keep in mind that it got published in a respectable journal. That could only have happened if reviewers let it through. Finding it in the literature points to more pervasive problems in the research evaluating psychological treatments. This meta-analysis is not alone in failing to provide valid estimates of effectiveness: other meta-analyses are likewise done by persons with conflicts of interest, and many of the studies entered into meta-analyses are themselves conducted by persons with conflicts of interest and report their data selectively.

Journals have policies requiring disclosure of conflicts of interest in these circumstances, but at least in the psychological literature, you can find few examples of these policies being enforced.

The authors registered their protocol for conducting this meta-analysis, but then did not adhere to it in important ways. Obviously, reviewers did not pay attention. It is actually uncommon for meta-analyses of psychological treatments to be preregistered, but any benefit of recommending or even requiring this is lost if reviewers do not check whether authors delivered what they promised.

A considerable proportion of the clinical trials included in the meta-analysis had either one of the authors involved or another person with financial interests at stake. I did not do a thorough check, but in reviewing this literature I found no examples of conflicts of interest being declared. Again, journals have policies requiring disclosure, but these policies are worthless without enforcement.

Funding agencies increasingly require preregistration of the protocols for clinical trials before the first patient is enrolled, including their key hypotheses and primary outcomes. Yet, other than a few journals like PLOS One, most do not require that protocols be available or alert reviewers to the need to consult them. Authors are free to assess many candidate outcomes and decide which will be called primary after examining their data. The bottom line is that we cannot determine whether outcomes have been selectively reported with a strong confirmatory bias. Meta-analyses of the existing literature on psychological treatments may offer exaggerated estimates of their efficacy because they integrate selectively reported outcomes.

The authors of this meta-analysis need to be called out on their failure to declare conflicts of interest, and the claims they make for the efficacy of 3P should be dismissed, particularly when they are at odds with assessments done by people without financial interests at stake. However, we should take this blatant failure of peer review and editorial oversight as an opportunity to demand reform. Until those reforms are achieved, some of which involve simply enforcing existing rules, the literature evaluating psychological treatments is suspect, and especially its meta-analyses.

Category: Conflict of interest, mental health care, meta analysis, parenting, Publication bias | 7 Comments

Patients’ perspectives on schizophrenia: A Beautiful Mind, the movie


John Forbes Nash, Jr. photographed by Peter Badge

An old movie by now, Ron Howard’s much-awarded 2001 “A Beautiful Mind” gracefully stands the test of time among great movies about severe mental illness that illustrate the patient’s perspective.

This biographical movie tells the life story of John Forbes Nash, Jr., a Nobel Memorial Prize laureate in Economics.

Nash, a brilliant mathematician, was diagnosed with schizophrenia in his early thirties. The movie follows Nash from his start at Princeton as a young graduate student to his late sixties, ending with Nash leaving the auditorium in Stockholm after accepting the Nobel Prize.

Directed by Ron Howard and starring Russell Crowe as John Nash, A Beautiful Mind, winner of four Oscars and four Golden Globes, is in a category of its own.

There are few movies that manage to win universal applause while also providing an accurate, non-romanticized record of the mental angst and despair that are part of severe mental illness.

A Beautiful Mind manages to do all of the above in an elegant and deeply touching manner. Furthermore, the movie offers a unique insider view of what schizophrenia feels like from the patient’s perspective. The movie illustrates not only what hallucinations are but also the quid pro quo relationship between hallucinated perceptions and a delusional interpretation of reality. The world as seen through Nash’s eyes in the midst of his psychosis is a terrifyingly confusing and threatening place. Things are connected in ways that are meant to be secret. Nash believes that he has both been given permission and has the unique ability to see these connections. The expected secrecy is a good explanation of why no one else can see what he sees.

The Reality of Illusion

In a sense, we all fabricate our realities. The way we know that our brand of “reality” is indeed real is by confronting it with the “realities” of other people out there. Family, friends, co-workers, the neighbor next door… Do our realities match? Bingo. Chances are that we are seeing it as it is; that our subjective reality matches the objective reality, whatever that is.

Nash is an able builder of his own reality. He lives in a complex world, full of abstract meaning and symbols, which may be invisible to the untrained eye. But lack of visibility does not equal nonexistence. That is his first challenge.

Further, it just so happens that part of Nash’s reality is genuinely secret. Thus, Nash does not get the benefit of the checks and balances that most of us have; he misses out on the opportunity to confront and match his subjective perceptions against others’, to recalibrate and fine-tune his reality testing. This is his second challenge.

Add to that grave disappointment, rooted in tremendous expectations, fertilizing a mind whose strength lies in moving fluidly between concepts that do not seem even remotely related. A mind that remains organized in the most disorganized way, so to speak.

And that is when John Nash’s tenuous reality breaks down into paranoia and hallucinations.

Of note, while auditory hallucinations are frequent in schizophrenia, the visual hallucinations that carry a good part of the movie are in fact rare. In Nash’s own words:

Something like [visual hallucinations] this may appear in the movie [A Beautiful Mind], but of course the delusion pattern in the movie is going to be somewhat different, because the person sees things that aren’t there. It’s not just ideas or the idea of a plot or something, he thinks he sees certain persons. I never saw anything.

As the story unfolds, it is no surprise to see that Nash only finds further confirmation of his paranoid fears in the fact that “they” are successful in locking him up.

The plot thickens in a gut-sickening way. Viewers themselves are bewildered and traumatized by the many overlapping and self-concealing layers of whatever reality is. Being inside Nash’s mind is no easy treat, even when it happens from the comfort and safety of the spectator’s seat.

What is it “really” that a good treatment for schizophrenia does? 

We further understand why many psychiatric patients decide not to take their medications. Following insulin-shock therapy and a full course of antipsychotic medications, Nash’s beautiful mind seems shattered to pieces. It is as if an extravagantly colored, unreal flamingo–bird-of-paradise of a majestic bird, one that once flew close to the sun, suddenly had its long wings clipped. The bird that once soared aimlessly in the depths of the sky is now orderly but mindlessly dragging its feet in the mud.

It is important to note that this rather dire perspective on the effects of treatment is in fact not supported by John Nash’s own recollection of that period.

And it did happen that when I had been long enough hospitalized that I would finally renounce my delusional hypotheses and revert to thinking of myself as a human of more conventional circumstances and return to mathematical research. In these interludes of, as it were, enforced rationality, I did succeed in doing some respectable mathematical research.

The movie bluntly documents the devastating effects of schizophrenia on Nash’s personal and professional life. The struggle over whether to take medications, the challenge of facing one’s demons without medication support, the predictable yet still heart-wrenching return of the nightmarish hallucinations, the ambiguous relationship with the hallucinated reality itself, despite repeated reality “infusions” provided by family, friends, and doctors – we see all of it in its terrible glory. And we get to understand.

The End?

A Beautiful Mind ends with an aging Nash who, after painstakingly realigning the pieces of his life’s puzzle, is honored by his fellow Princeton professors. The final scene shows Nash seeing three of his old hallucinated friends/tormentors as he leaves the auditorium in Stockholm after receiving his Nobel Prize.

Treated with medications early on, John Nash follows a schizophrenia course of partial recovery. Nash eventually chooses to stay off medications – a decision that leads, in Nash’s own words, to “25 years of partially deluded thinking.”

Nash does attain a stable state off medication, but it appears that his symptoms do not completely go away – instead they decrease in intensity and go into partial remission. This illustrates that the prognosis of schizophrenia is not universally dire and that stability without medications can be achieved in specific cases. The movie also makes the counterpoint that Nash lost many years of productive life to his untreated psychosis before he reached this final state of relative stability.

Further reading:

1. “John F. Nash Jr. – Biographical”. Nobelprize.org. Nobel Media AB 2013. Web. 30 May 2014

Note: For an educational question and answer version of this post you can go here.

Category: antipsychotics, schizophrenia, Uncategorized | Leave a comment