A March 7, 2013 article in JAMA Internal Medicine claimed that depressed heart patients improved with a treatment involving “centralized, stepped, patient preference-based treatment” and that the benefits were substantially greater than those seen in past intervention studies. The trial, called the Comparison of Depression Interventions after Acute Coronary Syndrome (CODIACS), should get a lot of attention. The article is already being picked up on the web with headlines like “Distance Program Helps Depressed Heart Patients” and “Treating Post-ACS Depression Effective, Cost-Neutral.”
Reading the article a number of times left me with doubts that the trial actually demonstrated the efficacy of the intervention in ways that could be generalized to real-world settings, despite the trial having been conducted in the community. But the article also prompted me to think about the dysfunctional, fragmented system of care for depression in general medical settings in America, how poor the treatment is that patients get there, and the difficulty of doing meaningful research in this context.
The report of the study, along with a thoughtful editorial commentary, is available open access from the journal. I encourage you to read the article now, before proceeding, or to read it along with this blog post, and see if you can see what I saw and whether you agree with my assessments. I am going to be offering an unconventional, but hopefully, in the end, persuasive analysis that concludes that this trial did not show much that we did not already know. Regardless, we can learn something from this article about interpreting the results of clinical trials for depression, with the added bonus of this article showing how to effectively write a report of a clinical trial in a way that captures attention. This article is a truly masterful example.
Patients. The 150 patients were recruited across five sites, with 73 randomized to the intervention group and 77 to the routine care group. To be eligible, patients had to have elevated scores on the Beck Depression Inventory, a self-report depression questionnaire, and to be two to six months past hospitalization for an acute coronary syndrome, which could be a myocardial infarction (heart attack) or unstable angina.
There were no formal diagnostic interviews conducted, so patients were not required to have a diagnosable depression of the kind for which there are established treatment guidelines. Entry into the study required a score of 10 or greater on the Beck Depression Inventory on at least two occasions or a score >15 on one occasion, but this yields a sample in which many, and perhaps most, patients were not actually clinically depressed.
The treatment. Patients assigned to the intervention group obtained treatment from an interdisciplinary team of professionals, including a local physician or advanced practice nurse and a therapist, with psychiatric and psychological supervisors monitoring treatment and patient progress from a centralized location. Patients got their choice of treatments: problem-solving therapy, medication, a combination, or neither. Problem-solving treatment is a form of cognitive behavior therapy that is practically oriented and provides patients with the tools to tackle everyday problems that they identify as being tied to their depression. If patients chose this therapy, it was first delivered over the Internet by way of interactive video calls. Subsequent sessions were provided by video call or telephone at either the clinic or the patient’s home, depending on the patient’s preference.
Patients in the active treatment group who chose antidepressant medication were interviewed by a local physician or nurse, with the patient and this local health provider having to reach agreement on the appropriate medication based on the patient’s past experience with antidepressants and current symptoms. The patients were first interviewed face to face at one- to two-week intervals and then every three weeks.
The intervention has some state-of-the-art features, and if we think of patients assigned to it as getting a Mercedes, then the patients assigned to the routine care condition got a skateboard. Their primary care physician or cardiologist was simply informed of their participation in this study and their score on the depression questionnaire. The patients were then free to obtain whatever depression care they could from the physician or another health care provider, but, as we will see, there were substantial barriers to their getting treatment, and few who were not already receiving treatment went on to obtain it.
Given that the study was conducted in the United States, it is important to know whether treatment was free or whether patients had to pay for it, either out of pocket or through insurance, often with a substantial co-pay. This information is not provided in the article, but I emailed the first author, who indicated that treatment was free in the intervention condition, whereas patients had to pay for any treatment if they were assigned to routine care. There are a couple of issues here. First, patients may have been motivated to enroll in the study, rather than simply get treatment through their health care provider, solely because of the 50:50 chance of getting treatment that they could not otherwise afford. Second, being assigned to the intervention rather than the control group meant patients not only got a complex intervention probably not readily available elsewhere, but got it free. So, any difference in outcomes between the intervention and control groups could be due to patients getting free treatment they wanted, at their choice of home or clinic, not the specifics of the treatment. Maybe all we need to improve the outcome of depression is to make treatment free and readily available, rather than adding all these bells and whistles. Finally, differences in outcomes might reflect patients assigned to the control group registering their disappointment at not being assigned to the intervention group, not the benefits of the intervention. The outcomes for the patients assigned to routine care could thus be artificially lowered, making the intervention look more effective.
Treatment already being received. Rates of treatment with antidepressants in the United States and western European countries are high, and the number of people on antidepressants probably exceeds the number of people who are depressed, even allowing for lots of depressed people not getting treatment. In this study, 27 of the patients assigned to the intervention arm of the study and 26 of the patients assigned to routine care were already receiving an antidepressant. So, we need to take into account any increases in this number. It is to the authors’ credit that they even reported this. Most studies of enhanced care for depression do not disclose the extent to which patients being enrolled are already in treatment, so readers are left assuming that patients were not already in treatment, or having to guess the extent to which they were, without much information to go on.
Much of the antidepressant treatment patients were already receiving was inappropriate or ineffective. It is estimated that 40% of patients in general medical settings who receive an antidepressant derive no benefit over simply remaining on a waiting list. That is because some of the treatment is provided to patients who are too mildly depressed to show benefit or who are simply not even depressed, and among patients who are sufficiently depressed to benefit, there is inadequate patient education and follow-up. It is important to emphasize that antidepressants do not make unhappy people happy. Rather, the effectiveness of these drugs is limited to persons who have a diagnosable depression, who, in the case of this study, may have been in the minority.
Routine management of depression in general medical settings in the United States is so poor that patients receiving antidepressants often do not obtain the benefit that they would have gotten from assignment to a pill placebo condition in a clinical trial. Practice guidelines dictate that patients be contacted at five weeks after starting an antidepressant. If improvement is not apparent, their dose should be adjusted, they should be switched to another antidepressant, or perhaps they should simply be given some education about the need for adherence. Guidelines dictate this, but are notoriously ineffective in ensuring that patients get the necessary follow-up. Patients getting an initial prescription for an antidepressant from a primary care physician may very well simply disappear into the community, often without even renewing their prescription.
In contrast, patients assigned to a pill placebo condition in a clinical trial get much more than a sugar pill: they get positive expectations and a lot of regular attention and support. Any differences found between an antidepressant and a pill placebo condition in a clinical trial have to be over and above this attention and support, because patients are blinded as to whether they are getting the antidepressant or the placebo.
Results of the trial. Patients assigned to the intervention group dropped an average of 3.5 points more on the depression questionnaire, relative to patients assigned to routine care. Take a look at Figure 2 from the article, which compares this trial to other data concerning treatment for depression. It involves a blobbogram or forest plot of this and past studies. To understand what that means, you can click on this link, or you can go to the excellent, readily understandable discussion on pages 14 to 18 of Ben Goldacre’s Bad Pharma. But for our purposes, it would not be an outrageous distortion to think of this forest plot as a snapshot of a horse race. (I know, an oversimplification, and past co-authors on meta-analyses, please forgive me.) The horse out in front represents the results of this CODIACS trial, and the only other horse almost neck and neck represents the results of the COPES randomized controlled trial, which served as preliminary work for CODIACS and was conducted by the same authors.
Three and a half points on the Beck Depression Inventory, with its range of possible scores from 0 to 63, does not sound like much, but this could be an exceptional finding. Figure 2 shows that this trial, along with the earlier study done in preparation for it, exceeds the effects found in meta-analyses (a statistical tool for integrating results of different studies) of other complex (collaborative care) interventions in medical settings, represented by the horse at the top of the diagram; in meta-analyses of published and unpublished clinical trials of SSRI antidepressants, represented by the next horse down; in meta-analyses of only published trials of SSRI antidepressants, which, because of publication bias, show larger effects, represented by the next horse down; and in a variety of single trials. We are therefore talking about an apparently big effect, especially for a group with low depressive symptoms to begin with.
Yet, before you conclude “wow!” we need to ask if this is really that impressive. It is important to note that effect sizes obtained in clinical trials depend not only on changes observed in the intervention group, but also changes observed in the control group. A control group showing little change can make an otherwise mediocre intervention look impressive. In the case of this trial, there was no change in the routine care group. So, we might simply be comparing doing a whole lot with this intervention to doing basically nothing, without getting at what of the “whole lot” mattered for outcome.
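The arithmetic behind this point is easy to sketch. In the toy calculation below (illustrative numbers only, not taken from the CODIACS report), the same improvement in the intervention group yields a much larger standardized between-group effect when the control group barely changes:

```python
# Hypothetical sketch: the same intervention change looks very different
# depending on what happens in the control group. Numbers are illustrative,
# not drawn from the CODIACS article.

def between_group_effect(change_intervention, change_control, sd=9.0):
    """Standardized between-group difference in pre-post change scores."""
    return (change_intervention - change_control) / sd

# The intervention group improves by 5 BDI points in both scenarios.
d_active_control = between_group_effect(5.0, 4.0)  # control also improves
d_static_control = between_group_effect(5.0, 0.5)  # control barely changes

print(f"vs. improving control: d = {d_active_control:.2f}")
print(f"vs. static control:    d = {d_static_control:.2f}")
```

The intervention did exactly the same work in both scenarios; only the comparator moved.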
If we go to the meta-analysis of collaborative care trials represented by the top horse, we find that it considered 37 trials involving 12,355 depressed patients. The results were strong enough to recommend implementation, but (do check this out) only in the United States (!). Trials conducted outside the United States did not demonstrate a significant effect on patient outcomes of improving routine care for depression in this manner. Why? Because the other countries are too primitive to benefit? Hardly: the other countries are mainly the United Kingdom and the Netherlands, where routine care for depression is less fragmented and poses less of a financial burden on patients. So, we might infer that collaborative care works best in contexts, like the United States, in which there is lots of room for improvement in routine care for depression, including making it less costly to patients.
Returning to our discussion of the CODIACS trial, we can find an alternative expression of its results, namely, that 24 patients in the active treatment group achieved remission of depression, versus 16 in the usual care group. Thus, 49 patients in the active treatment group and 57 patients in the routine care group remained depressed. This is not atypical, and shows just how far we have to go in getting better outcomes for treatment of depression in the community, better even than claimed for this CODIACS trial. Despite being the horse out ahead of the pack, the study left most patients still with their depression.
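To put those remission figures in more familiar clinical terms, here is a rough back-of-the-envelope sketch. I use the randomization counts of 73 and 77 as denominators; the article's own counts of patients remaining depressed suggest some missing data, so treat these as approximations:

```python
# Back-of-the-envelope arithmetic on the remission figures reported in the
# article: 24 of 73 intervention patients vs. 16 of 77 routine-care patients.
# Denominators are the randomization counts, an approximation given possible
# missing data.

remit_intervention = 24 / 73   # roughly a third remitted
remit_control = 16 / 77        # roughly a fifth remitted

absolute_benefit = remit_intervention - remit_control  # ~12 percentage points
number_needed_to_treat = 1 / absolute_benefit          # ~8 patients

print(f"remission, intervention: {remit_intervention:.0%}")
print(f"remission, routine care: {remit_control:.0%}")
print(f"absolute difference:     {absolute_benefit:.0%}")
print(f"NNT for one extra remission: {number_needed_to_treat:.1f}")
```

An NNT of roughly eight is respectable, but it also means the great majority of treated patients remitted anyway or not at all, consistent with the point above.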
Treatment received after patients were randomized to intervention or control group. In the intervention group, the number of patients on antidepressants went from 27 to 37 out of 73, and for the routine care group, the number went from 26 to 28 out of 76. These are not impressive numbers. The number of patients in the intervention group receiving psychotherapy increased from 6 to 48, and the number in the control group increased from 7 to 14. These are more impressive numbers, and consistent with the view that patients identified as depressed in general medical care often have difficulty accessing and completing appropriate, affordable psychotherapy unless there is some assistance. Note too that the therapy for the intervention group was provided wherever the patient preferred, either in the convenience of their home or at a clinic.
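Expressed as proportions, the contrast between the modest antidepressant change and the large psychotherapy change becomes stark. This is a quick sketch using the denominators of 73 and 76 given in this passage; applying the same denominators to the psychotherapy counts is my assumption:

```python
# Treatment uptake before vs. after randomization, using the counts reported
# in the article. Denominators of 73 and 76 come from the same passage; the
# psychotherapy denominators are assumed to be the same.

def uptake(before, after, n):
    """Return (before %, after %, percentage-point change)."""
    return (100 * before / n, 100 * after / n, 100 * (after - before) / n)

antidep_intervention = uptake(27, 37, 73)  # roughly 37% -> 51%
antidep_control = uptake(26, 28, 76)       # roughly 34% -> 37%
therapy_intervention = uptake(6, 48, 73)   # roughly 8% -> 66%
therapy_control = uptake(7, 14, 76)        # roughly 9% -> 18%

for label, (pre, post, delta) in [
    ("antidepressants, intervention", antidep_intervention),
    ("antidepressants, control", antidep_control),
    ("psychotherapy, intervention", therapy_intervention),
    ("psychotherapy, control", therapy_control),
]:
    print(f"{label}: {pre:.0f}% -> {post:.0f}% ({delta:+.0f} points)")
```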
A number of factors could explain these differences in the therapy received by the intervention versus the control group. Patients in the intervention group could have been getting more psychotherapy at follow-up because they were encouraged to do so, because it was free, or because it was more readily available and convenient than what patients got in the routine care group, where they might not have been able to find a therapist with fees they could afford, even if they looked hard.
So, we can conclude that the intervention modestly increased the number of patients on antidepressants and rather substantially increased the number of patients getting Internet-provided psychotherapy, whereas being assigned to the control group meant not much change in treatment. The net effect was a change in depression scores, but one largely driven by not much happening in the routine care group.
Cost analysis. The authors concluded that healthcare did not cost more for patients assigned to the intervention group versus the control group. They arrived at this conclusion by combining the cost of mental health care, which was higher for the intervention group, with the cost of general medical care, which was lower for the intervention group. We should not take this assessment of no increase in cost very seriously. These estimates are based on very small numbers of patients, and they do not take into account the considerable cost of setting up, staffing, and maintaining this complex centralized system. I think these costs would leave this complex intervention, like many other complex collaborative care interventions for depression, feasible within a research trial but not sustainable in routine care. A commentary on a cost analysis of the authors’ earlier COPES study noted:
One cannot compare the cost of coq au vin and a glass of pinot noir at a local French restaurant to the same meal in Paris without including the cost of airfare and hotel. The cost of enhanced depression care is like the cost of the French meal; the real cost must include all other expenses for the trip to get us there.
And it is probably unrealistic to assume we can improve the treatment of depression with no increase in costs, anyway.
From this article, we don’t really learn:
- How many of these patients were actually clinically depressed.
- Whether treatment with antidepressants was appropriate for these patients.
- The quality of care patients were receiving in routine care, either before randomization or after, though there is reason to believe that it was quite inadequate.
- Whether encouragement, cost, or accessibility led to more psychotherapy being received by the patients in the intervention group.
- Whether any of the active components of the intervention decisively mattered, rather than the positive expectations, attention, and support that the patients received.
Bottom line: in the context of the usually poor care for depression in general medical settings in the United States, we don’t know whether this kind of intensity is needed to achieve better outcomes or whether lower-intensity interventions could have achieved much the same effect.
Maybe we are indeed observing the effects of a powerful intervention in the CODIACS trial. But we cannot rule out that what we are seeing is a false signal amid a lot of noise. To really detect a powerful effect due to the intervention, we would need a comparison group of different patients drawn from a different setting. A trial reporting an intervention being better than routine care for depression does not demonstrate that the intervention is good, nor does it establish that the intervention itself is the source of the apparent effect.
Just what is this routine care being provided anyway? Three of the authors of this article have co-authored an excellent paper on how having routine care as the comparison group for an intervention does not necessarily allow us to say much about the effectiveness of the intervention. It needs to be established whether the routine care was adequate, or whether the intervention simply compensated for the inadequacy of routine care in a way that lots of interventions could have. And then there is the ethical requirement of equipoise. It dictates that researchers have a reasonable assumption that the treatments they are offering to patients in a clinical trial are reasonably equivalent. Could the investigator team justify the needed assumption that they were offering equivalent treatment to the two groups?
If routine care is a car with bad spark plugs, all one needs is new spark plugs. However, it would be a mistake to generalize from a trial in which the intervention was the equivalent of providing new spark plugs that the same intervention would work for other cars that are not in need of such repairs.
How common is it that quite modestly sized studies like the CODIACS and its predecessor COPES trial finish way ahead of the pack of other, mostly larger studies, only to fail to be replicated? The accompanying editorial commentary by Gregory Simon, who has been involved in a number of important studies of collaborative care, indicates this pattern is all too common. John Ioannidis has demonstrated that it is more generally common in medicine in a paper aptly entitled “Why Most Discovered True Associations Are Inflated.” Other authors have referred to this pattern as the decline effect.
Before clinicians and policy makers invest in this complex intervention, we should see what happens if we simply offered free, better-quality, and more appropriate antidepressant treatment, along with access to this kind of psychotherapy, which is difficult for general medical care patients to obtain. I would not be surprised if the same effects could be achieved simply by providing this access, without this complex intervention. Or, before we conclude that this intervention has features that were particularly effective, we should at least pay for the treatment of patients in the intervention and control groups in the same way; that alone might eliminate any differences between the intervention and control groups.
Will the publicizing of this study encourage overtreatment with antidepressants of patients who are not depressed? Maybe. One thing that I am uncomfortable with in the study is that the decisions that patients were “depressed,” and very likely even the type of treatment offered, were made based solely on questionnaires. Sure, physicians and nurses discussed treatment with the patients, but primary-care providers notoriously do not conduct a formal diagnostic interview, or even ask many questions, before deciding that a patient is depressed. The exclusive reliance on questionnaires in this study sends the wrong message and is a poor model for routine care. We already face the problem of considerable overtreatment with antidepressants combined with undermanagement, and if this study is taken seriously, it could contribute to that trend.
The packaging and selling of this trial. I teach scientific writing in Europe and I constantly remind participants in my workshops of the need for Europeans to do a better job of promoting their work. I coach them in giving elevator talks promoting their work and themselves, which some participants find difficult and counter to ingrained cultural prohibitions against self-promotion. The Dutch, for instance, warn their children, and Dutch professionals warn their colleagues, about the “tall poppy syndrome”: kop boven het maaiveld uitsteken! This roughly translates as “the tall poppy gets cut” (literally, sticking one’s head above the mowing field). I tell my European participants that they need to emulate the Americans in some ways, even if they would not want to become Americans.
This article provides a masterful example of how to promote a study. It is no surprise, because the list of authors includes a former journal editor, associate editors, and an NIH program officer. The artful persuasion starts in the abstract and introduction, which do not take for granted that the reader already has a sense of the importance of evaluating this particular intervention. In the abstract and introduction, the authors spell out just what a serious problem depression is, how it is suboptimally treated, and what the consequences can be of leaving it inadequately treated.
Starting in the abstract, results that are actually not all that clear cut are presented as strong effects. The abstract ends with a call for a larger, more ambitious study (funding agencies and journalists, please take note). This is no boring “further research is needed,” but a specific call to action for a more ambitious study. A reader can quickly lose sight of the fact that we do not even know how many of these patients were actually depressed, and, in the end, we don’t know whether a specific component of the intervention was needed to produce the difference in outcomes. We don’t even know whether the apparent effects of the intervention depend largely on the poor-quality care being received by the patients assigned to the control condition. But the article does not call attention to these issues.
The argument that this intensive intervention will not cost more seems compelling, but there is actually little basis for it.
In the discussion section, the possibility is raised that improving depression can lead to improved physical health outcomes among cardiac patients, including longer lives. This is a dubious assertion because there are at present no data to support it, but pointing to the possibility is quite important in promoting the significance of this study. I’m not saying that anything the authors do here is inappropriate, but it does go far beyond the data. So, Europeans, please study how these authors open with an impressive statement of the importance of their work and close with an elaboration of it, but also note where, as the Dutch would say, they go over the top.
What are we being sold in this article? I think that the CODIACS intervention has some promising elements, but it represents a complex intervention that is more expensive and less sustainable than the authors acknowledge, once administrative and infrastructure costs are taken into account. Its efficacy remains to be demonstrated in trials with patients appropriately selected for clinical depression. A credible demonstration of its efficacy will require pitting it against a routine care that provides a greater likelihood that depressed patients can readily access affordable treatment that is acceptable to them and that they get the minimal professional attention needed to facilitate that access. The validity of routine care as an appropriate comparison/control group needs to be demonstrated, not assumed, and this requires showing that assignment of patients to it leads to at least a modest increase in utilization of treatment and at least modest improvements in depression scores.
I’d be interested in hearing from readers whether they also zeroed in on what I found important about this article and whether they agree with me about just how ambiguous these findings are.
Coordinating depression treatment from afar: Are results credible? by PLOS Blogs Network, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 4.0 International License.