We need NON-randomized lifestyle interventions

UPDATE: It has been pointed out in the comments that the real problem is not with randomization per se; the problem is with how we apply the randomization.  With this I completely agree.  On Twitter, Matt Hodgkinson has suggested another model known as a “patient preference trial” which would allow us to keep the benefits of randomization while also getting at some of the issues I bring up in the post. More details on patient preference trials can be found here.

This morning I read a post on the blog of Dr Arya Sharma that meshed very well with an idea that has been percolating in my head for quite a while (if you are interested in obesity and don’t read his blog, you are missing out).  In the post, Dr Sharma discussed the results of a new study comparing low carb and high protein diets.

From the post (emphasis mine):

Subjects were randomised to 12 months of a standard-protein diet (protein:fat:carbohydrate ratio 20:30:50 % of energy) and one where energy from carbohydrates was reduced and replaced by protein (30:30:40 % of energy).

Both groups lost a significant amount of weight over 12 months (6.6 kg on the standard-protein diet, 9.7 kg on the high-protein diet).

The diets had no impact on kidney function despite improvements in diabetes control.

Of note, only 45 of the 76 volunteers completed the study – a drop-out rate of over 40%

Overall, the study shows that differences in carb to protein ratios matter neither in terms of weight loss nor in their impact on kidney function.

Perhaps, even more importantly, the study shows that trying to keep people on diets – even in clinical trials – is challenging, with almost half the subjects abandoning their diet within 12 months.

As I have noted before, diets only work when you stick with them. Rather than obsessing about the exact composition of your diet, it may be best to choose the one you like best and can actually stay on.

To anyone who follows these things, Dr Sharma’s conclusion makes perfect sense.  And yet we keep doing randomized trials of different lifestyle interventions.  If anything, I think that randomized studies may actually be underselling the benefits of some lifestyle interventions.

Randomized controlled trials are the absolute gold standard when trying to determine the benefit of any sort of health intervention. By randomizing participants to different treatment groups, you are able to more clearly determine the physiological impact of the intervention. Randomized trials are extremely important, and they’ve given us a lot of tremendously useful information about various lifestyle interventions.  So why do I think we need to move away from randomized studies for some lifestyle interventions (at least with effectiveness studies, if not efficacy studies)?

As Dr Sharma notes, lifestyle interventions only work if you actually follow them. If followed, all the major diets lead to reductions in food intake, which will either lead to reduced weight gain or weight loss (assuming no changes in energy expenditure). The problem isn’t with the diets – it is with following the diets.

This is why I think that randomized controlled trials are underselling the benefits of diets and other lifestyle interventions. If you randomly allocate people to a cookie-cutter intervention, it’s not hard to see that the intervention might be wholly inappropriate for many of the participants (not medically, but in terms of being something which they can manage within their lifestyle).

So why not take the opposite approach? Recruit 200 people, and allocate them to 4 different lifestyle interventions. But instead of allocating them randomly, do some sort of survey or interview (including their entire family, if possible) to see which type of intervention is most likely to fit within their lifestyle.

I think there is already a general acceptance among practitioners (good ones at least) that interventions need to be made to fit the individual, and not the other way around. And I know that researchers often talk about the importance of sussing out factors related to adherence, or identifying responders vs non-responders. But I have yet to see the same logic applied to a large scale study. And I don’t know how journal editors would react to a proudly non-randomized study, as this seems anathema to unbiased research. But it seems like a more reasonable approach than forcing people into interventions that we know they aren’t going to follow for more than a few months.
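To make the intuition above concrete, here is a minimal simulation sketch. Everything in it is a hypothetical assumption for illustration (the diet names, the 8 kg loss for adherers, and the 0.8 vs 0.5 adherence probabilities are invented, not drawn from any study): it just shows how random assignment to a diet that may not fit a person's preferences can dilute the measured average benefit relative to preference-matched allocation.

```python
import random

random.seed(0)

DIETS = ["low_carb", "high_protein", "mediterranean", "low_fat"]
TRUE_LOSS_KG = 8.0         # assumed loss for anyone who actually adheres
ADHERE_IF_MATCHED = 0.8    # hypothetical adherence when the diet fits preference
ADHERE_IF_MISMATCHED = 0.5 # hypothetical adherence otherwise

def mean_loss(n, preference_matched):
    """Average observed weight loss in an arm of n participants."""
    losses = []
    for _ in range(n):
        preferred = random.choice(DIETS)
        assigned = preferred if preference_matched else random.choice(DIETS)
        p_adhere = ADHERE_IF_MATCHED if assigned == preferred else ADHERE_IF_MISMATCHED
        # Non-adherers lose nothing in this toy model
        losses.append(TRUE_LOSS_KG if random.random() < p_adhere else 0.0)
    return sum(losses) / len(losses)

randomly_assigned = mean_loss(2000, preference_matched=False)
preference_matched = mean_loss(2000, preference_matched=True)
print(f"randomly assigned:  {randomly_assigned:.1f} kg")
print(f"preference-matched: {preference_matched:.1f} kg")
```

Under these made-up numbers the randomly assigned arm shows a noticeably smaller average loss than the preference-matched arm, even though every diet "works" equally well for adherers — which is the sense in which a conventional randomized design could undersell the intervention.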

Travis

 


Creative Commons License
“We need NON-randomized lifestyle interventions” by Obesity Panacea, unless otherwise expressly stated, is licensed under a Creative Commons Attribution 3.0 Unported License.

This entry was posted in News, Obesity Research, Peer Reviewed Research.

17 Responses to We need NON-randomized lifestyle interventions

  1. Sheila Kealey says:

    If you don’t randomly assign participants to treatment/lifestyle groups, you can’t be certain that changes observed were due to the treatment/lifestyle . . .

    Better to focus research on designing interventions that maximize the chances of participants adhering to the lifestyle changes and preventing dropout. Our research (WHEL Study) succeeded in changing women’s dietary intake substantially (long term >4 y) with an innovative personalized telephone counseling program (tailored to participants’ schedules and dietary preferences), supplemented by cooking classes and monthly newsletters. We also included participant retention activities over the course of the study (earning “points” redeemable for coupons, etc., for study tasks like blood draws and completing questionnaires). More details of the intervention are here:
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2064909/

    Hopefully more research on barriers to adherence and innovative health behavior change strategies (possibly more personalized strategies, such as those used in the WHEL Study) will improve adherence in future lifestyle change studies.

    • Travis says:

      I guess that’s sort of my point though. In efficacy studies we should continue to randomize. But in effectiveness studies, we don’t really care *why* something works, so long as it does work. As a bit of a hybrid, we could still have some form of randomization and control group. But instead of being randomized to a specific intervention, you’d be randomized to either “intervention” or “control”. The intervention itself would simply be tailored to the individual.

      More research on adherence makes sense to me. But for any given intervention, even if you’re doing everything right with respect to promoting adherence, wouldn’t it still be best to have an intervention that is tailored to fit a person’s likes/dislikes/etc? If you put me into a yoga intervention, I’m unlikely to adhere no matter what the interventionist does :)

  2. Hi Travis,

    I don’t understand Dr. Sharma’s comments on that study. It was not a comparison between low-carb and high-protein diets; it was a comparison of a high-protein vs a normal-macronutrient diet. Furthermore, the HP diet caused 47% more weight loss; the difference simply wasn’t statistically significant. They didn’t report body composition measures, which is important because HP diets usually cause lean mass retention with weight loss. There could have been substantial differences in adipose loss that went undetected due to a lack of power and of body composition measures.

    This study IMO does not support the position that HP offers no weight loss benefit. What it suggests is that the study should be repeated with higher power and DXA to see if they can confirm the suggestion that the HP diet may have been superior for fat loss, consistent with the majority of other HP diet studies that managed to achieve a substantial difference in protein intake between groups. Scott Weigle’s study is notable in this regard.

    • Travis says:

      Your comments about this particular study seem reasonable. But from my reading of the research more generally, it seems that no one diet is hands-down better than others (I’m thinking of the Atkins vs Ornish vs etc studies in JAMA of the past few years), at least with respect to “real world” scenarios, as opposed to super controlled lab-based studies. Is that a fair assessment of the literature?

      Irrespective of this particular study, my point is simply that by focusing on the *best* diet (or the *best* exercise program) for everyone, we lose sight of the fact that the impact of these interventions is likely to vary greatly based on the personal preferences of the individual participants/patients.

      • Hi Travis,

        I agree with your main point and it is interesting to consider the possibility of non-randomized trials. There’s no single ideal diet for everyone and these single-factor diet manipulations tend to yield underwhelming long-term outcomes in the “average person”.

        However, certain diet manipulations do aid weight loss, and increasing the proportion of dietary protein is one of them. My view is that the current study is not inconsistent with that position. It’s a tangent from the main message of your post but I felt it was worth mentioning.

  3. In our book, “Eating healthy and dying obese, elucidation of an apparent paradox”, we have a chapter with the title “Science feeds the confusion”.
    Read our data, presented at the last ECO2013, obtained with a body-physiology-based educational approach! http://www.vitasanas.ch/wp-content/uploads/2013/06/poster-only-eco-liverpool-ok.pdf

  4. Katya says:

    Interesting idea… take a population health approach, vs. a clinical approach. Does it matter in this context if the changes observed were due to the treatment itself, or something else associated with it? Eventually that can be teased out if necessary. But the important thing is that in the real world, changes (weight loss, cessation of weight gain) were in fact observed… yes, efficacy vs. effectiveness.

  5. Jeremy Labrecque says:

    What you’re describing could be done with a conditionally randomized trial, where you find a group of people that you think would benefit from a certain diet and randomly allocate half of them to get the diet.

    I understand the frustration with RCTs and non-adherence (although instrumental variable analyses of RCTs are way underused and can, somewhat, remedy this) but I have no idea how you’d interpret the results of the kind of study you’re describing. You’re guaranteeing the non-existence of a comparison group.

    • Travis says:

      If we take your randomization idea one step further (see my comment to Sheila) I think we can keep the best of both worlds.

      Could you elaborate a bit more on the instrumental variable analyses you mention? Is that similar to intent-to-treat analyses? For the purposes of this discussion I’m less concerned with adherence to the study (e.g. showing up for testing sessions, etc) and more concerned with adherence to the lifestyle intervention (e.g. actually following the prescribed diet). It seems as though things like intent-to-treat analyses are useful for the former, and less useful for the latter, no?

      • Jeremy Labrecque says:

        So I guess your problem is with the fact that people are evaluating stupid, one-size-fits-all interventions that we know will have bad adherence (and therefore not be very effective). I don’t think your problem is with the randomization at all.

        You can design a more complex intervention that involves rules that assign people diets based on personal characteristics and hopefully it will be more effective! But I’d still evaluate it via randomization if possible.

        Instrumental variables are used to obtain non-biased estimates of efficacy from trials with poor adherence (and are really cool!) but I see that your point is more about effectiveness (and rightfully so) so it’s probably not worth using them…

        • You might appreciate the information in this link http://www.vitasanas.ch/wp-content/uploads/2013/06/poster-only-eco-liverpool-ok.pdf

          • Jeremy Labrecque says:

            Interesting. The before/after design does have some important issues though. Those results could be explained if people who were recruited/volunteered were already thinking about losing weight (i.e. they may have lost weight without intervention).

            An idea would be to let people choose their intervention, as you did, and then randomize them to receive the intervention either right away or starting in a month (or in 6 months for those who chose the 6 month program). The latter would be a perfect control group and everyone gets what they wanted!

  6. Ashley says:

    I absolutely love this! I remember reading similar studies in grad school, thinking that the most telling sign of why obesity is still such a problem wasn’t that we hadn’t found the perfect diet, macronutrient ratio or number of meals per day, but the high drop-out rates regardless of intervention. It may not make me highly regarded within the scientific community, but I’ve found much more success working with clients to create habit changes they enjoy rather than pushing them to perfection or following the latest findings to a tee.

  7. I have the impression that you are discussing how to get statistically significant but BIOLOGICALLY (life) IRRELEVANT data!!!
    ANY study MUST consider interindividual differences (partially genetic-dependent) in energy requirements!!
    Any study in which TDEE is based on calculated BMR (RMR, REE) is MEANINGLESS!! See discrepancies between CALCULATED and MEASURED REE!!! http://www.vitasanas.ch/wp-content/uploads/2013/02/who-ree-mesured-rm_mueller.jpg

  8. Johan says:

    I think you are confusing two unrelated things. There is nothing in the need to randomize that prevents you from studying the effect of a personalized lifestyle intervention. The control group is there to give you something to compare against, and the randomization is to randomize the effect of confounders.

    You could recruit 400 and randomize them into two groups. The control group you give some standard reference diet (or maybe the “best” diet by some measure). The treatment group you then give your survey to and based on the results group them into four groups with diets appropriate to the person.

    The important thing is to give everyone in the treatment group the same treatment, but nobody says the treatment has to be a single diet. The treatment can very well be one of four diets chosen based on a survey, as long as the treatment protocol is clear beforehand and you don’t simply make it up as you go along.

    Randomization is not in any way holding back the ability to test personalized diets.
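    Johan’s design can be sketched in a few lines. Everything below is an illustrative placeholder (the diet names, the survey stub, and the two-arm split are invented, not from any actual protocol); the point is that the only random step is the control/treatment split, while the diet within the treatment arm is chosen by a fixed, pre-specified rule.

    ```python
    import random

    random.seed(1)

    DIETS = ["low_carb", "high_protein", "mediterranean", "low_fat"]

    def survey_preference(participant_id):
        # Stand-in for the real survey/interview; here we just derive
        # a preference deterministically from the participant id.
        return DIETS[participant_id % len(DIETS)]

    def allocate(n_participants):
        assignments = {}
        for pid in range(n_participants):
            arm = random.choice(["control", "treatment"])  # the only random step
            if arm == "control":
                diet = "standard_reference"    # same reference diet for all controls
            else:
                diet = survey_preference(pid)  # tailored, but by a fixed rule
            assignments[pid] = (arm, diet)
        return assignments

    alloc = allocate(400)
    ```

    Because the tailoring rule is fixed before recruitment, the comparison of treatment vs control remains a clean randomized contrast even though no two treatment participants need receive the same diet.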

    • Travis Saunders, Phd, MSc, CEP says:

      I agree with you, Johan. No need to get rid of randomization per se, but it would still be a big change from the way things are currently done.

      Matt Hodgkinson also suggested another interesting type of randomization called “patient preference” trials, which sounds like it could be an ideal scenario. https://twitter.com/mattjhodgkinson/status/412705055704637440
