UPDATE: It has been pointed out in the comments that the real problem is not with randomization per se; the problem is with how we apply the randomization. With this I completely agree. On Twitter, Matt Hodgkinson has suggested another model known as a “patient preference trial” which would allow us to keep the benefits of randomization while also getting at some of the issues I bring up in the post. More details on patient preference trials can be found here.
This morning I read a post on the blog of Dr Arya Sharma that meshed very well with an idea that’s been percolating in my head for quite a while (if you are interested in obesity and don’t read his blog, you are missing out). In the post, Dr Sharma discussed the results of a new study comparing low carb and high protein diets.
From the post (emphasis mine):
Subjects were randomised to 12 months of a standard-protein diet (protein:fat:carbohydrate ratio 20:30:50 % of energy) and one where energy from carbohydrates was reduced and replaced by protein (30:30:40 % of energy).
Both groups lost a significant amount of weight over 12 months (6.6 Kg on the standard-protein diet, 9.7 Kg on the high-protein diet).
The diets had no impact on kidney function despite improvements in diabetes control.
Of note, only 45 of the 76 volunteers completed the study – a drop-out rate of over 40%
Overall, the study shows that differences in carb to protein ratios matter neither in terms of weight loss nor in their impact on kidney function.
Perhaps, even more importantly, the study shows that trying to keep people on diets – even in clinical trials – is challenging, with almost half the subjects abandoning their diet within 12 months.
As I have noted before, diets only work when you stick with them. Rather than obsessing about the exact composition of your diet, it may be best to choose the one you like best and can actually stay on.
To anyone who follows these things, Dr Sharma’s conclusion makes perfect sense. And yet we keep doing randomized trials of different lifestyle interventions. If anything, I think that randomized studies may actually be underselling the benefits of some lifestyle interventions.
Randomized controlled trials are the absolute gold standard when trying to determine the benefit of any sort of health intervention. By randomizing participants to different treatment groups, you are able to more clearly determine the physiological impact of the intervention. Randomized trials are extremely important, and they’ve given us a lot of tremendously useful information about various lifestyle interventions. So why do I think we need to move away from randomized studies for some lifestyle interventions (at least with effectiveness studies, if not efficacy studies)?
As Dr Sharma notes, lifestyle interventions only work if you actually follow them. If followed, all the major diets lead to reductions in food intake, which will either lead to reduced weight gain or weight loss (assuming no changes in energy expenditure). The problem isn’t with the diets – it is with following the diets.
This is why I think that randomized controlled trials are underselling the benefits of diets and other lifestyle interventions. If you randomly allocate people to a cookie-cutter intervention, it’s not hard to see that the intervention might be wholly inappropriate for many of the participants (not medically, but in terms of being something which they can manage within their lifestyle).
So why not take the opposite approach? Recruit 200 people, and allocate them to 4 different lifestyle interventions. But instead of allocating them randomly, do some sort of survey or interview (including their entire family, if possible) to see which type of intervention is most likely to fit within their lifestyle.
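To make the contrast concrete, here is a minimal sketch of the two allocation schemes. Everything in it is hypothetical: the intervention names, the idea of reducing a survey or interview to a single "best fit" preference, and the use of preference-match as a crude proxy for likely adherence are all illustrative assumptions, not part of any actual trial protocol.

```python
import random

# Hypothetical intervention arms -- stand-ins for whatever four
# lifestyle interventions a real trial would compare.
INTERVENTIONS = ["low-carb", "low-fat", "Mediterranean", "meal-replacement"]

def random_allocation(participants):
    """Standard RCT-style allocation: ignore preferences entirely."""
    return {p["id"]: random.choice(INTERVENTIONS) for p in participants}

def preference_allocation(participants):
    """Assign each participant to the arm their survey/interview
    identified as the best fit for their lifestyle."""
    return {p["id"]: p["preference"] for p in participants}

def preference_match_rate(participants, allocation):
    """Fraction of participants assigned to the intervention they said
    fits their lifestyle -- a crude proxy for expected adherence."""
    matched = sum(1 for p in participants
                  if allocation[p["id"]] == p["preference"])
    return matched / len(participants)

# 200 hypothetical recruits, each with a surveyed preference.
random.seed(0)
cohort = [{"id": i, "preference": random.choice(INTERVENTIONS)}
          for i in range(200)]

print(preference_match_rate(cohort, random_allocation(cohort)))      # ~0.25 on average
print(preference_match_rate(cohort, preference_allocation(cohort)))  # 1.0 by construction
```

The point of the toy comparison is only that random allocation assigns roughly three quarters of participants to a diet they did not pick, which is exactly the mismatch the proposal above is trying to avoid; it says nothing about the internal validity that randomization buys, which is the trade-off the patient preference designs mentioned in the update try to manage.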
I think there is already a general acceptance among practitioners (good ones at least) that interventions need to be made to fit the individual, and not the other way around. And I know that researchers often talk about the importance of sussing out factors related to adherence, or identifying responders vs non-responders. But I have yet to see the same logic applied to a large scale study. And I don’t know how journal editors would react to a proudly non-randomized study, as this seems anathema to unbiased research. But it seems like a more reasonable approach than forcing people into interventions that we know they aren’t going to follow for more than a few months.