I’ve been involved in a couple of projects recently focusing on the quality of sedentary behaviour questionnaires. What has surprised me is that even though we have a ton of questionnaires that measure sedentary behaviour, we don’t know a ton about their quality.
With any data collection tool, you ideally want it to accomplish two things:
- It should measure what you think it measures (i.e. it is valid)
- It should give consistent results (i.e. it is reliable)
So for example, a pedometer is a valid and reliable way to measure physical activity. It measures what you expect it to measure (steps/day), and it will give consistent results (e.g. if I wear 4 pedometers at the same time, they are likely to give very similar values at the end of the day).
By their nature, questionnaires are often less valid measures of activity and sedentary behaviour when compared to more objective measurement devices (pedometers, accelerometers, etc). Questionnaires really measure how much sedentary behaviour you think you do, rather than how much you actually do (which may or may not be the same). The problem is that objective measures are really expensive, and questionnaires are really cheap. Questionnaires also give some information on context (e.g. whether you are sitting in front of a screen or a book), whereas objective measures mostly just measure whether you are sitting or moving. So most studies use both, or just questionnaires.
As noted above, the big problem is that we don’t really know if these questionnaires are accurate – if a child reports that they get 4 hours/day of screen time, how confident should we be that they actually spend 4 hours/day in front of a screen? And if we used the same questionnaire a week later, are we confident that we’d still get a value of 4 hours/day, rather than 2 or 6? These results can be influenced by a number of factors, including the wording of the questionnaire, the response options you provide, etc. And unfortunately we just don’t know the accuracy of many of the most commonly used questionnaires.
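To make those two questions concrete, here is a minimal sketch, in Python and with made-up numbers, of how validity and test-retest reliability are typically quantified: compare the reported values against a reference measure, and compare two administrations of the same questionnaire. The sample size, bias, and noise levels below are purely hypothetical illustration values, and real validation studies usually report an intraclass correlation (ICC) rather than a plain Pearson correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical sample of children

# "True" screen time in hours/day. This is unobservable in a real study;
# a validation study would use an objective device as the reference instead.
true_screen = rng.gamma(shape=4.0, scale=1.0, size=n)

# Questionnaire responses: truth plus a reporting bias and random noise.
# The bias (+0.5 h) and noise (sd = 1 h) are made-up illustration values.
week1 = true_screen + 0.5 + rng.normal(0.0, 1.0, size=n)
week2 = true_screen + 0.5 + rng.normal(0.0, 1.0, size=n)

# Validity: does the reported value track the reference measure?
validity_r = np.corrcoef(true_screen, week1)[0, 1]

# Test-retest reliability: do two administrations agree with each other?
# (Pearson r as a simple stand-in; validation studies usually report an ICC.)
reliability_r = np.corrcoef(week1, week2)[0, 1]

print(f"validity r:    {validity_r:.2f}")
print(f"test-retest r: {reliability_r:.2f}")
```

Simple as this kind of check is, it just hasn’t been run (or at least reported) for most of the questionnaires in the literature.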
For example, as part of the lead-up to Canada’s 24-hour movement behaviour guidelines, Dr Valerie Carson did a systematic review of the available evidence on sedentary behaviour and health in kids. She found an enormous amount of research – 235 studies with more than 1.6 million participants. Her review had a few interesting findings. First, it noted that we don’t know the validity or reliability of the vast majority of the questionnaires used in those studies. So if a study determined that a child spends 5 hours/day in front of a screen, we really don’t know if that represents their “true” amount of screen time. In fact, even the questionnaires that have undergone testing (a minority of those in the literature) don’t seem to be fantastic – a recent study by Hidding et al. did not find a single questionnaire that was both valid and reliable for measuring sedentary behaviour in kids.
However, that doesn’t mean that these questionnaires don’t provide useful information. For example, Dr Carson et al. also found that self-reported TV viewing and total screen time were associated with a range of important health outcomes (body composition, self-esteem, etc). Even more interesting is that more objective measures of actual sitting time did not show consistent associations with health outcomes in these populations. So the tools that we know are more accurate (i.e. objective measures of sitting time) aren’t associated with health as frequently as the tools that we know are probably less accurate (i.e. questionnaires).
So we are in a strange situation where we know that sedentary behaviour questionnaires don’t do a great job of measuring sedentary behaviour, but we also know that the results of these questionnaires are strongly associated with important health outcomes. This issue isn’t unique to sedentary behaviour – it’s an issue with questionnaires in general. But it’s still a bit vexing. One helpful point is that random reporting error tends to weaken associations rather than create them, so a strong true effect can still show through a noisy measure. Maybe screen time has such a strong impact on health that it doesn’t matter if the numbers aren’t bang on (this is my guess, given that we see similar relationships regardless of the specific questionnaire used). For now, what we really need are valid and reliable questionnaires.
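To illustrate that guess, here is a minimal simulation (again in Python, with entirely made-up numbers) of the classic “regression dilution” effect: a health outcome driven by true screen time is regressed on a noisy questionnaire report, and the association shrinks but doesn’t disappear.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical sample

true_screen = rng.gamma(4.0, 1.0, size=n)              # true hours/day (made up)
reported = true_screen + rng.normal(0.0, 1.5, size=n)  # noisy questionnaire report

# Hypothetical health outcome driven by the TRUE exposure
outcome = 2.0 * true_screen + rng.normal(0.0, 2.0, size=n)

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

print(f"slope on true exposure: {ols_slope(true_screen, outcome):.2f}")  # ~2.0
print(f"slope on noisy report:  {ols_slope(reported, outcome):.2f}")     # attenuated, but far from 0
```

The slope estimated from the noisy report is biased toward zero, but it remains clearly positive. If something like this is going on, imperfect questionnaires could still pick up real relationships, even while understating them.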
Featured image by Alberto G.