New Publication: Big Differences in Self Report vs Directly Measured Sedentary Behaviour

Last fall a very interesting paper was published by Dr Russell Jago and colleagues in the International Journal of Behavioral Nutrition and Physical Activity (IJBNPA), looking at sedentary behaviour and physical activity “typologies” in British children.  Essentially, the paper set out to see whether there were groups of children who exhibited similar patterns of physical activity and sedentary behaviour.  The idea is that if we can identify patterns in how kids accumulate these behaviours, we can create more targeted interventions to increase physical activity and/or reduce sedentary behaviour.  Below is the abstract from that paper (IJBNPA is an open access journal, so I’d urge everyone to check out the original paper here):

Background: Targeted interventions may be more effective at increasing children’s physical activity. The aim of this study was to identify clusters of children based on physical activity and sedentary patterns across the week.

Methods: Participants were 761 10-11 year old children. Participants self-reported time spent in eight physical activity and sedentary contexts and wore an accelerometer. Cluster analysis was conducted on the time spent in the self-reported physical activity and sedentary contexts. Mean minutes of accelerometer-derived moderate to vigorous physical activity (MVPA) and sedentary time were derived for the entire week, weekdays only, weekend days, and four different time periods across each type of day (weekend or weekday). Differences in the physical activity patterns of the groups derived from the cluster analysis were assessed for overall physical activity as well as for the four time periods on weekdays and weekend days.

Results: Three clusters emerged: 1) High Active/Low Sedentary; 2) Low Active/Moderate Sedentary; and 3) High Active/High Sedentary. Patterns of activity differed across the week for each group, and the High Active/High Sedentary group obtained the most minutes of MVPA.

Conclusions: Patterns of physical activity and sedentary time differed across the week for each cluster. Interventions could be targeted to the key periods when each group is inactive.

So the authors were able to identify distinct clusters of behaviour in these kids – very cool!
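For readers curious about the mechanics, here is a minimal sketch of the kind of cluster analysis described in the Methods, written in Python with scikit-learn. To be clear, this is purely illustrative: the file name, column names and the choice of k-means are my own assumptions, not necessarily the procedure the authors actually used.

```python
# Illustrative sketch only: a k-means cluster analysis on self-reported time
# in different activity/sedentary contexts, in the spirit of the paper's
# methods. File and column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("self_report.csv")  # one row per child (hypothetical file)
contexts = ["sports_clubs_h", "play_near_home_h", "play_in_garden_h", "screen_time_h"]

X = StandardScaler().fit_transform(df[contexts])           # put contexts on a common scale
kmeans = KMeans(n_clusters=3, n_init=25, random_state=0)   # three clusters, as in the paper
df["cluster"] = kmeans.fit_predict(X)

# Then compare the clusters on the accelerometer-derived outcomes
print(df.groupby("cluster")[["weekday_mvpa_min", "weekday_sedentary_min"]].mean())
```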

But when I began to look at the data from the paper itself, it was a bit more complicated than I had expected.  It actually got me so excited that I wrote a Letter to the Editor (my first!) with Stephanie Prince and Mark Tremblay, which was recently published in IJBNPA here.  The next four paragraphs are taken from that letter, although the emphasis and figures have been added for the blog.

The above-mentioned study assessed physical activity and sedentary behaviour using both self-report questionnaires and accelerometry.  However, when creating clusters of children with similar behaviour, the authors relied on only the self-reported data.  While this resulted in clusters of children with very distinct quantities of self-reported physical activity and sedentariness, the groups appear almost identical when compared using the objectively measured data.

For example, according to the self-report data, the “High Activity/Low Sedentary” group performed an average of 3.6 hours more weekday physical activity than children in the “Low Activity/Medium Sedentary” group. However, when the accelerometer-derived values of weekday moderate- to vigorous-intensity activity are compared instead, the difference between the two groups is reduced to roughly two minutes.  Thus, in this situation, the difference between the two groups using self-report measures was roughly 100 times greater than the difference assessed using accelerometry.
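If you want to check that arithmetic yourself, the back-of-envelope calculation looks like this (the 3.6 hour and 2 minute figures are the ones quoted above):

```python
self_report_diff_min = 3.6 * 60   # 3.6 hours of self-reported weekday activity, in minutes
accelerometer_diff_min = 2        # ~2 minutes of measured weekday MVPA difference

print(self_report_diff_min / accelerometer_diff_min)  # ~108, i.e. on the order of 100-fold
```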

Figure: Physical Activity Data

A similar problem is observed when comparing the groups for sedentary time.  For example, the self-report data suggest a dramatic difference in screen time (excluding school work) between the “High Activity/High Sedentary” group, which accumulated 13.86 hours per day, and the “High Activity/Low Sedentary” group, which reported just 5.77 hours per day. In contrast, the objectively measured data suggest that the “High Activity/High Sedentary” group accumulated 4.7 hours of weekday sedentary time outside of class time (roughly 9 hours less than suggested by their self-reported screen time), and only differed from the “High Activity/Low Sedentary” group by 5 minutes.  Similarly, the “High Activity/High Sedentary” group actually accumulated less objectively-measured sedentary time than the “Low Activity/Medium Sedentary” group on both weekdays and weekends.

Figure: Sedentary Behaviour Data

Finally, it is questionable whether it would even be possible for children to accumulate the daily volume of screen time (13.86 hours) and physical activity (5.89 hours) reported by children in the “High Activity/High Sedentary” cluster. If true, this would leave the children less than 5 hours per day for both schoolwork and sleep, suggesting that these values are not just unlikely but impossible.
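To make the time-budget problem concrete, here is the simple arithmetic (the values are the cluster means quoted above):

```python
hours_in_day = 24
screen_time_h = 13.86   # self-reported daily screen time, High Activity/High Sedentary cluster
activity_h = 5.89       # self-reported daily physical activity, same cluster

remaining_h = hours_in_day - screen_time_h - activity_h
print(remaining_h)      # ~4.25 hours left for school, sleep, meals and everything else
```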

As you can see from both of the above graphs, the self-report and objectively measured data don’t match up very well. To put it simply, the objectively measured data suggests that the “High Sedentary” kids aren’t any more sedentary than their peers – they just have no sense of time!

As is often the case, our Letter was published along with a response from the authors of the original paper.  I’ve included the full-text of that letter below, and the pdf version can be found here.

“Dear Editor,

We read with interest the recent letter by Saunders and colleagues [1] in relation to our paper entitled “Physical activity and sedentary behaviour typologies of 10-11 year olds” [2] and we welcome the opportunity to respond to the comments that have been raised in their letter.  Firstly, they question why we opted to use self-reported activity data to generate behavioural profiles when objective accelerometer data were available. Secondly, they asked us to explain why the differences in the accelerometer-determined physical activity levels were relatively small when the self-reported differences were far larger.

Regarding their first point, the self-reported physical activity and screen-viewing data were used to create profiles of the types of behaviour in which the participants engaged. The questions used focussed on frequency of attendance of sport clubs, playing with friends near the home, playing with friends in the garden as well as time spent screen-viewing. We made no assumptions about the actual intensity of the physical activity in which the participants engaged, and thus did not claim that if a participant reported attending an after-school club for 1-2 days per week that this led to an extra amount of physical activity. This decision was taken because we are aware that a large proportion of time that children spend participating in clubs or sports is not moderate to vigorous in intensity [3]. More importantly, however, the instrument was not intended to provide an indication of overall levels of physical activity. Rather the questions were designed to provide information on the types of behaviours in which the participants engaged. These data are needed as although accelerometers can provide detailed information on the intensity of physical activity and the time of day at which it occurred, they cannot provide information about what a person was doing when they were physically active. While accelerometer data can provide information about whether or not a child meets physical activity guidelines, the data cannot solely inform interventions as there is no information about the activities in which the child engages. Context of activity is required to guide and target strategies for promoting activity in children. We therefore used self-reported data to identify children who reported engaging in similar behaviours with the intention being that this information might then be used to design targeted interventions for children with similar physical activity and screen-viewing profiles.

Regarding their second point, Saunders and colleagues are correct to point out that the differences between the clusters in terms of self-reported participation in physical activity and screen-time were far larger than the differences when analysed via accelerometer. However, it is important to be clear that the outcomes were the accelerometer-derived variables and not the self-reported physical activity participation or screen-viewing time. We acknowledge that some of the screen-viewing estimates may not be plausible but they provide a good reflection of the perceived screen-viewing behaviours and patterning in relation to other children which, as concluded in the paper, can inform the design of interventions.

In summary, we believe the analyses that we performed and the interpretation of those analyses are correct. We agree with Saunders and colleagues that accelerometers provide more accurate representations of volumes and intensities of physical activity than self-reported activity participation but, in this paper, the activity participation data were used to provide context on what the children were doing when accruing accelerometer-derived physical activity. We therefore believe that our objective could not have been met by the singular analysis of accelerometer data as suggested by Saunders and colleagues, which we feel answers an interesting, but ultimately different question.

Sincerely,  Russ Jago, Ken Fox, Angie Page, Rowan Brockman and Janice Thompson – University of Bristol, UK”

So there you have it.  I must say that I still find that the original paper and abstract give the impression that the groups showed meaningful differences in terms of actual behaviour, not just in their perceived behaviour.  And I question whether we should target interventions based on differences in perceived behaviour, when these groups of children appear nearly identical in terms of actual behaviour.  Nonetheless, I appreciate the authors’ thoughtful response to our questions, and this whole experience has really highlighted for me the large differences between self-report and objectively measured sedentary behaviour.

I should point out that while I am a fan of directly-measured sedentary behaviour, I have no illusions about its superiority when it comes to predicting health outcomes.  Our colleague Valerie Carson recently published a paper in BMC Public Health which found that TV watching, but not objectively measured sedentary behaviour, was strongly associated with increased metabolic risk in kids.  It appears that self-report and directly measured sedentary behaviour are just separate and distinct constructs, but I’m not sure that anyone knows exactly what self-reported sedentary behaviour data is really getting at.

The papers are all Open Access and can be viewed at the links below.  I’ve been waiting a few months to discuss this here on the blog, and I’m curious to hear what other people think about self-report vs direct measures in general, and as they relate to sedentary behaviour and physical activity in particular.

Travis


Jago, R., Fox, K., Page, A., Brockman, R., & Thompson, J. (2010). Physical activity and sedentary behaviour typologies of 10-11 year olds. International Journal of Behavioral Nutrition and Physical Activity, 7(1). DOI: 10.1186/1479-5868-7-59

Saunders, T., Prince, S., & Tremblay, M. (2011). Clustering of children’s activity behaviour: the use of self-report versus direct measures. International Journal of Behavioral Nutrition and Physical Activity, 8(1). DOI: 10.1186/1479-5868-8-48

Jago, R., Fox, K., Page, A., Brockman, R., & Thompson, J. (2011). Physical activity and sedentary behaviour typologies of 10-11 year olds – Response to Saunders and Colleagues. International Journal of Behavioral Nutrition and Physical Activity, 8(1). DOI: 10.1186/1479-5868-8-49




5 Responses to New Publication: Big Differences in Self Report vs Directly Measured Sedentary Behaviour

  1. Ernesto says:

    Great job Travis! I was recently working through some analyses looking at self-reported sedentary behavior and calorie expenditure from doubly-labeled water. Nothing was coming out significant, or even making biological sense. Turns out that a large number of participants reported being sedentary and sleeping for more than 24 hours per day.

    So, a question: we know self-report is bad and accelerometers are okay. If you were trying to intervene on sedentary behavior, which would you use and why?

    • Travis Saunders, MSc, CEP says:

      I’ve heard that’s a fairly common occurrence. I know that some online surveys now keep a running tally of the total hours accounted for, so that someone can see that they’ve reported 27 hours of daily activities and adjust accordingly.
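      As a rough illustration of that kind of sanity check, something like the snippet below would flag respondents whose reported daily activities add up to more than 24 hours (the file and column names are made up purely for illustration):

      ```python
      import pandas as pd

      # Hypothetical survey export: one row per respondent, self-reported hours/day per category
      survey = pd.read_csv("survey_responses.csv")
      activity_cols = ["sleep_h", "screen_h", "school_h", "physical_activity_h", "other_h"]

      survey["total_h"] = survey[activity_cols].sum(axis=1)
      implausible = survey[survey["total_h"] > 24]   # more activity than there are hours in a day
      print(f"{len(implausible)} respondents reported more than 24 hours of daily activities")
      ```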

      Did you look at self-reported SB vs any health outcomes? Despite its poor agreement with objective measures of SB, self-reported TV time still seems to be a strong predictor of health risk, and perhaps even stronger than the objective measures.

      In my thesis project (which includes a lab-based intervention), we’re focusing only on directly measured sedentary behaviour. That’s partly because it’s my personal interest, but also because I’m trying to get at the underlying physiology rather than create a lasting change in a person’s behaviour.

      For a lifestyle intervention, I personally think I’d use something that makes people aware of their behaviour. For example, Leo Epstein has used devices that limit the amount a TV or computer can be used each day – kids get a certain amount of TV time, and after that the device stops working. Not surprisingly, that seems to have an impact on sedentary behaviour and body weight in their participants. I think that’s the ideal type of approach, since it both cues people to their sedentary behaviour and helps (or perhaps forces) them to change that behaviour at the same time.

      It’s the same reason why I think pedometers are a great tool for increasing daily physical activity, and why food diaries are so important for weight loss. I know there are a few free programs that will alert you whenever you’ve been using your computer for a given length of time, and I would expect that there are probably smart-phone apps as well. So if I were going to try to reduce sedentary behaviour in some sort of lifestyle intervention, that is the approach I would use. I’d have the participants wear accelerometers so that I could track their behaviour objectively, but I don’t know that it’s the best way to drive behaviour.
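      If anyone wants to roll their own, a bare-bones version of that kind of computer-use reminder is only a few lines (the 30-minute threshold is just an arbitrary example, not a recommendation):

      ```python
      import time

      SIT_LIMIT_MIN = 30  # arbitrary example threshold

      start = time.monotonic()
      while True:
          if (time.monotonic() - start) / 60 >= SIT_LIMIT_MIN:
              print("You've been at the computer for 30 minutes - time to stand up and move!")
              start = time.monotonic()   # restart the count after each prompt
          time.sleep(60)                 # check once a minute
      ```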

  2. Pingback: Post Publication Peer Review: Blogs vs Letters to the Editor

  3. Pingback: World’s First Systematic Review On Sedentary Behaviour & Health in School-Aged Children | Obesity Panacea

  4. Pingback: Congratulations to Dr Stephanie Prince-Ware | Obesity Panacea