“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published ehealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club” - like sessions founded in February 2003 by Gunther Eysenbach.

Saturday, November 7, 2009

Nov 16 - Effectiveness of Active-Online, an Individually Tailored Physical Activity Intervention, in a Real-Life Setting: Randomized Controlled Trial

Wanner M., Martin-Diener E., Braun-Fahrländer C., Bauer G., Martin B.W. (2009). Effectiveness of Active-Online, an Individually Tailored Physical Activity Intervention, in a Real-Life Setting: Randomized Controlled Trial. J Med Internet Res, 11 (3): e23.

Full Text - Abstract Only - CATCH-IT Draft

Background: Effective interventions are needed to reduce the chronic disease epidemic. The Internet has the potential to provide large populations with individual advice at relatively low cost.

Objective: The focus of the study was the Web-based tailored physical activity intervention Active-online. The main research questions were (1) How effective is Active-online, compared to a nontailored website, in increasing self-reported and objectively measured physical activity levels in the general population when delivered in a real-life setting? (2) Do respondents recruited for the randomized study differ from spontaneous users of Active-online, and how does effectiveness differ between these groups? (3) What is the impact of frequency and duration of use of Active-online on changes in physical activity behavior?

Methods: Volunteers recruited via different media channels completed a Web-based baseline survey and were randomized to Active-online (intervention group) or a nontailored website (control group). In addition, spontaneous users were recruited directly from the Active-online website. In a subgroup of participants, physical activity was measured objectively using accelerometers. Follow-up assessments took place 6 weeks (FU1), 6 months (FU2), and 13 months (FU3) after baseline.

Results: A total of 1531 respondents completed the baseline questionnaire (intervention group n = 681, control group n = 688, spontaneous users n = 162); 133 individuals had valid accelerometer data at baseline. Mean age of the total sample was 43.7 years, and 1146 (74.9%) were women. Mixed linear models (adjusted for sex, age, BMI category, and stage of change) showed a significant increase in self-reported mean minutes spent in moderate- and vigorous-intensity activity from baseline to FU1 (coefficient = 0.14, P = .001) and to FU3 (coefficient = 0.19, P < .001) in all participants with no significant differences between groups. A significant increase in the proportion of individuals meeting the HEPA recommendations (self-reported) was observed in all participants between baseline and FU3 (OR = 1.47, P = .03), with a higher increase in spontaneous users compared to the randomized groups (interaction between FU3 and spontaneous users, OR = 2.95, P = .02). There were no increases in physical activity over time in any group for objectively measured physical activity. A significant relation was found between time spent on the tailored intervention and changes in self-reported physical activity between baseline and FU3 (coefficient = 1.13, P = .03, intervention group and spontaneous users combined). However, this association was no longer significant when adjusting for stage of change.

Conclusions: In a real-life setting, Active-online was not more effective than a nontailored website in increasing physical activity levels in volunteers from the general population. Further research may investigate ways of integrating Web-based physical activity interventions in a wider context, for example, primary care or workplace health promotion.
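For readers less familiar with the analysis, the mixed linear models described in the Results can be sketched roughly as follows. This is a minimal, illustrative sketch on simulated data, not the authors' actual model or data; all variable names, sample sizes, and effect values below are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a long-format dataset: each participant measured at baseline and FU1.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), 2),
    "followup": np.tile([0, 1], n),                # 0 = baseline, 1 = FU1
    "group": np.repeat(rng.integers(0, 2, n), 2),  # 0 = control, 1 = intervention
    "sex": np.repeat(rng.integers(0, 2, n), 2),
    "age": np.repeat(rng.normal(44, 12, n), 2),
})
# Outcome: log activity minutes with a per-participant random intercept,
# a small overall time effect, and (by construction) no group effect.
subject = rng.normal(0, 0.4, n)
df["log_activity"] = (5.0 + 0.14 * df["followup"]
                      + np.repeat(subject, 2)
                      + rng.normal(0, 0.5, 2 * n))

# Random-intercept mixed linear model, adjusted for covariates, with a
# group-by-time interaction (the key test of intervention effectiveness).
model = smf.mixedlm("log_activity ~ followup * group + sex + age",
                    df, groups=df["id"]).fit()
print(model.summary())
```

In a design like this, the coefficient on the `followup:group` interaction captures the between-group difference in change over time; the paper reports no significant group differences, consistent with a null interaction.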


  1. In the Discussion, the authors state that the Spontaneous Users could not be directly compared to the randomized groups. My question would be, what was the purpose of including the Spontaneous Users in the paper?

    I liked many aspects of the study, such as the multiple follow-up times and the length of follow-up, but I found that tracking the Spontaneous Users took away from the focus and clarity of the paper.

  2. It is somewhat disappointing that the study sample already had high levels of physical activity. That makes it seem like a foregone conclusion that there would be no significant change in activity levels. Had the population had low activity levels, or even a mix, there might have been a different result.

  3. Interesting paper, but I wondered whether the authors considered that their study design could not ensure the control group was not contaminated by the intervention. This could occur in three ways: first, controls could have had access to the website; second, control and intervention participants could have shared a household or been friends and influenced each other; third, the information on the site may already have been available to everyone (online, through advertising, or via public health promotion programs).

  4. I would disagree with Laure. I actually found that the inclusion of the SU group made the results from the study more credible.

    I like that the final conclusion was that the intervention had no impact; most of the studies we read claim an impact (this could be publication bias).

    My question is more technical in nature (this study seemed to take a thorough methodological approach). In Figure 5, why was it that, in all groups, the minutes/week for those meeting HEPA recommendations at baseline decreased? They address it in the paper:

    “When including only those participants who did not meet the HEPA recommendations at baseline, total reported activity time increased significantly in all groups. The increases observed in these insufficiently active individuals exceeded the increase observed in all participants; thus, a decrease in total reported activity time was found in those individuals meeting the HEPA recommendations at baseline. The decrease was significant in the CG.”

    Though I am still confused.

    A more general question (perhaps Gunther can address in class) is what is the best method to recruit individuals for an online study? There does not seem to be a general consensus or gold standard.
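On the HEPA question above, the quoted passage is essentially a weighted-average argument. A tiny numeric illustration (entirely made-up numbers, not figures from the paper) shows why a larger-than-overall increase in one subgroup forces a decrease in the other:

```python
# If the overall change is the weighted average of subgroup changes, and the
# insufficiently active subgroup increased by MORE than the overall change,
# the subgroup meeting HEPA recommendations at baseline must have decreased.
n_below, n_meeting = 400, 600   # hypothetical subgroup sizes
change_overall = 20             # hypothetical overall change (min/week)
change_below = 65               # larger increase among the insufficiently active

total_change = change_overall * (n_below + n_meeting)
change_meeting = (total_change - change_below * n_below) / n_meeting
print(change_meeting)  # -10.0, i.e. a decrease in the baseline-HEPA group
```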

  5. Just a couple of quick questions:
    1. What were the “technical problems” for which the 38 participants were excluded from the study?
    2. How did they validate the uniqueness of participants? Email addresses were presumably treated as unique, but what if someone maintains multiple email addresses (as most of us do)?
    3. Finally, as Arun mentioned, how did they ensure that members of the control group were not already users of Active-online?
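The uniqueness question above is a real limitation of web trials: normalizing email addresses catches trivial duplicates (case, stray whitespace) but cannot detect one person enrolling under several accounts. A minimal sketch of such a key-based check, using hypothetical addresses rather than anything from the study:

```python
# Normalize an email address before using it as a uniqueness key.
# Illustrative only; not the authors' actual deduplication procedure.
def normalize_email(addr: str) -> str:
    local, _, domain = addr.strip().lower().partition("@")
    return f"{local}@{domain}"

signups = ["Jane.Doe@example.com ", "jane.doe@EXAMPLE.com", "j.doe@other.org"]
unique = {normalize_email(a) for a in signups}
print(len(unique))  # 2: the first two collapse, but a second account would not
```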

  6. As previously mentioned, keeping the CG from accessing Active-online was an issue, as "62 of 453 [CG] participants responding to FU3 stated that they had heard about Active-online and had used it at least once during the preceding year." How would one design a web-based study without this contamination bias?

    What accounted for the significant decrease in activity for the CG?

  7. I am unsure how the data on the spontaneous group makes the results more credible, as James suggests. I am not sure why they even collected the data.
    Given that study participants would likely have been a motivated group, is it possible that the control group could have accessed the intervention as spontaneous users? Could this potential contamination influence the result?
    I did find it interesting that the paper reported negative results.

  8. What was the reason for not measuring usage of the nontailored website in the control group? It would have been useful to know how many participants accessed the website, as an indication of information seeking. Did this have an impact on the results?

    The authors state in the discussion that a tailored intervention previously shown to be effective on CD-ROM in a controlled setting, after 6 months and after 2 years, was not effective when delivered online in a real-life setting. They also seem to be aware of the possibility of contamination bias. Having reviewed the actual website, which offers text-based advice, I am unclear why the authors thought this delivery would be any different.

    Were the two bicycles being raffled the impetus for interest? Another interesting study using incentives for participation.

  9. To me, the strength of the article was its randomized longitudinal design with a relatively large sample size. However, like other longitudinal studies, this study suffered from dropouts, which affected the results. The authors could have followed up with participants to reduce the dropout rate.

    The weaknesses of the study were also well explained in the article.

  10. Interesting paper. I am a bit concerned about the trial registration taking place after the study had begun. I wonder whether writing the study protocol for the purpose of receiving funding altered the actual study design in any way.

    Reading this paper also gave me the impression that the participants were given a great deal of autonomy. Do we consider this positive or negative in terms of critical appraisal?