“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion of results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Thursday, October 1, 2009

Cellular phone and Internet-based individual intervention on blood pressure and obesity in obese patients with hypertension

Park MJ, Kim HS, Kim KS. Cellular phone and Internet-based individual intervention on blood pressure and obesity in obese patients with hypertension. International Journal of Medical Informatics. 2009 Oct;78(10):704-10.
(Sign-On to Blackboard Required)


Purpose: The present study evaluated whether an intervention using a short message service (SMS) by cellular phone and the Internet would improve blood pressure, weight control, and serum lipids of obese patients with hypertension over 8 weeks.

Methods: This was a quasi-experimental design with pre- and follow-up tests. Participants were recruited from the family medicine outpatient department of a tertiary care hospital in an urban city of South Korea. Twenty-eight patients were assigned to the intervention group and 21 to the control group. The goal of the intervention was to bring blood pressure, body weight, and serum lipid levels close to normal ranges. Patients in the intervention group were asked to record their blood pressure and body weight in a weekly web-based diary via the Internet or their cellular phones. The researchers sent optimal recommendations to each patient weekly, by both cellular phone and Internet. The intervention was applied for 8 weeks.

Results: Systolic (SBP) and diastolic (DBP) blood pressures decreased significantly from baseline by 9.1 and 7.2 mm Hg, respectively, at 8 weeks in the intervention group (p < 0.05). In the control group, neither SBP nor DBP had changed significantly after 8 weeks. There were significant mean decreases in body weight and waist circumference of 1.6 kg (p < 0.05) and 2.8 cm (p < 0.05), respectively, in the intervention group. In the control group, the mean increases in body weight and waist circumference were also significant (p < 0.05). High-density lipoprotein cholesterol (HDL-C) increased significantly in the intervention group, with a mean change of 3.7 mg/dl at 8 weeks from baseline (p < 0.05); the mean change of HDL-C in the control group was not significant.

Conclusion: Over 8 weeks, this web-based intervention delivered by cellular phone SMS and the Internet improved blood pressure, body weight, waist circumference, and HDL-C in obese patients with hypertension.


  1. The generalizability of the study to the general population is questionable. The sample may not be representative of the general population, since it was drawn from a university-affiliated hospital in an urban city in South Korea.

  2. I was interested in the duration of the study: 8 weeks is a limited time period in which to examine lifestyle changes, e.g. diet and physical activity. Is this time frame adequate for the sustained changes considered ideal given the clinical condition studied?

  3. I have a number of questions about the study population as it pertains to generalizability. It is unclear to me how, and which, 35 patients they asked to participate in the study (intervention arm) out of the total number of eligible patients.

  4. Interesting intervention, but I am not sure why the researchers did not randomize the sample. While the control group was selected by matching characteristics, could selection bias have influenced the results?

  5. I concur with the previous comments on generalizability of the results.

    Although the paper does not mention the "law of attrition", I think it would be an interesting consideration for decision-makers before implementing the intervention on a large scale. Results may have been biased given the 8-week duration (which, I think, was short), whereby most participants adhered to the regimens; but what about the long term? I wonder how the law of attrition would unfold if the study spanned 8 months instead of 8 weeks.

  6. I agree with the aforementioned comments. I also found it interesting that they considered people to be obese with a BMI > 23; the usual convention is a BMI > 25. In line with what others have written, the intervention was short-term in nature: 8 weeks seemed a bit short for generalizability. Who is to say that the patients would act the same if this were not part of a study? I wonder if the study itself (knowledge of being part thereof) affected their (both physicians' and patients') behaviour. Interestingly, though, the authors do address some of the major limitations of their study. This study can definitely add to the body of knowledge on technology-based interventions.

  7. I believe that the 35 participants they requested for the study were the only ones that matched their criteria. The reason is that the methods state this was a "quasi-experimental design", which means a design that looks a bit like an experimental design but lacks random assignment (see here: http://www.socialresearchmethods.net/kb/quasiexp.php).

    The main question I had in mind was:

    The design of the study required the participants to meet criteria such as being able to enter data into a website. How did they determine which patients knew how to enter data using a computer and which didn't?

    The reason I'm asking is that I would have liked to see what impact the study would have had if the participants had only a general idea about entering data into a website and were gradually trained to do it frequently.

  8. This comment has been removed by the author.

  9. In the conclusion section the researchers state that "...study results show the evidence that online services are as effective as face-to-face guidance and treatment in hypertension." The comparison between a face-to-face intervention and an online service was not part of the original objective of the study. I find this a rather sweeping conclusion, especially given the brevity of the study and the limitations of the participants in reporting regularly.

  10. It seems that there was only one researcher making the measurements, some of which would have required somewhat of a judgment call, e.g. waist circumference. Perhaps the introduction of more raters with high inter-rater reliability would have increased the internal validity of the results.

  11. This is a very good example for a paper that has gone through peer-review but still leaves many questions open - a lot of critical information is not reported in the paper (I am curious if Plumaletta can find the missing pieces elsewhere). Some of you have raised questions regarding external validity, but in my mind there are far more important issues affecting the internal validity. More on that in class ... :-)

  12. By the way: Anybody who can explain to me why they used an ANOVA test rather than just comparing the mean blood pressure/weight/waist change between the groups (using a t-test) wins a special prize...

  13. One reason why ANOVA may have been used instead of t-test is because with ANOVA, it is possible to detect interaction effects between variables so that more complex hypotheses about reality can be tested. This is not possible with simple t tests.

    How do weight, blood pressure and waist size affect each other and the study results? What if the gender factor was added to the study? Simple t tests cannot answer these questions.

  14. Self-reporting by the intervention group may not yield the most accurate data. Participants may misreport their numbers to feel better about themselves, or simply make up values if they find it too much of a hassle to keep measuring themselves.

    Regarding ANOVA vs t-test, I think that since there is more than one variable being investigated, multiple t-tests would need to be run. As the number of t-tests increases, the chance of Type I errors increases as well.
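
    The inflation described above can be made concrete with a back-of-the-envelope sketch. The calculation below assumes the tests are independent, which is a simplification (BP, weight, and waist circumference are correlated), so it gives an upper-bound intuition rather than the exact error rate for this study:

```python
# Family-wise error rate (FWER) for k independent tests at significance
# level alpha: the probability of at least one false positive.
alpha = 0.05
fwer = {k: 1 - (1 - alpha) ** k for k in (1, 3, 5)}

# Bonferroni correction: test each outcome at alpha / k instead,
# which keeps the FWER at or below roughly alpha.
bonferroni = {k: 1 - (1 - alpha / k) ** k for k in (1, 3, 5)}
```

    With five outcomes each tested at 0.05, the chance of at least one spurious "significant" result is already above 20%, which may be why the authors mention a Bonferroni correction.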

  15. I have a few questions for the authors and concerns about the paper (which has been poorly peer-reviewed/edited)

    1. Most importantly, I would be interested in any usage data.
    The authors alluded to the fact that not all patients actually used the intervention. Data on how the system was used are missing. How many BP / weight / drug entries did they get over the 8 wk period? How many pts logged in?

    2. I am not clear about what the intervention looked like. They write that pts entered only BP, weight, and drug information. How can the system then give advice about fast food intake or exercise duration?

    3. Authors write that some pts may not have had access to a computer or been able to use a cell phone; however, one inclusion criterion was that they "should be able to input data into the website and have their own cellular phone". How does this go together?

    4. Authors write that they excluded patients who changed medication during the period of the intervention. What's the rationale for this? How many were excluded due to this in the intervention / control group?

    5. Who did the selection of the control group and how exactly was this done? What were the variables used for matching? What is the potential for a selection bias?
    This seems to be the single most important threat to validity as surely it is possible to pull pt records where there is no change in the clinical outcome variables, unless this was a blinded process.

    6. Authors write something about a paired t-test with Bonferroni correction, but there seem to be no results reported. What exactly was this used for?

    7. Why did they use ANOVA rather than t-test when comparing the groups? There seem to be no repeated measurements, but only two measurements at baseline and 8 weeks. My feeling is that a t-test comparing the change scores between the groups would have been the correct test.

  16. This comment has been removed by the author.

  17. I think the repeated-measures ANOVA was used to compare the two groups because the outcomes were measured more than once: once at baseline and again after 8 weeks. However, a t-test could also be used to compare the change scores between the two groups.
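
    The change-score alternative Gunther suggests is straightforward to set up. A minimal sketch with made-up numbers (not the paper's data), using Welch's two-sample t statistic on each patient's 8-week-minus-baseline SBP change:

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical 8-week-minus-baseline SBP changes (mm Hg); these values
# are invented purely to illustrate the analysis, not taken from the paper.
intervention = [-12, -8, -10, -7, -9, -11, -6, -10]
control = [-1, 2, 0, -2, 1, 0, 3, -1]

def welch_t(a, b):
    """Welch's two-sample t statistic comparing the mean change scores."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(variance(a) / na + variance(b) / nb)

t_stat = welch_t(intervention, control)
# In practice one would obtain a p-value from the t distribution
# (e.g., scipy.stats.ttest_ind with equal_var=False).
```

    A single test on the change scores answers the between-group question directly; for a two-group, two-time-point design, a repeated-measures ANOVA carries essentially the same information through its group-by-time interaction, just in a less transparent form.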

  18. Gunther: From your valuable questions/concerns, some of the answers were found in the paper as follows.

    Q: Who did the selection of the control group and how exactly was this done? What were the variables used for matching?

    From paper: The control group was selected by matching age, sex, SBP, DBP, and body weight to the intervention group at the same department. There were 25 subjects, and they were informed about the study.

    Q: Authors write that they excluded patients who changed medication during the period of the intervention. What's the rationale for this? How many were excluded due to this in the intervention / control group?

    From paper: Two subjects in the intervention group did not record their blood pressure levels on the website for more than 4 weeks. One patient took a trip and the other patient was hospitalized. As for the control group, four subjects refused the test at 8 weeks.

    Q: 2. I am not clear about what the intervention looked like. They write that pts entered only BP, weight, and drug information. How can the system then give advice about fast food intake or exercise duration?

    Partial answer from paper: We did not formally assess changes in diet and physical activity. The authors referred to other literature in support of the result.

  19. Plumaletta: Thanks for this, but just to be clear: none of these quotes answer my questions. 1) Did they select patients for the control group who had _exactly_ the same age, sex, SBP, DBP, and body weight? I would think it is next to impossible to find patients who are absolutely identical in all these variables. There is inadequate information given to say what exactly they mean by "matching". Also, who did the matching, at what point in time (retrospectively, i.e. after the 8 weeks, or prospectively), and, if retrospectively, whether or not they were blinded as to the 8-week outcomes is a critical question for assessing the potential for selection bias.

    2) None of the information provided says how many were excluded due to a change in medication.

    3) In section 2.4 they describe how the intervention consisted of SMS's commenting on "fast food intake" or "exercise duration". I am just a bit confused how they collected this information if they didn't specifically ask for it (pg 706).

  20. Thanks Plumaletta for your presentation; I really appreciated and enjoyed how you linked the relevance of this paper to the work you are doing. I think the largest issue I had with this paper is that I still do not have a clear idea of what the elements of this intervention are and how it was used by both the patients and the medical staff.