“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Sunday, October 25, 2009

CATCH-IT DRAFT: Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial.

Holbrook A, Thabane L, Keshavjee K, Dolovich L, Bernstein B, Chan D, Troyan S, Foster G, Gerstein H. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ 2009 Jul 7;181(1-2):37-44.


Links: Abstract - Full Text - Slideshow - Final Paper

Introduction

Diabetes mellitus is a complex condition, and its inherent nature makes it an ideal target for care using eHealth innovations. However, research on electronic applications that address chronic disease is limited; the usefulness of such applications has not been thoroughly investigated. From 2000 to 2003, Health Canada funded a project titled ‘Computerization of Medical Practices for the Enhancement of Therapeutic Effectiveness’ (COMPETE). The second phase of the project (COMPETE II) evaluated the use of a “Web-based, continuously updated, patient-specific diabetes tracker available to patient and physician plus an automated telephone reminder service for patients, on access, quality, satisfaction and continuity of care” (1). The results of the project have been published by Holbrook et al. (2) and are discussed herein.

Objectives of the study

The authors state that the rationale for the study is that “there have been few randomized trials to confirm that computerized decision support systems can reliably improve patient outcomes” (2); the study is thus intended to add to this literature. They hypothesized that patients in the intervention group – those with access to the decision support tool and receiving telephone reminders – would have improved quality of diabetes care. Interestingly, the deliberate goal of the COMPETE II project, within which the study was conducted, was to “have patients regularly visit their family physician” (1). These goals, although similar in nature, are explicitly different. This difference in goals raises the question of whether the study was designed prospectively or retrospectively.

Methodological issues

In selecting patients for the intervention group, randomization was performed centrally by computer with allocation concealment, and was stratified by provider in blocks of 6. Although the authors attempted to reduce bias in the sample, randomization at the patient level could have caused contamination due to possible interactions amongst patients of the same physician.
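
To make this allocation scheme concrete, here is a minimal sketch of stratified block randomization as it is commonly implemented. It is illustrative only: the trial's actual allocation code is not published, and the provider identifiers, patient labels, and block-filling rule below are assumptions.

```python
import random

def stratified_block_randomize(patients_by_provider, block_size=6):
    """Assign each patient to 'intervention' or 'control' within their
    provider's stratum, using shuffled (permuted) blocks of block_size."""
    assignment = {}
    for provider, patients in patients_by_provider.items():
        for start in range(0, len(patients), block_size):
            block = patients[start:start + block_size]
            # Fill the block with a near-equal mix of both arms, then
            # shuffle so the sequence within the block is unpredictable.
            labels = (["intervention", "control"] * block_size)[:len(block)]
            random.shuffle(labels)
            assignment.update(zip(block, labels))
    return assignment

# Hypothetical example: two providers contributing different numbers of patients.
groups = stratified_block_randomize({
    "provider_A": ["pt01", "pt02", "pt03", "pt04", "pt05", "pt06"],
    "provider_B": ["pt07", "pt08", "pt09"],
})
print(groups)
```

Blocking within each provider keeps the two arms balanced per physician, but because each physician still sees both intervention and control patients, the spillover concern discussed above remains.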

One major flaw in the study design was that patients in the intervention group were initially required to set up an appointment with their family physician and to have the relevant lab tests done; patients in the control group were not. This is of particular interest because the outcome measure for the intervention (the process composite score) was based on the frequency of patient visits and lab tests. Therefore, one cannot determine whether the change in the outcome measure was due to the intervention itself or to the mandated initial physician visit and lab tests.

Another issue that requires discussion is the timeframe of the study. Patients were tracked for only 6 months, even though the study was conducted over the span of a year. Tracking over a longer period would have made the process score more meaningful, as many of the process targets were semiannual; these targets would already have been met simply through the initial physician visit required of intervention participants.

Discussion

The authors interpret the results of their study as having “demonstrated that the care of complex chronic disease can be improved with electronic tracking and decision support shared by providers and patients” (2). This statement seems to go beyond the evidence of the study: the inherent flaws of the intervention and the outcome measures, discussed below, bring the authors’ conclusion into question.

The intervention

The intervention is described as a web-based diabetes tracker that includes decision support. The electronic tracker was built to integrate with the care providers’ electronic medical records (EMRs) as well as with an automated telephone reminder system, and patients also received mailed tracking reports. The multi-component nature of the intervention makes it difficult to determine which component is the causative agent for the change observed. The authors comment on neither the utilization of the tracker nor the nature of the telephone reminders. In addition, 51.4% of patients in the intervention group ‘never’ used the internet. How, then, can the authors imply that the causative agent of change is the electronic decision support if more than half of the sample does not use the tool? Utilization values for the decision support tool would be needed to substantiate this claim with quantitative evidence.

Outcome measures

One strength of the study is how the authors determined the variables to measure: these were based on guidelines from the Canadian and American Diabetes Associations, literature reviews, and expert opinion. From these they derived two composite outcome measures, one based on the frequency of patient–clinician interactions (the process score) and one based on improvement in clinical outcomes relative to best practices (the clinical score). However, the authors do not justify their choice of variable weights. Take, for example, the process score: weight and physical activity both have quarterly process targets, yet they are weighted differently, with maximum scores of 2 and 1 respectively. Furthermore, individuals in the intervention group started the study with a physician and lab visit, so their process scores would be high irrespective of the intervention. Why was the control group not also sent in for physician and lab visits?
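
To see why the unequal weights matter, consider the toy calculation below. The maximum scores for weight (2) and physical activity (1) are taken from the paper as described above; the linear scoring rule and the patient values are assumptions made purely for illustration.

```python
# Maximum scores per component: weight (2) and physical activity (1) are
# from the paper's description; the linear scoring rule is assumed.
MAX_SCORES = {"weight": 2, "physical_activity": 1}

def process_score(fraction_of_targets_met):
    """fraction_of_targets_met maps component -> fraction of its process
    targets met over the study period, in [0, 1]."""
    return sum(MAX_SCORES[c] * f for c, f in fraction_of_targets_met.items())

# Two hypothetical patients, each fully meeting exactly one component:
print(process_score({"weight": 1.0, "physical_activity": 0.0}))  # -> 2.0
print(process_score({"weight": 0.0, "physical_activity": 1.0}))  # -> 1.0
```

Two patients with identical adherence effort end up with different composite scores simply because of the weighting, which is why the unexplained choice of weights undermines interpretation of the process score.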

The presentation of the clinical target scores is also questionable. Only two variables showed statistically significant improvement, while two others – exercise and smoking – actually worsened: both dropped from a score of 1.00 before the intervention to 0.69 after it. These results appear counterintuitive because an intervention built around tracking and reminders would be expected to leave such scores unchanged at worst, not reduce them. Interestingly, the authors attribute the positive changes to the intervention tool but do not discuss the reasons for these decreases.

Conclusion

The authors have not sufficiently substantiated their interpretation. The multi-component nature of the intervention makes it difficult to link the intervention to the observed outcomes, and the study does not indicate which component is the causative agent. Although this study adds to the growing body of literature on eHealth tools and their effect on chronic disease management, the rigor applied in the methodology is not strong enough to substantiate the authors’ claims. Therefore, it is difficult to support the validity and relevance of the study’s conclusion.

Acknowledgement

Thank you to the 2009 CATCH-IT Journal Club members from the University of Toronto’s HAD 5726 course, along with Professor Eysenbach, whose feedback and critical analysis have greatly contributed to this report.

References

(1) COMPETE II. Available at: http://www.hc-sc.gc.ca/hcs-sss/pubs/chipp-ppics/2003-compete/final-eng.php. Accessed Oct 15, 2009.

(2) Holbrook A, Thabane L, Keshavjee K, Dolovich L, Bernstein B, Chan D, et al. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ 2009 Jul 7;181(1-2):37-44.

6 comments:

  1. Thanks for this. The final report needs to improve significantly - you need to sharpen your arguments and add some additional points raised during discussion.

    Just a few points:

    1. My points 3, 5, 7 from the initial discussion did not make it into the CATCH-IT report. Why not? Do you disagree with them?
    2. "randomization at the patient level could have caused contamination due to possible interactions amongst patient of the same physician." - I do not see this as a very plausible scenario - could you explain this further what exactly you mean by this? You are basically arguing that patients in the waiting room discussing the application would lead to process and outcome improvements? Much more plausible is the concern that the intervention "spills over" into the control group by physicians "learning" from intervention patients. This leads to a process improvement in the control group, making it more difficult to show a true difference. This is called a "bias towards null". The remedy is a cluster randomized trial, as discussed in class.
    3. Your argument for providing utilization data is framed in a confusing way, as you argue that the low Internet use in the patient group is the issue (it is not, because the intervention was also delivered on paper, and also targeted physicians). Still, it would be essential to report usage data, because system developers need to know which parts of the intervention worked/were used.
    4. "counterintuitive" needs to be explained - why exactly are these figures "counterintuitive"? (I am not saying they aren't, but you need to sharpen your arguments)

  2. Thanks Gunther, I will address these points in the final paper. Just a quick question though. Your comments alone (from my draft page) were about 700 words. The word count for the paper is 1000. I think it will be difficult to write a final paper that is 1000 words yet addresses all of the points made in class. Is this word count set in stone or flexible?

  3. It is flexible - you can go beyond it. Same question was asked by Plumaletta - I should probably announce it in class...

  4. Good review James.

    Not sure if you are posing any questions for the authors, but just in case you decide to, here is one: “How did they account for the usability of the web-based tracker, since a greater proportion of patients were not regular internet users?” I think it is important to bring this up because, although patient adherence and access to high-quality diabetes care improved, many primary care providers faced technical difficulties with the electronic decision support tool, which had an impact on the perceived usefulness of the intervention.

  5. Hi James,

    The issue of multiple comparisons may also be worth mentioning. In Table 4 (clinical outcomes), several clinical outcomes were compared simultaneously between the intervention and control groups without accounting for multiple comparisons. As the number of comparisons increases, it becomes more likely that the groups will appear to differ on at least one attribute and show one or two statistically significant p values by chance alone. When several outcomes are compared in the same population, the overall type I error exceeds 0.05, so a proper adjustment is needed. The Bonferroni correction addresses this by requiring a smaller p value for each individual test: each comparison is tested at 0.05 / n, where n is the number of comparisons, which keeps the overall type I error at the 0.05 level.
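
    As a quick illustration of the adjustment, here is a minimal sketch with hypothetical p values (none of these numbers are from the paper):

    ```python
    # Hypothetical p values for several clinical outcomes (not from the paper).
    p_values = [0.001, 0.020, 0.040, 0.300, 0.700]
    alpha = 0.05

    # Bonferroni: test each outcome at alpha / n instead of alpha.
    threshold = alpha / len(p_values)  # 0.05 / 5 = 0.01
    for p in p_values:
        verdict = "significant" if p < threshold else "not significant"
        print(f"p = {p:.3f}: {verdict} at the adjusted threshold of {threshold}")
    # Note that p = 0.020 and p = 0.040 would pass an unadjusted 0.05 cutoff
    # but fail after the correction.
    ```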

  6. Thanks for the review, James. I agree with your points in the discussion. While the multi-component nature of the intervention makes it difficult to assess, I wonder how the research design could have been improved. Given the costs of electronic tools, I am not sure whether the researchers could have compared the electronic tool against a control group using a paper-based tool.
