Holbrook, A., Thabane, L., Keshavjee, K., Dolovich, L., Bernstein, B., Chan, D., Troyan, S., Foster, G., Gerstein, H. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ. 2009 Jul 7;181(1-2):37-44
Diabetes mellitus is a complex condition, and the inherent nature of the disease makes it an ideal target for eHealth innovations. However, research on electronic applications for chronic disease management is limited, and the usefulness of such applications has not been thoroughly investigated. From 2000 to 2003, Health Canada funded a project titled ‘Computerization of Medical Practices for the Enhancement of Therapeutic Effectiveness’ (COMPETE). The second phase of the project (COMPETE II) evaluated the use of a “Web-based, continuously updated, patient-specific diabetes tracker available to patient and physician plus an automated telephone reminder service for patients, on access, quality, satisfaction and continuity of care” (1). The results of the project have been published by Holbrook et al (2) and are discussed herein.
Objectives of the study
The authors state that the rationale for the study is that “there have been few randomized trials to confirm that computerized decision support systems can reliably improve patient outcomes” (2); the study therefore aims to add to this literature. They hypothesize that patients in the intervention group – those with access to the decision support tool and receiving telephone reminders – would have improved quality of diabetes care. Interestingly, the stated goal of the COMPETE II project, within which the study was conducted, was to “have patients regularly visit their family physician” (1). These goals, although related, are distinct, and the difference raises the question of whether the study was conducted proactively or retrospectively.
Patients were randomized with allocation concealment via centrally computer-generated group assignment, stratified by provider in blocks of 6. Although the authors attempted to reduce bias in the sample, randomization at the patient level could have caused contamination through interactions among patients of the same physician.
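To make the allocation scheme concrete, the sketch below implements permuted-block randomization stratified by provider with a block size of 6. The balanced 3/3 split within each block and all names are assumptions for illustration; the paper does not describe the exact algorithm used.

```python
import random

def block_randomize(patients_by_provider, block_size=6, seed=None):
    """Permuted-block randomization stratified by provider.

    Within each provider stratum, patients are assigned in blocks of
    `block_size`, each block holding equal numbers of intervention and
    control slots in random order. This is a common scheme; the trial's
    exact algorithm is not reported, so this is only illustrative.
    """
    rng = random.Random(seed)
    assignments = {}
    for provider, patients in patients_by_provider.items():
        arms = []
        for i, patient in enumerate(patients):
            if i % block_size == 0:
                # Start a fresh shuffled block: half intervention, half control.
                block = (["intervention"] * (block_size // 2)
                         + ["control"] * (block_size // 2))
                rng.shuffle(block)
                arms = block
            assignments[patient] = arms[i % block_size]
    return assignments

# Hypothetical example: two providers, six patients each.
groups = block_randomize(
    {"provider_A": [f"A{i}" for i in range(6)],
     "provider_B": [f"B{i}" for i in range(6)]},
    seed=42,
)
```

Stratifying by provider guarantees arm balance within each practice, but, as noted above, it does nothing to prevent contamination between patients of the same physician.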
One major flaw in the study design was that patients in the intervention group were initially required to set up an appointment with their family physician and to have relevant lab tests done, while patients in the control group were not. This is of particular interest because the outcome measure for the intervention (the process composite score) was based on the frequency of patient visits and lab tests. Therefore, one cannot determine whether the change in the outcome measure was due to the intervention or to the initial physician visit and lab tests.
Another issue that requires discussion is the timeframe of the study. Patients were tracked for only 6 months, even though the study was conducted over the span of a year. Tracking over a longer period would have made the process score more valuable, since many of the process targets were semiannual; those targets would already have been met simply by the initial physician visit required of intervention participants.
The authors interpret the results of their study as having “demonstrated that the care of complex chronic disease can be improved with electronic tracking and decision support shared by providers and patients” (2). This statement seems to go beyond the evidence of the study: the flaws in the intervention design and the outcome measures call the authors’ conclusion into question.
The intervention is described as a web-based diabetes tracker that includes decision support. The electronic tracker was built to integrate with the care providers’ electronic medical records (EMRs) as well as with an automated telephone reminder system, and patients also received mailed tracking reports. The multicomponent nature of the intervention makes it difficult to determine which component caused the change observed. The authors do not comment on utilization of the tracker or on the nature of the telephone reminders. In addition, 51.4% of patients in the intervention group ‘never’ used the internet. How, then, can the authors imply that the electronic decision support was the causative agent of change when more than half of the sample did not use the tool? Utilization data for the decision support tool would be needed to substantiate this claim with quantitative evidence.
One strength of the study is how the authors determined the variables to measure. These were based on guidelines from the Canadian and American Diabetes Associations, literature reviews, and expert opinion. From these they derived two outcome measures: one based on the frequency of patient-clinician interactions (process score) and one based on improvement in clinical outcomes against best practices (clinical score). However, the authors do not validate their choice of variable weights. Take, for example, the process score: weight and physical activity both have quarterly process targets, yet they are weighted differently, with maximum scores of 2 and 1 respectively. Furthermore, individuals in the intervention group started the study with a physician and lab visit, so their process score would be high irrespective of the intervention. Why wasn’t the control group also sent in for physician and lab visits?
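The weighting concern above can be illustrated with a minimal sketch of a weighted process composite score. Only the asymmetry between weight (maximum 2 points) and physical activity (maximum 1 point), both with quarterly targets, is taken from the trial; the item names, the required-visit counts, and the proportional scoring rule are hypothetical.

```python
# Illustrative sketch of a weighted process composite score.
# Only the weight (max 2) vs. physical-activity (max 1) asymmetry is
# from the trial; targets and the scoring rule are assumptions.

# (item, max_points, required_checks_in_the_6-month_window)
PROCESS_ITEMS = [
    ("weight_check", 2, 2),               # quarterly target over 6 months
    ("physical_activity_review", 1, 2),   # also quarterly, but worth less
]

def process_score(check_counts):
    """Score = sum over items of max_points * min(done/required, 1)."""
    total = 0.0
    for item, max_points, required in PROCESS_ITEMS:
        done = check_counts.get(item, 0)
        total += max_points * min(done / required, 1.0)
    return total

# A single baseline physician/lab visit already earns partial credit,
# which is exactly the contamination the critique points out:
baseline_only = process_score(
    {"weight_check": 1, "physical_activity_review": 1}
)
# baseline_only == 1.5 out of a maximum of 3.0
```

Under any scheme of this shape, the mandatory baseline visit for intervention patients inflates their process score before the tracker or reminders have had any effect, and the unequal weights mean identically-timed targets contribute unequally to the composite.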
The presentation of the clinical target scores is questionable. Only two variables showed statistically significant improvement, while two others, exercise and smoking, actually worsened: both scores fell from 1.00 before the intervention to 0.69 after. These results appear counterintuitive. Interestingly, the authors attribute the positive changes to the intervention tool but do not discuss the reasons for the decreased scores.
The authors have not sufficiently validated their interpretation. The multicomponent nature of the intervention makes it difficult to attribute the outcomes to the intervention, and the study does not indicate which component was the causative agent. Although this study adds to the growing body of literature on eHealth tools and their effect on chronic disease management, the methodological rigor is not strong enough to substantiate the authors’ claims. Therefore, it is difficult to support the validity and relevance of the study’s conclusion.
Thank you to the 2009 CATCH-IT Journal Club members from the University of Toronto’s HAD 5726 course along with Professor Eysenbach for their feedback and critical analysis that has greatly contributed to this report.
(1) COMPETE II. Available at: http://www.hc-sc.gc.ca/hcs-sss/pubs/chipp-ppics/2003-compete/final-eng.php. Accessed Oct 15, 2009.
(2) Holbrook A, Thabane L, Keshavjee K, Dolovich L, Bernstein B, Chan D, et al. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ 2009 Jul 7;181(1-2):37-44.