Holbrook, A., Thabane, L., Keshavjee, K., Dolovich, L., Bernstein, B., Chan, D., Troyan, S., Foster, G., Gerstein, H. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ. 2009 Jul 7;181(1-2):37-44.
Globally, it is estimated that 18 million people are living with diabetes mellitus, and the condition accounts for 5% of all deaths worldwide (1). It affects approximately 7% of the populations of Canada and the United States (2). Any intervention capable of cost-effectively reducing the burden these patients place on the health care system would have great potential to reduce future diabetes-related health care costs. To this end, the paper by Holbrook et al. (2), Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial, represents a significant attempt at addressing diabetes care in a community-based care setting with an information technology-based intervention. The paper reports a pragmatic randomized trial, conducted as part of the Computerization of Medical Practices for the Enhancement of Therapeutic Effectiveness (COMPETE) project (http://compete-study.com/), that evaluates a Web-based, decision-support-enabled diabetes tracker and, more generally, the application of eHealth technologies in community-based care settings.
The authorship team has extensive experience in clinical decision support and in conducting research trials in the relatively emergent fields of eHealth and clinical informatics, and has published several papers on information technology-based interventions for chronic diseases. This paper is one of many publications produced over the past decade from the COMPETE project. Phase II of the project, conducted under the auspices of Health Canada, received $2.378 million in funding (3) from the Canada Health Infostructure Partnerships Program (CHIPP).
Objectives of the study
The aim of the study was to evaluate whether the COMPETE II intervention would improve the quality of diabetes care. Holbrook and colleagues (2) describe their multi-modal intervention as having several components, including:
shared access by the primary care provider and the patient to a Web-based, colour-coded diabetes tracker, which provided sequential monitoring values for 13 diabetes risk factors, their respective targets and brief, prioritized messages of advice.
The intervention also included monthly automated telephone reminders, 3 scheduled sets of lab work, and frequent physician visits. The authors note a lack of high-quality research on the effects of computerized decision support (CDS) systems, although the limited studies available suggest that CDS systems can positively change provider behavior.
The study was a randomized trial conducted over approximately one year. A total of 46 primary care providers (43 physicians and 3 nurse practitioners) were recruited, all of whom used an electronic medical record in their practice; no description of how these providers were recruited is given. Patients were drawn from the providers' rosters, recruited by mailed invitation, and randomized using computer-generated group assignment with allocation concealment. Patients in the intervention group were instructed to have relevant blood tests followed by a visit with their family physician a week later. In addition, intervention-arm patients had access to the Web-based tracker, were twice mailed a paper version of their tracker page, and received monthly telephone reminders for medications, lab work, and physician visits. Follow-up lasted 6 months.
Results were computed using two scoring methods: a process composite score and a clinical outcome score. The process composite score was the sum of 8 variables measuring the frequency of care processes (e.g., quarterly or semiannual monitoring). The variables were modeled on recommendations from professional diabetes associations, expert opinion, and literature reviews. Each variable was assigned a maximum score of either 1 or 2 points, though the authors do not describe how these weights were determined. Clinical outcomes were scored against targets on an 8-item composite of clinical markers. A special subset, labeled 'ABC' (glycated hemoglobin, blood pressure, and LDL cholesterol), was also evaluated.
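A minimal sketch of how such a summed, capped composite might be computed is below. The variable names are partly assumed for illustration (the paper's question list confirms only that blood pressure carries 2 points and glycated hemoglobin 1; the remaining weights here are placeholders):

```python
# Hypothetical maximum points per process variable. Only the blood
# pressure (2) and glycated hemoglobin (1) weights are confirmed by the
# paper; the other names and weights are illustrative assumptions.
MAX_POINTS = {
    "glycated_hemoglobin": 1,
    "blood_pressure": 2,
    "ldl_cholesterol": 1,
    "albuminuria": 1,
    "weight": 1,
    "foot_exam": 1,
    "exercise": 1,
    "smoking": 1,
}

def process_composite(points_earned: dict) -> int:
    """Sum the points earned for each care process, capped at that
    variable's maximum. A change score would then be the difference
    between the 'after' and 'before' composites."""
    return sum(min(points_earned.get(var, 0), cap)
               for var, cap in MAX_POINTS.items())
```

For example, a patient credited with blood pressure monitoring (2 points) and glycated hemoglobin testing (1 point) would score 3 on this sketch.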
A total of 511 participants were recruited: 253 assigned to the intervention arm and 258 to the control arm. The intervention group improved the total process composite score relative to the control group (1.33 vs. 0.06; difference 1.27, 95% CI 0.79 to 1.75, p < 0.001), as well as the clinical composite score (p = 0.036) and the ABC composite score (0.34, 95% CI 0.04 to 0.65, p = 0.028). The results, although modest, are consistent with other studies (4,5) of eHealth interventions targeting chronic diseases.
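As a quick consistency check on the reported interval for the process composite, the implied standard error can be recovered from the CI half-width under a normal approximation (the 1.96 multiplier is an assumption on our part; the authors may have used a t distribution):

```python
# Reported 95% CI and point estimate for the process composite difference.
lo, hi, diff = 0.79, 1.75, 1.27

# Implied standard error from the CI half-width, assuming a 1.96 z-multiplier.
se = (hi - lo) / (2 * 1.96)   # roughly 0.245

# The interval should be (approximately) centred on the point estimate.
assert abs((lo + hi) / 2 - diff) < 0.01
```

The reported figures pass this check: the interval is centred on 1.27, consistent with a symmetric normal-approximation CI.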
Although the authors suggest that quality of diabetes care can be improved with electronic tracking and decision support shared by providers and patients, it is unclear whether this effect is attributable to the intervention itself or merely to an increased frequency of physician visits. The study design specifies 3 care-provider visits for the intervention arm, compared with none for the control arm. This alone could have affected both the process and clinical outcome scores. The authors do not report the actual number of primary care provider (PCP) visits for patients in either group; a between-group difference of 0.66 is reported, but the absolute numbers are omitted.
There are also concerns regarding the frequency of lab visits. Appendix 3 states that patients in the intervention arm had a total of 3 lab visits during the 6-month study, but there is no mention of the number of lab visits for patients in the control arm. Moreover, the paper does not state what type of lab work was done; if albuminuria, glycated hemoglobin, or LDL cholesterol were measured, this alone would alter the process score, undermining the assertion that the observed improvements resulted from the decision support system.
Holbrook and colleagues have taken a valuable stride in the implementation of complex eHealth innovations, specifically in a community-based care setting. The complexities of such implementations present many obstacles to eHealth evaluation. It would also be valuable to understand the role that patients themselves play with regard to eHealth tools in non-disease-specific contexts, a topic that warrants further investigation.
Although the study concept is unique, the reporting is problematic in numerous areas. The paper does not reveal utilization rates for the decision support tool itself, making it difficult to determine which portions of the tool were useful to physicians and patients. The paper also mentions unspecified “technical difficulties” (2) with the tool; with no further description, the reader is left wondering to what extent the tool was actually used. Similarly, there is no description of how the system interfaced with the providers’ EMRs beyond a brief mention of the “inability to completely integrate the tracker’s decision-support system with each of the 5 different types of electronic medical records” (2). This raises questions about data quality: readers do not know how accurate the data were, who entered them, how often, or what the motivations were for doing so. Such information would have been valuable in guiding further development of similar clinical informatics interventions.
There is also some concern regarding the data presented in the paper. Table 3 reports 'Before' scores of 1.00 for both exercise and smoking in the intervention arm, with corresponding 'After' scores of 0.69. This implies that 100% of these processes were measured before the intervention but far fewer afterward. In addition, the standard deviation reported in the 'Before' column for smoking is 0.06, despite a mean score of 1.00, the maximum possible. How is this possible? The authors do not address this anomaly in the paper.
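The inconsistency can be illustrated with simple arithmetic, assuming (as Table 3 suggests) that 1.00 is the maximum attainable score for these items:

```python
import statistics

# If every intervention patient (n = 253 per the paper) scored the
# maximum of 1.00, the standard deviation must be exactly zero.
scores = [1.0] * 253
assert statistics.pstdev(scores) == 0.0

# Conversely, a nonzero SD (such as the reported 0.06) requires at
# least one score below 1.00, so the mean could not equal the maximum.
```

Either the mean, the SD, or the stated maximum must therefore be misreported.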
Questions also arise regarding the study's timeframe. Participants were tracked for 6 months, although the total study period exceeded a year. Why did the authors choose a 6-month follow-up when many of the process variables were measured on a semiannual basis? In addition, the authors do not address possible bias toward the null: physicians in the control arm may have learned from their rostered patients in the intervention arm, diluting the apparent effect of the decision support tool. A remedy for this issue, a cluster randomized trial, is not mentioned.
Although the study has many limitations, many of which could have been reduced through more thorough reporting, its implications are vast. Randomized trials of eHealth applications are especially onerous to conduct, and this study exemplifies a successful randomized trial of eHealth innovations in a community-based setting. The findings are significant for chronic disease management and will aid the future development of eHealth tools targeting chronic diseases. Nonetheless, the study highlights the need for more rigorous research standards in health information technology so that tools can be meticulously evaluated.
Questions for the Authors
1. What were the technical difficulties in integrating the tracker with the care providers’ EMR? How did this affect the study?
2. How was the process composite score validated? How did the authors decide to assign differential scores to blood pressure (2 points) versus glycated hemoglobin (1 point)?
3. Why was the study conducted within such a short time frame (6 months)?
4. Why were only the intervention arm participants sent for initial lab tests and follow up physician visits? How were the baseline values measured for the control arm?
5. What were the utilization rates of the electronic decision support system? Which components worked/were used?
6. How were patients educated on using the intervention?
7. What were the reasons for patients withdrawing from the study?
8. Why did the authors not perform a cluster randomized trial to address bias toward the null?
9. How often did patients in the control arm visit primary care providers and have relevant lab tests conducted?
10. How were primary care providers recruited?
11. Did the authors consider the ‘checklist effect’ bias whereby filling out a questionnaire can alter behavior?
The author would like to thank the members of the CATCH-IT Journal Club at the University of Toronto and the Centre for Global eHealth Innovation, Toronto, for their feedback and critical analysis that have greatly contributed to this report.
(1) World Health Organization. Diabetes Programme. 2008. Available at: http://www.who.int/mediacentre/factsheets/fs312/en/index.html. Accessed Nov 7, 2009.
(2) Holbrook A, Thabane L, Keshavjee K, Dolovich L, Bernstein B, Chan D, et al. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ 2009 Jul 7;181(1-2):37-44.
(3) COMPETE II. Available at: http://www.hc-sc.gc.ca/hcs-sss/pubs/chipp-ppics/2003-compete/final-eng.php. Accessed Oct 15, 2009.
(4) Trudel M, Cafazzo JA, Hamill M, Igharas W, Tallevi K, Picton P, et al. A mobile phone based remote patient monitoring system for chronic disease management. Stud Health Technol Inform. 2007;129(Pt 1):167-171.
(5) Wyne K. Information technology for the treatment of diabetes: improving outcomes and controlling costs. J Manag Care Pharm. 2008 Mar;14(2 Suppl):S12-7.