“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion of results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Tuesday, October 13, 2009

Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial

Holbrook et al. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ. 2009 Jul 7;181(1-2):37-44

Full Text | Draft Report | Slideshow

Background: Diabetes mellitus is a complex disease with serious complications. Electronic decision support, providing information that is shared and discussed by both patient and physician, encourages timely interventions and may improve the management of this chronic disease. However, it has rarely been tested in community-based primary care.

Methods: In this pragmatic randomized trial, we randomly assigned adult primary care patients with type 2 diabetes to receive the intervention or usual care. The intervention involved shared access by the primary care provider and the patient to a Web-based, colour-coded diabetes tracker, which provided sequential monitoring values for 13 diabetes risk factors, their respective targets and brief, prioritized messages of advice. The primary outcome measure was a process composite score. Secondary outcomes included clinical composite scores, quality of life, continuity of care and usability. The outcome assessors were blinded to each patient’s intervention status.

Results: We recruited sequentially 46 primary care providers and then 511 of their patients (mean age 60.7 [standard deviation 12.5] years). Mean follow-up was 5.9 months. The process composite score was significantly better for patients in the intervention group than for control patients (difference 1.27, 95% confidence interval [CI] 0.79–1.75, p < 0.001); 61.7% of patients in the intervention group, compared with 42.6% (110/258) of control patients, showed improvement (difference 19.1%, p < 0.001). The clinical composite score also had significantly more variables with improvement for the intervention group (0.59, 95% CI 0.09–1.10, p = 0.02), including significantly greater declines in blood pressure (–3.95 mm Hg systolic and –2.38 mm Hg diastolic) and glycated hemoglobin (–0.2%). Patients in the intervention group reported greater satisfaction with their diabetes care.

Interpretation: A shared electronic decision-support system to support the primary care of diabetes improved the process of care and some clinical markers of the quality of diabetes care. (ClinicalTrials.gov trial register no. NCT00813085.)
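
In essence, the intervention described in the Methods maps each of 13 diabetes risk factors to its latest monitored value, a target, a colour-coded status, and a brief advice message shared by patient and provider. A minimal sketch of what one such tracker entry might look like (field names, thresholds, and advice text are illustrative only, not taken from the paper):

```python
# Illustrative sketch of a colour-coded tracker entry; field names, thresholds
# and advice text are invented, not taken from the COMPETE II system.
from dataclasses import dataclass

@dataclass
class TrackerItem:
    name: str      # risk factor, e.g. "Glycated hemoglobin (%)"
    value: float   # latest monitored value
    target: float  # treatment target (assumed here: lower values are better)
    advice: str    # brief, prioritized advice message

    def colour(self) -> str:
        """Green if at or below target, yellow if within 10% of it, red otherwise."""
        if self.value <= self.target:
            return "green"
        if self.value <= self.target * 1.10:
            return "yellow"
        return "red"

# Example usage
a1c = TrackerItem("Glycated hemoglobin (%)", value=7.4, target=7.0,
                  advice="Review medication adherence and diet.")
print(a1c.colour())  # "yellow"
```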



14 comments:

  1. Proper bibliographic information is missing. Holbrook et al. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ. 2009 Jul 7;181(1-2):37-44

  2. After reading this paper I had a number of questions, but the one I kept coming back to was whether the process composite score was validated. If so, I would like to learn more, especially about how they decided to assign differential scores to BP (2) vs. glycated hemoglobin (1), as well as how the score was validated.

  3. How were the targets for the 13 variables set? Are the targets based on any guidelines?

  4. How patients used the intervention, other than to view records and/or enter data, is unclear. How many telephone reminders were sent? Should the ability to use a computer have been part of the inclusion criteria for the intervention group? A high percentage of participants had never used a computer, which makes the intent of the intervention unclear, given the hypothesis that patients "..who had electronic and paper access to an individual diabetes tracker...and whose information was shared with their primary care providers, would have improved quality of diabetes care."

  5. It appears a survey was done to collect information on satisfaction and related issues, with responses tabulated descriptively. This was described under 'Other outcomes', which also discussed the issue of technical problems identified by both practitioners and patients.

    Overall, the reader is given little sense of the magnitude of the technical difficulties or of their real impact on the study. Was there a system for practitioners and patients to report problems, so that researchers could distinguish system problems from limited computer skills? Did the researchers keep logs of technical problems? How often were patients logging onto the system, and did this decline over the course of the study? Did the technical problems affect this in any way?

  6. The scoring process used in this paper was not very transparent to me, so I have trouble understanding the analysis and judging whether it is valid. For example, how exactly did they score "harshly" (p. 39) and deduct points for worsening variables, and how did they determine the cut-points for clinically improved variables used to assign the +1/0/-1 score? (p. 40)

    Regarding the process score, is it measured on a discrete or a continuous scale? That is, do you get a score of 1/1 for meeting the target, or can you get partial credit if you are close? If it is simply a discrete, categorical target-met yes-or-no scale, it is not appropriate to average the scores for a t-test; a chi-square test on the frequencies would be more appropriate (a rough sketch is below). Again, I am not sure how the composite score is measured, so this is just a thought.
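
    To make that point concrete, here is a rough sketch comparing the two approaches on an invented 2x2 table of "target met" counts (the numbers are made up, not taken from the paper):

```python
# Invented example: one binary "target met" process item, intervention vs control.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Rows: intervention, control; columns: target met, target not met (made-up counts)
table = np.array([[60, 40],
                  [45, 55]])

# Chi-square test on the frequency table (appropriate for a yes/no outcome)
chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi2:.4f}")

# For contrast: a t-test on the per-patient 0/1 scores, which treats the
# categorical outcome as if it were a continuous measurement
intervention_scores = np.repeat([1, 0], table[0])
control_scores = np.repeat([1, 0], table[1])
t_stat, p_t = ttest_ind(intervention_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
```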

  7. Was there any measure on the usability of the web-based tracker, particularly since a large proportion of patients were not computer/web users? Also, what was the role of the mailed colour coded tracker page for patients in the intervention group? Given the complexity of the intervention, “web-based electronic tracker system” + colour coded tracker page (mailed 4 per year) and telephone reminders, how much of this could be achieved with less technology (e.g., less expensive intervention)? That is, could the paper tracker and phone reminders be implemented in control group for comparison?

  8. I don't think anyone is denying the burden of chronic diseases on the health system, or that eHealth may be a potential solution. But do the authors of this study really believe that it will resonate with physicians in Ontario, considering that only 20-30% of them use EMRs in their clinical practices?

    It almost seems as if this paper was published for political reasons: urging more research, urging action by policy makers, and pointing out that positive outcomes were achieved despite a "suboptimal" provincial health infrastructure.

  9. The shortfalls:
    a) The results of this paper cannot be extended to populations that do not communicate in English. In a multicultural city such as Toronto, this is an important factor.

    b) The inclusion criteria specified age 18 or older, but the average age of patients was about 60, which indicates that a great number were older patients. This could have been the reason for their lack of computer literacy.

    c) More than 50% of participants had never used computers or the Internet. How, then, could they have been able to judge the reliability of accessing the website?

    d) There was no report on how these patients actually interacted with the website and entered data.

    e) It is not clear whether the statistically significant changes of 0.55 in the clinical composite score and 0.34 in the ABC composite score translate into any meaningful clinical benefit.

    f) Randomization at the patient level could have caused contamination through interactions among patients of the same physician (patients who may have been members of the same family, relatives, or friends).

    Observation:
    It is interesting that lifestyle, exercise, and smoking did not change in either of the study groups.

  10. (a) The commentary by Grant and Middleton (http://www.cmaj.ca/cgi/content/full/cmaj;181/1-2/17) provides a good lens through which to interpret this article. I could not agree more with the first caveat they propose: that the intervention group had a significantly greater number of doctor/lab visits during the 6-month study period. It is unclear whether and how this effect was taken into account in the analysis of the results.
    (b) Also, Table 2 shows that the majority of participants had never used the Internet. Upon being recruited for this study, could there have been a bias (and/or attrition) whereby subjects increased their use of the web-based application because they were conscious of being observed and of the duration of the study?
    (c) It would be interesting to see whether any data are actually available on the participants' views. The last sentence on page 39 under 'Outcome measures' is somewhat bizarre; it seems the authors do not want to reveal the source of this information. Reference 30 does not provide much information on the participants' perspective. Just a thought: a comparison of the cost-effectiveness of this trial with the participants' views would likely provide the most accurate picture of the feasibility of electronic decision support for the quality of diabetes care.

  11. Since over 40% of the participants had never used a computer and over 50% had never used the Internet, the telephone and mail components probably played more of a role for the intervention group than the web-based tracker, which may be an issue given that the web-based tracker was supposed to be the "cornerstone of the intervention".
    Did they try to educate the participants in using computers and the tracker?
    Is there any data on how much the participants used the tracker?
    It might be interesting to stratify the groups by computer/Internet usage level from their data and see whether there is a difference between the usage strata (a rough sketch of such an analysis is below).
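
    A rough sketch of such a stratified re-analysis, assuming a hypothetical per-patient table with columns for group assignment, baseline Internet use, and change in the process composite score (the file and column names are invented, not from the paper):

```python
# Hypothetical stratified comparison; file name and column names are invented.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("compete2_patients.csv")  # one row per patient (hypothetical)

for used_internet, stratum in df.groupby("ever_used_internet"):
    intervention = stratum.loc[stratum["group"] == "intervention", "process_score_change"]
    control = stratum.loc[stratum["group"] == "control", "process_score_change"]
    t_stat, p_value = ttest_ind(intervention, control, equal_var=False)
    print(f"ever_used_internet={used_internet}: "
          f"mean difference={intervention.mean() - control.mean():.2f}, p={p_value:.3f}")
```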

  12. After reading the article, I was a bit confused about how the decision support system was integrated and used in this study. Although details about the web portal are missing, Table 1 lists a few items that were monitored; what was the frequency of their submission? It is quite unclear.

    I think one of the major flaws of this study is its lack of technical detail. There are fully featured web sites such as Sugar Stats that do diabetes tracking, and I thought this area was quite well studied. However, a concern I have is: do increased patient visits to primary care truly translate into better care?

  13. A few questions and concerns I have (I have invited the author to comment on this, and I hope she does):

    1. I did not understand the footnote in Table 1, "recommended target was quarterly, but data were available before and after the intervention", in regard to physical activity and smoking. Why? Was that for both the control and the intervention group? How much sense does it make to incorporate this in the process score if (perhaps as part of the trial) these data were obtained anyway?

    2. In Table 3, for smoking and exercise I see 1.00 for most "before" data (implying that these process measures were obtained for 100% of patients) and 0.69 for all "after" data. How can that be? Also, why is the standard deviation 0.06 for "intervention, before, smoking"? If these data were available for 100% of the intervention participants, doesn't the standard deviation have to be 0.00? (See the quick check below.)
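
    A quick numerical check of that last point, using made-up 0/1 "process measure documented" indicators (the group size is hypothetical):

```python
# If a documented-yes/no indicator is 1 for every patient (mean 1.00),
# its standard deviation is necessarily 0.00; with roughly 250 patients,
# even a single undocumented patient already gives an SD of about 0.06.
import numpy as np

all_documented = np.ones(250)                   # every patient documented
print(all_documented.mean(), all_documented.std(ddof=1))                 # 1.0 0.0

one_missing = np.array([1] * 249 + [0])         # one patient undocumented
print(round(one_missing.mean(), 3), round(one_missing.std(ddof=1), 3))  # 0.996 0.063
```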

    3. There are some open questions about how exactly the diabetes tracker "interfaced" with the EMR. It seems the initial idea was an automatic XML-based data exchange, but at the end of the paper it is mentioned that the data had to be entered manually, twice, which raises a lot of questions and HUGE issues. Who entered the data? How accurate were they (was there any audit)? How often were they updated? How were physicians motivated to do it?
    There is also a well-known bias called the "checklist effect": asking physicians to fill in data in a checklist-like questionnaire will by itself lead to behaviour change and improve process and outcome measures, which makes it questionable to attribute the process/outcome improvements to the decision-support and reminder components.

    4. Usage data are completely missing from the paper, and they are essential for interpreting the results and drawing conclusions. We basically don't know how and how often the intervention was used (and, as it is a very complex intervention, which parts worked and which didn't), by whom (patients/physicians), etc. The authors allude to "technical difficulties", which always makes me question whether the application was actually used and usable (if it wasn't, then the entire premise of the paper, that the changes are due to the decision support system, falls apart). It is a common "misconduct" in the eHealth literature to split this kind of information into separate papers, which is annoying, as it makes the interpretation of efficacy data harder.

    5. In the editorial, Grant and Middleton correctly pointed out that we don't know if the effect we are seeing is merely due to the fact that patients in the intervention group were seeing their doctor more frequently. In fact, appendix 3 specifies three physician visits in the intervention arm, compared to none in the control arm. One can argue that those alone would have an impact on process and outcome quality.

    6. Having said that, it is beyond me why the authors did not report (and editors/peer reviewers did not insist on them providing) these numbers, i.e., how many doctor visits were made in the intervention group versus the control group.
    A difference of 0.66 was reported, but we still don't know the absolute numbers (which are essential for any cost-benefit analysis). Moreover, with 3 visits scheduled by the investigators in the intervention group (and none in the control group), the difference of 0.66 strikes me as surprisingly low.

  14. (continued)


    7. Appendix 3 also specifies lab work done a total of 3 times during the 6-month intervention, and only in the intervention group. What lab work exactly was done here? I assume (and hope!) it was not glycated hemoglobin, LDL cholesterol, or albuminuria, because this would have a direct impact on the process score and would completely undermine the suggestion that process improvements were due to the decision support system and reminders. But even if these three were not included, doesn't conducting these lab tests in the intervention group (and not in the control group) have an effect on process and outcomes per se, even in the absence of an electronic decision support system, since physicians in the intervention group would be more likely to identify problematic patients?
