“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Wednesday, October 28, 2009

CATCH-IT Draft: Effect of guideline based computerised decision support on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation

Goud R, de Keizer NF, ter Riet G, Wyatt JC, Hasman A, Hellemans IM, Peek N. Effect of guideline based computerised decision support on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation. BMJ. 2009;338:b1440.

Abstract / Full Text / Slideshow

Introduction

In this article in BMJ, the authors investigated the effect of computerised decision support (CDS) on adherence to guideline-recommended therapeutic decisions in multidisciplinary teams. Specifically, they evaluated the Cardiac Rehabilitation Decision Support System (CARDSS) in promoting adherence to the Dutch cardiac rehabilitation guidelines in cardiac rehabilitation centres in the Netherlands. To my knowledge, this is the first study to evaluate the effect of CDS on decision making in teams. The article would be of interest to healthcare settings where multidisciplinary teams are considering adding CDS to their centre.

Discussion

The study design was a cluster randomised trial. Because the intervention targeted the decision making of care provider teams rather than individual patients, the trial had to randomise by team/centre rather than at the patient level. Randomising patients within a centre would not have been feasible: team members exposed to the intervention would carry what they learned over to their control patients, contaminating the comparison. A cluster randomised design does reduce the statistical power to detect a true difference compared with patient-level randomisation, but given the nature of the trial it was a necessity.

There was some concern about the high attrition rate in the study. Of the 31 centres initially randomised, 5 discontinued participation and another 5 were excluded. The authors stated that the excluded centres were removed because of data discrepancies or missing data; however, as the lead developers of the CARDSS system, they had a potential conflict of interest, and, although there is nothing concrete to suggest this occurred, their choice of which centres to exclude could conceivably have removed centres in which the system appeared to have a negative effect on guideline adherence. The authors did take steps to reduce this bias, such as concealing allocation from the investigators during randomisation, using objective outcome measures, and involving an external evaluator and statistician.

Given this high attrition rate, the study may have had insufficient statistical power: the authors stated that they required 36 participating centres over their six month follow-up to detect a 10% absolute difference in adherence rate with 80% power at a type I error risk (α) of 5%, yet in the end only 21 centres were included in the analysis. Although those 21 centres may still have provided a large enough sample by contributing more patients per centre, the analysis produced wide confidence intervals, such as the borderline significant result for exercise therapy (95% CI 0% to 5.4%). In addition, the authors did not explain how they calculated the adjusted differences or confidence intervals, so it is not clear that the improvement seen in the intervention group relative to the control group was free of confounding. There was also no explanation of how the covariates (age, sex, diagnosis, weekly volume of new patients, and whether the centre was a specialised rehabilitation centre or part of an academic hospital) influenced the rate of adherence.
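
As an aside for readers unfamiliar with cluster-trial power calculations, the sketch below shows a standard approach to such a calculation: compute the sample size for an individually randomised comparison of two proportions, then inflate it by the design effect to account for clustering. The 10% absolute difference, 80% power, and α of 5% come from the article; the baseline adherence rate, patients per centre, and intracluster correlation are illustrative assumptions of mine, not the authors' actual inputs.

```python
# Sketch of a cluster-trial sample size calculation (illustrative assumptions).
from scipy.stats import norm

alpha, power = 0.05, 0.80                 # from the article
p_control, p_intervention = 0.50, 0.60    # assumed rates giving a 10% absolute difference
m, icc = 50, 0.02                         # assumed patients per centre and intracluster correlation

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Patients per arm if individual patients (not centres) were randomised
variance = p_control * (1 - p_control) + p_intervention * (1 - p_intervention)
n_individual = (z_alpha + z_beta) ** 2 * variance / (p_control - p_intervention) ** 2

# The design effect inflates this to account for the correlation of outcomes
# within a centre; dividing by patients per centre gives the centres needed.
design_effect = 1 + (m - 1) * icc
centres_per_arm = n_individual * design_effect / m

print(f"patients per arm, individual randomisation: {n_individual:.0f}")  # ~385
print(f"design effect: {design_effect:.2f}")                              # 1.98
print(f"centres per arm: {centres_per_arm:.1f}")                          # ~15.2
```

With these assumed inputs the total comes out at roughly 30 centres, in the neighbourhood of the 36 the authors required. Note that adding patients per centre yields diminishing returns, because the design effect grows with cluster size, which is why losing 10 of 31 centres is hard to compensate for.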

To account for the initial learning curve with CARDSS, the authors provided a standardised training course for the care providers and excluded from the analyses the data of patients enrolled during the first two weeks of CARDSS use at each participating centre. The authors stated that CARDSS was judged favourably in a usability study. However, the fact that three centres were excluded for not recording their decisions on CARDSS, and two more for too much missing data, may indicate a usability problem; it may be possible to re-engineer the software to reduce unrecorded decisions and missing data. The data audit the authors used to decide which centres to exclude, comparing the record keeping in CARDSS against the paper based patient record, was one of the strengths of the study. If there were discrepancies between the two records, the authors considered the centre in question unreliable and excluded it from the analyses; if a centre passed the audit but 20% or more of its patients' records had missing data, it was likewise excluded, as sketched below. The authors also performed a follow-up qualitative study examining how CARDSS affected the main barriers to implementing guidelines for cardiac rehabilitation; the interviews in that study provided more qualitative user feedback on the usability of CARDSS.
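
To make the audit rules concrete, here is a minimal sketch of that exclusion logic, assuming a simple data representation I have invented for illustration (one dictionary per centre mapping patient IDs to recorded decisions, with None marking a missing entry); it is a reconstruction of the rules as described, not the authors' actual audit procedure.

```python
def audit_centre(cardss_records, paper_records, missing_threshold=0.20):
    """Return True if a centre passes the data audit, False if it should be
    excluded from the analyses (illustrative reconstruction of the rules)."""
    n_patients = len(cardss_records)
    if n_patients == 0:
        return False  # no decisions recorded on CARDSS at all

    n_missing = 0
    for patient_id, decision in cardss_records.items():
        if decision is None:  # missing entry in CARDSS
            n_missing += 1
            continue
        # Rule 1: any discrepancy between CARDSS and the paper based patient
        # record marks the centre's record keeping as unreliable.
        if paper_records.get(patient_id) != decision:
            return False

    # Rule 2: exclude the centre if 20% or more of its patients' records
    # have missing data.
    return n_missing / n_patients < missing_threshold
```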

Although the authors stated that the study design ensured there would be no bias from the Hawthorne effect, they did not explain in detail how this was accomplished. A Hawthorne effect remains possible: the care providers knew they were in a study in which their performance was being monitored and reported, which may itself have prompted them to increase their adherence to the guidelines. There was also a potential bias in requiring care providers to record a reason whenever they did not adhere to the guidelines; they may simply have gone along with the therapy recommended by CARDSS because doing so avoided the additional work.

It would also have been valuable for the investigators to collect baseline adherence data before implementing CARDSS at the centres, allowing comparisons between pre-implementation and post-implementation performance. The current design cannot support such a comparison because the control group was also given a version of CARDSS; even with its limited functionality, that version could have affected the results. It would likewise be interesting to investigate guideline adherence rates in the control group after they are given the full functionality of CARDSS in a follow-up trial; the authors have an ongoing follow-up trial that may be examining this aspect.

Finally, there was the issue of the study's lack of ethics approval. The article stated that, according to the medical ethics committee of the Academic Medical Centre in Amsterdam, ethics approval was not needed.

Conclusion

The authors concluded that CARDSS improved adherence to guideline recommendations for three of the four therapies: exercise, education, and relaxation therapy. There was no effect on lifestyle change therapy, which may be partly because the majority of clinics did not have that therapy programme available. Although this study has some weaknesses and limitations, the authors hope to address most of them in their follow-up cluster randomised study.

Questions to the authors
  • Why was baseline adherence data not collected?
  • How do the covariates affect guideline adherence?
  • How were the adjusted differences and CI values calculated?
  • What is the reason for the large variation in adherence between centres?
  • Why was ethics approval not needed?
  • Why was CARDSS not designed to be interoperable with other information systems?
  • How did the authors account for the effect of the control group knowing they were in the control group?

4 comments:

  1. Hi Andrew,
    Very good draft report; however, it would be helpful if you could improve the following paragraph regarding the randomized cluster sampling to clarify your point with respect to avoiding contamination at the physician level.
    “The study design was a randomized clustered trial. In order to investigate the effect of CARDSS on the care providers, the trial had to randomize by team/center rather than randomize on the patient level since the study was essentially testing the care provider teams. Therefore, it would not be feasible to include both the intervention and control group in the same center since the teams can learn from the intervention. Having a randomized clustered design may subtract from the ability to test a true difference, however, due to the nature of the trial it was a necessity.”
    Thank you.
    Marjan

  2. A good draft Andrew.

    One suggestion: You might want to ask the authors whether the patients' refusal was associated with "no ethics approval", which may have led to non-concordance with recommendations. Are they planning to address this issue in their follow-up randomised cluster study?

  3. Hi Andrew,
    Not sure if any of my comments were useful or not. Just in case you might want to expand on your critique, I would at least question the clinical significance of “overtreatment with exercise therapy in the control arm?” Does this lead to more cardiovascular events or fewer?

    Also what is meant by the following statement on page 5,

    “of the 353 patients who should not have been given exercise therapy, 111 patients incorrectly received this treatment”?

  4. A couple of comments to consider,
    - "being the lead developers of the CARDSS system, they may have been influenced on which centres to exclude...", may be strong to suggest w/out something concrete to base on
    - for the question of why ethics approval was not needed, the authors openly addressed it as unnecessary in their country
    - one part done well is that the trial was registered
