“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion of results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Tuesday, November 10, 2009

CATCH-IT Final Report: Effect of guideline based computerised decision support on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation


Goud R, de Keizer NF, ter Riet G, Wyatt JC, Hasman A, Hellemans IM, Peek N. Effect of guideline based computerised decision support on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation. BMJ. 2009;338:b1440.

Abstract & Initial Comments / Full Text / Slideshow / CATCH-IT Draft & Comments

Introduction

In this BMJ article, the authors investigated the effect of computerized decision support (CDS) on adherence to guideline-recommended therapeutic decisions in multidisciplinary teams. Specifically, they studied whether the Cardiac Rehabilitation Decision Support System (CARDSS) promoted adherence to the Dutch Cardiac Rehabilitation Guidelines in cardiac rehabilitation centers in the Netherlands. The trial was registered and is, to my knowledge, the first to evaluate the effect of CDS on decision making in teams. The article should interest health care organizations whose multidisciplinary teams are considering adopting CDS.

Discussion

The study was a cluster randomized trial. To investigate the effect of CARDSS on care providers, the trial had to randomize by team/center rather than at the patient level; otherwise, clinicians exposed to the intervention would also have treated control patients, contaminating the comparison. Although a cluster randomized design has less precision and requires a greater sample size, the nature of the intervention made it a necessity.
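To make that precision cost concrete: cluster trials are typically sized by inflating the patient-level sample size with a design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and the ICC is the within-center correlation. Here is a minimal sketch; the ICC and center size are illustrative assumptions, not figures from the paper:

```python
# Why cluster randomization costs precision: the design effect discounts
# each patient's contribution by the within-center correlation (ICC).
# The ICC and center size below are assumptions, not figures from the paper.

def design_effect(center_size: float, icc: float) -> float:
    """DEFF = 1 + (m - 1) * ICC, where m is the average patients per center."""
    return 1 + (center_size - 1) * icc

def effective_sample_size(n_patients: int, center_size: float, icc: float) -> float:
    """How many independent observations the clustered sample is worth."""
    return n_patients / design_effect(center_size, icc)

# Example: 21 centers of ~60 patients each with a modest ICC of 0.05
print(design_effect(60, 0.05))                   # 3.95: each patient counts ~1/4
print(effective_sample_size(21 * 60, 60, 0.05))  # ~319 effective patients
```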

There was some concern about the high attrition rate in the study: of the 31 centers initially randomized, 5 discontinued participation and another 5 were excluded. The authors did, however, take steps to reduce bias, such as blinding the investigators to allocation during randomization, using objective outcome measures, and involving an external evaluator and statistician.

Given this high attrition rate, the study may have had insufficient statistical power. The authors stated that they required 36 participating centers during the six month follow up to detect a 10% absolute difference in adherence rate with 80% power at a type I error risk (α) of 5%, yet only 21 centers remained in the final analysis (a rough re-derivation of this calculation appears in the sketch below). Although the 21 centers may have provided a large enough sample by contributing more patients per center, the analysis still produced wide confidence intervals, such as the borderline significance found for exercise therapy (95% CI 0% to 5.4%).

In addition, the authors did not explain how the adjusted differences or confidence intervals were calculated, so it is unclear whether the improvement in the intervention group relative to the control group was free of confounding. There was also no explanation of how the covariates (age, sex, diagnosis, weekly volume of new patients, and whether the center was a specialized rehabilitation center or part of an academic hospital) influenced the rate of adherence.

The investigators detected a significant change for three of the four therapies, but they also noticed considerable undertreatment: patients did not receive the treatment the guidelines said they should, and were instead incorrectly given other treatments. This undertreatment was due to two issues: the therapy not being available at the center, and patient non-adherence. The investigators could have controlled for the first issue by excluding centers that did not offer all four therapies, but that would have reduced the number of participating centers even further. The second issue was more of a concern, since a trial of care team adherence to guidelines ultimately relies on patients adhering to them as well.
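As a rough, hedged re-derivation of the power calculation mentioned above, the sketch below sizes a two-proportion comparison and then inflates it by the design effect. Only the 10% difference, 80% power, and 5% α come from the paper; the baseline adherence rate, center size, and ICC are illustrative assumptions:

```python
# A hedged re-derivation of the paper's power calculation. Only the 10%
# absolute difference, 80% power, and 5% alpha come from the paper; the
# baseline adherence rate, center size, and ICC below are assumptions.
from scipy.stats import norm

def n_per_arm(p0: float, p1: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Patients per arm for a two-sided two-proportion test (no clustering)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p0 + p1) / 2
    return (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p0) ** 2

def centers_per_arm(n_patients: float, center_size: float, icc: float) -> float:
    """Inflate by the design effect, then convert patients to centers."""
    deff = 1 + (center_size - 1) * icc
    return n_patients * deff / center_size

n = n_per_arm(0.60, 0.70)            # assume adherence rises from 60% to 70%
print(round(n))                      # ~357 patients per arm before clustering
print(centers_per_arm(n, 60, 0.05))  # ~24 centers per arm under these assumptions
```

Under these (assumed) inputs the requirement lands in the same neighbourhood as the 36 centers the authors report, which makes the shortfall to 21 analyzable centers easy to appreciate.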

To account for the initial learning curve with CARDSS, the authors provided a standardized training course for the care providers and excluded from the analyses the data of patients enrolled during the first two weeks of CARDSS use at each participating center. The authors stated that CARDSS was judged favourably in a usability study. However, the fact that three centers were excluded for not recording their decisions in CARDSS, and two more for excessive missing data, may indicate a lingering usability problem; it may be possible to re-engineer the software to reduce unrecorded decisions and missing data. The data audit the authors used to decide these exclusions, comparing the records kept in CARDSS against the paper based patient record, was one of the strengths of the study. If there were discrepancies between the two records, the authors considered the center in question unreliable and excluded it from the analyses; if a center passed the audit but 20% or more of its patients' records had missing data, it was excluded as well. The authors performed a follow up qualitative study that looked into how CARDSS affected the main barriers to implementing guidelines for cardiac rehabilitation; the interviews in that study yielded richer user feedback on the usability of CARDSS.
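A minimal sketch of that two-stage exclusion rule follows. The record structure and field name are hypothetical; only the two criteria (any audit discrepancy, and the 20% missing-data threshold) come from the report:

```python
# A minimal sketch of the two-stage exclusion rule described above.
# The record structure and field name are hypothetical; only the two
# criteria (any audit discrepancy; >= 20% missing data) come from the report.

def center_is_reliable(audit_discrepancies: int, patients: list[dict]) -> bool:
    """Keep a center only if it passes the audit and the missing-data check."""
    if audit_discrepancies > 0:
        return False  # CARDSS records disagreed with the paper based chart
    n_missing = sum(1 for p in patients if p.get("has_missing_data", False))
    return n_missing / len(patients) < 0.20  # exclude at 20% or more missing

# Example: clean audit, but 3 of 10 patient records are incomplete
patients = [{"has_missing_data": i < 3} for i in range(10)]
print(center_is_reliable(0, patients))  # False: 30% missing data
```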

Although the authors stated that the study design ensured there would be no bias from the Hawthorne effect, they did not explain in detail how this was accomplished. A Hawthorne effect remains possible: the care providers knew they were in a study in which their performance was being monitored and reported, which may have prompted them to increase their adherence to the guidelines. There was also a potential bias from requiring care providers to record their reasons whenever they did not adhere to the guidelines; they may have simply gone along with the therapy CARDSS recommended, since doing so avoided the additional work.

It would also have been beneficial if the investigators had collected baseline adherence data before implementing CARDSS at the centers, allowing comparisons between pre-implementation and post-implementation of CARDSS. The current design cannot support such a comparison because the control centers were also given a version of CARDSS; even with limited functionality, it could have affected the results. It would likewise be interesting to investigate guideline adherence in the control group after it receives the fully functional CARDSS; the authors have an ongoing follow up trial that may examine this aspect.

Finally, there was the question of why ethics approval was deemed unnecessary for the study. Since patients were involved, one would assume ethics approval would be required; however, the article states that, according to the medical ethics committee of the Academic Medical Centre in Amsterdam, approval was not needed in their country.

Conclusion

The authors concluded that CARDSS improved adherence to guideline recommendations for three of the four therapies: exercise, education, and relaxation therapy. There was no effect on lifestyle change therapy, perhaps partly because the majority of clinics did not have that program available. Although the study has some weaknesses and limitations, the authors hope to address most of them in their follow up cluster randomized study.

Questions to the authors

  • Why was baseline adherence data not collected?
  • How do the covariates affect guideline adherence?
  • How were the adjusted differences and confidence intervals calculated?
  • What explains the large variation in adherence between centers?
  • Why was CARDSS not designed to be interoperable with other information systems?
  • How did you account for the effect of the control group knowing they were in the control group?
  • How significant a factor was patient non-adherence, and how do you plan to control for it in the follow up trial?
