Goud R, de Keizer NF, ter Riet G, Wyatt JC, Hasman A, Hellemans IM, Peek N. Effect of guideline based computerised decision support on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation. BMJ. 2009;338:b1440.
Introduction
In this BMJ article, the authors investigated the effect of computerised decision support (CDS) on adherence to guideline recommended therapeutic decisions in multidisciplinary teams. Specifically, they studied the use of the Cardiac Rehabilitation Decision Support System (CARDSS) in promoting adherence to the Dutch guidelines for cardiac rehabilitation in rehabilitation centres in the Netherlands. To my knowledge, it is the first study to evaluate the effect of CDS on decision making in teams. The article would be of interest to health care settings with multidisciplinary teams that are considering adding CDS to their centre.
Discussion
The study used a cluster randomised design. Because the trial was essentially testing the care provider teams, it had to randomise by team/centre rather than at the patient level: it would not be feasible to include both intervention and control groups in the same centre, since teams exposed to the intervention would carry what they learned over to control patients. A cluster randomised design may reduce the ability to detect a true difference, but given the nature of the trial it was a necessity.
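For illustration, below is a minimal Python sketch of what centre-level (cluster) randomisation looks like; the centre names, count, and seed are hypothetical and not taken from the trial.

```python
import random

# Hypothetical centre identifiers; the actual trial randomised 31 centres.
centres = [f"centre_{i:02d}" for i in range(1, 32)]

rng = random.Random(2009)  # fixed seed so the allocation can be audited/reproduced
shuffled = rng.sample(centres, len(centres))

# Whole centres, not individual patients, are assigned to the two arms.
half = len(shuffled) // 2
allocation = {c: "intervention" for c in shuffled[:half]}
allocation.update({c: "control" for c in shuffled[half:]})
```

Every patient treated at a given centre then falls into that centre's arm, which is why both the analysis and the power calculation must account for within-centre correlation.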
There was some concern about the high attrition rate in the study. Initially, 31 centres were randomised, but five discontinued participation and another five were excluded. The authors stated that centres were excluded because of data discrepancies or missing data; however, as the lead developers of the CARDSS system, they may have been influenced in choosing which centres to exclude, for example excluding centres in which the system appeared to have a negative effect on guideline adherence. That said, the authors took steps to reduce this bias, such as blinding the investigators during randomisation allocation, using objective outcome measures, and involving an external evaluator and statistician.
Given this high attrition, the study may have had insufficient statistical power: the authors stated that they required 36 participating centres with six months of follow up to detect a 10% absolute difference in adherence rate with 80% power at a type I error risk (α) of 5%, yet in the end only 21 centres were included in the analysis. Although those 21 centres may still have provided an adequate sample size by contributing a larger number of patients per centre, the analysis nevertheless produced wide confidence intervals, such as the borderline significant result for exercise therapy (95% CI 0% to 5.4%). In addition, the authors did not explain how they calculated the adjusted differences or confidence intervals, so it is not clear that the improvement seen in the intervention group relative to the control group was free of confounding. There was also no explanation of how the covariates (age, sex, diagnosis, weekly volume of new patients, and whether the centre was a specialised rehabilitation centre or part of an academic hospital) influenced adherence rates.
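To make the power issue concrete, below is a back-of-the-envelope sample size calculation for a cluster randomised comparison of two proportions. The baseline adherence rate, average number of patients per centre, and intracluster correlation coefficient (ICC) are assumed for illustration; the inputs the authors actually used are not reported in this review.

```python
from math import ceil
from scipy.stats import norm

alpha, power = 0.05, 0.80               # type I error risk and desired power
p_control, p_intervention = 0.50, 0.60  # assumed rates; a 10% absolute difference
m, icc = 50, 0.02                       # assumed patients per centre and ICC

# Standard two-proportion sample size per arm under patient-level randomisation.
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
variance = p_control * (1 - p_control) + p_intervention * (1 - p_intervention)
n_patients = z**2 * variance / (p_control - p_intervention) ** 2

# Inflate by the design effect to account for within-centre correlation,
# then convert the patient count into a number of centres.
design_effect = 1 + (m - 1) * icc
centres_per_arm = ceil(n_patients * design_effect / m)
print(f"{centres_per_arm} centres per arm ({2 * centres_per_arm} in total)")
```

With these illustrative inputs the requirement lands in the same ballpark as the 36 centres the authors report, but the answer is highly sensitive to the ICC and the cluster size, which is exactly why losing 10 of the 31 randomised centres threatens the planned power.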
To account for the initial learning curve with CARDSS, the authors provided a standardised training course for the care providers and excluded from the analyses the data of patients enrolled during the first two weeks of CARDSS use at each participating centre. The authors stated that CARDSS was judged favourably in a usability study. However, the fact that three centres were excluded for not recording their decisions in CARDSS, and two more for excessive missing data, may indicate a usability problem; it may be possible to re-engineer the software to reduce missing recorded decisions and data.

The data audit the authors used to decide which centres to exclude, comparing record keeping in CARDSS against the paper based patient record, was one of the strengths of the study. If there were discrepancies between the two records, the authors considered the centre in question unreliable and excluded it from the analyses; if a centre passed the audit but 20% or more of its patients' records had missing data, it too was excluded. The authors also performed a follow up qualitative study that examined how CARDSS affected the main barriers to implementing guidelines for cardiac rehabilitation; the interviews in that study provided more qualitative user feedback on the usability of CARDSS.
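To pin down the exclusion logic just described, here is a minimal sketch of the two rules; the record structures and field names are hypothetical, not the authors' actual implementation.

```python
def centre_excluded(cardss_records, paper_records, missing_threshold=0.20):
    """Apply the two exclusion rules described in the study.

    Both arguments are hypothetical dicts mapping patient id -> dict of
    recorded therapeutic decisions, with None marking a missing value.
    """
    # Rule 1 (audit): any discrepancy between CARDSS and the paper based
    # patient record makes the centre's record keeping unreliable.
    for patient_id, paper_entry in paper_records.items():
        if cardss_records.get(patient_id) != paper_entry:
            return True

    # Rule 2 (completeness): exclude the centre if 20% or more of its
    # patients' records contain missing data.
    n_missing = sum(
        1 for record in cardss_records.values()
        if any(value is None for value in record.values())
    )
    return n_missing / len(cardss_records) >= missing_threshold
```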
Although the authors stated that the study design ensured there would be no bias from the Hawthorne effect, they did not explain in detail how this was accomplished. A Hawthorne effect remains possible: the care providers knew they were in a study and that their performance was being monitored and reported, which may have prompted them to increase their adherence to the guidelines. There was also potential bias from requiring care providers to record their reasons whenever they did not adhere to the guidelines: they may simply have gone along with the therapy recommended by CARDSS, since that would avoid the additional work.
It would also have been valuable for the investigators to collect baseline adherence data before implementing CARDSS at the centres, allowing comparisons between pre-implementation and post-implementation adherence. The control group in the current design cannot serve this purpose, since control centres were also given a version of CARDSS; even with limited functionality, it would have had an effect on the results. It would also be interesting to investigate guideline adherence in the control group after they are given the fully functional CARDSS in a follow up trial; the authors have an ongoing follow up trial that may be examining this aspect.
Finally, there was the issue of the lack of ethics approval for the study. The article states that ethics approval was not needed according to the medical ethics committee of the Academic Medical Centre in Amsterdam.
Conclusion
The authors concluded that CARDSS improved adherence to guideline recommendations for three of the four therapies: exercise, education, and relaxation therapy. There was no effect on lifestyle change therapy, which may be partly because the majority of clinics did not have that therapy programme available. Although the study has some weaknesses and limitations, the authors hope to address most of them in their follow up cluster randomised study.
Questions to the authors
- Why was baseline adherence data not collected?
- How do the covariates affect guideline adherence?
- How were the adjusted differences and CI values calculated?
- What is the reason for the large variation in adherence between centres?
- Why was ethics approval not needed?
- Why was CARDSS not designed to be interoperable with other information systems?
- How did the authors account for the effect of the control group knowing they were in the control group?