“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Thursday, October 8, 2009

Goud et al. Effect of guideline based CDS

Goud R, de Keizer NF, ter Riet G, Wyatt JC, Hasman A, Hellemans IM, Peek N. Effect of guideline based computerised decision support on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation. BMJ. 2009;338:b1440.

Full Text


OBJECTIVE: To determine the extent to which computerised decision support can improve concordance of multidisciplinary teams with therapeutic decisions recommended by guidelines.

DESIGN: Multicentre cluster randomised trial.

PARTICIPANTS: Multidisciplinary cardiac rehabilitation teams in Dutch centres and their cardiac rehabilitation patients.

INTERVENTIONS: Teams received an electronic patient record system with or without additional guideline based decision support.

MAIN OUTCOME MEASURES: Concordance with guideline recommendations assessed for two standard rehabilitation treatments (exercise and education therapy) and for two new but evidence based rehabilitation treatments (relaxation and lifestyle change therapy); generalised estimating equations were used to account for intra-cluster correlation and were adjusted for patient's age, sex, and indication for cardiac rehabilitation and for type and volume of centre.

RESULTS: Data from 21 centres, including 2787 patients, were analysed. Computerised decision support increased concordance with guideline recommended therapeutic decisions for exercise therapy by 7.9% (control 84.7%; adjusted difference 3.5%, 95% confidence interval 0.1% to 5.2%), for education therapy by 25.7% (control 63.9%; adjusted difference 23.7%, 15.5% to 29.4%), and for relaxation therapy by 25.5% (control 34.1%; adjusted difference 41.6%, 25.2% to 51.3%). The concordance for lifestyle change therapy increased by 3.2% (control 54.1%; adjusted difference 7.1%, -2.9% to 18.3%). Computerised decision support reduced cases of both overtreatment and undertreatment.

CONCLUSIONS: In a multidisciplinary team motivated to adopt a computerised decision support aid that assists in formulating guideline based care plans, computerised decision support can be effective in improving the team's concordance with guidelines. Therefore, computerised decision support may also be considered to improve implementation of guidelines in such settings.

TRIAL REGISTRATION: Current Controlled Trials ISRCTN36656997.


  1. 1. To what extent may the success and timing of the implementation of CARDSS have had an impact on performance?

    2. Page 6 (second-last paragraph) states that the 'process' measure was preferred over patient-related 'outcomes'. How about looking at the impact brought about by CARDSS from the 'structure' perspective (e.g., standardization of care using information technology such as CARDSS)? I would be interested in learning whether structure-related measures would have provided a broader multidisciplinary perspective and altered the results in any way. The Structure-Process-Outcome model comes from the Donabedian/IOM framework of performance measurement. (Source: Institute of Medicine (Eds.). Appendix E: Methodology and Analytic Frameworks. In: Performance Measurement: Accelerating Improvement. Washington, D.C.: National Academies Press; 2006. p. 170-175.)

  2. When adopting a new technology, there is usually an initial learning curve. It would be interesting to see the change in the level of concordance (if any) with respect to the learning curve, i.e., what effect does the initial learning curve have on the users' performance with the system?

  3. The description of the intervention states that "CARDSS actively guides users through the needs assessment procedure by way of a structured dialogue, prompting them to record the necessary information." Given that CARDSS uses a "structured dialogue", what control mechanism was in place for users to address decisions unique to the patient, such as social or cultural sensitivity factors for which a recommendation may not exist in the guidelines? How many of these may have contributed to the still-large gap of 46% undertreatment in relaxation therapy in the intervention group? Would the patients' perspective on reasons for refusal, gathered in a future study, be beneficial to the design of CARDSS?

  4. The multidisciplinary team in a cardiac rehab centre consists of various health professionals, such as nurses, cardiologists, pharmacists, dieticians, etc., and, as stated in the study, recruiting all these participants was considered a major challenge. Also, once centres were recruited, there was a high rate of dropout from the study. It would be interesting to find out what other factors, apart from understaffing or lack of IT infrastructure, caused this attrition rate among the participants. Could the usability of CARDSS also have played a role here?

  5. Four (out of 35) institutions wanted to participate but could not implement the CARDSS system because they lacked adequate information technology infrastructure. Since the developers knew there were 101 potentially eligible centres, did they survey these organizations to understand their infrastructure when planning the development of the CARDSS system? Did they make efforts to find out why 57 centres declined to take part in the study?

  6. The researchers are attempting to evaluate the effect of the CDSS on the team's decision making and state that their outcome measure is "concordance". However, doesn't a great deal of this depend on patient compliance? Patient refusal was the top reason for non-concordance with CARDSS, which seems to be outside of the provider's decision making ability, whether or not they are using the system. I am wondering if this was taken into consideration and if this measure was chosen appropriately.

  7. Did the study measure "concordance" with guideline recommendations (on exercise, education, lifestyle and relaxation), or concordance with the CARDSS documentation, especially since documentation can take place after the patient visit?

    Also, given the incentives (the CARDSS system for $130, plus training and helpdesk support), how did the researchers encourage adoption of the CARDSS system, especially in the control group, which did not receive the therapeutic recommendations? Could the quality of the documentation from the control versus intervention group have influenced the results?

  8. Cyn, you raise a good point. What is the clinical significance of the study? "Concordance" with guidelines by the provider does not take into account whether patients actually followed through with the recommendations.

    What is the clinical significance of “overtreatment with exercise therapy in the control arm?” Does this lead to more cardiovascular events or fewer?

    And, what is the meaning of the following statement on page 5,

    “of the 353 patients who should not have been given exercise therapy, 111 patients incorrectly received this treatment”?

  9. I agree with Talat’s statement regarding IT infrastructure. The paper somewhat implies that all participating centres had equal levels of IT infrastructure. This surely is not the case seeing as some had to drop out due to insufficient infrastructure.

    This was an interesting study which I think has a lot of value. However, I had a few issues. The authors stated that they needed a sample size of 36 centres for adequate statistical power, but they ended up with only 21. How powerful are the statistics then? Another issue not addressed was the number of personnel at each institute (or the average number). I think this is relevant, seeing as it affects workload issues (one centre dropped out due to lack of personnel). I don’t understand why the authors did not collect baseline data for comparison; this could have given more insight and meaning to the study. Or… why not continue the analysis on the control group for six months once they received the CARDSS system? It also interests me that the centres knew whether they were in the control group or not. Did this, therefore, have an effect on concordance? Finally, I am a bit unsure about the motivations of the authors, inasmuch as they are the originators of the software themselves.
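The concern above about recruiting 21 centres instead of the planned 36 can be made concrete with the standard design-effect formula for cluster randomised trials, DEFF = 1 + (m - 1) × ICC, which inflates the sample size an individually randomised trial would need. A minimal sketch follows; all numbers (individual-trial n, cluster size, ICC) are purely illustrative and not taken from the paper:

```python
import math

def clusters_needed(n_individual: int, m: int, icc: float) -> int:
    """Clusters per arm needed to recruit the design-effect-inflated n.

    n_individual: sample size per arm an individually randomised trial
                  would require for the target power.
    m:            average number of patients per cluster (centre).
    icc:          intra-cluster correlation coefficient.
    """
    deff = 1 + (m - 1) * icc  # design effect for equal cluster sizes
    return math.ceil(n_individual * deff / m)

# Illustration: if an individually randomised trial needed 400 patients
# per arm, with 100 patients per centre and an ICC of 0.05:
print(clusters_needed(400, 100, 0.05))  # -> 24
```

Even a modest ICC inflates the required number of clusters substantially, which is why falling from 36 planned centres to 21 analysed ones is a legitimate power concern.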

  10. After having read the paper, I was unclear on how the authors determined whether concordance was achieved or not. It appeared that there were multiple visits; in that case, was concordance measured at the first visit or over time?
    I was also very interested to know whether the authors intend to do, or have done, any follow-up work analysing the four intervention centres whose data gathering was problematic: was it the IT systems, or was it a lack of guideline concordance that led to poor data recording?

  11. In the outcome measures section, the authors state that they "evaluated the effect of the computerised decision support on undertreatment (withholding treatment from patients who should receive it)".

    Isn't this unethical?

    Also, wouldn't a potential bias occur in training if both the control and intervention groups were shown both versions of the software?

  12. The shortfalls of the paper:
    a) The authors did not explain how the adjusted difference and the 95% confidence interval were calculated. It was not clear whether the improvement seen in the intervention group, in comparison with the control group, was influenced by confounding factors.
    b) There was no report on how the confounding factors (age, sex, and diagnosis at the patient level; weekly volume of new patients and whether the centre was a specialised rehabilitation centre or part of an academic hospital at the centre level) influenced the rate of concordance.
    c) There was no report on how ethical approval was obtained.
    The strengths of the paper:
    a) The paper was well written and nicely laid out.
    b) Strengths, weaknesses and future research were nicely discussed.