“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Monday, October 26, 2009

Clinical Decision Support Capabilities of Commercially-available Clinical Information Systems

WRIGHT A, SITTIG DF, ASH JS, SHARMA S, PANG JE, MIDDLETON B. Clinical Decision Support Capabilities of Commercially-available Clinical Information Systems. J Am Med Inform Assoc. 2009; 16(5): 637-644

Abstract

Background: The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems.

Purpose: The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems.

Methods: The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features.

Results: Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user’s workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert.

Discussion: The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (of 42 total), while the worst had 18. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern.

Conclusions: These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems and certification bodies.

9 comments:

  1. Under principal findings, the authors recommended that the CCHIT include ALL of the features noted in the taxonomy in their future certification criteria. However, one system was scored N/A for the “outpatient encounter opened” trigger because it was an inpatient-only system. Are we comparing apples with apples?

    Under system-by-system performance, the authors noted that the two worst-performing systems had 18 gaps each; the inpatient-only system was one of the two. A review of those 18 gaps shows that they include options that are simply not applicable to that system's environment. Presenting system-by-system performance comparisons without accounting for the context of use is therefore a cause for concern (a small scoring sketch after the comments illustrates the point).

  2. Is it valid to take a taxonomy created at one institution and then apply it as a checklist against other institutions? I am also curious as to when these EHRs were created and certified: CCHIT certification may change over the years to reflect evolving technologies and different purposes (inpatient, emergency, etc.).

  3. What is the definition of a customer? Does this include clinicians, or is it the administrative staff who decided to buy a system for an organization? Would the findings have been different if users such as clinicians had assessed the 42 features?

  4. How is it that they were able to conduct the study but did not indicate which vendor was responsible for which system? That is one of the most valuable pieces of information for the clinical administrator (I am sure it was withheld for legal and marketing reasons, but it would be very useful).

    From a methodological viewpoint, it seems as if this research was conducted on an ad hoc basis. Did the authors have pre-set inclusion criteria before they emailed or phoned the vendors? The selection of the nine vendors seems rather vague to me.

  5. a. It is not clear what methodology was used to analyze the data collected. I wonder whether a structured program or piece of software facilitated the evaluation process.
    b. How did the researchers account for missing data? Does the current approach affect the reliability and validity of the results?
    c. This was more of a summative evaluation, and the justification for using Partners HealthCare System was perhaps not adequate. I therefore feel the link between the purpose of the study and its resulting implications should be viewed with caution: the black-and-white questions (is a feature present or not?) were somewhat obvious, while no consideration was given to the grey areas (such as usability and correct implementation) that deserve greater emphasis in a comprehensive evaluation.
    d. Finally, I had trouble identifying the actual duration of the research!

  6. Where have the authors provided adequate justification that more functionality makes for a better CDS? The taxonomy used to grade the systems describes a list of possible functionalities, but I was unable to see any justification of why and how this taxonomy could be used to differentiate between better and worse systems; for example, are all functionalities equal? (The scoring sketch after the comments makes this judgement explicit with weights.)
    I would also like more information about the respondents and non-respondents to the initial inquiry, as well as the criteria used for the purposive sample.

  7. What is the rationale behind using a taxonomy based on Partners HealthCare System? The authors fail to explain why Partners HealthCare System should be considered the benchmark, such that its taxonomy served as the basis for evaluating other systems.

  8. It might also have been helpful if the authors had accounted for the usability of the features. The authors could have surveyed the users of these systems to find out to what extent these functions were actually used. For example, if a feature was not used, was it because it was too difficult to use or because it was not needed?

  9. 1. How many "best-selling clinical information systems" were used to identify the sample of nine? And why choose nine if they were all "best-selling" systems?
    2. The authors "completed interviews with knowledgeable individuals for nine commercially available products". How many individuals did they interview, and who were they? Why did they not assess the systems by accessing them directly or by reviewing system documentation (functional requirements, user manuals)?
    3. How do the researchers know that having more of the 42 features equates to a better system? Whether a feature is available or not says nothing about other factors (e.g., usability, workflow).

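The scoring concern raised in comments 1 and 6 can be made concrete with a small sketch. The Python below is purely illustrative and not from the paper: the system names, features, feature statuses, and weights are all invented. It contrasts a raw count (every feature weighs equally and an N/A counts as a gap) with an applicability-adjusted score (N/A items excluded from the denominator) and a weighted score (making the "are all features equal?" judgement explicit).

```python
# Hypothetical sketch of the scoring concern in comments 1 and 6.
# All system names, features, statuses, and weights are invented;
# none come from Wright et al. 2009.

# Each system maps feature -> "yes", "no", or "n/a" ("n/a" meaning not
# applicable, e.g. the "outpatient encounter opened" trigger scored
# against an inpatient-only system).
SYSTEMS = {
    "Inpatient-only system": {
        "drug-drug interaction alert": "yes",
        "outpatient encounter opened": "n/a",
        "order set": "yes",
        "info button": "no",
    },
    "Inpatient+outpatient system": {
        "drug-drug interaction alert": "yes",
        "outpatient encounter opened": "yes",
        "order set": "no",
        "info button": "no",
    },
}

# Hypothetical weights: the value judgement comment 6 asks about, made explicit.
WEIGHTS = {
    "drug-drug interaction alert": 3,
    "outpatient encounter opened": 1,
    "order set": 2,
    "info button": 1,
}

def raw_score(features):
    """Share of ALL features supported; an n/a counts as a gap."""
    return sum(v == "yes" for v in features.values()) / len(features)

def adjusted_score(features):
    """Share of APPLICABLE features supported; n/a items are excluded."""
    applicable = [v for v in features.values() if v != "n/a"]
    return sum(v == "yes" for v in applicable) / len(applicable)

def weighted_score(features):
    """Weighted share of applicable features supported."""
    applicable = {f: v for f, v in features.items() if v != "n/a"}
    total = sum(WEIGHTS[f] for f in applicable)
    return sum(WEIGHTS[f] for f, v in applicable.items() if v == "yes") / total

for name, feats in SYSTEMS.items():
    print(f"{name}: raw {raw_score(feats):.2f}, "
          f"adjusted {adjusted_score(feats):.2f}, "
          f"weighted {weighted_score(feats):.2f}")
```

Under the raw count the two hypothetical systems tie at 0.50, but once not-applicable items are excluded (0.67 vs 0.50) or weights are applied (0.83 vs 0.57), the inpatient-only system pulls ahead. The ranking depends entirely on the scoring convention, which is exactly what the commenters are questioning.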