“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published ehealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club” - like sessions founded in February 2003 by Gunther Eysenbach.

Sunday, October 25, 2009

The unintended consequences of computerized provider order entry: findings from a mixed methods exploration.

Ash JS, Sittig DF, Dykstra R, Campbell E, Guappone K. The unintended consequences of computerized provider order entry: Findings from a mixed methods exploration. Int J Med Inform. 2009;78(Suppl 1):S69-76.

Full Text

(Please note that after reading this paper you may also want to read the following papers to gain a better understanding of the results of this research:

1. Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007;14(4):415-423.

2. Campbell EM, Sittig DF, Ash JS, Guappone KP, Dykstra RH. Types of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2006;13(5):547-556.)


Objective: To describe the foci, activities, methods, and results of a 4-year research project identifying the unintended consequences of computerized provider order entry (CPOE).


Methods: Using a mixed methods approach, we identified and categorized into nine types 380 examples of the unintended consequences of CPOE gleaned from fieldwork data and a conference of experts. We then conducted a national survey in the U.S.A. to discover how hospitals with varying levels of infusion, a measure of CPOE sophistication, recognize and deal with unintended consequences. The research team, with assistance from experts, identified strategies for managing the nine types of unintended adverse consequences and developed and disseminated tools for CPOE implementers to help in addressing these consequences.


Results: Hospitals reported that levels of infusion are quite high and that these types of unintended consequences are common. Strategies for avoiding or managing the unintended consequences are similar to best practices for CPOE success published in the literature.


Conclusion: Development of a taxonomy of types of unintended adverse consequences of CPOE using qualitative methods allowed us to craft a national survey and discover how widespread these consequences are. Using mixed methods, we were able to structure an approach for addressing the skillful management of unintended consequences as well.


  1. It is interesting that this paper is an extension of the previous studies described in the other papers you attached. I did not see a great difference from those studies despite the added mixed-methods approach. The questions or comments I would like to see addressed are:

    1) Why did the questionnaire not address the usability and human factor issues around the use of CPOE that may have contributed to the unintended consequences?

    2) Change management (process and policy changes) and implementation planning also affect the outcome of CPOE. These are also not included in the questions.

    See the response to a paper on the impact of CPOE on mortality rates, to which Ash also responded.

    Ammenwerth E, Talmon J, Ash JS, Bates DW, Beuscart-Zephir MC, Duhamel A, Elkin PL, Gardner RM, Geissbuhler A. Impact of CPOE on mortality rates--contradictory findings, important messages. Methods Inf Med. 2006; 45(6):586-93.

  2. One of the challenges of appraising this paper on its own was that so many other papers have been published that likely provide further details. On arriving at the blog, I see that two more papers have been added as supplements to this one and may clear up some of the concerns.

    For now, here are some pieces that raised questions for me:
    1. In the data analysis section, the authors describe developing a schema for analyzing transcripts, then abandoning it and declaring the use of grounded theory. Setting aside that qualitative researchers would consider this strategy unorthodox, grounded theory as an analytical approach presumes no preconceived ideas or theories about the data - how did the authors keep the schema from influencing the analysis?

    2. How was 'reputation for excellence' assessed in choosing organizations?

    3. For Table 1: Is this a guideline or a list of questions asked verbatim? If the questions were asked verbatim, could they have been constructed better? For instance, Question 1 has three questions collapsed into one. Asking each question independently would ensure answers are obtained for all of them.

  3. In addition to the comments above, I'm a little confused about the "anticipation survey" they conducted. The authors had already created their scheme of UCs, but then asked clinicians to confirm it? I'm not sure the questions listed in Table 3 address this issue, and didn't they already interview clinicians about this earlier in the study during their site visits?

    I'm not entirely sure about the validity of their sampling/interview method here either (why only clinicians, why only community hospitals, why n = 83).

  4. I actually found this paper quite useful from a project- and change management point of view. However, some of the questions that came to my mind while reading were:

    - Did the UCs differ across hospitals using different CPOE systems? In other words, are the UCs systemic in nature, or an artifact of the specific system in question?

    - The authors mention on p. S73 the different informants of the UCs survey. How accurate or valid are the survey results? Answers from clinicians alone would differ greatly from answers from informaticians alone.

  5. One big gap in the study is that the organizations were selected based on 'excellence' rather than their use of CPOEs with comparable features. I am not sure whether this is a blessing in disguise, because collecting the UCs of different CPOEs with different features may simply yield a wide range of UCs...which the authors categorize in this study. However, I would still ask: what strategies did the authors take to ensure one of the following:
    (a) the CPOEs have similar features, or
    (b) the CPOEs have vastly different features?

    On the flip side, it is important to tie in human-factors issues here, mainly because the authors chose to study different CPOEs, which could reflect vastly different levels of human-factors consideration in their design. So my next question is: what are the possible consequences of selecting organizations that use different types of CPOEs when determining the types of UCs?

    What I'm pointing at is that there are numerous variables in this study. Had the CPOE product been fixed across all the organizations, perhaps the results would have been different?

  6. I am not sure the explanations of the mixed-methods approach, and of the research strategies for both the grounded theory and survey components, were sufficient and clear.

     For instance, regarding the grounded theory approach, the explanation provided did not identify the key points that led the authors to create codes, from there to build categories and concepts, and finally to generate new theories from those categories. Their explanation was superficial and incomplete. I believe the authors should have provided more clarification on these areas in the analysis and discussion sections of their paper.

     Also, as other colleagues mentioned, I believe the “juxtaposition error” that contributed to unintended consequences is more an issue of usability, human factors, and flawed system design, on which the authors did not comment.

     Furthermore, the systems were mostly locally developed, and the authors did not comment on the possible effect of this variation among systems on the unintended consequences.

  7. It is not clear in the article how the participating organizations were chosen. Who decides the "excellence" of organizations? What criteria deem an organization "excellent"? Furthermore, the authors mention that they sought sites with personnel who would be "willing" to describe their experiences and be observed during the order entry process. It is thus unclear to me whether the selected organizations were chosen based on the vague "excellence" criteria, or on the basis of their sheer "willingness" to participate.

  8. Several sections refer the reader to other papers published by the same research team while this research was being undertaken. I do not understand the rationale for not including summaries of the other UCs in this paper.
    The authors mention (page S72) that a number of hospitals had policies against participating in surveys. It would be interesting to know exactly what percentage of hospitals had such a policy, as this might affect the strength of their results.
    The discussion of managing UCs using a mixed-methods approach was very limited and needs further explanation. Agree/disagree?

  9. I am quite confused by this paper. What research methods and results are reported here? Why did it take four years? The taxonomy of nine major types of adverse UCs, the national survey of unintended consequences, the survey of infusion questions, and the anticipation survey are all reported in other papers. What happened to the interviews with staff at 176 hospitals? Were the data analyzed by grounded theory?
    The authors noted the difficulty they had summarizing the 4-year study. It was difficult to assess their conclusions based on what is provided in this paper.