“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published ehealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club” - like sessions founded in February 2003 by Gunther Eysenbach.

Tuesday, September 29, 2009

The Relationship between Electronic Health Record Use and Quality of Care over Time

Zhou L, Soran CS, Jenter CA, Volk LA, Orav EJ, Bates DW, Simon SR. Relationship between EHR use and Quality of Care over Time. J Am Med Inform Assoc. Jul-Aug 2009;16:457-464.

CATCH-IT Draft Report

CATCH-IT Final Report

Full-text article: http://www.jamia.org.myaccess.library.utoronto.ca/cgi/content/full/16/4/457
(*Note: You must be logged in to the U of T system)


Electronic health records (EHRs) have the potential to advance the quality of care, but recent studies have shown mixed results. We undertook the present study to examine the extent of EHR usage and how the quality of care delivered in ambulatory care practices varied according to duration of EHR availability.

We linked two data sources: a statewide survey of physicians' adoption and use of EHR and claims data reflecting quality of care as indicated by physicians' performance on widely used quality measures. Using four years of measurement, we combined 18 quality measures into 6 clinical condition categories. While the survey of physicians was cross-sectional, respondents indicated the year in which they adopted EHR. In an analysis accounting for duration of EHR use, we examined the relationship between EHR adoption and quality of care.

The percent of physicians reporting adoption of EHR and availability of EHR core functions more than doubled between 2000 and 2005. Among EHR users in 2005, the average duration of EHR use was 4.8 years. For all 6 clinical conditions, there was no difference in performance between EHR users and non-users. In addition, for these 6 clinical conditions, there was no consistent pattern between length of time using an EHR and physicians' performance on quality measures in both bivariate and multivariate analyses.

In this cross-sectional study, we found no association between duration of using an EHR and performance with respect to quality of care, although power was limited. Intensifying the use of key EHR features, such as clinical decision support, may be needed to realize quality improvement from EHRs. Future studies should examine the relationship between the extent to which physicians use key EHR functions and their performance on quality measures over time.


  1. The authors conducted a longitudinal analysis of EHR adoption duration (years) vs. HEDIS measures, but wouldn't it be worthwhile to perform a stratified analysis that separated physicians by their actual usage of the EHR or its features (most, some, none) vs. HEDIS scores?
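    The stratified analysis proposed here could be sketched roughly as follows. All physician data below are invented for illustration, and the "most/some/none" usage levels are assumed to come from a hypothetical survey item; the point is simply to show a per-stratum comparison of mean HEDIS-style scores rather than a duration-only analysis.

```python
# Hypothetical sketch: stratify physicians by self-reported EHR feature
# usage and compare mean HEDIS-style scores per stratum. Invented data.
from collections import defaultdict
from statistics import mean

# (usage_level, hedis_score) pairs -- hypothetical survey + claims linkage
physicians = [
    ("most", 0.82), ("most", 0.78), ("most", 0.85),
    ("some", 0.74), ("some", 0.79), ("some", 0.71),
    ("none", 0.76), ("none", 0.73), ("none", 0.77),
]

# Group scores by usage stratum
strata = defaultdict(list)
for usage, score in physicians:
    strata[usage].append(score)

# Mean quality score within each stratum
stratum_means = {usage: mean(scores) for usage, scores in strata.items()}
for usage in ("most", "some", "none"):
    print(f"{usage:>4}: mean HEDIS = {stratum_means[usage]:.3f}")
```

    A real analysis would of course add a significance test across strata and adjust for practice characteristics, but even this simple grouping asks a different question than "years since adoption."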

  2. In considering this study, I have concerns and questions about the nature of the HEDIS data. It is unclear to me how HEDIS data are collected for a group of physicians: is reporting voluntary, or are scores calculated for all physicians, or only for a particular subset? I am particularly interested in this because, depending on how HEDIS scores are collected, they may bias the outcomes of this study. To illustrate my concern: are physicians motivated to inflate their HEDIS scores, and can they? If so, we may not see an improvement in their HEDIS scores with the use of an EHR, as they may already be maximizing the "quality of care" they can provide.

  3. The comparison of quality of care between EMR users and non-users was not adjusted for patient characteristics, which could well be associated with quality of care.

  4. The methodology is described and executed in a reasonably clear manner. The study generates further questions, such as Cyn mentions in the post above, around drilling down into the data to find out more, e.g., EHR level of usage vs. HEDIS scores, duration of use (the group using EHRs for 10 years is small) vs. HEDIS scores, etc.

  5. I agree with Arun. It is unclear how HEDIS scores were determined, given that the HEDIS score was used as the measure of quality. Was there any relationship between HEDIS scores and physician payment from the insurers? Were HEDIS scores determined from all of the physicians' patients? How do HEDIS scores determine quality? (E.g., how was quality of asthma care determined, and what was considered an appropriate asthma medication for children?)

  6. I am puzzled about the use of the term "adoption" in this study. The researchers seem to equate the term "adoption" with "use" and "availability" of the EHR system. The terms are used interchangeably and there is no attempt at defining what they mean by adoption.

    The system could be available and it could be used but not fully adopted. Did anyone else have problems with this lack of clarity? I find it relevant because they are trying to tie adoption to quality of care but I am not sure what level of adoption they are referring to. Adoption goes beyond use and has a number of dimensions. The adoption of a solution includes knowledge of the system and an understanding of the problems it will solve as well as integration of the system into workflow processes on an ongoing sustainable basis so that benefits can be realized.

  7. This comment has been removed by the author.

  8. Under data analysis, there is an assumption that the availability of a feature when the practice first began using an EHR correlates with the extent of its usage.

    This assumption can affect the validity of the reported percentage of physicians using EHRs, since the data collection needed to measure quality indicators can take a variable amount of time to populate, so the use of some features may not coincide with adoption. Also, for large practices, adoption of an EHR may not be immediate, which again could limit the amount of data entered and make the use of features for performance measurement impossible in year 1 or 2.

    This assumption is very questionable and would impact the validity of the statistical results.

  9. I agree with Plumaletta's and CCripps's comments. The authors do not report the level of EHR usage, but rather segment usage by how long the EHR has been implemented within a given practice. Thus, if physician 'A' has been using the EHR 10% of the time for 5 years, while physician 'B' has been using the EHR 100% of the time for only 1 year, this would skew the results.

    The authors also, understandably, assume that functionality in an EHR results in usage. I am a bit more skeptical. For example, my MS Excel software has many uses, though I only tend to use it for rather basic functions.

    One way to counter this might be to develop an EHR usage measure that provides a standardized method for reporting on EHR usage. I wonder if the results would change if such a measure were added to the regression equation.
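    Adding a usage covariate to the regression could be sketched as below. Everything here is invented for illustration: the data are constructed so that a standardized "usage intensity" (0 to 1) fully explains the quality scores while duration of availability explains nothing, which is exactly the situation this comment worries the published analysis would miss. The fit uses a hand-rolled ordinary-least-squares solve, not the authors' actual model.

```python
# Hypothetical sketch: duration-vs-quality regression with an added
# "usage intensity" covariate. All numbers are invented.

def ols(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    # Forward elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, p):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, p):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    # Back substitution
    b = [0.0] * p
    for r in range(p - 1, -1, -1):
        b[r] = (Xty[r] - sum(XtX[r][c] * b[c] for c in range(r + 1, p))) / XtX[r][r]
    return b

# Columns: intercept, years of EHR availability, usage intensity (0..1).
# Scores are constructed as 0.65 + 0.15 * usage, i.e., duration adds nothing.
X = [[1, 1, 0.2], [1, 2, 0.9], [1, 5, 0.1], [1, 5, 0.8],
     [1, 8, 0.3], [1, 8, 1.0], [1, 10, 0.5], [1, 10, 0.9]]
y = [0.680, 0.785, 0.665, 0.770, 0.695, 0.800, 0.725, 0.785]

intercept, b_duration, b_usage = ols(X, y)
print(f"duration coefficient: {b_duration:+.4f}")  # ~0: availability alone is inert
print(f"usage coefficient:    {b_usage:+.4f}")     # ~+0.15: intensity matters
```

    In this toy setting, a duration-only regression would find nothing, while the usage covariate carries the whole effect, which is the commenter's point about what a standardized usage measure could reveal.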

  10. The thought that came to mind before I finished reading the paper was, "wouldn't it be cool if they compared quality of care versus the use of a specific feature in the EHR?" Well, they did. I thought that was brilliant.

    However, I'm a bit skeptical about the choice of HEDIS. From Table 2, I failed to see many "quality of care" issues that a CDSS implementation or an order-entry utility could help with. For example, what does LDL-C screening in diabetes care have to do with quality of care as it relates to the use of ANY feature of an EHR? I fail to draw a connection.

    Having said that, the question I would like to raise is: what would be a better "quality of care" determinant when evaluating whether EHR use has had any impact on the quality of care?

  11. This seemed to be a rather high-level study, investigating only the overall use of EHRs to draw conclusions about their impact on quality of care, without accurate measures in the first place.
    As a cross-sectional study is a kind of observational study, I am skeptical about the actual methodology implemented here. Physicians self-reporting their usage characteristics, coupled with several other actual and potential limitations, makes the conclusions less strong.

    In terms of the survey design, it was mentioned that the core EHR functions listed in Table 1 were defined by the Institute of Medicine panel, to which the researchers added two more: electronic communication and connectivity. I wonder if the corresponding feature list was created in collaboration with the physicians.

    It would be interesting to discuss in class some measures that would have been more appropriate for assessing the impact of EHR use on the quality of patient care.

  12. To pick up on the points raised about HEDIS, it would be interesting to discuss briefly in class what other "quality of care" databases exist, both in the US and in Canada or even elsewhere, which could be used for this kind of research. Perhaps Daniel can dig a bit deeper here. I am sure there are some databases from the Canadian Institute for Health Information (CIHI) or other institutions.

  13. Apparently there is already some controversy and criticism surrounding the use of HEDIS, with arguments that it does not account for many important aspects of health care quality, is not proven to be associated with better health outcomes, and may cause harm to patients when health care providers attempt to improve their HEDIS measures.

  14. My concern about this study is the lack of a formal methodology: the measures used rely on physicians self-reporting their HEDIS scores, which can have severe implications for the conclusions of the study. How easy is it for physicians to report "modified" scores?

  15. I think by and large this is a well-conducted and well-reported study by some leaders in the field, given how difficult it is to assess health care quality. The question lingering in my mind is whether there aren't other ways of analyzing the HEDIS data; e.g., the authors do not seem to make use of the longitudinal nature of the HEDIS data. I would be interested to see if there are any relationships between EHR adoption/use and HEDIS trend data (e.g., the "slope" of the quality scores over time for a particular physician). I am sure there must be some practical reasons that prevented the authors from doing this kind of analysis, but they are not reported.
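    The trend analysis suggested here, a per-physician slope of quality scores over the four measurement years, could be sketched as follows. The two physicians and their scores are invented; a real analysis would then relate these slopes to EHR adoption status rather than just printing them.

```python
# Hypothetical sketch: per-physician least-squares "slope" of HEDIS-style
# quality scores across the study's four measurement years. Invented data.

def slope(years, scores):
    """Least-squares slope of scores regressed on years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(scores) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, scores))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = [2002, 2003, 2004, 2005]
physician_scores = {
    "A (hypothetical EHR user)": [0.70, 0.73, 0.76, 0.79],  # steadily improving
    "B (hypothetical non-user)": [0.75, 0.75, 0.74, 0.76],  # essentially flat
}
for who, scores in physician_scores.items():
    print(f"{who}: slope = {slope(years, scores):+.3f} per year")
```

    Comparing the distribution of these slopes between adopters and non-adopters would use the longitudinal structure of the HEDIS data that the cross-sectional comparison leaves on the table.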

  16. @Lambchops - HEDIS scores are not self-reported; they are, in my understanding, based on claims data (which of course raises some other problems, such as how accurate these data are and how well they reflect what has really been done).

    @Andrew - do you have a reference for this?

  17. This comment has been removed by the author.

  18. The authors could have done a secondary analysis, looking at the association between quality of care and the usage of specific EMR features, particularly the decision support features within the EMR. For example, they could have compared the quality of care between those who used the alert system and those who did not.

  19. Thanks, Daniel, for your presentation. The discussion about what quality is and how we measure it is a significant yet difficult one. Is quality based on clinical outcomes? Is it based on user satisfaction (patient and provider)? Even if we can answer those questions, we then have to think about how and where we measure clinical quality: at the process level, the intermediate-measure level, or the outcome level?

    Once we decide that, we have to worry about the impacts of measurement: do we see gaming or cream-skimming/cherry-picking by providers? And by making quality measures public, how does that affect wait times if everyone wants to go to the clinician or hospital with the best rankings?

    At the end of this paper, I would have liked the authors to qualify their statements by saying that length of EHR use is not associated with improvements in HEDIS scores, rather than making a blanket statement about quality.

  20. This comment has been removed by the author.