“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.


Monday, November 23, 2009

Final CATCH-IT Report: User-designed information tools to support communication and care coordination in a trauma hospital

Gurses AP, Xiao Y, Hu P. User-designed information tools to support communication and care coordination in a trauma hospital. J Biomed Inform. 2009 Aug;42(4):667-77.

Original Post · Full-Text · Abstract Only · Slideshow

Introduction
This report provides an appraisal of the study presented by Gurses and colleagues (2009) entitled User-designed information tools to support communication and care coordination in a trauma hospital. A nurse coordinator’s clipboard, a paper-based information tool compiled from a variety of information sources, is examined using the qualitative methods of shadowing, interviews, photographs, and samples of the clipboards.

The authors determine that clinicians will create their own tools and that it is necessary to describe and understand the characteristics of the information tools used in practice. They conclude that uncovering strategies developed to ‘work around’ disparate systems may identify unrecognized needs not taken into account by system designers.

Objectives
The aim of the study was to describe the design characteristics and use of a clinician-designed information tool constructed to support information transfer and care coordination. Specifically, this was a clipboard that nurse coordinators assembled by compiling data from a variety of sources, such as electronic medical records and physician on-call schedules. They manually blocked out non-essential information, cut and pasted print-outs, and re-assembled data onto one clipboard.

Methods
Six nurse coordinators in an urban, academic trauma hospital agreed to participate in the study. Qualitative methods included shadowing plus recorded voice communication, as well as semi-structured interviews, photographs, and samples of clipboards from six consecutive shifts. Content analysis was conducted on observation transcripts, interview transcripts, photographs, and samples of clipboards. Ethnographic methods were initially stated as the analytical approach. Later, a grounded theory approach was identified specifically for analyzing the clipboards. No paradigm framework or philosophical assumptions were described.


Results

A paper-based tool was assembled by the nurse coordinators, drawing from a variety of information sources, some of them electronic. This process was outlined as taking data from various sources and (1) selecting and formatting, (2) reducing, (3) bundling, and (4) annotating it. On average, the tool took 41 minutes to assemble and was compiled twice each day. It was assembled in an effort to increase (1) the compatibility of information, (2) rapid access to information, and (3) rapid processing of information.
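Purely as an illustration, the four-step assembly process can be read as a small data pipeline. The sketch below is this report's own model of that process, not the authors' work; all source names, fields, and data are hypothetical stand-ins for a manual, paper-based activity.

# Illustrative model of the clipboard assembly described in the study:
# select/format -> reduce -> bundle -> annotate. Everything here is a
# hypothetical stand-in for a manual, paper-based process.
def select_and_format(sources):
    # Step 1: pull the key items out of each information source.
    return [item for source in sources for item in source["items"] if item["key"]]

def reduce_items(items):
    # Step 2: drop non-essential detail (the manual whitening-out).
    return [{"label": i["label"], "value": i["value"]} for i in items]

def bundle(items):
    # Step 3: group related items together (the manual cut-and-tape).
    bundles = {}
    for item in items:
        bundles.setdefault(item["label"], []).append(item["value"])
    return bundles

def annotate(bundles, notes):
    # Step 4: overlay shorthand notes (highlighters and symbols on paper).
    return {label: {"values": values, "note": notes.get(label)}
            for label, values in bundles.items()}

# Hypothetical example: one census entry and one on-call schedule entry.
sources = [
    {"name": "unit_census", "items": [
        {"key": True, "label": "bed_12", "value": "post-op, NPO"},
        {"key": False, "label": "billing_code", "value": "XYZ"}]},
    {"name": "on_call_schedule", "items": [
        {"key": True, "label": "surgery", "value": "Dr. S."}]},
]
clipboard = annotate(bundle(reduce_items(select_and_format(sources))),
                     notes={"bed_12": "re-check at 14:00"})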

These results are comparable to other studies offering descriptions of information tools created by clinicians outside of electronic systems to assist their work (Varpio 2006, Gorman 2000, Saleem 2009).

Limitations
The authors do not take the opportunity to apprise the reader of their philosophic assumptions. Qualitative research is value-laden, and researchers can explicitly state those values by declaring their worldviews or philosophic assumptions, providing the reader with a transparency helpful when appraising their work (Creswell 2007; Guba & Lincoln 2005). As well, the authors use two analytic approaches, ethnographic methods and grounded theory, but offer no explanation of this strategy for the reader.

Although six nurse coordinators gave their permission to participate in the study, one did not take part in the interviews, and no explanation was provided for this exclusion. Likewise, the description of the analysis did not indicate whether data saturation was attained. Limited information is offered about the interviews, the photographs, and the clipboards in comparison to the reporting on shadowing. For instance, the interviews lacked descriptions of their length, the use of field notes to supplement them, and the number of researchers involved in conducting them.

Activities such as the pre-testing of the observation instrument are summarized but not explicitly identified as a pre-test for the reader. Similarly, the use of more than one method of data collection suggests that the authors may have been using triangulation to substantiate findings; however, this is not clearly labeled, and no clarification is given on how it was achieved. Respondent validation could also have been considered to add validity and credibility to the findings (Guba & Lincoln, 2005).

Discussion
This study offers insight into design characteristics and contextual information about a clinician-developed information tool. Although one group of clinicians is examined in one hospital, it contributes to a growing body of work that identifies the widespread use of user-designed tools (Halbesleben 2008, Varpio 2006, Gorman 2000, Saleem 2009). It describes how processes to design systems may not meet the needs of practitioners, and offers a rich illustration of this captured with methods such as shadowing and photographs. By agreeing on a definition of an information transfer event, the authors were able to identify that the clipboard under study was used more often than other information sources by the nurse coordinators, and concluded that features of the clipboard such as portability and rapid accessibility outweighed accuracy.

The findings demonstrate aspects of user-interface design that can be overlooked. The authors describe the complexity of work in clinical settings and suggest that it may be impossible to design systems that accurately meet clinicians’ needs at all times. They suggest that design processes may not be able to demarcate a clear beginning or end, since needs are constantly evolving, and advise that providing appropriate support for clinicians in complex clinical work environments requires moving flexibility and adaptability to the forefront of design. Hybrid models of electronic and paper-based information tools may be the most effective in accommodating the needs of practitioners in clinical settings.

Questions for Authors
  • Do the authors have a paradigm framework or worldview that could be declared?
  • Why were the analytical approaches of ethnographic methods and grounded theory mixed?
  • What were the reasons for not including an interview with one nurse coordinator?
  • Would it have been possible to use respondent validation with the nurse coordinators?
  • Which of the researchers conducted the interviews? Were field notes used to supplement the interviews?
  • Was data saturation reached?

References
Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. Thousand Oaks, CA: Sage Publications; 2007.

Gorman P, Ash J, Lavelle M, Lyman J, Delcambre L, Maier D. Bundles in the wild: managing information to solve problems and maintain situation awareness. Libr Trends 2000;49(2):266-289.

Guba EG, Lincoln YS. Paradigmatic controversies, contradictions, and emerging confluences. In: Denzin NK, Lincoln YS, eds. The Sage Handbook of Qualitative Research. 3rd ed. Thousand Oaks, CA: Sage Publications; 2005:191-215.

Gurses AP, Xiao Y, Hu P. User-designed information tools to support communication and care coordination in a trauma hospital. J Biomed Inform 2009;42(4):667-677.

Halbesleben JR, Wakefield DS, Wakefield BJ. Work-arounds in health care settings: literature review and research agenda. Health Care Manage Rev 2008;33(1):2-12.

Saleem JJ, Russ AL, Justice CF, Hagg H, Ebright PR, Woodbridge PA, Doebbeling BN. Exploring the persistence of paper with the electronic health record. Int J Med Inform 2009;78:618-628.

Varpio L, Schryer CF, Lehoux P, Lingard L. Working off the record: physicians’ and nurses’ transformations of electronic patient record-based patient information. Acad Med 2006;81(10):S35-S39.


Saturday, November 7, 2009

CATCH-IT Draft: Clinical Decision Support capabilities of Commercially-available Clinical Information Systems

Wright A, Sittig DF, Ash JS, Sharma S, Pang JE, Middleton B. (2009). Clinical Decision Support capabilities of Commercially-available Clinical Information Systems. Journal of the American Medical Informatics Association, 16(5), 637-644.

Background
Recent studies have reported that CDS applications built in-house produce the best results. However, little research has examined the CDS capabilities of commercially available clinical information systems (CIS). The cited paper aims to fill this gap by evaluating the CDS capabilities of nine commercially available, CCHIT-certified EHR systems using a 42-element functional taxonomy. The evaluations are based on information collected from the vendors and customers of the EHR systems. The study finds that while capabilities for ‘triggers’ in CDS are well covered among the systems, many capabilities for ‘offered choices’ are not present. The results of the study are presented pseudonymously to respect the privacy of the vendors and customers.

This report is based on an evaluation of the study in the CATCH-IT Journal Club. It summarizes the key points raised about the methodological issues of the study during the CATCH-IT analysis. These issues are discussed below, and it is expected that consideration of this evaluation will help enhance the quality of research performed by the research community.

Methodological Issues
Several methodological issues with the original study can be highlighted. These issues are potential concerns that may undermine the validity of the research findings. The following sections discuss them in more detail.

Use of CCHIT Certified EHR Systems
The authors indicated that CCHIT certification was used as a baseline for the systems selected in the study. By establishing such a baseline, the authors ensured that the selected systems meet a particular quality standard and have comparable features.

An investigation of the CCHIT certification requirements indicates that the certification criteria are continuously evolving, with additional requirements added each year. CCHIT uses a matrix of requirements, with specific requirements relating to a system’s domain of use (such as ambulatory care and outpatient care) and its aspect of use (such as EMR storage and CDS). While CCHIT certification was used as a baseline for selecting the systems, the authors do not discuss in what year the selected systems were certified, or whether their certification has been renewed as the CCHIT requirements evolved. In addition, it is unclear what the authors did to ensure that the features in the selected taxonomy align with the CDS-specific requirements of CCHIT; the sketch below illustrates one form such an alignment check could take.
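As an illustration only: the criterion identifiers, element names, and mapping below are entirely hypothetical, but they show the kind of alignment audit between taxonomy elements and CCHIT’s CDS-specific criteria that this report calls for.

# Hypothetical sketch of auditing taxonomy elements against CCHIT's
# CDS-specific certification criteria. The criterion IDs, element names,
# and mapping are invented for illustration; they are not CCHIT's actual
# criteria or the study's actual taxonomy.
cchit_cds_criteria = {
    "CDS-01": "drug-drug interaction checking",
    "CDS-02": "drug-allergy checking",
}

# Map each taxonomy element to the CCHIT criterion it corresponds to,
# or to None when no CCHIT analogue exists.
taxonomy_to_cchit = {
    "medication-ordered trigger": "CDS-01",
    "allergy-list input element": "CDS-02",
    "override-with-reason choice": None,
}

uncovered = [e for e, c in taxonomy_to_cchit.items() if c is None]
print("Taxonomy elements without a CCHIT analogue:", uncovered)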

System Selection Procedure
The authors indicated in the methods that a preliminary set of CCHIT-certified EHR systems was identified based on figures from KLAS and HIMSS Analytics. The vendors that developed these systems and their customers were then contacted, on the basis of which a sample of nine systems was selected for the study.

The immediate concerns regarding the selection procedure are that it is unclear how many systems were originally identified, what the nature of these communications was (such as the questions asked and the type of information requested), and what the criteria were for short-listing the selected EHRs. Without such details, the study cannot demonstrate to its audience that it followed an effective selection procedure free of external influence, and that no system was included in or excluded from the study due to potential bias.

Taxonomy Selection
To determine the availability of a given CDS capability in the selected systems, the authors used a self-developed functional taxonomy that organizes common CDS capabilities along four axes: triggers, input data elements, interventions, and offered choices. The authors mention that the taxonomy was developed based on research at the Partners HealthCare System, emphasizing that no other functional taxonomy was available for use in this research.

The taxonomy was developed from the numerous clinical rules in use at the Partners HealthCare System in Boston. While Partners is evidently a large healthcare system involving a blend of healthcare provider types, it must be noted that the taxonomy has not been validated by employing it in other healthcare organizations outside of Partners. A sketch of its four-axis structure follows.
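To make the structure of the taxonomy concrete, the sketch below shows one way its four axes could be represented. The axis names are taken from the study; the individual elements are hypothetical placeholders, since the actual 42 elements are not reproduced in this report.

# Sketch of the four-axis functional taxonomy as a simple data structure.
# Axis names come from the study; the example elements under each axis
# are hypothetical placeholders, not the actual 42 elements.
taxonomy = {
    "triggers": ["medication ordered", "lab result stored"],
    "input_data_elements": ["lab results", "allergy list"],
    "interventions": ["alert", "order facilitation"],
    "offered_choices": ["cancel order", "override with reason"],
}

# In the actual study, the four axes together contain 42 elements.
element_count = sum(len(elements) for elements in taxonomy.values())
print("Elements in this toy taxonomy:", element_count)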

Because of the concerns raised by using an unvalidated, self-developed taxonomy, an investigative approach was taken to determine how well the taxonomy has been received by the research community. Findings suggest that even though the taxonomy research was published in 2007, to date only five journal articles reference that study. Only one of these articles was not authored by any of the researchers involved in the taxonomy’s development, and it makes no specific reference to the taxonomy or its development. This investigation therefore failed to identify any neutral opinion about the taxonomy, raising the concern that the authors relied on a self-developed taxonomy that lacks apparent acceptance in the research community. This in turn raises serious concerns about the findings of the study, since the evaluation is wholly based on the CDS capabilities identified in the taxonomy.

Data Collection Procedure
In this study, the vendors and customers of the nine systems were contacted and interviewed by three of the authors. The outcomes of these interviews were used to evaluate the CDS capabilities of each system against the 42-element taxonomy. The authors reported that if there was any doubt about the availability of a particular feature in a system, they contacted other customers, read product manuals, and conducted hands-on evaluations to determine the feature’s availability.

The paper suggests that three of the authors were involved in the data collection procedure, but it does not specify what data collection mechanisms were used. Data collection procedures can introduce poor-quality data into a study, so researchers must demonstrate the validity of their procedures. For example, it is not known whether the authors used one-on-one interviews or panel interviews, how many interviews were conducted with the same interviewee, whether the interviews were open-ended or closed-ended, how many questions were involved, what the follow-up procedure was, and how the authors prepared for the interviews. The audience of the research can easily raise questions about these procedures and argue about their limitations; a well-written report will typically avoid letting such concerns settle in the mind of its audience.

Apart from the concerns about the data collection procedure, there are concerns about the validity of the collected data. It is unclear whom the researchers spoke with in each of the interviews with vendors or customers. Not all members of a vendor organization can answer the same question about the availability of a particular feature, and a vendor’s answers about a feature’s availability may be biased. At the same time, asking customers about the availability of a feature raises concerns about the customers’ knowledge of the product itself. This leads to the questions of (1) how the authors ensured that what the vendors and customers said was actually valid, (2) how the collected data were validated, (3) what raised doubts in the researchers’ minds and prompted further investigation, and (4) what determined whether a feature was actually available.

Results Interpretation
The results of the study are presented in tabular form for each axis of the taxonomy, evaluating the systems against the features in that axis. To respect the software vendors’ right to privacy, the results are presented pseudonymously, identifying each system with a number. In their evaluation, the authors used a binary-style rating in which the result is either yes (available) or no (unavailable). Since both inpatient and outpatient systems were included, criteria inapplicable to a system were marked N/A (not applicable). The final result is presented as a count of unavailable features for each system on each axis; in the authors’ view, the system with the fewest unavailable features is the best system.
Although the authors mention this as a limitation of their study, the binary-style evaluation does not match the way the data for the study were collected and used. The data collected were qualitative, yet they were reduced to yes-or-no answers. Similar to the concern about the validity of the collected data, this raises serious questions about the validity of the authors’ evaluation. For example, even if a feature is available, what did the authors do to evaluate how well that feature was implemented by the software developers? How complete is the feature? How usable is it? How applicable is it to a particular setting?

Representing the final result by tallying the number of unavailable features is of limited use. The authors fail to account for the importance, usefulness, and frequency of use of an available feature. For example, a feature may be useful but not frequently used, or a feature may be infrequently used yet very important for the success of a CDS application. This directly affects the final results of the study, where the authors chose the system with the fewest unavailable features as the best system. Since the scoring system used in this research is weak, it can be argued that the results are invalid.
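To make this critique concrete, the following minimal sketch contrasts a binary tally of unavailable features, as the authors describe, with one possible weighted alternative. All system names, features, availability values, and weights are hypothetical; this illustrates the scoring issue and is not the authors’ actual method.

# Contrast a binary tally of unavailable features with a weighted score.
# All data below are hypothetical. Availability is True (yes), False (no),
# or None (N/A, e.g., a criterion inapplicable to inpatient or outpatient use).
systems = {
    "System 1": {"drug-drug alert": True, "order sets": False, "infobutton": None},
    "System 2": {"drug-drug alert": True, "order sets": True, "infobutton": False},
}

def unavailable_count(features):
    # The study's approach: count features marked unavailable, N/A excluded;
    # the system with the fewest unavailable features is ranked best.
    return sum(1 for v in features.values() if v is False)

# Hypothetical importance weights; a fuller scheme would also account for
# usefulness and frequency of use, as argued above.
weights = {"drug-drug alert": 3.0, "order sets": 2.0, "infobutton": 0.5}

def weighted_score(features):
    # A weighted alternative: credit available features by importance.
    return sum(weights[f] for f, v in features.items() if v is True)

for name, features in systems.items():
    print(name, "unavailable:", unavailable_count(features),
          "| weighted score:", weighted_score(features))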

Questions for the authors
1. What led to the linear treatment of the capabilities?
2. What was the reason behind the use of a taxonomy which is not yet well-received in the research community?
3. What were the steps taken to validate the information gathered from the vendors and customers?
4. What was the reasoning behind counting the number of unavailable features rather than available ones? Did you not want to deal with the complexity of working with N/A?
5. Why were both inpatient and outpatient systems with potentially different capabilities selected for the study?

References
1. Wright A, Sittig D F, Ash J S, Sharma S, Pang J E, and Middleton B. Clinical Decision Support capabilities of Commercially-available Clinical Information Systems. Journal of the American Medical Informatics Association 2009; 16(5): 637-644.

2. Partners HealthCare. What is Partners? Accessed via http://www.partners.org/about/about_whatis.html. Accessed on October 20, 2009.

3. Scopus. Scopus Journal Search. Accessed via http://simplelink.library.utoronto.ca/url.cfm/54186. Accessed on October 22, 2009

4. BioMed Experts. Accessed via http://www.biomedexperts.com. Accessed on October 15, 2009.

5. DMICE: People – Students. Department of Medical Informatics & Clinical Epidemiology, Oregon Health & Science University. Accessed via http://www.ohsu.edu/ohsuedu/academic/som/dmice/people/students/index.cfm. Accessed on October 20, 2009

6. Clinical and Quality Analysis, Information Systems. Clinical and Quality Analysis Staff. Accessed via http://www.partners.org/cqa/Staff.htm. Accessed on October 18, 2009.

7. Wright A, Goldberg H, Hongsermeier T, and Middleton B. A Description and Functional Taxonomy of Rule-Based Decision Support Content at a Large Integrated Delivery Network. Journal of the American Medical Informatics Association 2007; 14(4): 489-496.

8. CCHIT. Concise Guide to CCHIT Certification Criteria. Accessed via http://www.cchit.org/sites/all/files/ConciseGuideToCCHIT_CertificationCriteria_May_29_2009.pdf. Accessed on October 10, 2009.

9. Sittig DF, Wright A, Osheroff JA, Middleton B, Teich JM, Ash JS, Campbell E, Bates DW. Grand challenges in clinical decision support. Journal of Biomedical Informatics 2008; 41(2):387-392.

Monday, November 2, 2009

Nov 9: User-designed information tools to support communication and care coordination in a trauma hospital

Gurses AP, Xiao Y, Hu P. User-designed information tools to support communication and care coordination in a trauma hospital. J Biomed Inform. 2009 Aug;42(4):667-77.

Full-Text · Abstract Only · Slideshow · Draft CATCH-IT

BACKGROUND:
In response to inherent inadequacies in health information technologies, clinicians create their own tools for managing their information needs. Little is known about these clinician-designed information tools. With greater appreciation for why clinicians resort to these tools, health information technology designers can develop systems that better meet clinicians' needs and that can also support clinicians in design and use of their own information tools.

OBJECTIVE: To describe the design characteristics and use of a clinician-designed information tool in supporting information transfer and care coordination.

DESIGN: Observations, semi-structured interviews, and photographing were used to collect data. Participants were six nurse coordinators in a high-volume trauma hospital. Content analysis was carried out and interactions with information tools were analyzed.

RESULTS: Nurse coordinators used a paper-based information tool (a nurse coordinator's clipboard) that consisted of the compilation of essential data from disparate information sources. The tool was assembled twice daily through (1) selecting and formatting key data from multiple information systems (such as the unit census and the EHR), (2) data reduction (e.g., by cutting and whitening out non-essential items from the print-outs of computerized information systems), (3) bundling (e.g., organizing pieces of information and taping them to each other), and (4) annotating (e.g., through the use of colored highlighters and shorthand symbols). It took nurse coordinators an average of 41 min to assemble the clipboard. The design goals articulated by nurse coordinators to fit the tool into their tasks included (1) making information compatible with the mobile nature of their work, (2) enabling rapid information access and note-taking under time pressure, and (3) supporting rapid information processing and attention management through the effective use of layout design, shorthand symbols, and color-coding.

CONCLUSIONS: Clinicians design their own information tools based on the existing health information technologies to meet their information needs. The characteristics of these clinician-designed tools provide insights into the "realities" of how clinicians work with health information technologies. The findings suggest an often overlooked role for health information technologies: facilitating user creation of information tools that will best meet their needs.