“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published ehealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club” - like sessions founded in February 2003 by Gunther Eysenbach.

Monday, November 2, 2009

Acceptability of a Personally Controlled Health Record in a Community-Based Setting: Implications for Policy and Design

Weitzman ER, Kaci L, Mandl KD. Acceptability of a Personally Controlled Health Record in a Community-Based Setting: Implications for Policy and Design. J Med Internet Res. 2009 Apr 29;11(2):e14.

Full-text article: http://www.jmir.org/2009/2/e14/HTML
J Med Internet Research Vol 11, No 2 (2009)


Background: Consumer-centered health information systems that address problems related to fragmented health records and disengaged and disempowered patients are needed, as are information systems that support public health monitoring and research. Personally controlled health records (PCHRs) represent one response to these needs. PCHRs are a special class of personal health records (PHRs) distinguished by the extent to which users control record access and contents. Recently launched PCHR platforms include Google Health, Microsoft’s HealthVault, and the Dossia platform, based on Indivo.

Objective: To understand the acceptability, early impacts, policy, and design requirements of PCHRs in a community-based setting.

Methods: Observational and narrative data relating to acceptability, adoption, and use of a personally controlled health record were collected and analyzed within a formative evaluation of a PCHR demonstration. Subjects were affiliates of a managed care organization run by an urban university in the northeastern United States. Data were collected using focus groups, semi-structured individual interviews, and content review of email communications. Subjects included: n = 20 administrators, clinicians, and institutional stakeholders who participated in pre-deployment group or individual interviews; n = 52 community members who participated in usability testing and/or pre-deployment piloting; and n = 250 subjects who participated in the full demonstration of which n = 81 initiated email communications to troubleshoot problems or provide feedback. All data were formatted as narrative text and coded thematically by two independent analysts using a shared rubric of a priori defined major codes. Sub-themes were identified by analysts using an iterative inductive process. Themes were reviewed within and across research activities (ie, focus group, usability testing, email content review) and triangulated to identify patterns.

Results: Low levels of familiarity with PCHRs were found as were high expectations for capabilities of nascent systems. Perceived value for PCHRs was highest around abilities to co-locate, view, update, and share health information with providers. Expectations were lowest for opportunities to participate in research. Early adopters perceived that PCHR benefits outweighed perceived risks, including those related to inadvertent or intentional information disclosure. Barriers and facilitators at institutional, interpersonal, and individual levels were identified. Endorsement of a dynamic platform model PCHR was evidenced by preferences for embedded searching, linking, and messaging capabilities in PCHRs; by high expectations for within-system tailored communications; and by expectation of linkages between self-report and clinical data.

Conclusions: Low levels of awareness/preparedness and high expectations for PCHRs exist as a potentially problematic pairing. Educational and technical assistance for lay users and providers are critical to meet challenges related to: access to PCHRs, especially among older cohorts; workflow demands and resistance to change among providers; inadequate health and technology literacy; clarification of boundaries and responsibility for ensuring accuracy and integrity of health information across distributed data systems; and understanding confidentiality and privacy risks. Continued demonstration and evaluation of PCHRs is essential to advancing their use.


  1. I am curious about the privacy education given to participants and the consent choice(s) offered for the use of their own personal health information in this research study. Although Indivo is not strictly a commercial product -- it is open-source software -- it is not clear what the long-term implications are of Indivo's relationship with the Dossia Consortium. Also, some of the participants shared passwords, which allowed some of the researchers access to their PHI. Privacy education may have addressed this in advance, but as we don't know what initial training in privacy and consent the participants received, I view this as a shortcoming and would like to propose some discussion around this in class.

  2. Hi all, I had posted this abstract back in September without realizing that I should have waited until a week before the presentation. I have therefore republished it so that it appears along with the other articles being presented in November.

  3. My concern about this study is how representative the sample is of the general population. According to the article, all the participants in the study had training in medicine and/or health administration and were highly proficient in technology. Thus the results may not be applicable to the general population.

    Another concern is that part of the objective of the study was to understand the acceptability of PCHRs—the fact that the participants were all volunteers may suggest their inherent willingness to accept PCHRs as part of a health care model, causing the results of the study to be biased towards higher degrees of acceptance.

  4. Could the authors have provided a figure or table with an overview of the 3 research activities, summarizing key information such as activity, number of participants, and characteristics of participants?

  5. As a follow-up to Cripps' comments on the privacy education given to patients prior to consent, it would be interesting to find out from the authors whether the IRB approval specified how to dispose of the research data, given that some participants shared nonclinical, identifying information, including passwords, in email exchanges with project staff (beyond HIPAA approval?). Why did participants include non-clinical information? Was there a visible privacy policy on the PCHR application?
    This would be instrumental in understanding participants' concerns around privacy (although in Table 2 the authors state that, to offset these issues, they had stringent data security measures such as storage behind firewalls, individual record encryption, and a certificate authentication system). Is the individual record encryption applied both ways (patient/PCHR)?

  6. After having read the paper I would have liked more information as to who actually constituted the sample and how the sample was chosen.

    In addition, I am still unclear as to what the nature of the intervention was.

    Finally, I am surprised that the issue of medico-legal concerns about who owns the records was not raised by the clinicians who were interviewed. Given the importance of the clinical record in medico-legal cases, I would have liked to understand the intervention better.

  7. I liked this study in that I found it useful as a health informatician. However, I was somewhat disappointed that there was not much information about the study participants. Moreover, the study included 3 disparate PCHR systems. What was the utilization rate of each system and did this have an effect on the qualitative findings?

    I would also like some more quantitative evidence to back up Table 1. For example, they state that uncertainty about appropriate and safe read/edit access policies was evident among young adults/students but do not state any quantitative backing. Was this evident amongst all young adults and students that responded?

    I can also understand where Arun is coming from when he states he is unclear about the nature of the intervention.

  8. This study seems to want to do too much. The objectives listed are: to determine assumptions, facilitators, and barriers to adoption. The authors also want to determine policy and design implications. That's a tall order for one article.

    Perhaps it would serve the authors well to focus on one particular area rather than diffuse their efforts.

  9. My biggest concern with the study is that it is not clear how the researchers recruited the participants - what process was involved in recruitment, who was chosen, who was not chosen, etc.

  10. It is not clear what the sampling technique was or how the authors selected their sample. The methodological approach was also unclear: the authors did not state which approach they used - was it ethnography or grounded theory?
    In addition, the analysis section did not provide a transparent description of how the coding was performed.

  11. It is interesting how the authors used a qualitative approach to understand the acceptability of an Indivo personally controlled health record, but the research method is not clearly described. Did they use narrative inquiry? How were the participants sampled?
    How many participated in focus groups and one-to-one interviews? Why were only the focus group interviews transcribed? Did they not use the data from the one-to-one interviews?

    How were the written observational notes of usability testing analyzed? And were notes taken for all 12 testers and 40 pilot participants?

    Of the 250 users, 81 initiated email communications (over a six-month period) that were analyzed. How many users are represented in these emails? Did the other users who did not communicate with the study team have any issues? Did they even use the PCHR?