“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Sunday, October 18, 2009

Abstract: Increasing the use of e-consultation in primary care: Results of an online survey among non-users of e-consultation.

Nijland N., van Gemert-Pijnen J. E. W. C., Boer H., Steehouder M. F., and Seydel E. R. (2009). Increasing the use of e-consultation in primary care: Results of an online survey among non-users of e-consultation. International Journal of Medical Informatics, 78(10), 688-703.

Slide show, Full text, Draft report

Abstract

Objective
To identify factors that can enhance the use of e-consultation in primary care. We investigated the barriers, demands and motivations regarding e-consultation among patients with no e-consultation experience (non-users).

Methods
We used an online survey to gather data. Via online banners on 26 different websites of patient organizations we recruited primary care patients with chronic complaints, an important target group for e-consultation. A regression analysis was performed to identify the main drivers for e-consultation use among patients with no e-consultation experience.

Results
In total, 1706 patients started to fill out the survey. Of these patients, 90% had no prior e-consultation experience. The most prominent reasons for non-use of e-consultation were: not being aware of the existence of the service, the preference to see a doctor, and e-consultation not being provided by a GP. Patients were motivated to use e-consultation because it made it possible to contact a GP at any time and enabled them to ask additional questions after a visit to the doctor. The use of a Web-based triage application for computer-generated advice was popular among patients desiring to determine the need to see a doctor and for purposes of self-care. The patients’ motivations to use e-consultation strongly depended on demands being satisfied, such as getting a quick response. When looking at socio-demographic and health-related characteristics, it turned out that certain patient groups – the elderly, the less-educated individuals, the chronic medication users and the frequent GP visitors – were more motivated than other patient groups to use e-consultation services, but were also more demanding. The less-educated patients, for example, more strongly demanded instructions regarding e-consultation use than the highly educated patients.

Conclusion
In order to foster the use of e-consultation in primary care, both GPs and non-users must be informed about the possibilities and consequences of e-consultation through tailored education and instruction. We must also take into account patient profiles and their specific demands regarding e-consultation. Special attention should be paid to patients who can benefit the most from e-consultation while also facing the greatest chance of being excluded from the service. As health care continues to evolve towards a more patient-centred approach, we expect that patient expectations and demands will be a major force in driving the adoption of e-consultation.

14 comments:

  1. I'm questioning why the authors would allow participants to skip questions, as this makes the analyses and comparisons messy: you have to deal with missing data and changing n's. The paper also lacks any explanation of how the authors handled this issue.

    Second question: why ask about participants' demands regarding e-consultation in the way presented in the study? Participants responded very positively to wanting most of the features, which I think would be expected. Perhaps a willingness-to-pay question would stratify the responses more and yield more useful results in determining patient demand priorities.

  2. We may want to discuss potential sources of bias in surveys, in particular online surveys. Students may want to have a look at the following paper (and check whether all the items from the CHERRIES checklist have been reported in this paper):

    Eysenbach G
    Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)
    J Med Internet Res 2004;6(3):e34
    URL: http://www.jmir.org/2004/3/e34/

  3. Would fuller reporting and analyses around missing data have had an impact on the reported results? It appears that if a respondent answered as few as 2 (out of 45) questions, their data was included.

    For example, given that roughly a quarter of the total sample (n=1066) has missing data in Table 2, where the highest reported number is 827, it would be helpful to know whether further analyses were done in the study, e.g. splitting the data into two groups (cases missing a value for a certain variable versus cases with that value present) and using t-tests to see if there is a systematic difference between the two groups; examining this may affect how the results are reported (see the sketch below).

    Perhaps this was done by the authors, but it did not appear to be reported in the manuscript.
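
    A minimal sketch of that missingness check in Python with pandas and SciPy; the file and column names are hypothetical, not taken from the paper:

    import pandas as pd
    from scipy import stats

    # Hypothetical survey export; 'age' is an observed covariate and
    # 'quick_response_demand' is a survey item with missing values.
    df = pd.read_csv("survey_responses.csv")

    # Split respondents by whether the item of interest is missing.
    missing = df[df["quick_response_demand"].isna()]
    observed = df[df["quick_response_demand"].notna()]

    # Welch's t-test on the covariate: a significant difference would
    # suggest the data are not missing completely at random.
    t_stat, p_value = stats.ttest_ind(
        missing["age"].dropna(),
        observed["age"].dropna(),
        equal_var=False,
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")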

  4. The first question I have is with regard to the survey tool: how was it validated? Though the authors briefly mention that it was based on previous studies, I wonder how applicable the findings of those studies are to the population the authors have chosen to study. This line of thinking also led me to question whether a mixed-methods approach would have been better at both identifying local factors affecting e-consultation and establishing the importance of each factor. I also had questions about how significant the responder bias would be in this study, given the methods used to recruit individuals.

  5. While I found this article interesting, I was surprised that the authors did not mention the nature of online surveys as a potential source of bias (volunteer bias). Did the authors take this into account? Specifically, my concern is over the types of patients who fill out online surveys and whether they differ from the general population. Also, my interest was piqued when the authors stated that they “recruited participants through banners on frequently visited websites…” Could this have biased the sample of participants? For example, I never fill out online surveys from website banners.

  6. Looking at the study through the CHERRIES checklist lens, I find that the paper does NOT report:
    - the informed consent process, or whether the study was approved by an IRB
    - whether the usability and technical functions of the electronic questionnaire were tested before the questionnaire was distributed
    - the wording of the survey announcement on the website banners. The study mentions that the survey was posted on banners of several websites, but it would also be helpful to know what the wording on these banners was, as that can influence who chooses to participate
    - how the 'uniqueness' of a site visitor was determined. Could someone have responded to the survey multiple times? Were there any checks in place (such as IP checks) to ensure no participant filled out the survey more than once?

  7. This comment has been removed by the author.

  8. As a follow-up to the comment by Lamia, if IP checks were used, then a further concern is whether or not cookies were also used to assign a unique identifier to each client computer (see the sketch below).

    Another concern is the lack of a clear indication of the inclusion/exclusion criteria (are the factors in Table 1 the only ones used?). How did they determine that a user had never used e-consultation? And was there a terms-and-conditions page or online consent form, given that "Eligible patients were at least 18 years old"?
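
    A minimal sketch of the two checks being discussed here, a cookie-based identifier combined with hashed IP addresses; this is purely illustrative, since the paper describes no such mechanism:

    import hashlib
    import uuid
    from typing import Optional

    seen_cookie_ids = set()  # identifiers previously issued via a persistent cookie
    seen_ip_hashes = set()   # hashes of submitting IP addresses (avoids storing raw IPs)

    def issue_cookie_id() -> str:
        # Assign a unique identifier the first time a client visits the survey.
        return str(uuid.uuid4())

    def is_duplicate(cookie_id: Optional[str], ip_address: str) -> bool:
        # Flag a submission if its cookie identifier or IP hash was seen before.
        ip_hash = hashlib.sha256(ip_address.encode()).hexdigest()
        duplicate = cookie_id in seen_cookie_ids or ip_hash in seen_ip_hashes
        if cookie_id is not None:
            seen_cookie_ids.add(cookie_id)
        seen_ip_hashes.add(ip_hash)
        return duplicate

    Note that IP checks alone can miss duplicates (dynamic IP addresses) and can wrongly flag distinct respondents sharing a proxy, which is one reason to combine them with cookies.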

  9. From the CHERRIES checklist, what were the view rate (unique survey visitors/unique site visitors), participation rate (unique visitors who agreed to participate/unique survey page visitors), and completion rate (users who finished the survey/users who agreed to participate) for the study? Low rates may indicate bias, and the survey may be viewed with skepticism. (A toy calculation follows below.)
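
    As a toy illustration of those three rates (the counts below are invented; the paper reports none of these figures):

    # Hypothetical counts, for illustration only.
    unique_site_visitors = 20_000
    unique_survey_visitors = 4_000
    agreed_to_participate = 2_000
    finished_survey = 1_500

    view_rate = unique_survey_visitors / unique_site_visitors            # 0.20
    participation_rate = agreed_to_participate / unique_survey_visitors  # 0.50
    completion_rate = finished_survey / agreed_to_participate            # 0.75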

  10. I found the study very confusing, and I have many questions about it.
    - It is unclear who completed the survey, as the authors only excluded those who answered a single question. How valid is the survey if a respondent completed only 2 questions?
    - Did they look at the time taken to complete the survey?
    - Was personal information collected for the demographics section, and was the data protected?
    - Was the survey instrument (rating statements from 1 to 5) valid for assessing motivation or ranking barriers?
    - In Figures 1 to 5, I am guessing that those who responded 1 or 2 were pooled as "strongly disagree" and those who responded 4 or 5 were pooled as "strongly agree." This is a bit misleading (see the sketch after this list).
    - When did the study take place?
    - Are those aged 50+ considered elderly? How did they determine these cut-offs?
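
    A minimal sketch of the kind of pooling suspected in Figures 1 to 5, on made-up Likert data (the paper does not publish its recoding scheme):

    import pandas as pd

    # Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree).
    responses = pd.Series([1, 2, 2, 3, 4, 4, 5, 5, 5])

    # Collapse 1-2 and 4-5 into single categories, as the figures appear to do;
    # this hides the distinction between moderate and extreme responses.
    collapsed = pd.cut(responses, bins=[0, 2, 3, 5],
                       labels=["disagree", "neutral", "agree"])
    print(collapsed.value_counts())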

  11. I had a few questions for the study:

    1. Why did they not require patients to answer all questions? Could this hint at data quality issues? For example, out of 1543 participants without e-consultation experience, perhaps only 10 answered a particular question. What was the researchers’ approach to handling such low numbers of entries for individual questions? Ten responses to a particular question certainly cannot make the study results reliable, for obvious reasons.

    2. It is noticed that the authors used the term ‘highly educated’. What do the authors mean by ‘highly educated’?

    3. In section 3.2, the authors discuss the barriers to the use of e-consultation. It’s interesting that they ruled out ‘computer and internet skills’ because these were not expected to be problems. The authors state in their limitations section that they only wanted to focus on ‘internet users’. That’s interesting, because even though the survey was completed online, it does not follow that a participant has a personal computer, regular internet access, or the skills required for using e-consultation. They also missed the frequency of internet usage as a potential element of the study. This is a major limitation of the study.

    4. The response about whether the participant knew about the existence of e-consultation (page 691, Figure 1) is also interesting, because there is a neutral response option. I am not sure what that means, because to me, this is more of a yes/no question.

    5. Finally, the authors did not explain how they ensured that the same person did not fill in the survey multiple times. Given that participants were recruited via banners, which may or may not have been linked to an individual's account, and from numerous websites, it is questionable how they maintained the uniqueness of the survey participants.

  12. It seems odd that a study of e-consultation would rely on an online survey, i.e., a population already more inclined to use the internet. It seems logical that this group would be chosen, but the barriers and motivating factors reported for this group would likely differ greatly from those of individuals who do not use the internet frequently.

  13. Based on the CHERRIES checklist, I had the following observations:
    1. Under ‘survey administration’, the ‘completeness check’ seemed incomplete because, as mentioned in previous comments, not all participants fully completed the questionnaire.
    2. The paper does not mention whether it underwent an IRB approval process.
    3. On preventing multiple entries from the same individual, there was no mention of any precautionary measures, such as the use of cookies to assign unique identifiers and/or IP address checks.

    Also, page 689 states that “In the event of non-urgent issues it generates a tailored self-care advice”. I was wondering whether the paper mentions at all how this knowledge base was generated; to me, this may have a potential impact on the results.

    Section 3.3 on page 690 states “The top priority was getting a quick response”. Were the patients given a standard definition of the term ‘quick response’, or were they free to interpret it themselves?

    66.1% did not know whether e-consultation was reimbursed by their insurer. It would be interesting to analyze this aspect further, because if we try to generalize the findings of this study across nations, what is covered and what is not is likely to have a profound impact on patients’ decisions to use innovative technology.

    Finally, there was a typo: ‘change’ should be ‘chance’ on page 693, section 3.6.

  14. I am surprised that nobody has any issues with how the results are reported (Tables 4-8) or with the authors' interpretations of those results. Is reporting and comparing means a good way to analyze these ordinal, Likert-type results? The authors interpret higher means as "having significantly more barriers" or being "significantly more motivated to use e-consultations". Is this a correct interpretation of the results? (See the sketch below for a common nonparametric alternative.)
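
    For ordinal Likert-type items, comparing group means can be misleading; a rank-based test is a common alternative. A minimal sketch with SciPy, on made-up scores rather than the study's data:

    from scipy import stats

    # Hypothetical 5-point Likert scores for two patient groups.
    elderly = [4, 5, 3, 4, 5, 4, 2, 5]
    younger = [3, 2, 4, 3, 2, 3, 4, 2]

    # The Mann-Whitney U test compares the groups by ranks, treating the
    # scores as ordinal rather than interval data.
    u_stat, p_value = stats.mannwhitneyu(elderly, younger, alternative="two-sided")
    print(f"U = {u_stat:.1f}, p = {p_value:.3f}")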
