“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion around results and methodological issues in eHealth research, and help to develop a framework for evidence-based eHealth. CATCH-IT Reports arise from “journal club”-like sessions founded in February 2003 by Gunther Eysenbach.

Friday, November 13, 2009

CATCH-IT Final Report: Increasing the use of e-consultation in primary care: Results of an online survey among non-users of e-consultation

Abstract, Full text, Presentation, Draft report


The authors began the paper with an introduction to e-consultation and the growing use of the Internet as a source of health information. They stated the rationale for the study as the low uptake of e-consultation despite its many benefits, and identified the need for this research by reviewing up-to-date literature on the use of the Internet in communication between physicians and patients. The stated benefits of e-consultation were: increasing access to care by enabling patients to ask questions regardless of time and place; allowing anonymous consultation on sensitive questions; increasing self-management support for individuals with significant medical problems; and reducing health system costs by responding to the increasing demands for care in an aging society.


The objective of the study was clearly stated: to identify factors that could increase the use of e-consultation among non-users, defined as patients with access to the Internet but no previous e-consultation experience.


To collect data, an online survey was carried out among non-users in order to assess their barriers to e-consultation, their demands regarding e-consultation, and their motivations to use it. The authors also investigated the motivation for using the two types of e-consultation provided in the Netherlands:

• Direct e-consultation: consulting a GP through secured e-mail.
• Indirect e-consultation: consulting a GP through secured e-mail with the intervention of a Web-based triage system.

The survey was available for a period of 11 weeks. Participants were asked to respond on a 5-point Likert scale, from 1 representing “strongly disagree” to 5 representing “strongly agree”.

Although the authors noted that the online survey was pre-tested, it was not clear how the survey was developed, including whether the usability and technical functionality of the questionnaire had been tested. Moreover, the authors did not mention how they dealt with possible methodological issues of online surveys. They could have strengthened the study by addressing some of the methodological issues associated with the use of online surveys for data collection (Eysenbach, 2004).

Although online surveys have several advantages, such as reaching certain populations more easily and achieving sample sizes that exceed those of mail and telephone surveys (Kraut et al., 2004), there are several limitations and biases associated with their use, including the non-representative nature of the Internet population and the self-selection of participants. When interpreting the results of online surveys, it is necessary to consider both who responded to the survey and who did not. Self-selection, multiple submissions, non-serious responses, and dropouts are especially problematic in web-based designs (Eysenbach, 2004). For instance, since there is little control over who responds to the survey, there may be multiple and/or non-serious responses.

It is also possible that respondents answer and submit the questionnaire multiple times, and the authors did not address how they dealt with this issue. For example, they could have required participants to provide a unique identifier at the beginning of the survey, repeated a few questions with definite answers (e.g., birth date) within the survey, used the IP address of the client computer to identify potential duplicate entries from the same user, or used cookies to assign a unique identifier to each client computer (Eysenbach, 2004). In addition, there are several biases associated with Likert scales. With central tendency bias, respondents avoid extreme response categories such as “strongly disagree” or “strongly agree”. With acquiescence bias, respondents tend to agree with statements as they are presented; if a statement is phrased positively, respondents are more likely to answer in the affirmative. Finally, with social desirability bias, respondents tend to represent themselves or their opinions in a more favourable, socially desired way (Eysenbach, 2009).
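The duplicate-detection strategies described above can be sketched in code. This is a minimal illustration only: the field names (e.g., the repeated birth-date question, the cookie identifier) and the data are hypothetical, not drawn from the study.

```python
from collections import defaultdict

def flag_suspect_responses(responses):
    """Flag possible duplicate or non-serious survey submissions.

    Each response is a dict with hypothetical keys:
      'ip'            - client IP address
      'cookie_id'     - unique identifier assigned via a cookie
      'birth_date_1',
      'birth_date_2'  - the same question asked twice in the survey
    Returns a list of (index, reason) pairs for manual review.
    """
    flags = []
    seen_ips = defaultdict(list)
    seen_cookies = defaultdict(list)
    for i, r in enumerate(responses):
        # Inconsistent answers to a repeated question suggest a
        # non-serious response.
        if r.get("birth_date_1") != r.get("birth_date_2"):
            flags.append((i, "inconsistent repeated answers"))
        # Repeated submissions from the same IP address or the same
        # cookie identifier suggest duplicate entries.
        if r["ip"] in seen_ips:
            flags.append((i, "duplicate IP"))
        if r["cookie_id"] in seen_cookies:
            flags.append((i, "duplicate cookie"))
        seen_ips[r["ip"]].append(i)
        seen_cookies[r["cookie_id"]].append(i)
    return flags
```

In practice, flagged entries would be reviewed rather than dropped automatically, since shared IP addresses (e.g., a household or library) can produce legitimate matches.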

With respect to the number of questions answered, the authors excluded respondents who answered only one question but included responses containing a relatively large number of missing values, which may have biased the results.

Ethical consideration

It was not clear from the article whether the study had been approved by an Ethics Review Board (ERB). The authors did not address the ethical considerations of the online survey: the informed consent process and data protection were not mentioned in the paper, and it was not clear whether patients were told how long the survey would take.


Participants were 18 years of age and older; they were recruited through banners on the frequently visited websites of 26 well-trusted patient organizations, all member organizations of the Dutch Federation of Patients and Consumer Organizations. One of the issues with online surveys is that they often do not have a defined sampling frame; therefore, it is impossible to calculate a response rate for such studies (Couper, 2000).


The authors performed descriptive and inferential statistics to identify factors that could enhance the use of e-consultation in primary care settings. They appropriately collapsed “agree” and “strongly agree” into one category, “disagree” and “strongly disagree” into another, and “neutral” into a third, then used bar charts to present the responses to each question as percentages. The authors used means of Likert scores to compare patient groups on perceived barriers to e-consultation, demands regarding e-consultation, motivation to use e-consultation, and motivation to use direct and indirect e-consultation. However, using means for comparisons by age, education level, medication use, and frequency of GP visits may not have been suitable, since the mean and standard deviation are not proper summaries of ordinal Likert data. The authors could alternatively have used the median or mode, or compared the percentages of “agree” or “disagree” responses. They could also have simplified the survey data further by combining the responses into two nominal categories, agree/disagree, which would have allowed other analyses such as the chi-square test and logistic regression.


The authors concluded that, in order to promote the use of e-consultation in primary care, both GPs and non-users must be informed about the possibilities and consequences of e-consultation through tailored education and instruction. Furthermore, they noted that patient profiles and their specific demands for e-consultation should also be taken into account, and that special attention should be paid to patients who can benefit the most from e-consultation while also facing the greatest chance of being excluded from the service (Nijland et al., 2009). The barriers, demands, and motivations regarding e-consultation identified in this paper were compatible with other studies.

However, the authors' conclusions were based on a population with access to the Internet, and therefore may not be representative of those without access. Moreover, some of the statistically significant results could be an artifact of the inappropriate use of means; the authors could have used modes, medians, or percentages of Likert responses instead of means, both to present the results and to perform inferential statistics. In general, statistical techniques that compare means or assess association via means have more power than techniques based on the mode or median, so the results might have been different had the authors used modes and medians. Finally, a different data collection technique might have produced stronger results that were more representative of the population. For instance, the authors could have surveyed patients within general practitioners' offices, reaching patients both with and without access to the Internet; this sampling strategy would also have eliminated several of the limitations of online surveys.
Moreover, the study would have been stronger if the authors had taken a mixed-methods approach, using both quantitative and qualitative methodologies, to better identify barriers, demands, and motivations regarding the use of e-consultation. For example, qualitative interviews could have provided a deeper understanding of patients' demands and concerns about using e-consultation.


Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64, 464-494.

Eysenbach, G. (2004). Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of Medical Internet Research, 6(3), e34. doi:10.2196/jmir.6.3.e34. http://www.jmir.org/2004/3/e34/

Eysenbach, G. (2009). Class discussion. HAD5726: Design and Evaluation in eHealth Innovation and Information Management, Department of Health Policy, Management & Evaluation, 26 October 2009.

Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J., & Couper, M. (2004). Psychological research online: Report of board of scientific affairs' advisory group on the conduct of research on the internet. American Psychologist, 59, 105-117.

Nijland, N., van Gemert-Pijnen, J. E. W. C., Boer, H., Steehouder, M. F., & Seydel, E. R. (2009). Increasing the use of e-consultation in primary care: Results of an online survey among non-users of e-consultation. International Journal of Medical Informatics, 78(10), 688-703.
