“CATCH-IT Reports” are Critically Appraised Topics in Communication, Health Informatics, and Technology, discussing recently published eHealth research. We hope these reports will draw attention to important work published in journals, provide a platform for discussion of results and methodological issues in eHealth research, and help develop a framework for evidence-based eHealth. CATCH-IT Reports arise from "journal club"-like sessions founded in February 2003 by Gunther Eysenbach.
Monday, November 2, 2009
Nov 9: User-designed information tools to support communication and care coordination in a trauma hospital
Full-Text · Abstract Only · Slideshow · Draft CATCH-IT
BACKGROUND: In response to inherent inadequacies in health information technologies, clinicians create their own tools for managing their information needs. Little is known about these clinician-designed information tools. With greater appreciation for why clinicians resort to these tools, health information technology designers can develop systems that better meet clinicians' needs and that can also support clinicians in design and use of their own information tools.
OBJECTIVE: To describe the design characteristics and use of a clinician-designed information tool in supporting information transfer and care coordination.
DESIGN: Observations, semi-structured interviews, and photographing were used to collect data. Participants were six nurse coordinators in a high-volume trauma hospital. Content analysis was carried out and interactions with information tools were analyzed.
RESULTS: Nurse coordinators used a paper-based information tool (a nurse coordinator's clipboard) that consisted of the compilation of essential data from disparate information sources. The tool was assembled twice daily through (1) selecting and formatting key data from multiple information systems (such as the unit census and the EHR), (2) data reduction (e.g., by cutting and whitening out non-essential items from the print-outs of computerized information systems), (3) bundling (e.g., organizing pieces of information and taping them to each other), and (4) annotating (e.g., through the use of colored highlighters and shorthand symbols). It took nurse coordinators an average of 41 minutes to assemble the clipboard. The design goals articulated by nurse coordinators to fit the tool into their tasks included (1) making information compatible with the mobile nature of their work, (2) enabling rapid information access and note-taking under time pressure, and (3) supporting rapid information processing and attention management through the effective use of layout design, shorthand symbols, and color-coding.
CONCLUSIONS: Clinicians design their own information tools based on the existing health information technologies to meet their information needs. The characteristics of these clinician-designed tools provide insights into the "realities" of how clinicians work with health information technologies. The findings suggest an often overlooked role for health information technologies: facilitating user creation of information tools that will best meet their needs.
Sunday, November 1, 2009
Abstract · Full text · Presentation · Final report
Introduction
The authors opened the report with an introduction to e-consultation and the increasing use of the Internet as a source of health information. They stated the rationale for the study as the low uptake of e-consultation despite its many benefits, and identified the need for this research by reviewing up-to-date literature on the use of the Internet in communication between physicians and patients. The stated benefits of e-consultation were: increasing access to care by enabling patients to ask questions regardless of time and place; allowing anonymous consultation on sensitive questions; increasing self-management support for individuals with significant medical problems; and reducing health system costs through the capability to respond to the increasing demands for care of an aging society.
The purpose of the study
The objective of the study was clearly stated: to identify factors that could increase the use of e-consultation among non-users, that is, patients with access to the Internet but no previous e-consultation experience.
Methodology & Data collection
To collect data, an online survey was carried out among non-users (patients with access to the Internet but no prior e-consultation experience) to assess their barriers to, demands of, and motivations for using e-consultation. The authors also investigated the motivation to use the two types of e-consultation provided in the Netherlands:
• Direct e-consultation: consulting a GP through secured e-mail.
• Indirect e-consultation: consulting a GP through secured email, with the intervention of a Web-based triage system.
The survey was available for a period of 11 weeks.
Participants were asked to respond on a 5-point Likert scale, from 1 ("strongly disagree") to 5 ("strongly agree").
Although the authors stated in their paper that the online survey was pre-tested, it was not clear how the survey was developed, including whether the usability and technical functionality of the questionnaire had been tested. Moreover, the authors did not mention how they dealt with possible methodological issues associated with online surveys.
The authors could have improved the use of the online survey by addressing some methodological issues that are associated with the use of online surveys for data collection (Eysenbach, 2004).
Online surveys have several advantages, such as reaching certain populations more easily and achieving sample sizes that exceed those of mail and telephone surveys (Kraut et al., 2004). However, several limitations and biases are associated with their use, including the non-representative nature of the Internet population and the self-selection of participants. When interpreting the results of online surveys, it is necessary to consider both who responded to the survey and who did not. Self-selection, multiple submissions, non-serious responses, and dropouts are especially problematic in web-based designs (Eysenbach, 2004). For instance, since there is little control over who responds to the survey, there may be multiple and/or non-serious responses.
The authors did not address how they dealt with these issues in their paper. For example, they could have required participants to provide a unique identifier at the beginning of the survey, repeated a few questions with definite answers (e.g., birth date) within the survey, used the IP address of the client computer to identify potential duplicate entries from the same user, or used cookies to assign a unique identifier to each client computer (Eysenbach, 2004).
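As a rough illustration of the duplicate screening described above, flagging can be done by counting shared IP addresses or cookie-assigned tokens; the field names and sample records below are hypothetical, not taken from the study:

```python
# Sketch: flag potential duplicate survey submissions by IP address
# and by a cookie-assigned token (field names are hypothetical).
from collections import Counter

def flag_duplicates(submissions):
    """Return indices of submissions that share an IP or cookie token."""
    ip_counts = Counter(s["ip"] for s in submissions)
    tok_counts = Counter(s["cookie_token"] for s in submissions)
    return [i for i, s in enumerate(submissions)
            if ip_counts[s["ip"]] > 1 or tok_counts[s["cookie_token"]] > 1]

subs = [
    {"ip": "10.0.0.1", "cookie_token": "a1"},
    {"ip": "10.0.0.2", "cookie_token": "b2"},
    {"ip": "10.0.0.1", "cookie_token": "c3"},  # same IP as the first entry
]
print(flag_duplicates(subs))  # -> [0, 2]
```

Flagged entries would then need manual review, since a shared IP can also be legitimate (e.g., a household or a proxy server).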
In addition, several biases are associated with Likert scales. With central tendency bias, respondents avoid extreme response categories such as "strongly disagree" or "strongly agree." With acquiescence bias, respondents tend to agree with statements as presented: if a statement is positive, respondents are more likely to answer in the affirmative. Finally, with social desirability bias, respondents tend to represent themselves or their opinions in a more favorable, socially desirable way (Eysenbach, 2009).
Ethical consideration
It was not clear from the article whether the study had been approved by an Ethics Review Board (ERB). The authors did not address the ethical considerations of the online survey: the informed consent process and data protection were not mentioned in the paper. It was also unclear whether patients were told how long the survey would take.
Sampling
Participants were 18 years and older; they were recruited through banners on frequently visited websites of 26 well-trusted patient organizations that were all member organizations of the Dutch Federation of Patients and Consumer Organizations.
One issue with online surveys is that they often lack a defined sampling frame; it is therefore impossible to calculate a response rate for such studies (Couper, 2000).
Data analysis
The authors performed descriptive and inferential statistics to identify factors that could enhance the use of e-consultation in primary care settings.
The authors appropriately collapsed "agree" and "strongly agree" into one category, "disagree" and "strongly disagree" into another, and "neutral" into a third, then used bar charts to present the responses to each question as percentages. They used means of Likert scores to compare patient groups on perceived barriers to e-consultation, demands regarding e-consultation, motivation to use e-consultation, and motivation to use direct and indirect e-consultation. However, using means for comparisons by age, education level, medication use, and frequency of GP visits may not have been suitable, since the mean and standard deviation are not proper summaries of ordinal Likert data. The authors could have used the median or mode to perform these comparisons more appropriately, or compared the percentages of "agree" and "disagree" responses. They could also have simplified the survey data further by combining the responses into two nominal categories, such as agree/disagree, which would have opened up other analysis possibilities such as the chi-square test and logistic regression.
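To illustrate the suggested alternative analysis, a chi-square test of association on collapsed agree/disagree counts for two patient groups can be computed directly; the counts below are invented for illustration and are not the study's data:

```python
# Sketch: chi-square test for a 2x2 table of collapsed Likert responses
# (counts are invented, not taken from the study).
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# e.g. frequent vs infrequent GP visitors: agree / disagree with a barrier item
chi2 = chi_square_2x2(120, 80, 90, 110)
print(round(chi2, 2))   # -> 9.02
print(chi2 > 3.841)     # exceeds the 5% critical value for 1 df -> True
```

A statistic above the 1-df critical value of 3.841 indicates a significant association at the 5% level, without treating the ordinal responses as interval data.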
Conclusion
The authors concluded that, in order to promote the use of e-consultation in primary care, both GPs and non-users must be informed about the possibilities and consequences of e-consultation through tailored education and instruction. Furthermore, they noted that patient profiles and their specific demands for e-consultation should also be taken into account, and that special attention should be paid to patients who can benefit the most from e-consultation while also facing the greatest chance of being excluded from the service (Nijland et al., 2009).
The barriers, demands, and motivations towards e-consultation identified in this paper were compatible with other studies. However, the authors' conclusions were based on a population with Internet access and therefore may not be representative of those without it.
Moreover, some of the statistically significant results in this study could be artifacts of the improper use of means. The authors could have used other statistics, such as the mode, median, or percentages, instead of means of Likert scores to present the results and perform inferential statistics. In general, techniques that compare means have more power than techniques based on the mode or median, so the results might have differed had the authors used medians or modes.
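The point about summary statistics can be seen with a toy example; the response vector below is invented, but shows how the mean, median, and mode of skewed ordinal data can diverge:

```python
# Sketch: mean vs median vs mode on a skewed set of 5-point Likert responses
# (the response vector is invented for illustration).
import statistics

responses = [5, 5, 5, 4, 1, 1, 2, 5, 5, 3]
print(statistics.mean(responses))    # 3.6 -- pulled down by a few low scores
print(statistics.median(responses))  # 4.5
print(statistics.mode(responses))    # 5
```

Which summary is reported can change the apparent magnitude, and potentially the significance, of group differences.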
Furthermore, the authors could have used a different data collection technique. For instance, they could have surveyed patients within general practitioners' offices, reaching patients both with and without Internet access; this sampling technique would also have eliminated several of the online survey's limitations. Moreover, the study would have been stronger had the authors taken a mixed-methods approach, using both quantitative and qualitative methodologies to better identify the barriers, demands, and motivations regarding the use of e-consultation.
References:
Couper, M. P. (2000). Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64, 464-494.
Eysenbach, G. (2004). Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). Journal of Medical Internet Research, 6(3), e34. doi: 10.2196/jmir.6.3.e34. http://www.jmir.org/2004/3/e34/ [PubMed].
Eysenbach, G. (2009). Class discussion. Health Policy, Management & Evaluation Department, HAD5726: Design and Evaluation in eHealth Innovation and Information Management. 26 October 2009.
Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J., & Couper, M. (2004). Psychological research online: Report of Board of Scientific Affairs' Advisory Group on the Conduct of Research on the Internet. American Psychologist, 59, 105-117.
Nijland, N., van Gemert-Pijnen, J. E. W. C., Boer, H., Steehouder, M. F., & Seydel, E. R. (2009). Increasing the use of e-consultation in primary care: Results of an online survey among non-users of e-consultation. International Journal of Medical Informatics, 78(10), 688-703.
Wednesday, October 28, 2009
CATCH-IT Draft: Effect of guideline based computerised decision support on decision making of multidisciplinary teams
Goud R, de Keizer NF, ter Riet G, Wyatt JC, Hasman A, Hellemans IM, Peek N. Effect of guideline based computerised decision support on decision making of multidisciplinary teams: cluster randomised trial in cardiac rehabilitation. BMJ. 2009;338:b1440.
Abstract / Full Text / Slideshow
Introduction
In this article in BMJ, the authors investigated the effect of computerized decision support (CDS) on guideline adherence to recommended therapeutic decisions in multidisciplinary teams. They investigated the use of the Cardiac Rehabilitation Decision Support System (CARDSS) in promoting adherence to the Dutch Cardiac Rehabilitation Guidelines in cardiac rehabilitation centres in the Netherlands. It is the first study to my knowledge to evaluate the effect of CDS on decision making in teams. The article would be of interest to health care settings with multidisciplinary teams who are thinking about adding CDS to their center.
Discussion
The study design was a cluster randomized trial. To investigate the effect of CARDSS on care providers, the trial had to randomize by team/centre rather than at the patient level, since the study was essentially testing the care provider teams; it would not have been feasible to include both the intervention and control groups in the same centre, as the teams could learn from the intervention. A cluster randomized design may reduce the power to detect a true difference, but given the nature of the trial it was a necessity.
There was some concern about the high attrition rate in the study. Initially, 31 centres were randomized, but 5 discontinued participation and another 5 were excluded. Although the authors stated that the centres were excluded on the basis of data discrepancies or missing data, the authors were also the lead developers of the CARDSS system, and their choice of which centres to exclude may have been influenced by which centres might have shown the system as having a negative effect on guideline adherence. However, the authors took steps to reduce this bias, such as blinding the investigators during randomization allocation, using objective outcome measures, and involving an external evaluator and statistician.
With this high attrition rate, there was a possible issue of insufficient statistical power: the authors stated that they required 36 participating centres during their six-month follow-up to detect a 10% absolute difference in adherence rate with 80% power at a type I error risk (α) of 5%. In the end, however, they were only able to include 21 centres in their analysis. Although the 21 centres may have provided a large enough sample through a larger number of patients per centre, their analysis still produced wide confidence intervals, such as the borderline significance found for exercise therapy (95% CI 0% to 5.4%). In addition, the authors did not explain how they calculated the adjusted differences or confidence intervals. It was not clear whether the improvement seen in the intervention group, in comparison with the control group, was influenced by confounding factors. There was also no explanation of how the covariates (age, sex, diagnosis, weekly volume of new patients, and whether the centre was a specialized rehabilitation centre or part of an academic hospital) influenced the rate of adherence.
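For context, a design target like the one reported (10% absolute difference, 80% power, α = 5%) can be reproduced with a standard two-proportion calculation inflated by a design effect for clustering. The baseline adherence rate, cluster size, and intraclass correlation below are assumed values chosen for illustration, since the paper's actual inputs are not restated here:

```python
# Sketch: cluster-trial sample size for a 10% absolute difference in adherence
# (p1, cluster size m, and ICC rho are assumed values, not from the paper).
import math

def n_per_arm(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Unpooled two-proportion sample size per arm (normal approximation)."""
    num = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(num / (p1 - p2) ** 2)

n = n_per_arm(0.50, 0.60)            # assumed baseline adherence of 50%
deff = 1 + (50 - 1) * 0.05           # design effect: m = 50 patients/centre, ICC = 0.05
clusters = math.ceil(n * deff / 50)  # centres needed per arm
print(n, clusters)                   # -> 385 27
```

Under these assumptions, losing 10 of 31 centres erodes the number of available clusters quickly, which is consistent with the wide confidence intervals noted above.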
To account for the initial learning curve with CARDSS, the authors provided a standardized training course for the care providers and excluded from the analyses the data of patients enrolled in the first two weeks of CARDSS use at each participating centre. The authors stated that CARDSS was judged favourably in a usability study. However, the fact that three centres were excluded for not recording their decisions in CARDSS, and two for too much missing data, may indicate a usability issue; it may be possible to re-engineer the software to reduce missing recorded decisions and data. The data audit the authors used to exclude centres, comparing the record keeping in CARDSS with the paper-based patient record, was one of the strengths of the study. If there were discrepancies between the two records, the authors considered the centre in question unreliable and excluded it from the analyses; if a centre passed the data audit but 20% or more of its patients' records had missing data, it was likewise excluded. The authors did perform a follow-up qualitative study that looked into how CARDSS affected the main barriers to implementing guidelines for cardiac rehabilitation; that study gathered more qualitative user feedback from interviews on the usability of CARDSS.
Although the authors stated that the study design ensured there would be no bias from the Hawthorne effect, they did not explain in detail how this was accomplished. A Hawthorne effect was possible, since the care providers knew they were in a study with their performance being monitored and reported, which may have prompted them to increase their adherence to guidelines. There was also potential bias from having care providers record their reasons in cases where they did not adhere to the guidelines: they may simply have gone along with the therapy recommended by CARDSS, since that avoided additional work.
It would also have been useful if the investigators had collected baseline adherence data prior to implementing CARDSS at the centres, allowing comparisons between pre-implementation and post-implementation. The current design cannot do this for the control group, since they were also given a version of CARDSS; even with limited functionality, it could have had an effect on the results. It would also be interesting to investigate guideline adherence rates in the control group after being given the full functionality of CARDSS; the authors have an ongoing follow-up trial that may be looking into this aspect.
Finally, there was the issue regarding the lack of ethics approval for the study. It was stated in the article that ethics approval was not needed according to the medical ethics committee of the Academic Medical Centre in Amsterdam.
Conclusion
The authors concluded that CARDSS improved adherence to guideline recommendations with respect to three of the four therapies: exercise, education, and relaxation therapy. There was no effect on lifestyle change therapy, which may be partly due to the fact that the majority of clinics did not have that therapy program available. Although there are some weaknesses and limitations in this study, the authors hope to address most of them in their follow-up cluster randomized study.
Questions to the authors
- Why was baseline adherence data not collected?
- How do the covariates affect guideline adherence?
- How were the adjusted differences and CI values calculated?
- What is the reason for large variation in adherence between centers?
- Why was ethics approval not needed?
- Why was CARDSS not designed to be interoperable with other information systems?
- How did the authors account for the effect of the control group knowing they were in the control group?
Monday, October 26, 2009
Clinical Decision Support Capabilities of Commercially-available Clinical Information Systems
Full Text: click here
Abstract
Background: The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems.
Purpose: The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems.
Methods: The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features.
Results: Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user’s workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert.
Discussion: The principal finding relates to system-by-system variability. The best system in our analysis had only a single missing feature (of 42 total) while the worst had eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern.
Conclusions: These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems and certification bodies.
Sunday, October 25, 2009
The unintended consequences of computerized provider order entry: findings from a mixed methods exploration.
Full Text
(Please note that after reading this paper you may want to also look at the following papers to gather a better understanding of the results from their research:
1. Ash, J.S., Sittig, D.F., Poon, E.G., Guappone, K., Campbell, E., & Dykstra, R.H. (2007). The extent and importance of unintended consequences related to computerized provider order entry. Journal of the American Medical Informatics Association, 14(4), 415-423.
2. Campbell, E.M., Sittig, D.F., Ash, J.S., Guappone, K.P., & Dykstra, R.H. (2006). Types of unintended consequences related to computerized provider order entry. Journal of the American Medical Informatics Association, 13(5), 547-556.)
OBJECTIVE
To describe the foci, activities, methods, and results of a 4-year research project identifying the unintended consequences of computerized provider order entry (CPOE).
METHODS
Using a mixed methods approach, we identified and categorized into nine types 380 examples of the unintended consequences of CPOE gleaned from fieldwork data and a conference of experts. We then conducted a national survey in the U.S.A. to discover how hospitals with varying levels of infusion, a measure of CPOE sophistication, recognize and deal with unintended consequences. The research team, with assistance from experts, identified strategies for managing the nine types of unintended adverse consequences and developed and disseminated tools for CPOE implementers to help in addressing these consequences.
RESULTS
Hospitals reported that levels of infusion are quite high and that these types of unintended consequences are common. Strategies for avoiding or managing the unintended consequences are similar to best practices for CPOE success published in the literature.
CONCLUSION
Development of a taxonomy of types of unintended adverse consequences of CPOE using qualitative methods allowed us to craft a national survey and discover how widespread these consequences are. Using mixed methods, we were able to structure an approach for addressing the skillful management of unintended consequences as well.
CATCH-IT DRAFT: Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial.
Holbrook, A., Thabane, L., Keshavjee, K., Dolovich, L., Bernstein, B., Chan, D., Troyan, S., Foster, G., & Gerstein, H. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ. 2009 Jul 7;181(1-2):37-44.
Links: Abstract - Full Text - Slideshow - Final Paper
Introduction
Diabetes Mellitus is a complex condition. The inherent nature of the disease makes it an ideal target to care for using eHealth innovations. However, research on electronic applications that address chronic diseases is limited; the usefulness of such applications has not been thoroughly investigated. From 2000-2003, Health Canada funded a project titled ‘Computerization of Medical Practices for the Enhancement of Therapeutic Effectiveness’ (COMPETE). The second phase of the project (COMPETE II) evaluated the use of a “Web-based, continuously updated, patient-specific diabetes tracker available to patient and physician plus an automated telephone reminder service for patients, on access, quality, satisfaction and continuity of care” (1). The results of the project have been published by Holbrook et al (2) and are discussed herein.
Objectives of the study
The authors state the rationale for the study as follows: "there have been few randomized trials to confirm that computerized decision support systems can reliably improve patient outcomes" (2). Thus, this study aims to add to that literature. They hypothesize that patients in the intervention group (those with access to the decision support tool and receiving telephone reminders) would have improved quality of diabetes care. Interestingly, the deliberate goal of the COMPETE II project, within which the study was conducted, was to "have patients regularly visit their family physician" (1). These goals, although similar in nature, are explicitly different, which raises the question of whether the study was conducted proactively or retrospectively.
Methodological issues
In selecting patients for the intervention group, randomization used allocation concealment through central computer generation of group assignments, stratified by provider in blocks of 6. Although the authors attempted to reduce bias in the sample, randomization at the patient level could have caused contamination through interactions among patients of the same physician.
One major flaw in the study design was that patients within the intervention group were initially required to set up an appointment with their family physician in addition to getting relevant lab tests done. Patients within the control group did not. This is of particular interest because the outcome measure for the intervention (the process composite score) was based on the frequency of patient visits and lab tests. Therefore, one cannot determine whether or not the change in the outcome measure was due to the intervention or the initial physician visit and lab tests.
Another issue that requires discussion is the timeframe of the study. Patients were only tracked for 6 months even though the study was conducted over the time span of a year. Tracking over a longer time period would have made the process score more valuable insofar as many of the process targets were semiannual. Thus, these targets would have already been met just by the primary physician visit required of intervention participants.
Discussion
The authors interpret the results of their study as having "demonstrated that the care of complex chronic disease can be improved with electronic tracking and decision support shared by providers and patients" (2). This statement seems to go beyond the evidence of the study: the inherent flaws of the intervention and the outcome measures bring the authors' conclusion into question.
The intervention
The intervention is described as a web-based diabetes tracker that includes decision support. The electronic tracker was built to integrate with the care providers' electronic medical records (EMRs) as well as with an automated telephone reminder system; patients also received mailed-out tracking reports. The multi-component nature of the intervention makes it difficult to determine which component caused the observed change. The authors do not comment on the utilization of the tracker or the nature of the telephone reminders. In addition, 51.4% of patients in the intervention group "never" used the Internet. How, then, can the authors imply that the causative agent of change is the electronic decision support, if more than half of the sample does not use the tool? Utilization data for the decision support tool would be needed to substantiate this claim with quantitative evidence.
Outcome measures
One strength of the study is how the authors determined the variables to measure, basing them on guidelines from the Canadian and American Diabetes Associations, literature reviews, and expert opinion. From these they derived two outcome measures: a process score, based on the frequency of patient-clinician interactions, and a clinical score, based on improvement in clinical outcomes against best practices. However, the authors do not justify their choice of variable weights. Take, for example, the process score: two variables, weight and physical activity, both have quarterly process targets yet are weighted differently, with maximum scores of 2 and 1, respectively. Furthermore, individuals in the intervention group started the study with a physician and lab visit, so their process scores would be high irrespective of the intervention. Why was the control group not also sent in for physician and lab visits?
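As a sketch of how such a weighted process composite might be computed, the items, weights, and targets below are illustrative only and are not the study's actual scoring scheme:

```python
# Sketch: a weighted process composite score (items and weights are
# illustrative, not the actual COMPETE II scoring scheme).
def process_score(items):
    """Sum of weight * fraction of the target frequency actually met (capped at 1)."""
    return sum(w * min(done / target, 1.0) for w, done, target in items)

# (weight, visits/tests done in period, target number for the period)
patient = [
    (2, 4, 4),  # weight checks: quarterly target fully met
    (1, 2, 4),  # physical activity review: half the target met
    (2, 1, 2),  # lab tests: half the target met
]
print(process_score(patient))  # -> 3.5 out of a maximum of 5
```

This makes the weighting critique concrete: doubling one item's weight doubles its leverage on the composite, so unjustified weights directly shape the headline result.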
The presentation of the clinical target scores is questionable. Only two variables showed statistically significant improvement, and two variables did not improve at all: exercise and smoking, both of which had a score of 1.00 before the intervention and 0.69 after. These results appear counterintuitive. Interestingly, the authors attribute the positive changes to the intervention tool but do not discuss the reasons for the decreased scores.
Conclusion
The authors have not sufficiently validated their interpretation. The multi-component nature of the intervention makes it difficult to link the intervention to the outcomes, and the study does not indicate which component is the causative agent. Although this study adds to the growing body of literature on eHealth tools and their effect on chronic disease management, the methodological rigor is not strong enough to substantiate the authors' claims. It is therefore difficult to support the validity and relevance of the study's conclusion.
Acknowledgement
Thank you to the 2009 CATCH-IT Journal Club members from the University of Toronto’s HAD 5726 course along with Professor Eysenbach for their feedback and critical analysis that has greatly contributed to this report.
References
(1) COMPETE II. Available at: http://www.hc-sc.gc.ca/hcs-sss/pubs/chipp-ppics/2003-compete/final-eng.php. Accessed Oct 15, 2009.
(2) Holbrook A, Thabane L, Keshavjee K, Dolovich L, Bernstein B, Chan D, et al. Individualized electronic decision support and reminders to improve diabetes care in the community: COMPETE II randomized trial. CMAJ 2009 Jul 7;181(1-2):37-44.
Friday, October 23, 2009
CATCH-IT Final Report: Cellular phone and Internet-based individual intervention on blood pressure and obesity in obese patients with hypertension
Links: Abstract · Comments · Draft Report · Presentation
Park MJ, Kim HS, Kim KS. Cellular phone and Internet-based individual intervention on blood pressure and obesity in obese patients with hypertension. International journal of medical informatics. 2009 Oct;78(10):704-10.
Introduction
Globally, overweight and obesity, representing at least 300 million clinically obese persons, pose a major risk for chronic diseases, including type 2 diabetes, cardiovascular disease, hypertension, and stroke.
One of the authors of this paper, Hee-Sung Kim, has experience with previous research using ICT for chronic disease management dating back to 2003. No prior work in this area was found for the other authors.
Objectives of Study
The aim of the study is to evaluate whether an intervention using short message service (SMS) by cellular phone and the Internet would improve blood pressure (BP), weight control, and serum lipids of obese patients with hypertension over 8 weeks. The authors cite as rationale that “no study has been done to test the direct efficacy of the cellular phone or internet-based system” on improving these measures for hypertension. However, Logan et al. (2007) presented a paper on the use of both interventions, and it is referenced by the authors themselves, so this is not novel research.
Methodological Issues
Firstly, the intervention is not clearly defined: “.. intervention group were requested to record their blood pressure and body weight in a weekly web based diary through the Internet or by cellular phones.” No justification is given for choosing SMS, nor is any description given of how patients would input information from their cellular phones. It is also unclear how the input of BP, weight, and drug information would allow the system to generate advice about fast-food intake and exercise duration. In addition, given that the data are self-reported, there is no indication of any objective way to confirm the data on which the SMS alerts are based. In the research by Logan et al. (2007), a Bluetooth-enabled home BP monitor was used for greater validity of the measurements.
Secondly, the study omits the size of the population from which participants were drawn, as well as how the sample was selected. This raises the potential for selection bias: it is unclear who selected the control group and what factors, beyond matching on age, sex, systolic BP, diastolic BP, and body weight at the same department, may have been considered. Among the threats to internal validity, this appears to be the most important, because unless the selection was blinded it would have been possible to pull patient records showing no change in the clinical outcome variables.
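For contrast, a transparent matching procedure can be stated in a few lines. The sketch below is hypothetical: the paper matched on age, sex, systolic BP, diastolic BP, and body weight but reported nothing about the algorithm, so the field names and the simple absolute-difference distance here are assumptions.

```python
# Hypothetical nearest-neighbour matching sketch: for each intervention
# patient, pick the still-unmatched candidate of the same sex that
# minimises the summed absolute difference over the numeric variables.
def match_controls(intervention, candidates):
    numeric = ["age", "sbp", "dbp", "weight"]  # assumed field names
    pool = list(candidates)
    matches = {}
    for pid, p in intervention.items():
        same_sex = [c for c in pool if c["sex"] == p["sex"]]
        best = min(same_sex,
                   key=lambda c: sum(abs(c[k] - p[k]) for k in numeric))
        matches[pid] = best
        pool.remove(best)  # match without replacement
    return matches
```

Even a crude, reproducible rule like this one would have let readers judge how comparable the control group really was; the paper gives no such description.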
Thirdly, usage data for the intervention are omitted. A review of related articles on ICT interventions, such as Patrick et al. (2009), Raab et al. (2009), Cocosila et al. (2009), Morak et al. (2008), Logan et al. (2007), and Kwon et al. (2004), shows that they report such data in addition to clinical outcomes. Data one would expect include the mean number of logons per patient per day, alerts sent by SMS and over the Internet, entries for clinical measures such as blood pressure, weight, and drugs, and the most frequent comments over the period. In addition, how did the researchers analyze these data when not all patients had access to a computer or phone?
Fourthly, although the study rests on changing patients' behaviour, the theoretical approach underlying the self-efficacy component is not stated explicitly.
Discussion
This study contributes to the body of literature on behavioural change through online interventions, still a relatively new area of research, and will prompt more in-depth work. However, the results must be interpreted with caution given the fundamental methodological flaws associated with the research.
Another critical element missing from this report, and from the authors' past reports of similar studies, is information on the patients' perspective regarding the ease of use, acceptance, and effectiveness of the interventions. It would be valuable to know the extent to which patients find ICT interventions helpful for disease self-management, self-efficacy, and treatment adherence as the technology becomes an integral part of everyday life. This information would also help inform future research and long-term planning.
Conclusion
The authors conclude, “the intervention using SMS of cellular phone and Internet improved blood pressure, body weight, waist circumference, and HDL-C at 8 weeks in obese hypertensive patients.” However, the methodological concerns, the limited duration of the intervention, and the lack of generalizability due to the small sample size greatly limit the confidence that can be placed in any inferences drawn from this study and call the validity of the results into question. Overall, the poor quality of reporting detracts from the goal of the study.
Questions to the Authors
1. What are the usage data for the cellular phone and Internet intervention, such as the daily response rate per patient per measure, the number of alerts sent, and the number of entries for clinical measures over the 8-week period?
2. How were the data analyzed to determine which alerts to send if not all patients had access to a computer or phone?
3. What is the size of the population from which the participants were drawn?
4. Who selected the control group, and how exactly was this done? What variables were used for matching? What is the potential for selection bias?
5. What is the rationale for excluding patients who changed medication during the intervention period, and how many persons were excluded for this reason in the intervention and control groups?
6. What specific differences were identified using the paired t-test with Bonferroni correction, and why was ANOVA used rather than a t-test when comparing the groups?
7. What exactly was the paired t-test with Bonferroni correction used for?
8. Are the findings statistically significant only, or were they also assessed for clinical significance?
9. What measure was used to determine self-efficacy in adherence to hypertension control?
10. Why was the patients' perspective on the usability and effectiveness of the intervention not included?
11. Do you think a qualitative study of patients' perspectives might have altered the results or helped to inform future research and long-term planning?
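As background to questions 6 and 7, a Bonferroni correction simply divides the significance threshold by the number of comparisons, controlling the family-wise error rate. A minimal sketch, with made-up p-values purely for illustration:

```python
# Bonferroni correction: with m comparisons, each test is judged against
# alpha / m instead of alpha, controlling the family-wise error rate.
def bonferroni_significant(p_values, alpha=0.05):
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Illustrative (made-up) p-values for four outcome comparisons;
# the per-test threshold becomes 0.05 / 4 = 0.0125.
print(bonferroni_significant([0.001, 0.02, 0.04, 0.30]))
```

Note how an uncorrected result at p = 0.02 or p = 0.04 would no longer count as significant after correction, which is why knowing exactly which comparisons the authors corrected for matters.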
Acknowledgement
Thank you to Professor Eysenbach and the fellow graduate students of the 2009 CATCH-IT Journal Club at the University of Toronto for their helpful and insightful discussion and comments, which contributed to this report.
References
1. World Health Organization. Obesity and overweight [Online]. 2003 [cited 2009 Oct]. Available from: http://www.who.int/dietphysicalactivity/publications/facts/obesity/en/.
2.
3.
4.
5.
6.
7.
8.
9.
10.