Evaluation of eHealth Implementations in Uganda: Investigating Practices, Challenges and Insights

DOI: https://doi.org/10.21203/rs.2.12339/v1

Abstract

Background: The application of information and communication technology is becoming more popular in healthcare management and has been shown to improve the effectiveness, access, quality, and efficiency of healthcare systems. With increased investment in and implementation of eHealth across the world, there is a need to demonstrate its value; that is, evaluation is required in order to get the most benefit out of eHealth implementations. To this end, this study investigates the practices, challenges, and insights regarding the evaluation of eHealth implementations in Uganda. Methods: A qualitative approach was employed, with key eHealth implementers in Uganda as respondents, to establish an understanding of their perspectives on eHealth evaluation practices and the challenges faced, and to derive insights from these perspectives in relation to the World Health Organization (WHO) understanding of digital health evaluation. Results: The results show that Uganda has implemented various eHealth initiatives; however, little to no evaluation is undertaken, as it is not a key activity for most eHealth implementers. The focus is placed on monitoring the eHealth initiatives' functionality and adoption rather than their outcome and impact. Conclusion: Accordingly, the study recommends an evaluation framework, following the WHO global digital health evaluation framework guidelines, to elucidate the notion of evaluation, its characteristics, and the measurement indicators regarding the outcome and impact of eHealth implementations in healthcare and service delivery for Uganda's health system.

Background

Across the world, healthcare systems face pressure to simultaneously guarantee access, quality, and affordable care. Healthcare administrators and policymakers are expected to select interventions that increase the quality and efficiency of services and care and support high performance of health systems [1–3]. The application of information and communication technology (ICT) is becoming more popular in healthcare management and has been shown to improve the effectiveness, access, quality and efficiency of healthcare systems [4–7]. The WHO [8] notes that the application of eHealth is necessary if universal health coverage is to be realised. The increasing investment in eHealth has created a need for evidence that benefits are actually realised from eHealth applications. Such evidence helps to establish the return on investment and value, and guides future eHealth investment and adoption decisions. Evaluation of eHealth interventions helps to generate data that can be used as a basis for assessing whether observed changes in behaviour, processes or health outcomes can be attributed to the interventions [9, 10]. This paper presents the results of an exploratory study conducted among key eHealth implementers in Uganda to establish their perspectives regarding the practices and challenges faced in the evaluation of eHealth implementations.

The Concept of eHealth and its Benefits

Coleman [11] defines eHealth as the use of ICTs in support of health to improve the efficiency and effectiveness of healthcare management and delivery. The WHO [12] definition of eHealth accommodates a variety of areas, including health informatics, digital health, telehealth, telemedicine, eLearning and mobile health. eHealth applications allow communication between healthcare providers and their clients, and the sharing of information and knowledge among healthcare providers [13]. The Internet has also been used for communication and has contributed to better disease management [14, 15]. Learning from the developed world, sub-Saharan African and other developing countries are implementing eHealth tools as a means to improve access to quality and equitable healthcare for poor and vulnerable communities [16]. In sub-Saharan Africa, the use of ICTs in health is evidenced by the use of various mobile health solutions in multiple countries [17] and telemedicine in West Africa [18].

There is much international interest in exploiting the potential of ICTs to improve healthcare [8, 19–21], because the proper use of ICTs in healthcare enables greater efficiency in information processing and improves access to and quality of care [22–24]. Other examples of eHealth technologies include the Internet, which enables prompt communication of medical data, and smartphones, which give clinical staff real-time access to patients' medical data on mobile devices [25]. Research on eHealth in low- and middle-income countries (LMICs) by Lewis et al. [26] indicated that ICTs, including video/photo cameras, computers, Global Positioning Systems, Personal Digital Assistants, phones (smartphones, cell phones, landline phones), radios, remote/portable diagnostic tools, smart cards, software, voice services (e.g. VoIP, hotlines), the Internet, text messaging, and videoconferencing, are used for extending geographic access, facilitating patient communications, improving diagnosis and treatment, improving data management, streamlining financial transactions, mitigating fraud and abuse, overcoming language barriers, or using technology's appeal to attract more patients and greater attention.

According to [12, 26, 27], eHealth is beneficial in the following ways: extending geographic access, empowering patients, improving epidemiology, facilitating patient communications, improving diagnosis and treatment, improving data management, streamlining financial transactions, mitigating fraud and abuse, and creating efficiency; in particular, eHealth improves the efficiency of healthcare systems by lowering operational costs.

eHealth Implementation Challenges

Difficulties and challenges in eHealth implementation are an international phenomenon common to all countries irrespective of their development status [28]. Aspects that threaten ICT systems implementation in the health sector include limited economic resources, income disparities, exorbitant usage fees, excessive costs for even rudimentary health information systems, lack of trained human resources, lack of governmental policies that address a well-defined health system incorporating eHealth, cultural aspects and some resistance to the use of computers for healthcare processes [29], the absence of rigorous evaluation research on the effect of such technologies on health outcomes [30], challenges in systems integration [31, 32] and other organizational barriers to health information technology uptake [33]. Scott & Mars [16] noted that for most developing countries, eHealth remains a proof-of-concept activity, with only modest value demonstrated within small pilot projects.

An Overview of the Evaluation of eHealth Implementations

The concept of evaluation can be defined as a systematic and objective assessment of an intervention that aims to determine the fulfilment of objectives, efficiency, effectiveness, impact, and sustainability [34]. Yusof et al. [35] point out that the questions of why (the reason for conducting an evaluation), who (which stakeholders' view is being evaluated), when (which phase of the system development life cycle is being evaluated), what (which aspects of the system are being evaluated), and how (the choice of research approach) need to be answered when undertaking an evaluation. WHO [10] further defines evaluation as measures taken and analysis performed in order to assess the interaction of users or a health system with the digital health intervention strategy, or changes attributable to the digital health intervention. Throughout the implementation of eHealth initiatives, evaluation is required in order to get the most benefit out of them [36, 37]. Related to evaluation is monitoring. Although monitoring and evaluation are in most cases conducted concurrently, the two concepts differ in the context of measuring the performance and impact of eHealth implementations. WHO [10] emphasizes that monitoring is the routine collection, review, and analysis of data intended to measure implementation progress for an eHealth initiative, and results in adjustments to intervention activities necessary to maintain or improve the quality and consistency of the eHealth deployment. In contrast, evaluation measures changes in outcome and impact that are attributed to the eHealth initiative. WHO [12] observes that monitoring and evaluation of eHealth implementations play an essential role in demonstrating the progress that a country is making towards the development of its national eHealth environment. Lau & Kuziemsky [9] note that an eHealth system covers not only the technical ICT artefacts but also the socio-organizational and environmental factors and processes that influence its behaviour; they therefore argue that the scope of eHealth evaluation can cover the entire life cycle, spanning the planning, design, implementation, use, and maintenance of the eHealth system over time, and that different questions can be raised depending on the life cycle stage being evaluated.

Evaluation of eHealth implementations is a challenging undertaking [9, 38, 39]. There are few published evaluations of eHealth implementations [6, 30, 40–43], especially in developing countries [39]. The difficulty arises, first, because such evaluation does not focus on the technology alone but often needs to consider how the technology components interact with other processes in the eHealth implementation [44], which broadens the scope of the evaluation [45, 46]. Secondly, the evaluation takes place in a complex healthcare setting that involves multiple stakeholder categories (such as patients, clinicians, administrators, IT specialists, and funders) on top of legislative, social, political and economic environments [47]. This poses challenges to the evaluation, since different stakeholders hold different expectations and perspectives of a successful eHealth implementation, which may lead to conflicting evaluation criteria and require multiple study designs and evaluation methods [38, 48, 49]. eHealth evaluations are also resource-intensive and are often hampered by insufficient resources such as time, funding, human resources, and study participants [38].

Notwithstanding the challenges, eHealth evaluation efforts are worth undertaking [50]. Implementers and countries that have evaluated their eHealth implementations have benefited from knowledge about the results of the implementations in the respective programmes [9], and this knowledge base helps to inform decisions on policies, practices, and research [51]. In Europe, the topic of impact assessment and evaluation of eHealth had gained considerable momentum by 2011, to the extent that half of the countries had designated a specific body or institution responsible for eHealth evaluation activities. Various Canadian eHealth evaluation studies evidenced positive benefits from the implementation of electronic medical records and drug information systems [52–54], and these helped to answer questions concerning whether there was sufficient value for money in Canadian electronic health record investments, questions raised earlier in 2009-2010 performance audit reports by the Auditor General of Canada and six provincial auditors' offices [55]. In 2010, Canada's International Development Research Centre (IDRC) conducted an evaluation of the 25 eHealth projects it had funded between 2005 and 2010 in 28 countries across Africa, Asia, and Latin America and the Caribbean (LAC). The projects (50% from Africa, 28% from LAC, and 16% from Asia) focused on contributing evidence and knowledge about how to use technology to help solve health challenges, either through eHealth tools that tackled one or more specific challenges or through general health systems strengthening. The evaluation results showed the contributions of the projects in the regions and informed IDRC's future programming in eHealth research [56]. An evaluation of the United Kingdom's implementation and adoption of a nationwide electronic health records system indicated limited visible benefits for clinicians and patients, and it guided the eventual closedown of the initiative [57–60]. An assessment of the successes and challenges of eHealth in Africa and developing countries [61] indicated that most initiatives lacked documentation and proper evaluation, so their overall success was uncertain, but it produced recommendations to guide future implementations. All the above cases illustrate how eHealth evaluation has been given attention in some countries and how the evaluation results have been used to inform decisions.

Status of eHealth in Uganda

Uganda, like most developing countries, has employed eHealth applications to improve healthcare delivery and public health [16]. The growth of ICT has created a fertile environment for new innovations whose application in Uganda's health sector has yielded positive results, especially in disease control and prevention through disease surveillance [62]. Well-known eHealth systems implemented in Uganda include DHIS2, which supports routine health data reporting from the district level to the national level [63]; mTrac, an SMS-, USSD- and web-based data collection tool through which health workers at district health centres submit weekly HMIS reports on disease outbreaks and stock-outs of essential medicines [64]; and OpenMRS, an electronic medical records application that supports records management functions, especially in HIV-care health facilities [63], among others.
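For readers unfamiliar with what routine reporting into DHIS2 involves, the sketch below shows an aggregate data value set being submitted through the standard DHIS2 Web API. It is a minimal illustration only, not a description of Uganda's actual configuration: the server URL, credentials, and all identifiers are hypothetical placeholders.

```python
import requests

# Hypothetical DHIS2 instance and credentials; a real deployment would use the
# national HMIS server URL and an account authorised to submit data.
BASE_URL = "https://dhis2.example.org/api"
AUTH = ("reporting_user", "secret")

# One facility's monthly aggregate report. All UIDs below are illustrative
# placeholders, not identifiers from any real DHIS2 configuration.
data_value_set = {
    "dataSet": "dataSetUid0001",
    "period": "202401",                  # DHIS2 monthly period format (YYYYMM)
    "orgUnit": "facilityUid001",         # reporting health facility
    "dataValues": [
        {"dataElement": "opdAttendUid1", "value": "342"},
        {"dataElement": "malariaCasUid", "value": "57"},
    ],
}

# Submit the data value set to the standard DHIS2 Web API endpoint.
response = requests.post(
    f"{BASE_URL}/dataValueSets",
    json=data_value_set,
    auth=AUTH,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # import summary; exact shape varies by DHIS2 version
```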

Although Uganda hosts various eHealth projects, most of them are pilot projects that operate in silos and lack sustainability [64, 65]. For example, approximately 23 of 36 mHealth initiatives started in 2008 and 2009 did not move beyond the pilot phase [66]. Such situations have been criticized as 'pilotitis', an expression of dissatisfaction from donors and governments with isolated eHealth initiatives that succeed in one context but are never rolled out [67]. Faced with this large number of uncoordinated pilot projects, the Government of Uganda imposed a moratorium on new eHealth activities, under which new eHealth initiatives would be approved only if they met the existing Ministry of Health requirements [68]. The National eHealth Policy and Strategy [69] were also developed to guide the development and implementation of eHealth in the country.

Methods

We used qualitative methods [70] to investigate perspectives regarding the practices and challenges faced in evaluating eHealth implementations. The authors developed the guiding interview questions by consensus around the following interview guide topics: the use of eHealth in organisation activities; organisation practices, motivations and challenges in eHealth evaluation; performance indicators for eHealth evaluation; and existing tools and resources for supporting eHealth evaluation. A semi-structured approach with a mixed questionnaire that included both closed and open-ended questions about the institutions' experience in implementing and evaluating eHealth initiatives was used during the interviews, which allowed opportunities to probe for more information and to seek clarification where necessary.

Between June 2018 and November 2018, face-to-face semi-structured interviews were conducted with twenty-two (22) key informants from eighteen (18) key eHealth implementation stakeholder institutions in Uganda, which were selected using a combination of purposive and convenience sampling. Initially, the Ministry of Health (MoH)'s Division of Health Information (DHI), the custodian of eHealth in Uganda, was contacted to recommend key eHealth implementing institutions to participate in the study. Of the twenty-seven (27) recommended institutions, three (3) were not contacted because their offices and contact details could not be accessed during the data collection period. Entry contacts at twenty-two (22) institutions were approached; we explained the study objectives and asked them to nominate their most appropriate staff involved in eHealth implementation or evaluation to participate in the interviews. Of those contacted, eighteen (18) institutions responded positively and nominated staff to attend the interviews. Four (4) institutions did not participate because the nominated staff did not provide the researchers with interview appointments.

Verbal consent to participate in the study was obtained from the twenty-two (22) participants in the eighteen (18) institutions, and face-to-face interviews were conducted on separate days at scheduled times at each participant's place of work. The first author (JA) conducted the interviews in English, each lasting between 60 and 90 minutes. Participants' responses were recorded as extensive written notes; the responses to each question were reviewed with each participant to ensure that no erroneous data was carried over, and additional field notes were written immediately after each interview. The interviews were analysed using a thematic content analysis approach [43], in which both authors read all the notes to familiarise themselves with the text, then identified codes, categorised the codes, and developed themes from the collected data. Quantitative information about the resultant codes and other quantitative responses was analysed using SPSS (Statistical Package for the Social Sciences) software. Feedback on the field findings was then shared with the MoH's DHI for review and identification of any obvious outliers in the collected data. The DHI did not identify any outliers; as such, the findings presented in this paper reflect the prevailing practices in eHealth implementation and evaluation in the country.
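As a simple illustration of the quantitative step described above (a sketch under assumed inputs, not the authors' actual SPSS procedure), the snippet below tallies hypothetical coded responses into the frequency, percent, and cumulative percent columns of the kind reported in Tables 1 and 2.

```python
from collections import Counter

# Hypothetical coded responses to one closed question (extent of eHealth use);
# in the study the equivalent tallies were produced in SPSS from the coded notes.
codes = ["To a great extent"] * 20 + ["To a certain extent"] * 2

counts = Counter(codes)
total = sum(counts.values())

# Print a frequency table similar in layout to the SPSS output in Table 1.
print(f"{'Response':<24}{'Frequency':>10}{'Percent':>10}{'Cumulative %':>14}")
cumulative = 0.0
for response, freq in counts.most_common():
    percent = 100 * freq / total
    cumulative += percent
    print(f"{response:<24}{freq:>10}{percent:>10.1f}{cumulative:>14.1f}")
print(f"{'Total':<24}{total:>10}{100.0:>10.1f}")
```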

Results

A total of 22 interviews were conducted in 18 institutions. Of the 22 respondents, 17 (77.3%) were male and 5 (22.7%) were female. Most of the respondents, 12 (54.5%), were in the 31-40 age bracket, followed by 6 (27.3%) in the 18-30 bracket and 3 (13.6%) in the 41-50 bracket. The respondents included a diverse range of cadres, including programme managers, monitoring and evaluation officers, health informatics specialists, software developers, statisticians, and IT systems administrators.

Use of eHealth in work practices

All participants mentioned that their organisations used eHealth in their health-related activities; 91% of the respondents indicated that they used eHealth to a great extent, while only 9% used it to a certain extent (Table 1). Data collection and reporting (9, 41%) was the most common area of eHealth application, followed by data analysis (4, 18%) and others, as shown in Figure 2 (see Supplementary Figure 2 online). In addition, DHIS2 (12, 54.5%), mTrac (9, 41%) and Family Connect (5, 22.7%) were the most used eHealth systems, as shown in Figure 1 and evidenced in some of the participants' responses: "We use information systems in basically all of our services provision; stores, general clinic, laboratory, finance and procurement, etc." -- Participant 22

“… to a great extent. We have introduced electronic systems to process and disseminate results e.g. SMS, ODK data collect” – Participant 21

“We use ICT to a certain extent. We use it in communications with mobile phones, and Internet, etc.; data collection; data analysis and use with DHIS2, MTrac, iHRIS, OpenMRS, EMRs etc” – Participant 1

“eHealth is used to a great extent, for example with the use of DHIS2 to support reporting of routine health services from districts, use of MTrac based on rapid sms for surveillance and medicines management, use of HRIS to manage human resources for health” – Participant 12

Organisations’ motivations for eHealth evaluation

Many participants reported that their institutions made efforts to evaluate the performance of eHealth, although some organisations did not: 50% of the participants indicated that their organisations made such efforts to a great extent, 18% to a certain extent, 23% to a very small extent, and 9% not at all (Table 2). Looking into the reasons for conducting the evaluations, checking the functionality of the eHealth initiatives was the most reported reason (32%). Participants also reported (see examples of verbatim responses below) that institutions conducted evaluations of eHealth because it is a requirement from funders, to keep track of changes in user requirements, to identify gaps in system functionality, and to streamline partners' approaches to eHealth implementation: "… I think to a great extent, because we conduct these evaluations throughout the implementation of the systems. We conduct the evaluation because one, it is a requirement from our donors, secondly, evaluations help to quickly document achievements, and also capture user feedback. Internal evaluations contribute to our marketing strategy for the systems"-- Participant 2

“Evaluations are part of our quarterly activities and at baseline before implementing mobile tools. We do baseline evaluations to determine capacity needs, user attitudes, and challenges to be solved by the mobile tools...” -- Participant 18

“We conduct evaluations to a great extent. We go ahead to deploy our staff onsite where the initiatives are implemented” --Participant 3

“… evaluations are conducted to a certain extent and mostly during implementation stages. As the Ministry, our main interest is to streamline how different partners implement eHealth solutions so as to eliminate uncoordinated, fragmented, and duplicate systems” – Participant 4

“.. to a very small extent because we do not normally conduct performance evaluations, but we sometimes want to ensure proper flow of system functionality to meet user requirements” – Participant 17

“… may be to some extent. We especially evaluate the system before implementation during user acceptance testing to ensure the system works as expected. The evaluation also reduces complaints from users” – Participant 10

Indicators monitored during eHealth evaluation

Participants reported various indicators currently considered during evaluations, the most reported being system availability, system response speed, interoperability, usability, scalability, and availability of human resources to implement the eHealth initiatives, as shown in Figure 4 (see Supplementary Figure 4 online). Among the participants, 9 (43%) did not mention any indicators because their organisations did not conduct evaluations or did not have a practice of using indicators for evaluation. Below are some of the responses from participants:

“… Most of the initiatives are user-centered, so we look out for usability indicators” – Participant 8

“We normally evaluate functional and non-functional requirements of the system. Functional requirements are evaluated through checking the functionality of the system and then validation rules on the data. Then some of the non-functional requirements evaluated are system’s interoperability capacity with other systems, cost implications for implementing the system, security, scalability, and sustainability of the system” – Participant 6

“… Mainly we pay attention to functionality to ensure that the system functions as expected” – Participant 10

“…. system functionality, data use, impact, human resource capacity, and ICT infrastructure. We also use USAID Measure tools …” – Participant 14

“We assess availability of the computing infrastructure, internet access, capacity of the health workers and staffing levels” – Participant 11

“… We needed to assess value for money … What additional value does the eHealth initiative bring? … and is this additional value worth the additional costs?” -- Participant 13

“The indicators we monitor depend on what aspects of the work process you are evaluating …. “ --Participant 20

“.. We assess system availability and its usage. We also assess availability of a champion to lead implementation on the operation site” – Participant 2

“… We only developed the electronic database and dashboard and trained health facility staff, and the project even ended but we did not evaluate implementation of the initiative…” -- Participant 15

“… Apart from comparing system functionality with initially specified user requirements, we do not conduct any other evaluation on the systems we deploy. Mainly our aim is to ensure that the systems function well to support the work done by users” – Participant 17

Challenges in eHealth Evaluation

The participants reported a wide range of challenges they faced during the evaluation of eHealth initiatives. The most reported challenges and limitations included limited skills/capacity among the evaluation teams, a lack of standard procedures for eHealth implementation and evaluation, limited documentation about the eHealth initiatives, limited resources in terms of time and money, unharmonised interpretation of eHealth performance indicators, and stakeholders' negative attitudes, as shown in Figure 5 (see Supplementary Figure 5 online) and evidenced in some of their responses below:

“We cannot be sure that all the eHealth systems in any category e.g. EMRs have been assessed because we do not have a comprehensive list of what systems exist in the health sector” – Participant 7

“…. We have challenges related to interpretation of evaluation indicators because we do not have them categorised and made more specific, so different stakeholders understand and interpret some indicators differently….”  – Participant 2

“There is lack of clear indicators to show contribution of ICT initiatives to the sector wide service delivery. We also lack a clear mechanism of performance evaluation” – Participant 12

“We face challenges like limited resources in terms of time and money, limited expertise more so in the area of data analysis” – Participant 20

“Some users fear that evaluations will expose their low performance so they end up giving impressive feedback that is not objective during the evaluations. And also as the evaluation teams, we do not have a standard to follow while conducting evaluations, so every time we need to conduct an evaluation we put in much effort to plan for it from scratch” – Participant 18

“There is no standard procedure to guide on when and how frequent to do evaluations …… there is a problem of un-harmonised indicator interpretation and this leads to misinformation…” – Participant 1

“Most of the teams have negative attitude towards evaluation of systems hence few individuals remain willing to participate in this… and it becomes a one man’s show ...” -- Participant 1

“… there is not enough documentation of these initiatives, so trouble comes when individuals leading their implementation leave the organisations where the initiatives are being implemented … evaluating an initiative without any background information is difficult …” – Participant 3

“… some of the data collectors bring wrong data that is unusable, so they need more training …” – Participant 19

Discussion

eHealth practices – Table 1 evidences that all institutions apply eHealth practices in some way in the country [64, 65]. From Figures 1 and 2, we observe that there are various areas of eHealth application, although few respondents use them; in other words, eHealth implementation in Uganda is not integrated but operated in silos [65]. Regarding eHealth evaluation, Table 2 indicates that only 50% of the respondents conduct eHealth evaluation to a great extent, while for the rest (50%) it is done only to a small extent or not at all. This implies that there is no concerted culture of eHealth evaluation in Uganda. Looking into the reasons why evaluations are conducted (see Figure 3), most of the respondents provided reasons related to ensuring proper functionality of the eHealth initiatives. This is also reflected in the indicators measured in Figure 4, where system availability, response speed, interoperability, usability, scalability, and availability of human resources to implement the eHealth initiatives are the most measured indicators. The World Health Organization [10] and WHO & ITU [12] categorise such indicators as process and output indicators that provide information and insight on the adoption of an eHealth initiative and are more suitable for monitoring its implementation; they do not necessarily evaluate the performance of the eHealth initiative. This implies that even though most respondents conduct evaluations of their eHealth implementations, they mainly monitor eHealth deployment, functionality, and adoption rather than measuring the outcome and impact that result from the eHealth implementations. Uganda is not the only country facing the challenge of weak eHealth evaluation mechanisms. According to the World Health Organization's observations in its Global Observatory for eHealth survey of 2016, although rapid growth in the implementation of eHealth initiatives was reported among WHO member states (109, 87%), very few states (16, 14%) conducted evaluations of the initiatives. The Eastern Mediterranean and South-East Asia regions had the highest percentages of countries that conducted evaluations, while in terms of the World Bank income groupings, high-income countries reported the highest percentage of countries that conducted evaluations of the initiatives [8].

eHealth evaluation challenges – Most respondents reported limited skills/capacity among the evaluation teams, a lack of standard procedures for eHealth implementation and evaluation, limited documentation about the eHealth initiatives [61], and un-harmonised interpretation of eHealth performance indicators [9]. Other challenges reported by more than one respondent included limited resources (finances and time) for eHealth evaluation activities, the absence of defined impact evaluation indicators [38], and stakeholders' attitudes towards the evaluation [71]. The challenges implementers face in evaluating eHealth initiatives are largely attributable to the fact that the country had no guidelines for eHealth implementation and evaluation, and implementers had not yet invested in building the capacity relevant for evaluating eHealth implementations [38]. Although the country's National eHealth Policy and Strategy [69] was developed and launched in 2017, all existing eHealth initiatives in the country had been implemented without following any national guidance. Worse still, there were no detailed guidelines for evaluating eHealth initiatives in the country.

Insights learned from the evaluation of eHealth implementations – From this study, we learned that implementers in Uganda undertake "monitoring" activities for their eHealth implementations far more than evaluation; that is, the implementers assumed that such monitoring activities and efforts could also be used to evaluate the impact and contribution of the eHealth implementations to the main programme objectives. This coincides with observations in Otto et al. [72], where, of the twelve eHealth cases studied across sixteen African countries, only very few had undergone impact evaluation. In that study, only Ethiopia's FrontLineSMS and Malawi's CommTrack were evaluated for impact, while neither of Uganda's cases, RapidSMS and Trac FM, was evaluated [72]. Following guidance by WHO & ITU [12], eHealth evaluation activities and efforts should extend observations and measurements beyond process and output indicators to include outcome and impact indicators for each eHealth implementation or initiative in question. To improve the practice of eHealth evaluation in Uganda, efforts are needed to change implementers' perspectives on eHealth evaluation; a key effort is the development of an eHealth evaluation framework that defines the notion of "evaluation", its characteristics, and the indicators that should be measured with regard to the performance and impact of eHealth implementations in healthcare and service delivery for Uganda's health system.

Conclusions

In this study, we share findings from an exploratory study of eHealth implementation evaluation practices and the challenges faced in Uganda. We learned that Uganda had implemented various eHealth initiatives that neither followed national eHealth implementation guidelines nor involved the government (i.e. the Ministry of Health), despite the government's increased efforts, such as the development of the eHealth Policy and Strategy, to regulate and guide eHealth implementation in the country. We also learned that most of the eHealth implementations were actually "monitored", an activity that was erroneously treated as "evaluation" of these initiatives. In other words, the process of evaluating the outcome and impact attributed to eHealth initiatives had not been a key activity for most eHealth implementers. This could be attributed to the fact that the country did not have guidelines on impact evaluation for eHealth implementations; as such, the eHealth implementers mainly monitored eHealth deployment, functionality, and adoption.

Notwithstanding the above challenges and lessons learned, our research findings can play a vital role by providing a baseline on which health leaders, policymakers, and eHealth implementers can set improvement targets and action plans for strengthening and sustaining eHealth in Uganda. Accordingly, there is a need to develop eHealth evaluation guidelines and indicators that can be used to evaluate the outcome and impact of eHealth implementations in the country. Additionally, we advocate raising awareness of the need to plan for eHealth evaluation, in addition to monitoring activities, when planning eHealth implementation programmes. This forms our next research step: establishing an all-encompassing eHealth evaluation framework to guide comprehensive evaluation of all of Uganda's eHealth implementations towards a more sustainable digital health system for Uganda.

Declarations

Ethics Approval and Consent to Participate

Following the researchers’ request and its acceptance by the Ministry of Health, Uganda, verbal consent was obtained from all participants. Verbal consent was deemed acceptable because the majority of the participants are members of the eHealth Technical Working Group (EHTWG), which is chaired by the Permanent Secretary of the Ministry of Health.

Consent for Publication

Not applicable.

Availability of data and material

The materials used in this study are available upon request from the corresponding author.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgements

We would like to acknowledge all the respondents who participated in our field study particularly the Ministry of Health, Uganda (Division of Health Information), EHTWG members, and eHealth Implementers in Uganda.

Funding

We acknowledge the support and funding for the field study, particularly the data collection, provided by the HI-TRain project at Makerere University under the Norwegian Government Higher Education Funding Agency (NorHED), NORAD, Norway.

Authors' contributions

Both authors (J.A. and J.N.) worked together to develop the study protocol, administer it, and analyse the collected data. Both authors developed the full manuscript.

References

  1. Ryan J, Doty MM, Abrams MK, Riley P. The Adoption and Use of Health Information Technology by Community Health Centers, 2009–2013. New York, NY Commonw Fund. 2014.
  2. Codagnone C, Lupiañez-Villanueva F. Benchmarking deployment of eHealth among general practitioners. Final Rep. 2013;10:24556.
  3. Kivunike FN, Ekenberg L, Danielson M, Tusubira FF. Using a structured approach to evaluate ICT4D: Healthcare delivery in Uganda. Electron J Inf Syst Dev Ctries. 2015;66:1–16.
  4. Adler-Milstein J, Sarma N, Woskie LR, Jha AK. A comparison of how four countries use health IT to support care for people with chronic conditions. Health Aff. 2014;33:1559–66.
  5. Fanta GB, Pretorius L, Erasmus L. An evaluation of eHealth systems implementation frameworks for sustainability in resource constrained environments: a literature review. In: IAMOT 2015 Conference Proceedings, Cape Town. 2015.
  6. Shuvo TA, Islam R, Hossain S, Evans JL, Khatun F, Ahmed T, et al. eHealth innovations in LMICs of Africa and Asia: a literature review exploring factors affecting implementation, scale-up, and sustainability. Health Care (Don Mills). 2015;8:9.
  7. Hyppönen H, Ronchi E, Adler-Milstein J. Health care performance indicators for health information systems. Stud Heal Technol Inf. 2016;222:181–94.
  8. World Health Organization. Global diffusion of eHealth: making universal health coverage achievable: report of the third global survey on eHealth. World Health Organization; 2016.
  9. Lau F, Kuziemsky C. Handbook of eHealth evaluation: an evidence-based approach. 2016.
  10. World Health Organization. Monitoring and evaluating digital health interventions: a practical guide to conducting research and assessment. 2016.
  11. Coleman A. Migration from resource based to knowledge based strategy for e-health implementation in developing countries. J Commun. 2014;5:1–7.
  12. WHO & ITU. National eHealth strategy toolkit. International Telecommunication Union; 2012.
  13. Natrielli DG, Enokibara M. The use of telemedicine with patients in clinical practice: The view of medical psychology. Sao Paulo Med J. 2013;131:62–3.
  14. Piette JD, Lun KC, Moura Jr LA, Fraser HSF, Mechael PN, Powell J, et al. Impacts of e-health on the outcomes of care in low-and middle-income countries: where do we go from here? Bull World Health Organ. 2012;90:365–72.
  15. Diamantidis CJ, Becker S. Health information technology (IT) to improve the care of patients with chronic kidney disease (CKD). BMC Nephrol. 2014;15:7.
  16. Scott RE, Mars M. Telehealth in the developing world: current status and future prospects. Smart Homecare Technol TeleHealth. 2015;3:25–37.
  17. Källander K, Tibenderana JK, Akpogheneta OJ, Strachan DL, Hill Z, ten Asbroek AHA, et al. Mobile health (mHealth) approaches and lessons for increased performance and retention of community health workers in low-and middle-income countries: a review. J Med Internet Res. 2013;15:e17.
  18. Bagayoko CO, Anne A, Fieschi M, Geissbuhler A. Can ICTs contribute to the efficiency and provide equitable access to the health care system in Sub-Saharan Africa? The Mali experience. Yearb Med Inform. 2011;20:33–8.
  19. Foh K-L. Integrating healthcare: The role and value of mobile operators in eHealth. GSMA mHealth Program Tech Rep. 2012.
  20. Achampong EK. The state of information and communication technology and health informatics in Ghana. Online J Public Health Inform. 2012;4.
  21. Betjeman TJ, Soghoian SE, Foran MP. mHealth in sub-Saharan Africa. Int J Telemed Appl. 2013;2013:6.
  22. Zobel R. Health in the information and knowledge economy age--a European perspective. Stud Heal Technol Inf. 2004;108:1–4.
  23. Iakovidis I, Wilson P, Healy JC. E-health: current situation and examples of implemented and beneficial e-health applications. Ios Press; 2004.
  24. Al-Shorbaji N. The World Health Assembly resolutions on eHealth: eHealth in support of universal health coverage. Methods Inf Med. 2013;52:463–6.
  25. Wyatt T, Krauskopf P. E-health and nursing: using smartphones to enhance nursing practice. Online J Nurs Informatics. 2012;16:1706.
  26. Lewis T, Synowiec C, Lagomarsino G, Schweitzer J. E-health in low-and middle-income countries: findings from the Center for Health Market Innovations. Bull World Health Organ. 2012;90:332–340.
  27. Naseem A, Rashid A, Kureshi NI. E-health: effect on health system efficiency of Pakistan. Ann Saudi Med. 2014;34:59.
  28. Merrell RC. Review of National e-Health Strategy Toolkit. Telemed e-Health. 2013;19:994.
  29. Porter ME, Lee TH. The strategy that will fix health care. Harv Bus Rev. 2013;91:1–19.
  30. Labrique AB, Vasudevan L, Kochi E, Fabricant R, Mehl G. mHealth innovations as health system strengthening tools: 12 common applications and a visual framework. Glob Heal Sci Pract. 2013;1:160–71.
  31. Bloomfield GS, Vedanthan R, Vasudevan L, Kithei A, Were M, Velazquez EJ. Mobile health for non-communicable diseases in Sub-Saharan Africa: a systematic review of the literature and strategic framework for research. Global Health. 2014;10:49.
  32. Ahmed T, Lucas H, Khan AS, Islam R, Bhuiya A, Iqbal M. eHealth and mHealth initiatives in Bangladesh: a scoping study. BMC Health Serv Res. 2014;14:260.
  33. Lluch M. Healthcare professionals’ organisational barriers to health information technologies—A literature review. Int J Med Inform. 2011;80:849–62.
  34. World Health Organisation. AFR/RC63/9: Utilizing eHealth solutions to improve national health systems in the African Region. Brazzaville; 2013.
  35. Yusof MM, Papazafeiropoulou A, Paul RJ, Stergioulas LK. Investigating evaluation frameworks for health information systems. Int J Med Inform. 2008;77:377–85.
  36. Yusof MM, Kuljis J, Papazafeiropoulou A, Stergioulas LK. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit). Int J Med Inform. 2008;77:386–98.
  37. Ammenwerth E, Brender J, Nykänen P, Prokosch H-U, Rigby M, Talmon J. Visions and strategies to improve evaluation of health information systems: Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inform. 2004;73:479–91.
  38. Ammenwerth E, Gräber S, Herrmann G, Bürkle T, König J. Evaluation of health information systems—problems and challenges. Int J Med Inform. 2003;71:125–35.
  39. Blaya JA, Fraser HSF, Holt B. E-health technologies show promise in developing countries. Health Aff. 2010;29:244–51.
  40. Scott RE, Saeed A. Global eHealth: measuring outcomes: why, what, and how. Bellagio Rockefeller Found. 2008.
  41. Agarwal S, Labrique A. Newborn health on the line: the potential mHealth applications. Jama. 2014;312:229–30.
  42. Zhao J, Freeman B, Li M. Can mobile phone apps influence people’s health behavior change? An evidence review. J Med Internet Res. 2016;18:e287.
  43. Lee SH, Nurmatov UB, Nwaru BI, Mukherjee M, Grant L, Pagliari C. Effectiveness of mHealth interventions for maternal, newborn and child health in low–and middle–income countries: Systematic review and meta–analysis. J Glob Health. 2016;6.
  44. Ammenwerth E, De Keizer N. An inventory of evaluation studies of information technology in health care. Methods Inf Med. 2005;44:44–56.
  45. Palvia SC, Sharma RS, Conrath DW. A socio-technical framework for quality assessment of computer information systems. Ind Manag Data Syst. 2001;101:237–51.
  46. Berg M. Patient care information systems and health care work: a sociotechnical approach. Int J Med Inform. 1999;55:87–101.
  47. Greenhalgh T, Russell J. Why do evaluations of eHealth programs fail? An alternative set of guiding principles. PLoS Med. 2010;7:e1000360.
  48. Heathfield H, Hudson P, Kay S, Mackay L, Marley T, Nicholson L, et al. Issues in the multi-disciplinary assessment of healthcare information systems. Inf Technol People. 1999;12:253–75.
  49. Moehr JR. Evaluation: salvation or nemesis of medical informatics? Comput Biol Med. 2002;32:113–25.
  50. Sittig DF. Electronic health records: Challenges in design and implementation. CRC Press; 2013.
  51. Murray E, Hekler EB, Andersson G, Collins LM, Doherty A, Hollis C, et al. Evaluating digital health interventions: key questions and approaches. 2016.
  52. Deloitte. National Impact of Generation 2 Drug Information Systems Technical Report (Full) | Canada Health Infoway. 2010. https://www.infoway-inforoute.ca/en/component/edocman/resources/reports/331-national-impact-of-generation-2-drug-information-systems-technical-report-full. Accessed 14 Dec 2018.
  53. Fernandes O, Etchells E, Lee AW, Siu V, Bell C. What is the impact of a centralized provincial drug profile viewer on the quality and efficiency of patient admission medication reconciliation? A randomized controlled trial. Can J Hosp Pharm. 2011;64:82–6.
  54. O’reilly D, Holbrook A, Blackhouse G, Troyan S, Goeree R. Cost-effectiveness of a shared computerized decision support system for diabetes linked to electronic medical records. J Am Med Informatics Assoc. 2011;19:341–5.
  55. Office of the Auditor General of Canada. Electronic Health Records in Canada: An Overview of Federal and Provincial Audit Reports. Ottawa, Ontario; 2010. https://auditor.sk.ca/pub/publications/special/2010EHealthRecordsinCanada.pdf.
  56. Mechael P. Evaluation of IDRC-supported eHealth projects. 2011.
  57. Robertson A, Cresswell K, Takian A, Petrakaki D, Crowe S, Cornford T, et al. Implementation and adoption of nationwide electronic health records in secondary care in England: qualitative analysis of interim results from a prospective national evaluation. Bmj. 2010;341:c4564.
  58. National Audit Office. The National Programme for IT in the NHS: an update on the delivery of detailed care records systems. 2011. https://www.nao.org.uk/wp-content/uploads/2011/05/1012888.pdf.
  59. Sheikh A, Cornford T, Barber N, Avery A, Takian A, Lichtner V, et al. Implementation and adoption of nationwide electronic health records in secondary care in England: final qualitative results from prospective national evaluation in “early adopter” hospitals. Bmj. 2011;343:d6054.
  60. Takian A, Sheikh A, Barber N. We are bitter, but we are better off: case study of the implementation of an electronic health record system into a mental health hospital in England. BMC Health Serv Res. 2012;12:484. doi:10.1186/1472-6963-12-484.
  61. Molefi M. An Assessment of eHealth Projects and Initiatives in Africa. World Heal Organ Geneva, Switz. 2010.
  62. Omaswa C. Uganda National E-health Policy 2013.
  63. Kiberu VM, Matovu JKB, Makumbi F, Kyozira C, Mukooyo E, Wanyenze RK. Strengthening district-based health reporting through the district health management information software system: the Ugandan experience. BMC Med Inform Decis Mak. 2014;14:40.
  64. Huang F, Blaschke S, Lucas H. Beyond pilotitis: taking digital health interventions to the national level in China and Uganda. Global Health. 2017;13:49. doi:10.1186/s12992-017-0275-z.
  65. Kiberu VM, Mars M, Scott RE. Barriers and opportunities to implementation of sustainable e-Health programmes in Uganda: A literature review. African J Prim Heal care Fam Med. 2017;9:1–10.
  66. Lemaire J. Scaling up mobile health: Elements necessary for the successful scale up of mHealth in developing countries. Geneva Adv Dev Africa. 2011.
  67. Free C, Phillips G, Galli L, Watson L, Felix L, Edwards P, et al. The Effectiveness of Mobile-Health Technology-Based Health Behaviour Change or Disease Management Interventions for Health Care Consumers: A Systematic Review. 2013.
  68. McCann D. A Ugandan mHealth Moratorium Is a Good Thing - ICTworks. 2012. https://www.ictworks.org/ugandan-mhealth-moratorium-good-thing/. Accessed 14 Dec 2018.
  69. Uganda Ministry of Health. Uganda National eHealth Policy and Strategy. 2016.
  70. Neuman WL. Social Research Methods: Qualitative and Quantitative Approaches. USA: Pearson Education, Inc; 2003.
  71. Henderson RD, Deane FP. User expectations and perceptions of a patient management information system. Comput Nurs. 1996;14:188–93.
  72. Otto K. Information and communication technologies for health systems strengthening: opportunities, criteria for success, and innovation for Africa and beyond. 2015.
  73. Luna-Reyes LF, Andersen DL. Collecting and analyzing qualitative data for system dynamics: methods and models. Syst Dyn Rev. 2003;19(4):271–96.

Tables

Table 1: Use of eHealth in work practices

Response               Frequency   Percent   Valid Percent   Cumulative Percent
To a great extent      20          90.9      90.9            90.9
To a certain extent    2           9.1       9.1             100.0
Total                  22          100.0     100.0


Table 2: Extent of eHealth evaluation

Response                 Frequency   Percent   Valid Percent   Cumulative Percent
To a great extent        11          50.0      50.0            50.0
To a certain extent      4           18.2      18.2            68.2
To a very small extent   5           22.7      22.7            90.9
Not at all               2           9.1       9.1             100.0
Total                    22          100.0     100.0