A total of 22 interviews were conducted in 18 institutions. Of the 22 respondents, 17 (77.3%) were male and 5 (22.7%) were female. Most respondents, 12 (54.5%), were in the 31–40 age bracket, followed by 6 (27.3%) in the 18–30 bracket and 3 (13.6%) in the 41–50 bracket. The respondents spanned a diverse range of cadres, including programme managers, monitoring and evaluation officers, health informatics specialists, software developers, statisticians, and IT systems administrators.
The use of eHealth
All participants mentioned that their organisations use eHealth in their health-related activities. With 'great extent' meaning a very high rate, 'certain extent' a medium rate and 'very small extent' a very low rate, 91% of the respondents indicated that they use eHealth to a great extent, while only 9% indicated that they use it to a certain extent in their activities (Table 1). Data collection and reporting, reported by 9 (41%) participants, was the most common area of eHealth application, followed by data analysis, 4 (18%), and others, as shown in Figure 2 (see Supplementary Figure 2 online). In addition, the most used eHealth systems were DHIS2, 12 (54.5%); mTrac, 9 (41%); and Family Connect, 5 (22.7%), as shown in Figure 1. Some of the responses (quoted verbatim) from the participants included:
“We use information systems in basically all of our services provision: stores, general clinic, laboratory, finance and procurement, etc. …” – Participant 22
“… to a great extent. We have introduced electronic systems to process and disseminate results e.g. SMS, ODK data collect” – Participant 21
“We use ICT to a certain extent. We use it in communications with mobile phones, internet, etc.; data collection; data analysis and use with DHIS2, MTrac, iHRIS, OpenMRS, EMRs etc” – Participant 1
“eHealth is used to a great extent, for example with the use of DHIS2 to support reporting of routine health services from districts, use of MTrac based on rapid sms for surveillance and medicines management, use of HRIS to manage human resources for health” – Participant 12
Organisation practices and motivations for eHealth evaluation
Most participants reported that their institutions make efforts to evaluate the performance of eHealth, although some organisations do not. Half (50%) of the participants indicated that their organisations make these efforts to a great extent, 18% to a certain extent, 23% to a very small extent and 9% not at all (Table 2). Among the reasons for conducting the evaluations (Figure 3), checking the functionality of the eHealth initiatives was the most frequently reported (32%). Participants also noted that institutions conducted evaluations of eHealth because they are a requirement of funders, to keep track of changes in user requirements, to identify gaps in system functionality, and to streamline partners’ approaches to eHealth implementation. Examples of respondents’ feedback included:
“… I think to a great extent, because we conduct these evaluations throughout the implementation of the systems. We conduct the evaluation because one, it is a requirement from our donors, secondly, evaluations help to quickly document achievements, and also capture user feedback. Internal evaluations contribute to our marketing strategy for the systems” – Participant 2
“Evaluations are part of our quarterly activities and at baseline before implementing mobile tools. We do baseline evaluations to determine capacity needs, user attitudes, and challenges to be solved by the mobile tools …” – Participant 18
“We conduct evaluations to a great extent. We go ahead to deploy our staff onsite where the initiatives are implemented” – Participant 3
“… evaluations are conducted to a certain extent and mostly during implementation stages. As the Ministry, our main interest is to streamline how different partners implement eHealth solutions so as to eliminate uncoordinated, fragmented, and duplicate systems” – Participant 4
“… to a very small extent because we do not normally conduct performance evaluations, but we sometimes want to ensure proper flow of system functionality to meet user requirements” – Participant 17
“… may be to some extent. We especially evaluate the system before implementation during user acceptance testing to ensure the system works as expected. The evaluation also reduces complaints from users” – Participant 10
Indicators monitored during eHealth evaluation
Participants reported various indicators that are currently considered during evaluations, the most frequently reported being system availability, system response speed, interoperability, usability, scalability, and availability of human resources to implement the eHealth initiatives, as shown in Figure 4 (see Supplementary Figure 4 online). Among the participants, 9 (41%) did not mention any indicators, either because their organisations did not conduct evaluations or because they had no practice of using indicators for evaluation. Below are some of the responses from participants:
“… Most of the initiatives are user-centered, so we look out for usability indicators” – Participant 8
“We normally evaluate functional and non-functional requirements of the system. Functional requirements are evaluated through checking the functionality of the system and then validation rules on the data. Then some of the non-functional requirements evaluated are system’s interoperability capacity with other systems, cost implications for implementing the system, security, scalability, and sustainability of the system” – Participant 6
“… Mainly we pay attention to functionality to ensure that the system functions as expected” – Participant 10
“… system functionality, data use, impact, human resource capacity, and ICT infrastructure. We also use USAID Measure tools …” – Participant 14
“We assess availability of the computing infrastructure, internet access, capacity of the health workers and staffing levels” – Participant 11
“… We needed to assess value for money … What additional value does the eHealth initiative bring? … and is this additional value worth the additional costs?” – Participant 13
“The indicators we monitor depend on what aspects of the work process you are evaluating …” – Participant 20
“… We assess system availability and its usage. We also assess availability of a champion to lead implementation on the operation site” – Participant 2
“… We only developed the electronic database and dashboard and trained health facility staff, and the project even ended but we did not evaluate implementation of the initiative …” – Participant 15
“… Apart from comparing system functionality with initially specified user requirements, we do not conduct any other evaluation on the systems we deploy. Mainly our aim is to ensure that the systems function well to support the work done by users” – Participant 17
Challenges in eHealth evaluation
The participants reported a wide range of challenges they face during evaluation of eHealth initiatives. The most frequently reported challenges and limitations included limited skills and capacity among the evaluation teams, lack of standard procedures for eHealth implementation and evaluation, limited documentation of the eHealth initiatives, limited resources in terms of time and money, unharmonised interpretation of eHealth performance indicators, and stakeholders’ negative attitudes, as shown in Figure 5 (see Supplementary Figure 5 online). Some of the participants’ responses are below:
“We cannot be sure that all the eHealth systems in any category e.g. EMRs have been assessed because we do not have a comprehensive list of what systems exist in the health sector” – Participant 7
“… We have challenges related to interpretation of evaluation indicators because we do not have them categorised and made more specific, so different stakeholders understand and interpret some indicators differently …” – Participant 2
“There is lack of clear indicators to show contribution of ICT initiatives to the sector wide service delivery. We also lack a clear mechanism of performance evaluation” – Participant 12
“We face challenges like limited resources in terms of time and money, limited expertise more so in the area of data analysis” – Participant 20
“Some users fear that evaluations will expose their low performance so they end up giving impressive feedback that is not objective during the evaluations. And also as the evaluation teams, we do not have a standard to follow while conducting evaluations, so every time we need to conduct an evaluation we put in much effort to plan for it from scratch” – Participant 18
“There is no standard procedure to guide on when and how frequent to do evaluations … there is a problem of unharmonized indicator interpretation and this leads to misinformation …” – Participant 1
“Most of the teams have negative attitude towards evaluation of systems hence few individuals remain willing to participate in this … and it becomes a one man’s show …” – Participant 1
“… there is not enough documentation of these initiatives, so trouble comes when individuals leading their implementation leave the organisations where the initiatives are being implemented … evaluating an initiative without any background information is difficult …” – Participant 3
“… some of the data collectors bring wrong data that is unusable, so they need more training …” – Participant 19