Participants worked in over 20 countries spanning Africa, Asia, Latin America, and the Middle East, with most also living in their countries of work (Table 3). A few were currently associated with HICs (e.g., Canada, Japan), but their primary work experience had been in LMICs. Most of the participants who mentioned their prior professional experience had a medical degree or other post-graduate training. The largest group was responsible for program management or coordination, followed by researchers/academics, physicians, and health financing professionals. The majority had over ten years of implementation experience but fewer than five years of experience in IS.
Concordance analysis results
As the concordance analysis results show (Table 2), the self-reported stakeholder category and the interviewer-assigned category differed for 28% of interviewees (11 of 39). This discordance arose because interviewees played multiple roles within their organizations and throughout their careers. The greatest discordance was among interviewees who classified themselves as clinical researchers and implementers, whom the interviewer classified as implementation researchers and staff managers, respectively. Our stakeholder definitions differentiated between those who managed implementation projects (staff managers) and those responsible for the implementation itself (implementers), but the interviewees did not always make this distinction. Similarly, we distinguished between clinical researchers, who primarily focused on developing interventions, and implementation researchers, who worked on creating and testing implementation strategies. However, these distinctions were blurred, and some clinical researchers also classified themselves as implementers because they provided services while simultaneously engaging in research.
Thematic analysis results
Themes are described in three major categories: experience with IS training, future training needs, and crosscutting contextual issues. The first two themes align directly with interview questions, consistent with the codebook approach to thematic analysis described above. These themes highlight majority perspectives. The third theme arose from our internal reflections and external validation inputs and emphasizes salient learning considerations beyond IS training.
Experience with IS training
Table 4 summarizes interviewees’ perceptions of the IS training they had received to date. About half the stakeholders had no formal IS training; others mostly participated in IS trainings as opportunities arose, for example through workshops or short courses. A few had pursued IS graduate training programs. The modalities through which interviewees had acquired IS knowledge varied widely, including online resources, online courses, textbooks, self-study, collaborative learning or alliances, and conferences.
Most interviewees described their training experiences as positive, saying, for example, that trainings helped them understand what IS is and learn new approaches to research or job duties. However, many struggled to define IS. Seventy-five percent of respondents defined IS as (a) closing the research-to-practice gap in implementing programs or interventions or (b) studying/applying scientific methods to design, implement, and scale programs. For example, one interviewee stated:
"Putting research into practice but doing it in an evidence-based manner. You don’t just translate your findings into practice and ask people to just apply it, but you do it in a way that you make sure to monitor the process and evaluate each step, looking at what goes wrong or right and how to incorporate this in a way that can be scaled up.”
This definition is consistent with responses about how IS is primarily applied. Two-thirds of interviewees stated that they used IS to evaluate implementation efforts or to design and adapt new programs. Fifteen percent indicated using IS to influence policy. Overall, even amongst researchers, the understanding of IS appeared more akin to operational research and process evaluation (e.g., to understand barriers to implementation within a specific program). Only five respondents described using IS to frame or guide implementation research activities, and five were unable to describe how IS applied to their work.
When asked about their use of IS theories, models, and frameworks, almost 40% of interviewees were not using or could not identify any IS frameworks or tools. Fewer than ten stakeholders named any IS-specific models, theories, or frameworks: four respondents described CFIR (Consolidated Framework for Implementation Research), one mentioned EPIS (Exploration, Preparation, Implementation, Sustainment), and two generically described process frameworks. Eleven mentioned using evaluation frameworks, though with the exception of RE-AIM (26), those mentioned were generic, such as theory of change and logic models. This finding reinforces our prior result that there is confusion between IS and process evaluation.
Even respondents who had undergone formal IS training mostly described their understanding of the field as basic or minimal. The stakeholders named several gaps in training, the most common (40%) being difficulty applying IS principles to their work and not knowing how to convey IS to other stakeholders. In the words of one interviewee:
“I think there is a gap in understanding how IS can be integrated into each program to enhance the way it works. Very often, there is so much research out of which recommendations emanate, but there is not always guidance on how to implement those recommendations. There is a gap in knowledge of how do you translate those research outcomes into something meaningful on the ground.”
Most of the other gaps mentioned involved foundational capacity not directly related to IS, such as proposal writing, research design, or data visualization. Generic barriers to filling these training gaps, such as language, time, training locations, training fees, and lack of access to experts, were mentioned, but none was unique to IS training. In summary, most respondents appeared to view the IS training they had received as part of general capacity building in program implementation and evaluation rather than as skills in a separate discipline. One interviewee stated:
“In the design of all projects, there is an inclusion of some sort of evaluation of how things are implemented. You include measures of how processes are going out, they use outcomes, outputs, and activities frameworks.”
Future IS training needs
Table 5 summarizes interviewees’ stated requirements for an ideal IS training program. Reflecting the variation in individual training experience, there was significant heterogeneity in respondents’ opinions of who should be trained, who should train, how training should be conducted, and which topics should be covered. However, amidst this variation, some common themes emerged.
A majority of respondents emphasized the need for IS training to cover basic, practical topics. The top six topics that stakeholders felt should be covered were basic IS knowledge (14), practical application of IS (10), application to LMIC contexts (6), engaging stakeholders (6), integrating IS into program planning and evaluation (6), and IS research methods, grant writing, and dissemination (6).
A significant majority of respondents (70%) preferred a combination of online and in-person training. Many interviewees described the need for interactive training programs, including elements such as workshops, training embedded in fieldwork, peer learning, and interactive online discussion. There was also a feeling that the training duration should be linked to the distribution of time spent online and face-to-face. As one interviewee suggested,
“If it’s in person, then a shorter course. In person is much better. If online, then a bit longer. With online courses, not being able to engage that well is a gap.”
Twenty-three of 39 respondents expressed the need for an interdisciplinary team of trainers. An equal number mentioned the need for trainers to have practical experience, with a subset (17%) expressing a preference for trainers who were comfortable with both theory and practice. In addition, some stakeholders specifically stated that instructors should have experience working in LMICs rather than training grounded only in Western knowledge, and that IS training topics should include the application of IS in specific contexts such as LMICs. As one interviewee mentioned,
“I recently went to an implementation science training…by someone from the UK. There was a bit of discussion after that there was some disconnect between the overly-theoretically driven approach by the lecturer and the actual needs in African contexts. So you need more than an implementation science researcher from North—you need someone working in the African context as well.”
Sixty-five percent of respondents emphasized the need for training programs to include or be supplemented by mentorship or apprenticeship either during or following training. Other ideas for ongoing support were also mentioned frequently, the most common being communities of practice or learning networks and monthly seminars or other structured events. There was an overall sentiment that training alone cannot build the skills needed to take IS principles from theory to practice. In the words of one respondent,
“Current programs place so much emphasis on the theories and frameworks, but little emphasis on mentorship. Beyond being a science, implementation is also an art. Transferring knowledge within the arts involves lots of learnings which are informal.”
Crosscutting contextual issues
Several interviewees did not distinguish between implementation research topics and basic research topics. When asked about gaps in their IS knowledge and desire for future training, many stakeholders listed skills and topics related more to general research than to implementation research. Some topics suggested were retrospective reflection to determine program impact, proposal writing, evaluating literature and evidence, designing research studies, data visualization, analysis, evaluation, project management, and use of statistical software.
Similarly, some interviewees stated that applying implementation research was difficult in their countries because basic research capacity was lacking. The ability to conduct implementation research assumes foundational research methods knowledge, and many interviewees described the need to build these skills first or in conjunction with implementation research capacity. In the words of one interviewee:
“Research capacity and output [in my country] is low...so we are just struggling to do basic research—to do operational research. We haven’t been able to move from actually applying research to improving public health. So it is a difficult thing to do IS.”
In addition, some interviewees found the emphasis on implementation research training premature while there was still a critical need to develop evidence appropriate to LMICs. Several stakeholders felt that much of the evidence is developed in HICs rather than LMICs, and that interventions are “imported.” For example, many stakeholders worked on projects addressing HIV and/or tuberculosis (TB), and the prevalence and impact of HIV and TB in Southern Africa are considerably different from those in HICs. As one interviewee stated:
“The evidence is developed in HICs, but LMICs don’t have the baseline data even of the current status and need. I think that first we must generate that evidence, and then we will need to use IS knowledge to scale up.”
Interviewees also described how funding structures made conducting and applying IS research a challenge in their countries. In some cases, project funding that came from HICs constrained implementers’ local decision-making authority, optimal measurement, and the sustainability of projects. One mentioned that the US-funded project she worked on was “highly prescriptive and mandated by the US,” and two others spoke to the pressure in their US-funded projects to “do things fast” and to measure indicators, tied to targets set by the donor, that impede implementation rather than advance it. Another described how donors impede project sustainability:
“So much of the work is donor-driven and therefore finite. At the end of the project cycle, the partner changes…The big development partners like [US funder] have capacity to absorb outputs [referring to IS research], but so much is done by community organizations. How can we involve those partners and capacitate there?”
Finally, five stakeholders mentioned language as a barrier to learning IS and/or spreading IS knowledge in their countries, pointing out that the vast majority of IS literature is written in English and that “even the very definition of ‘implementation science’ is purely in English.” A French-speaking interviewee mentioned the lack of IS training materials available in French:
“When I started my master's in English, I found it extremely challenging to access resources, to read and understand them to differentiate one approach from the other. For the French-speaking world, it's the fact that training materials and resources in implementation science are unavailable.”
Further, the French-speaking interviewees reported that IS is not widely known as a science in its own right and that some stakeholders involved in IS are not fully aware that they are doing IS work.