Interviews were conducted between June 2021 and January 2022. Our final sample included 22 researchers (66% of those invited), and interviews lasted 50 minutes on average. The sample comprised 18 faculty members from research-intensive universities, 2 researchers from non-profit research organizations, 1 from a pharmaceutical company, and 1 US government implementation researcher. Nine researchers focused primarily on mental health and substance abuse outcomes, 8 on the delivery of general health services, 2 on HIV and ART care, 1 on cancer outcomes, 1 on non-communicable disease outcomes, and 1 on nutrition outcomes. Twenty-one participants were based in the US and the remaining participant in the UK. Eighteen focused their research domestically and 4 focused on low- and middle-income countries.
Our analysis identified four major themes: (1) a current lack of validated fidelity tools coupled with the need to assess fidelity in the short term; (2) complexity of implementation strategies creating inherent difficulties in assessing their fidelity; (3) conceptual complications when assessing fidelity within mechanisms-focused implementation research; and (4) structural barriers related to funding agencies and publication. We present each thematic barrier alongside proposed solutions, using illustrative quotes to highlight key facets of and variations within each theme. Solutions to barriers included (1) utilizing strategy specification and tracking techniques as well as theories of change, (2) allowing experts to lead the way in the development of fidelity tools for complex strategies, (3) adopting and enforcing implementation strategy fidelity reporting guidelines, (4) directing funds toward developing approaches to implementation strategy fidelity measurement, (5) utilizing technological innovations to facilitate efficient implementation strategy fidelity data collection, and (6) integrating implementation strategy fidelity assessment into mechanisms-focused implementation research.
Barrier 1: Operationalizing implementation strategy fidelity
Most participants defined fidelity of implementation strategies as the extent to which a strategy was delivered as intended. When asked more specifically about how fidelity of implementation strategies ought to be assessed, participants provided a range of responses. Some described a desire for validated measures of implementation strategy fidelity akin to other implementation outcomes:
You know, I’ve seen some of the more recent literature around where they've now had validated measures for feasibility and accessibility, it would be nice if there was a more validated universal measure [of implementation strategy fidelity]…I think this is particularly challenging because it's very individual to your own strategy which can be very significant.
Others described a preference for fidelity assessment using study-specific process measures but grappled with questions about those measures’ rigor.
I think the perception is that this is like tracking data, especially the process stuff, people don't see it as a hard outcome. Unless it's framed as fidelity ahead of time, and even there's so much in the process of tracking, there's so much detail, there's not one score of fidelity right? It isn't a measure that's easy to stick into a manuscript as another outcome.
The two participants quoted above illustrate differing views on how researchers in our sample approached the assessment of implementation strategy fidelity. The first describes a desire for more rigorous, validated, universal tools that assess implementation strategy fidelity as an outcome variable. The second mentions the utility of tracking and process data to describe how a strategy was implemented. However, they question whether other researchers see process and tracking data as a “hard outcome,” suggesting others may perceive those data as less rigorous, and possibly of less scientific value. Several participants ultimately framed the development of validated strategy-specific fidelity tools as a long-term goal while endorsing the immediate use of process data as a pragmatic means of assessing implementation strategy fidelity in the short term. This variation in conceptual approaches to implementation strategy fidelity assessment may reflect the current state of implementation research. Another researcher expanded on this concept by describing how they approached implementation strategy fidelity with flexibility when serving as a peer reviewer:
Even if I’m not calling it implementation strategy fidelity it's hard for me to imagine that someone would get to the publication phase and be like, ‘oh no, I don't know, did I deliver the strategy?’ You know? I feel like there are ways that people could retrospectively piece together some kind of quality assurance metric…I mean, because I know that there aren't established tools, I'm going to be a little bit less stringent [as a reviewer] about like ‘oh you're not using a gold standard instrument’ if it doesn't exist.
The participant quoted above was not alone in their approach to peer review of implementation research. As our interviews went on, we asked participants what would convince reviewers that strategies were delivered as intended. Several other researchers shared the approach described by the participant above, with some additionally noting the utility of time-and-motion and costing data to describe the extent to which a strategy was implemented as designed. In the section below we describe the time-intensive labor involved in developing rigorous, strategy-specific fidelity tools. Given the immediate and ever-present need to assess the likelihood of a Type-III error in implementation research, participants highlighted the value of process data for describing implementation strategy fidelity, despite some participants’ perceptions that such data may be less rigorous than the ideal of a validated fidelity tool. However, participants’ expectations that other researchers use process data to describe implementation strategy fidelity in their manuscripts signals its importance, even if “there’s not one score of fidelity.”
Barrier 2: Implementation strategy complexity
Nearly all respondents remarked that as strategies become more complex, so too do their fidelity assessments, a major barrier to routine measurement. When asked what they meant by ‘complex strategies,’ almost all participants described strategies that bundle a high volume of discrete strategies and strategies that hinge on a more subjective interpersonal relationship between actors and action targets. Proposed solutions included the need for researchers with strategy-specific expertise to guide the field in fidelity assessment over the long term, and again, the utilization of process-like specification and tracking data to assess fidelity in the short term. When asked to name specific complex strategies, participants frequently mentioned coaching, champions, and facilitation as the most complex implementation strategies. Several researchers described the additional frustration, and the feeling of being overwhelmed, when they think about assessing fidelity of complex multifaceted strategies:
I’ve read some articles and people are like ‘we specified an implementation strategy’ and they select like 23–25 ERIC strategies [(26)]! And it's like, you're going to say we have to measure fidelity to each one? …I think people are just a little bit overwhelmed at unpacking the black box.
In addition to the encumbrance of assessing fidelity to multifaceted strategies, most participants also described the subjective nature of some strategies that hinge on interpersonal interactions, further complicating their fidelity assessment. One participant noted:
How much interaction is there between the strategy and the actor? How much discretion does the actor have over the execution of the strategy? And I think the more discretion that actor has, as with say facilitation or championing, some of those strike me as more art than science. So, when you have more art, how do you measure art? But when you have something where there isn't as much discretion and it's just ‘do this thing’ then it's easy to measure that thing.
When participants were asked how they might approach assessing interpersonal aspects of implementation strategies, responses varied with respect to both methodologic approach and intensity. Several suggested adapting existing measures:
…One of the more widely used is a working alliance inventory, 12 items, right? Three subscales. ‘Do we agree on goals for what we're doing?’ ‘Do we agree on the steps we take?’ and ‘Do we like each other,’ right?…Those could be translated pretty easily to [assess fidelity of] implementation strategies as well.
Others described a preference for assessing interpersonal facets of implementation strategies through qualitative interviews:
I definitely have a little bit more of a bias towards qualitative interviews for things like that, because I think that there's a quality of the way that people talk about that relationship that you can kind of hear, you know? …It’s the type of relationship that they had with the facilitator…Like what are the things that organically come up for that participant as being meaningful to them that I think are harder to capture in a pre-specified survey.
Another researcher described their preference for assessing facilitation strategy fidelity by coding facilitators’ notes:
You know, do you have your facilitators fill out field notes or lab manuals? Or do they write down reflections of what they did every day with a site or with a group of people or every week? And could you code those to describe exactly what was done?
Several respondents also described their approaches to assessing facilitation fidelity: one participant described recording facilitation sessions and scoring facilitators on 4 components using a binary response option, while another described a mixed-methods approach combining time-tracking logs and qualitative interviews to assess facilitators’ adherence to 20 core components. These differing quantitative and qualitative methods, numbers of identified facilitation components, and response options echo our first theme’s focus on how a research environment that lacks consensus on fidelity operationalization gives rise to varied approaches to fidelity assessment of the same implementation strategy.
When asked to describe the way forward for assessing fidelity of complex implementation strategies, responses fell broadly into two sub-themes. One set of responses focused on an approach utilizing the knowledge of experts who study specific complex strategies to guide the field forward by (1) identifying core components of various complex strategies, or even components of the same complex strategy given their broad nature, and (2) forming fidelity criteria to the identified components. The second focused on the importance of adequately specifying and tracking the distinct components of complex strategies and linking strategy activities to a theory of change.
Several participants suggested allowing experts to guide the way to fidelity assessment of complex strategies. These researchers felt that those most focused on any one complex strategy might be most knowledgeable regarding identification of strategy core components and how to assess fidelity to them.
I think it's probably up to the people who are trying to develop the evidence, based on those strategies to try to figure this stuff out and I don't think it's lost on them, and I think that folks are doing it…The folks who are developing these strategies, it likely should be their job to think about [fidelity assessment of those strategies].
Two participants in our sample described their approach to developing a facilitation fidelity tool based on a scoping review and a convening of experts to reach consensus on core components, followed by primary data collection to ascertain optimal fidelity data collection modalities for each component.
In the absence of developed fidelity tools, participants again described the utility of clarifying exactly how a strategy should operate (specification) and reporting on how it unfolded (tracking) to adequately determine if a complex strategy was implemented as intended. Researchers additionally described the importance of behavior change, organization, or implementation theories and frameworks in specifying the relationship between core activities within complex strategies and linking them to specific outcomes. Participants discussed how a theoretical rationale could clarify strategy components and mechanistic pathways, and in turn clarify fidelity assessment. Respondents felt that utilizing a theory of change and specifying and tracking complex strategies might provide researchers with the tools to adequately determine if a strategy unfolded as it was designed.
Barrier 3: Mechanisms and implementation strategy fidelity
Most respondents described an opportunity for synergy between the development of implementation strategy fidelity and mechanisms-focused implementation research. While participants agreed on the importance of integrating strategy fidelity assessment within mechanisms-focused research, few commented on how to best assess strategy fidelity, and those who did proposed differing approaches (prospectively vs. retrospectively). When asked how implementation strategy fidelity assessment fits within a mechanistic framework, one participant illustrated their thoughts with the example of a video-based health education strategy:
If the mechanism is through delivering information in an exciting and emotionally relevant way, that prompts integration of information into people…I would say that fidelity to this strategy to me would be a precondition for the mechanism activation, that's where I would think of it…And I’m sure that there are others, well, precondition or [cognitive] moderator... probably both [cognitive] moderators and preconditions, that's probably where I would look at some of this implementation strategy fidelity.
In this example, the participant describes a pathway where a video-based health education strategy targets the activation of new information. They went on to explain that the “people” described above referred to a group of patients in a clinic waiting room who were shown a video to improve their knowledge of a pharmaceutical drug intervention. The participant describes how adequate fidelity to the video strategy is required to activate the mechanism of new information in patients regarding the intervention. Mechanistic models categorize two constructs that can impact the relationship between a strategy and the activation of a mechanism: preconditions for mechanism activation, and cognitive moderators. Preconditions include facets of the strategy that are required for a mechanism to be activated (27). The participant in the quote above went on to explain how clinics in their study sometimes experienced power outages, preventing patients from seeing the video. They explained how assessing the proportion of clinic days without electricity could serve as an implementation strategy fidelity indicator that might be assessed throughout the study period. Cognitive moderators are factors that impact the level of a strategy’s influence on the activation of a mechanism (27). The participant quoted above went on to describe various cognitive moderators that might impact the video’s ability to activate the mechanism of new knowledge within a patient in the waiting area. For example, they described how a patient’s mood might impact their ability to connect with the video and process the information it was meant to deliver. They described how assessing cognitive moderators like patients’ moods while exposed to the video in the waiting room might represent important information regarding the fidelity with which the strategy was delivered. 
The participant also described how one might determine cognitive moderators or preconditions of mechanism activation at the outset or early stages of a study, allowing for their prospective assessment throughout the study period.
A different participant similarly described adequate implementation strategy fidelity as a requirement of mechanism activation but shared a differing view on how it might be assessed. The participant used an example where a didactic training strategy targeted the mechanism of new knowledge in a group of primary care physicians to improve their administration of a depression screening tool, with the end goal of increasing the screening tool’s uptake in their routine clinical practice. When asked how they might go about assessing implementation strategy fidelity in their example, this participant described how the activation of new knowledge and skills might be pragmatically assessed via a pre- and post-training test, a proximal indicator of that mechanism’s activation. They described how knowledge test scores might vary based on fidelity components related to the training itself (e.g., quality of delivery, coverage of content, participant responsiveness), but noted that these facets are often harder to comprehensively assess compared to something like a pre-post knowledge test. This participant suggested that if researchers find that a strategy impacts a proximal outcome, such as new knowledge and skills, they might conclude that the necessary criteria for activation were met, providing a sense that fidelity may have been adequate. To that end, the participant also described the importance of implementation strategy specification in facilitating an explanation of exactly what activities occurred leading up to the activation of a mechanism as well as clearly stating how an activated mechanism might overcome a specific implementation barrier. While nearly all researchers described the importance of integrating fidelity within mechanisms research, only the two highlighted here described how they might do so.
Barrier 4: Pragmatic solutions to structural funding and reporting barriers
Nearly all researchers described the same structural barriers to implementation strategy fidelity assessment and reporting: word limit constraints, a lack of reporting requirements, and insufficient funding. Several researchers highlighted some journals’ recent adoption of the Standards for Reporting Implementation Studies (StaRI) Statement as a reporting guideline (28), which they saw as a structural solution to improving implementation strategy fidelity assessment and reporting. StaRI gives researchers specific guidance and provides examples for including information about implementation strategy fidelity within implementation trials. While this seemed like a direct solution to a structural barrier, one participant voiced concern over such guidelines’ utility in practice:
Is [implementation strategy fidelity reporting a] common practice in the field? Heck no. I do think that, as the journals are starting to require checklists like StaRI or other things, that hopefully will become a little bit more. But I do think that journals sort of say ‘we need this’ and then sometimes I don't even think they check.
In addition to word limit constraints and reporting requirements, several participants described the structure of funding opportunities as a barrier to implementation strategy fidelity assessment, specifically requirements related to the assessment of clinical outcomes. All researchers described costs associated with implementation strategy fidelity data collection as a barrier; several clarified further how the requirement of clinical outcome measurement drew resources that might otherwise be used to elucidate implementation strategy fidelity:
So, you know, you can't be saying ‘I’m going to run a trial and it is going to run over three years it's going to cost you, you know $10 million or whatever.’ Because to be looking at fidelity in a huge amount of detail? This isn't a cost-effective study to propose. So I think, by trying to be pragmatic we lose the ability to go into a huge amount of depth on the fidelity question. So if we have more studies, with an implementation orientation…so you don't collect any effectiveness data, that creates the space to say okay we're going to look at scale up measures, we're going to look at uptake, we're going to look at the definitive feasibility, you know?
About half of all participants described working within the confines of current grant funding mechanisms, offering what they felt were pragmatic solutions focused on reducing the costs of data collection to make space within limited budgets for implementation strategy fidelity assessment. These techniques included technological innovations and repurposing existing data sources. Participants described using meta-data related to facilitator email response times and using machine learning and artificial intelligence to rate fidelity of training strategies. Several others described how costing data were regularly collected for cost-effectiveness analyses and how techniques like time-and-motion tracking could also be used to assess facets of fidelity to some implementation strategies (e.g., the frequency or duration of facilitator phone calls).
Despite barriers related to the operationalization of implementation strategy fidelity, the complex nature of multifaceted strategies, the assessment of implementation strategy fidelity within mechanisms research, and several challenges related to publication and funding, researchers in our sample expressed overwhelming optimism and motivation toward improving implementation strategy fidelity assessment and reporting. One participant described their motivation to scale up implementation strategy fidelity assessment and reporting immediately and pragmatically, eschewing comparisons between implementation strategy fidelity and other, perhaps more developed, forms of measurement.
I think right now we're at a place, we just need to start doing something. It doesn't have to be perfectly, psychometrically, 100%, you know? We start where we are. Let’s start with the yes/no’s and the ‘did it happens?’ And then progress from there, maybe to quality and intensity and things like that… Just start where we are.