Engaging Stakeholders to Retrospectively Discern Implementation Strategies in Support of Program Evaluation

Background: Successful implementation of evidence-based practices is key to healthcare quality improvement. However, it depends on appropriate selection of implementation strategies, the techniques that improve practice adoption or sustainment. When studying implementation of an evidence-based practice as part of a program evaluation, implementation scientists confront a challenge: the timing of strategy selection rarely aligns with the establishment of data collection protocols. Indeed, the exact implementation strategies used by an organization during a quality improvement initiative may be determined during implementation. Nevertheless, discernment of strategies is necessary to accurately estimate implementation effect and cost, because this information can support decision making for sustainment, guide replication efforts, and inform the choice of implementation strategies for other evidence-based practices. Main body: We propose an iterative, stakeholder-engaged process to discern implementation strategies when strategy choice was not made before data collection began. Stakeholders are centered in the process, providing a list of current and potential implementation activities. These activities are then mapped by an implementation science expert to an established taxonomy of implementation strategies. The mapping is then presented back to stakeholders for member checking and refinement. The final list can be used to survey those engaged in implementation activities in a language they are familiar with. A case study using this process is provided.

When implementation is conducted as part of quality improvement efforts, the selection of implementation strategies may not be informed by the data collection needs of the evaluation; instead, it may be based on the desires of various stakeholders.
When a disconnect between implementation strategy selection and data collection exists, retrospective mapping of implementation activities to implementation strategies using an established taxonomy can aid the evaluation.

Background
Healthcare quality improvement (QI) efforts need to emphasize efficiency and effectiveness, that is, avoiding waste of funds and time and providing services based on scientific knowledge, respectively (1).
Payers increasingly incentivize providers to undertake QI. For example, in 2018, 60% of reimbursement for healthcare services was tied to quality (2). Implementing evidence-based practices (EBPs) is one method that organizations pursue to achieve these QI goals.
Yet frequently there is ambiguity about the most effective and affordable approaches to put these EBPs into clinical practice. In the absence of clear guidance, a complex negotiation between stakeholders often takes place, with the ultimate goal becoming a balance of all their wants and needs. Key stakeholder groups include individuals in administration and operations, policymakers, clinicians, and patients. Because these groups have varying perspectives, functions, and responsibilities, their wants and needs may be in direct competition. Efficient and effective QI efforts begin with the selection of appropriate implementation strategies. Implementation strategies are processes and tools that support integration of an EBP in a new clinical practice setting. These strategies are defined more formally as "methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice" (3). The purposeful selection of these strategies can help maximize the likelihood of successful adoption and sustainment.
The study of implementation of EBPs occurs in a range of contexts. At one end of the spectrum is the randomized controlled trial (RCT), where implementation strategies are selected prospectively and fidelity is monitored by the study team. At the other end is a retrospective program evaluation, where the evaluators are not involved in the selection of implementation strategies and data collection necessarily occurs after the EBP has been implemented. In practice, evaluators have varying degrees of opportunity to engage with stakeholders invested in the implementation of an EBP and varying degrees of influence over strategy selection. This selection may occur before implementation, but it may also occur iteratively during implementation. Program evaluations may thus involve studying implementation efforts in which strategy selection resulted from an iterative or ongoing collaborative process.
The lack of clarity about which specific strategies were chosen, and how, that results from balancing competing interests leads to conceptual, methodological, and evaluation challenges. It can hinder the ability of an implementing site's leadership to make an informed decision about which strategies best support long-term sustainment. It also makes comparing the effect and cost of different implementation strategies across sites more difficult; these comparisons are critical for organizations that may want to replicate the effort.
Regardless of the implementation strategies chosen, it is essential to program evaluation that the strategies be accurately compiled. Absent this information, it is impossible to characterize the temporal ordering, dose, implementation actors, and implementation activities of each strategy (4). This level of detailed data collection supports a more complete estimate of the investment that was required for implementation and that will be required for sustainment and replication. It also allows an evaluator to disentangle the cost of implementation from the cost of providing the EBP. This distinction is critically important when the selection of efficient strategies is a goal of the evaluation.

Implementation Strategy Mapping
Frameworks such as Expert Recommendations for Implementing Change (ERIC) provide a common vocabulary for discussing implementation strategies (5). Implementation science has improved in its use of a common language and standard reporting of implementation strategies (3), advancing our understanding of many aspects of the use of these strategies. As one advancement, the ERIC strategies have been ranked according to their relative importance and feasibility using expert consensus (6). Some studies recommend the use of complex methodologies (7) such as group model building, conjoint analysis, intervention mapping, and concept mapping (6). However, the field provides limited guidance on how to select the best implementation strategy or bundle of strategies in particular organizational contexts and for specific EBPs. Moreover, while the number of experts in implementation is growing, it is still small, which presents a challenge to engaging these few experts when planning an evaluation.
When strategies are chosen prospectively, data collection processes may factor into the design decision because different strategies demand more staff time to use or require specific systematic methods to collect (8) and track (9). Additional considerations in the selection process include the characteristics of the EBP, the practice setting, and the availability and potential use of resources (e.g., technology, staffing, implementation expertise). Comparative evidence supporting the use of certain implementation strategies or bundles over others is sparse. Additional work to monitor discrete and bundled strategies is essential to the advancement of implementation science.
Categorizing activities as implementation or non-implementation activities may be difficult for stakeholders who are not familiar with the language of implementation science. Working with stakeholders to retrospectively review the implementation activities that were undertaken and map them to implementation strategies is one solution to the challenge that strategy selection may be ad hoc (4,9). Once the mapping of activities to strategies is complete, it is easier to determine which activities should be included in the evaluation and then to collect the required data. Data elements of interest may include the roles of staff members engaged in implementation, personnel time, and wage rates. These data allow an evaluation to determine which strategies require larger monetary investments and to identify the source of any between-site variation in implementation costs, as sketched below.
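To make the costing approach concrete, the sketch below aggregates activity-level records of staff role, hours, and wage rate into a per-strategy personnel cost. The activity names, roles, and dollar values are invented for illustration; only the general hours-times-wage roll-up reflects the data elements described above, and the ERIC strategy labels are drawn from the published taxonomy (5).

```python
from collections import defaultdict

# Hypothetical activity-level records: (activity, mapped ERIC strategy,
# staff role, hours spent, hourly wage). All values are illustrative.
records = [
    ("Facilitator training session", "Conduct ongoing training", "Clinician", 4.0, 75.0),
    ("Monthly implementation call", "Facilitation", "Coordinator", 1.5, 40.0),
    ("Monthly implementation call", "Facilitation", "Clinician", 1.5, 75.0),
    ("Draft local implementation plan", "Develop a formal implementation blueprint", "Coordinator", 6.0, 40.0),
]

# Aggregate personnel cost (hours x wage) by implementation strategy so
# that strategies requiring larger monetary investments stand out; role
# is retained in each record to support between-role or between-site
# comparisons if needed.
cost_by_strategy = defaultdict(float)
for activity, strategy, role, hours, wage in records:
    cost_by_strategy[strategy] += hours * wage

for strategy, cost in sorted(cost_by_strategy.items(), key=lambda kv: -kv[1]):
    print(f"{strategy}: ${cost:,.2f}")
```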
This mapping process is useful for program evaluation generally, as it can provide iterative feedback to stakeholder groups on the strategies being used as part of the ongoing implementation effort. Partnered evaluations, in which the evaluation team and key healthcare stakeholders collaborate to implement the EBP, offer a unique real-world opportunity to engage stakeholders in discerning which strategies were chosen and describing their effect and cost.

Case Application: Advance Care Planning via Group Visits
We have tested this stakeholder-inclusive method for mapping activities to implementation strategies as part of a Department of Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI) funded evaluation. This mixed methods, partnered evaluation focuses on the national implementation and dissemination of a new initiative within the Veterans Health Administration (VHA) healthcare system. Advance Care Planning (ACP) via Group Visits (ACP-GV) is a facilitated dialogue in which a trained clinician leads a small group of six to eight veterans and trusted others who have gathered to discuss the key concepts of ACP. ACP is critically needed for veterans and their families to prepare for evolving care needs across the life course, especially if veterans become unable to communicate their health care preferences for any reason (e.g., illness, accident, use of a ventilator).
As part of the national ACP-GV program, all 171 VA medical centers (VAMCs) that compose the VHA system were able to elect to implement ACP-GV; thus far, 60 sites have participated, and new sites are still joining (10). A variety of activities were initiated across sites to support local implementation at VAMCs and affiliated community-based outpatient clinics (CBOCs). One goal of the ACP-GV program evaluation is to estimate the cost of implementing the program locally and nationally. To generate accurate estimates for this financial analysis, it was essential to examine how program resources, especially personnel time at the local level, were deployed.
We began by consulting with five stakeholder groups, namely the Implementation Team, Evaluation Team, Facility Point of Contact, Facility Staff, and National ACP-GV Leadership Team, to draft an exhaustive list of all potential implementation activities. The evaluation team used an iterative process to clearly define activities and their mapping to the ERIC strategies. The stakeholders then provided insight, clarification, and feedback on the activities being used. Meetings were held to determine who performed each activity and the processes contained within each at the local and national levels. The full list of activities was standardized and then presented back to stakeholders for member checking, refinement, and edits to ensure that the descriptions as written accurately reflected what was observed in practice.
The result of this iterative process is a list of potential implementation activities. A member of the ERIC team (MM) assessed each potential implementation activity to determine which implementation strategy in the ERIC taxonomy, if any, it most closely aligned with. Previous authors have used concept mapping to group the 73 ERIC implementation strategies into nine conceptually relevant clusters (6). We eliminated three of these clusters based on the scope of the work provided by the ACP-GV national program (i.e., support clinicians, engage consumers, and change infrastructure). A total of 56 implementation activities were identified and mapped onto 20 strategies representing the remaining six ERIC clusters. These activities were conducted by the five distinct stakeholder groups mentioned earlier.
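As an illustration of the resulting crosswalk, the sketch below shows one possible representation of the activity-to-strategy-to-cluster mapping. The activity names are hypothetical; the strategy and cluster labels come from the ERIC taxonomy and its published concept-mapping clusters (5,6), and the roll-up mirrors the kind of coverage summary that supports member checking.

```python
# Hypothetical mapping of locally reported activities to ERIC strategies
# and their concept-mapping clusters (6). Activity names are invented;
# strategy and cluster labels are from the ERIC taxonomy.
activity_map = {
    "Host kickoff call with facility point of contact": {
        "strategy": "Facilitation",
        "cluster": "Provide interactive assistance",
    },
    "Train group visit facilitators": {
        "strategy": "Conduct ongoing training",
        "cluster": "Train and educate stakeholders",
    },
    "Review completion data with sites": {
        "strategy": "Audit and provide feedback",
        "cluster": "Use evaluative and iterative strategies",
    },
}

# Coverage summary used during member checking: how many distinct
# strategies and clusters does the current activity list represent?
strategies = {m["strategy"] for m in activity_map.values()}
clusters = {m["cluster"] for m in activity_map.values()}
print(f"{len(activity_map)} activities -> {len(strategies)} strategies "
      f"across {len(clusters)} clusters")
```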
Based on the identified implementation strategies, the team developed an electronic survey with items assessing the time devoted to each implementation activity, using language commonly understood by ACP-GV sites and key stakeholders. We pilot tested the survey weekly over a period of six months with two stakeholder groups, the implementation team and the evaluation team, using a survey management database. The pilot test assessed usability for stakeholders and supported the fielding of an additional survey to ACP-GV program staff at implementing sites.

Conclusion
Tracking the frequency and dose of implementation activities and describing the resulting differences in the effect and cost of the corresponding strategies is paramount. However, the real-world concern of managing expectations and minimizing the administrative burden on local staff who are tasked with implementing the programs thwarts efforts to collect necessary data elements such as staff role, time, and wages. Many administrators avoid mandating the collection and submission of effort devoted to various implementation activities, while providers and staff may lack the training and time to gather these data.
There are many reasons why assessing the time spent on implementation strategies is challenging. Foremost among them is that the choice of implementation strategies outside of RCTs is either not explicitly made early in the design phase or the process by which strategies are selected is vague. In our experience, these challenges were mitigated by using a stakeholder-informed approach. Although it is possible to map activities to strategies using pre-defined coding rules (4), there are benefits to engaging an implementation science expert who is involved in the implementation and evaluation efforts. This approach allows for iterative feedback between the expert and each stakeholder group and translation of complicated implementation science principles and jargon into language that is actionable by administrators, clinicians, and front-line staff.

Authors' Contributions
CO: drafted the work, substantively revised the work, approved the submitted version, and agreed both to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.
DA: drafted the work, approved the submitted version, and agreed both to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.
KG: made substantial contributions to the conception of the work, approved the submitted version, and agreed both to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.
All authors read and approved the final manuscript.