Sustainment of Diverse Evidence-Based Practices Disseminated in the Veterans Health Administration (VHA): Development and Administration of a Pragmatic Instrument

Abstract

Background
There are challenges associated with measuring sustainment of evidence-based practices (EBPs). First, the terms sustainability and sustainment are often falsely conflated: sustainability assesses the likelihood of an EBP being in use in the future, while sustainment assesses the extent to which an EBP is (or is not) in use. Second, grant funding often ends before sustainment can be assessed.
The Veterans Health Administration (VHA) Diffusion of Excellence (DoE) is one of few large-scale models of diffusion; it seeks to identify and disseminate practices across the VHA system. The DoE sponsors "Shark Tank" competitions, in which leaders bid on the opportunity to implement a practice with 6 months of implementation support. As part of an ongoing evaluation of the DoE, we sought to develop and administer a pragmatic instrument to assess sustainment of DoE practices.

Methods
In June 2020, surveys were sent to 64 facilities that were part of the DoE evaluation. We began analysis by comparing alignment of quantitative and qualitative responses; some facility representatives reported in the open text box of the survey that their practice was on a temporary hold due to COVID-19 but answered the primary outcome question differently. As a result, the team reclassified the primary outcome of these facilities to Sustained: Temporary COVID-Hold. Following this reclassification, the number and percent of facilities in each category was calculated. We used directed content analysis, guided by the Consolidated Framework for Implementation Research (CFIR), to analyze open text box responses.

Results
A representative from forty-one facilities (64%) completed the survey. Among responding facilities, 29/41 sustained their practice, 1/41 partially sustained their practice, 8/41 had not sustained their practice, and 3/41 had never implemented their practice. Sustainment rates increased across Cohorts 1–4.

Conclusions
The development and administration of our pragmatic survey allowed us to assess sustainment of DoE practices. Planned updates to the survey will enable flexibility in assessing sustainment and its determinants at any phase after adoption. This assessment approach can flex with the longitudinal and dynamic nature of sustainment, including capturing nuances in outcomes when practices are on a temporary hold.

Contributions To The Literature
The terms sustainability and sustainment are used interchangeably in the literature; this paper provides clarity in defining and differentiating these terms.
Sustainment determinants and outcomes are often conflated in the literature; this paper illustrates that many sustainment determinants are inaccurately described as outcomes.
Sustainment is dynamic; this paper provides an approach to better capture nuance in sustainment outcomes when practices are on a temporary hold.
A high rate of practice sustainment among responding facilities suggests that the VHA DoE is a promising large-scale model of diffusion.

Background

Evaluating Sustainment of Evidence-Based Practices is Challenging
There is growing interest in sustainment of evidence-based practices (EBPs); however, the literature on how to best measure sustainment over time is still developing. Understanding sustainment of EBPs is challenging, which Birken et al. suggest is due to a lack of conceptual clarity and methodological challenges [1].
First, the terms sustainability and sustainment are often used interchangeably [1]. While these terms are related, there are important distinctions. Sustainability assesses the likelihood of an EBP being in use at a future point in time; it is measured by assessing contextual determinants ("factors which decisively affect the nature or outcome of something") [2]. For example, the EBP is perceived to have low sustainability due to inadequate funding or lack of priority. Operationally, the goal is to determine whether the conditions indicative of sustaining EBPs are in place, and if not, to guide efforts to put such conditions into place [3,4].
In contrast, sustainment assesses the extent to which an EBP is (or is not) in use after a specific period of time following initial implementation; for example, the RE-AIM Framework specifies that the sustainment period begins at least 6 months after initial implementation is completed [5]. Sustainment is measured by assessing outcomes ("the way a thing turns out; a consequence") [6], e.g., the EBP is in use/not in use. Operationally, the goal is to determine if EBPs are still in place following the end of implementation support [7]. Distinguishing between sustainability and sustainment will help researchers develop shared language and advance implementation science [1,8].
Second, grant funding periods often end soon after implementation is completed, so initial and long-term sustainment cannot be assessed due to time and resource constraints [1]. As a result, most measure development has focused on sustainability (which can be measured at any point during grant funding periods), not sustainment (which cannot be assessed until a sufficient amount of time has elapsed) [9].

Limitations to Current Sustainment Instruments
The existing literature conceptualizes a mix of items as sustainment outcomes, including the presence or absence of an EBP after implementation is completed, such as the continued use of the EBP (and its core components) [9][10][11][12] or the level of institutionalization of the EBP [11,12]. In addition, the literature discusses continued "attention to the issue or problem" addressed by the EBP, even when the specific EBP is no longer in use or is replaced by something else, as a sustainment outcome [11]. Finally, several outcomes referenced in the literature have been used to measure both sustainability and sustainment, such as continued institutional support [9][10][11][12], continued funding for the EBP [9], and the continued benefit of the EBP [10][11][12] (see Table 1). In effect, there is overlap in the literature between sustainment determinants and sustainment outcomes.
Although the existing literature offers a variety of single-item sustainment measures for researchers to use, there are few complete pragmatic multi-item instruments. A narrative review by Moullin et al. identified 13 instruments for measuring sustainment. However, they highlighted the need for more pragmatic approaches, since many of the existing multi-item sustainment instruments were "overly intervention or context specific" and "lengthy and/or complex" [13]. Furthermore, most multi-item instruments were not well suited for frontline employees to complete; they were better suited for individuals with expertise in implementation science frameworks [13]. Pragmatic instruments are needed to increase the likelihood that participants will understand and respond to all items, especially when it is difficult to incentivize participants over time.

Veterans Health Administration: Diffusion of Excellence
The Veterans Health Administration (VHA) Diffusion of Excellence (DoE) is one of few large-scale models of diffusion; it seeks to identify and disseminate EBPs across the VHA system. DoE practices include innovations supported by evidence from research studies and administrative or clinical experience [14,15], and strive to address patient, employee, and/or facility needs. The DoE sponsors "Shark Tank" competitions, in which regional and facility leaders bid on the opportunity to implement a practice with 6 months of external implementation support. For additional detail on the DoE, see previous publications [7, 16-18]. Over 1500 practices were submitted for consideration between Cohorts 1–4 of Shark Tank; the DoE designated 45 as Promising Practices, and these were adopted at 64 facilities (some practices were adopted by more than one facility). Two additional practices were designated as Promising Practices but were implemented outside of standard DoE processes; these are not included in this evaluation. See Additional File 1 for practice descriptions.
In earlier phases of our evaluation, we focused on implementation and initial sustainment of DoE practices [18]. In brief, we conducted interviews after the 6-month external implementation support period to understand the level of implementation success as well as barriers and facilitators to implementation at the facilities [18]. Participants described a high level of successful implementation after the initial 6-month period of support. Due to extensive external implementation support, facilities were able to complete implementation unless significant barriers related to "centralized decision making, staffing, or resources" delayed implementation [18]. We then evaluated the initial sustainment of the practices by asking facilities to complete follow-up surveys (on average 1.5 years after external support ended). Over 70% of the initially successful teams reported their practice was still being used at their facility. Additionally, over 50% of the initially unsuccessful teams reported they had since completed implementation and their practice was still being used at their facility [18]. Although some of these initially unsuccessful facilities implemented their practice after external support ended, research suggests that many EBPs are not sustained once implementation support has ceased [19]. As a result, we shifted our focus to the evaluation of ongoing sustainment of DoE practices.
As part of an ongoing evaluation of the VHA DoE and to address limitations in the sustainment literature, the objective of this manuscript is to: 1) describe the development of a pragmatic sustainment instrument and 2) present results on ongoing practice sustainment, as well as barriers and facilitators to sustainment for DoE practices.

Survey Development
To assess the ongoing sustainment of DoE practices, we sought to develop a pragmatic survey that was: 1) easy to understand for those without implementation science expertise (i.e., simple), 2) quick to complete (i.e., less than 10 minutes), and 3) appropriate for 45 different practices (i.e., generic) [13,20]. Our primary evaluation question for the survey was: Is there ongoing sustainment of DoE practices? To assess this question, we used the last known status of a facility (based on the last interview or survey completed) and branching logic to route respondents through the survey based on their individual facility's situation (see Figure 1). If the last known status of a facility's practice was "not implemented" (the facility had not completed implementation) or "not sustained" (the facility had completed implementation but had since discontinued the practice), the introductory survey question asked if the facility had completed implementation or re-implemented the practice. If the answer was "No", they were asked about intentions to do so in the future and then routed to the end of the survey. If the answer was "Yes", they were routed to the beginning of the sustainment survey. Based on our working definition of sustainment, items were conceptualized as primary or secondary outcomes; secondary items were derived from the literature to enhance the survey and provide additional contextual information (see below and Table 1). Furthermore, "Please Describe" open text boxes were included following all questions so participants could provide additional detail. Each outcome is briefly described below; see Table 1 for outcomes mapped to the literature and Additional File 2 for the complete survey.
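The routing rules described above can be sketched as a small function. This is an illustrative simplification using hypothetical status labels and return values, not the actual REDCap branching configuration:

```python
def route_respondent(last_known_status, completed_or_reimplemented=None):
    """Route a respondent based on the facility's last known status.

    Labels are hypothetical; the real survey used REDCap branching logic.
    """
    if last_known_status in ("not implemented", "not sustained"):
        # Introductory question: has the facility completed implementation
        # or re-implemented the practice since the last contact?
        if completed_or_reimplemented is None:
            return "ask: completed implementation or re-implemented?"
        if not completed_or_reimplemented:
            # "No": ask about future intentions, then end the survey.
            return "ask future intentions, then end of survey"
    # All other facilities (and "Yes" answers) reach the sustainment survey.
    return "sustainment survey"
```

For example, a facility last known as "not sustained" that answered "Yes" to the introductory question would proceed to the full sustainment survey.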

Terms and Definitions
As described earlier, our primary outcome is used as the overarching benchmark to determine if a DoE practice is sustained.

Practice Sustainment
Extent to which the DoE practice and its core components and activities are in use [9][10][11][12]

Secondary Outcomes

Given the importance of assessing more than whether the practice was in use, the survey included several items from the literature as secondary outcomes. These secondary outcomes provide additional information on the current status of the practice.

Practice Institutionalization
Extent to which the DoE practice is part of routine care and work processes [11,12]

Practice Priority

Extent to which there is attention to the issue or problem addressed by the DoE practice, i.e., "heightened issue salience" [11]

Practice Buy-in/Capacity/Partnership

Extent to which key stakeholders and partners support the DoE practice [9][10][11][12]

Practice Funding

Extent to which funding is provided to support the DoE practice [9]

Practice Benefit

Extent to which the DoE practice is having the intended outcomes [10][11][12]

Practice Improvements/Adaptation

Extent to which the DoE practice is being improved and/or adapted [10]

Practice Spread/Diffusion

Extent to which the DoE practice is spreading or diffusing to other locations [11]

Additional Survey Questions

To assess the fluid and longitudinal nature of sustainment, if a respondent answered "No" to the primary outcome, i.e., the DoE practice was not in use (see above and Table 1), they were asked about future plans to re-implement. This is similar to the question given at the beginning of the survey to facilities that had a last known status of no sustainment (see Survey Development above). If a facility's representative reported they planned to re-implement their DoE practice, they were retained in the sample for future sustainment surveys.
Due to the timing of the sustainment survey (only a few months after the Centers for Disease Control and Prevention (CDC) issued guidance to cancel and/or reschedule non-essential clinical activities) [21], it included questions about the COVID-19 pandemic. These questions assessed the impact the pandemic had on their practice as well as how participation in the DoE impacted their response to the pandemic (see Additional File 2).

Data Collection

In June 2020, surveys were emailed by a member of the evaluation team to representatives of the 64 facilities in Cohorts 1–4 that adopted one of the 45 DoE Promising Practices and received 6 months of external implementation support. See Additional File 1 for practice descriptions. Survey follow-up periods ranged from 1 to 3 years, depending on cohort (i.e., when the practice was adopted). Incentives were not provided to VHA employees because surveys were expected to be completed during working hours. The survey was administered using the REDCap® platform. Per regulations outlined in VHA Program Guide 1200.21, this evaluation has been designated a non-research quality improvement activity.

Data Analysis
We calculated the overall response rate and used descriptive statistics (number, percent) to summarize the multiple choice and Likert scale questions. We used directed content analysis, guided by the Consolidated Framework for Implementation Research (CFIR), to analyze open text box responses [22]. The CFIR is a determinant framework that defines constructs across five domains of potential influences on implementation: 1) Characteristics of the Intervention (e.g., Evidence Strength & Quality), 2) Outer Setting (e.g., Patient Needs & Resources), 3) Inner Setting (e.g., Tension for Change), 4) Characteristics of Individuals (e.g., Self-Efficacy), and 5) Process (e.g., Planning). The codebook included deductive CFIR constructs as well as new inductive codes and domains that arose in the data, including relationships between constructs [23]. We used relationship coding to provide a high-level overview of how different constructs interact or relate to each other. In some cases, the relationship was unknown or not applicable; in other cases, it was helpful to describe the relationships between codes. See Table 2 for an excerpt of our CFIR-informed codebook. Using a consensus-based process [23], two researchers (CR, AN) coded qualitative data from the open text boxes and discussed to resolve discrepancies. We began analysis by comparing alignment of quantitative and qualitative responses to the primary outcome (i.e., "Is this practice still being used or done at your site?") (see Table 1: Item 1). Seven facility representatives reported in the survey's open text box that their practice was on a temporary hold due to COVID-19 but answered the primary outcome question differently; two answered "Yes", two answered "No", and three answered "Partially". As a result, the team reclassified the primary outcome of those facilities into a new category, Sustained: Temporary COVID-Hold (see Figure 2).
Following this reclassification, the number and percent of facilities in each sustainment category was calculated by cohort.
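The reclassification and tabulation steps can be illustrated with a short script. The records, field names, and keyword rule below are hypothetical, standing in for the team's manual review of open text responses:

```python
from collections import Counter

# Hypothetical survey records: a primary-outcome answer plus open text.
responses = [
    {"primary": "No", "text": "On temporary hold due to COVID-19"},
    {"primary": "Yes", "text": ""},
    {"primary": "Partially", "text": "Paused for COVID"},
]

def classify(resp):
    """Assign a sustainment category, reclassifying pandemic pauses."""
    text = resp["text"].lower()
    if "covid" in text and ("hold" in text or "pause" in text):
        return "Sustained: Temporary COVID-Hold"
    return {"Yes": "Sustained",
            "Partially": "Partially Sustained",
            "No": "Not Sustained"}[resp["primary"]]

# Tabulate the number and percent of facilities in each category.
counts = Counter(classify(r) for r in responses)
total = sum(counts.values())
percents = {cat: round(100 * n / total, 1) for cat, n in counts.items()}
```

In the actual analysis, reclassification was a consensus judgment based on qualitative review of open text responses rather than keyword matching.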

Secondary Outcomes
We calculated the number and percent of facilities for Items 2–8 (see Table 1) within each of our primary outcome categories from Item 1 (Sustained, Partially Sustained, Not Sustained). We also analyzed the concordance between Item 1 (Practice Sustainment) and Item 2 (Practice Institutionalization) in the survey.
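The concordance check between Items 1 and 2 can be sketched as a simple agreement rate over paired answers; the pairs below are hypothetical:

```python
# Paired (sustainment, institutionalization) answers; hypothetical data.
pairs = [
    ("Yes", "Yes"),
    ("Yes", "Yes"),
    ("Partially", "Partially"),
    ("Yes", "No"),  # discordant: sustained but not institutionalized
]

# Count pairs where the two items agree, then express as a percentage.
concordant = sum(1 for sustain, institutionalize in pairs
                 if sustain == institutionalize)
concordance_rate = 100 * concordant / len(pairs)
```

With these toy pairs, three of four agree, giving 75% concordance; the evaluation itself observed 96% concordance across responding facilities.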

Primary Outcome
A representative from forty-one facilities (41/64; 64%) completed the survey in summer 2020, while 23 (35.9%) facility representatives were lost to follow-up; the rate of missing data was lower after the first DoE cohort. Among responding facilities, 29/41 (70.7%) were sustaining their practice, 1/41 (2.4%) was partially sustaining their practice, 8/41 (19.5%) were not sustaining their practice, and 3/41 (7.3%) had never implemented their practice (see Table 3). Sustainment rates increased across Cohorts 1–4. The CFIR constructs and inductive codes associated with primary outcome text responses are included in parentheses below; the facilitates/leads-to relationship is illustrated with ">" and the hinders/stops relationship is illustrated with "|". Please refer to Table 2 for code definitions.

Sustaining Facilities
Twenty-nine facilities (N = 41, 70.7%) were sustaining their practice (see Table 3). Of these 29 facilities, 22 (75.9%) were ongoing during the COVID-19 pandemic while 7 (24.1%) were on a temporary COVID-Hold (see Table 4). The differences between these two sustaining groups of facilities are described below.

Sustaining Facilities: Ongoing
In late March 2020, the Centers for Disease Control and Prevention (CDC) issued guidance to cancel and/or reschedule "non-essential clinical activities, including elective procedures, face-to-face outpatient visits, diagnostic testing, and procedures" [21]. However, 22 facilities reported their practice was ongoing during the pandemic, in some cases by shifting to virtual delivery:

"We are currently orchestrating our third annual Summit (virtually because of COVID)." (Facility 3_IF09c)

The other ongoing practices were designed to benefit employees or represented administrative process changes that were not impacted by the pandemic (Employee Needs and Resources > Tension for Change > Sustained: Ongoing):

"As a [department] we use this regularly and inform our employees of their current status as we continue to perform our normal tasks and duties." (Facility 2_IF06a)

Partially Sustaining Facility

Only one facility (N = 41, 2.4%) was partially sustaining their practice (see Table 3). The respondent explained partial sustainment by noting the practice was in use "in some specialty clinics, palliative care and hospice." (Facility 3_IF04)

Not Sustaining Facilities
Eight facilities (N = 41, 19.5%) were not sustaining their practice (see Table 3). Within this group, 6/8 (75%) had a previous last known status of no sustainment and 2/8 (25%) had a previous last known status of sustained or partially sustained.
Not Sustaining Facilities: Facilities that were previously not sustaining

As noted in the Methods section (see Survey Development), facilities that had a last known status of no sustainment were given an introductory question to determine if they had re-implemented their practice in the interim. Six facilities (N = 8, 75% of the not sustaining facilities) had not re-implemented for various reasons. Two of these facilities had not re-implemented due to losing necessary staffing and not having completed re-hiring (Engaging Key Stakeholders > Not Re-Implemented).
"[The] person that initiated this practice left and it was not followed through with new staff." (Facility 2_IF07b)

To better understand the fluid nature of sustainment, facilities that were not sustaining their practice were given a follow-up survey question to determine if they intended to re-implement their practice in the future. Three of the eight (38%) not sustaining facilities intended to re-implement their practice in the future (two previously not sustaining facilities and one newly not sustaining facility) (see Table 5).
Two of these facilities explained that while they had lost necessary staffing, they were in the process of or planning to replace those staff in order to re-implement in the future (Engaging: Key Stakeholders > Not Sustained).
"We recently hired a new Provider and are in the processes of getting her setup with [service] access/equipment." (Facility 2_IF02b)

Table 5 Plans to Re-implement in Not Sustaining Facilities: Number (Percent) of Facilities

Secondary Outcomes

The following sections describe results from secondary outcomes, which were used to contextualize the primary outcome. Of note, there was a high level of missing data for the secondary outcome questions; our branching logic omitted secondary outcome questions for facilities that did not have their practice in place, i.e., did not re-implement or sustain, including two facilities that were reclassified from Not Sustained to Sustained: COVID-Hold (see Tables 6 and 7, Footnote §). As a result, only practice effectiveness and practice institutionalization are presented below. The branching logic is illustrated in Figure 1; reclassification of outcomes is illustrated in Figure 2.

Practice Institutionalization
Overall, there was a high level of concordance (96%) between sustainment and institutionalization outcomes (see Table 6). In addition, two of the three facility representatives that reported partial institutionalization also reported partial sustainment, reflecting initial concordance; however, those two facilities were reclassified from Partially Sustained to Sustained: COVID-Hold during analysis (see Table 6, Footnote † and Figure 2).
Though less frequent, three facilities had discordant sustainment and institutionalization outcomes. The qualitative data from the survey provided additional context to explain some of the reasons for this discordance. For example, the facility representative that reported partial sustainment (see above) reported the practice was institutionalized where the practice was in use, but it was only in use "in some specialty clinics, palliative care and hospice" (see Table 6, Footnote *). Another facility representative reported the practice was sustained but not institutionalized; though the practice was in use where it was initially implemented, they stated "we want it to expand" (Facility 4_IF02a) (see Table 6, Footnote ‡).

Table 6 Concordance

* Practice was not in use across all services, but was institutionalized where it was in place (Facility 3_IF04)
† Lack of concordance for 2/3 facilities due to reclassification of sustainment outcome (Facilities 2_IF07a, 4_IF05); the third facility (Facility 4_IF09c) did not have any qualitative data to contextualize the responses
‡ Practice was sustained but had not spread (Facility 4_IF02a).
§ Branching logic omitted the institutionalization question for facilities that never implemented/did not sustain, including for two facilities that were reclassified from Not Sustained to Sustained: COVID-Hold.

Practice Effectiveness
Of the 29 facilities sustaining their practice, 23 representatives (79.3%) reported the practice was demonstrating effectiveness (see Table 7). They reported using a variety of measures appropriate to their practices to track effectiveness, including patient-level (e.g., clinical measures, satisfaction rates), employee-level (e.g., turnover rates), and system-level metrics (e.g., time and cost savings). For example, one facility representative reported their practice led to a "decrease[d] LOS [length of stay for patients in the hospital] and higher patient satisfaction scores." (Facility 4_IF07b).
One representative (N = 29, 3.4%) reported the practice was partially demonstrating effectiveness, stating they had received feedback from employees that the practice was not fully meeting their needs and that they were considering adapting the practice to make it more effective at their facility (Facility 2_IF07a) (see Table 7, Footnote *). Two representatives (N = 29, 6.9%) reported the practice was not demonstrating effectiveness; one reported the practice "was found to be ineffective with our nontraditional patient population" and they were "transitioning to new presentation and process" (Facility 4_IF09c), while the other reported they were "not tracking" and therefore were not able to demonstrate effectiveness (Facility 4_IF02a) (see Table 7, Footnote ‡).

* This facility received feedback from employees and was considering adapting the practice to make it more effective at their facility (Facility 2_IF07a)
† This facility did not provide any qualitative data on this question (Facility 3_IF04)
‡ One facility found the practice to be ineffective (Facility 4_IF09c) and the other was not tracking (Facility 4_IF02a)
§ Branching logic omitted the effectiveness question for facilities that never implemented/did not sustain, including for two facilities that were reclassified from Not Sustained to Sustained: COVID-Hold. The final facility in this category was also a COVID-Hold facility.

Discussion
With growing attention on sustainment of EBPs, there is a need for clarity in defining and measuring sustainability versus sustainment. Given that funding often ends before longer-term sustainment can be assessed, it is important for researchers to develop pragmatic sustainment measures that can be used when there are fewer resources and incentives for participants. As part of an ongoing evaluation of the VHA DoE, we developed and administered a pragmatic survey to assess ongoing sustainment across diverse practices. The relatively high response rate (over 60%) and logical responses suggest the survey achieved several pragmatic features: it was short, easy to understand, and applicable across a wide range of practices [13,20].
Survey results indicated a high rate (over 70%) of practice sustainment among responding facilities, which suggests that the VHA DoE is a promising large-scale model of diffusion. Sustainment rates increased across Cohorts 1–4, with later cohorts reporting higher rates of sustainment than earlier cohorts. Ongoing enhancements made to VHA DoE processes over time (e.g., refining methods to select Promising Practices, better preparing facilities for implementation) may have helped improve sustainment rates. It is also possible that the lower rates in Cohorts 1–2 (2016 and 2017) highlight challenges to sustainment over longer periods. However, only two additional facilities discontinued their practice in the year prior to the survey, and these were part of Cohort 3 (2018). Future sustainment surveys with these and new cohorts will help build understanding about changes over time and factors that help or hinder ongoing sustainment. Our ability to continue following these practices is a unique strength of this evaluation.
There were several important lessons learned that will improve our ongoing evaluation efforts and subsequent surveys. First, our primary measure failed to capture nuance in the data related to practices being temporarily on hold. Our survey was administered during the COVID-19 pandemic, during which the CDC issued guidance to cancel and/or reschedule non-essential in-person healthcare. As a result, several respondents used the open text boxes to explain that their practice was in place but on hold during the pandemic, and that they planned to resume operations in the future. However, facility representatives were not consistent in how they answered the primary question; responses ranged from sustained to partially sustained to not sustained. Based on open text explanations, we reclassified some responses as Sustained: COVID-Hold (see Figure 2).
Though temporary holds were common in our evaluation due to the pandemic, EBPs may be paused for a variety of reasons that do not necessarily indicate discontinuation and lack of sustainment. For example, two facility representatives reported their practice was not sustained because they lost employees, but they were in the process of re-hiring; in effect, though the reason was different, these practices were on hold, similar to practices paused by the pandemic. It is important to note that turnover and gaps in staffing align with a key finding from our earlier work: when implementation and sustainment are achieved via the efforts of a single key employee, it is impossible to reliably sustain the practice when that person leaves or simply takes vacation [18].
In the future, we will add responses to capture whether the practice has been discontinued permanently or is temporarily not in use/not in place. In addition to better fitting the data, this refinement allows the measure to be used at any time point from initial adoption to sustainment; although adoption, implementation, and sustainment are defined differently based on the measurement point, they all assess whether the innovation is being used or delivered. This refinement further shortens the survey by eliminating the need for a follow-up question about re-implementation of the practice.
Second, the sustainment literature often conflates sustainment determinants with sustainment outcomes. Table 1 lists measures conceptualized as outcomes in the literature that were included in our survey. However, if a facility representative reported the practice was not in use (our primary outcome), many of the secondary outcomes were not applicable to that facility. For example, if a practice was not in use, asking whether the practice was demonstrating effectiveness would be illogical; continued effectiveness is a determinant of successful sustainment, not an outcome. Since we did not include secondary outcomes for those who reported they were not sustaining their practice, there was a high rate of missing data for these items by design. Future versions of the survey will reconceptualize Items 3–7 in Table 1 as sustainment determinants.

Item 2 (Practice Institutionalization) was correlated with our primary sustainment outcome (Item 1). Goodman and Steckler define institutionalization as the "long-term viability and integration of a new program within an organization" [24]. Institutionalization is conceptualized as a deeper, more mature form of sustainment, in which the practice is fully routinized and embedded into clinical practice rather than relying on the effort of a single person [25]. Basic sustainment (whether a practice is in use) would be a prerequisite for practice institutionalization.

Finally, Item 8 (Practice Spread/Diffusion) will be conceptualized as a diffusion outcome. Rogers defines diffusion as "the process through which an innovation […] spreads via certain communication channels over time" [26] within and across organizations [27]. Survey respondents may report sustainment within their own setting with or without diffusion to additional sites. A key goal for the DoE is to widely diffuse effective practices across clinical settings within and outside VHA.
Third, we used "please explain" as a prompt for our open text boxes to provide respondents with an opportunity to contextualize their experiences. However, the information they provided often focused on the rationale for the response rather than the barriers and facilitators that led to their reported outcome. For example, when a facility representative reported a practice was sustained, they provided a rationale for their answer (e.g., all core components were in place) rather than a description of facilitators that allowed them to sustain their practice (e.g., continued funding). Changing this prompt to "Why?" and reconceptualizing Items 3–7 of our survey as sustainment determinants (see above) will more directly assess relevant barriers and facilitators.
Fourth, we will add a sustainability question (i.e., eliciting prospects for continued sustainment) to the survey for all respondents. Although we asked not sustaining facilities a prospective question about plans to re-implement, we did not ask sustaining facilities a prospective question about continued sustainment. Our previous work indicated that predictions of sustainment were relatively accurate [18]. Sustainment is dynamic and may ebb and flow over time; those working most closely with the practice are best positioned to assess prospects for future sustainment as well as anticipated barriers. Low ratings of sustainability could provide an opportunity for early interventions to stave off future failure to sustain.

Limitations
There are several limitations to this evaluation. First, missing data may inflate or skew our sustainment results. The rate of missing data generally decreased with each new cohort, which may be a function of shorter time periods elapsed since initial implementation. Nonetheless, we plan to continue including nonresponding facilities in future surveys until they have been lost to follow-up for three years. Second, the branching logic for facilities that did not sustain their practice resulted in a low number of responses to our secondary outcomes by design. We plan to modify future surveys to include determinants of sustainment (Items 3–7 in Table 1) regardless of sustainment status.

Figure 1 Survey Branching Logic