The Perils of a “My Work Here is Done” Perspective: A Mixed Methods Evaluation of Sustainment of an Intervention to Improve Quality of Transient Ischemic Attack Care

Dawn M Bravata (Dawn.bravata2@va.gov), Richard L Roudebush VA Medical Center; Edward J Miech, Richard L Roudebush VA Medical Center; Laura J Myers, Richard L Roudebush VA Medical Center; Anthony J Perkins, Indiana University School of Medicine; Ying Zhang, Indiana University School of Medicine; Nicholas A Rattray, Richard L Roudebush VA Medical Center; Sean A Baird, Richard L Roudebush VA Medical Center; Lauren S Penney, South Texas Veterans Health Care System; Charles Austin, Richard L Roudebush VA Medical Center; Teresa M Damush, Richard L Roudebush VA Medical Center


Background
Sustaining quality improvement has been recognized as a key implementation challenge for healthcare systems. (1)(2)(3)(4) Sustainment involves ingraining the processes that were successful during active implementation into routine workflow so that ongoing facilitation is no longer required to maintain quality improvement. (5) Sustainment is a key element of the Learning Healthcare System model, an approach in which "clinical informatics, incentives, and culture are aligned to promote continuous improvement and innovation, with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by-product of the delivery experience." (6) The Protocol-Guided Rapid Evaluation of Veterans Experiencing New Transient Neurological Symptoms (PREVENT) program was a multi-component quality improvement intervention to improve the quality of transient ischemic attack (TIA) care. (7) The PREVENT intervention was designed in alignment with the Learning Healthcare System model. (6,7) The objectives of this evaluation were to examine the degree to which quality improvement was sustained after the end of active implementation and to identify the factors that contributed to sustainment. This manuscript adheres both to the Standards for Reporting Implementation Studies (StaRI) statement (8) and the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) guidelines. (9)

Methods

This sustainment assessment was a pre-planned component of a five-year, augmented stepped-wedge implementation trial at six sites where active implementation was initiated in three waves, with two facilities per wave. (10,11) The design has been described elsewhere. (7,12) Six control sites were matched to each of the six PREVENT active implementation sites on the basis of TIA patient volume, facility complexity (e.g., teaching status, intensive care unit level), and quality of care (measured by the without-fail rate, described below). The total number of control sites was 36. The definition of the study periods (baseline, active implementation, and sustainment) for each control site was identical to the definition used for the matched PREVENT active site. The study was registered with clinicaltrials.gov (NCT02769338) and received human subjects (institutional review board [IRB]) and Department of Veterans Affairs (VA) research and development committee approvals.

Context
In the VA, quality measurement and systems redesign for many clinical processes are integrated into the healthcare system within administration and clinical operations. (13,14) Although stroke care quality metrics are reported, there is no VA system-wide focus on TIA care quality. We confirmed at baseline that none of the participating facilities had existing TIA care-focused quality improvement activities.

Quality Improvement Intervention
The design of the PREVENT intervention (7) was informed by baseline quality of care data, (15) staff interviews, (16) existing literature, (17)(18)(19)(20) and validated electronic quality measures. (15,21) The PREVENT intervention consisted of five components: clinical programs, data feedback, professional education, electronic health record tools, and quality improvement support including a virtual collaborative. (7) The PREVENT intervention targeted facility clinical staff, not patients. The composition of the facility teams varied, (22) but generally included neurology, emergency medicine, nursing, pharmacy, and radiology. Some teams also included hospitalists, primary care providers, education staff, telehealth staff, or systems redesign staff. (22)

Implementation Strategies

The implementation of the PREVENT intervention and its evaluation were guided by two complementary frameworks: the Consolidated Framework for Implementation Research (CFIR) and the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework. (23,24) PREVENT employed a bundle of three primary implementation strategies: (1) team activation via audit and feedback, reflecting and evaluating, planning, and goal setting; (2) external facilitation (EF); and (3) building a community of practice. (25,26) In addition, PREVENT allowed for local adaptation of the intervention components. External facilitation was provided by the study team to participating site team members, primarily by a nurse with experience in quality improvement and a physician with experience in both cerebrovascular disease and quality improvement. (27) The design of the PREVENT intervention highlighted three aspects of a Learning Healthcare System: learning from data, learning from each other, and sharing best practices. (7)

Intervention: Active Implementation

Active implementation began with a kickoff during which the facility team: (a) explored their local performance data to identify processes of care with the largest gaps in quality for the greatest number of patients; (b) identified barriers to providing the highest quality of care; (c) described potential solutions to address those barriers; and (d) developed a site-specific action plan. The PREVENT web-based hub provided process and outcome data, allowing teams to interact with their own site's performance data to explore hypotheses and to monitor performance over time. (26) The teams joined monthly PREVENT virtual collaborative conferences (conducted by telephone or videoconferencing) during which they shared progress on action plans, articulated goals for the next month, and reviewed any new evidence or tools. (25)

Sustainment Period
The sustainment period began immediately after the end of the one-year active implementation period and ended on September 30, 2019 for all sites; its duration therefore varied by site according to the stepped-wedge design, ranging from 4.5 to 13.7 months. Each site's sustainment period was launched by a "promotion" ceremony during the monthly collaborative conference during which: (1) facility leadership (e.g., chief of staff) presented the site team members with a certificate of recognition and praised them for their accomplishments during the active implementation period; (2) a colleague from one of the other participating PREVENT facilities described what they had learned from that site over the preceding year; and (3) the clinical champion from the site reflected on their experiences during active implementation and their plans for sustainability.
During sustainment, site team members could continue to participate in the monthly collaborative conferences and retained full access to the PREVENT hub, but they no longer provided monthly goal updates.
The key difference between active implementation and sustainment was that external facilitation was no longer initiated by the study team. For example, if the study team observed a decrement in a site's care quality during active implementation, the external facilitator would reach out to the site to offer support and encouragement. During sustainment, however, facilitation was provided only in response to explicit requests by the site team members. For example, the facilitator would answer questions about individual cases and take the opportunity to remind the requestor to visit the data hub to review site-level data and shared resources. Regular reporting of process and outcomes data, as well as the use of "Plan, Do, Study, Adjust" cycles, have been shown to enhance sustainability; (28) the PREVENT program included each of these elements. Moreover, sustainment was explicitly included as a topic during three of the monthly collaborative calls (e.g., "Using Systems Redesign and Lean Concepts: Sustainability and Best Practices"), including discussions about how to incorporate key processes into existing structures and the value of engaging with performance data for reflecting and evaluating, goal setting with feedback, and planning.

Sustainment Outcome
We retrospectively identified Veteran patients with TIA who were cared for in the ED or inpatient setting (7) based on primary discharge codes. (21) The primary outcome was the facility-level "without-fail" rate, an "all-or-none" measure of quality of care. (29) It was defined as the proportion of Veterans with TIA at a specific facility who received all of the processes of care for which they were eligible from among seven processes of care: brain imaging, carotid artery imaging, neurology consultation, hypertension control, anticoagulation for atrial fibrillation, antithrombotics, and high/moderate potency statins. (7,12,21) The processes of care were assessed with electronic health record data using validated algorithms. (15,21) The consolidated measure of quality, reported at the facility level, described the number of patients who received a process ("passes") divided by the number of patients who were eligible for a process ("opportunities"). The 90-day all-cause mortality rate (defined as death from any cause within 90 days of presentation for the index event) was obtained from the VA Vital Status File. (30) Process of care data were obtained from the VA Corporate Data Warehouse (CDW), which included inpatient and outpatient data files in the five years pre-event to identify past medical history, (31) healthcare utilization, and receipt of procedures (Current Procedural Terminology [CPT], Healthcare Common Procedure Coding System [HCPCS], and ICD-9 and ICD-10 procedure codes). CDW data were also used for vital signs, laboratory data, allergies, imaging, orders, medications, and clinical consults. Fee-Basis data were also used to identify inpatient and outpatient healthcare utilization and medical history.
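The distinction between the without-fail rate (all-or-none, at the patient level) and the consolidated measure (passes divided by opportunities) can be illustrated with a minimal sketch. The patient records below are hypothetical; in the study, eligibility and passes were derived from validated electronic health record algorithms.

```python
from collections import defaultdict

# Hypothetical patient records: for each patient, the set of processes of
# care the patient was eligible for and the set actually delivered.
patients = [
    {"site": "A", "eligible": {"brain_imaging", "antithrombotics"},
     "passed": {"brain_imaging", "antithrombotics"}},
    {"site": "A", "eligible": {"brain_imaging", "statin"},
     "passed": {"brain_imaging"}},
    {"site": "B", "eligible": {"neurology_consult"},
     "passed": {"neurology_consult"}},
]

def facility_rates(patients):
    """Facility-level without-fail rate (proportion of patients receiving
    ALL eligible processes) and consolidated measure (total passes divided
    by total opportunities)."""
    wf = defaultdict(lambda: [0, 0])    # site -> [patients passing all, total patients]
    cons = defaultdict(lambda: [0, 0])  # site -> [passes, opportunities]
    for p in patients:
        wf[p["site"]][1] += 1
        if p["eligible"] <= p["passed"]:  # all eligible processes delivered
            wf[p["site"]][0] += 1
        cons[p["site"]][0] += len(p["eligible"] & p["passed"])
        cons[p["site"]][1] += len(p["eligible"])
    return ({s: n / d for s, (n, d) in wf.items()},
            {s: n / d for s, (n, d) in cons.items()})

without_fail, consolidated = facility_rates(patients)
# Site A: 1 of 2 patients received every eligible process (without-fail 0.5),
# while 3 of 4 individual opportunities were met (consolidated 0.75).
```

Note how a single missed process drops a patient out of the without-fail numerator entirely, which is why the all-or-none rate is more stringent than the consolidated measure.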

Mixed Methods Assessment of Sustainment
We employed a mixed-methods design to evaluate sustainment of the implementation of the complex PREVENT intervention with prospective data collection from multiple sources. The intervention sought to activate staff involved in TIA care to improve facility-wide care processes.

Quantitative Analysis
The outcome rates were compared across sites. Fisher's exact test was used to compare whether categorical variables differed between the PREVENT sites and matched control sites as well as between baseline and implementation periods. Two-sample t-tests or Wilcoxon Rank Sum tests were used to test whether continuous outcomes differed between the PREVENT intervention and matched control sites as well as between baseline and implementation periods.
Generalized mixed-effects models at the patient level, with random effects for site and fixed effects for wave and period (baseline, active implementation, sustainment), were used to analyze the PREVENT intervention effects. (32) Separate risk adjustment models were constructed for each process of care, for the without-fail rate, and for the consolidated measure. Fully risk-adjusted models included site, wave, and the specific patient characteristics that were associated with the particular outcome of interest (e.g., the without-fail rate). Variables used in the risk-adjusted models are presented in Supplemental Table C. All analyses were performed using SAS Enterprise Guide version 7.11.
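For illustration only (the study's analyses were performed in SAS), a model with this structure, a site random intercept plus fixed effects for wave and period, can be sketched with statsmodels on simulated data; here a linear mixed model is used as a simple stand-in for the generalized models described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated patient-level data (hypothetical): pass_all = 1 if the patient
# received all eligible processes, i.e., the without-fail outcome.
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "pass_all": rng.integers(0, 2, n),
    "site": rng.choice([f"S{i}" for i in range(6)], n),
    "wave": rng.choice(["1", "2", "3"], n),
    "period": rng.choice(["active", "baseline", "sustainment"], n),
})

# Fixed effects for period and wave; random intercept for site.
model = smf.mixedlm("pass_all ~ C(period) + C(wave)", df, groups=df["site"])
fit = model.fit()
# fit.fe_params holds the intercept plus period and wave contrasts;
# the coefficient on the sustainment period, relative to the reference
# category, is the quantity of interest in the primary analysis.
```

A fully risk-adjusted version would add the patient characteristics from Supplemental Table C as covariates, and a logistic link (as in a generalized mixed model) would be preferred for a binary outcome.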
The primary analysis compared the average without-fail rate during sustainment to the without-fail rates during the baseline and active implementation periods among the PREVENT implementation sites, adjusting for wave and site variations. The one-year baseline period was defined as the 12 months prior to the baseline site visit at each of the participating facilities. The one-year active implementation period began one month after the site's kickoff, providing one month for site teams to initiate QI activities before assessment of quality and outcomes began. As described above, the sustainment period began immediately after the active implementation period, and its duration varied by site according to the stepped-wedge design, ranging from 4.5 to 13.7 months. We also compared the change in the without-fail rate between the PREVENT sites and matched control sites to ameliorate the potential influence of temporal trends in care that may confound the assessment of the intervention effect within the stepped-wedge design. Secondary analyses included an assessment of sustainment for the consolidated measures of quality, the seven individual processes of care, and 90-day outcomes (i.e., recurrent stroke, all-cause mortality, and recurrent event rates).

Qualitative Analysis
Qualitative data sources included semi-structured interviews, observations, field notes, and Fast Analysis and Synthesis Template (FAST) facilitation tracking. (33) Interviews were conducted in person during site visits or by telephone at baseline and at 6 and 12 months after active implementation. Key stakeholders included staff involved in the delivery of TIA care, their managers, and facility leadership; we also accepted "snowball" referrals from key stakeholders. Upon receipt of verbal consent, interviews were audio-recorded. The audio-recordings were transcribed verbatim. Transcripts were de-identified and imported into NVivo 12 for data coding and analysis. Using a common codebook, two team members independently coded identical transcripts for the presence or absence of CFIR constructs, as well as magnitude and valence for four CFIR implementation constructs (i.e., Goals & Feedback, Planning, Reflecting & Evaluating, and Champions). Valence (+ or -) was scored for each construct if it was present and influencing the implementation of PREVENT at that site. (34,35) Magnitude was scored as 2 if it had a strong influence on PREVENT implementation, 1 if it had a weak or moderate effect, and 0 if it had a neutral effect. The evaluation team conducted formal debriefings after each kickoff, site visit, and collaborative call. These observations were recorded and transcribed for analyses and informed the CFIR construct scoring. We conducted site case comparisons on CFIR-related factors.
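The valence-and-magnitude scoring scheme described above lends itself to simple cross-case summaries. The sketch below uses hypothetical construct ratings (signed valence times magnitude, so values range from -2 to +2) to show one way site profiles could be tabulated for comparison; the site names and scores are illustrative only.

```python
# Hypothetical CFIR construct ratings per site: valence (+/-) combined with
# magnitude (0, 1, or 2), following the scoring rules described above.
ratings = {
    "Site A": {"Goals & Feedback": +2, "Planning": +1,
               "Reflecting & Evaluating": +2, "Champions": +2},
    "Site C": {"Goals & Feedback": -1, "Planning": 0,
               "Reflecting & Evaluating": -2, "Champions": +1},
}

def site_profile(scores):
    """Summarize one site's construct ratings for cross-case comparison."""
    return {
        "strong_positive": [c for c, v in scores.items() if v >= 2],
        "negative": [c for c, v in scores.items() if v < 0],
        "total": sum(scores.values()),
    }

profiles = {site: site_profile(s) for site, s in ratings.items()}
```

Sorting such profiles by total score, or inspecting which constructs are strongly positive versus negative, is one way a matrix of coded constructs can feed the kind of cross-site case comparison reported in the Results.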

Results
The six PREVENT sites were geographically diverse, including sites in the West, Northeast, Southeast, and Midwest. The annual TIA patient volume varied across PREVENT and control sites and across periods, ranging from 10 to 67 (Table 1).
The patient characteristics were similar between patients cared for in the PREVENT sites and the matched control sites (Supplemental Table A). Within the PREVENT and control sites, the characteristics of patients were also similar for those cared for during the baseline, active implementation, and sustainment periods (Supplemental Table A).
Overall, the observed without-fail rate at PREVENT sites improved from 36.7% at baseline to 54.0% during active implementation and settled at 48.3% during sustainment. At control sites, in contrast, the observed without-fail rate improved from 38.6% at baseline to 41.8% during active implementation and continued to 43.0% during sustainment (Figure 1). Quality of care varied considerably across sites and across time periods (Tables 1 and 2). During sustainment, the without-fail rate improved at three sites, declined at two sites, and remained unchanged at one site. The without-fail rate during sustainment was not related to baseline or active implementation rates. In adjusted analyses, although the without-fail rate improved at PREVENT sites compared with control sites during active implementation, no statistically significant difference in quality was identified across sites during sustainment (Table 2; Supplemental Table B).
The mixed methods thematic analyses identified two core activities that existed at sites where quality of care improved during sustainment: (1) integrating processes of care into routine operations and (2) reflecting on and evaluating performance data to plan quality improvement activities or respond to changes in quality. A key challenge during sustainment was competing demands from new facility quality priorities.
Cross-site case comparisons indicated that strong clinical champions played an important role by embedding the targeted processes of care into their organization during active implementation to ensure sustainment. Reflecting on and evaluating quality performance was common among facilities that improved during sustainment. In addition, sites that improved during sustainment scored highly on the CFIR construct of cosmopolitanism (e.g., voluntarily and intentionally reaching out to work with others outside of their own environments), whereas cosmopolitanism was not observed among sites with unchanged or declining quality during sustainment.
Promoting Sustainment: Integrating processes of care into routine operations

The site teams worked to embed TIA quality improvement into a broad range of existing processes or structures of care at their site. For example, two sites embedded TIA quality of care as an ongoing topic within their existing stroke team structure and activities. Three sites described how they provided regular updates to facility leadership about quality of care within the context of regularly scheduled meetings. Sites with improvements in quality during sustainment embedded PREVENT activities within the fabric of routine workflows: "…it's not so much that people are thinking of it as part of PREVENT, it's just that's just what we do for patients." [Site B_2] And from a second site that improved during sustainment: "…we changed the process, and so it's not something that I have to monitor every day." [Site F_5] Several interviewees described automatization of PREVENT processes such that they no longer required active attention or work.
"I think the biggest impact was…allowing things to be able to be somewhat on autopilot because there's little that the clinicians have to do other than implement it or rather admit the patient. You know it flows very consistently with the existing process." [Site F_2] "It was pretty well hard wired into our practices before we graduated the program. As we went through it was minor tweaks and changes so it's already a regular deep-seated process for us." [Site F_3]

Promoting Sustainment: Value of examining performance data regularly to identify and respond to changes in quality

The sites that improved or maintained quality during sustainment used performance data to check on their status. This habit of status monitoring with performance data, formed during active implementation, persisted into sustainment and was performed by the clinical champion.
"It was good to see the trending of our data points…when we had our group meetings and you know we would discuss those numbers…" [Site A_1] At one site, the champion used the data to engage front-line staff: "…sometimes it's surprising like oh I didn't know we were like this this last time I looked down and I looked at the Primary Care 30-day visit, and I was wondering oh wow, I didn't know that was kind of lower than the national average for that…that kind of has given me something to think about to bring to like you know Neurology or like the hospitalist folks that I deal with on different kind of projects." [Site D_1] Quality of care data at the facility level allowed champions to identify patterns in care quality that might otherwise be "invisible" to them (e.g., care provided by other services): "By having the dashboard, it helps me capture patients that I myself would not necessarily have known about." [Site B_2] In contrast, team members at sites with declining quality of care during sustainment reported that they believed PREVENT was integrated into existing structures or policies at the facility but did not actively examine performance data during sustainment, and they were therefore unaware of evidence of decrements in performance.
"I think that all of the things that we did do are still in place today. So that's really good." [Site C_2] "I guess that I haven't been as hands on with like the sustainability portion of the study. Since we haven't met as a team since we graduated, I can't really speak on how well that we've sustained. I still think that we're all taking care of TIA patients and things like that, but as far as looking over the seven criteria as a team together, we haven't done that…Probably just not paying attention I guess." [Site C_3] "…the changes that we have implemented have been sustained. So the same changes have been maintained throughout the facility after the graduation." [Site E_2]

Barriers to Sustainment: Difficulties using performance data to inform quality improvement

Respondents at sites without an improvement in quality during sustainment identified two challenges related to using performance data for quality improvement: (1) the difficulty of interpreting pass or fail rates in the setting of a low volume of patients, and (2) coding problems that made it difficult to discern whether without-fail rates reflected genuine systemic issues with patient care or were spurious.
"…the biggest barrier that we've faced is just coincidence in numbers. This year, our TIA numbers have actually been less than in past years…The most that we're able to do is review the cases and see…where things went wrong and what we could do to fix it…it's really hard to identify patterns when the pattern is like we have one patient to review. Like sometimes it's a little easier when you see like hey, if we had like ten patients, we'd be like all right. Eight were fine, but the pattern of the people who did have issues was this. But I think that that's what's really been the toughest part." [Site C_1] Concerns about miscoding made data more challenging to interpret: "The weakness of it [local PREVENT] is again, like there have been a few cases where I think that either their TIA was mislabeled and we didn't like push or emphasize it strongly enough…" [Site E_1]
During active implementation, some site team members identified relatively infrequent coding issues which nonetheless influenced their performance data. For example, a patient coded as having atrial fibrillation and identified as failing the anticoagulation-for-atrial-fibrillation metric may not actually have had atrial fibrillation, in which case it was appropriate not to have prescribed an anticoagulant. Given that the without-fail metric was based on administrative data, coding errors required chart review for identification and working with facility coders for remediation. During the active implementation period, external facilitators conducted chart review for patients cared for at participating sites and helped local site teams interpret the performance data provided in the hub in the context of the chart review information; this assistance was not routinely available during sustainment.

Barriers to Sustainment: Competing Demands
Respondents across sites identi ed the problem of competing demands on time as the major threat to sustainment.
"I mean I think that it's a great program. You know. To come together with like a common goal and actually see how good the results can be. I think that it's an awesome program. I just wish that there was a way for us to I guess keep everything going. I mean I know that we were still keeping things going, but it's like they'll come to us and say we're trying to improve CHF care, and then we'll do that, and then it's like going on to the next project." [Site E_3] "I haven't been able to do anything with the sustainability portion, like I said, just because there are a ton of other projects, and I don't want it to sound like that I'm giving an excuse, but there is literally like a million other things that management focuses on or that comes up in the ED." [Site C_3] However, some clinical teams at sites that successfully sustained their quality were able to overcome competing demands by modifying their culture. By ingraining the processes involved in both providing and monitoring high quality care into routine practices, oversight required less active engagement, and the issue of competing demands therefore became less salient.
"…we changed the process, and so it's not something that I have to monitor every day. … So I think that the culture, and also we changed the culture." [Site F_5]

Discussion
This study provides an example of an intervention that successfully improved quality of care during active implementation but where performance during the sustainment period was heterogeneous, with some sites improving but others declining in care quality. This site-to-site variability during sustainment provided an opportunity to examine factors which promoted sustainment as well as barriers to sustainment.
Neither the baseline quality of care nor performance during active implementation predicted performance during sustainment. The two factors most consistently and robustly associated with sustainment were integration of intervention processes into routine care at the facility and instituting practices for ongoing review of and reflection upon performance data. Both activities appear to be necessary but not sufficient for successful sustainment.
For example, sites where team members reported their perception that processes were embedded into routine practice, but did not report ongoing evaluation of performance data, declined in performance during sustainment, despite team members' impressions that they were doing very well in terms of quality of care. In these cases, the success of the active implementation phase led the team members to believe, erroneously, that their work was completed and that they no longer needed to actively monitor their performance data. Our prospective, mixed methods approach, which allowed us to examine quantitative change in performance alongside qualitative interview data, highlighted the perils that arise when clinical teams perceive their quality work to be complete after achieving quality improvement during active implementation.
Although the PREVENT intervention included some elements that focused on sustainment, they were insufficient to ensure sustainment across all participating sites. This study provides additional data to support the use of external facilitation for quality improvement, given that its withdrawal during sustainment led to declines in quality at some sites. (36) Future implementation studies may wish to explicitly examine alternative strategies to enhance sustainment, including application of two key lessons learned from PREVENT: specifically, facilitating the incorporation of the intervention into routine practice and ongoing review of and reflection upon performance data appear to be fundamental to program sustainment. Future studies should examine whether incorporating the intervention into routine practice mitigates the issue of competing demands during sustainment. Future studies should also examine whether technological approaches to implementing a Learning Healthcare System might support sustainment; for example, automated alerts when changes in quality performance are observed may trigger teams to reflect and evaluate during critical periods to maintain quality improvements.
Several limitations of the PREVENT program merit description. First, because of the stepped-wedge design, the last site enrolled in the final wave had the shortest time during sustainment (6 months versus 12 months for other sites). Although this site maintained its sample size, changes in practices or quality over time might not have been observed. Second, PREVENT was implemented only within VA facilities, which may limit its generalizability; future research should evaluate its effect when implemented in non-VA sites where the healthcare infrastructure (e.g., electronic health record and quality improvement resources) may differ.

Figure 1. Change in TIA Quality of Care: Intervention versus Matched Control Sites. The primary quality of care measure was the without-fail rate, calculated as the proportion of patients who received all of the processes for which they were eligible among the seven key processes of care (brain imaging, carotid artery imaging, neurology consultation, hypertension control, anticoagulation for atrial fibrillation, antithrombotics, and high/moderate potency statins).

Supplementary Files
This is a list of supplementary files associated with this preprint.