The success of both system-wide innovations and evidence-based practices depends on implementation strategies that effectively promote adoption, sustainment, and dissemination at scale (1–3). As articulated in the Expert Recommendations for Implementing Change (ERIC), one promising strategy is to “model and simulate change” (p. 6). Given the rapid growth of simulation modeling in the health sciences (4–6), there are increasing calls for its greater use in implementation science to promote evidence-informed decision-making (7, 8). However, at its core, simulation modeling is a quantitative method widely used as an analytic strategy. To facilitate its use as an implementation strategy, the current paper presents Rapid-cycle Systems Modeling (RCSM), a three-step, cyclical method designed to realize the benefits of simulation modeling for implementation science. Specifically, we describe the evidence and theory underlying the two major components of RCSM: (1) the simulation model itself, and (2) the process of stakeholder engagement necessary to realize its full potential as an implementation strategy. We then present a case study to demonstrate the utility of RCSM for implementation.
Simulation modeling to promote evidence-informed decision-making
Despite rapid growth in some fields, simulation modeling remains under-utilized, especially in implementation science (4–6). One clear barrier is the lack of familiarity with simulation modeling among core constituencies. As one paper noted (9), “clinicians and scientists working in public health are somewhat befuddled by this methodology that at times appears to be radically different from analytic methods, such as statistical modeling, to which the researchers are accustomed,” (p. 123S). Simulation modeling represents a way of thinking that differs from the inductive logic underlying most empirical methods. Rather than beginning with observed data and then generating inferences, simulation modeling typically involves “reasoning to the best explanation,” a form of logic known as abduction that was first described by the pragmatist philosopher Charles Sanders Peirce and is common throughout all branches of science (10, 11).
Notably, one form of simulation modeling is already widely accepted by health researchers: power analysis. By definition, research studies are intended to investigate areas of scientific uncertainty; yet this uncertainty creates challenges for developing a priori study designs. Prior to clinical trials, for example, researchers gather evidence to inform assumptions regarding expected treatment effect, consider their risk preferences regarding Type I and Type II errors, and apply statistical expertise to estimate optimal sample size. Often, researchers consider a range of plausible effect sizes that are consistent with available evidence and risk preferences (e.g., 90% or 80% power). Ultimately, researchers settle on the power calculation deemed most appropriate and use it to justify and inform decisions regarding sample size.
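To make the analogy concrete, the sketch below illustrates power analysis as simulation for a hypothetical two-arm trial: the trial is repeatedly simulated under an assumed effect size, and power is estimated as the proportion of simulated trials that reject the null hypothesis. The effect size, alpha level, and sample sizes are assumptions chosen for illustration and are not drawn from any study cited here.

```python
# A minimal sketch of power analysis by Monte Carlo simulation (illustrative
# parameters only): simulate a two-arm trial many times under an assumed
# standardized effect size and count how often a t-test rejects the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_power(n_per_arm, effect_size=0.4, alpha=0.05, n_sims=5000):
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)          # control arm outcomes
        treated = rng.normal(effect_size, 1.0, n_per_arm)   # treated arm outcomes
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / n_sims

# Explore how estimated power varies across candidate sample sizes.
for n in (50, 100, 150):
    print(n, round(simulated_power(n), 3))
```

As with any power calculation, the usefulness of the result depends entirely on whether the assumed effect size and design are consistent with prior evidence.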
In similar ways, simulation models of many kinds can support evidence-informed decision-making for implementation of system-wide innovations. Indeed, we argue that implementation scientists should not expect a system-wide innovation to realize a net benefit within a given context without first ensuring that the assumptions of their implementation design are consistent with prior evidence and that potential risks are acceptable. Such judgments can be meaningfully informed by simulation modeling. Furthermore, simulation modeling can inform the implementation process by broadening consideration of candidate implementation strategies (e.g., by linking to fields such as operations research), deepening the search for implementation barriers and facilitators (e.g., by considering dynamic complexity and policy resistance), and facilitating outcome evaluations (e.g., by identifying full cascades of potential effects—both intended and unintended).
Simulation modeling as an analytic strategy
Simulation modeling offers a flexible approach to synthesizing research evidence and applying it to a range of decisions necessary for system-wide innovations. To cite one example, a recent systematic literature review was conducted to inform a state-level effort to implement screening for adverse childhood experiences (ACEs) in pediatric settings (12). Whereas meta-analysis synthesizes evidence across multiple studies to estimate a single parameter (e.g., prevalence or screening sensitivity), simulation modeling offers the flexibility to synthesize disparate forms of evidence while considering distal outcomes. In this case, the authors analyzed potential implications of screening implementation by applying available research evidence to a simple simulation model of the clinical pathway from detection to intervention. Results demonstrated that extant evidence is consistent with a wide range of scenarios in which implementation of ACEs screening induces anything from modest decreases in demand for services to very large increases. While available evidence was found to be insufficient to support precise predictions, results highlighted the importance of monitoring demand and attending to workforce capacity, as well as the potential to leverage existing datasets to address evidence gaps in operational outcomes following screening implementation.
The process of simulating possible implementation scenarios holds an additional benefit: simulation often promotes insight. Although the term is seldom defined or operationalized, modelers often use “insight” to refer to lessons learned regarding the causal determinants of a given problem (13–15), the net value of and/or tradeoffs inherent in potential solutions (15–17), unrecognized evidence gaps (15), unexpected results (16), or sensitivity to the metrics used to measure outcomes (16). Notably, in none of these instances does “insight” refer to a precise estimate or a statement of truth, as is the typical goal of inductive and deductive logic, respectively. Instead, all provide examples of lessons that support abductive logic, often through careful examination of underlying assumptions.
Concretely, the act of simply writing out all the parameters required to specify even a simple simulation model begins to make explicit the assumptions that underlie expectations. For example, simulating the number of patients who will require treatment after implementing a screening program minimally requires estimates of underlying prevalence, screening tool accuracy (e.g., sensitivity and specificity), and the probability that referrals will be offered and completed. Identifying underlying assumptions can thus reveal important evidence gaps, highlighting the minimal amount of evidence required to understand a system. In the words of one famous modeler (18), “uncertainty seeps in through every pore” (p. 828), even for seemingly simple problems. In particular, system-wide innovations generally enjoy an evidence base that is less robust than that for clinical interventions, which are more often subject to randomized trials and are more easily standardized (19).
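The sketch below illustrates how writing out these parameters makes assumptions explicit and how uncertainty in each one propagates to downstream demand. All parameter ranges, and the size of the screened population, are invented for illustration; they are not the evidence synthesized in the ACEs example above.

```python
# A minimal sketch of the screening cascade described above, with hypothetical
# parameter ranges: each scenario draws prevalence, screening accuracy, and
# referral probabilities from plausible ranges and propagates them to the number
# of patients expected to enter treatment.
import numpy as np

rng = np.random.default_rng(0)
population = 10_000        # screened population (assumed)
n_scenarios = 10_000

demand = []
for _ in range(n_scenarios):
    prevalence  = rng.uniform(0.10, 0.30)   # underlying prevalence
    sensitivity = rng.uniform(0.70, 0.95)   # true cases that screen positive
    specificity = rng.uniform(0.80, 0.98)   # non-cases that screen negative
    p_referred  = rng.uniform(0.50, 0.90)   # screen-positives offered a referral
    p_completed = rng.uniform(0.30, 0.80)   # offered referrals that are completed
    true_pos  = population * prevalence * sensitivity
    false_pos = population * (1 - prevalence) * (1 - specificity)
    demand.append((true_pos + false_pos) * p_referred * p_completed)

lo, mid, hi = np.percentile(demand, [5, 50, 95])
print(f"new treatment entries: {lo:.0f} (5th pct), {mid:.0f} (median), {hi:.0f} (95th pct)")
```

Even this toy cascade makes visible which assumptions must be justified by evidence and how wide the range of plausible outcomes remains when that evidence is thin.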
Moreover, consideration of underlying assumptions can facilitate understanding of alternative strategies that target different points in a larger system. For example, a simulation model designed to understand clinical decision-making following behavioral screening suggested multiple strategies for improving early detection, including not only screening but also audit-and-feedback to improve error rates and integrated behavioral health services to facilitate referrals and reduce the perceived cost of false-positive results (20).
Equally important, simulation models can reveal implicit assumptions that are inconsistent or contradictory (21). For example, one might assume that as long as capacity to provide treatment exceeds demand, waitlists should not present a problem. However, even the simple simulation model described above was capable of demonstrating complex interactions between supply and demand, including how waitlists can emerge despite significant capacity (22). A missed appointment, for instance, can expend an hour of a treatment provider’s time (if they cannot immediately schedule another patient) while simultaneously adding to the waitlist (assuming the patient reschedules). Thus, it may not be enough to offer more treatment hours: mechanisms to manage missed appointments might also be considered during implementation planning. Waitlists are a classic operations research problem; as Monks (22) argues, simulation modeling forms the foundation of operations research, which in turn can address logistical problems and optimize healthcare delivery.
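The sketch below illustrates this mechanism with invented parameters: weekly appointment slots exceed weekly referrals, yet missed appointments, each of which consumes a slot while returning the patient to the queue, leave a waitlist that grows over time.

```python
# A minimal sketch (illustrative parameters only) of how a waitlist can grow even
# when nominal capacity exceeds demand: no-shows consume provider time while the
# patient remains on the list.
import random

random.seed(1)
capacity_per_week = 22     # appointment slots per week (assumed)
arrivals_per_week = 20     # new referrals per week (assumed; below capacity)
no_show_rate = 0.15        # probability a booked appointment is missed (assumed)

waitlist = 0
for week in range(52):
    waitlist += arrivals_per_week
    booked = min(capacity_per_week, waitlist)
    completed = sum(1 for _ in range(booked) if random.random() > no_show_rate)
    # Completed visits leave the list; no-shows used a slot but stay on the list.
    waitlist -= completed
print("waitlist after one year:", waitlist)
```

With these assumed values, effective throughput is roughly 22 × 0.85 ≈ 18.7 completed visits per week against 20 referrals, so the backlog climbs despite apparent surplus capacity; strategies that reduce no-shows can matter as much as adding treatment hours.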
At a deeper level, simulation models can help address foundational assumptions of the statistical models employed when planning and evaluating system-wide interventions. As Raghavan (23) argues, prevailing conceptual models for system-wide interventions are typically multidimensional and complex, often positing mutual interactions between variables at different socioecological levels (e.g., sociopolitical, regulatory and purchasing agency, organizational, interpersonal). Many of these relationships involve reciprocal causation—i.e., when two variables are each a cause of the other. Whereas most inferential statistics based on the general linear model fail to address reciprocal causation—in fact, they assume it does not exist (24–26)—simulation models address reciprocal causation through the concept of feedback loops, in which a change in one variable propagates through associated variables in ways that either amplify the original change (reinforcing loops) or counteract it (balancing loops; 27). System dynamics—a field of simulation modeling with a strong focus on feedback loops—suggests that we, as implementation scientists, ignore reciprocal causation at our peril. Dynamically complex systems marked by reciprocal causation, feedback loops, time delays, and non-linear effects often exhibit policy resistance—that is, situations where seemingly obvious solutions do not work as well as intended, or even make the problem worse (28). Examples of systems-level resistance to innovations are common, such as the historic trend toward larger, more severe forest fires in response to fire suppression efforts or the rapid evolution of resistant bacteria in the face of widespread use of antibiotics. As Sterman (28) points out, the consequences of interventions in dynamically complex systems are seldom evident to those who first implement them. Simulation modeling offers a quantitative method to uncover and address the underlying assumptions of system-wide interventions, thus facilitating the identification of potential implementation barriers (e.g., feedback loops driving adverse outcomes) early in the planning process. In this way, simulation modeling can refine “mental models”—humans’ internal understandings of an external system—which are often both limited and enduring (29). For example, the ACEs screening model (12) demonstrates the potential for treatment capacity to be influenced through balancing and/or reinforcing feedback loops involving waitlists and staff burnout—both of which introduce the potential for dynamic complexity and policy resistance. Simulation modeling thus offers an opportunity for careful reflection about the complex dynamics in which many interventions function as elements of the systems they are designed to influence (30).
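To illustrate how such feedback loops can be expressed quantitatively, the sketch below implements a toy stock-and-flow model of the loop between waitlists and staff burnout noted above. The parameters and functional forms are invented for illustration and are not drawn from the ACEs screening model.

```python
# A minimal, hypothetical stock-and-flow sketch of a reinforcing loop: a growing
# waitlist raises staff burnout, burnout erodes effective capacity, and lower
# capacity lets the waitlist grow further (a source of policy resistance).
arrivals = 20.0              # new referrals per week (assumed)
base_capacity = 22.0         # completed visits per week with a healthy workforce (assumed)
waitlist, burnout = 40.0, 0.0   # initial backlog and burnout level (assumed)

for week in range(1, 105):
    effective_capacity = base_capacity * (1 - 0.6 * burnout)   # burnout erodes throughput
    completed = min(effective_capacity, waitlist)
    waitlist += arrivals - completed                            # stock: patients waiting
    pressure = min(1.0, waitlist / 150.0)                       # backlog pressure on staff
    burnout += 0.1 * (pressure - burnout)                       # burnout adjusts with a delay
    if week % 26 == 0:
        print(f"week {week:3d}: waitlist={waitlist:6.1f}, burnout={burnout:.2f}")
```

Because burnout responds to the backlog with a delay, the system can appear to improve for a time before tipping into sustained growth, which is precisely the kind of behavior that is difficult to anticipate from a static mental model and easy to misattribute once it emerges.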
Simulation modeling as an implementation strategy
As an analytic strategy, simulation modeling can help synthesize a range of available evidence applicable to a given implementation challenge while making underlying assumptions explicit. But analysis is only half the battle. If assumptions appear solely in the “fine print” of a model’s computer code, they are unlikely to be understood, interrogated, or challenged by other stakeholders. Engagement is needed to realize simulation modeling’s full value. Here, we argue that to be an effective implementation strategy, simulation modeling is best implemented in the context of cultural exchange—i.e., an in-depth process of negotiation and compromise between stakeholders and model developers (31). In turn, stakeholder participation can improve the analytic value of the models themselves. Concretely, making assumptions explicit through simulation models allows for their refinement and critique through dialogue between researchers and stakeholders, including clarification of their frequently divergent assumptions, sources of evidence, and priorities.
The importance of engagement in the modeling process has empirical support. Decision-makers have endorsed the “co-production” of simulation models, citing the insights gained, the desirability of simulating the effects of proposed interventions prior to implementation, and the identification of evidence gaps (32). The process of negotiation and compromise while co-producing models has been found to influence decision-makers’ attitudes, subjective norms, and intentions (33), which help achieve alignment and promote community action (34, 35). These findings are consistent with observations in management science from over 50 years ago (22, 36), as well as recent research on cultural exchange theory demonstrating that dialogue, negotiation, and compromise between scientists and implementers can directly contribute to implementation success (31).
Consistent with contemporary epistemology, this perspective on modeling suggests that application of the scientific method is not sufficient to prevent bias or error and that findings are imbued with theory and values that are influenced by social context (37, 38). As a remedy, theories of situated knowledge advocate for “critical interaction among the members of the scientific community [and] among members of different communities” (39) as the best way to discern scientific assumptions and address their potential consequences. Consistent with this focus, system dynamics is explicitly intended to help scientists uncover hidden assumptions and biases (40) based on recognition of the limits of traditional research methodologies as well as the observation that “we are not only failing to solve the persistent problems we face, but are in fact causing them” (28, p. 501). Recognizing the benefit of uncovering hidden assumptions and biases in our scientific understandings holds profound implications, shifting our translational efforts from uptake of research evidence alone to promoting the bidirectional exchange of evidence, expertise, and values (41).
To facilitate cultural exchange of this kind, RCSM emphasizes dialogue among all relevant stakeholders (e.g., decision-makers, model developers, researchers). Dialogue theory describes different forms of relevant interactions (42). For example, shared inquiry is initially necessary to gain a mutual understanding of available evidence and relevant priorities. As stakeholders develop opinions about possible implementation strategies and their implications, critical discussions can ensue about their relative merits, using the simulation model as an interrogation guide. Finally, when the time and cost of further critical discussions outweigh their benefits, a simulation model can guide deliberations about how implementation should proceed and be monitored and evaluated. The effectiveness of the simulation model can thus be assessed by its relevance to implementation decisions, the insight it elicits, and its utility for further planning.
However, there is not enough concrete guidance on how to promote engagement with simulation models to support implementation efforts. To fill this gap, RCSM uses an approach similar to group model building (GMB), which is a process of engagement with system dynamics models and systems thinking that is well-suited to facilitate use of simulation modeling in implementation science (43). Several GMB principles are conceptualized as core attributes of RCSM. Both are “participatory method[s] for involving communities in the process of understanding and changing systems…” (44, p. 1), both emphasize scientific uncertainty and the questioning of assumptions, and both focus on collaboration between stakeholders and simulation modelers across multiple stages, from problem formulation to generating consensus regarding strategies for intervention (45). However, use of GMB in implementation science has been limited. Building on GMB, RCSM targets the needs of implementers by focusing on rapid cycles that can fit within short policy windows. Moreover, RCSM is not limited to system dynamics, but is open to any form of simulation modeling that can usefully address decision-makers’ questions with transparency. For example, whereas the screening example described above involved a Monte Carlo simulation, other types of models are also possible, including microsimulation, agent-based modeling, Markov modeling, and discrete-event simulation. At its core, RCSM is a pragmatic approach that is designed to be responsive to decision-makers’ needs.
Case Study: Rapid-cycle Systems Modeling (RCSM) of trauma-informed screening
RCSM involves a process of iterative, stakeholder-engaged design to test the assumptions that underlie system-wide innovation and implementation. Consistent with traditions in evidence-based medicine that derive from decision analysis, RCSM recognizes the need for the best available scientific evidence, the expertise to address scientific uncertainty in the application of that evidence, and stakeholder values to define model scope and purpose and to weigh tradeoffs between competing outcomes (46). To accomplish these goals, each cycle of simulation modeling in RCSM involves three steps: (1) identify and prioritize stakeholder questions, (2) develop or refine a simulation model, and (3) engage in dialogue regarding model relevance, insights, and utility for implementation. This final step can inform prioritization of stakeholder questions for future cycles of RCSM.
To explain its rationale and demonstrate its use, we report an illustrative example of an initial cycle of RCSM conducted with state-level decision-makers seeking to promote trauma-informed screening programs for children and adolescents ("youth") in foster care. In response to federal legislation, U.S. states have been working to implement trauma-informed screening and evaluation for children in foster care over the past decade (47, 48). This case example builds on prior studies investigating mid-level administrators’ use of research evidence when enacting statewide innovations for youth in foster care (3, 48).