The widespread implementation of healthcare innovations usually takes many years; many initiatives fail [3] or are never translated into practice [31]. We used RE-AIM to evaluate the national scale-up of ESCAPE-pain, a complex healthcare intervention for knee and hip OA, by England’s 15 Academic Health Science Networks (AHSNs).
Since 2014, ESCAPE-pain has been adopted in over 110 sites, reaching over 9000 people with knee or hip OA. Although it is delivered predominantly by physiotherapists, a growing number of exercise professionals now deliver the programme. It has been adopted across an expanding range of settings beyond the original model tested in the trial (from NHS outpatient departments to non-clinical community venues), and by a diverse range of providers and funding arrangements. This shows that complex healthcare interventions of this kind can be scaled up successfully into similar types of settings and professions, and “spread out” into different contexts [3, 5].
Monitoring demonstrates that ESCAPE-pain has been scaled up; however, it is important to determine exactly what has been implemented (i.e. fidelity) and whether it is effective (i.e. delivering intended outcomes) [32, 33]. The approach to measuring intervention fidelity and quality within the national scale-up process has been pragmatic. A mandatory 1-day training course was developed as a strategy to help ensure that ESCAPE-pain was implemented with fidelity (i.e. by building the knowledge and skills needed to implement and deliver the programme) [34]. In addition, all sites implementing ESCAPE-pain were required to self-report compliance with the programme’s core components; sites that do not report compliance are not considered to be delivering ESCAPE-pain. Participants’ adherence levels were comparable with those observed in the original trial [23, 24].
Individual-level long-term follow-up data are not collected as part of the AHSNs’ national programme, which means that it is not possible to determine whether the self-management strategies and subsequent benefits delivered by ESCAPE-pain are maintained by participants. At an organisational level, the number of sites continuing to deliver the programme is high, suggesting it is largely sustained in practice settings once implemented. There is debate in the literature about what constitutes sustainability (e.g. continued delivery of intervention components, extent of integration, realisation of outcomes, duration) [8, 33]. In the case of ESCAPE-pain, the majority of sites are < 1 year post-implementation; therefore, the extent of long-term sustainability remains to be seen.
As interventions move from highly resourced, controlled research conditions into ‘real world’ settings, there is a risk that effectiveness will be reduced if the intervention’s essential core components are implemented incorrectly [33, 35]. Therefore, it is important to continue monitoring the effectiveness of interventions as they are implemented in different contexts [12, 13, 33]. Critically for ESCAPE-pain, ongoing data collection demonstrated that the programme’s effectiveness has been maintained as it spread from a controlled, cloistered trial setting into very different ‘real world’ clinical and community settings.
The systematic, ongoing monitoring of scale-up demonstrated by the AHSNs for ESCAPE-pain is uncommon [12, 13, 33]. However, data collection has been difficult because staff (both clinical and non-clinical) at sites often lack systems for routinely collecting data, have little time, and may be unable or reluctant to collect data. Although the AHSNs have created systems to ease the burden of collecting and analysing data, there is no way of enforcing data return. Consequently, not all sites returned data, and not all datasets returned were complete, which limits reporting of the scale-up.
Other limitations are that implementation outcomes relied on self-reported and indirect measures (e.g. compliance with the core components of ESCAPE-pain, numbers trained, facilitators’ ability to implement and deliver ESCAPE-pain). However, impartial observation of implementation across a large number of geographically dispersed sites was not feasible. Whilst these measures do not guarantee the programme was implemented and delivered with fidelity or quality, they provided a pragmatic approach to monitoring. A further challenge is ensuring that, as the number of sites expands, the systems and processes underpinning monitoring (e.g. data collection, quality controls, analysis and reporting) remain rigorous and sustainable (i.e. feasibly resourced) [12, 13].