The theory of change of the CHORD intervention is that active, culturally congruent engagement of patients with prediabetes by trained CHWs, through individualized goal setting, educational coaching, supplementation of PC visits, and facilitated referrals, will support positive lifestyle changes and prevent the onset of diabetes. However, interventions are unlikely to effect lifestyle change if they are not implemented with fidelity. This study conducted a concurrent process evaluation of the CHORD trial intervention to examine the extent to which the study team successfully implemented each of the program's core components (implementation fidelity). The study also informed implementation efforts by examining the factors that influenced the extent of implementation (fidelity moderators).
Our analysis demonstrated moderate to high rates of implementation fidelity in this trial. CHWs completed an intake with nearly 80% of the patients enrolled in the intervention arm, and three of the four core components (goal setting, PC visit, and education) were delivered to nearly 80% of patients. Although the facilitated-referral component was delivered to only 45% of patients, a quarter received 3 or more referrals. While coverage and content adherence were moderate to high for the four components, we found high variability in the dosage of these core components. Compared with the minimum of 6 required encounters per patient in each of the intensive and maintenance phases, the median total number of successful encounters across both phases was only 5 (IQR: 3-8). In addition, nearly 66% of patients completed the intensive phase of the intervention, with an average follow-up time of more than a year. Given the planned intervention duration of 1 year, the observed average follow-up time is a measure of implementation success.
In a complex intervention, there is potential for deviations from the planned program. Lifestyle interventions, such as the one being tested by the CHORD trial, are inherently complex because they involve multiple dimensions, comprise diverse interacting components, and target several organizational levels. This makes their success highly dependent on a multitude of real-world variables.[1, 2] These real-world variables can precipitate deviation of the 'intervention as delivered' from the 'intervention as designed'. Using multiple regression models, we assessed whether implementation fidelity varied by two of the five moderating factors described in the modified CFIF,[1, 5] a contextual factor (the site of implementation, BH vs. VA) and a measure of patient activation (PAM score). While the implementation site moderated several measures of content adherence and dosage, patient activation was not associated with fidelity.
Our study used the PAM score to measure participant responsiveness. In light of evidence that patient activation ("the skills and confidence that equip patients to become actively engaged in their health care") contributes to positive health outcomes, the null findings in our process evaluation were unexpected. However, these findings might be explained by emerging evidence indicating that interventions tailored to a patient's level of activation can build skills and confidence, thereby increasing patient activation. The components of the CHORD trial were tailored to the patient activation measure at the time of intake, which might have affected patients' activation and engagement throughout the intervention. Therefore, a comparison of the quality of implementation by baseline PAM score might not reveal any differences. It is also important to note that the PAM score at intake might not be directly associated with patients' perceptions of the relevance of an intervention, which has been suggested to directly moderate implementation fidelity by affecting patient engagement. Hence, the intake PAM score may be a limited representation of 'participant responsiveness' as defined in the CFIF, and of the related concept in Rogers' diffusion of innovations, which posits that "the uptake of a new intervention depends on its acceptance by those receiving it". A mid- or post-intervention PAM measure, or a qualitative assessment of patients' engagement, could have provided a more complete assessment of participant responsiveness in the CHORD trial.
Two moderators of implementation fidelity beyond those measured in our study have been suggested in the CFIF: 'comprehensiveness of policy description' and 'strategies to facilitate intervention'.[1, 25] Interventions that are simple and described in specific detail are more likely to be implemented with high fidelity than those that are vague. In addition, support strategies such as the "provision of manuals, guidelines, training, and monitoring and feedback for those delivering the intervention" may moderate fidelity by optimizing and standardizing intervention delivery, which is particularly critical for complex interventions. Although we did not measure these moderators quantitatively, our description of the CHORD intervention can guide program administrators interested in replicating the intervention by documenting the approaches taken to facilitate optimal delivery of its core components.
Limitations
First, one inherent feature of pragmatic trials that can pose challenges in accurately measuring implementation is that the intervention usually does not start or end on fixed days. Because CHWs maintain ongoing contact with their patients, some intervention components, such as referrals, were delivered outside the intervention period; although the CHORD protocol specified that referrals be provided during the intensive phase, CHWs made some referrals even during the maintenance phase when needed. For the purpose of this analysis, these components were considered part of CHORD implementation if their information was recorded in the standard study data forms completed by CHWs. Second, fidelity measures can be intervention specific; therefore, the measures used in this study might not directly translate to other complex interventions. However, in the absence of detailed guidance about appropriate indicators for fidelity measures and moderators, this study adds to our knowledge base on the diversity of indicators that future evaluations can apply. Third, one important aspect of implementation is how well participants engage with the intervention. While we measured the delivery of core components from the perspective of CHWs, we could not assess the extent to which the delivered components were received by patients. Fidelity is influenced both by those delivering an intervention and by those receiving it. Three dimensions of fidelity have been conceptualized: 1) fidelity of delivery concerns delivering an intervention per protocol; 2) fidelity of receipt refers to participants' understanding of the intervention and their capacity to apply the information or skills; and 3) fidelity of enactment reflects the extent to which participants apply the delivered information or skills in actual situations.
Our analysis was not able to parse out these three dimensions.
Study strengths and contributions to implementation science
Fidelity assessment is key to understanding the reasons for the success or failure of interventions, yet very few studies systematically document and report the implementation processes of their intervention programs.[3, 5] The CHORD protocol built in the collection of data on several fidelity measures and moderating factors, allowing us to conduct a concurrent rather than retrospective process evaluation. Concurrent process evaluations are important because they capture implementation experiences in real time and ensure that the theory behind the intervention is accounted for during evaluation. In addition to reporting a concurrent process evaluation of a pragmatic complex trial, the study responds to general calls for quantitative evaluations of fidelity in intervention studies. The use of real-time data reported by the key implementers, the CHWs, adds to the validity of our analysis.
Another major contribution of our study is in empirically testing the CFIF. The framework provided a useful tool for conceptualizing and organizing measures of fidelity and their moderators. These findings, when evaluated alongside data on CHORD's outcomes, can help address questions about the relative importance of the measures of implementation fidelity described in the framework, and can also provide information on the predictive validity of fidelity measures. Our process evaluation also highlights the limitations of the CFIF for selecting measures of potential moderators. To standardize quantitative fidelity assessments, the field will benefit from further guidance on how to measure and apply factors that can moderate implementation fidelity.