Setting. The analysis reported here was part of a larger evaluation of strategies for implementing a culturally appropriate, individualized lupus DA (17). The DA provides information about the disease; the comparative efficacy and safety of treatment options, including their risks and benefits and how to manage them; reproductive health issues; and the risks and benefits of glucocorticoids (18). In a previous multicenter randomized trial with a diverse sample of lupus patients, most of whom were from racial/ethnic minority backgrounds with low socioeconomic status, the lupus DA was superior to a lupus educational pamphlet in reducing decisional conflict about the use of immunosuppressive medications for the treatment of lupus, and showed high acceptability and feasibility (16). Lupus exemplifies significant race/ethnicity-associated disparities in morbidity and mortality: incidence, prevalence, poor outcomes, dialysis dependence, and mortality are all much higher among African Americans and Hispanics than among Whites (19–22).
Using a purposive sample, 15 rheumatology clinics were identified through the professional network of the principal investigator and invited to participate based on their geographic distribution throughout the United States and their capacity to meet study criteria (e.g., commitment to use the DA for the study duration, ability to recruit the minimum number of patients to view the DA). We used a combination of standardized and customized implementation strategies to implement the DA, which was designed to educate lupus patients about their treatment options and help them engage in more shared decision making with their physicians. All clinics used standardized implementation strategies that were provided uniformly by the research team (e.g., training on use of the DA, designation of a clinic champion, and a refresher training course). In addition, each clinic could choose from a ‘menu’ of implementation strategies that could be customized to the clinic. These customized strategies were directed at both clinic personnel and patients: clinic-targeted strategies focused on integrating the DA into existing work processes, while patient-targeted strategies focused on raising patients’ awareness of the DA and educating them about it. This approach gave clinics the flexibility to choose activities that matched their unique needs and capabilities. Clinics could employ any or all of the available customized strategies or rely solely on the standardized strategies. Table 1 lists both the standardized and customized strategies.
Data sources. Our analysis used data from two sources. First, the number of implementation strategies used was collected from the 15 clinics as part of the DA implementation process. Prior to initiating DA use at each clinic (6 months following the start of the study), two members of the research team (JS and LH) met virtually with the implementation team at each clinic. The clinic implementation teams included the site principal investigator (a rheumatologist), the clinic champion, nurses, and office staff. These virtual meetings served two purposes: to summarize the perceived barriers to implementing the DA identified during the formative interviews described above (23), and to select implementation strategies that the implementation team believed were feasible in their clinic and would be effective at facilitating implementation of the DA. The study team used the Expert Recommendations for Implementing Change (ERIC) (24), a compendium of implementation strategies and definitions developed through a modified Delphi process with implementation science experts and clinicians, to identify a candidate set of implementation strategies for each clinic based on its specific barriers.
As noted earlier (Table 1), all clinics received the standardized strategies and were free to add customized strategies; a higher total number of strategies therefore indicates that a clinic adopted more customized strategies. We recorded the implementation team’s initial selection of strategies. Approximately three months after initiating the DA at each clinic, we asked the clinic champion to verify these strategies, adding any that were subsequently adopted and removing any that were never used. We used this second, verified list in our quantitative analysis.
The second data source was an Internet-based survey of clinic personnel, including physicians, physician assistants, nurse practitioners, nurses, medical assistants, administrators, and administrative staff. The survey was administered at three time points over the study period: 4 months after the DA was first implemented in each clinic (4-months post), 12 months after implementation (12-months post), and 24 months after implementation (24-months post). These time points corresponded to the calendar periods September 2019 – February 2020, May 2020 – October 2020, and May 2021 – October 2021, respectively. Response rates for the three survey waves were 56.5% (109 responses / 192 surveys sent) at 4-months post, 46.5% (87 responses / 187 surveys sent) at 12-months post, and 48.5% (68 responses / 134 surveys sent) at 24-months post. Table 2 provides a more detailed description of the response rates by site for each survey period.
Outcomes and Outcome Measures: We assessed the following clinic personnel outcomes at 4-, 12-, and 24-months post-implementation of the lupus DA: acceptability (4-item Acceptability of Intervention Measure (AIM) (25)), appropriateness (4-item Intervention Appropriateness Measure (IAM) (25)), and feasibility (4-item Feasibility of Intervention Measure (FIM) (25)). All responses were recorded on a 5-point scale ranging from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). For each outcome, the scores on the individual items were averaged to create one composite mean scale score (range 1–5), with higher scores reflecting a better outcome, i.e., greater acceptability, appropriateness, or feasibility. The outcomes were constructed separately for each survey period.
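For concreteness, the composite scoring just described is a simple item mean; the sketch below is illustrative only, and the example item ratings are hypothetical rather than study data.

```python
def composite_score(item_ratings):
    """Average the individual 1-5 item ratings into one 1-5 scale score."""
    return sum(item_ratings) / len(item_ratings)

# A respondent answering 4, 5, 4, 4 on the four AIM items scores 4.25.
composite_score([4, 5, 4, 4])
```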
Moderating climate measures. The primary climate variables examined in this study were four summated scales based on multiple Likert-type items from the web-based survey. Two dimensions of readiness to implement change were assessed (26). Change commitment was operationalized as the average of five survey items, each measured on a 5-point scale ranging from 1 = Disagree to 5 = Agree. Similarly, change efficacy was operationalized as the average of seven survey items measured on the same 5-point scale. To assess the appropriateness of aggregating individual responses to the clinic level, we calculated three multi-item interrater agreement statistics: the intraclass correlation coefficient (ICC(1)), within-group agreement (rwg(j)), and average deviation (ADm(j)) (27–29). Collectively, these statistics indicated a sufficient degree of consensus within clinics and supported the decision to use a direct consensus composition model (30); the final step therefore entailed averaging the individual responses to the clinic level.
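As an illustration of one of these agreement indices, the rwg(j) statistic for a multi-item scale can be sketched as below. This is a minimal sketch, not the study's code; it assumes the conventional uniform null distribution, whose expected variance for A response anchors is (A² − 1)/12.

```python
import numpy as np

def rwg_j(ratings: np.ndarray, n_anchors: int = 5) -> float:
    """Within-group agreement rwg(j) for one clinic.

    ratings: raters x items matrix of Likert responses.
    Null hypothesis of no agreement: uniform over the response anchors.
    """
    n_items = ratings.shape[1]
    var_null = (n_anchors**2 - 1) / 12.0          # expected variance under the uniform null
    var_obs = ratings.var(axis=0, ddof=1).mean()  # mean observed item variance
    agreement = n_items * (1 - var_obs / var_null)
    return agreement / (agreement + var_obs / var_null)

# Three raters in perfect agreement on a 3-item scale give rwg(j) = 1.0.
rwg_j(np.array([[4, 4, 4], [4, 4, 4], [4, 4, 4]]))
```

Values near 1 indicate strong within-group consensus; by convention, values of about 0.70 or higher are taken to support aggregation.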
The clinic learning climate was also measured along two dimensions (31). The internal learning climate was operationalized using eight survey items measured on a 7-point scale (1 = Strongly disagree, 7 = Strongly agree), and the external learning climate using four survey items measured on the same 7-point scale. The aggregation statistics again supported aggregating individual responses to the clinic level, so the final step again entailed averaging the individual responses. Given these operationalizations, the learning and readiness climates can be considered structural elements of the clinics.
Covariates: The analysis included three clinic covariates that reflected structural, cultural, and general staffing attributes of the clinic. Clinic specialty was a dichotomous variable that indicated whether a clinic included only a single medical specialty (i.e., rheumatology; 0) or multiple medical specialties (1). Ownership was also a dichotomous variable that indicated whether a clinic was owned by a university/part of an academic medical center (0) or not owned by a university (1). Clinic size was operationalized as a series of dummy variables reflecting three categories: 1–10 clinic employees (referent); 11–30 clinic employees; and more than 30 employees.
The analysis also accounted for four clinic personnel attributes. The number of years of experience working in the clinic was included as a continuous variable. Respondent age was operationalized as a series of dummy variables reflecting four categories: 19–24 years of age (referent); 25–44 years of age; 45–64 years of age; and 65 years of age and older. Likewise, respondent education was operationalized as a series of dummy variables reflecting four categories: high school (referent); 4-year degree; professional degree (e.g., MBA, MSW); and doctorate (e.g., MD, PharmD). Role in the clinic was accounted for with dummy variables reflecting three categories: physician (referent); other clinician; and administration. Finally, we controlled for changes over time with dummy variables corresponding to the three survey waves: 4-months post-implementation of the DA (referent); 12-months post-implementation; and 24-months post-implementation.
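The dummy coding of categorical covariates with a dropped referent level can be sketched in pandas; the variable names and responses below are illustrative assumptions, not study data.

```python
import pandas as pd

# Hypothetical clinic roles for four respondents.
roles = pd.Series(["physician", "other clinician", "administration", "physician"])

# Fixing the category order makes "physician" the dropped referent level.
role_cat = pd.Categorical(
    roles, categories=["physician", "other clinician", "administration"]
)
role_dummies = pd.get_dummies(role_cat, drop_first=True)
# Physicians (the referent) score 0 on both remaining indicator columns.
```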
Statistical Analyses: We estimated summary statistics, including means and standard deviations for continuous variables and frequency counts and percentages for categorical variables. We used random-intercept mixed-effects regression models with an unstructured covariance structure to account for the clustering of clinic staff within clinics while examining the association of the total number of implementation strategies (main model 1), and of the number of clinic-focused vs. patient-focused implementation strategies (main model 2), with clinic personnel's perceptions of DA acceptability, appropriateness, and feasibility, adjusting for all covariates listed above. We report beta estimates and 95% confidence intervals (CIs) for these associations.
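A random-intercept model of this kind can be sketched with statsmodels' MixedLM on simulated data; every variable name, effect size, and the simulation itself are illustrative assumptions, not study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clinics, staff_per_clinic = 15, 8

clinic = np.repeat(np.arange(n_clinics), staff_per_clinic)
# One strategy count per clinic (shared by all of its staff).
n_strategies = np.repeat(rng.integers(1, 8, n_clinics), staff_per_clinic)
# Clinic-level random intercepts induce within-clinic clustering.
clinic_effect = np.repeat(rng.normal(0, 0.3, n_clinics), staff_per_clinic)
acceptability = (3.5 + 0.05 * n_strategies + clinic_effect
                 + rng.normal(0, 0.4, n_clinics * staff_per_clinic))

df = pd.DataFrame({"clinic": clinic, "n_strategies": n_strategies,
                   "acceptability": acceptability})

# A random intercept per clinic accounts for staff clustered within clinics.
fit = smf.mixedlm("acceptability ~ n_strategies", df, groups=df["clinic"]).fit()
beta = fit.params["n_strategies"]
ci_low, ci_high = fit.conf_int().loc["n_strategies"]
```

The full study models would additionally adjust for the covariates described above by adding them to the model formula.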
We also examined the moderating effects of the two organizational climate factors by including multiplicative interaction terms between the number of implementation strategies and each organizational climate construct. Each construct was modeled with two covariates reflecting its two subdimensions (i.e., change commitment vs. change efficacy; internal vs. external learning). Because of our modest sample size, and to facilitate more straightforward interpretation, we examined the moderating influence of each organizational climate factor in separate models (i.e., one set of models with the organizational readiness for change climate as moderator and one set with the learning climate as moderator). Also to facilitate interpretation, we present the results of our moderation analysis as marginal effects, estimating the predicted value of each outcome (acceptability, appropriateness, and feasibility) at the mean of each organizational climate variable and at one standard deviation above and below the mean (e.g., low, intermediate, and high change commitment climate) for different numbers of implementation strategies (e.g., 1, 3, 5, and 7 strategies). A p-value < 0.05 was considered statistically significant for all analyses. The study was approved by the [institution blinded for review purposes] Institutional Review Board (Protocol #: 300002272).
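The marginal-effects presentation reduces to evaluating the fitted interaction model at chosen values of the moderator; the sketch below uses hypothetical coefficients and climate values, not the study's estimates.

```python
# Hypothetical fitted coefficients for:
#   outcome = b0 + b1*strategies + b2*climate + b3*strategies*climate
b0, b1, b2, b3 = 3.50, 0.05, 0.20, 0.04

def predicted_outcome(n_strategies: float, climate: float) -> float:
    """Model-implied outcome for a given strategy count and climate score."""
    return b0 + b1 * n_strategies + b2 * climate + b3 * n_strategies * climate

climate_mean, climate_sd = 4.0, 0.5
levels = {"low": climate_mean - climate_sd,
          "intermediate": climate_mean,
          "high": climate_mean + climate_sd}

# Predicted outcome at each climate level for 1, 3, 5, and 7 strategies.
margins = {level: [predicted_outcome(n, c) for n in (1, 3, 5, 7)]
           for level, c in levels.items()}
```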