Study design
In a pragmatic three-arm parallel cluster-randomized controlled trial with assessment at baseline and at 12-month follow-up, we compared a simple feedback approach with two higher intensity feedback approaches: basic assisted feedback and enhanced assisted feedback. In contrast to explanatory trials, which aim to assess whether an intervention works under well-defined and highly controlled conditions, pragmatic trials aim to assess whether an intervention is effective under ‘real-life conditions’ that are more complex and less controllable and predictable [35]. Although the primary study outcome focused on the involvement of care aides in formal team communications about resident care, the intervention was directed at care unit managerial teams: care managers, the director of care, and persons who assist them (e.g. clinical educators). We targeted managers because they have the power and ability to change the organizational structures and processes required to facilitate care aide involvement in these formal communications. Nursing homes are made up of one or more care units, the organizational subunits where residents receive care from care teams. Quality of care can vary substantially across care units within a nursing home, and Norton et al. [36] demonstrated that improvement strategies targeted at the care unit level are more powerful than those at the facility level. We performed cluster randomization (i.e. all units within a nursing home were randomized into the same study arm) to prevent contamination (i.e. spread of the intervention to non-intervention units). Our study intervention was designed to help care teams increase care aide involvement in formal team communications about resident care, and thereby to improve quality of communication, worklife for care staff, and resident care.
INFORM was registered (ClinicalTrials.gov Identifier: NCT02695836) and the trial protocol published [26]. We followed CONSORT reporting guidelines for cluster-randomized trials [37].
Hypotheses
- Basic and enhanced assisted feedback on care aide involvement in formal communications about resident care to managerial teams will increase (a) care aide involvement in formal team communications, (b) quality of resident care, and (c) care aide quality of worklife more strongly than simple feedback.
- Enhanced assisted feedback on care aide involvement in formal communications about resident care to managerial teams will increase (a) care aide involvement in formal team communications, (b) quality of resident care, and (c) care aide quality of worklife more strongly than basic assisted feedback.
Setting
Nursing homes participating in TREC (Translating Research in Elder Care, a longitudinal program of applied health and care research) took part in this study [27]. At INFORM inception, TREC included 75 urban nursing homes with 9,613 residents across the Western Canadian provinces of Alberta and British Columbia, randomly selected from the overall urban nursing home population and stratified by health region, ownership type, and size. TREC facilities participate in a longitudinal observational study that generates a comprehensive dataset of resident, care staff, care unit, and facility outcomes. We used data from our 09/2014–05/2015 wave of primary data collection to assess baseline outcomes, then carried out our INFORM intervention in the subset of TREC facilities participating in INFORM, and then carried out our 01/2017–12/2017 wave of primary data collection to assess follow-up outcomes.
Participants
Table 1 lists inclusion and exclusion criteria for facilities and care units. Only nursing homes that participated in the TREC observational study were included because our trial outcomes were available for these facilities. Only facilities with at least one care unit with 10 or more care aide responses on the TREC baseline survey were eligible, to ensure stable, valid, and reliable aggregation of study outcomes at the unit level [38]. We excluded facilities if we were unable to assign surveys to their care units, which made unit-level analyses impossible in those sites. We only included care units with an identifiable care manager or leader. At baseline, 4,641 care aides and 1,693 nurses cared for a total of 8,766 residents in 67 eligible nursing homes.
[Table 1]
Randomization and masking
To avoid contamination, we randomized at the facility level. All care unit managerial teams within a facility received the same feedback intervention. Using stratified permuted block randomization, an independent person not involved in this study assigned eligible nursing homes to one of three study arms. Study arm allocation was determined by assigning computer-generated random numbers to facilities. Randomization was stratified by health region (Edmonton or Calgary Health Zones in Alberta, Fraser or Interior Health Regions in British Columbia) to account for regional policy differences that might influence structures, processes, or outcomes in participating facilities.
A regional project coordinator in each health region obtained additional written informed consent from facilities randomized to the basic and enhanced assisted feedback arms. These facilities were offered additional feedback (coordinators explained to managers that they would receive specific extra feedback as part of the intervention), but we blinded managers to the existence of additional study arms. Facilities in different study arms received different recruitment materials (information sheets and informed consents) and attended different types of workshops. Coordinators organized intervention workshops, invited managerial teams to workshops, and disseminated workshop materials to teams. Coordinators could not be blinded to study arm allocation in their region, but we blinded each coordinator to how facilities were allocated in all other regions. Coordinators were requested not to reveal to anyone how their facilities were allocated. To inform a coordinator about study arm allocation of facilities in their region, we used a secure online platform (https://www.igloosoftware.com/) that could only be accessed by the coordinator, the person who carried out randomization, and the system administrator.
People who delivered the intervention and monitored intervention fidelity were (a) a facilitator and a study investigator who attended every workshop and (b) additional investigators, trainees, decision makers, and study staff (varying by region and time of intervention). Workshop contents, activities, duration, materials, and format differed between the basic and enhanced assisted feedback groups (described below). The simple feedback group (our control or usual care arm) received only tailored feedback reports, delivered 2–3 months after the baseline data collection. We consider simple feedback ‘usual care’ because all TREC homes receive this kind of feedback after each wave of data collection [28-31], regardless of whether they participate in an intervention study or any other TREC activity. To deliver the correct intervention to each study group and to monitor the correct fidelity criteria, persons involved in these activities could not be blinded to study arm allocation. However, they were requested not to talk about facilities by name or share information that could identify a facility or its allocation status. None of these persons took part in data analyses. Before the workshops, each facility agreed not to share workshop tools with other managers or facilities during the study.
Data analyses were carried out by analysts not involved in intervention delivery or data collection. These analysts worked with de-identified data sources (surveys, interview and focus group transcripts) and data sets. A research assistant not involved in INFORM assigned a unique random ID to each participant, care unit, and facility in all data sources before processing, cleaning, and analysis. Random IDs were generated by the same person who randomized facilities. Only that person, the research assistant, and the TREC managing director had access to the de-identification list, and they were all required to keep this information confidential.
Interventions
The study had three arms (Figure 1). The audit component of this study corresponds to the baseline data collection for care units in all three study arms. We used baseline data on a care unit’s involvement of care aides in formal communications about resident care and fed these data back to care units. The control or usual care group received simple feedback only: a feedback report to each facility with information on resident and care staff outcomes. This feedback report was discussed in 4-hour face-to-face dissemination workshops. In addition to simple feedback, the two higher intensity groups received a more focused feedback report on care aides’ involvement in formal communications about resident care. Results were discussed in a 3-hour in-person goal setting workshop. Basic assisted feedback participants received two additional 1.5-hour web-based support workshops. Enhanced assisted feedback participants received two additional 3-hour in-person support workshops and had access to on-demand email and phone support. For the enhanced assisted feedback arm, the longer workshops and in-person contact with peers and the study team aimed to provide a more intense learning experience and thereby improve intervention effectiveness.
Goal Setting Theory
Our trial protocol contains a detailed description of the theoretical foundations of our intervention [26]. Briefly, goal setting theory suggests that audit and feedback interventions are more effective when participants set their own goals: participants who set goals and define strategies to achieve them will identify with the goals, perceive them as achievable, and work to achieve them [32]. Both short-term and long-term goals are required. Short-term goals break down the task and enhance self-efficacy and task persistence; long-term goals keep people accountable. Both performance and learning goals are also required: performance goals indicate what to achieve, while learning goals indicate how to make improvements [32]. Written action plans hold individuals accountable and serve as reminders of performance goals [20, 22]. Specifically, managers and their teams attending the basic and enhanced intervention workshops were supported to specify performance goals related to the improvements targeted on their care units (e.g. involving care aides in formal team communications about resident care in 80% of the formal team meetings on the care unit within the next 3 months, versus 10% at baseline). Further, managers specified learning goals related to their own learning (e.g. improving a manager’s ability to specify measurable performance goals and to routinely measure success in achieving them) and to their care team’s learning (e.g. increasing the ability and comfort of care aides to speak up in formal team communications about resident care, and increasing the team’s acceptance of care aide opinions as valuable contributions).
Dissemination Workshops
Sites in all three study arms received simple feedback in November 2015. Reports focused on a core set of actionable measures including care aides’ perception of their involvement in formal communications about resident care (formal interactions), data-based feedback on their care unit’s performance (evaluation), connections within their teams (social capital), and slack time. Managerial teams of each facility were then invited to a half-day face-to-face dissemination workshop where feedback reports were discussed. Workshops were led by an experienced professional facilitator, hired by the study team. After a senior researcher of the research team presented reports, managerial teams participated in small group discussions to interpret results, identify improvement areas, and think about improvement strategies. Workshops did not set specific goals but gave simple instructions on interpreting reports and planning improvement strategies.
Goal Setting Workshops
In June 2016, care unit managers and their teams in the basic and enhanced assisted feedback groups participated in a face-to-face goal setting workshop. We held separate workshops for basic and enhanced assisted feedback groups in each of the four health regions. Each unit received a package of goal setting workshop materials one week before the workshop: a feedback report on the care unit’s data (formal interactions, evaluation, social capital, slack time) and a goal setting workbook summarizing details of the INFORM study, defining key concepts, and outlining the goal setting approach. Managers were encouraged to bring care staff (care aides, nurses, allied health providers) to the workshops. Workshops used small group activities such as reflecting on data, establishing a series of specific and measurable learning and performance goals, and identifying measures and tools to track goal achievement. Participants generated an action plan and received instructions on how to track goal progress and how to report back at the support workshops.
Support Workshops
In November 2016, basic assisted feedback participants attended a 1.5-hour virtual support workshop via a web-based conference platform. Enhanced assisted feedback participants attended a 3-hour face-to-face support workshop. Managerial teams reported their progress in implementing goals and described their implementation strategies. They discussed challenges encountered and received support from the study team, regional decision makers, and their peers in addressing these challenges. A second support workshop with the same content was held in April 2017.
On-demand email and phone support
Enhanced assisted feedback teams also had access to on-demand email and phone support from the facilitator throughout the intervention period. The facilitator addressed questions and helped resolve challenges as managerial teams worked toward goal achievement.
Process evaluation
We comprehensively evaluated intervention fidelity, implementation, and participant experiences using a mixed methods approach. Methods and results are reported in a separate publication [submitted to Impl Sci as companion paper to this paper; add reference upon acceptance].
Outcomes
Our study outcomes were assessed using data collected in two waves of TREC’s longitudinal observational study [27] – 09/2014–05/2015 to assess baseline outcomes before the intervention and 01/2017–12/2017 to assess follow-up outcomes after the intervention (Figure 1). These included care staff use of best practices, quality of worklife (e.g. psychological empowerment, job satisfaction), organizational context, and characteristics of nursing homes and care units (appendix 1) – all of which were measured using the validated TREC survey [27]. Resident data were obtained from the Resident Assessment Instrument – Minimum Data Set 2.0 (RAI-MDS 2.0) [39]. Nursing homes in health regions participating in TREC are required to assess residents on admission and at least quarterly thereafter, and participating TREC homes submit their resident data to TREC on a quarterly basis. These data are used for national reporting.
Primary outcome
Our primary outcome was care aides’ self-reported involvement in formal team communications about resident care (formal interactions). Formal interactions is one of 10 concepts measured by the Alberta Context Tool, a comprehensively validated tool to assess modifiable features of care unit work environments (details in appendix 1) [40]. The Alberta Context Tool is embedded within the TREC care aide survey [27], a suite of validated survey instruments completed by computer-assisted structured personal interview. The formal interactions rating consists of four items (rated from 1=never to 5=almost always) asking care aides how often, in the last typical month, they participated in the following: team meetings about residents, family conferences, change-of-shift reports, and continuing education (conferences, courses) outside their nursing home. In our psychometric studies [40], we found that the most valid way to generate an overall score is count-based: recoding each item (1 and 2 to 0; 3 to 0.5; 4 and 5 to 1) and summing the recoded values (possible range: 0–4).
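The count-based scoring rule can be sketched as follows (an illustrative sketch; the function name and input validation are ours and are not part of the TREC survey tooling):

```python
def formal_interactions_score(item_ratings):
    """Count-based overall score for the four formal interactions items.

    Each item is rated 1 (never) to 5 (almost always) and recoded:
    1 or 2 -> 0; 3 -> 0.5; 4 or 5 -> 1. The overall score is the sum
    of the recoded values (possible range 0-4).
    """
    if len(item_ratings) != 4:
        raise ValueError("expected ratings for exactly four items")
    recode = {1: 0.0, 2: 0.0, 3: 0.5, 4: 1.0, 5: 1.0}
    return sum(recode[r] for r in item_ratings)
```

For example, a care aide rating the four items 1, 3, 4, and 5 would receive a score of 0 + 0.5 + 1 + 1 = 2.5.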
Secondary outcomes
Organizational
Using the Alberta Context Tool completed by care aides, we assessed evaluation (feedback of routine data to the unit), social capital, and slack time. Using the TREC unit survey completed by managers, we assessed care unit managerial teams’ response to major near misses and managers’ organizational citizenship behaviour. Using the TREC facility survey completed by Directors of Care, we assessed processes and practices in quality improvement activities. These instruments are described elsewhere [27] and in appendix 1.
Staff
Using the TREC care aide survey, we assessed care aides’ use of best practice, psychological empowerment, job satisfaction, and individual staff attributes (appendix 1).
Residents
Using the RAI-MDS 2.0 data, we assessed two quality indicators [41]: residents with worsening pain and residents with worsening behavioural symptoms (appendix 1). We chose these outcomes because they are two of the most practice-sensitive (i.e. modifiable by care staff) quality indicators [41], and because pain and responsive behaviours are considered among the most burdensome resident outcomes [42].
Statistical analyses
Sample size calculation
Full details of the sample size calculation are in our trial protocol [26]. Assumed effect sizes of the formal interactions score were β1=0.2 in the simple feedback group, β2=0.4 in the basic assisted feedback group, and β3=0.6 in the enhanced assisted feedback group. An adapted simulation-based approach [43] suggested that 12 facilities per study arm (with on average three units per facility) were required to detect the assumed effects with a statistical power of 0.90. To allow for attrition and effects smaller than the assumed ones, we invited all eligible units in the 67 eligible facilities in Alberta and British Columbia to participate in this study.
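The protocol [26] details the adapted simulation-based approach. The sketch below illustrates only the general idea of simulation-based power estimation for a nested cluster design; the two-arm simplification, all variance components, and the facility-mean t-test analysis are illustrative assumptions of ours, not the method or values used in the actual calculation:

```python
import random
import statistics

def simulate_power(n_sims=200, n_fac=12, units_per_fac=3, aides_per_unit=10,
                   effect=0.4, sd_fac=0.3, sd_unit=0.3, sd_resid=1.0, seed=1):
    """Illustrative power simulation for a two-arm cluster trial.

    Outcomes are normal with facility- and unit-level random effects.
    Each simulated trial is analysed with a t-test on facility means;
    power is the fraction of trials whose |t| exceeds the 5% critical
    value. All variance components here are illustrative assumptions.
    """
    rng = random.Random(seed)
    crit = 2.074  # two-sided 5% critical value, t with 2*12 - 2 = 22 df
    hits = 0
    for _ in range(n_sims):
        arm_fac_means = []
        for arm_effect in (0.0, effect):
            fac_means = []
            for _ in range(n_fac):
                fac_re = rng.gauss(0, sd_fac)
                vals = []
                for _ in range(units_per_fac):
                    unit_re = rng.gauss(0, sd_unit)
                    vals.extend(arm_effect + fac_re + unit_re +
                                rng.gauss(0, sd_resid)
                                for _ in range(aides_per_unit))
                fac_means.append(statistics.mean(vals))
            arm_fac_means.append(fac_means)
        a, b = arm_fac_means
        se = ((statistics.variance(a) + statistics.variance(b)) / n_fac) ** 0.5
        t = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(t) > crit:
            hits += 1
    return hits / n_sims
```

Running the function with `effect=0.0` recovers the nominal type I error rate (about 5%), while larger effects or more facilities per arm raise the estimated power.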
Statistical approach
We used SAS® 9.4 for all statistical analyses. Using descriptive statistics, we compared baseline characteristics of nursing homes, care units, participants in our first intervention workshop, and care aides working on participating units between study arms. To assess the effects of the interventions on our primary outcome (formal interactions) and secondary staff outcomes, we ran mixed effects regression models with random intercepts at the care unit and facility levels, and a random effect for care aides responding to our survey at both baseline and follow-up. We adjusted the models for the three stratification variables of the TREC facility sample (region, owner-operator model, and facility size); baseline values of the dependent variables; care aides’ sex, age, and first language (English yes/no); and care unit staffing (total care hours per resident day and percentage of total care hours per resident day provided by care aides). The intra-cluster correlation (ICC) of formal interactions scores within facilities and care units was calculated by dividing the cluster-level variance by the total variance (the sum of the residual variance and the unit-level and facility-level random intercept variances). For secondary resident outcomes (percent of residents on a care unit whose behaviour worsened and percent of residents on a care unit whose pain worsened), we ran mixed effects regression models with a facility-level random intercept. These models were adjusted for facility characteristics (region, owner-operator model, and facility size); baseline values of the dependent variables; and care unit staffing. We carried out an intention-to-treat analysis. A care unit was considered adherent with the intervention if at least one representative of the unit attended the goal setting workshop and at least one of the two support workshops.
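The ICC calculation from the variance components of a three-level model amounts to the following (an illustrative sketch; the function name and the split into facility- and unit-level ICCs are our own framing of the definition given in the text):

```python
def icc(var_facility, var_unit, var_residual):
    """Intra-cluster correlations from the variance components of a
    three-level mixed model (care aides within units within facilities).

    Returns (facility-level ICC, unit-level ICC). Following the text's
    definition, each ICC is the cluster-level variance divided by the
    total variance; care aides on the same unit also share a facility,
    so the unit-level ICC includes both variance components.
    """
    total = var_facility + var_unit + var_residual
    icc_facility = var_facility / total
    icc_unit = (var_facility + var_unit) / total
    return icc_facility, icc_unit
```

For example, with a facility variance of 0.1, a unit variance of 0.1, and a residual variance of 0.8, the facility-level ICC is 0.10 and the unit-level ICC is 0.20.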
Public and patient involvement
TREC is a program of integrated knowledge translation research [44]. Throughout all projects, TREC partners with researchers, trainees, policy makers, owner-operators, care staff, people in need of care, and their family/friend caregivers in all phases of the research process. In INFORM, these stakeholders were co-applicants on the research grant that funded the study, were team members of working groups and committees that carried out the study, and were involved in discussions on study results and their interpretation.