Study design
This was a single-arm, prospective pilot study lasting one year.
Setting & Sample
The SCOPE pilot was conducted with a random sample of 7 of the 16 nursing homes in Winnipeg, Manitoba, that are enrolled in the Translating Research in Elder Care (TREC) program. TREC is a multilevel, longitudinal program of applied health services research designed to improve the quality of care and quality of life for nursing home residents, as well as the quality of work life for their care staff [33]. TREC applies these constructs at the level of the clinical microsystem (care units), where quality is created [34, 35]. The overall TREC cohort was created using a stratified (owner model, size, region) random sample [33].
The SCOPE Teaching and Coaching Strategies
The SCOPE intervention is based on a modified Institute for Healthcare Improvement (IHI) Breakthrough Series Collaborative model [36]. It was also informed by knowledge translation theory, specifically the role that facilitation plays in implementation success [37, 38]. Details of these coaching strategies are provided elsewhere [31, 39]; each component is shown in Figure 1. Components include (1) a ‘Getting Started’ evidence kit with clinical information and improvement strategies specific to one of three clinical areas (reducing pain, improving mobility, reducing dementia-related responsive behaviours) selected by teams; (2) three 2-day learning congresses designed to train SCOPE teams in basic QI approaches and, importantly, to provide them with peer-to-peer networking and learning opportunities (with teams from other units and sites); (3) a quality advisor who helped to design and implement the learning congresses, and who supported teams regularly between these sessions (in-person visits, telephone calls); (4) a quality coordinator who provided oversight to the quality advisor, and who led virtual and in-person discussions to help unit and facility managers support front-line QI teams; and (5) a celebratory congress held at the end of the pilot.
The quality advisor was the main liaison with each team. Duties included: 1) meeting with each team at the beginning of SCOPE to review the ‘Getting Started’ information kit; 2) working with the quality coordinator and research team to prepare and facilitate learning congresses; 3) conducting face-to-face meetings with each team at least monthly to help them enact their Plan-Do-Study-Act (PDSA) plans and brainstorm solutions to challenges encountered; 4) being available for additional team consultation (phone, email) as needed; and 5) keeping a diary of team interactions and progress.
Participants and Study Procedures
Executive Directors from each facility received a written invitation to participate in the pilot, followed by an in-person meeting to answer questions, explain nursing home responsibilities, and discuss available support. Sites were selected randomly with replacement; one site declined to participate, citing insufficient staff levels to engage in research. No sites were lost to follow-up during the research.
Following written consent to participate in the pilot, the Executive Director identified a senior sponsor (usually the Director of Care) to help promote SCOPE to other management staff and to remove implementation barriers throughout the pilot as needed. This individual identified, at their discretion, one unit from their facility to participate in the pilot, and selected a unit-level team sponsor (usually a unit-level clinical nurse manager) who was responsible for supporting day-to-day project activities. Senior and team sponsors collaborated to select a front-line team of 5-7 members. At least two team members were care aides, one of whom served as team lead; other care staff (e.g., social workers) were selected as needed. Each team chose their intervention to focus on reducing pain, improving mobility, or reducing dementia-related responsive behaviours. These three areas were selected based on a ranking exercise, completed by care aides before the pilot, using 4 criteria: (1) their perceived importance to care aides, (2) the ability to measure outcomes in these areas using the Resident Assessment Instrument-Minimum Data Set (RAI-MDS 2.0), (3) a distribution in the measures that demonstrated there was room for improvement, and (4) their modifiability [40, 41].
The congresses occurred three months apart (Figure 1); the agenda for each learning congress is provided in Appendix 1. Learning congress 1 coached teams to develop effective QI aim statements, while learning congresses 2 and 3 focused, respectively, on measurement and on strategies to spread effective QI interventions within each team’s unit. Congresses also helped teams problem solve and share solutions to challenges they encountered (e.g., getting buy-in from peers), provided knowledge sharing and socialization opportunities (e.g., impromptu networking sessions and team presentations sharing PDSA experiences), and provided dedicated planning time to integrate lessons learned into teams’ upcoming daily care routines (action periods). During the celebratory congress, teams celebrated their achievements, discussed lessons learned, and considered next steps. Examples of congress activities included storyboard sessions and team presentations (designed to help teams share their successes and opportunities to improve); technical training (creating aim statements, conducting PDSA cycles) using improv and simulation techniques, and interactive “games” designed to promote learning in specific quality improvement areas; and time dedicated to team reflection and planning.
Ethics
Approval to conduct the research was provided by the University of Manitoba Health Research Ethics Committee (reference number H2015:045). Each nursing home received $3000 to help backfill participating team members who attended learning congresses. This study was funded by the TREC program (grant number PS 148582).
Measures
Treatment Enactment
Enactment is an element of treatment fidelity that measures the extent to which people actually implement a specific intervention skill, and differs from what is taught (treatment delivery), what is learned (treatment receipt), and the extent of its effect (treatment efficacy) [42]. Enactment is one of the most challenging aspects of treatment fidelity to measure [42]. Traditional approaches to measuring it include the use of questionnaires and self-reports, structured interviews, and activity logs [42].
Each team was asked to complete a self-assessment form every two months during the pilot (Appendix 2). Teams were asked to use this form to (1) create and refine their QI aim statement; (2) report how well they were able to implement QI interventions using PDSA methods (e.g., starting with one or two residents, and involving other residents and/or staff depending on their success); and (3) document how they used measurement strategies and data to guide team decision making (e.g., to assess whether they were making progress towards their aims).
Measures – care aides
Workgroup cohesion is the “degree to which an individual believes that the members of his or her work group are attracted to each other, are willing to work together, and are committed to the completion of the tasks and goals of the work group” [44]. We measured workgroup cohesion using 8 items adapted to align with the pilot (e.g., “We have a lot of team spirit among team members”; “We know that we can depend on each other”; “We stand up for each other”).
Workgroup communication is the “degree to which information is transmitted among the members of the work group” [44]. It was measured using 4 items adapted to align with the pilot (e.g., “We frequently discuss resident care assignments with each other”; “We can share ideas and information”).
Each of these measures was scored on a seven-point Likert scale ranging from ‘strongly disagree’ to ‘strongly agree’; item scores were averaged to provide an overall score ranging from 1 to 7, with the latter representing strong agreement about team cohesion/communication. A score of 4 on these scales identifies groups with neutral agreement about group cohesion and communication.
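To make the scoring concrete, the following minimal sketch shows how item responses are averaged into an overall scale score; the data and variable names are hypothetical, not drawn from the study instruments.

```python
# Minimal sketch of the scale scoring described above (hypothetical data).
# Each care aide answers Likert items coded 1 ('strongly disagree') to
# 7 ('strongly agree'); the overall score is the mean of the item scores.

def scale_score(item_responses):
    """Average item responses into an overall 1-7 scale score."""
    return sum(item_responses) / len(item_responses)

cohesion_items = [6, 7, 5, 6, 6, 7, 5, 6]   # 8 workgroup-cohesion items
communication_items = [5, 4, 6, 5]          # 4 workgroup-communication items

print(round(scale_score(cohesion_items), 2))       # 6.0 -> strong agreement
print(round(scale_score(communication_items), 2))  # 5.0 -> moderate agreement
```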
Measures – residents
Quality indicators were assessed using RAI-MDS 2.0 [45], focusing on the percentage of people who showed improvements in mobility, the percentage of people whose responsive behavioral symptoms improved, and the percentage of people with pain. Resident mobility was assessed using the third generation [46] RAI-MDS 2.0 quality indicator “MOB1a” (the percentage of residents whose ability to locomote on the unit improved). This indicator excludes residents who are comatose, have six or fewer months to live, and/or who were independently mobile during their previous RAI-MDS 2.0 assessment [45]. The quality indicator entitled “BEHI4” was used to identify the percentage of residents on each unit whose behavioral symptoms (i.e., wandering, verbally abusive, physically abusive, socially inappropriate or disruptive behavior) improved from the previous RAI-MDS 2.0 assessment [45]. This indicator excludes residents who are comatose or who had missing behavioral scores in their previous assessment. Resident pain was measured using the RAI-MDS 2.0 pain scale [45]. This quality indicator assesses the percentage of residents with any amount of pain in the last seven days, excluding those with missing or conflicting (no pain frequency but with some degree of intensity) item responses.
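To illustrate the general logic of such an indicator, the sketch below computes a unit-level percentage of residents whose mobility improved, applying the exclusions described above to a simplified resident record; the field names are hypothetical and do not reproduce the exact RAI-MDS 2.0 specification.

```python
# Hedged sketch of a unit-level quality indicator: the percentage of
# eligible residents whose locomotion improved since the previous
# RAI-MDS 2.0 assessment. Field names are illustrative only.

def mobility_improvement_rate(residents):
    """Percent of eligible residents whose locomotion score improved."""
    eligible = [
        r for r in residents
        if not r["comatose"]
        and not r["end_of_life"]             # six or fewer months to live
        and not r["previously_independent"]  # already independently mobile
    ]
    if not eligible:
        return None
    improved = sum(1 for r in eligible if r["locomotion_improved"])
    return 100 * improved / len(eligible)

unit = [
    {"comatose": False, "end_of_life": False,
     "previously_independent": False, "locomotion_improved": True},
    {"comatose": False, "end_of_life": False,
     "previously_independent": True, "locomotion_improved": False},
    {"comatose": False, "end_of_life": False,
     "previously_independent": False, "locomotion_improved": False},
]
print(mobility_improvement_rate(unit))  # 50.0: 1 of 2 eligible residents
```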
Data analysis
Treatment enactment
Each component of treatment enactment was scored using a 5-point scale ranging from poor (1) to excellent (5) (Table 1). Aim statements were scored by the extent to which they met the SMART criteria of being Specific, Measurable, Achievable, Relevant, and Timely [47]. Teams’ ability to plan and implement their intervention using PDSA cycles was scored by the degree to which their reported plans aligned with their aim statements, and by the extent to which they reported spreading their improvement strategies to involve other residents and/or staff within their unit. Teams were also scored on the extent to which they documented using measurement strategies and data to guide intervention revisions and related decisions. Two authors (MD, LG) independently rated each team’s treatment enactment using the bi-monthly data reported during the pilot. Scoring discrepancies were resolved through iterative discussions.
Team cohesion and communication
Descriptive measures of workgroup cohesion and communication are shown for months 1, 7, and 12 of the pilot, after verifying that these results were equivalent to the bi-monthly scores.
Table 1 Scoring System used to Rate Teams’ Level of Treatment Enactment During SCOPE. Scores are based on teams’ self-reported progression throughout the pilot.

| Treatment Enactment Category | Excellent (5) | Adequate (3) | Poor (1) |
| --- | --- | --- | --- |
| Creating Actionable Aim Statements^a | The team developed an aim statement that reflects 4 of 5 of the SMART components, including the ‘specific’ and ‘measurable’ categories. | The team developed an aim statement that reflects up to 3 of the SMART components. | The team’s aim statement did not reflect any of the SMART components. |
| Intervention Progression using Plan-Do-Study-Act (PDSA) Cycles | (1) The team planned and implemented their intervention in a way that aligned with their aim statement, AND (2) reported using PDSA cycles to spread it to involve other residents and/or staff on their unit. | The team planned and implemented their intervention, but it (1) did not align clearly with their aim statement, OR (2) was only conducted on a limited number of residents and/or staff on the unit. | The team provided no evidence of implementing their intervention or of using PDSA cycles to promote change. |
| Use of Measurement to Guide Decision Making | The team included specific text documenting how measurement and data were used to guide improvement decisions in successive PDSA cycles. | The team made vague reference to measurement tools and/or strategies used to guide decision making in successive PDSA cycles. | The team did not report how measurement and data were used to guide decision making. |

^a Team aim statements had to: include operational terms (e.g., define responsive behavior) (Specific); contain a target goal (e.g., identify the degree of improvement sought) (Measurable); be realistic (e.g., initially focus on a smaller number of residents) and/or show progression throughout the pilot (Achievable); include information about how (e.g., by creating toolkits to support implementation) or when (e.g., during mealtime) the intervention would happen (Relevant); and include a reference point/date by which intervention success would be measured (Timely).
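As an illustration, the aim-statement row of Table 1 can be restated as a simple scoring rule. This is our paraphrase of the rubric’s 5/3/1 anchor points, not the authors’ actual rating procedure; the function name is hypothetical, and the intermediate scores (2, 4) are omitted because Table 1 anchors only the 5, 3, and 1 levels.

```python
# Hedged sketch of the aim-statement rubric in Table 1. Ratings were made
# by two authors reading teams' self-assessments; this simply restates the
# anchor points of the 5/3/1 scale in terms of the SMART components met.

def aim_statement_score(components_met):
    """Score an aim statement given the set of SMART components it meets."""
    n = len(components_met)
    # Excellent (5): 4 of 5 components, including both 'specific'
    # and 'measurable'.
    if n >= 4 and {"specific", "measurable"} <= components_met:
        return 5
    # Poor (1): none of the SMART components present.
    if n == 0:
        return 1
    # Adequate (3): otherwise (Table 1: 'up to 3 of the SMART components').
    return 3

print(aim_statement_score({"specific", "measurable",
                           "achievable", "timely"}))  # 5
print(aim_statement_score({"specific", "relevant"}))  # 3
print(aim_statement_score(set()))                     # 1
```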
Quality indicators
RAI-MDS 2.0 quality indicators were calculated at the unit level from quarterly data collected during the pilot, using statistical process control (SPC) methods [48]. Data were not normally distributed; the following SPC zones were therefore created using pre-SCOPE (January 2013 to December 2016) data: a) zone -3 = 1st-5th percentile; b) zone -2 = 5th-34th percentile; c) zone -1 = 34th-50th percentile; d) zone +1 = 50th-66th percentile; e) zone +2 = 66th-95th percentile; f) zone +3 = 95th-99th percentile. Following the Western Electric SPC rules [49], non-random variation was detected if (a) one or more data points during the SCOPE pilot fell beyond zone 3 of the pre-SCOPE results, (b) two of three successive data points fell beyond zone 2, or (c) four of five successive data points fell beyond zone 1.
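A minimal sketch of this zoning and signal-detection logic is shown below, assuming quarterly indicator values as plain numbers. The percentile cut-points follow the zone definitions above, while the function name and data are illustrative rather than the authors’ actual code.

```python
# Minimal sketch of the percentile-based SPC zones and the Western
# Electric signal checks described above. Data are hypothetical.
import numpy as np

def detect_signals(baseline, pilot):
    """Flag non-random variation in pilot data against baseline zones."""
    # Percentile cut-points bounding zones 1, 2, and 3 on each side.
    p1, p5, p34, p66, p95, p99 = np.percentile(
        baseline, [1, 5, 34, 66, 95, 99])

    beyond3 = [x < p1 or x > p99 for x in pilot]   # beyond zone -3/+3
    beyond2 = [x < p5 or x > p95 for x in pilot]   # beyond zone -2/+2
    beyond1 = [x < p34 or x > p66 for x in pilot]  # beyond zone -1/+1

    # Rule (a): one or more points beyond zone 3.
    if any(beyond3):
        return True
    # Rule (b): two of three successive points beyond zone 2.
    if any(sum(beyond2[i:i + 3]) >= 2 for i in range(len(pilot) - 2)):
        return True
    # Rule (c): four of five successive points beyond zone 1.
    if any(sum(beyond1[i:i + 5]) >= 4 for i in range(len(pilot) - 4)):
        return True
    return False

# Hypothetical pre-SCOPE quarterly indicator values and pilot values.
baseline = [18, 20, 19, 22, 21, 17, 20, 23, 19, 21, 18, 20, 22, 19, 21, 20]
pilot = [14, 13, 15, 12]
print(detect_signals(baseline, pilot))  # True: points fall beyond zone 3
```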