This study is part of a larger study evaluating the implementation of an EBP, the Triple P – Positive Parenting Program, in two health services catchment areas in the province of Quebec, Canada. Triple P entails a five-level integrated system of universal, selective, and indicated interventions whose intensity increases along with the needs of parents of 0- to 12-year-old children (40). There is significant scientific evidence supporting Triple P’s efficacy for increasing positive parenting practices and reducing emotional and behavioral problems in children (41–44). There is also some evidence that Triple P prevents child maltreatment (45,46). The present study focused on Selective Triple P (Level 2 – public seminars), Primary Care Triple P (Level 3 – individual coaching), Group Triple P (Level 4 – parent training), and Pathways Triple P (Level 5 – active skills training including cognitive reattribution components), delivered by trained practitioners (40). Service delivery was supported by a promotional campaign (Level 1) developed locally (47,48). In each community implementing Triple P, a team of community partners carefully planned the implementation process (49). These partners came from different sectors of activity (child care services, schools, non-governmental and governmental organizations). Managers in the partner organizations targeted practitioners to receive training in one or more levels of Triple P. To receive the proposed training, practitioners had to agree to participate in the study.
Data were collected among trained practitioners through a pre-implementation survey (prior to Triple P training) and a post-implementation survey (1–2 years later). Meanwhile, the practitioners were expected to deliver the various components of Triple P and monitor their Triple P interventions on an ongoing basis, with the support of the research team. This procedure was approved by the relevant research ethics board.
Several means were put in place to ensure optimal implementation of Triple P in the communities. First, the implementation was carefully prepared in accordance with the QIF (28). In particular, the needs and resources of the targeted communities were assessed, as well as their readiness to act in maltreatment prevention. In addition, Triple P was differentiated from other parenting support programs in use in Quebec, in order to identify possible linkages with those programs. Two local implementation coordinators, one from each community, were hired to act as resource persons during all phases of implementation. Their role included mobilizing other partners in the field and acting as a bridge between the research team and the partners.
A local implementation committee was formed in each of the communities, bringing together regional and local partners, i.e., representatives of government authorities (e.g., public health department, youth protection department), the territory’s local implementation coordinator, as well as managers or representatives of partner organizations. The mandate of these implementation committees was to plan the concerted implementation of Triple P in their territory.
During the active implementation phase of the program, the local implementation coordinators were mandated to provide supervision, to help the practitioners while promoting their self-regulation, and to help refer parents to the level most suited to their needs. The managers were briefed on their role in supporting practitioners, which included informing the implementation team members of the needs of their staff, providing time and tools to practitioners to become proficient in delivering the program, and working in collaboration with the other organizations to share resources and knowledge. Finally, the research team established procedures to facilitate the work of practitioners, for example, by providing them with an electronic tablet that they could use to show parents intervention materials (Triple P videos and tip sheets, for example) and by encouraging them to document their interventions using specially designed computerized monitoring tools. While the research team was more involved in the planning and coordination of the initiative at the beginning of the project, it took on more of a coaching role over time so that community partners could take ownership of the initiative and develop their collective capacity for implementation on their own.
Participants were 115 practitioners (93% females) trained in at least one level of Triple P in fall 2014 (n=94) or fall 2015 (n=21). Of these, 99 completed the posttest (retention rate: 86%). Participants’ characteristics are presented in Table 1. Posttest completers and non-completers were similar with regard to all sociodemographic variables, except the number of years of experience working with families, with completers having significantly more experience (M=14.04, SD=9.41) than non-completers (M=8.29, SD=5.33), t(26.7) = -3.35, p = .002.
Variables were assessed using four validated questionnaires completed at pretest and posttest. All measures were translated into French by the research team (except for the PCSC measure, which was translated by Triple P International) and contextualized to the implementation of Triple P when applicable. Internal consistency was calculated for each translated questionnaire used in the present study to verify the reliability of the measures. A sociodemographic questionnaire was included to collect background information on participants (sex, academic background, discipline, years of experience working with families, and type of organization).
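Internal consistency of the kind reported below (Cronbach’s alpha) can be computed directly from a respondents-by-items score matrix. The following is a minimal illustrative sketch with made-up data, not the study’s actual scoring code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 5 respondents x 3 items rated on a 5-point scale
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 2))
```

Alpha approaches 1 when items covary strongly relative to their individual variances, which is why it is reported per subscale rather than for a whole heterogeneous questionnaire.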
Attitudes towards EBPs. Participants’ attitudes towards EBPs were assessed using the Evidence-Based Practice Attitude Scale (EBPAS) (15). This questionnaire comprises 15 items rated on a Likert-type scale (1 = not at all to 5 = to a very great extent) and divided into four subscales: Appeal (extent to which EBPs are intuitively appealing to the practitioner); Requirements (extent to which the practitioner would adopt an EBP if his/her supervisor required it); Openness (general receptivity to new practices); and Divergence (perceived divergence between EBPs and current practice). With the exception of the Divergence subscale, higher scores indicate more favorable attitudes towards EBPs. Internal consistency was satisfactory in both Aarons’ (15) original validation study (Cronbach’s alphas for the subscales = .80, .90, .78, and .59, respectively) and the present study (.73, .93, .87, and .71).
Perceived training needs. This variable was assessed using the Training Needs subscale of the Organizational Readiness for Change measure (ORC) (50), comprising 8 items rated on a Likert-type scale (strongly disagree to strongly agree), with subscale scores ranging from 10 to 50. In the present study, the last item, relating to “using computerized client assessments,” was removed because it did not apply to the context. The remaining items assessed, for example, the extent to which practitioners felt they needed more training to increase client participation in treatment, monitor client progress, or improve client thinking and problem-solving skills. This subscale, conceptualized as a measure of motivational readiness for change, demonstrated good internal consistency in both Lehman et al.’s study (50) (Cronbach’s α = .84) and the present study (α = .87).
Self-efficacy. The Parent Consultation Skills Checklist (PCSC) (5), translated into French by Triple P International, was used to assess the practitioners’ level of confidence in their skills for working with parents reporting difficulties with their children. This measure, developed by Turner and Sanders (51), is specifically tailored to levels 2, 3, 4, and 5 of the Triple P program. Items refer to both content self-efficacy (e.g., teaching positive parenting principles to parents) and process self-efficacy (e.g., installing and using the audiovisual equipment required for the session) (25), and are rated on a Likert-type scale (1 = not at all confident to 7 = very confident). This instrument showed good internal consistency in both Turner et al.’s study (5) (Cronbach’s α = .96 to .97 for the different program levels) and the present study (α = .92, .96, .94, and .95, respectively). At pretest, the PCSC was completed just before training in each level of Triple P. When practitioners were trained in more than one level, only the score on their first completed pretest PCSC was used in the analyses. At posttest, practitioners completed a PCSC for each level of Triple P in which they had received training. A mean score for all the completed posttest PCSCs (ranging from 1 to 7) was computed and used in the analyses.
Perceived organizational capacity at pretest. This variable was assessed by computing an aggregated score for three subscales of the Factors Related to Program Implementation measure (FRPI) (36): Ideal Agency, Ideal Staff, and Ideal Champion. This procedure was justified given the high correlation found between these three subscales (r ranging from .51 to .80). Practitioners rated 24 Likert-type items assessing the extent to which various characteristics of the agency, staff, and supervisor would be a barrier or an asset to the implementation of Triple P (1 = significant barrier to 5 = significant asset). FRPI items cover different agency characteristics (e.g., perceived coherence of Triple P with organizational mandate, perceived quality of program coordination), staff characteristics (e.g., perceived level of motivation and competence, and communication between team members), and supervisor characteristics (e.g., perceived level of motivation, competence, availability and support). The aggregated score showed good internal consistency in the present study (α = .85).
Pretest surveys were completed a few days prior to Triple P training. Posttest surveys were sent to participants and collected in fall 2016, or earlier if the practitioner was going to be leaving the organization for any reason, such as maternity leave, prolonged sick leave or a change of assignment. To increase the response rate, follow-up calls were made to practitioners who did not return their questionnaire within the prescribed period.
Descriptive analyses of variable distributions revealed no violations of the assumptions underlying any of the planned analyses. A negligible amount of missing data was found for each dependent variable (3.5% on average). Consequently, procedures for handling missing data were deemed unnecessary (52). Analyses were conducted using SPSS and the SPSS macro PROCESS (53).
Using a bootstrapping method, six linear regressions were conducted to test whether perceived organizational capacity moderated the change in the dependent variables over time. The six dependent variables were the levels of change in the four attitude subscales of the EBPAS (Appeal, Requirements, Openness, and Divergence), the ORC Training Needs subscale, and the PCSC Self-efficacy measure; the moderator was the global FRPI score; and the independent variable was time (pretest, posttest). Figure 1 illustrates the moderation model tested.
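The structure of such a moderation model can be sketched with ordinary least squares on simulated data. Everything below is illustrative (made-up sample values and effect sizes, not study data), and the actual analyses were run in SPSS PROCESS rather than with this code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 99  # number of posttest completers in the study

# Hypothetical simulated data: change over time depends on the moderator (FRPI)
frpi = rng.normal(3.5, 0.5, n)   # perceived organizational capacity (1-5 scale)
pre = rng.normal(4.0, 1.0, n)    # pretest score on a dependent variable
post = pre + 0.3 + 0.8 * (frpi - frpi.mean()) + rng.normal(0, 0.5, n)

# Long format: one row per observation; time coded 0 = pretest, 1 = posttest
y = np.concatenate([pre, post])
time = np.concatenate([np.zeros(n), np.ones(n)])
w = np.tile(frpi - frpi.mean(), 2)  # mean-centered moderator

# Design matrix: intercept, time, moderator, time x moderator interaction
X = np.column_stack([np.ones(2 * n), time, w, time * w])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_time, b_frpi, b_inter = beta

# Percentile bootstrap CI for the interaction term, resampling participants
# so that each participant's pretest/posttest pair stays together
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    rows = np.concatenate([idx, idx + n])
    b, *_ = np.linalg.lstsq(X[rows], y[rows], rcond=None)
    boot.append(b[3])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(b_inter, 2), (round(lo, 2), round(hi, 2)))
```

The interaction coefficient (`b_inter`) captures how much the pretest-to-posttest change shifts per unit of the moderator; a bootstrap interval excluding zero corresponds to a significant moderation effect.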
The Johnson-Neyman procedure, probing interactions with continuous moderators, was performed to determine regions of significance of the interaction effect. This procedure indicates the value of the moderator (i.e., the specific score of the FRPI) at which the interaction effect becomes significant. The advantage of this method is that it provides a more complete picture of the interaction effect and does not require an arbitrary dichotomization of the moderating variable (54).
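In the standard formulation of this procedure (not the authors’ own code), for a model y = b0 + b1·time + b2·w + b3·time·w, the Johnson-Neyman points are the moderator values w at which the conditional effect of time, b1 + b3·w, is exactly marginally significant; they solve a quadratic equation in w. A sketch with purely hypothetical coefficient estimates and covariance entries:

```python
import numpy as np

# Hypothetical estimates (not study results) for y = b0 + b1*time + b2*w + b3*time*w
b1, b3 = 0.30, 0.80                  # effect of time; time x moderator interaction
v11, v13, v33 = 0.040, 0.002, 0.025  # entries of the coefficient covariance matrix
t_crit = 1.97                        # two-tailed critical t for the model's df

# Conditional effect of time at moderator value w: theta(w) = b1 + b3*w,
# with Var(theta) = v11 + 2*w*v13 + w**2*v33.  Johnson-Neyman points solve
# theta(w)**2 = t_crit**2 * Var(theta), a quadratic a*w**2 + b*w + c = 0.
a = b3**2 - t_crit**2 * v33
b = 2 * (b1 * b3 - t_crit**2 * v13)
c = b1**2 - t_crit**2 * v11
roots = sorted(np.roots([a, b, c]).real)
print(roots)
```

With these made-up numbers the two roots bracket a region of the (centered) moderator where the effect of time is non-significant; outside it, the effect is significant, which is the “region of significance” the procedure reports without dichotomizing the moderator.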
Regression analyses were conducted to test for the presence of a time effect when the interaction effect was not significant. These analyses controlled for participant characteristics (e.g., level of education, prior experience, type of organization) previously associated in the literature with participants’ attitudes, training needs, or self-efficacy (12,55,56). The analyses also controlled for the length of time between pretest and posttest, since it varied between one and two years depending on the participant.
Preliminary analyses involving sociodemographic data showed that only two control variables were significant predictors in some tested models: practitioners’ prior experience working with families (in number of years) and community membership (i.e., working in one health catchment area or the other). Including these variables as covariates did not change the direction, magnitude or significance of the results. More parsimonious models excluding these covariates are thus presented below.