Participants
Participants included a national sample of N = 75 clinicians who provide mental health interventions to students on their school campuses. Participants were primarily female (n = 68, 91%) and White (n = 55, 73%) and worked in elementary, middle, and high schools. The most common role was mental health counselor, followed by school social worker, school psychologist, school counselor, and other professional roles. Demographic and professional characteristics of participating clinicians are presented in Table 1.
Procedure
Recruitment
Participants were recruited via numerous professional networks, listservs, and social media. Inclusion criteria were kept minimal to enhance generalizability: participants were required to (1) routinely provide individual mental health interventions or therapy and (2) spend ≥50% of their time delivering services in schools. These criteria were intended to ensure that the sample was representative of the clinicians who would ultimately access the training and consultation supports when later disseminated at scale. The study team conducted informed consent meetings via phone with prospective participants and obtained written or verbal consent consistent with procedures approved by the University of Washington institutional review board. Recruitment lasted approximately six weeks to achieve the desired sample size.
Randomization
All participants were randomly assigned to either the BOLT+PTC condition or a services-as-usual control condition (see below). Clinicians in the BOLT+PTC condition (n = 37) were further randomized to 2 (n = 14), 4 (n = 10), or 8 (n = 13) weeks of consultation. Clinicians in the control condition completed study measures only while continuing to provide services as usual. See Additional File 1 for the study CONSORT diagram.
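To make the two-stage allocation concrete, the following is a minimal R sketch of the assignment scheme described above; the seed and object names are illustrative assumptions, not the study's actual randomization code.

```r
# Two-stage random assignment: condition first, then consultation dose.
# Illustrative only; seed and names are assumptions, not study code.
set.seed(2024)

n <- 75
# Stage 1: 37 clinicians to BOLT+PTC, 38 to services-as-usual control
condition <- sample(rep(c("BOLT+PTC", "Control"), times = c(37, 38)))

# Stage 2: BOLT+PTC clinicians to 2, 4, or 8 weeks of consultation
dose <- rep(NA_integer_, n)
dose[condition == "BOLT+PTC"] <- sample(rep(c(2L, 4L, 8L), times = c(14, 10, 13)))

table(condition)
table(dose, useNA = "ifany")
```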
Data collection
All data were self-reported by participants and collected via online surveys. After enrolling in the study, all participants completed pre-training measures of their demographic, professional, and caseload characteristics and their MBC knowledge, skill, attitudes, and use. MBC measures were then collected repeatedly over the 32 weeks following study enrollment, on the schedules detailed under Measures below. All participants received incentives for the time spent completing assessments throughout the study.
Training and consultation
The online training and post-training consultation strategies were developed via an iterative human-centered design process intended to ensure their efficiency and contextual fit [60,61].
Online training. After completing pre-training measures, participants assigned to any BOLT+PTC condition were asked to complete the online training within 2 weeks. Training included a series of interactive modules addressing the following content: (1) Utility of MBC in school mental health; (2) Administration and interpretation of measures; (3) Delivery of collaborative feedback; (4) Treatment goal-setting and prioritization of monitoring targets; (5) Selecting and using standardized assessments; (6) Selecting and using individualized assessments; (7) Assessment-driven clinical decision-making; and (8) Strategies to support continued use. The interactive online training modules were accompanied by a variety of support materials (e.g., tools to help integrate MBC into clinicians’ workflow, job aids for introducing assessments and providing feedback on assessment results, and a reference guide to commonly used youth measures) delivered via an online learning management system.
Consultation. The consultation model consisted of (1) one-hour, small-group (3-5 clinicians) live consultation sessions led by an expert MBC consultant and (2) asynchronous message-board discussions (hosted on the learning management system and moderated by the same consultant). Regardless of consultation dosage, live consultation calls followed a standard sequence: (a) introduction and orientation to the session; (b) brief case presentations, including MBC strategies used; (c) group discussion of appropriate next MBC steps for the case, including alternative therapeutic approaches or strategies if MBC indicated that a change in treatment target or intervention strategy was needed; (d) expert consultant recommendations (as appropriate); and (e) wrap-up, homework assignments, and additional resources. The asynchronous message boards provided a central location where clinicians reported on their experiences completing homework. All participants in the BOLT+PTC condition (n = 37) posted more than once on the discussion boards.
Services as usual
Typical education-sector mental health services include a diverse array of assessment strategies, often featuring inconsistent use of formal assessment and monitoring measures [50,56]. Clinicians in this condition completed study assessments only.
Measures
Clinician demographic, professional, and caseload characteristics
Clinician demographic, professional, and caseload characteristics were collected using a self-report questionnaire developed by the study team and informed by those used in prior school-based implementation research (e.g., [62]). Participants completed this questionnaire upon study enrollment.
MBC Knowledge Questionnaire (MBCKQ)
Modeled on the Knowledge of Evidence-Based Services Questionnaire [63], the MBCKQ was designed to assess factual and procedural knowledge about MBC. The 28-item, multiple-choice MBCKQ was iteratively developed based on the key content and learning objectives of the MBC training modules and was administered at baseline and at weeks 2, 4, 6, 8, 10, 16, 20, 24, 28, and 32.
MBC skill
Clinicians responded to 10 Likert-style items assessing MBC skills, including selection and administration of measures, progress monitoring, treatment integration/modification based on results, and feedback to clients. Responses range from 1 (“Minimal”) to 5 (“Advanced”). This scale has previously demonstrated good internal consistency (α = .85) when used with school-based clinicians [50]; in the current sample, α = .90. It was administered at baseline and at weeks 2, 4, 6, 8, 10, 16, 20, 24, 28, and 32.
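For illustration, internal consistency of the kind reported here can be computed in R with the psych package; the simulated item matrix and column names below are placeholders, not the study's actual data.

```r
# Cronbach's alpha for a 10-item Likert scale (simulated placeholder data)
library(psych)

set.seed(1)
skill_items <- as.data.frame(matrix(sample(1:5, 75 * 10, replace = TRUE),
                                    ncol = 10,
                                    dimnames = list(NULL, paste0("skill_", 1:10))))

psych::alpha(skill_items)  # raw and standardized alpha plus item-level statistics
```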
MBC attitudes
The Monitoring and Feedback Attitudes Scale (MFA) [64] was used to assess clinician attitudes toward ongoing assessment of mental health problems and the provision of client feedback (e.g., “negative feedback to clients would decrease motivation/engagement in treatment”). Responses range from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). The MFA has two subscales: (1) Benefit (i.e., facilitating collaboration with clients) and (2) Harm (i.e., harm to the therapeutic alliance, misuse by administrators). In the current sample, the subscales demonstrated strong internal consistency (α = .91 and .88, respectively). The MFA was administered at baseline and at weeks 2, 4, 6, 8, 10, 16, 20, 24, 28, and 32.
MBC practices
Clinician use of MBC practices was measured with the Current Assessment Practice Evaluation – Revised (CAPER) [46], a measure of MBC penetration. The CAPER is a 7-item instrument on which clinicians self-report their use of assessments in clinical practice during the previous month and previous week, indicating the percentage of their caseload with whom they engaged in each of seven assessment activities. Response options for each activity are on a 4-point scale (1 = “None [0%],” 2 = “Some [1-39%],” 3 = “Half [40-60%],” 4 = “Most [61-100%]”). The CAPER has three subscales: (1) Standardized assessments (e.g., % of caseload administered a standardized assessment during the last week); (2) Individualized assessments (e.g., % of caseload whose individualized outcomes were systematically tracked last week); and (3) Treatment modification (e.g., % of clients whose overall treatment plan was altered based on assessment data during the last week). Previous versions have been found to be sensitive to training [42]. In the current study, the CAPER demonstrated good internal consistency across its three subscales (α = .85, .94, and .87, respectively). Clinicians completed the CAPER every week for 32 weeks, including baseline; total scores for each subscale served as implementation outcomes.
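To make the scoring concrete, here is a minimal R sketch of how weekly CAPER responses could be rolled up into the three subscale totals; the item-to-subscale mapping shown is an assumption for demonstration and should be taken from the published measure [46].

```r
# Illustrative CAPER scoring; the 7-item-to-subscale mapping below is an
# assumption for demonstration, not the published scoring key.
score_caper <- function(items) {
  stopifnot(length(items) == 7, all(items %in% 1:4))  # 4-point response scale
  c(standardized   = sum(items[1:3]),   # assumed: items 1-3
    individualized = sum(items[4:5]),   # assumed: items 4-5
    treatment_mod  = sum(items[6:7]))   # assumed: items 6-7
}

score_caper(c(2, 3, 1, 4, 2, 3, 1))
```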
Analyses
The main goal of the current study was to test whether the BOLT+PTC strategies led to improvements in MBC implementation mechanisms and outcomes relative to a no-training, services-as-usual condition. The specific MBC practices measured with the CAPER were standardized assessment use, individualized assessment use, and treatment modification informed by collected assessment data. Our second goal was to test whether the dose of PTC was differentially related to implementation outcomes (i.e., MBC practices). Finally, we tested the main effects of training and consultation dose on consultation mechanisms (i.e., MBC knowledge, attitudes, and skill).
We used R [65] for all analyses and tested our main hypotheses using multilevel models (MLMs) with the package ‘nlme’ [66]. MLMs allow for the analysis of clustered data, such as repeated observations of clinicians over time, and accommodate missing data at the observation level. Because there were 32 weeks of CAPER data, we were able to compare multiple polynomial forms of change over time. We centered time such that the intercept reflected the end of consultation, regardless of consultation dosage. For instance, for those with 2 weeks of PTC the intercept was centered at Week 4, and for those with 4 weeks of PTC it was centered at Week 6. The main effects of intervention therefore reflect differences across groups in levels of the outcomes at the end of consultation, or after the first two weeks of observations for the no-consultation control group.
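The centering and the baseline growth model can be sketched with nlme as follows; the simulated long-format data and variable names are stand-ins for the study data, with the control group's intercept placed at Week 2 per the description above.

```r
library(nlme)

# Simulated long-format stand-in: one row per clinician-week (names assumed)
set.seed(42)
dat <- expand.grid(clinician = factor(1:75), week = 1:32)
id <- as.integer(dat$clinician)
dat$end_week <- ifelse(id <= 38, 2,        # control: first two weeks of observation
                ifelse(id <= 52, 4,        # 2 weeks of PTC -> Week 4
                ifelse(id <= 62, 6, 10)))  # 4 weeks -> Week 6; 8 weeks -> Week 10
dat$time_c <- dat$week - dat$end_week      # 0 = end of consultation
dat$caper_std <- 6 + 0.05 * dat$time_c + rnorm(nrow(dat))  # fake outcome

fit <- lme(caper_std ~ time_c,             # fixed linear effect of centered time
           random = ~ time_c | clinician,  # random intercepts and linear slopes only
           data = dat, method = "ML",      # ML for likelihood-based model comparison
           na.action = na.omit)            # observation-level missingness allowed
summary(fit)
```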
We compared polynomial (quadratic and cubic) effects to a piecewise model that estimated separate trajectories of change during the consultation period and in the post-consultation period. Models were chosen based on fit; we deemed a given model “better” fitting only when all indices (AIC, BIC, and the -2LL likelihood-ratio test) agreed, to avoid capitalizing on chance. Because we had a relatively small sample size, we estimated only random intercepts and linear slopes; estimating random quadratic slopes produced model convergence problems.
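Continuing the sketch above, the competing forms of change could be compared as below, preferring a model only when AIC, BIC, and the likelihood-ratio test all point the same way.

```r
# Polynomial alternatives (nested in each other, so anova() yields -2LL tests)
fit_quad  <- lme(caper_std ~ time_c + I(time_c^2),
                 random = ~ time_c | clinician,
                 data = dat, method = "ML", na.action = na.omit)
fit_cubic <- lme(caper_std ~ time_c + I(time_c^2) + I(time_c^3),
                 random = ~ time_c | clinician,
                 data = dat, method = "ML", na.action = na.omit)

# Piecewise alternative: separate slopes during (time_c <= 0) and after consultation
dat$slope_during <- pmin(dat$time_c, 0)
dat$slope_post   <- pmax(dat$time_c, 0)
fit_piece <- lme(caper_std ~ slope_during + slope_post,
                 random = ~ time_c | clinician,
                 data = dat, method = "ML", na.action = na.omit)

anova(fit, fit_quad, fit_cubic)  # AIC, BIC, and likelihood-ratio tests
AIC(fit_piece); BIC(fit_piece)   # piecewise is non-nested: compare on AIC/BIC
```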
We used three orthogonal contrast codes to compare the effects of intervention across conditions. The first compared BOLT+PTC to control. The second compared receiving PTC for 4 or 8 weeks to receiving PTC for 2 weeks, and the third compared receiving 4 vs. 8 weeks of PTC.
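One conventional way to implement these three contrasts is shown below, continuing the sketch; the specific weights are a standard orthogonal coding (each pair of columns has a zero dot product), not necessarily the study's exact values.

```r
# Four groups derived from the simulated end_week: control and 2-, 4-, 8-week PTC
dat$group <- factor(ifelse(dat$end_week == 2, "control",
                           paste0("ptc", dat$end_week - 2, "wk")),
                    levels = c("control", "ptc2wk", "ptc4wk", "ptc8wk"))

# Orthogonal contrast weights, one column per planned comparison
contrasts(dat$group) <- cbind(
  bolt_vs_control = c(-3,  1,  1,  1),  # BOLT+PTC (any dose) vs. control
  long_vs_2wk     = c( 0, -2,  1,  1),  # 4 or 8 weeks vs. 2 weeks of PTC
  wk4_vs_wk8      = c( 0,  0, -1,  1)   # 4 vs. 8 weeks of PTC (8-week group coded +1)
)

# Group effects at time_c = 0, i.e., at the end of consultation
fit_grp <- lme(caper_std ~ time_c * group,
               random = ~ time_c | clinician,
               data = dat, method = "ML", na.action = na.omit)
summary(fit_grp)$tTable
```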