Participants
Study participants (N=52) were drawn from a national sample of school mental health stakeholders: 1) school-based providers with experience providing and/or supervising mental health interventions in schools (N=31); and 2) researchers with experience partnering with schools or districts to implement EBPs (N=21). Providers were sampled from the National School Mental Health Census, and researchers were sampled from an established list of 138 school mental health researchers maintained by the National Center for School Mental Health. All participants were US-based and worked in 23 states (AZ, AR, CA, CO, CT, FL, GA, IL, IN, LA, MD, MA, MI, MN, NE, NH, NC, OH, OR, PA, TX, VA, and WA).
Demographic, professional, and urbanicity characteristics of participants are presented in Table 1. School providers identified primarily as school psychologists (N = 6, 19%), school social workers (N = 5, 16%), or school counselors (N = 5, 16%). Other provider roles included school psychology supervisor (N = 2, 7%), director of related services/special education/student support (N = 2, 7%), counselor (community- or hospital-employed; N = 1, 3%), mental health agency administrator (N = 1, 3%), or other position (N = 9, 29%). School-based providers worked in 18 states representing all regions of the United States; researchers worked in 14 states and, at the time of data collection or in the past, had worked with school partners in 43 states, the District of Columbia, Guam, the US Virgin Islands, and other US territories. Most school-based providers indicated they had current or past experience providing (N = 30, 97%) and/or supervising (N = 20, 65%) mental health treatment services in schools. All researchers had experience conducting research about child and adolescent mental health, conducting research in partnership with schools/districts, training school-based personnel, and providing consultation or technical assistance to schools/districts. Most researchers had current or past experience training graduate students about working in or with schools (N = 20, 95%), providing mental healthcare in schools (N = 16, 77%), supervising direct mental healthcare in schools (N = 13, 62%), and serving as an educator (N = 11, 52%). The retention rate for Survey 2 was 94% (N=49; N=30 providers and N=19 researchers). Demographic and professional characteristics and urbanicity of participating providers were representative of those recruited (70). Researchers represented various age groups, fields of training, and urbanicity across the United States, and their gender identity (56% female) and degree (100% PhD) were very similar to those of researchers in our datasets who were not invited to participate. Thus, results from participants in this study are likely to be generalizable to stakeholders of similar demographics, professional expertise, and geographic location.
Procedures
Systematic sampling procedures drawing on nationally representative databases of school-based providers and researchers were used to identify the study sample. Providers were selected through stratified random sampling from the National School Mental Health Census, a nationally representative survey of school and district mental health teams’ services and data usage. The inclusion criterion was holding a position as a school mental health provider or clinical supervisor likely to have experience delivering or supervising school-based psychotherapy, in which MBC would be used (e.g., school social worker). Census records for individuals meeting this criterion were stratified based on rural-urban continuum codes (metropolitan vs. non-metropolitan) and geographic representation. Prospective participants were randomly selected with replacement until a target sample of at least 30 school mental health providers was achieved. We monitored the sample to approximate US distributions of 1) metropolitan and non-metropolitan/rural urbanicity and 2) geographic location. Using this approach, we oversampled non-metropolitan/rural providers toward the end of recruitment to ensure adequate representation.
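For illustration, the stratified draw-with-replacement described above could be sketched roughly as follows. The roster file, column names ("urbanicity", "region"), and the pandas-based approach are assumptions made for this sketch, not the study's actual tooling.

```python
import pandas as pd

# Hypothetical roster of Census respondents meeting the inclusion criterion;
# the file name and columns ("urbanicity", "region") are illustrative only.
roster = pd.read_csv("census_eligible_providers.csv")

TARGET_N = 30  # target sample of at least 30 providers


def draw_stratified(frame: pd.DataFrame, n: int, seed: int = 0) -> pd.DataFrame:
    """Randomly draw roughly n prospective participants, stratified by urbanicity and region."""
    return (
        frame.groupby(["urbanicity", "region"], group_keys=False)
        .apply(
            lambda stratum: stratum.sample(
                # Draw from each stratum in proportion to its share of the roster.
                n=max(1, round(n * len(stratum) / len(frame))),
                random_state=seed,
            )
        )
    )


invited = draw_stratified(roster, TARGET_N)
# In practice, non-responders were replaced with new random draws, and
# non-metropolitan/rural strata were oversampled late in recruitment.
```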
We recruited 194 school mental health providers; 36 responded (a 19% response rate), of whom five were ineligible and screened out, yielding a final sample of 31 providers. Recruits who did not participate had nonworking email addresses (N=24), did not respond to our recruitment request (N=106), or declined (N=28). Providers received up to three reminder emails over three weeks to respond to the study invitation, consent, and start Survey 1.
Researchers were selected using purposive sampling from two sources: 1) Implementation Research Institute fellows, who had applied to and been selected for implementation science training through a competitive process and were reviewed for school mental health expertise (71); and 2) an established list, maintained by the National Center for School Mental Health, of 138 school mental health researchers with active peer-reviewed publications and/or grants on topics pertaining to school mental health and wellbeing. This latter group of researchers was part of an invitation-only annual meeting and had been pre-reviewed for scholarship and impact on the field, adjusted for career stage, by a planning committee of national school mental health scholars. Inclusion criteria were: 1) expertise in mental health program or practice development, effectiveness testing, and/or implementation research; 2) experience partnering directly with schools; and 3) rank of Associate Professor or Professor at their institution. Applying these criteria resulted in N=56 eligible researchers. Next, advanced expertise implementing mental health programs or practices in schools was coded on a four-point scale (3 = “optimal”, 2 = “good”, 1 = “okay”, and 0 = “unable to assess”) by three senior school mental health researchers with extensive experience in evidence-based practice implementation in schools. Ratings were averaged for each researcher, and recruits were then invited with replacement from the highest ratings downward until a sample size of at least N=20 was achieved. We invited 29 researchers, which resulted in a response rate of 72% (N=21); one recruit did not respond to recruitment emails and seven declined.
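As a rough sketch of the rating-and-ranking step, averaging the three expert ratings and inviting down the ranked list with replacement might look like the following. The researcher identifiers, rater columns, and the send_invitation helper are hypothetical, included only to illustrate the procedure.

```python
import pandas as pd

# Hypothetical expertise ratings from the three senior raters on the 0-3 scale;
# researcher IDs and column names are illustrative.
ratings = pd.DataFrame({
    "researcher": ["A", "B", "C", "D", "E"],
    "rater_1": [3, 2, 1, 3, 0],
    "rater_2": [3, 3, 2, 2, 1],
    "rater_3": [2, 3, 1, 3, 1],
})

TARGET_N = 20  # target of at least 20 enrolled researchers


def send_invitation(researcher_id: str) -> bool:
    """Placeholder for emailing an invitation; returns True if the researcher consents."""
    return True  # stand-in outcome for illustration


# Average the three ratings per researcher and rank from highest to lowest.
ratings["mean_expertise"] = ratings[["rater_1", "rater_2", "rater_3"]].mean(axis=1)
invite_order = ratings.sort_values("mean_expertise", ascending=False)

# Invite down the ranked list, replacing decliners/non-responders with the
# next-highest-rated researcher until the target sample is reached.
enrolled: list[str] = []
for researcher_id in invite_order["researcher"]:
    if len(enrolled) >= TARGET_N:
        break
    if send_invitation(researcher_id):
        enrolled.append(researcher_id)
```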
Measures: Delphi Surveys
Participants completed two rounds of feedback using anonymous Delphi surveys. Each survey started with operational definitions of implementation strategies, MBC, and school mental health providers, and three vignettes illustrating MBC use in schools (see Supplemental File 1). Vignettes were developed and revised for clarity and accuracy based on feedback from several co-authors and other collaborators. The vignettes intentionally focused on MBC clinical practice representing a range of school mental health professional roles, presenting concerns, student ages[1], and measures. Because our focus was on identifying implementation strategies for MBC as a clinical practice, the vignettes did not refer to any implementation supports, such as decision support from a measurement feedback system or other digital interface for scoring and viewing progress data.
The Delphi technique is an established method that uses a series of surveys or questionnaires to obtain controlled, mixed-methods input from a diverse set of expert stakeholders in order to reach reliable consensus on a health care quality topic (72, 73). This method was used in the Expert Recommendations for Implementing Change (ERIC) project to identify a comprehensive list of implementation strategies and definitions for selection and application to specific practices and settings (8, 68). Another research team then replicated and extended this research to identify an adapted set of important and feasible strategies for implementing evidence-based practices in schools (27, 74, 75). The Round 1 survey included the 33 implementation strategies rated most important and feasible in a prior study examining evidence-based practices generally in schools (74). For each strategy, participants indicated whether it was relevant to MBC specifically (“yes”, “yes with changes”, or “no”). For strategies rated as relevant (“yes” or “yes with changes”), participants were then asked to provide 1) importance and feasibility ratings (1 = “not at all important/feasible” to 5 = “extremely important/feasible”), 2) possible synonyms or related activities for the strategy, and 3) suggestions about the definition or application of the strategy. At the end of the survey, participants were also asked to suggest additional implementation strategies not listed. The Round 2 survey included an updated list of strategies and definitions based on Round 1 results. Participants had four weeks to complete the Round 1 and Round 2 surveys. Participants were compensated for their time, and study procedures were approved by the Yale Institutional Review Board.
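A minimal sketch of the branching logic of a Round 1 strategy item is shown below, assuming a simple in-memory representation; the class, field names, and example responses are illustrative and do not reflect the survey instrument's actual wording.

```python
from dataclasses import dataclass, field
from typing import Optional


# Illustrative representation of one Round 1 strategy item and its branching logic;
# field names and response keys are assumptions, not the survey's actual items.
@dataclass
class StrategyItem:
    name: str
    definition: str
    relevance: Optional[str] = None        # "yes", "yes with changes", or "no"
    importance: Optional[int] = None       # 1-5, asked only if rated relevant
    feasibility: Optional[int] = None      # 1-5, asked only if rated relevant
    synonyms: list[str] = field(default_factory=list)
    definition_suggestions: Optional[str] = None


def record_responses(item: StrategyItem, responses: dict) -> StrategyItem:
    """Record one participant's responses, skipping ratings when the strategy is not relevant."""
    item.relevance = responses["relevance"]
    if item.relevance in ("yes", "yes with changes"):
        item.importance = responses["importance"]
        item.feasibility = responses["feasibility"]
        item.synonyms = responses.get("synonyms", [])
        item.definition_suggestions = responses.get("definition_suggestions")
    return item


# Example: a participant rates a hypothetical strategy as relevant.
item = StrategyItem(name="Ongoing consultation", definition="Provide ongoing consultation ...")
record_responses(item, {"relevance": "yes", "importance": 5, "feasibility": 4})
```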
Data Analyses
Quantitative feasibility and importance ratings were summarized with descriptive statistics and examined for normality. Independent samples t-tests were used to compare ratings between providers and researchers. Mean feasibility and importance ratings for each strategy were plotted on a “go-zone” plot to compare relative feasibility and importance by quadrant (76). Go-zone plots provide a bivariate display of mean ratings and are often used in concept mapping. The origin represents the grand mean of both variables of interest (in this case, feasibility and importance), and the four resulting quadrants are used to interpret the relative positions of items (in this case, strategies). The top right quadrant is the “go-zone”, where strategies with the highest feasibility and importance appear.
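A go-zone plot of this kind could be drawn roughly as follows. The strategy names and mean ratings are invented for illustration, and matplotlib is an assumption rather than the plotting tool actually used in the study.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Invented mean ratings per strategy; real values come from the Delphi surveys.
means = pd.DataFrame({
    "strategy": ["Audit and feedback", "Ongoing consultation", "Alter incentives"],
    "importance": [4.6, 4.3, 2.9],
    "feasibility": [3.8, 4.1, 2.5],
})

fig, ax = plt.subplots()
ax.scatter(means["importance"], means["feasibility"])
for _, row in means.iterrows():
    ax.annotate(row["strategy"], (row["importance"], row["feasibility"]))

# Quadrant lines cross at the grand means, so the upper-right quadrant
# (above-average importance AND feasibility) is the "go-zone".
ax.axvline(means["importance"].mean(), linestyle="--")
ax.axhline(means["feasibility"].mean(), linestyle="--")
ax.set_xlabel("Mean importance rating")
ax.set_ylabel("Mean feasibility rating")
ax.set_title("Go-zone plot (illustrative)")
plt.show()
```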
A multimethod approach was used to reduce strategies and refine definitions between Survey 1 and Survey 2. First, a document was developed to display quantitative and qualitative Survey 1 results for each strategy. This document included each Survey 1 strategy and definition, go-zone quadrant results (overall, as well as for providers and researchers separately), quantitative considerations (e.g., the percentage of stakeholders who indicated the strategy was not relevant for MBC in schools, significant differences between providers and researchers, and any concerns about the normality of rating distributions), qualitative synonyms, and definition change recommendations made by participants. Second, one rater (EC) reviewed each strategy using this document and decision-making guidance, vetted by study team members, for each go-zone quadrant. She coded an initial decision (e.g., retain with revisions, collapse, or remove) with justification for each strategy, documented any synonyms reported more than three times, and drafted definition changes that were a) minimal language adjustments; b) not substantial additions to definition length; and c) consistent with the overall scope of the strategy. Then, another rater (CS) reviewed the coded decisions and documentation, and all discrepancies were resolved through consensus conversations. Final decisions about collapsing strategies were made in consultation with two implementation researchers.
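Assembling the per-strategy review document from long-format Round 1 ratings might be sketched as follows. The input file, column names, and the specific summary fields shown are assumptions made for illustration rather than the study's actual workflow.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format Round 1 data: one row per participant x strategy,
# with columns "strategy", "role", "relevant", "importance", "feasibility".
r1 = pd.read_csv("round1_ratings_long.csv")

rows = []
for strategy, grp in r1.groupby("strategy"):
    providers = grp.loc[grp["role"] == "provider", "importance"].dropna()
    researchers = grp.loc[grp["role"] == "researcher", "importance"].dropna()
    rows.append({
        "strategy": strategy,
        "pct_not_relevant": (grp["relevant"] == "no").mean() * 100,
        "mean_importance": grp["importance"].mean(),
        "mean_feasibility": grp["feasibility"].mean(),
        # Provider vs. researcher comparison of importance ratings.
        "importance_t_pvalue": stats.ttest_ind(providers, researchers).pvalue,
    })

review_doc = pd.DataFrame(rows)
# Flag go-zone membership relative to the grand means across strategies.
review_doc["go_zone"] = (
    (review_doc["mean_importance"] >= review_doc["mean_importance"].mean())
    & (review_doc["mean_feasibility"] >= review_doc["mean_feasibility"].mean())
)
```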
To analyze Survey 2 results, descriptive statistics, independent samples t-tests, and go-zone plots were used again, as was the multimethod reduction process detailed above.
[1] Vignettes refer to student “grade”, not age. In the United States, 3rd grade is in primary school (approximately 8 years old), 6th grade is in middle school (approximately 11 years old), and 9th grade is the beginning of secondary school (approximately 14 years old).