Study design overview
For this randomized controlled trial (RCT), we are testing the feasibility, outcomes, and cost of the two levels of E3 training compared to current practices in CACs. CACs were randomized to the E3w, E3w+c, or delayed waitlist control condition in a 1:1:1 allocation ratio. Data were collected directly from training participants pre-training, immediately post-training, and at follow-up. Further, utilizing NCA’s standard data systems, outcome data were collected for caregivers and community stakeholders pre- and post-training. We hypothesized that the E3 training would be readily implemented within the training structure of NCA and that Victim Advocates and CAC Directors would report high levels of satisfaction with the training. More importantly, we hypothesized that E3w alone would improve Victim Advocates’ knowledge, resulting in minor improvement in EBP engagement, while the addition of consultation in E3w+c would lead to increased use of engagement skills, thereby resulting in greater improvement in family engagement in EBP (see Fig. 1 for flow diagram). For purposes of the current study, we examine family engagement via rates of mental health screening, rates of referral to EBP by Victim Advocates, and family attendance at the first session. Cost data were also collected to support examination of cost-effectiveness in future studies.
Training for both E3w and E3w+c was provided via a web-based platform. Although webinars themselves are not unique to the training of professionals in mental health or child maltreatment, by using recommended practices for webinars (e.g., pre-work activities, interactive components, provision of follow-up resources), we are testing an interactive and engaging training. In addition, a web-based training session was provided to CAC administrators and community stakeholders (i.e., MDT members) across both the E3w and E3w+c conditions. The goal of the MDT webinar was to provide education regarding the role of the Victim Advocate and strategies MDT members can use to enhance family engagement in EBP.
The E3w+c training involved two separate orientation training calls, for Senior Leaders and Victim Advocates respectively, that reviewed the responsibilities and structure of the training. This was followed by 10 web-based consultation calls; Victim Advocates were required to attend 80% of the calls for successful completion. Calls began weekly in order to solidify learning from the webinars; the final six meetings then took place biweekly. The meetings provided opportunities to individualize learning and practice skills related to mental health screening, engagement (TIES and MI strategies), and linkage to EBP. With Victim Advocates from multiple CACs on each call, there was opportunity for shared learning, as each participant had the opportunity to share barriers encountered and gain feedback from experts and their peers.
Recruitment and selection of CACs took place in fall 2019. CACs completed an application to participate through NCA, with procedures following NCA’s established guidelines for the application, proposal evaluation, site selection, and implementation of training processes. Applications for training were released via email to the accredited CACs across the U.S. Informational calls were held in the fall of 2019 to address questions and review the commitment required for participation in all aspects of the project (training and research). Applicants were reviewed against the following inclusion criteria: (a) fully accredited by NCA, (b) either directly provided EBP for child mental health or had established and demonstrated linkages for services in the community, (c) participated in NCA’s Outcome Measurement System (OMS), and (d) had Memoranda of Understanding (MOUs) or data sharing agreements with all referral sources. Selection was made at the CAC site level, rather than the Victim Advocate level. This was to ensure that all Victim Advocates at the same CAC were placed in identical conditions, thereby avoiding any cross-sharing of knowledge across training conditions. See Fig. 2 for a flow-chart regarding enrollment.
CAC administrators (Senior Leaders) and Victim Advocates from the sites that met the inclusion criteria were invited to participate and complete consenting procedures, as approved by the University of Oklahoma Health Sciences Center. Informed consent was completed with all individual participants via an electronic platform (i.e., REDCap). Participants were informed that they could discontinue participation as a site or as an individual at any time. Multiple data collection methods are planned for pre-training, post-training, and follow-up (see Fig. 3). After each webinar, a short training evaluation form was administered to E3w and E3w+c participants. Data collection was monitored by the project coordinator, who assisted sites with any questions or concerns with support from the research team. A data monitoring committee was not utilized given the low level of risk for participating sites. Sites received $600 for their participation.
The Outcome Measurement System (OMS)
Measures completed by caregivers and MDT members were captured through three of the NCA Outcome Measurement System’s surveys: (a) the Initial Visit Caregiver Survey, offered at the end of a CAC visit; (b) the Caregiver Follow-Up Survey, completed approximately 6 weeks after the family’s initial visit to the CAC; and (c) the MDT survey, completed twice over the study. CACs were required to participate in all three OMS survey systems. Anonymous and voluntary, the surveys are delivered via both paper and electronic methods, either on-site or through take-home options. The survey questions are a mix of Likert-scale, yes/no, and open-ended items to provide a variety of ways respondents could share opinions, concerns, and suggestions. The standard OMS surveys were modified for the current study to include questions assessing Victim Advocate family engagement skills and connection to EBP.
NCA Member Statistics & Census
CACs provide administrative data to NCA on the scope of services provided and remaining service needs through two statistical sources: NCA statistics submitted every 6 months through the NCATrak case management system and NCA Member Census Surveys collected every two years through Qualtrics. Statistics include basic outputs like number of children served, client demographics, and case resolutions. The Census Survey includes more detailed questions on topics like funding sources, staffing information, and information on mental health services provided by CACs and partner agencies. The most recently available Census was collected in the summer of 2018 and the next Census was distributed in the summer of 2020.
All project-specific data, including measures noted below, client tracking information, and any other assessments completed by the Victim Advocates and Senior Leaders were collected via REDCap at the University of Oklahoma Health Sciences Center [43, 44]. Victim Advocates at each site were able to enter data at any time and were only able to view their own site’s data.
In the first year of the project, prior to the selection and randomization of sites, we implemented an electronic survey of Victim Advocates and Senior Leaders across all CACs. Collected through NCA’s Qualtrics system, the survey was distributed to the national network of CACs. Questions focused on the current roles, responsibilities, activities, tools, and management of Victim Advocates. We received responses from 915 Victim Advocates and 540 CAC Directors, which were then utilized by the training team to develop the E3 training. In addition, several items on mental health screening procedures and barriers that Victim Advocates face when engaging families in EBP were used in the adaptive randomization procedure (see below).
Multiple measures were collected over the course of the study. Details regarding these measures are described in Table 1.
Proposed Mechanisms of Change
The key mechanisms proposed to impact rates of child mental health screening, referral, and linkage to EBP via E3 training are changes in Victim Advocates’ knowledge and family engagement skills.
A self-report knowledge test directly examined the knowledge Victim Advocates gained through the training process. The items developed focus on engagement strategies, trauma and the effects of trauma, evidence-based mental health treatments, screening for child mental health concerns, and strategies for identifying EBP in their own communities. Our goal is to test change in knowledge acquisition by Victim Advocates.
Skill (i.e., fidelity) measures were adapted from previous research examining self-reported fidelity to the TIES model, as well as current coding manuals for MI fidelity [46, 47], to create a self-report checklist of skills taught in the training. In consultation with the TIES experts, we developed a self-report checklist that includes both engagement-consistent behaviors (e.g., inquiring about previous mental health experiences) as well as behaviors counter to the MI and TIES strategies (e.g., providing advice). The inclusion of both item types was intended to reduce the pull toward overly positive responses by Victim Advocates. Such a measure will allow us to examine skill development and its influence on primary outcomes (see below).
Factors that can affect the acquisition of knowledge and application of skills may occur at both the individual and system level. As such, we included measures of Victim Advocate learning anxiety, motivation, executive functioning, attitudes, and cultural sensitivity, as well as organizational and supervisory culture and support.
Targeted outcomes are as follows: implementation of screening, referrals for services, successful linkage to at least one mental health appointment, types of services accessed (i.e., EBP status), and reduced caregiver stress. These were captured via both OMS caregiver surveys and through REDCap surveys completed by the Victim Advocate to address (a) screening forms implemented, (b) engagement strategies used, (c) results of screening, (d) referrals made, and (e) first treatment session documented by date.
To capture direct and indirect costs associated with implementing the E3 training, during the project we tracked (a) the amount of time Victim Advocates spent completing the webinar and pre-work activities; (b) the number and length of consultation calls attended by each Victim Advocate (if applicable); and (c) the number of screening assessments and referrals completed at each CAC. Detailed cost information was collected at follow-up, comprising questions about salary/wages and benefits, time, and resource use. Costs associated with development of the training materials and resources were also collected from the E3 training team to examine overall training development costs.
A power analysis was conducted initially to determine how many sites would be needed per randomized condition. Because we had three conditions (i.e., E3w, E3w+c, delayed waitlist control), we are able to assess intervention effects for all three two-way combinations of conditions, and the power analyses were conducted for each of these two-way comparisons. To avoid overestimating power, we used the smallest number of clusters in an intervention group to estimate power. Power analyses were conducted using the Optimal Design software. With a small intraclass correlation (ρ = 0.05) and 50 total CACs (i.e., total clusters across a pair of intervention conditions), the minimal detectable effect size (MDE) is relatively small (δ = 0.19 as a standardized mean difference), assuming 80% power, a Type I error rate of 5%, and at least 200 referrals per CAC. For the same design criteria and a larger intraclass correlation (ρ = 0.50), the MDE is large at δ = 0.57. Overall, power analyses suggested that we randomize at least 25 CACs per condition. We received 114 applications, all of which were evaluated against the inclusion criteria. In addition, the research team excluded from randomization any CAC that lacked the capacity to participate in the training (e.g., only one part-time advocate employed, ongoing participation in multiple other training initiatives). After review, 81 sites were eligible to participate, and all were randomized.
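The reported MDEs can be approximated from the standard formula for a two-arm cluster-randomized comparison. The sketch below uses a normal approximation rather than Optimal Design's t-based calculation, so results differ slightly at the second decimal:

```python
from statistics import NormalDist

def mde_cluster(J, n, rho, alpha=0.05, power=0.80):
    """Minimal detectable effect size (standardized mean difference) for a
    two-arm cluster-randomized comparison with J total clusters of n
    participants each and intraclass correlation rho. Normal approximation;
    Optimal Design applies a t-distribution refinement."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * (4 * (rho + (1 - rho) / n) / J) ** 0.5

# 50 CACs across a pair of conditions, ~200 referrals per CAC
print(round(mde_cluster(J=50, n=200, rho=0.05), 2))  # ≈ 0.19, as reported
print(round(mde_cluster(J=50, n=200, rho=0.50), 2))  # ≈ 0.56 here; 0.57 reported
```

The dominant driver is the intraclass correlation: with ρ = 0.50, most outcome variance sits between CACs, so adding referrals within a CAC barely improves precision.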
The adaptive randomization process began with a preliminary exploration of baseline covariates correlated with the outcome variable, caregiver engagement. Variables were taken from pre-existing data collected through the NCA Census (N = 753, q = 222), NCA statistical data (N = 838, q = 68), OMS surveys (including the caregiver follow-up survey [N = 490, q = 16] and MDT survey [N = 560, q = 14]), and pre-RCT surveys (the CAC Director survey [N = 540, q = 123] and advocate survey [N = 880, q = 156]).
Based on factors hypothesized to influence the outcome of interest (i.e., child engagement in EBP), the initial analysis included the following variables: (a) type of location (urban vs. rural); (b) region of the CAC (e.g., Northeast, Southern); (c) number of children served; (d) number of total CAC staff; (e) number of advocates on staff; (f) organization type (e.g., hospital-based, government-based); (g) EBP services provided onsite or via the community; (h) level of MDT collaboration; (i) number of barriers CAC staff report experiencing when referring families to EBP; (j) use of a mental health screening tool; (k) advocates’ previous training experiences; (l) number of children referred to EBP; and (m) number of children who received EBP. The main purpose of the exploratory analysis was to identify the most predictive factors and apply them as the baseline covariates. Variables (l) and (m) were used as outcomes, and the remaining variables served as predictors in generalized linear models. Because of the exploratory nature of this aim, as well as the presence of missing data, the major risk was a false discovery due to capitalizing on chance. Therefore, we applied stepwise model selection to multiply imputed data. Notably, region of the CAC, number of barriers when referring families to EBP, and use of a mental health screening tool appeared in more than 50% of the selected models across the twenty imputed datasets; these three variables were therefore used as the covariates in the adaptive randomization.
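The "more than 50% of selected models" rule amounts to a frequency count of variables across the per-imputation stepwise results. A minimal sketch, with hypothetical variable names and only four of the twenty imputations shown:

```python
from collections import Counter

def stable_covariates(selected_models, threshold=0.5):
    """Keep variables appearing in at least `threshold` of the
    stepwise-selected models (one model per imputed dataset)."""
    counts = Counter(v for model in selected_models for v in set(model))
    m = len(selected_models)
    return sorted(v for v, c in counts.items() if c / m >= threshold)

# Hypothetical stepwise selections from 4 imputed datasets
models = [
    ["region", "screening_tool", "n_barriers"],
    ["region", "n_barriers"],
    ["region", "screening_tool", "org_type"],
    ["screening_tool", "n_barriers"],
]
print(stable_covariates(models))  # ['n_barriers', 'region', 'screening_tool']
```

Requiring a variable to survive selection in most imputations guards against a covariate being chosen on the strength of one imputation's chance pattern.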
Covariate adaptive randomization is an approach to ensure that participants are approximately balanced with respect to covariates in the randomization. The current analysis used permuted block randomization with eight strata (4 regions × 2 screening tool usage levels) to randomly assign the 81 CAC sites to three arms: Group A (N = 26), Group B (N = 28), and Group C (N = 27), corresponding to E3w, E3w+c, and delayed control, respectively. Preliminary baseline equivalence tests found no differences between the three arms on children’s rate of referral to EBP (F(2, 78) = 0.185, p = 0.832), rate of EBP receipt (F(2, 78) = 0.146, p = 0.864), or number of advocates on staff (F(2, 78) = 1.423, p = 0.247).
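Within each stratum, permuted block randomization can be sketched as follows; this is an illustrative implementation (arm labels and site IDs are hypothetical), run once per stratum in practice:

```python
import random

def permuted_block_randomize(site_ids, arms=("E3w", "E3w+c", "Control"),
                             seed=2019):
    """Assign sites within one stratum using permuted blocks: each block
    contains every arm exactly once, shuffled independently, so arm counts
    stay balanced as sites accrue."""
    rng = random.Random(seed)
    block_size = len(arms)
    assignments = {}
    for start in range(0, len(site_ids), block_size):
        block = list(arms)
        rng.shuffle(block)  # random order of arms within this block
        for site, arm in zip(site_ids[start:start + block_size], block):
            assignments[site] = arm
    return assignments

# One hypothetical stratum (e.g., Southern region, uses a screening tool)
sites = [f"CAC-{i:02d}" for i in range(1, 10)]
print(permuted_block_randomize(sites))
```

With 9 sites and blocks of 3, each arm receives exactly 3 sites, which is why the realized group sizes (26, 28, 27) end up nearly equal across the 8 strata.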
Quantitative Analytic Plan
Outcome analyses will draw on the post-training and follow-up assessments of Victim Advocates and Senior Leaders, as well as the continual collection of OMS survey data from caregivers and MDT members. The variables collected from Victim Advocates and Senior Leaders are the time-varying and CAC-varying provider fidelity, knowledge, and perceptions of the utility of training; the remaining outcomes relate to family engagement. Statistical analysis will include, but is not limited to, the following: (a) applying linear mixed-effects models to evaluate changes in the primary outcomes between conditions across time, provided the distributions of the outcomes and residuals indicate these models are appropriate [52, 53]; and (b) investigating the mechanism responsible for the causal effect of training condition on outcomes, with knowledge/skill achievement as the mediator. Collected covariates (e.g., perceived supervisory support, learning anxiety) will also be examined for their influence on the outcomes of interest. As a feasibility study, the principal goal at this stage is to examine whether Victim Advocate knowledge and skills change due to training, what factors might be associated with that change, and how it influences family engagement in mental health services.
Missing data will be unavoidable given the large amount of data collected from sites across the nation and the repeated measurements across multiple time points. Therefore, stochastic multiple imputation methods will be used to handle missingness, provided the assumption of an ignorable missingness mechanism holds [54, 55]. In addition, analyses will follow the intent-to-treat principle, such that individual participants or sites who leave the study will be included in analyses.
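The "stochastic" element of stochastic imputation is a random residual draw added to each regression prediction, so imputed values retain natural variability rather than clustering on the regression line. A minimal single-variable sketch with hypothetical data (packages such as mice repeat this process to produce multiple completed datasets):

```python
import random
from statistics import mean, stdev

def stochastic_impute(x, y, seed=1):
    """Impute missing y values (None) via simple linear regression on x,
    adding a Gaussian residual draw to each prediction."""
    rng = random.Random(seed)
    obs = [(a, b) for a, b in zip(x, y) if b is not None]
    xs, ys = zip(*obs)
    mx, my = mean(xs), mean(ys)
    slope = (sum((a - mx) * (b - my) for a, b in obs)
             / sum((a - mx) ** 2 for a in xs))
    intercept = my - slope * mx
    resid_sd = stdev(b - (intercept + slope * a) for a, b in obs)
    return [b if b is not None else intercept + slope * a + rng.gauss(0, resid_sd)
            for a, b in zip(x, y)]

# Hypothetical: referrals made (x) predicting families attending session 1 (y)
print(stochastic_impute([10, 20, 30, 40, 50, 60], [4, 9, None, 18, None, 27]))
```

Repeating the draw M times and pooling estimates across the M completed datasets is what distinguishes multiple imputation from a single fill-in and is what yields honest standard errors.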
All analyses will be completed in the statistical package R (version 3.5.2) with multiple packages, such as dplyr, tidyr, ggplot2, lme4, stats, readr, and mice.
Qualitative Analytic Strategy
The research team plans to conduct thematic analysis of all qualitative responses on evaluation and follow-up measures. To do so, all responses to each question will be reviewed in their entirety in order to identify broad themes within the responses. Themes will be organized into a broad codebook, and additional coding will focus on refining themes further. Coding will be conducted by multiple members of the research team, and interrater reliability will be determined through cross-coding of responses and comparison of identified themes. Discrepancies will be reviewed with the larger research team to discuss and finalize coding.
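One common way to quantify the interrater reliability from the cross-coding step is Cohen's kappa, which corrects raw agreement for chance. A sketch with hypothetical theme codes (the team may ultimately use a different agreement statistic):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two coders over the same responses."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Agreement expected if both coders assigned codes at their base rates
    p_chance = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical codes assigned by two coders to six open-ended responses
a = ["barriers", "barriers", "skills", "skills", "barriers", "skills"]
b = ["barriers", "skills", "skills", "skills", "barriers", "skills"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Here raw agreement is 5/6, but with only two frequent codes half that agreement is expected by chance, so kappa lands at 0.67 rather than 0.83.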
We will generate descriptive statistics from the quantitative cost data to describe typical costs (i.e., means) and variability in costs (i.e., standard deviations) associated with delivery of the E3 training. Direct costs will be calculated as the cost of a resource multiplied by the frequency of its use (e.g., consultation fee × number of consultation sessions). Indirect costs will be calculated by applying a shadow price, which estimates the value of lost productivity for alternative professional activities of CAC staff, to time spent on training activities (i.e., hourly shadow price × hours of training activities). All cost estimates will be placed on the same metric through adjustment to (a) an index year using the Consumer Price Index to account for inflation and (b) national average U.S. dollar values using the Council for Community and Economic Research Cost of Living Index to account for cost-of-living differences between CAC locations. We will sum direct and indirect expenses separately before calculating descriptive statistics, and will also examine descriptive statistics for total (i.e., direct plus indirect) costs.
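The two-step adjustment described above is simple arithmetic: inflate by the ratio of CPI values, then rescale by the local cost-of-living index relative to the national average (COLI = 100). A sketch with hypothetical index values:

```python
def adjusted_cost(raw_cost, cpi_index_year, cpi_cost_year, coli_local):
    """Place a cost on a common metric: (a) inflate to the index year via
    the CPI ratio, then (b) rescale to the national average (COLI = 100)
    to remove cost-of-living differences between CAC locations.
    All index values used below are hypothetical."""
    inflated = raw_cost * (cpi_index_year / cpi_cost_year)
    return inflated * (100 / coli_local)

# Hypothetical indirect cost: hourly shadow price × hours of training
indirect = 35.00 * 12  # $420 of Victim Advocate time
print(round(adjusted_cost(indirect, cpi_index_year=260.0,
                          cpi_cost_year=250.0, coli_local=105.0), 2))  # 416.0
```

Applying both adjustments before averaging ensures that a CAC in a high-cost metropolitan area and one in a low-cost rural area contribute comparable dollar figures to the pooled descriptive statistics.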