Study design overview
For this randomized controlled trial, we will test the feasibility, outcomes, and cost of the two levels of E3 training compared to current practices in CACs, using a 1:1:1 allocation ratio. CACs will be randomized to the E3w, E3w+c, or delayed waitlist control condition. Data will be collected directly from training participants pre-training, immediately post-training, and at follow-up. Further, utilizing NCA's standard data systems (including the Outcome Measurement System; OMS), outcome data will be collected for caregivers and community stakeholders pre- and post-training. We hypothesize that the E3 training will be readily implemented within the training structure of NCA and that Victim Advocates and CAC Directors will report high levels of satisfaction with the training. More importantly, we hypothesize that E3w alone will improve Victim Advocates' knowledge, resulting in minor improvement in EBP engagement, while the addition of consultation in E3w+c will lead to increased use of engagement skills, thereby resulting in greater improvement in family engagement in EBP (see Fig. 1 for flow diagram). For purposes of the current study, we will examine family engagement via rates of mental health screening, rates of referral to EBP by Victim Advocates, and family attendance at the first session of EBP. Cost data will be collected to support examining cost-effectiveness in future studies.
Webinar development
Training for both E3w and E3w+c is planned to be provided via a web-based platform accessed over the Internet. Although webinars themselves are not unique to the training of professionals in mental health or child maltreatment, by using recommended practices for webinars (e.g., pre-work activities, interactive components, provision of follow-up resources) [42], we plan to test an interactive and engaging training.
In addition, a web-based training session is planned for CAC administrators and community stakeholders (i.e., MDT members) across both the E3w and E3w+c conditions. The goal of this training is to provide education regarding the role of the Victim Advocate, EBP, and strategies MDT members can use to enhance family engagement in EBP.
Consultation plan
The E3w+c training also involves two separate orientation training calls for Senior Leaders and Victim Advocates that review the responsibilities and structure of the training. These are followed by 10 consultation/coaching web-based meetings, with the requirement that Victim Advocates attend 80% for successful completion. These web-based meetings are to occur over a total of 4 months. The purpose of starting with weekly meetings is to solidify learning from the webinars. Calls are designed to provide opportunities to individualize learning and practice skills related to mental health screening, engagement (TIES and MI strategies), and linkage to EBP. With Victim Advocates from multiple CACs on each call, there will be shared learning, as each participant will have the opportunity to identify barriers encountered and gain feedback on the application of skills.
Site recruitment
Recruitment and selection of CACs took place in fall 2019. CACs completed an application to participate through NCA, with procedures following NCA's established guidelines for the application, proposal evaluation, site selection, and implementation of training processes. Applications for training were released via email to the accredited CACs across the U.S. Informational calls were held in the fall of 2019 to address questions and review the commitment required for participation in all aspects of the project (training and research). Applicants were reviewed against the following inclusion criteria: (a) fully accredited by NCA; (b) either directly provides EBP for child mental health or has established and demonstrated linkages for services in the community; (c) participates in the NCA's Outcome Measurement System (OMS); and (d) has a Memorandum of Understanding (MOU) or data sharing agreements with all referral sources. Selection was made at the CAC site level, rather than the Victim Advocate level, to ensure that all Victim Advocates at the same CAC were placed in the same condition, thereby avoiding cross-sharing of knowledge across training conditions. See Fig. 2 for a flow-chart regarding enrollment.
Procedures
Timeline
CAC administrators (Senior Leaders) and Victim Advocates from the sites that met the inclusion criteria were invited to participate and complete consenting procedures, as approved by the University of Oklahoma Health Sciences Center. Informed consent was completed with all individual participants via an electronic platform (i.e., REDCap). Participants were informed that they may discontinue participation as a site or as an individual at any time. Multiple data collection methods are planned for pre-training, post-training, and follow-up (see Fig. 3). After each webinar, a short training evaluation form is to be sent to E3w and E3w+c participants. Data collection will be monitored by the project coordinator, who will assist sites with any questions or concerns with assistance from the research team. Upon completion of the study, sites will receive a $600 payment for their participation.
Data sources
The Outcome Measurement System (OMS)
Measures completed by caregivers and MDT members will be captured through the NCA Outcome Measurement System's Initial Visit Caregiver Survey (offered at the end of a CAC visit), Caregiver Follow-Up Survey, and MDT survey. Over the course of the project year, CACs are required to participate in all three OMS surveys. OMS surveys are anonymous and voluntary for caregiver and team member participants and can be completed via multiple methods, including paper and electronic versions available both on-site and as take-home options. On all three surveys, the questions are a mix of Likert-scale, yes/no, and open-ended items to provide a variety of ways for respondents to share opinions, concerns, and suggestions. The standard OMS surveys were modified to include questions assessing Victim Advocate family engagement skills and connection to EBP.
NCA Member Statistics & Census
CACs provide administrative data to NCA on the scope of services provided and remaining service needs through two statistical sources: NCA statistics submitted every 6 months through the NCATrak case management system and NCA Member Census Surveys collected every two years through Qualtrics. Statistics include basic outputs like number of children served, client demographics, and case resolutions. The Census Survey includes more detailed questions on topics like funding sources, staffing information, and information on mental health services provided by CACs and partner agencies. The most recently available Census was collected in the summer of 2018 and the next Census will be distributed in the summer of 2020.
REDCap
All project-specific data, including the measures noted below, client tracking information, and any other assessments completed by the Victim Advocates and Senior Leaders, will be collected via REDCap at the University of Oklahoma Health Sciences Center [43, 44]. Victim Advocates at each site will be provided with a user login for REDCap and will be able to enter data for their site at any time. Advocates will only be able to view the data for their own site.
Pre-RCT Survey
In the first year of the project, we created and implemented an electronic survey of Victim Advocates and Senior Leaders across all CACs. The survey was collected through NCA’s Qualtrics system and distributed to the national network of CACs. Survey questions focused on the current roles, responsibilities, activities, tools, and management of Victim Advocates. We received responses from 915 Victim Advocates and 540 CAC Directors. Responses on the survey were utilized by the training team to develop the E3 training. In addition, several items focusing on the mental health screening procedures and barriers that Victim Advocates face when engaging families in EBP were used in the adaptive randomization procedure (see below).
Measures
Multiple measures will be collected over the course of the study. Details regarding these measures are described in Table 1.
Proposed Mechanisms of Change
The key mechanisms proposed for the impact of the E3 training on rates of child mental health screening, referral, and linkage to EBP are changes in Victim Advocates’ knowledge and family engagement skills.
A self-report knowledge test will directly examine the knowledge Victim Advocates gain through the training process. Items developed focus on engagement strategies, trauma and effects of trauma, evidence-based mental health treatments, screening for child mental health concerns, and strategies for identifying EBP in their own communities. Our goal is to test change in knowledge acquisition by Victim Advocates.
Skill (i.e., fidelity) measures were adapted from previous research examining self-reported fidelity to the TIES model [45], as well as current coding manuals for MI fidelity [46, 47] to create a self-report checklist of skills taught in the training. In consultation with the TIES experts, we developed a self-report checklist that includes both engagement-consistent behaviors (e.g., inquiring about previous mental health experiences) as well as behaviors counter to the MI and TIES strategies (e.g., providing advice). The inclusion of both types of items will ideally decrease the demand for an overly positive response by Victim Advocates.
Factors affecting the acquisition of knowledge and application of skills can occur at both the individual and system levels. As such, we included measures of Victim Advocate learning anxiety, motivation, executive functioning, attitudes, and cultural sensitivity, as well as organizational and supervisory culture and support.
Outcome Data
Targeted outcomes are as follows: implementation of screening, referrals for services, successful linkage to at least one appointment, types of services accessed (i.e., EBP status), and reduced caregiver stress. These are captured via OMS caregiver surveys and through REDCap surveys completed by the Victim Advocate to address (a) screening forms implemented, (b) engagement strategies used, (c) results of screening, (d) referrals made, and (e) first treatment session documented by date.
Costs
To capture direct and indirect costs associated with implementing the E3 training, during the project we will track (a) the amount of time Victim Advocates spend completing the webinar and pre-work activities; (b) the number and length of consultation calls attended by each Victim Advocate (if applicable); and (c) the number of screening assessments and referrals completed at each CAC. Detailed cost information will be collected at follow-up, comprising quantitative and qualitative questions about costs associated with Victim Advocates' activities since completion of the training. Quantitative items are expected to include salary/wages and benefits, time, and resource use, but these items may be modified or supplemented based on qualitative data collected during the training year. The qualitative items will focus on the impact of the family navigator model (e.g., "Describe the impact, if any, that adopting the family navigator role has had on your other job responsibilities", "What resources will be required to sustain the family navigator role at your CAC?"). Costs associated with development of the training materials and resources will be collected through direct survey of the training developers during and after implementation.
Randomization
A power analysis was conducted initially to determine how many sites would be needed per randomized condition. Because we have three treatment conditions (i.e., E3w, E3w+c, delayed waitlist control), we can assess intervention effects for all three two-way combinations of interventions. The power analyses were conducted for each of these two-way comparisons. To avoid overestimating power [48], we used the smallest number of clusters in an intervention group to estimate power. Power analyses were conducted using the Optimal Design software [49]. With a small intraclass correlation (ρ = 0.05) and 50 total CACs (i.e., total clusters across a pair of intervention conditions), the minimal detectable effect size (MDE) is relatively small (δ = 0.19 as a standardized mean difference), assuming 80% power, a Type I error rate of 5%, and at least 200 referrals per CAC. For the same design criteria and a larger intraclass correlation (ρ = 0.50), the MDE is large at 0.57. Overall, power analyses suggested that we should randomize at least 25 CACs to each condition. We received 114 applications, and the research team evaluated all applications against the inclusion criteria. In addition, the research team determined that CACs lacking the capacity to participate in the training (e.g., only one part-time advocate, participation in multiple other training initiatives) would not be included in the randomization at this time. After review, 81 sites were eligible to participate, and all were randomized.
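The MDE figures above can be approximated in closed form for a two-arm cluster-randomized comparison. The sketch below uses a normal approximation (Optimal Design applies a small-sample t correction, so its value for ρ = 0.50 is slightly larger at 0.57); the function name and structure are illustrative, not the software's actual computation.

```python
from math import sqrt
from statistics import NormalDist

def mde_cluster(rho, clusters_total, n_per_cluster, alpha=0.05, power=0.80):
    """Minimal detectable effect size (standardized mean difference) for a
    two-arm cluster-randomized design, normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    j_per_arm = clusters_total / 2                 # clusters per condition
    design = rho + (1 - rho) / n_per_cluster       # clustering variance inflation
    return z * sqrt(2 * design / j_per_arm)

print(round(mde_cluster(0.05, 50, 200), 2))  # ~0.19, matching the text
print(round(mde_cluster(0.50, 50, 200), 2))  # ~0.56 (0.57 with the t correction)
```

Note how the MDE is driven almost entirely by the number of clusters and the intraclass correlation rather than by the 200 referrals per CAC, which is why the design targets at least 25 CACs per condition.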
The analysis process began with a preliminary exploration of baseline covariates correlated with the outcome variable, caregiver engagement. Variables were taken from pre-existing data collected through the NCA Census (N = 753, q = 222), NCA statistical data (N = 838, q = 68), and OMS Surveys (including the caregiver follow-up survey [N = 490, q = 16], MDT survey [N = 560, q = 14], CAC director survey [N = 540, q = 123], and advocate survey [N = 880, q = 156]).
Based on factors hypothesized to influence the outcome of interest (i.e., child engagement in EBP), the initial analysis included the following variables: (a) type of location (urban vs. rural); (b) region of the CAC (e.g., Northeast, Southern); (c) number of children served; (d) number of total CAC staff; (e) number of advocates on staff; (f) organization type (e.g., hospital-based, government-based); (g) EBP services provided onsite or via community; (h) level of MDT collaboration; (i) number of barriers CAC staff report experiencing when referring families to EBP; (j) use of a mental health screening tool; (k) advocates' previous training experiences; (l) number of children referred to EBP; and (m) number of children who received EBP. The main purpose of the exploratory analysis was to identify the most predictive factors and apply them as baseline covariates. Variables (l) and (m) were used as outcomes, and the remaining variables as predictors in generalized linear models. Because of the exploratory nature of this aim, as well as the presence of missing data, the major risk was a false discovery due to capitalizing on a chance occurrence. Therefore, we performed stepwise model selection on multiply imputed data [50]. Notably, region of the CAC, number of barriers when referring families to EBP, and use of a mental health screening tool appeared in more than 50% of the models selected across the twenty imputed datasets. These three variables were therefore used as covariates in the adaptive randomization.
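The "appeared in more than 50% of selected models" rule can be sketched as a simple tally across imputations. Variable names and the toy model lists below are hypothetical, not the study's actual selections.

```python
from collections import Counter

def stable_covariates(selected_per_imputation, threshold=0.5):
    """Keep variables appearing in more than `threshold` of the stepwise-selected
    models fitted on the m imputed datasets."""
    m = len(selected_per_imputation)
    counts = Counter(v for model in selected_per_imputation for v in set(model))
    return sorted(v for v, c in counts.items() if c / m > threshold)

# Hypothetical stepwise-selected models from 4 imputed datasets
models = [
    ["region", "barriers", "screen_tool"],
    ["region", "barriers"],
    ["region", "screen_tool", "n_advocates"],
    ["barriers", "screen_tool"],
]
print(stable_covariates(models))  # ['barriers', 'region', 'screen_tool']
```

Requiring a majority of imputations to agree guards against a covariate being selected only because of how one dataset happened to be imputed.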
Covariate adaptive randomization is an approach to ensure that participants are approximately balanced with respect to covariates in the randomization [51]. The current analysis used permuted block randomization with eight strata (4 regions × 2 screening tool usage levels) to assign the 81 CAC sites randomly into three arms: Group A (N = 26), Group B (N = 28), and Group C (N = 27), corresponding to E3w, E3w+c, and delayed control, respectively. A preliminary baseline equivalence test was applied to check for differences between the three arms. No differences were found between groups on children's rate of referral to EBP (F(2, 78) = 0.185, p = 0.832), rate of EBP receipt (F(2, 78) = 0.146, p = 0.864), or number of advocates on staff (F(2, 78) = 1.423, p = 0.247).
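Stratified permuted block randomization of this kind can be sketched as follows: within each stratum, sites are shuffled and then assigned in blocks containing one slot per arm. The strata sizes, site identifiers, and seed below are hypothetical; this is an illustration of the technique, not the study's actual assignment code.

```python
import random

def permuted_block_randomize(sites_by_stratum, arms, seed=2019):
    """Within each stratum, assign sites in shuffled blocks with one slot per
    arm, so arm counts within any stratum differ by at most 1."""
    rng = random.Random(seed)
    assignment = {}
    for stratum, sites in sites_by_stratum.items():
        sites = sites[:]
        rng.shuffle(sites)
        for start in range(0, len(sites), len(arms)):
            block = arms[:]            # one slot per arm per block
            rng.shuffle(block)
            for site, arm in zip(sites[start:start + len(arms)], block):
                assignment[site] = arm
    return assignment

# Hypothetical strata: 4 regions x 2 screening-tool levels, 81 sites total
strata = {f"stratum{k}": [f"site{k}_{i}" for i in range(n)]
          for k, n in enumerate([11, 10, 10, 10, 10, 10, 10, 10])}
arms = ["E3w", "E3w+c", "control"]
assignment = permuted_block_randomize(strata, arms)
```

Because blocks are completed within each stratum, the design guarantees near-equal arm sizes for every region-by-screening combination, which is what makes the baseline equivalence reported above expected rather than fortunate.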
Proposed Analyses
For the randomized controlled trial, we will conduct both quantitative and qualitative analyses that will examine the effect of the training on the key outcomes of interest. Proposed analyses are described below.
Quantitative Analytic Plan
Outcome data will be obtained from the post-training and follow-up assessments of Victim Advocates and Senior Leaders, as well as the ongoing collection of OMS survey data from caregivers and team members. The variables collected from Victim Advocates and Senior Leaders are the time-varying and CAC-varying provider fidelity, knowledge, and perceptions of the utility of training. The remaining outcomes relate to family engagement. Statistical analysis will include, but is not limited to, the following: (a) applying linear mixed-effect models to evaluate changes in the primary outcomes between conditions across time, provided the distributions of the outcomes and residuals support this approach [52, 53]; and (b) investigating the mechanism responsible for the causal effect of training condition on outcomes, with knowledge/skill achievement as the mediator. Collected covariates (e.g., perceived supervisory support, learning anxiety) will also be examined for their influence on the outcomes of interest. As a feasibility study, the principal goal at this stage is to examine whether Victim Advocate knowledge and skills change due to training, what factors are associated with that change, and how it influences family engagement in mental health services.
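One plausible specification of such a linear mixed-effect model, assumed here for illustration (a random intercept per CAC; random slopes or advocate-level intercepts could be added if supported by the data), is, for outcome Y of advocate i in CAC j at time t:

Y_ijt = β0 + β1·Condition_j + β2·Time_t + β3·(Condition_j × Time_t) + u_j + ε_ijt,  with u_j ~ N(0, τ²) and ε_ijt ~ N(0, σ²),

where the condition-by-time interaction β3 captures the differential change attributable to training.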
Missing data will be unavoidable given the large amount of data collected from sites across the national network and the repeated measurements across multiple time points. Therefore, stochastic multiple imputation methods will be used to handle missingness, provided the assumption of an ignorable missingness mechanism holds [54, 55]. In addition, analyses will be intent-to-treat, such that individual participants or sites who leave the study will still be included in analyses.
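After fitting the same model to each imputed dataset, estimates are typically combined with Rubin's rules, as `mice` does in R. A minimal sketch (the coefficient values are hypothetical):

```python
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Pool a parameter across m imputed datasets (Rubin's rules):
    point estimate = mean of estimates;
    total variance = within-imputation + (1 + 1/m) * between-imputation."""
    m = len(estimates)
    q_bar = mean(estimates)    # pooled point estimate
    w = mean(variances)        # within-imputation variance
    b = variance(estimates)    # between-imputation (sample) variance
    return q_bar, w + (1 + 1 / m) * b

# Hypothetical coefficient estimates and variances from m = 3 imputations
q, t = pool_rubin([0.50, 0.60, 0.55], [0.010, 0.012, 0.011])
```

The between-imputation term inflates the standard error to reflect uncertainty about the missing values themselves, which single imputation would understate.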
All analyses will be completed in the statistical package R (3.5.2) [56], using packages such as dplyr, tidyr, ggplot2, lme4, stats, readr, and mice.
Qualitative Analytic Strategy
The research team plans to conduct thematic analysis of all qualitative responses on evaluation and follow-up measures. To do so, all responses to each question will be reviewed in their entirety in order to identify broad themes within the responses. Themes will be organized into a broad codebook, and additional coding will focus on refining themes further. Coding will be conducted by multiple members of the research team, and interrater reliability will be determined through cross-coding of responses and comparison of identified themes. Discrepancies will be reviewed with the larger research team to discuss and finalize coding.
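One common way to quantify the interrater reliability described above is Cohen's kappa, which corrects observed agreement for agreement expected by chance. The metric choice, theme labels, and data below are illustrative assumptions, not the study's specified procedure.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same responses."""
    n = len(codes_a)
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n   # observed agreement
    ca, cb = Counter(codes_a), Counter(codes_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)            # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical theme codes from two coders on four open-ended responses
print(cohens_kappa(["barrier", "barrier", "benefit", "benefit"],
                   ["barrier", "benefit", "benefit", "benefit"]))  # 0.5
```

Discrepant responses (the ones lowering kappa) are exactly those that would be brought to the larger research team for consensus coding.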
Cost Analysis
We will generate descriptive statistics from the quantitative cost data to describe typical costs (i.e., means) and variability in costs (i.e., standard deviations) associated with delivery of the E3 training. Direct costs will be calculated in terms of the cost of the resource and the frequency of its use (e.g., consultation fee × number of consultation sessions). Indirect costs will be calculated by applying a shadow price [57], which estimates the value of lost productivity for alternative professional activities of CAC staff, to time spent on training activities (i.e., hourly shadow price × hours of training activities). All cost estimates will be placed on the same metric through adjustment to (a) an index year using the Consumer Price Index [58] to account for inflation and (b) national average U.S. dollar values using the Council for Community and Economic Research Cost of Living Index [59] to account for cost of living differences between CAC locations. We will sum all direct and indirect expenses separately before calculation of descriptive statistics and examine descriptive statistics for total (i.e., direct plus indirect) costs.
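The two-step adjustment described above amounts to simple ratio scaling. The sketch below uses hypothetical dollar amounts, CPI values, and a local COLI of 110 (COLI is indexed so that 100 = national average); none of these figures come from the study.

```python
def adjusted_cost(raw_cost, cpi_cost_year, cpi_index_year, local_coli):
    """Adjust a cost to an index year via the CPI ratio, then to
    national-average dollars via the local cost-of-living index."""
    inflated = raw_cost * cpi_index_year / cpi_cost_year   # (a) inflation
    return inflated * 100 / local_coli                     # (b) cost of living

# Hypothetical figures: direct cost = consultation fee x sessions attended;
# indirect cost = hourly shadow price x hours of training activity
direct = 150.0 * 10    # $150 per consultation session, 10 sessions
indirect = 30.0 * 12   # $30/hour shadow price, 12 hours of training
total = adjusted_cost(direct + indirect, cpi_cost_year=255.7,
                      cpi_index_year=258.8, local_coli=110.0)
print(round(total, 2))
```

Putting every CAC's costs on this common metric is what makes the cross-site means and standard deviations interpretable.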