This study used a simulation modeling approach with data from a previously conducted clinical trial (14,27). Our results are reported following the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) guidelines (28). Implementation strategies included in the model reflect strategies that could be developed using data from the trial. In this study, we focus on the ADEPT community-based mental health or primary care clinics that were nonresponsive after 6 months of Replicating Effective Programs (REP) and would receive additional implementation support (i.e., Facilitation) to enhance uptake of Life Goals. Non-response to REP was defined as 10 or fewer patients receiving Life Goals, or <50% of patients receiving a clinically significant dose of Life Goals (fewer than three of six sessions), after six months (29–31). Eligible sites had at least 100 unique patients diagnosed with depression and could designate at least 1 mental health provider to administer individual or group collaborative care sessions for patients. The study was approved by local institutional review boards (IRBs) and registered with clinicaltrials.gov (identifier: NCT02151331).
Modeling Approach
Using data from the ADEPT trial, we designed a cost-effectiveness study to evaluate three strategies that could be implemented to support the uptake and clinical effectiveness of Life Goals. These strategies do not exactly match the arms in the clinical trial because our goal was to evaluate the optimal strategies for Phase II (i.e., for non-responders). We developed a decision tree to assess 1-year costs and outcomes for different intervention strategies following 6 months of REP among non-responsive sites. The strategy sequences included in the model (see Figure 2) were: 1) REP+EF for 12 months; 2) REP+EF for 6 months, with IF added for 6 months; 3) REP+EF for 6 months, then no implementation strategy for 6 months (responder); 4) REP+EF/IF for 12 months; and 5) REP+EF/IF for 6 months, then no implementation strategy for 6 months (responder). The probability of non-response to the implementation strategies in the model was based on observed non-response rates in the study, which remained consistent across each phase at approximately 0.09. Sites that responded to their assigned implementation strategy after 6 months discontinued the strategy. The analysis uses a 1-year time horizon and assumes a health sector perspective. Parameter inputs were derived using primary data from ADEPT.
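The branch arithmetic of a decision tree like this reduces to a probability-weighted average over responder and non-responder paths. A minimal sketch in Python, using the approximate non-response rate reported above; all cost and QALY values are hypothetical placeholders, not trial estimates:

```python
# Sketch of the decision-tree expected-value calculation: each strategy's
# cost and QALYs are averaged over the responder and non-responder branches.
# All numeric inputs below are illustrative, not ADEPT trial estimates.

P_NONRESPONSE = 0.09  # approximate non-response rate noted in the text

def expected_value(p_nonresponse, value_if_responder, value_if_nonresponder):
    """Probability-weighted average over the two branches of the tree."""
    return (1 - p_nonresponse) * value_if_responder + p_nonresponse * value_if_nonresponder

# Hypothetical per-patient values for a strategy whose responders stop
# facilitation after 6 months while non-responders continue for 12 months.
cost = expected_value(P_NONRESPONSE, value_if_responder=400.0, value_if_nonresponder=800.0)
qaly = expected_value(P_NONRESPONSE, value_if_responder=0.72, value_if_nonresponder=0.68)
print(round(cost, 2), round(qaly, 4))
```

In the full model each branch's value is itself the rolled-up cost (or QALYs) of the downstream path, so the same weighted average is applied recursively from the leaves back to each strategy node.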
Costs
Implementation strategy costs include personnel costs for intervention training and delivery: time spent training, supervising, providing technical assistance and Facilitation; training compensation (e.g., pay during non-work hours); and the time costs to clinical providers of assisting with intervention delivery. Facilitation costs are based on the facilitation logs: the study EF and site IFs logged their tasks, categorizing the mode, personnel interaction, duration, and primary focus of each task. We calculated costs as time spent multiplied by hourly wage plus fringe rates for facilitators. As there was one EF employed by the study team, we used that EF's hourly wage plus fringe. For the IFs, training and background (and thus costs) varied; we therefore based the IF salary and fringe rates on current rates for an LMSW (Licensed Master of Social Work) professional using Bureau of Labor Statistics data, as many of the IFs were LMSWs. Non-labor costs included the costs of the curriculum (manual and materials) and travel costs (20,32). As we anticipated differences in uptake, that is, in the number of patients receiving Life Goals by condition, we calculated the total site-level cost per strategy (the level of randomization) and divided it by the number of patients in that implementation strategy condition. The number of patients per condition was obtained from site-level records. Costs were collected in 2014 and adjusted to 2018 US dollars using the Consumer Price Index (33). A summary of cost parameters is provided in Table 1. We report summary statistics for implementation costs with 95% confidence intervals. We estimated the costs of REP using the available cost data to obtain a comprehensive assessment of total implementation intervention costs, plus the costs of facilitation activities in each condition (EF and EF/IF).
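The costing steps above (labor time priced at a fully loaded wage, plus non-labor costs, divided by the number of patients per condition, then inflated with the CPI) can be sketched as follows. The wage, fringe rate, CPI values, and patient counts are illustrative assumptions, not the study's actual inputs:

```python
# Sketch of the per-patient costing described above. All numbers are
# illustrative placeholders, not the study's actual cost inputs.

def labor_cost(hours, hourly_wage, fringe_rate):
    """Personnel cost: logged time times the fully loaded hourly rate."""
    return hours * hourly_wage * (1 + fringe_rate)

def per_patient_cost(total_site_cost, n_patients):
    """Site-level strategy cost spread over patients in that condition."""
    return total_site_cost / n_patients

def adjust_dollars(cost, cpi_from, cpi_to):
    """Inflate costs between years using the ratio of CPI index values."""
    return cost * (cpi_to / cpi_from)

facilitation = labor_cost(hours=20, hourly_wage=30.0, fringe_rate=0.30)
total = facilitation + 150.0  # plus hypothetical non-labor costs (manuals, travel)
per_pt = per_patient_cost(total, n_patients=25)
print(round(adjust_dollars(per_pt, cpi_from=236.7, cpi_to=251.1), 2))
```

The division happens at the site level because sites, not patients, were the unit of randomization, so uptake differences between conditions flow directly into the per-patient denominator.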
Health Outcomes
Quality-adjusted life-years (QALYs). To develop a preference-based health utility measure for the current study, we mapped the SF-12 (which was collected as part of the patient-level evaluation in the ADEPT trial) to the EQ-5D, a multi-attribute utility instrument, using an established algorithm developed by Franks and colleagues (34). The EQ-5D yields interval-level scores ranging from 0 (dead) to 1 (perfect health). This mapping provides a health utility measure for each health state experienced by patients in the study and can be used to calculate quality-adjusted life-years, the preferred measure of health benefits in cost-effectiveness analysis.
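Once utilities are obtained from the mapping, QALYs over the 1-year horizon can be computed as the area under the utility-over-time curve. A minimal sketch using the trapezoidal rule between assessment points; the utility values and assessment times below are hypothetical:

```python
# Sketch of QALY computation from mapped EQ-5D utility scores: area under
# the utility-over-time curve, with linear interpolation (trapezoidal rule)
# between assessments. Utility values and times are illustrative.

def qalys(utilities, times_in_years):
    """Trapezoidal area under the utility curve over the time horizon."""
    total = 0.0
    for i in range(1, len(utilities)):
        dt = times_in_years[i] - times_in_years[i - 1]
        total += dt * (utilities[i] + utilities[i - 1]) / 2
    return total

# Hypothetical patient assessed at baseline, 6 months, and 12 months.
print(round(qalys([0.70, 0.74, 0.76], [0.0, 0.5, 1.0]), 4))
```

A patient at perfect health (utility 1.0) for the full year would accrue exactly 1 QALY, which anchors the interpretation of fractional values.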
Data Analytic Approach
We used a decision-tree model to compare cost-effectiveness across different scenarios for combining REP + Facilitation for the Life Goals EBP (see Figure 2). The time horizon for this analysis was 12 months, as this is the duration of the trial phase of the study. In this analysis, we adopted a health system/payer perspective. This narrower perspective stands in contrast to the full societal perspective, which incorporates all relevant costs and benefits and is recommended for most economic evaluations (35). While this narrower perspective can potentially ignore important costs or benefits from the broad societal standpoint, it has the practical value of explicitly addressing the budgetary concerns of payers. Thus, this approach fits well with implementation science contexts, where financial factors are often central to whether programs and services are adopted and sustained (36).
Assumptions were made about the psychometric properties of the outcome measures, the effectiveness of the Life Goals intervention, and the reliability of time reporting by the Facilitators. We tested these assumptions in the sensitivity analyses by varying the costs and outcomes related to each intervention condition between low and high values (95% confidence interval). To address missing data on our utility (outcome) measures, we employed an inverse probability weighting (IPW) approach (37).
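The IPW idea can be sketched as follows: patients with observed outcomes are up-weighted by the inverse of their estimated probability of being observed, so the observed cases stand in for the full sample. Here the observation probabilities are supplied directly for illustration; in practice they would be estimated from a model (e.g., logistic regression on baseline covariates):

```python
# Sketch of inverse probability weighting (IPW) for missing utility data.
# Observation probabilities are given directly here for illustration; in
# practice they would come from a fitted missingness model.

def ipw_mean(values, observed, p_observed):
    """Weighted mean of observed values, with weights = 1 / P(observed)."""
    num = den = 0.0
    for v, obs, p in zip(values, observed, p_observed):
        if obs:
            w = 1.0 / p
            num += w * v
            den += w
    return num / den

values = [0.70, 0.65, None, 0.80, None]      # None = missing utility
observed = [True, True, False, True, False]
p_obs = [0.9, 0.8, 0.4, 0.9, 0.5]            # hypothetical P(observed)
print(round(ipw_mean(values, observed, p_obs), 4))
```

Patients who were unlikely to be observed but were observed anyway receive larger weights, correcting the bias that would arise if missingness were related to health status.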
We estimated per-patient costs and QALYs following the 12-month Trial Phase for each implementation strategy sequence. We calculated the per-patient cost by dividing the total costs per condition by the number of patients in each condition. To compare interventions, we divided net incremental costs (e.g., the net increase in costs of REP+EF/IF versus REP+EF) by incremental effectiveness (e.g., the net increase in QALYs of REP+EF/IF versus REP+EF) to calculate the incremental cost-effectiveness ratio (ICER) for patient-level outcomes across the conditions. We conducted a one-way sensitivity analysis on all input parameters listed in Table 1 to create a tornado diagram using net monetary benefits (NMB). We used NMB because it facilitates multiple comparisons, as in the current study, whereas ICERs are less suitable with more than two comparators (38). The sensitivity analysis evaluates how costs and incremental cost-effectiveness are affected by variations in key parameters (26). When available, we based upper/lower bound estimates on the 95% confidence intervals. We also conducted a probabilistic sensitivity analysis (PSA). PSA characterizes uncertainty in all parameters simultaneously, reflecting the likelihood that each model parameter takes on a specific value, and provides information on overall decision uncertainty based on parameter uncertainty (39). We conducted 1,000 model simulations to quantify the probability that each implementation strategy is cost-effective over a range of willingness-to-pay thresholds (40).
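The ICER and NMB calculations described above can be sketched as below; the cost, QALY, and willingness-to-pay values are illustrative placeholders rather than study results:

```python
# Sketch of the ICER and net monetary benefit (NMB) calculations.
# Costs, QALYs, and the willingness-to-pay threshold are illustrative.

def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained for strategy A versus B."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

def nmb(cost, qaly, wtp):
    """Net monetary benefit at willingness-to-pay threshold `wtp`."""
    return qaly * wtp - cost

# Hypothetical REP+EF/IF versus REP+EF comparison.
print(round(icer(900.0, 0.72, 500.0, 0.70)))   # incremental cost / incremental QALY
print(round(nmb(900.0, 0.72, wtp=50_000)))     # NMB of REP+EF/IF at $50,000/QALY
```

At a given willingness-to-pay threshold, the strategy with the highest NMB is preferred, which is what makes NMB convenient when more than two strategies are compared; a pairwise ICER, by contrast, only ranks two strategies at a time.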