Study Setting & Participants: Primary care health centers were recruited from the OCHIN (not an acronym) practice-based research network (PBRN).10 OCHIN is the largest network of CHCs using a single instance of the Epic EHR; its centrally hosted EHR is deployed in over 100 health center systems (595 clinic sites) caring for nearly 3.6 million patients across 22 states. For this study, eligible OCHIN health centers met the following criteria: located in a state that expanded Medicaid in 2014, implemented the OCHIN EHR prior to 2013, and no history of participation in a study of similar tools targeted toward children.11 Of 32 eligible health centers, 10 were not approached because they were being recruited for other OCHIN PBRN projects. Of the 22 health centers invited, seven (31.8%) agreed to participate. This study adhered to CONSORT guidelines (Appendix Figure 1). The seven participating health centers comprised 23 individual clinic sites; we refer to the larger entities as health centers, recognizing that most health centers are systems with more than one affiliated clinic site. Participating health centers ranged from one to six clinics each, and all were designated Federally Qualified Health Centers (FQHCs). All participating clinic sites received HRSA grant funding for outreach and enrollment (O&E). Quantitative data were available for all seven health centers throughout the study period; however, one health center in Arm 1 (Health Center D) was lost to qualitative follow-up.
Study Design and Intervention: This project was designed as a hybrid effectiveness-implementation study to identify the factors that explain why some health centers implemented the tools and others did not. A more detailed description of the study design can be found in the study protocol by DeVoe.5 The 18-month tool implementation period ran from September 2016 through March 2018. It was preceded by a 6-month tool testing and refinement window (March-September 2016), during which a preliminary version of the tool was released to all participating clinics and Arm 2 clinics engaged in ‘beta testing.’ The final version of the tool was released to all clinics in Arms 1 and 2 in mid-September 2016. Qualitative data collection continued through July 2018. The intervention included the enrollment tool, educational materials, ‘beta testing,’ and facilitation, each described below.12
- The Enrollment Tool: As described in Table 1, the enrollment tool consisted of an electronic fillable ‘form’ that appeared alongside typical patient registration processes within the EHR. The form was intended for use by enrollment assisters or other staff who help patients or community members with registration or insurance enrollment. Each ‘form’ can be used to assist multiple individuals and is electronically linked to a single individual’s health record. A screenshot of the enrollment tool is included in the appendix (Appendix Figure 2).
- Educational materials: All participating health centers received basic educational materials consisting of an electronic manual with instructions for tool use.
[INSERT TABLE 1]
- ‘Beta testing:’ Only Arm 2 clinics participated in an initial period of beta testing (the six months prior to the study period), which included working with the facilitator and participating in user-centered design feedback sessions that led to tool refinement and updates.
- Facilitation: In addition to basic educational materials, Arm 2 clinics received facilitation customized to meet individual practice needs. During the 18-month implementation study period, the facilitator contacted all Arm 2 health centers (by phone, virtual meeting space, or email) at least monthly. During these contacts, the facilitator provided audit-and-feedback reports that included graphs of monthly rates of uninsured and Medicaid visits, as well as monthly use of the enrollment tool, for each clinic site and the composite health center. The facilitator also provided tailored guidance to support clinic-led rapid change cycles focused on enhancing tool utilization. Throughout the study period, the facilitator could also expedite problem-solving (including technological support) for any questions or concerns regarding the study tools for Arm 2 clinics.
Randomization Procedure: Participating clinic sites were randomized by health center system into one of two intervention arms through covariate-constrained randomization: four health centers (11 clinic sites) were assigned to Arm 1, which received educational materials only, and three health centers (12 clinic sites) to Arm 2, which received educational materials plus facilitation. We used state (Oregon vs. non-Oregon), number of clinics per health center, total number of patients per health center, and percentage of uninsured patients per health center as covariates in the randomization procedure, as these were theorized to be important confounders. Given the focus on comparing intervention arms on tool adoption, we do not report on how controls were selected for testing effectiveness; these details are beyond the scope of this paper and can be found in the study protocol.5
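The logic of covariate-constrained randomization can be sketched as follows: enumerate every possible 4-vs-3 split of the seven health centers, score each split's covariate imbalance, restrict to the best-balanced subset of splits, and draw one at random. All covariate values below are hypothetical, and this is a minimal illustration rather than the study's actual procedure or software.

```python
import itertools
import random

# Hypothetical covariates per health center (not the study's actual data):
# (in_oregon, n_clinics, n_patients, pct_uninsured)
centers = {
    "A": (1, 3, 20000, 0.18), "B": (1, 2, 15000, 0.22),
    "C": (0, 6, 40000, 0.15), "D": (0, 1, 8000, 0.30),
    "E": (1, 4, 25000, 0.20), "F": (0, 2, 12000, 0.25),
    "G": (1, 5, 30000, 0.17),
}

def imbalance(arm1):
    """Scaled sum of absolute differences in covariate means between arms."""
    arm2 = [c for c in centers if c not in arm1]
    means = lambda arm: [sum(centers[c][k] for c in arm) / len(arm) for k in range(4)]
    m1, m2, overall = means(arm1), means(list(arm2)), means(list(centers))
    # Divide each difference by the overall mean so covariates on different
    # scales (counts vs. percentages) contribute comparably
    return sum(abs(a - b) / (o or 1) for a, b, o in zip(m1, m2, overall))

# Score all 4-vs-3 splits, keep the best-balanced ~10%, draw one at random
splits = sorted(itertools.combinations(centers, 4), key=imbalance)
constrained = splits[: max(1, len(splits) // 10)]
random.seed(0)  # fixed seed for reproducibility of this illustration
arm1 = random.choice(constrained)
```

Restricting the random draw to well-balanced allocations is what distinguishes this design from simple randomization, which is unreliable with only seven units.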
Quantitative Data: The primary quantitative outcome of interest was tool use during the study period. An instance of tool use was defined as any occasion on which the form was opened and saved. First, we measured individual instances of tool use and the number of unique patients for whom the tool was used. Then, because health center size and population varied widely, we quantified tool use as a rate. We set the numerator as the number of unique patients with tool use who had ≥1 Medicaid-insured or uninsured clinical visit at an intervention CHC during the study period, and the denominator as the total number of CHC patients with ≥1 Medicaid-insured or uninsured clinical visit at an intervention CHC during the study period (presumed to be the individuals at highest risk of insurance discontinuity).
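As a minimal worked example of this rate, with made-up counts rather than study data:

```python
# Hypothetical counts for one intervention CHC (not study data)
patients_with_tool_use = 240   # numerator: unique patients with tool use who had
                               # >=1 Medicaid-insured or uninsured visit
eligible_patients = 12_000     # denominator: all patients with >=1 Medicaid-insured
                               # or uninsured visit during the study period

tool_use_rate = patients_with_tool_use / eligible_patients
rate_per_1000 = 1000 * tool_use_rate  # expressed per 1,000 eligible patients
```

Expressing tool use per eligible patient, rather than as a raw count, makes health centers of very different sizes directly comparable.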
For descriptive purposes, we collected patient- and CHC-level information rolled up to the health center. Patient-level characteristics included patient status at time of first tool use (established patient, new patient, or never patient) and insurance type prior to first tool use (Medicaid, Medicare, private, other public, uninsured, no prior visits/missing). CHC characteristics included number of active patients (individuals with an ambulatory encounter during the study period); % uninsured ambulatory visits during the implementation period; % Medicaid-insured ambulatory visits during the implementation period; % nonwhite; % Hispanic; median patient age (years); % of patients with income below 138% of the Federal Poverty Level (FPL) at the beginning of the implementation period; number of clinics within the health center system; and urbanicity (rural, urban, or mixed; one health center had clinic sites in both rural and urban locations).
Qualitative Data: Qualitative data collection included ethnographic observation and semi-structured interviews with key stakeholders at CHCs and was focused on understanding why (or why not) and how the enrollment tool was implemented, with attention to differences between arms. Data were collected through in-person site visits to CHC clinics and through monthly phone interviews with CHC contacts. The interview guide and field definitions are included in the appendix (Appendix Tables 1 and 2).
Two experienced researchers conducted at least one in-person site visit with all but one of the CHCs; Health Center D did not participate in a site visit. Site visits focused on identifying practices’ experiences with the enrollment tool, including how the tool was integrated into clinical workflows and the barriers and facilitators to tool use. Health centers in Arm 2 additionally received a baseline site visit aimed at assessing existing clinic O&E processes prior to the intervention, understanding motivation for using the enrollment tool, and understanding aspects of practice organizational capacity, including existing tools used for O&E. Health centers generally participated in site visits according to their own willingness, capacity, and clinic structure (one visit each at Health Centers A, C, and E; two at B and F; and three at G). Site visits lasted 1-2 days and included 8-14 hours of observation; 3-6 semi-structured interviews with practice staff (n=47), using a snowball recruitment technique to identify all staff who were directly or indirectly involved in tool use (e.g., front desk staff, leadership, outreach and enrollment staff, health information technology support); and the collection of artifacts related to the enrollment process (e.g., enrollment applications, training and educational materials). The number of interviews conducted at each clinic varied by clinic size and captured all available clinic-identified stakeholders in the health insurance outreach and enrollment process. Interviews were approximately 45-60 minutes long.
In addition to site visit data, one researcher made monthly phone contact with each CHC during the study period, targeting one to two key informants who were most closely involved with implementation of the enrollment tool. Phone interviews were approximately 20 minutes long and included discussion of tool implementation, enrollment patterns, and experience with implementation support.
Quantitative Analysis: First, we described patient and clinic characteristics by health center and study arm. Similarly, we compared several characteristics of tool utilization by health center and study arm. We compared rates of tool use between Arms 1 and 2 using a two-sided Poisson exact rate ratio test, and report the rate ratio (95% confidence interval) comparing Arm 2 to Arm 1. Statistical significance was set at α=0.05. To maximize learning from individual health centers, we also report descriptive rates of tool use by health center. Lastly, to understand uptake and continued tool use throughout the implementation period, we visually present trends in tool utilization over time by estimating monthly rates of tool use by study arm and for each health center. Quantitative analyses were performed using SAS software, v.9.4 (SAS Institute Inc., Cary, NC) and R version 3.6.0.13
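The Poisson exact rate ratio test can be sketched in a few lines. Under the null hypothesis of equal rates, conditional on the total event count n = x1 + x2, the count in one arm follows a binomial distribution, so an exact binomial calculation applies (this mirrors the conditional construction behind R's `poisson.test`). The counts below are hypothetical, not the study's results.

```python
from math import comb

def poisson_exact_rate_ratio_test(x1, t1, x2, t2):
    """Exact two-sided test of H0: equal event rates for two Poisson counts.
    x1, x2: event counts; t1, t2: exposure (here, eligible patients).
    Conditional on n = x1 + x2, x2 ~ Binomial(n, t2/(t1+t2)) under H0."""
    n, p0 = x1 + x2, t2 / (t1 + t2)
    pmf = [comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(n + 1)]
    obs = pmf[x2]
    # Two-sided exact p-value: total probability of all outcomes no more
    # likely than the one observed (the "method of small p-values")
    p_value = min(1.0, sum(q for q in pmf if q <= obs * (1 + 1e-9)))
    rate_ratio = (x2 / t2) / (x1 / t1)  # Arm 2 relative to Arm 1
    return rate_ratio, p_value

# Hypothetical counts: 150 of 9,000 eligible patients with tool use in Arm 1
# versus 300 of 10,000 in Arm 2 (illustrative only)
rr, p = poisson_exact_rate_ratio_test(150, 9000, 300, 10000)
```

A rate ratio above 1 indicates higher tool use in Arm 2 per eligible patient; the exact construction avoids relying on large-sample approximations, which matters when event counts per health center are small.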
Qualitative Data Management and Analysis: Interviews were audio-recorded, professionally transcribed, and checked for errors. Jottings made in the field were developed into comprehensive field notes within 24-48 hours of each site visit’s end. Data collection and analysis were iterative, with initial findings guiding subsequent interview questions.14 Field notes and interview transcripts were de-identified and imported into ATLAS.ti (Version 7.0, ATLAS.ti Scientific Software Development GmbH, Berlin, Germany) for data management and analysis.
We used an immersion/crystallization approach15 to analyze the data. Two experienced qualitative analysts read the data from each health center (immersion) and then regularly met to discuss patterns within each site (crystallization). Through this process, the team developed a codebook to tag relevant portions of the text. Data were then analyzed a second time to draw comparisons across health centers. This process yielded patterns in the factors that influenced use of the enrollment tool, and we began to make connections to relevant literature to help enhance and explain emerging results. This methodology has been used elsewhere in qualitative primary care research.16-18
We used the Consolidated Framework for Implementation Research (CFIR)19 to help name the barriers and facilitators to implementation that we observed. Lastly, we developed a matrix organized by health center and study arm, and entered qualitative data for each of the factors identified as influencing tool use. This allowed the team to organize cross-cutting findings and to develop a more robust understanding of what happened among participating health centers and why.20
Quantitative/Qualitative Data Integration: Throughout the study period, the quantitative and qualitative analytic teams worked in parallel to analyze the data while minimizing bias. When analyses were complete, the results were examined by the full study team to identify common themes explaining the observed results and to integrate the presentation of data. With the assistance of the full study team, qualitative and quantitative results were triangulated and contextualized within the broader literature and study design. This methodology has been used elsewhere and is noted to maximize rigor in mixed methods analyses.21
This study was approved by the Institutional Review Board at Oregon Health & Science University and was registered as an observational study at clinicaltrials.gov (NCT02355262).