This is a cluster-randomized controlled trial (RCT) of implementation of our multimedia COVID-19 messaging platforms in seven hospital EDs (mix of academic, community, and safety net EDs) in four US cities: 1) San Francisco, CA: Zuckerberg San Francisco General Hospital [ZSFGH] and UCSF Medical Center – Parnassus; 2) Philadelphia: Thomas Jefferson University Hospital, Methodist Hospital and Jefferson Torresdale Hospital; 3) Seattle, WA: Harborview Medical Center; and 4) Durham, NC: Duke University Medical Center.
Rationale for Cluster Design
Our primary goal with this research is to determine whether implementation of PROCOVAXED as an ED site-level intervention results in greater acceptance and uptake of COVID-19 vaccines in vulnerable ED populations. Each site sees approximately 125–250 patients per day, and applying or withholding the intervention (delivery of PROCOVAXED messaging) under an individual patient randomization scheme in this high-workflow, rapid-turnover ED environment is impractical and would likely result in extensive cross-contamination between intervention and control arms. We therefore considered randomization by weeks at sites, with complete removal of the intervention from the site during specified non-intervention periods, to be the optimal approach. Although a single switch of the intervention at each site (i.e., a stepped-wedge trial design) would be easier to enact, changes in general population attitudes over time limit the validity of that method: we expect baseline acceptance of the COVID-19 vaccine to change over time, which would likely introduce substantial bias toward or against the intervention. These practical and methodological benefits of the week-unit cluster RCT outweigh the smaller sample size and simpler analysis offered by an individual patient-unit RCT or a stepped-wedge design.
Randomizations are computer-based pseudo-random sequences of seven-day (one week) periods. Within each of the seven study sites, we randomized 30 one-week periods to the intervention group and 30 one-week periods to the control group to ensure equal allocation to control and intervention settings. We stratified sequences by study week period so that three centers will be in the control condition for one week and four centers will be in the experimental condition for one week, or vice versa in a Latin square design. This is intended to minimize the effect of secular trends on the comparison of the intervention. We generated a 60-week study calendar based on this randomization scheme. To maintain masking of allocation, sites are notified of their treatment assignment for the next week no more than three days prior to that week.
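As an illustration of this allocation scheme (not the study statisticians' actual generator; the site labels and tie-breaking rule here are placeholders), a balanced 60-week calendar in which each week assigns either four or three of the seven sites to the intervention, and each site accrues exactly 30 intervention and 30 control weeks, can be sketched as follows:

```python
import random

def make_study_calendar(n_sites=7, n_weeks=60, seed=2021):
    """Illustrative balanced weekly randomization: each week assigns either
    4 or 3 of the 7 sites to the intervention, and over 60 weeks each site
    accrues exactly 30 intervention and 30 control weeks."""
    rng = random.Random(seed)
    sites = [f"Site{i + 1}" for i in range(n_sites)]
    remaining = {s: n_weeks // 2 for s in sites}  # intervention weeks still owed
    calendar = []
    for week in range(n_weeks):
        k = 4 if week % 2 == 0 else 3  # alternate 4-site and 3-site intervention weeks
        # favor sites owed the most intervention weeks, tie-breaking at random,
        # so the 30/30 balance per site remains attainable
        order = sorted(sites, key=lambda s: (-remaining[s], rng.random()))
        chosen = set(order[:k])
        for s in chosen:
            remaining[s] -= 1
        calendar.append({s: "intervention" if s in chosen else "control"
                         for s in sites})
    return calendar

calendar = make_study_calendar()
# verify the 30/30 intervention/control balance for every site
assert all(sum(1 for wk in calendar if wk[s] == "intervention") == 30
           for s in calendar[0])
```

Alternating four-site and three-site intervention weeks mirrors the stratification described above, in which either three or four centers are in the control condition in any given week.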
Study Enrollment Procedures
Practical budget considerations and limits on research personnel in patient care areas during the COVID-19 pandemic preclude 24/7 study enrollment and delivery of the study intervention. Thus, we will enroll a convenience sample of patients across all study sites, approaching all potentially eligible adult patients who present to study EDs during 6- to 10-hour weekday blocks, typically beginning at approximately 09:00 and continuing to approximately 17:00. Sites have leeway to choose their preferred daytime enrollment blocks, as long as those blocks remain consistent between study arms throughout the study. Research staff avoid telling providers whether a given period is an intervention or control period.
All sites have ED dashboards that include patient age, chief complaint, and COVID-19 vaccination status. Research staff review these dashboards, query ED providers regarding suitability for the study, and approach patients who potentially meet inclusion/exclusion criteria. We include adult (> 17 years of age) patients presenting to study sites according to the following inclusion criteria: 1) not already vaccinated for SARS-CoV-2; 2) able to provide informed consent; 3) fluent in English or Spanish (inclusion of Spanish speakers occurs only at the three sites that have Spanish-speaking research staff); and 4) anticipated ability to complete the study intervention in the ED, i.e., able to watch the short video. We exclude patients with the following characteristics: 1) inability to participate in a survey because of major trauma, intoxication, altered mental status, or critical illness; 2) in police custody or incarcerated; 3) psychiatric chief complaint or on psychiatric hold; 4) medical reason (as per the ED provider or patient) that they cannot receive a COVID-19 vaccine (e.g., instructed by their primary physician that they should not receive a COVID-19 vaccine); and 5) suspicion of acute COVID-19 illness, with any of the following constituting suspicion: cough, fever, myalgias, shortness of breath, sore throat, chest pain, or patient or provider declaration of suspicion of acute COVID-19. Of note, given that many patients receive COVID-19 testing for surveillance reasons (e.g., routine admission testing), performance of a COVID-19 test itself is not an automatic exclusion. However, if a COVID-19 test returns a positive result, the patient is excluded.
For potential study patients, we deliver scripted verbal consent for two short study surveys (the Intake Survey [see Additional File 1] and the Vaccine Acceptance Survey [see Additional File 2 & 3]) in a manner that we have used with numerous other ED survey studies, including those that have addressed COVID-19 vaccine hesitancy.21 Considering that the intervention (COVID-19 vaccine messaging) is entirely educational and firmly a part of standard best-practice ED care (COVID-19 messaging is currently enacted in EDs across the US), only verbal consent is required. Patients are informed that they will not be compensated for participation.
For patients agreeing to the above surveys and meeting inclusion/exclusion criteria, the Intake Survey is administered to assess demographics and other study subject characteristics. All surveys are delivered orally - research staff read questions to the participants in their preferred language and record responses.
Approximately one hour after the Intake Survey, the Vaccine Acceptance Survey is administered. Although we are using an hour as our general guide for this Vaccine Acceptance Survey, we expect variability in patients’ visit time and care plans in the ED (e.g., patients may be undergoing procedures or away from their rooms for x-rays precluding the survey at one hour). Therefore, research staff can conduct this survey anytime between 30 minutes and six hours after the Intake survey. The Vaccine Acceptance Survey for Intervention [see Additional File 2] arm participants assesses whether the messaging platforms affected their views on getting a COVID-19 vaccine. The Vaccine Acceptance Survey for Non-Intervention [see Additional File 3] arm participants asks whether anyone delivered messages about COVID-19 vaccines to them in the ED. The last question in the Vaccine Acceptance Survey in both arms of the study is “Would you accept the COVID vaccine in the emergency department today if your doctor asked you?”
For all subjects saying “No” to the question “Would you accept the COVID vaccine in the emergency department today if your doctor asked you?”, research staff ask whether they may contact them by phone and review their electronic health records (EHR) in a month for follow-up, with options to agree to both phone call and EHR review, phone call only (no EHR review), or EHR review only (no phone call). If the participant agrees to follow-up, the clinical research coordinator (CRC) obtains the relevant full written informed consent, including separate HIPAA authorization documents. They then ask subjects for the best phone number(s) at which to reach them for a follow-up call. They also ask for 1-month follow-up in those subjects who said “Yes” to accepting the COVID-19 vaccine in the ED but did not receive it in the ED.
The intervention consists of three COVID-19 messaging platforms that were developed by our team in the first phase of this work, using findings from qualitative interviews focused on understanding vaccine hesitancy and on potential methods of addressing it.
- Videos: Short (approximately 4-minute) Public Service Announcement type videos that are presented on an electronic tablet. We have developed five versions; all with the same wording in the message, but each with a different pair of physician messengers:
- African American physicians
- Latinx physicians, English version
- Latinx physicians, Spanish version
- Mixed race physicians
- White physicians
- Printed materials: One-page information flyer. We have developed five versions; all with the same format and wording/captions in the flyer, but each with different pictures of patients receiving the vaccine and health care providers administering the vaccines.
- Predominantly African American patients and providers
- Predominantly Latinx, English version patients and providers
- Predominantly Latinx, Spanish version patients and providers
- Mixed race patients and providers
- Predominantly White patients and providers
- Face-to-face messaging: A short (<1 minute) scripted message printed on a sheet of paper and delivered by one of the patient’s providers in the ED (physician, nurse or mid-level practitioner).
At the end of the Intake Survey, research staff ask patients if they are willing to watch a short video about COVID-19 vaccines. If they agree to watch the video, research staff show them a video on the electronic tablet. After finishing with the video (or after refusal to watch the video), the research staff ask the patient if they would like to see an informational flyer about COVID-19 vaccines. If the patient agrees, then staff hand the patient the flyer. Staff then ask the subject if they may return in approximately an hour for the Vaccine Acceptance Survey. After leaving the participant’s room, staff ask one of the patient’s primary providers (doctor, mid-level practitioner, or nurse) to deliver the COVID-19 face-to-face vaccine message, using the scripted message.
We deliver messaging from our platform libraries in patients’ preferred language (English or Spanish only). In our previous qualitative interviews, patients reported preferring vaccine messengers congruent with their own race and ethnicity. Thus, research staff match videos and informational flyers to the ethnic and racial characteristics subjects declare during the Intake Survey (e.g., a video with Latinx messengers for a Latinx subject). All surveys and interventions are delivered during real-time patient visits in site EDs, during waiting times, such that they do not interfere with patient care.
Description of Usual Care
Study procedures during Control Period (Non-Intervention) blocks are identical to procedures in Intervention blocks, with the exception that patients are not given the intervention. Randomization to the control group does not in any way preclude delivery of vaccine messaging by ED providers, and research staff do not tell providers to avoid delivering vaccine messaging. During control group weeks, ED providers are free to follow their usual practice of delivering or not delivering vaccine messaging.
The anticipated flow of the study during Control Period Blocks is summarized in Fig. 2.
Provider Notification of COVID-19 Vaccine Acceptance
At this time, all of our EDs have the capability of administering COVID-19 vaccines in some manner, and we expect that this vaccine availability will continue for at least the first six months of the trial. The last question in the Vaccine Acceptance Survey in both arms of the study is “Would you accept the COVID vaccine in the emergency department today if your doctor asked you?” When a participant says they will accept the vaccine, research staff ask the patient if they can notify the patient’s providers that they said they will accept the vaccine. They also ask the participant if research staff can check to see if they receive the vaccine in the ED. Other than this question and notification, research staff do not push patients to get vaccinated. They do not provide any counseling and do not tell patients whether they qualify for a COVID-19 vaccine in the ED.
When patients agree to the vaccine and agree that research staff can notify the ED providers of vaccine acceptance, research staff notify the provider of vaccine acceptance. We clarify with providers that we have not reviewed the patient's medical history or their indications and contraindications to COVID-19 vaccination. Research staff emphasize with both patient and provider that it is up to the provider to determine whether the patient can receive the vaccine in the ED (research staff are merely informing providers that the patient would accept it if offered). Staff later check with the provider and patient to see if the patient received the COVID-19 vaccine in the ED.
Data entry and management
We manage data using REDCap, hosted by the core site (UCSF), for secure data entry and management. Research staff have the option of inputting survey responses into the REDCap database on iPads in real time or using paper surveys (and later inputting them into REDCap). For study subjects who have consented to phone and EHR follow-up, separate files linking patient identifiers (medical record numbers and phone numbers) to unique study ID numbers are housed at individual study sites, apart from other study data. We developed a detailed data dictionary to ensure consistent standards across sites and use the REDCap data quality tool to reduce missing or erroneous data.
Primary Outcomes and Ascertainment
Our primary outcome of acceptance of a COVID-19 vaccine in the ED is ascertained in both arms of the study by the question in the Vaccine Acceptance Survey, “Would you accept the COVID vaccine in the emergency department today if your doctor asked you?”: “Yes” to this question = acceptance; “No” or “Unsure” = non-acceptance.
Our primary outcome of COVID-19 vaccine uptake 32 days after the index ED visit is ascertained via 1) confirmation of receipt of a COVID-19 vaccine during the index ED visit, 2) review of the EHR for receipt of a COVID-19 vaccine at 28 days, and 3) phone follow-up at 28 to 32 days - response to the question “Have you received a COVID-19 vaccine since you were in the Emergency Department?”
This is a superiority trial in which we seek to verify our central study hypothesis that provision of PROCOVAXED will result in greater acceptance and uptake of the COVID-19 vaccine. Following the recommendations of Hussey and Hughes22,23, our statistical analyses will focus on comparing vaccine uptake rates during intervention and control periods using mixed effects logistic models. The outcomes of interest are the binary indicators of whether a patient accepts the COVID-19 vaccine (“Would you accept the COVID vaccine in the emergency department today if your doctor asked you?” – yes/no) and whether they have received a COVID-19 vaccine (uptake – yes/no) upon follow-up at 28 to 32 days. Models will include a random center effect to accommodate potential within-center characteristics (e.g., case mix, demographics), as well as terms for time and intervention. Hypothesis tests will focus on the statistical significance of the intervention indicator. We will fit the mixed effects models using maximum likelihood routines in Stata.
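As a sketch (and not necessarily the analysts' exact parameterization), the mixed effects logistic model described above, with an intervention indicator, a time term, and a random center effect, can be written for patient i at center j in week t as:

```latex
\operatorname{logit}\,\Pr(Y_{ijt} = 1) \;=\; \beta_0 \;+\; \beta_1 X_{jt} \;+\; \beta_2 t \;+\; b_j,
\qquad b_j \sim N(0, \sigma_b^2)
```

where Y_ijt is the binary acceptance (or uptake) outcome, X_jt indicates whether center j is in a PROCOVAXED week at time t, and b_j is the random center effect; the primary hypothesis test concerns β1.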
We will test our primary hypothesis and analyze outcomes according to the study arm (index visit in an intervention versus a control time period) to which patients were allocated, regardless of whether they received PROCOVAXED messaging platforms – an intention-to-treat analysis. We will also conduct a per-protocol analysis, in which we assess outcomes according to whether patients actually received PROCOVAXED messaging (e.g., viewed the video shown to them) during their index visit (ascertained by direct questioning in the Vaccine Acceptance Survey). Compared with the primary analysis, the per-protocol analysis will allow us to dissect the reasons for success (or failure) in demonstrating improved vaccine acceptance and uptake with PROCOVAXED. For example, if we find better acceptance and uptake in the per-protocol analysis but not in the intention-to-treat analysis, we would subsequently seek ways to improve delivery of PROCOVAXED messaging. Conversely, if both analyses fail to demonstrate improved acceptance, then the PROCOVAXED intervention itself is ineffective, and further efforts to improve its delivery would not be indicated.
In addition to the effects on overall vaccine acceptance, we will also examine the effect of PROCOVAXED on acceptance in patient subgroups, especially African Americans, Latinx patients, and patients who lack primary care (as determined by direct questioning). We will additionally stratify outcomes by study site (representing different regions of the country and different communities), age, gender, and race/ethnicity, as well as by patient-level experience characteristics, such as having had COVID-19.
We will also analyze data from the Intervention group Vaccine Acceptance Survey assessing participants’ views on the vaccine messaging platforms. These data include opinions on which platforms were helpful in promoting vaccine acceptance and feedback on improving the platforms.
Sample Size Considerations
The sample size calculations for this research are governed by testing the hypothesis that implementation of a trusted-messenger informational program will be associated with increased acceptance and uptake of COVID-19 vaccines in unvaccinated ED patients. Considering the prevalence of hesitancy (non-acceptance), the high benefit of increasing acceptance, and the negligible risk of the intervention (a trusted messaging program), even a small effect size would be a clinically important difference. By investigator consensus and in consultation with a panel of health policy experts, we determined that vaccine messaging platforms would be clinically useful if they increased acceptance by an absolute 7%. Similarly, with the same considerations of negligible risk, we determined that these platforms would be useful with an absolute effect size on vaccine uptake of 7%.
Our sample size calculations accommodate the cluster randomization design, in which one-week periods (PROCOVAXED platform weeks versus non-intervention weeks) are randomized to the intervention at each of the seven sites. To avoid period effects, we assign sites using a Latin square design. We base the sample size calculation on the comparison of the proportion of patients who accept the vaccine between the PROCOVAXED and usual care time periods, using standard formulae for individual randomization. We have verified that these sample sizes are conservative by simulating data under a mixed random effects model.
When our protocol was originally written in February 2021, vaccines were not widely available and the degree of baseline vaccine acceptance was unknown. We therefore calculated sample sizes for a wide range of vaccine acceptance and uptake rates, with a plan to measure these in the non-intervention (control) group during the first month of the trial. After the first month of the study, we estimated that our baseline vaccine acceptance and uptake rates (without intervention) would be approximately 15%. With this baseline rate of 15%, at an alpha = 0.05 level and a power of at least 0.9, we will need to enroll 1,290 patients (645 in each arm) to detect the difference of interest (an absolute increase in the vaccine acceptance rate of 7% during PROCOVAXED weeks). With this same baseline 15% rate of uptake and the same specifications for power, we will need to enroll 1,290 patients (645 in each arm) to detect a vaccine uptake difference of 7%. Thus, our target enrollment for this implementation trial is 1,290 subjects across all sites.
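Under these specifications (baseline 15%, absolute increase of 7% to 22%, two-sided alpha = 0.05, power = 0.9), the quoted per-arm sample size can be reproduced with the standard two-proportion formula for individually randomized comparisons. The following is an illustrative sketch, not the study statisticians' actual code:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.90):
    """Per-arm sample size for comparing two proportions, using the
    standard normal-approximation formula (two-sided alpha)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # power term
    p_bar = (p1 + p2) / 2                      # pooled proportion
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# baseline acceptance/uptake of 15% versus 15% + 7% = 22%
print(n_per_arm(0.15, 0.22))  # 645 per arm, i.e., 1,290 total
```

The investigators additionally verified by simulation under a mixed random effects model that these individually randomized sample sizes are conservative for the cluster design.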
In terms of total projected time for enrollment, we expect enrollment of four subjects per site per week at the seven sites, or 28 enrollees per week. We therefore expect to attain our target enrollment of 1290 subjects in approximately 46 weeks.
Early Termination of Study Monitoring Committee (ETSMC)
In our study we seek to determine whether implementation of COVID-19 vaccine messaging platforms in the ED results in greater COVID-19 vaccine acceptance and uptake in unvaccinated ED patients. One of the primary goals of the data safety and monitoring board for most studies is to prevent the ongoing use of unsafe treatments. This safety goal does not apply to this study for the following reasons: 1) we are not delivering a drug or other physical intervention in this trial; the intervention is vaccine messaging; 2) we are not testing or measuring the safety of any drug or therapy – COVID-19 vaccines have undergone rigorous testing in multiple other studies; 3) the intervention (vaccine messaging) is an accepted and highly recommended public health intervention in all patient care settings; 4) randomization to a non-intervention week does not preclude delivery of vaccine messaging by providers – providers are unaware of treatment arms during the study, we are not telling providers to avoid delivering vaccine messaging, and during non-intervention weeks providers are free to follow their usual practice of delivering or not delivering vaccine messaging; 5) we are not telling patients that they qualify for the COVID-19 vaccine in the ED; we are only asking this question: “Would you accept the COVID vaccine in the emergency department today if your doctor asked you?”; and 6) we are not prescribing or ordering vaccines in study patients; we are merely informing ED providers that their patient would accept the vaccine if it were offered in the ED, and we emphasize to providers that we have not reviewed the patient's medical history or their indications and contraindications to COVID-19 vaccination. The decision as to whether to offer or give the vaccine is left entirely to the ED provider.
Given the above-described rationale about safety, there remain two primary considerations with regard to stopping the trial before sample size enrollment in this study:
1) Decreased vaccine acceptance or uptake in the intervention arm – it is possible that the intervention may increase vaccine hesitancy (i.e., decrease vaccine acceptance and uptake) and that this adverse effect could be established statistically before full patient enrollment. Under this circumstance, continuation of the trial would be futile and not ethically justified.
2) Superior efficacy of the intervention – Conversely, it is also possible that the intervention may clearly improve vaccine acceptance and uptake before full sample size enrollment. In this case, continuation of the study in the non-intervention arm would no longer be justified.
To assess for either of these two early termination scenarios, we have established a three-person ETSMC to conduct blinded interim analyses at the one-quarter, one-half, and three-quarter points of study enrollment (after enrollment of 323, 645, and 968 patients). We will provide the ETSMC a detailed algorithm with clearly identified criteria for this early termination assessment.
Site Orientation and Manual of Operating Procedures
We developed orientation materials to familiarize the ED sites with the study protocol. Each site employs one or more research coordinators (RCs), who report to the site principal investigator (PI) and are responsible for day-to-day study implementation. We developed and disseminated a manual of operating procedures (MOP) with standard personnel training methods, including education kits with scripts, summary cards, and PowerPoint presentations to assist coordinators in orienting site clinicians and other staff to the study protocol. We convened videoconference calls to review this material and develop plans for optimizing study procedures to improve usability and workflow. We continue to update the MOP to reflect changes in study procedures.
We reviewed study implementation procedures with sites individually and at group conferences prior to study initiation. We conducted walk-through sessions of workflow with hypothetical study subjects in both Intervention and Non-intervention study arms. We continue to refine procedures, with updates delivered to site PIs and research staff during weekly videoconferences. We maintain a study hotline during primary study hours and encourage study personnel to contact the PI and Central Study Coordinator with all issues and queries.
We implement rigorous methods for clinical trial quality assurance and performance improvement, including: 1) systematic review of enrollment logs, 2) quarterly audits of random samples of data for accuracy and missing elements, and 3) structured review of protocol deviations or violations. The Central Study Coordinator prepares monthly summary report cards, tabulating individual site quality assurance metrics for review during scheduled Steering Committee calls. The study PI discusses site-specific data with site PIs individually and summarizes these data collectively during Steering Committee calls, with prompt dissemination of plans for process improvement.