The UC-COVID (Understanding Community Considerations, Opinions, Values, Impacts, and Decisions for COVID-19) Study is a community engagement study undertaken to characterize health and access to care during the COVID-19 pandemic. The study includes an educational trial component testing whether a novel intervention can improve knowledge of, and trust in, institutional capacity to implement ethical scarce resource allocation (SRA). We adopted a broad social media-based recruitment strategy, collaborating with community organizations, disease advocacy groups, and professional societies to invite study participation through direct messaging from organizations to their members; we also employed targeted social media posts and referrals from participants. Though our recruitment strategy primarily focused on groups in California (71% of our sample), eligibility was not restricted by location and all adults (age ≥ 18) were eligible. Recruitment for the survey opened on 5/8/2020 and closed on 9/30/2020, though 99% of respondents entered the survey between May and August. The study was approved by the Institutional Review Board, informed consent was obtained from all participants, and the study was registered with ClinicalTrials.gov (NCT04373135).
Study Design and Procedures
To promote recruitment, we established partnerships with several disease advocacy groups (e.g., COPD Foundation, Taking Control of Your Diabetes, Pulmonary Hypertension Association, Vietnamese Cancer Foundation, AltaMed) and professional societies (e.g., California Thoracic Society, American Thoracic Society, Society for General Internal Medicine). Investigators contacted these groups, presented aligned goals, and proposed utilizing their networks for targeted study recruitment. Recruitment messages were posted primarily on social media accounts and message boards and distributed via newsletters to the email distribution lists of networks and partner groups. Broader study promotion was achieved through 'sharing' of study information via the personal/professional social networks of study investigators, colleagues/institutions, and participants (website analytics revealed that visitors shared study information 279 times via the embedded share applet, similar to snowball sampling). Recruitment messages included IRB-approved language to promote the study (brief descriptions, inclusion criteria) and provided a link directing participants to a hosted study website. The study website included general study information (including IRB and trial registration information), research team contact information, and a share applet that allowed users to send an email or create a social media post linking back to the landing page. We tracked study website traffic (user and page views) during the recruitment period, including the device on which the page was accessed (mobile, tablet, desktop), referral source, language, and location.
The study website directed participants to click an outbound link that transferred them to a Research Electronic Data Capture(13, 14) (REDCap, Nashville, TN; 5/8/20 to 6/19/20) or Qualtrics (Provo, UT; 6/19/20 to study close) survey hosted on secure servers at UCLA. Though we initially implemented the study in English only, to include more diverse participants we used professional translation services (International Contact, Inc.; Berkeley, CA) to translate the study materials (website, consent, and survey) into the five most commonly spoken non-English languages in California (Spanish, Mandarin Chinese, Korean, Tagalog, and Vietnamese). Because REDCap did not natively support translation of the survey's command buttons (e.g., "Next," "Submit"), we migrated the survey to Qualtrics to allow full translation of both the survey and its interface.
After entering the survey, respondents first viewed an online consent form with language typical of a written consent. Of the 2,844 survey initiations (Fig. 1), 362 (12.7%) entries from 'bots' and 82 (2.9%) duplicate participant entries were excluded; of the remainder, 2,384 respondents affirmed consent via the online form, 15 declined consent, and 1 exited the survey without affirming or declining consent. Of those who consented, 413 (17.3%) did not continue the survey beyond that point, resulting in 1,971 (82.7%) consented, active participants. Of these, 1,540 (78.1%) completed the baseline assessment through at least part of the section on SRA policies and were thus eligible for pre/post comparison of key trial outcomes.
Baseline participants who did not complete the survey on their first attempt (and who provided an email address) received a reminder email four weeks after their last activity, then weekly for three weeks thereafter (up to four total reminders) to encourage survey completion.
Assessing Data Validity
The Qualtrics platform has a built-in option ('Prevent Ballot Box Stuffing') designed to prevent duplicate entries by placing a cookie on the participant's browser upon first entry into the survey. If the same respondent returns on the same browser and device without having cleared their cookies, they are flagged as a duplicate and not permitted to take the survey again. However, clearing browser cookies, switching to a different web browser, using a different device, or using a browser in 'incognito' mode would all allow a participant to enter the survey again. We therefore additionally relied on embedded data to identify potentially fraudulent entries, reviewing records attached to IP addresses that appeared in the data more than four times; three of the four such instances were suspected to be the result of bots (fraudulent activity)(15) and were discarded from the data. In the first instance, one IP address (geo-tagged to a location in China) contributed 172 attempted survey entries, all submitted within a 24-minute window, none of which progressed beyond the consent page. In the second and third instances, two IP addresses contributed 121 and 69 attempted survey entries, respectively; none progressed past the demographics section, and all included similarly formatted email addresses (random word + four random letters @ domain), some of which were duplicated across the two IP addresses. The final instance comprised 19 records with unique and valid emails; these were determined to be valid records submitted by unique individuals using a shared server. Invalid records submitted by bots were largely consistent with one another (e.g., 100% identified as health care workers, 100% reported an age between 30 and 33) and, compared to valid records, were more likely to report younger age, male sex, being divorced, widowed, or separated, having a bachelor's degree, currently working, and having a military background (data not shown).
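In practice, this screening rule reduces to a frequency count over IP addresses followed by manual review of the flagged records. The following is a minimal Python sketch of that kind of check, assuming survey records exported to a pandas DataFrame with hypothetical ip_address and email columns; the actual review in this study was performed on the exported Qualtrics records and included manual inspection before any records were discarded.

```python
import pandas as pd

def flag_suspect_ips(df: pd.DataFrame, threshold: int = 4) -> pd.DataFrame:
    """Flag records whose IP address appears more than `threshold` times.

    Flagged records are candidates for manual review, not automatic
    exclusion: one flagged IP in this study turned out to be a shared
    server used by 19 valid participants.
    """
    counts = df["ip_address"].value_counts()
    suspect_ips = counts[counts > threshold].index
    flagged = df[df["ip_address"].isin(suspect_ips)].copy()
    # Attach simple review aids: how many records share the IP, and
    # whether the email address is duplicated across records.
    flagged["records_on_ip"] = flagged["ip_address"].map(counts)
    flagged["email_duplicated"] = flagged.duplicated("email", keep=False)
    return flagged

# Example usage: review suspect records before deciding which to discard.
# records = pd.read_csv("survey_export.csv")
# print(flag_suspect_ips(records).sort_values("records_on_ip"))
```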
Follow-Up Surveys
Following consent, respondents were asked to provide an email address to be eligible for a gift card and to receive follow-up surveys; respondents could still participate in the baseline survey if they did not provide one (N = 222 did not). Follow-up invitations were sent in batches by month of baseline survey entry beginning in the second week of August, so follow-up surveys were predominantly completed 2–3 months after the baseline survey; the first follow-up survey was closed in December. Participants received an email with a unique link to the first follow-up survey; those who did not complete it after the original invitation received a weekly reminder email for up to three additional attempts (up to four total invitations) or until they completed the survey. Of 1,749 provided email addresses, only 1,550 invitations were initially sent: 18 emails were returned as undeliverable, and, due to an unidentified error, 181 emails were marked as 'not sent' at the close of the pre-programmed Qualtrics distribution. Of the follow-up invitations sent, 19 respondents opted out of or declined the follow-up survey and 592 did not respond to follow-up requests, resulting in 939 (60.6%) entries into the first follow-up survey. Participants were sent a second follow-up survey via automated email invitation in January 2021, with up to four automated reminders to complete.
Participants who provided email addresses were entered into a raffle to win one of twenty-five (25) $100 gift cards for an online retailer; participants who completed two surveys received one entry and those who completed all three surveys received two entries.
Intervention
During the first follow-up survey, respondents from California were automatically randomized to either a brief educational video explaining SRA policies or no intervention. Randomization was performed by a module programmed into the survey that executed a stratified randomization scheme based on health care worker status, gender, age, race, ethnicity, and education. Because the intervention was based on the policy developed by the University of California system (one of the largest providers in the state, with ten campuses, five medical centers, and three affiliated national laboratories), participants outside California were treated as negative controls and not randomized.
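The internals of the survey platform's randomization module are not described here; as a rough illustration only, a stratified scheme of this kind can be implemented with permuted blocks within each stratum, as in the Python sketch below. The stratum variable codings and block size are illustrative assumptions, not the study's actual configuration.

```python
import random
from collections import defaultdict

class StratifiedRandomizer:
    """Permuted-block randomization within strata.

    Each stratum (a unique combination of the stratification variables)
    gets its own sequence of shuffled blocks, keeping the arms balanced
    within every stratum as participants arrive.
    """

    def __init__(self, arms=("intervention", "control"), block_size=4, seed=None):
        assert block_size % len(arms) == 0
        self.arms = arms
        self.block_size = block_size
        self.rng = random.Random(seed)
        self.queues = defaultdict(list)  # stratum -> remaining assignments

    def assign(self, hcw: bool, gender: str, age_group: str,
               race: str, ethnicity: str, education: str) -> str:
        stratum = (hcw, gender, age_group, race, ethnicity, education)
        queue = self.queues[stratum]
        if not queue:
            # Refill with a freshly shuffled block for this stratum.
            block = list(self.arms) * (self.block_size // len(self.arms))
            self.rng.shuffle(block)
            queue.extend(block)
        return queue.pop()

# Example usage (all argument values are hypothetical):
# randomizer = StratifiedRandomizer(seed=42)
# arm = randomizer.assign(True, "female", "30-39", "Asian",
#                         "non-Hispanic", "bachelor")
```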
Participants randomized to the intervention were automatically shown the intervention video, which was hosted on a private Vimeo (New York, NY) channel and embedded in the survey. The six-and-a-half-minute video was animated by a professional video production company (WorldWise Productions, Los Angeles, CA) and covered key topics in public health ethics and policy development, along with a summary of how the University of California's SRA policy would be implemented during a crisis. A copy of the video is available upon request. In addition to viewing the intervention video, participants randomized to treatment were shown five additional survey questions assessing their impressions of the intervention; all other content of the follow-up survey was identical to that shown to controls.
Safety
At the completion of each survey, participants were directed to a "Thank you" page that included instructions for contacting the study team with any questions or concerns. Participants were also instructed to reach out to their personal health care or mental health provider if they experienced discomfort or distress, and were provided with the website and phone number for the National Suicide Prevention Lifeline in the event they were in crisis.
External Comparison
To determine the extent to which our sample is representative of the larger population from which it was drawn (primarily California adults, but also US adults), we compared our sample to respondents from the 2019 Behavioral Risk Factor Surveillance System (BRFSS)(16). Our survey used a number of BRFSS questions (see below) to facilitate comparison.
Survey Measures
Survey data included information on demographics (Sect. 1 & 6), health and health behaviors (Sect. 2), access to care (Sect. 3), experience with COVID-19/coronavirus (Sect. 4), and SRA policies (Sect. 5). The baseline questionnaire was the longest (approximately 35 minutes to complete), while subsequent surveys were designed to be shorter (approximately 15 minutes to complete).
Respondents first self-reported their status as a health care worker ("Are you a health care professional? Examples include: physician/doctor, nurse, pharmacist, respiratory therapist, rehab specialist, psychologist, clinical social worker, or hospital chaplain. If you are a health professional student (pre-degree or certificate) please select 'no' for the purposes of this survey."). Those who identified as health care workers received different survey items than non-health care workers. All participants were also asked to report their employment status, educational attainment, gender identity, year of birth, race, ethnicity, health insurance, place of residence, marital status, and whether they had children. Health care workers were also asked to identify their specialty and tenure. A shorter version of this section was administered in the follow-up surveys to assess changes to employment and insurance.
The second section of the survey ascertained information on health and health behaviors; the majority of these questions were drawn from the BRFSS(16). Items covered diagnosed chronic conditions; self-reported general health status (5-point Likert scale from 'poor' to 'excellent'); the number of days in the past 30 days when mental health was "not good" ("Now thinking about your mental health, which includes stress, depression, and problems with emotions…") and when physical health was "not good" ("Now thinking about your physical health, which includes physical illness and injury…"); screeners for depression (PHQ-2)(17) and anxiety (GAD-2)(18); a single item on sleep from the PHQ-9(19); and a comparison of current mental health to the same time last year. Respondents were asked about alcohol use in the past 30 days (number of days of use, number of drinks per occasion), cigarette(20) and e-cigarette(21) use in the past 30 days, and exercise in the past 30 days; they were also asked to report any recent changes in these behaviors and whether COVID-19 was a cause. A shorter version of this section was administered in the follow-up surveys to assess physical and mental health. Self-identified health care workers were also asked about burnout in subsequent surveys.(22)
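For context, the PHQ-2 and GAD-2 are each two items rated 0 to 3 ("not at all" to "nearly every day"), summed to a total of 0 to 6, with a score of 3 or more conventionally treated as a positive screen. A minimal scoring sketch follows; the function and argument names are hypothetical, not the study's instrument coding.

```python
def score_two_item_screener(item1: int, item2: int, cutoff: int = 3) -> dict:
    """Score a PHQ-2 or GAD-2 screener.

    Each item is rated 0-3 (not at all ... nearly every day); a summed
    score at or above the conventional cutoff of 3 is a positive screen.
    """
    if not all(0 <= item <= 3 for item in (item1, item2)):
        raise ValueError("Items must be scored 0-3")
    total = item1 + item2
    return {"total": total, "positive_screen": total >= cutoff}

# Example: PHQ-2 responses of "several days" (1) and "more than half
# the days" (2) sum to 3, a positive depression screen.
# score_two_item_screener(1, 2)  # {"total": 3, "positive_screen": True}
```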
The third section focused on access to care; all participants were asked about receipt of an influenza vaccination for the 2019 season, whether they had a personal doctor, and when they last saw their personal doctor. Those who previously reported any chronic medical condition were asked a set of novel questions about the impact of COVID-19 and related social distancing on their disease management and symptoms. Non-health care workers were asked an additional series of novel questions about changes in their ability to access health care during COVID-19, delayed or forgone care during COVID-19, and changes in the use of prescription and over-the-counter medications during COVID-19. This section was not administered during the first follow-up survey, and an abbreviated version was administered at the second follow-up survey.
The fourth section focused on COVID-19 impacts and asked respondents about their knowledge of government regulation of activities during the pandemic, personal protective behaviors, COVID-19 information seeking,(23) COVID-19 related stress,(23) and perceived personal risk from COVID-19. At follow-up, this section additionally contained questions about COVID-19 testing and vaccination.
The fifth and longest section focused on awareness/knowledge of SRA policies, alignment with the values governing SRA policies, preferences for SRA implementation and communication, and trust in and anxiety about SRA. All SRA questions were novel but demonstrated acceptable psychometric properties (e.g., Cronbach's alphas ranging from 0.5666 to 0.8954; Appendix 1). A shorter version of this section was administered in the follow-up surveys.
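For reference, Cronbach's alpha for a scale of $k$ items is

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),$$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total scale score; values closer to 1 indicate greater internal consistency.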
The final section asked optional personal questions regarding COVID-19 experiences(24) (exposure to COVID-19), disability status(25), advance care planning, general sources of news/information(26), the experience of discrimination in health care, and other personal characteristics(27) (e.g., religion, sexual orientation, political identity). This section was not administered in the follow-up surveys.
Statistical Analysis
All analyses were performed using SAS 9.4 (SAS Institute, Cary, NC). Summary statistics were used to describe website traffic based on analytics derived from the hosting platform (Squarespace) and from Google Analytics; the correlation between website traffic and respondent counts by geography was evaluated. To ascertain the representativeness of the sample, we calculated and compared descriptive statistics for participants (N = 1,971), BRFSS respondents from California (N = 11,613), and BRFSS respondents from all 50 states and Washington, DC (N = 409,810); BRFSS prevalence data are weighted to represent the US adult population. Finally, differential non-response was evaluated by comparing the characteristics of those with complete (into the SRA section) versus incomplete data at baseline. Differences were assessed using appropriate two-sided bivariate tests with a 0.05 alpha criterion.
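The study analyses were conducted in SAS; purely as an illustrative sketch, the complete-versus-incomplete comparison for a categorical baseline characteristic reduces to a two-sided chi-square test, shown here in Python. The column names (e.g., reached_sra_section) are hypothetical.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def compare_completion(df: pd.DataFrame, characteristic: str,
                       complete_flag: str = "reached_sra_section") -> dict:
    """Two-sided chi-square test of differential non-response.

    Cross-tabulates a baseline characteristic against whether the
    participant progressed into the SRA section, mirroring the
    complete-vs-incomplete comparison described above.
    """
    table = pd.crosstab(df[characteristic], df[complete_flag])
    chi2, p, dof, _ = chi2_contingency(table)
    return {"chi2": chi2, "p": p, "dof": dof,
            "significant_at_0.05": p < 0.05}

# Example usage:
# baseline = pd.read_csv("baseline.csv")
# print(compare_completion(baseline, "gender"))
```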