Between 2020 and 2021, three online surveys were launched to gather data on the mental health impacts of the COVID-19 pandemic. The details of the surveys are described in Table 1.
Table 1. Details of the three surveys

| Survey | My Wellness and COVID-19 Survey (WAC) | Young People’s Health Survey (YPHS) | Heal Well India Survey (HWI) |
| --- | --- | --- | --- |
| Aim | To examine the mental health and social effects resulting from social isolation in India due to COVID-19 | To understand the prevalence of anxiety, depression, substance use, and dating violence in young people aged 18–24 years | To examine how the mental health of healthcare workers was impacted during the pandemic |
| Type | Longitudinal | Cross-sectional | Longitudinal |
| Audience | 1) Anyone above the age of 18 years; 2) Indian citizens; 3) currently residing in India | 1) Young people aged 18–24 years; 2) Indian citizens; 3) currently residing in India | 1) Healthcare professionals above the age of 18 years; 2) Indian citizens; 3) currently residing in India |
| Variables explored | Current COVID-19 status; caregiving responsibilities; access to essential goods; knowledge about COVID-19; time spent in home-based activities; patterns of home-based activities; differences in life since COVID-19 | Substance use; common mental disorders; physical activity and sports engagement; prevalence and patterns of dating violence and risky behaviours | Experiences, fears, and concerns about COVID-19; trust in the workplace and formal; anxiety, depression, and substance use-related problems; prevalence of Posttraumatic Stress Disorder; resilience |
| Target sample size | 1275 | 1800 | 2000 |
| Reach (attempted responses) | 1404 | 3409 | 648 |
In this section, we describe the challenges faced in conducting the surveys and the mitigation strategies deployed in response.
Dissemination and recruitment
In the first month after the launch of each survey, rigorous dissemination was undertaken through social media platforms (e.g., Instagram, Facebook, Twitter), newsletters, emails, and WhatsApp. Typically, the disseminated messages included a brief note about the survey, eligibility criteria, and the survey link. These messages were tailored to each platform – for example, colourful and interactive graphics were developed for social media and messenger apps whereas text-based information was shared via email.
Since each survey targeted a different set of populations, we created a database of our existing networks within those populations. Tapping into known contacts and snowballing from there seemed to be the most effective strategy across surveys. We observed a spike of eighty to one hundred responses within the first month post-launch, as a result of active sharing of survey links within personal and professional networks. However, once our personal networks were exhausted, recruitment numbers started to plateau.
Another strategy adopted by the team was cold emailing relevant stakeholders and organizations. Barring a few exceptions, we did not receive any interest or responses through cold emailing. Paid advertisements on social media were also deployed to enhance recruitment efforts; these increased the visibility of the surveys but did not lead to a significant increase in recruitment numbers.
Launched in the middle of the first lockdown, the WAC survey was well received, since online activity was at an all-time high during that period. Gradually, however, the response rate declined. Participant feedback suggested that the survey was too lengthy (it took at least 20 minutes to complete). The pandemic also evolved rapidly, and survey questions that were relevant during the first lockdown were no longer relatable at follow-up data collection 8–12 months later. For example, questions such as ‘time spent doing home-based activities’ were appropriate during the lockdown but were perceived as no longer relevant once the lockdown was lifted.
Another recruitment technique was to disseminate information about the survey, and a link to it, through an advert in a local newspaper. This technique was intended to reach a larger audience that may not have been present on social media platforms but consumed traditional media. However, no discernible spike in recruitment was observed immediately after this strategy was deployed.
For the YPHS, we mobilised our advisory board, consisting of nineteen young people from across the country, to disseminate the survey within their networks. Collaborations with youth-led and youth-centric organizations in the socio-development sector helped us garner the highest number of responses compared with other recruitment strategies. These collaborations worked in three ways: (a) the collaborating organization shared our survey within their networks through social media, email, and WhatsApp; (b) the collaborating organization circulated our survey to their networks on the condition that we amplified their work on our own digital platforms (for example, we shared the link to an online survey conducted by a collaborating organization on our social media channels, and they reciprocated by circulating our survey within their networks); (c) paid collaborations, which included partnering with an organization with access to a vast network of our survey’s target audience, formalised through a contract specifying the deliverables and terms of payment. The first two strategies provided a momentary boost in responses and greater visibility for the surveys on digital platforms. The third led to a much more substantial boost: we engaged in a paid collaboration with Yuvaa [Yuvaa.co.uk], a youth engagement media platform in India, which generated over 900 responses in about four months from a diverse sample of young people whom we may not have been able to reach otherwise. Yuvaa’s Youth Champions widely disseminated the YPHS links across their university networks. A unique code (such as YUVAA01) accompanied each link, enabling the data manager to track responses from Yuvaa’s network on the backend. We ensured that participants’ identities were kept confidential and accessible only to the data manager.
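To illustrate, below is a minimal sketch (in Python with pandas) of how responses carrying a recruitment code could be tallied from a flat data export. The file name and column names (`referral_code`, `record_id`) are illustrative assumptions rather than the actual field names used in the YPHS.

```python
import pandas as pd

# Illustrative sketch: tally responses by recruitment code from a flat export.
# "yphs_export.csv", "referral_code", and "record_id" are assumed names.
responses = pd.read_csv("yphs_export.csv")

# Responses recruited through Yuvaa carried codes such as "YUVAA01";
# codes are kept as-is so individual Youth Champion codes can be compared,
# while all non-Yuvaa responses are grouped under "OTHER".
codes = responses["referral_code"].fillna("").str.upper()
responses["source"] = codes.where(codes.str.startswith("YUVAA"), other="OTHER")

# Number of unique respondents reached through each source
print(responses.groupby("source")["record_id"].nunique())
```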
Once the lockdown restrictions were relaxed and lifted after the second wave of the pandemic, we conducted in-person dissemination of the online survey. This involved circulating the survey link and QR code during in-person interactions with potential participants at events such as college workshops. For remote areas with limited network connectivity in two states where we had available staffing, in-person assisted administration was used. This involved enabling participants to complete the survey on a project-owned digital tablet or to fill in a paper-and-pencil questionnaire.
During in-person workshops with college students in one state, the link and QR code for the YPHS were shared using a PowerPoint slide, printed brochures, and stickers. This dissemination strategy enabled us to recruit young participants from that state, a sample that had not been adequately covered during online dissemination.
For the HWI survey, a comprehensive database was created covering audiences such as government and private hospitals, public health organizations, medical colleges and associations, physical and mental healthcare professionals, Accredited Social Health Activists (ASHA workers), and social media influencers working in the healthcare sector. The survey link was emailed in a staggered manner to one audience group each week. Most of these networks consisted of new contacts and, as mentioned earlier, cold emailing did not yield many results. In fact, the HWI survey received the fewest responses of the three surveys, possibly because healthcare professionals were exceptionally preoccupied during the pandemic. However, in-person assisted administration using digital or paper-and-pencil questionnaires helped us gather more responses from remote areas in the state.
Translation
English versions of the surveys were created relatively quickly after the initial idea for each survey and the preliminary work of developing the questions. However, translating a lengthy questionnaire into Indian regional languages such as Hindi, Marathi, and Konkani was considerably time-consuming. It was challenging for our in-house team to find appropriate regional-language translations for words without cultural equivalents, such as ‘psychosocial’ and ‘well-being’. One attempted solution was to hire an external agency for translation; however, the coordination and proofreading required from our team led to a delay in our timelines. This meant that each survey translated into regional languages could only be launched a few months after the English version, thereby delaying recruitment for the regional-language versions.
Ideally, all translated versions of a survey would be launched at the same time; however, we chose a staggered approach because we did not want to lose time, given the evolving nature of the COVID-19 crisis. Hence, we released the versions that were ready to go, in order to prioritise data collection as early as possible during the pandemic.
Despite developing translated versions of the surveys to make them more accessible, 99% of participants chose to respond in English. A potential reason is that a large proportion of the population with access to digital technology in India is English-speaking.
Survey format
The format of the surveys, including factors such as length, visual appearance, order of questions, and ease of navigation, played a crucial role in determining survey completion. The format was workshopped with sample participants and community advisors (e.g., youth advisors), both within and outside the host institution, before the launch of each survey. When the surveys were active, our team also monitored the points at which participants were dropping off.
Feedback from both these sources informed iterative revisions to the format and design. For example, for one survey, the length was flagged during review, and it was recommended that we add progress bars, page numbers, interactive messages, and memes to encourage participants to finish the survey. It was also noted that the survey ‘landing page’ was extremely text-heavy, which might have discouraged participants from continuing beyond that page. Hence, the first screen of the survey was modified to include concise information along with a link to the detailed information sheet.
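As a rough illustration of the drop-off monitoring described above, the sketch below identifies the last answered question for each response in a flat export, assuming the question columns appear in the survey’s question order; the file name and column naming convention are hypothetical, not the actual export structure we used.

```python
import pandas as pd

# Hypothetical sketch of drop-off monitoring: for each response, find the last
# question with a non-empty answer, assuming columns follow the survey's
# question order. "survey_export.csv" and the "q" prefix are assumed names.
df = pd.read_csv("survey_export.csv")
question_cols = [c for c in df.columns if c.startswith("q")]

def last_answered(row):
    answered = [c for c in question_cols if pd.notna(row[c]) and row[c] != ""]
    return answered[-1] if answered else None

df["drop_off_point"] = df[question_cols].apply(last_answered, axis=1)

# Counting how many participants stopped at each question highlights problem
# pages (for example, an overly text-heavy screen).
print(df["drop_off_point"].value_counts().head(10))
```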
Follow-ups
Follow-ups were designed to study the impact of the pandemic at different time points. As such, only the two surveys collecting data related to the pandemic (the WAC and the HWI) involved follow-ups. For the WAC, follow-ups were sent to participants monthly for 12 months after they completed the baseline survey, and then at 6 and 12 months post the pandemic. For the HWI, follow-ups were sent at 3, 6, and 12 months after completion of the baseline survey.
The follow-up surveys were shorter than the baseline survey and were tested within the team to optimize the time required to complete them, the relevance of the questions, and language comprehension. However, there was a gradual decline in response rates for the follow-up surveys, especially for the WAC, perhaps because the follow-ups were too frequent.
Coordinating the follow-ups was a challenge for our data team, and it required a few rounds of trial and error and manual scheduling of follow-ups before automated follow-ups could be streamlined.
There were several software glitches while sending follow-up surveys, requiring multiple rounds of troubleshooting with the global REDCap support team. Managing data across three surveys in several regional languages with multiple follow-ups became increasingly complex for the data team. To simplify and systematize the process, the task was divided such that each team member managed follow-ups for only one survey, under the supervision of the data manager.
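A minimal sketch of how the manual scheduling step might be supported with a simple script is shown below; it derives follow-up due dates from baseline completion dates using the HWI schedule described earlier (3, 6, and 12 months). The file name, field names, and the 30-day month approximation are illustrative assumptions rather than our actual workflow.

```python
from datetime import date, timedelta
import pandas as pd

# Hypothetical sketch: compute follow-up due dates from each participant's
# baseline completion date and flag those due this week. Field names and the
# 30-day month approximation are illustrative.
FOLLOW_UP_MONTHS = [3, 6, 12]

baseline = pd.read_csv("hwi_baseline.csv", parse_dates=["baseline_date"])

due_rows = []
for _, row in baseline.iterrows():
    for months in FOLLOW_UP_MONTHS:
        due = row["baseline_date"].date() + timedelta(days=30 * months)
        due_rows.append({"record_id": row["record_id"],
                         "follow_up": f"{months}m", "due_date": due})

schedule = pd.DataFrame(due_rows)

# Follow-ups that should be sent manually within the next seven days
this_week = schedule[
    (schedule["due_date"] >= date.today())
    & (schedule["due_date"] < date.today() + timedelta(days=7))
]
print(this_week)
```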
Software and data management
REDCap (Research Electronic Data Capture) is a secure, web-based application for building and managing online surveys and databases. It was developed at Vanderbilt University in the United States in 2004 for academic and non-profit use in health research, quality assurance, and health service management (4). It was important to the survey team that the tool we used ensured data security and the privacy of participant information; REDCap was therefore chosen to host our three online surveys because it is compliant with HIPAA (the Health Insurance Portability and Accountability Act), a USA-based Act that has also become an international standard. REDCap was also chosen because it offered various useful features. For example, it was able to record participant journeys through the surveys, allowing us to monitor the points where participants were dropping off. In addition, multiple languages could be embedded within a single survey, allowing participants to switch easily to their preferred language by clicking a small button in the application.
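For context, REDCap also exposes an API through which project records can be exported programmatically. The sketch below shows a typical export request; the server URL and API token are placeholders, and this is a generic illustration rather than a description of our own data workflow.

```python
import requests

# Minimal sketch of exporting records via the REDCap API.
# The URL and token below are placeholders; a project-specific API token is
# issued by the REDCap administrator.
REDCAP_URL = "https://redcap.example.org/api/"   # placeholder server URL
API_TOKEN = "YOUR_PROJECT_API_TOKEN"             # placeholder token

payload = {
    "token": API_TOKEN,
    "content": "record",   # export survey records
    "format": "json",      # return JSON rather than CSV/XML
    "type": "flat",        # one row per record
}

response = requests.post(REDCAP_URL, data=payload, timeout=60)
response.raise_for_status()
records = response.json()
print(f"Exported {len(records)} records")
```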
The major challenge while using REDCap was the lack of accessible and prompt customer support. Since our team was using the platform for the first time, adequate and timely support was required to navigate it. However, the only option for accessing help was to join a global forum held via a Zoom call once a week in a different time zone. This was inconvenient, as the forum took place close to midnight Indian Standard Time (IST), and because it was only weekly, quick troubleshooting was not possible.
To enable quick problem-solving, the data team used online tools and resources to gain proficiency in operating REDCap. One of our team members was also familiar with using REDCap, which facilitated the setup of the tool. Additionally, an external expert was consulted on a needs basis to provide troubleshooting support.
Two further issues came up while using this software: occasional server outages, which prevented participants from taking the survey; and several glitches while automating the follow-up surveys on REDCap, which eventually required us to manually send the follow-ups to hundreds of participants. This increased the resources and labour required from our small, resource-constrained team.
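As an illustration only, a manual fallback of this kind could be scripted along the following lines, assuming a CSV export containing each participant’s email address and unique survey link; the SMTP host, sender address, and file and column names are placeholders and do not reflect our actual process.

```python
import csv
import smtplib
from email.message import EmailMessage

# Hypothetical sketch of a manual fallback for sending follow-up invitations.
# Assumes "followup_batch.csv" has "email" and "survey_link" columns;
# SMTP host and sender address are placeholders.
SMTP_HOST = "smtp.example.org"
SENDER = "surveys@example.org"

with smtplib.SMTP(SMTP_HOST) as smtp, open("followup_batch.csv") as f:
    for row in csv.DictReader(f):
        msg = EmailMessage()
        msg["From"] = SENDER
        msg["To"] = row["email"]
        msg["Subject"] = "Follow-up survey reminder"
        msg.set_content(
            f"Dear participant,\n\nPlease complete your follow-up survey "
            f"here: {row['survey_link']}\n\nThank you."
        )
        smtp.send_message(msg)
```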