We used a cross-sectional survey design informed by established approaches for conducting systematic needs assessment surveys [14, 15]. We published our study protocol, and all data are freely available on the Open Science Framework at https://osf.io/da4uy/. We complied with the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) when preparing our manuscript for publication (Appendix A). Important definitions and concepts are found in Box 1.
Development of the survey and questionnaire design
The investigative team developed a 15-question survey. Five members of the group piloted the survey and modified it iteratively to improve clarity, face validity, and content validity. We used a web-based survey platform (Qualtrics Labs, Provo, UT, USA). Following an introductory explanation of the background and purpose of the survey, questions were presented in closed-response (12 questions) and open-ended (10 questions) formats. Participants were allowed to skip questions they did not wish to answer. The survey was written in English.
The survey had two main parts: (1) demographic information and whether the stakeholder or the stakeholder’s organization used or produced NMAs; and (2) the purpose of the RoB-NMA tool, namely whether stakeholders preferred to assess bias in the results of an NMA, in the authors’ conclusions, or in both. Further sections (found in the Appendices) asked about additional NMA bias concepts that might be missing from our list, and about interest and engagement in development, piloting, dissemination, and training. We did not randomise or alternate the order of items or answer options in the survey. Respondents were able to go back to review and change their answers.
The full survey questionnaire is provided in Appendix B. We included two questions (Q9 and Q10) that were identical in concept but worded differently, allowing us to assess whether different wordings of a question yield comparable responses. To do this, we checked whether the univariate frequency distributions of the two questions were the same.
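As a sketch, comparing the Q9/Q10 wordings amounts to tabulating each question's percentage response distribution and checking that the two are similar. The response data, category labels, and `percent_distribution` helper below are hypothetical illustrations under that assumption, not the study's actual analysis code:

```python
from collections import Counter

def percent_distribution(responses, categories):
    """Percentage of responses in each category (denominator = all respondents)."""
    counts = Counter(responses)
    n = len(responses)
    return {c: round(100 * counts.get(c, 0) / n, 1) for c in categories}

# Hypothetical Likert-style responses to the two wordings of the same concept
q9 = ["agree", "agree", "neutral", "disagree", "agree"]
q10 = ["agree", "neutral", "agree", "disagree", "agree"]
cats = ["agree", "neutral", "disagree"]

d9 = percent_distribution(q9, cats)    # {'agree': 60.0, 'neutral': 20.0, 'disagree': 20.0}
d10 = percent_distribution(q10, cats)
print(d9)
print(d10)
```

Similar distributions across the two wordings would suggest that question phrasing did not materially change how stakeholders responded.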
All responses were anonymised. Approval from the University of British Columbia Ethics Board was obtained before conducting the study, and consent was implied when participants completed the online survey. Unique site visitors were identified via IP address, and personal information was collected on a voluntary basis from participants (no incentives were offered) who wished to be contacted about the survey’s results and to be involved in dissemination and training. We quoted participants’ responses but did not attribute them to specific individuals.
Email list development
We created an email list of journal editors publishing NMAs, drawing on a bibliometric study of NMAs. From this study, we extracted the journals that publish NMAs and the names of NMA authors. We also developed a list of organizations and institutions producing NMAs (Cochrane Multiple Treatments Methods Group, Guidelines International Network, JBI – formerly the Joanna Briggs Institute, Campbell Collaboration, U.S. Agency for Healthcare Research & Quality’s Evidence-based Practice Centre program, Centre for Reviews and Dissemination, Canadian Agency for Drugs and Technologies in Health [CADTH], Evidence for Policy and Practice Information and Co-ordinating Centre [EPPI-Centre], Centre for Implementation Research at the Ottawa Hospital Research Institute, the GRADE NMA group, and CINeMA [Confidence In Network Meta-Analysis] developers; Appendix C). We also included in the email list participants from a UBC Methods Speaker Series on evidence synthesis methods (https://www.ti.ubc.ca/2022/01/01/methods-speaker-series-2022/).
Email list recruitment
All potential survey respondents were sent an email describing the purpose of the study, requesting their participation, and providing a link to the survey (Appendix D). We welcomed all knowledge users of NMAs, including stakeholders, to participate in the survey, irrespective of their level of familiarity with NMA.
Dissemination through social media
A knowledge translation plan was followed to disseminate and advertise the survey (Appendix E). Anonymous links were included in LinkedIn and Twitter posts, which were circulated through targeted Twitter accounts such as the Knowledge Translation Program, the SPOR (Strategy for Patient-Oriented Research) Evidence Alliance, and the DSEN (Drug Safety and Effectiveness Network) Methods and Applications Group for Indirect Comparisons. Tweets were retweeted amongst followers. We used Twitter cards (i.e., advertisements with pictures) and targeted hashtags to increase awareness of the survey (see the Twitter Campaign in Appendix F). In addition, we advertised through the e-newsletters of Knowledge Translation Canada, the SPOR Evidence Alliance, and the Therapeutics Initiative.
The stakeholder survey ran from June 28 to August 1, 2021. Qualtrics email reminders were scheduled at two-week intervals throughout this period and sent to participants who had not completed or not responded to the survey. We estimated that the survey would take approximately 10 minutes of a respondent’s time to complete.
Prior to data analysis, the responses were transferred from Qualtrics to MS Excel and cleaned to ensure data quality. Questionnaires that were terminated early (for example, where respondents did not go through all questionnaire pages) were included in analyses where data were available, but questionnaires that were entirely blank were excluded. We measured the time respondents took to fill in a questionnaire regardless of whether it was complete.
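The cleaning rule described above (retain partially completed questionnaires, exclude entirely blank ones) can be sketched as follows; the record structure and `is_blank` helper are hypothetical, standing in for the Excel-based cleaning the study actually performed:

```python
# Hypothetical questionnaire records; None marks an unanswered question.
records = [
    {"q1": "yes", "q2": "no", "q3": None},      # partial: kept
    {"q1": None, "q2": None, "q3": None},       # entirely blank: excluded
    {"q1": "yes", "q2": "yes", "q3": "maybe"},  # complete: kept
]

def is_blank(record):
    """A questionnaire is blank if no question received any answer."""
    return all(v is None for v in record.values())

cleaned = [r for r in records if not is_blank(r)]
print(len(cleaned))  # 2
```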
Descriptive statistics, including counts, frequencies, and percentage response distributions, were calculated for each closed-response question, with denominators taken as the number of respondents who answered that question. One researcher coded responses to the open-ended questions by identifying themes. Respondents’ comments on questions 9 and 10 were merged because they were similar in nature.
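A minimal sketch of the descriptive tabulation, assuming skipped questions are recorded as missing and excluded from each question's denominator, as described above; the `describe` helper and the example answers are illustrative, not the study's analysis code:

```python
from collections import Counter

# Hypothetical closed-response answers; None marks a skipped question,
# which is excluded from that question's denominator.
answers = {
    "Q1": ["yes", "no", "yes", None, "yes"],
    "Q2": ["often", None, None, "rarely", "often"],
}

def describe(responses):
    """Count and percentage per option; denominator = non-missing responses."""
    observed = [r for r in responses if r is not None]
    n = len(observed)
    counts = Counter(observed)
    return {opt: {"count": c, "pct": round(100 * c / n, 1)}
            for opt, c in counts.items()}

for q, resp in answers.items():
    print(q, describe(resp))
```

Computing the denominator per question, rather than from the total number of respondents, matches the paper's choice to report percentages among those who answered each item.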