Search
Our search yielded 18 articles (see Fig. 1 for full search results). JH and MS double-screened 10% of the non-Chinese titles and abstracts and agreed on 98.3% of them (170/173). The remaining three were resolved after discussion and consensus. All eligible records were written in English and included surveys, semi-structured interviews, focus groups, and writing tasks.
Only 7 of the 18 records reported where participants came from. Three mixed methods survey studies (7, 16, 17) included participants from a wide range of countries, but it was not possible to tell which participants completed the optional qualitative questions. Four interview studies (5, 8, 18, 19) included participants who were almost exclusively from North America, Europe, and Australia, with one participant from Brazil.
Critical appraisal using CASP-Qual rated the studies as ranging from ‘valuable’ to ‘not very valuable’; the less valuable studies had few qualitative components or minimal reporting of qualitative analysis or findings. Study characteristics are reported in Table 3.
Table 3
Study characteristics

| Authors | Participants | Participants’ country | Guideline(s) considered | Methods | Phenomena of interest | CASP rating |
|---|---|---|---|---|---|---|
| Burford et al., 2013 (25) | 151 systematic review authors | Not reported | PRISMA-Equity | Mixed methods survey | Perceived utility, facilitators, and barriers | Fairly valuable |
| Davies et al., 2015 (19) | 18 experts and 29 end users | USA, Canada, Sweden, the UK, and Norway | SQUIRE | Focus groups and interviews | Experiences with and impressions of the SQUIRE Guidelines | Valuable |
| Davies et al., 2016 (6) | 44 graduates, faculty, and directors of healthcare | Not reported | SQUIRE | Mixed methods survey and written exercise | “whether SQUIRE 1.6 was understood and implemented as intended by the developers” | Valuable |
| De Vries et al., 2015 (46) | 7 researchers | Not reported | Systematic review protocols of animal intervention studies | | Feedback on usability, missing items, possibilities for improvement, and clarity | Not very valuable |
| Dewey et al., 2019 (17) | 74 of 831 survey respondents who provided (optional) free-text comments | The full survey was answered by respondents in the USA, Canada, China, South Korea, Japan, Germany, France, Italy, the UK, other European countries, the Middle East, Latin America, and other regions; it is unclear where the free-text respondents came from | CONSORT, STROBE, PRISMA, STARD | Mixed methods survey | “(1) When and how are reporting guidelines and checklists used by authors and reviewers? (2) What is their impact on the content of final manuscript drafts according to authors? and (3) How do authors and reviewers perceive the value of reporting guidelines and related checklists?” | Fairly valuable |
| Eysenbach, 2013 (26) | 61 authors | Not reported | CONSORT-EHEALTH | Mixed methods survey | Views on completing the checklist as part of submission | Fairly valuable |
| Fuller et al., 2015 (8) | 5 authors | USA and Australia | TREND and reporting guidelines in general | Semi-structured interviews | Factors that affected authors’ use of TREND and other reporting guidelines | Valuable |
| Korevaar et al., 2016 (18) | 4 radiology residents, 8 laboratory medicine experts | Radiology residents were from the Netherlands; no geographical details provided for the experts | STARD | Interviews (residents) and mixed methods survey (experts) | To identify items that were vague, ambiguous, difficult to interpret, or missing | Fairly valuable |
| Macleod et al., 2021 (16) | 211 authors, only some of whom answered the free-text question | The full survey was answered by participants in the USA, China, Japan, Germany, the EU, and “Other” areas; it is unclear who answered the free-text question | Materials Design Analysis Reporting (MDAR) framework | Mixed methods survey | Whether the checklist was clear and useful | Fairly valuable |
| Page et al., 2021 (20) | 110 systematic review authors and experts | Not reported | PRISMA | Mixed methods survey | Opinions on PRISMA and proposed changes | Valuable |
| Prady et al., 2007 (21) | 40 authors | Not reported | Standards for Reporting Interventions in Controlled Trials of Acupuncture (STRICTA) | Mixed methods survey | Experiences using STRICTA | Fairly valuable |
| Prager et al., 2021 (24) | 5 of 18 survey respondents who answered the free-text question | Not reported | STARD | Mixed methods survey | Barriers to STARD 2015 adherence | Fairly valuable |
| Rader et al., 2014 (47) | 263 systematic reviewers | Not reported | PRISMA | Mixed methods survey | Barriers or difficulties in meeting more detailed reporting standards in PRISMA | Fairly valuable |
| du Sert et al., 2020 (5) | 11 authors | UK, USA, Belgium, Brazil | ARRIVE | Interviews and writing task | Authors’ opinions, interpretation, and experiences of the updated ARRIVE guidelines | Fairly valuable |
| Sharp et al., 2020 (7) | 203 of 1015 researchers who answered the free-text questions | The full survey was answered by participants in Africa, Asia, Europe, North and South America, the Middle East, and the Pacific region; it is unclear who answered the free-text question | STROBE | Mixed methods survey | Experiences of and attitudes towards STROBE | Valuable |
| Struthers et al., 2021 (22) | 623 authors, 274 of whom answered the free-text question | Not reported | Reporting guidelines in general | Mixed methods survey | “What could we do to improve the guideline?” | Fairly valuable |
| Svensøy et al., 2021 (27) | 10 authors | Not reported | Not specified | Semi-structured interviews | Experiences using guidelines or templates | Valuable |
| Tam et al., 2019 (28) | 230 authors, 62 of whom answered the open-ended questions | Not reported | PRISMA | Mixed methods survey | Opinions on PRISMA | Fairly valuable |
Synthesis findings
The relationships between our codes, descriptive themes, and analytic themes are reported in Table 4. We identified the following analytic themes:

1) Researchers may not understand guidance as intended or what reporting guidelines are, even if they think they do;
2) Researchers report a variety of reasons for using reporting guidelines, and that some are more important than others;
3) Researchers report using reporting guidelines for different tasks and wanting guidance to be delivered in ways that better fit their needs;
4) Using reporting guidelines has costs which researchers may feel outweigh benefits;
5) Reporting guidelines may need to be revised and updated;
6) Researchers may not be able to report all items, which can leave them feeling uncertain or worried;
7) Awareness and accessibility may limit reporting guideline usage;
8) Reporting guidelines may be more useful to less experienced researchers, but these researchers may find them harder to use;
9) Researchers want or need design advice, but reporting guidelines may not be the right place to find it;
10) Reporting guidelines can be harder to use if their scope is too broad, too narrow, or poorly defined;
11) Researchers may have to use multiple sets of reporting guidelines, multiplying complexity and costs;
12) Researchers may use checklists but never read the full guidance.
Table 4
Codes, descriptive themes, and analytic themes

| Codes | Descriptive themes | Analytic theme |
|---|---|---|
| What does this term mean? (6,18–20,22); What does this item mean? (5,6,18–22); How are these items different? (5,19–21,26); Have I understood this as intended? (6,19); Examples help me understand items (5,20,46); Why is this item important? (5,18–20,28); Who is this item important to? (7,19,20); Have I understood the guideline’s scope as intended? (20,22); Does this item apply to me? (5,19–22,26); Is this item optional? (19,21); What are reporting guidelines? (7,24); How should I use a reporting guideline? (8) | What does this mean?; Why is this item important?; Does this apply to me?; I don’t understand what reporting guidelines are | Researchers may not understand the guidance as intended, or what reporting guidelines are, even if they think they do |
| I find guidelines useful in general (5,17,22); Guidelines make me feel confident (7); Guidelines help me develop as a researcher (5,7,25); Guidelines may help me improve my manuscript (7,17,19,25,26); I believe guidelines may help me publish more easily (27); I may use guidelines because journals and editors tell me to (7,8,25,27); I may use guidelines because other researchers expect it (8,27); Standardized reporting benefits the community (7,27,48); Immediate benefits are more important than hypothetical ones (7,27); Personal benefits are more important than benefits to others (27) | Guidelines benefit me; I use guidelines because of other people; Guidelines benefit others; Some benefits are more important than others | Researchers report a variety of reasons for using reporting guidelines, and that some are more important than others |
| I use reporting guidelines for planning research (7,19); I use reporting guidelines for designing research (5,7,17,21,24); I use reporting guidelines for writing (5,7,17,19,21); I use reporting guidelines for checking my own or other people’s writing (7,24); I use reporting guidelines to appraise the quality of other people’s reporting (18); I use reporting guidelines for peer reviewing (7); I want items presented in the order in which I must do them (5,46,48); I want design or methods advice (7,19,20); I want templates for writing (17); I want checklists that are easy to fill in (16,22); I want checklists embedded into journal submission workflows (17); I want items embedded into data collection tools (25) | Researchers use reporting guidelines for different tasks; I want guidance presented in formats that are better suited to the task I am doing | Researchers report using reporting guidelines for different tasks and wanting guidance to be delivered in ways that better fit their needs |
| Guidelines take time to read, understand, and apply (8,25,27); Some items require extra work which takes time and effort (6,19,47); I want an indication of which items to prioritize (19,21); Perceived complexity (16,17,19,27); Long guidelines are off-putting (7,22,25,26); Itemization helps me navigate guidance (20); Itemization summarizes the guidance (17); Itemization makes guidance appear longer (20); Itemization blocks the bigger picture (19); Following reporting guidance can result in long, bloated articles (19,21,25,26); Long, bloated articles may exceed journal word limits (8,16,21,26); I want options for where to report this item (5–8,19,20,26); The benefits of using a reporting guideline may not outweigh the costs (7,8,26); Guidelines are more valuable when used early (7,17,19,22) | Guidelines take time; Itemization may decrease costs; Itemization may increase perceived costs; I think guidelines make my manuscripts long and bloated; The benefits of using a reporting guideline may not outweigh the costs; The balance of benefits vs costs may be more favourable when guidelines are used early | Using reporting guidelines has costs, and researchers may not feel that benefits outweigh the costs |
| I would clarify this item (20,21); I would move this item (6,19); I would split this item into two (5,19,20,46); I would add or remove items from this guideline (5,18–21); I would add or remove requirements from this item (5,7,20,21,28); Guidelines can become out of date (19); Guidelines need to be updated (20) | I think the guidance could be improved; Guidelines need to be kept updated | Reporting guidelines may need to be revised and updated for different reasons |
| I cannot report this because I didn’t do it (19–21,26); I cannot report this because of intellectual property issues (26); I cannot report this because it clashes with journal guidelines (20); I cannot report this because data was missing from my primary studies (25); Editors, reviewers or co-authors asked me to remove this item (21,47); I feel uncertain because I don’t know how to say that I didn’t do it (20); I feel worried that I will be judged for transparently reporting something I didn’t do (7,20) | I feel unable to report this; I feel nervous or uncertain if I am unable to report an item | Researchers may not be able to report all items, which can leave them feeling uncertain or worried |
| I may not know that reporting guidelines exist, or what guidance exists (8,17,18,22,27); I may not be able to easily access guidance (22,27) | I can only use what I know about and have | Awareness and accessibility may limit reporting guideline usage |
| Reporting guidelines may be less valuable to experienced researchers (7,17,26); Experienced researchers feel that they already know how to report (7,17,19); Experienced researchers find guidance patronizing and feel untrusted (8,16,20,26); Reporting guidelines can be hard to use at first but get easier with experience (8,19,27) | Reporting guidelines are more valuable to inexperienced researchers; Reporting guidelines can be hard to use at first but get easier with experience | Reporting guidelines may be more useful to less experienced researchers, but less experienced researchers may find them harder to use |
| I want design or methodological advice (7,16,20); I don’t know how to do this item (19–21); Guidelines are procedural straightjackets (7); This guideline is too prescriptive (7,20,28) | I want or need design advice; I think this guidance prescribes how research should be designed | Researchers want or need design advice, but reporting guidelines may not be the right place to find it |
| The guideline’s applicability criteria are not clear (17,18,22); This guideline isn’t a perfect fit for me (22); This guideline doesn’t generalise (7,16,17,20,28); This guideline is too prescriptive (7,20,28); I don’t want to see optional items that only apply to other types of study (21,22) | A guideline’s scope can be unclear; A guideline can be too narrow; A guideline’s scope can be too broad | Reporting guidelines can be harder to use if their scope is too broad, too narrow, or poorly defined |
| I need to adhere to journal guidelines or other research guidelines (8,17,20,21); I might need to use multiple reporting guidelines (7); I want reporting guidelines to be linked or embedded (18,20); I want reporting guidelines to use similar structure (20); I want reporting guidelines to use similar terms (20) | Authors often need to adhere to multiple sets of guidance; I want guidelines to harmonize | Researchers may have to use multiple sets of reporting guidelines, multiplying complexity and costs |
| I don’t like checklists (7,17,22,26); I may use the checklist instead of the full guidance (5); I may use the checklist before I read the full guidance (5) | I experience reporting guidelines primarily as, or through, checklists | Researchers may use checklists but never read the full guidance |
We identified that ‘barriers’ and ‘facilitators’ were not consistent experiences: what is a barrier for one person may be a facilitator for another, or in a different context, and so we refrained from labelling our analytic themes as one or the other.
1) Researchers may not understand guidance as intended or what reporting guidelines are, even if they think they do
Researchers commonly stated that they need more information to fully understand the intention of the guideline developer. When asked about the clarity of guidance, researchers across many studies reported difficulty in understanding certain terms, concepts, or checklist items:
’outcome’: “Does it mean the domain, or does it mean the domain measure, metric, method of aggregation, and time?”(20).
“Primary and secondary improvement related question is confusing, what does that mean?… We had a hard time with the [difference between the] improvement question and the study question.”(19)
A few researchers reported ignoring an item if they could not understand it:
“Only one item was identified as hard to understand by more than one respondent: ‘methods employed to ensure completeness of data’, which two participants said they left out because of difficulty in comprehending the item”(6)
Some researchers reported feeling that reporting guidelines were “simply not comprehensible”(19). Others reported that they had understood, but further investigation revealed that their interpretations could be “different from that intended by the developers”(6). For example, Davies et al. (6) found that one SQUIRE item “was reported as used fully [in the quantitative survey], but the qualitative analysis revealed that its usage was frequently inconsistent with the intention of the developers”. One reason may be that researchers interpret guidance differently depending on their prior experience, the research context, or ambiguity in the guidance itself. For example, the SQUIRE developers found that the word ‘theory’ “meant different things to different people. For some, the word ‘theory’ meant ‘mechanism by which an intervention was expected to work’, for others it meant ‘lean or six sigma for example’, and for still others it meant ‘logic model’”(19).
Even when researchers reported understanding what an item meant, they may not have understood why it is important or who it is important to, leading them to remark that an item “seems unnecessary”(5). Few researchers referenced the needs of evidence synthesizers or patients as consumers of research, but more reported considering whether an item would be useful to other researchers, editors, and reviewers:
“the information provided does not matter as the reviewers do not know what to do with it”(7)
In addition to not understanding guidance or who it is important to, many researchers expressed difficulty understanding whether an item was applicable to their work. Some reporting guidelines specify that not all items are compulsory or that some items may only apply to a subset of research articles. Researchers highlighted that this may not be obvious, especially if this nuance is buried in a long elaboration document. Some researchers therefore reported uncertainty over which items applied to them:
“Authors asked for clarification of which items were always required and which were nonessential”(21)
“Not always clear what was relevant to their study”(5)
“He had realised with experience and re-reading the Guidelines that SQUIRE did not require him to include every item in the manuscript”.(19)
This uncertainty may extend to the entire reporting guideline if researchers don’t know when to use one over another. One researcher declared that “PRISMA guidelines can also be used rather than the MOOSE”(22), when the two are primarily for reviews of intervention studies and observational studies respectively. Sometimes there may not be a perfect reporting guideline for a given study, as one researcher commented after using ARRIVE (which focusses on experimental research involving laboratory animals):
“Our report was an animal based cadaveric study looking at accuracy of drill guides. We were unsure which category it should fall under.”(5)
Even if a researcher understands the guidance, why it is important, and why it applies to them, they may not understand how to report it or “how much detail to report”(5). Some researchers “used examples [included in the guidance] to understand what should be reported” because they “demonstrate what is meant in practice”(5).
At a more fundamental level, researchers varied in their understanding of what reporting guidelines are. Often researchers would talk about reporting guidelines as if they were design guidelines, e.g., describing STROBE as “woefully deficient in encouraging…use of appropriate data analytic approaches”(7). This suggested that the researcher had not noticed the stipulation that “these recommendations are not prescriptions for designing or conducting studies” included in STROBE’s explanation and elaboration document(23). Other researchers wrote about STARD as if the guidance was to be used when collecting imaging data:
“Two comments suggested that reporting quality may be impacted by the physical environment in which […] data are collected. These comments may indicate an incomplete understanding of reporting guidelines which pertain to reporting results during manuscript writing, not the process of imaging acquisition itself.”(24)
2) Researchers report a variety of reasons for using reporting guidelines, and that some are more important than others
Some researchers listed personal benefits to using reporting guidelines. Some described reporting guidelines as a “training tool”(25) for personal development, noting that guidance helps “develop a strong foundation and habits”(7). Some talked about how guidance made them feel: “As a junior scientist it gives me confidence to request the reporting of a certain piece of information”(7). Others said that reporting guidelines are “a helpful reminder”(25) and that going through the checklist “improved their manuscripts”(26). Some saw value in fostering a “transparent reporting process”(17) and for making sure your “project is written up as rigorously”(19).
A couple of researchers noted altruistic benefits, reporting that widespread reporting guideline adherence “helps in standardizing how research is reported”(7) and calling “for more scientific reports to be published, preferably using a template or guideline to make them comparable”(27).
In the absence of anticipated benefits, some researchers said that they use reporting guidelines simply because “it was what was implicitly expected of them to do”(8), and that these expectations came from journals and their peers. Some used “tools promoted by journals, which often promised to ease the publishing process”(27) but others wrote that they found this to be an empty promise:
“I have never had (nor have I heard of) an editor or reviewer pushing back on a claim that all STROBE criteria were met. Therefore, when a STROBE checklist is required for manuscript submission, it seems to turn into a[n] exercise in additional administrative busywork without really improving the research.”(7)
A few researchers reported being more likely to comply with journal requirements if they thought the journal was likely to enforce them: “Does the journal only suggest or actually require submission of a reporting guideline checklist?”(8). Some said they were more likely to comply if “it was a high impact factor journal and I thought that I would only get one crack at it”(8).
A few researchers compared different motivations for using reporting guidance, noting that personal, guaranteed, and immediate benefits were more motivating than hypothetical benefits or benefits to others:
“I suppose you are looking for short-term gain, short-term benefits as a writer of a report”(27)
“it can be difficult to put the energy into using STROBE (or any other) one a priori since ultimately, it depends on the journal submitted to and accepted to”(7)
“All the researchers wanted more homogenous reporting but emphasized that: “As an individual reporter, one is prone to choose the easiest and most accessible one.””(27)
3) Researchers report using reporting guidelines for different tasks and wanting guidance to be delivered in ways that better fit their needs
Although reporting guidelines were designed to help researchers draft and check their manuscripts, many researchers mentioned using reporting guidelines for other tasks including designing research, planning, peer-reviewing, and educating. A few researchers suggested different ways that the guidance could be delivered to make their task easier. For example, some “thought [the] order of items should reflect [the] order considered when designing the experiment”(5). Others wanted “a manuscript template”(17) to make writing easier. Some suggested that “online form[s]”(16) or software to “mark in the text what corresponds to each item in the list”(22) would make it easier to complete a reporting checklist as part of journal submission.
4) Using reporting guidelines has costs which researchers may feel outweigh benefits
Researchers noted that some items require extra work, either to collect the necessary information or just to think about and report, and that sometimes this workload felt overly burdensome:
“If we put the onus on everybody out there who’s trying to improve care to deal with that sophisticated question […], I just think we are putting a barrier in place that is going to be a mountain”(6)
This work requires time, and “the length of time it would take to consider the items”(25) was cited by many researchers as a cost, with some asking themselves whether “sufficient time [was] available to comply with [the] reporting guideline”(8).
Researchers noted that a reporting guideline’s “length and content is a key factor influencing the time needed to complete it”(7). Some found checklists to be “very complete, but to follow every single point is overwhelming”(26). As a solution, many wanted to “simplify” or “shorten the checklists”(22). A few researchers wanted a “hierarchy” to know which items were most “important to include”(6). Another suggested that checklists presented as online forms could include “logic for irrelevant [items]”(16) so that users are presented only with the items that apply to them.
Complexity was sometimes mentioned alongside time: “As the research often was performed out of work hours, the required time and complexity of the guidelines or templates may have played a crucial role [in deciding whether to use a reporting guideline]”(27). Researchers had conflicting opinions about whether itemization reduces perceived complexity. Proponents noted that the “Checklist is a very helpful summary of sometimes confusing guidelines”(17) and that itemization made guidance “easier to follow” and “more approachable”(20). But a few said that presenting guidance in small pieces made it difficult to “get the whole picture of what you are supposed to be doing”(19) and that itemization makes “the checklist appear more daunting for users: “If you make the checklist too long people will see it as too complicated and then won’t use it””(20).
Another concern cited by many researchers was that following a reporting guideline can result in long reports:
“We use SQUIRE a lot for planning—we complete the sections up through the methods at the time we design the study…[but] SQUIRE creates sort of long reports if followed exactly.”(19)
“the document you create if you use SQUIRE exactly as written is unintelligible”(19)
“this [item] would require another paper”(26)
This problem was exacerbated by journal word limits:
“We believe it is a useful instrument but it is unrealistic to assume that every single suggestion can be detailed in a 6000-words manuscript.”(26)
“two remarked that word limitations has necessitated removal of many items”(21)
Although a handful of researchers noted that “the relaxation of word limits”(8) would help, many researchers objected to long articles because they were bloated, harder to read, or simply “unintelligible”(19) and requested strategies to “enhance readability” regardless of journal policies. Some wondered where they could place this information besides the article body, such as “in an appendix”, an “online supplement or repository”, or a figure(20). Some researchers preferred to report information in the checklist instead of the article body because of “space restrictions, because [it was] a minor component of the study, because they considered the information to be obvious, or because they were unsure of how to incorporate it in the manuscript.”(5). Some used this strategy to report items that had “not been used or observed during the study, for example that no inclusion or exclusion criteria had been set, no data had been excluded, randomisation and blinding had not been used…”(5) although it was not clear whether this was motivated by a desire for a concise article or a concern about highlighting potential weaknesses.
Faced with the costs of time, work and article length, some researchers explicitly weighed perceived benefits against costs and disagreed about the balance:
“The manuscript has improved. However, we felt that the amount of effort was considerably greater than the degree of improvement.”(26)
“it also adds to the time required to put together a manuscript, and I am not sure how much it improves the chances of a manuscript being published”(7)
“it does increase the quality of the articles, it is clearly worth the time”(7)
The balance of costs versus benefits may be most favourable when guidance is used early in the research workflow. Researchers who used reporting guidelines earlier in their workflow (e.g., for planning research or drafting) used language implying that this was something they did regularly (e.g., “We use SQUIRE a lot for planning”(19)). Some reported that they had arrived at this habit on their own initiative and that reporting guideline developers should “encourage people to use the criteria early in the writing process (I have, which probably is why I only changed one thing [at the point of submission])”(22). One researcher suggested that “policy that focuses on a front end approach would be helpful”(7), noting that “To fully apply the criteria, I would need to systematically apply the STROBE criteria on the front end design of a project, grant, etc. rather than at the time of writing a project”(7).
Conversely, many authors who completed a checklist during manuscript submission, very late in their workflow, emphasised the costs, using words like “arduous”(7) and expressing negative opinions of this process (see Researchers may use checklists but never read the full guidance). This may be because researchers lack the motivation, time, or ability to edit their manuscripts at this point.
5) Reporting guidelines may need to be revised and updated for different reasons
Researchers in most studies had opinions on how guidance could be improved through clarifying, reorganising, splitting, merging, adding, or deleting items, and sometimes these views fed into the revision of reporting guidelines(5, 6, 20). This feedback may be useful for reporting guideline developers. Even if a reporting guideline was considered perfect at one point in time, researchers noted that guidance must be kept up to date in response to changes in the field and broader scientific ecosystem:
“The evolution of the healthcare improvement scholarly literature in the intervening years since the publication of the SQUIRE Guidelines has led to the development of concepts that were not fully anticipated at the time of initial release”(19).
Updates to one reporting guideline may necessitate the update of another. For instance, as PRISMA was being updated, a few researchers “supported referring to PRISMA for Abstracts, but suggested it also needs updating” to reflect updates being made to PRISMA(20).
6) Researchers may not be able to report all items, which can leave them feeling uncertain or worried
Some researchers described being unable to report items because of external factors, including intellectual property or data rules, disagreement between co-authors, or because “peer reviewers or editors had suggested editing out much of their [reporting guideline]-specific text”(21). Others reported feeling unable to report an item because they did not do it, whether on purpose, due to an oversight, or because requirements had changed since the study began:
“[This item was] not part of the study objectives”(26)
“This [item] is a good idea, but we did not do this.”(26)
“The RCT was initiated before trial registration became customary in Norway, and therefore does not have a Trial ID number.”(26)
This left some researchers fearing that “an ‘incomplete’ checklist [gave] the impression that their study is ‘less than perfect’”(7). Some expressed concern that strict wording that assumed something was done may “force people to lie/mislead by asking a question they cannot answer”(20) and suggested that guidance should instead use more agnostic language and specify what to do if an item were not addressed, such as “If no publicly accessible protocol is available, please state this”(20).
7) Awareness and accessibility may limit reporting guideline usage
Researchers may not know what guidance exists and may be more likely to use whatever is most accessible and discoverable:
“Several of the researchers did not have extensive knowledge about the different reporting tools, so the accessibility of the guideline or template was often a decisive factor.”(27)
One researcher wrote that “poor dissemination strategy by authors of reporting guidelines had inhibited uptake”(8), and others recognised that reporting guidelines could be “better highlighted”(17) by journals or advertised on “social media platforms”(22).
8) Reporting guidelines may be more useful to less experienced researchers, but these researchers may find them harder to use
Some researchers reported that they did not need the guidance, as they were experienced enough to know what they were doing:
“One of the most prevalent themes was the expression of self-assuredness. ‘[I] follow the STROBE guidelines in my reporting reasonably well without actually referring to them or using a checklist’ (group 3, ID1) and ‘[I] already apply the STROBE recommendations despite not having heard of it until today’”(7)
Sometimes this was accompanied by an acknowledgement that reporting guidance may be more beneficial to less experienced researchers:
“Despite experienced researchers generally not seeing a benefit to personally using STROBE, there were strong feelings that it is valuable to early-career researchers”(7)
“Helpful at beginning of career, but not at later stage”(17)
“this exercise might be good for college students but is insulting for professionals”(26)
However, less experienced researchers often reported “reporting guidelines being difficult to use initially”(8), or that reporting guidelines became easier with experience, both in medical writing generally and in using other reporting guidelines. For instance, “Participants with less experience in scholarly medical writing found the SQUIRE Guidelines harder”(19).
9) Researchers want or need design advice, but reporting guidelines may not be the right place for it
Many researchers reported wanting advice on design choices but disagreed on where that design guidance should go. Some suggested pointing researchers to other design resources through hyperlinks or citations. Others explicitly wanted design guidance written into reporting guidelines so that others would read it. Some went so far as to call for reporting guidelines to express an opinion and encourage one technique over another. One researcher objected to a “neutral tone”(20) in a reporting guideline that might give the impression that a design choice (one they disapproved of) was reasonable practice.
However, other researchers objected to reporting guidelines that were opinionated about design choices. One user described STROBE as a “procedural straightjacket”(7), suggesting that it dictates how studies should be conducted. Users who encounter the guidance late in writing may be unable to act on any design recommendations and consequently may feel fearful of reporting transparently if their design choices deviate from what the guideline recommends as best practice (see Researchers may not be able to report all items, which can leave them feeling uncertain or worried).
Perhaps with these concerns in mind, one wrote that “we need to make sure that the language around this elaboration gives [researchers] some flexibility”(20), with another noting that “I am OK with the idea of emphasizing the value of [this design choice], but we cannot mandate it”(20).
10) Reporting guidelines can be harder to use if their scope is too broad, too narrow, or poorly defined
Reporting guideline developers may narrow the scope of their guidance by limiting it to certain design choices or research contexts. This frustrated some researchers, who noted that narrow “checklists cannot fit all types of research”(17) and “cautioned that ‘that balance between freedom and structure is important to consider’ […] and that it is ‘important to recognise that each study/analysis is unique and doesn’t always fit with the recommendations’”(7).
This narrowing of scope may have been a conscious decision, as requested by this researcher giving feedback on a proposed update of the PRISMA guideline: “We need an agreement on whether PRISMA is to be only for intervention studies (as implied by the proposed modification above) or more general.”(20) However, scope may not always be clearly communicated: another PRISMA user “opined that ‘the assessment of risk of bias, statement of risk ratio and explaining additional analyses depend on the study design … [For] a systematic review of cross-sectional surveys or a meta-synthesis I do not need this information’”(28), suggesting they were unaware of PRISMA’s focus on interventional studies or that MOOSE and ENTREQ would be more appropriate for these kinds of studies (see previous themes for further discussion of awareness and understanding the applicability of reporting guidelines).
Researchers noted that scope could be made broader by removing items or, more commonly, by extending items with more options and examples:
“omit “(benefits or harms)” from the checklist item to be more inclusive of reviews that do not examine effects of interventions”(20)
“If the new PRISMA will more explicitly embrace topics other than interventions (which I think it should), then some additional examples could be added to the parenthesis (e.g. sensitivity and specificity, disease prevalence, regression coefficient)”(20)
However, extending guidance with options can make the guidance appear longer and means researchers must work out which parts apply to them.
11) Researchers may have to use multiple sets of reporting guidelines, multiplying complexity and costs
There are now over 500 reporting guidelines indexed on the EQUATOR Network website, with more added each year as reporting guideline developers seek to cover more and more use cases. Researchers may be expected to use one reporting guideline instead of another; at other times they may be expected to use a second or third reporting guideline in addition to the original one. Some researchers “pointed out that these extensions have created needless complexity and ‘additional confusion in reporting of observational studies’ […] and that the ‘number of extensions has become excessive, especially given that multiple extensions may apply to a single study’”(7).
One researcher wrote: “it would be good to have better connection between different checklists (perhaps using digital linking, decision-trees, etc.)”(20). Some expressed concern that hyperlinks would go unused and that developers should therefore “incorporate all relevant details in the […] checklist and elaboration (in case authors don’t read the extension)”(20). When writing about PRISMA, one researcher noted that “it would be wise to limit the number of additional documents to look up. This is only item 7, and I have already been referred to PRISMA for Abstracts and PRISMA for Searches. As a systematic review author, reviewer, or editor, I would be unlikely to go to several sources for reporting guidance”(20).
A few researchers wrote that related reporting guidelines should be updated in tandem, so that they remain in sync, before being linked or embedded. Researchers wanted the instructions, terminology, and structure of different sets of reporting guidelines to be coherent, suggesting, for example, that the updated PRISMA should be structured to be “in line with PRISMA-P”(20).
In addition to reconciling multiple reporting guidelines, researchers must also comply with journal, funder, and other scientific guidelines and expressed frustration when instructions contradicted each other. For example, some reporting guidelines specify subheadings for abstracts and one researcher pointed out that a “major issue is that journals wildly differ in requirements/what is allowed in abstracts”(20).
12) Researchers may use checklists but never read the full guidance
Reporting guidelines typically consist of the guidance itself and a checklist that serves as a summary of the guidance and a tool to demonstrate compliance. Sometimes the document containing the full guidance is called the Explanation and Elaboration (or E&E for short). When talking about a reporting guideline, it was often unclear whether the researcher was talking about the checklist or the E&E.
Some researchers implied that their only experience with reporting guidelines was completing a checklist as part of submission. We noticed that many negative statements were directed specifically at this process, describing checklists as “painful”(17), “pedantic”, “annoying”(7), or a “stupid exercise”(26).
One study explored researchers’ use of checklists and E&E documents, noting that “Participants used the guidelines and the E&E in different ways. Some did not read the E&E and used only the checklist, others read the E&E first and then used the checklist and a further group used the checklist and referred to the E&E for help with specific items.”(5). One researcher even went as far as to say that the “E&E appeared to be redundant”(5).
If some researchers only use checklists, which typically lack any nuance included in the E&E, this may explain why some described reporting guidance as inflexible and prescriptive, warning that “Blind checklists are not relevant to most work”(22) or that “Authors may ‘fear the “Checklist Manifesto” becoming a rigid bureaucracy, and also becoming contrived’”(7).