In 1731, the Royal Society of Edinburgh established peer review as a necessary step in the communication of science. Since then, it has been considered the sacred cow and safeguard of scientific publication. However, a concern echoed throughout the research community is that traditional peer review is a “black box”: an opaque process with unpredictable procedures that impedes dissemination and is “based on faith in its effects, rather than on its facts” (1, 2). A growing body of research indicates that the current peer review system is ineffective at improving the quality of manuscripts and decreases the amount of research that is published.
Before peer review occurs, editors reject 30-65% of submitted research (3, 4). If peer review proceeds after submission, the review process can take months to years to complete, ranging from 14 weeks to one year and extending to three years in some instances (2, 5, 6). Reviewers and editors may hold biases that impair publication, including: positive outcome bias against studies reporting statistically non-significant or negative outcomes in favor of those reporting statistically significant and positive outcomes (7, 8); status bias against authors from less prestigious institutions or with less renown (9, 10); location bias favoring submissions from the reviewer’s home country (11, 12); confirmatory bias against results that contradict the reviewer’s theoretical perspective (7); and intervention bias against studies testing unconventional treatments (13).
Studies examining the peer review process demonstrate that reviewers fail to detect 60-75% of errors and that the process is not a reliable method for detecting fraud (14-16). Reviews are generally subjective and unstructured, with no guidelines to standardize how reviewers evaluate research and make recommendations about whether it should be published (2, 17). Consequently, inter-rater agreement among editors about the quality of a single research manuscript can be poor (18). Hence, even when peer reviewers agree that a manuscript is sound enough to be published, researchers cannot be confident that editors will accept it, highlighting the unpredictable nature of the peer review process. After peer review and acceptance by a journal, the manuscript is typeset, which improves the clarity of the content (19). Optimizing the readability of a manuscript could be considered an outcome of peer review. However, it is a copyediting procedure external to the review itself, and it does not guarantee that the scientific quality of the manuscript will be improved post-peer review (20).
The failure of the current peer review system is further evidenced by statements from publication industry leaders. Dr. Richard Smith, longtime editor of the British Medical Journal and an outspoken critic of peer review, stated (17):
“After 30 years of practicing peer review and 15 years of studying it experimentally, I’m unconvinced of its value. Its downside is much more obvious to me than its upside, and the evidence we have on peer review tends to support that jaundiced view. Yet peer review remains sacred, worshiped by scientists and central to the processes of science… Peer review will become the job of the many rather than the few, and we know that the many can solve problems better than the few… [authors] favor moving to a system of ‘publish and let the world decide,’ preferably with systems of reputation and article metrics.”
Similarly, Vitek Tracz, founder of BioMed Central, F1000Research, and the Current Opinion journals, and an innovator working to reform the peer review process, stated (21):
“Scientific publishing is unbelievable and ridiculous…that science information should become available to people who want to know it with a year delay…peer review has become mortally sick, and the reason for the sickness is secrecy…it just begs for misuse.”
Researchers echo these sentiments. For example, a survey of 637 researchers demonstrated that 61% believed the review process was flawed and should be altered (5). These statistics and sentiments suggest that, at best, peer review may provide value to researchers by improving the readability of manuscripts. However, its function as a watchdog is lacking, and its current structure may do more harm than good.
A variety of entities involved in the research process affect publication rates. Authors, funding agencies, institutions, pharmaceutical and device manufacturers, and other trial sponsoring groups may choose not to disseminate research, especially if results are unfavorable (22). However, within the competitive research environment, publications are a researcher’s currency; they demonstrate expertise, imbue prestige, and are used to advance careers and acquire funding (23). Publications serve as a proxy for the value and impact of researchers: those with more publications, with papers in higher-impact-factor journals, and with papers cited more often than the average for the journal in which they appear are perceived as more influential, more renowned, and more likely to serve as principal investigators (23-28).
With publications acting as a career-driving mechanism for researchers to demonstrate impact, it is unfavorable for researchers to suppress results. Hence, other factors must be considered, the most explicit being the peer review process and its inherent biases, which act as a bottleneck on publication. When research results are statistically significant and positive (that is, when they support the experimental hypothesis over the null hypothesis), manuscripts are more likely to be accepted, published, and cited than when results are statistically nonsignificant or negative. For example, compared to studies demonstrating statistically nonsignificant or negative results, studies with positive or statistically significant results were more likely to be accepted by high-profile journals (29); had 2.2-4.7 times higher odds (30) and a 13-18% greater chance of being published (31); were published faster, in 4-5 years versus 6-8 years (32); and were cited 1.6-2.7 times more often (33, 34).
If the decision to publish research is based on the significance and perceived impact of the results, rather than on the methodological quality of the research, a large quantity of research is never published. Examinations comparing the amounts of published versus unpublished literature demonstrate that out of 807 clinical trials, 52% were unpublished as of 2012 (35); out of 594 randomized controlled trials with results posted on ClinicalTrials.gov, 58% were unpublished in corresponding journals (36); less than half of NIH-funded trials were published within 30 months of a study’s completion, and after a median of 51 months, one-third remained unpublished (37); out of 13,000 clinical trials, only 38.3% reported results (38); and only 63% of randomized controlled clinical trial results presented as conference abstracts after a trial’s completion were subsequently disseminated in a full publication (31).
The peer review process cannot be remediated on its own because it exists within a larger publishing ecosystem, from conducting to disseminating to receiving recognition for performing research. To remediate peer review and optimize how science is disseminated and evaluated, the dynamics of the publication process as a whole must be reimagined.
The word “publication” connotes the act of publishing a book, periodical, paper, etc. The root, public, means “done, perceived, or existing in open view,” and the suffix, -ation, denotes “an action or process.” Hence, publication signifies “the action or process of making public,” and any process by which ideas are made public can be considered a “publication.” The scientific publication process is simply a procedure that links researchers to the ideas and results presented in research manuscripts, certifying that they performed a study, discovered the results, and shared the material with the community at a point in time. Thus, the importance of the word publication lies in the symbolism of the action by which an entity is made public.
In the Web 2.0 digital age, researchers can publish their work online in preprint repositories before, or in parallel with, submitting their research to a journal. Preprint repositories allow drafts of manuscripts to be published online for early communication and engagement with the scientific community. These repositories give researchers the capacity to claim their ideas by date- and time-stamping each preprint with a permanent digital object identifier (DOI) that certifies when and where the preprint was publicly posted. The DOI establishes precedence and protects work from being scooped, allowing researchers to immediately claim rights to their content (39). Research disseminated as a preprint is made public and can therefore be considered a publication.
Peer review does not occur before preprints are posted online, making preprint publication an unbiased mechanism for sharing results. Through this publication model, the peer review process cannot filter research before it is openly shared, helping to ensure that the global quantity of disseminated knowledge is not impeded. The status of the manuscript as a preprint is declared, indicating that peer review has not occurred and that the research should be interpreted cautiously before expert evaluation is performed.
In parallel with, or after, posting a preprint online, research can be submitted to a journal for consideration to be published as a post-print: a finalized manuscript version that has been peer reviewed. At this stage of the publishing process, the peer review process would benefit from the following reforms:
- Peer review needs to be standardized so that reviewers evaluate objective, structured components of the research, reducing the odds that highly subjective and biased opinions sway editors toward rejecting the work.
- Peer review needs to be confined to a short timeline, with a capped number of revision rounds and requests for additional experiments, to minimize the lengthy delays that prevent research from being published as a post-print.
- Reviews need to be made openly accessible to the public, to hold reviewers ethically and scientifically accountable while disincentivizing potential biases against the research or the authors. By making peer reviews openly accessible, the scientific community can read an expert’s evaluation and critique, gain important insights about the strengths and limitations of the study, and be stimulated to openly discuss the research, which can facilitate a greater understanding of the work.
- Manuscripts should be peer reviewed and evaluated as preprints. While this may seem to be only a minor change in semantics, “preprint peer review” (3PR) clearly communicates that the content in the manuscript is under discussion between the authors and the reviewers, demonstrating to the community that the content has not been finalized in a post-publication format.
At Research Square, we have built the infrastructure to help shift the dynamics of the scientific publication process as a whole and have taken steps to reform the peer review process. Our preprint platform allows researchers to publish their work as a preprint while certifiably claiming rights to their ideas through a unique DOI. Researchers can submit manuscripts directly to Research Square or can opt to have a manuscript posted as a preprint while it is under consideration at one of the hundreds of affiliate journals offering our In Review service. Multiple versions of a manuscript can be uploaded, each receiving a new DOI, creating a living document trail that allows the scientific community to see how the manuscript has evolved. The platform integrates several interactive services for fellow researchers and the public to connect with the content, including a Twitter interface that collates tweets related to a preprint; annotation through Hypothesis; a customized commenting application for discussing and evaluating the content; and editorial badges, awarded by Research Square’s PhD-level editorial staff, that signify rigor in the reporting of methods and statistics.
The infrastructure and services offered at Research Square reimagine the science publication process and positively affect the larger scientific ecosystem. Environments impact behavior. The publication system is an environment: researchers must operate within it, are subject to its rules, and have little ability or incentive to change their habits unless the environment is altered and perceived in a new light. The first step needed to catalyze change in the publication process is helping researchers understand that disseminating research as a preprint is a form of publication. The second is creating the systems that functionalize preprint publication while integrating peer review into this new paradigm. These actions stand to remediate long-recognized defects in the peer review process, because it will no longer function as a bottleneck to research dissemination, and it will cease to be an opaque and mysterious operation with unknowable inputs and unpredictable outcomes. Ultimately, Research Square levels the playing field for all researchers to share their work, giving them an incentive to operate within a new environment and making research communication faster, fairer, and more useful.
Christopher Baur is an Editorial Quality Advisor at Research Square.
1. Jefferson T, Alderson P, Wager E, Davidoff F. Effects of editorial peer review: a systematic review. JAMA. 2002;287(21):2784-2786.
2. Smith R. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine. 2006;99(4):178-182.
3. Meyer HS, Durning SJ, Sklar DP, Maggio LA. Making the First Cut: An Analysis of Academic Medicine Editors' Reasons for Not Sending Manuscripts Out for External Peer Review. Academic medicine : journal of the Association of American Medical Colleges. 2018;93(3):464-470.
4. 5 ways you can ensure your manuscript avoids the desk reject pile: Looking at your submission through the eyes of a journal editor. Elsevier. https://www.elsevier.com/authors-update/story/publishing-tips/5-ways-you-can-ensure-your-manuscript-avoids-the-desk-reject-pile. Published April 10, 2015. Accessed September 20, 2018.
5. Nguyen VM, Haddaway NR, Gutowsky LF, et al. How long is too long in contemporary peer review? Perspectives from authors publishing in conservation biology journals. PloS one. 2015;10(8):e0132557.
6. Shah A, Sherighar SG, Bhat A. Publication speed and advanced online publication: Are biomedical Indian journals slow? Perspect Clin Res. 2016;7(1):40-44.
7. Mahoney MJ. Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy and Research. 1977;1(2):161-175.
8. Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the presence of positive-outcome bias in peer review: a randomized controlled trial. Archives of internal medicine. 2010;170(21):1934-1939.
9. Peters DP, Ceci SJ. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences. 1982;5(2):187-195.
10. Tomkins A, Zhang M, Heavlin WD. Reviewer bias in single- versus double-blind peer review. Proceedings of the National Academy of Sciences of the United States of America. 2017;114(48):12708-12713.
11. Link AM. US and non-US submissions: an analysis of reviewer bias. JAMA. 1998;280(3):246-247.
12. Nieminen P, Isohanni M. Bias against European journals in medical publication Databases. Lancet. 1999;353(9164):1592.
13. Resch KI, Ernst E, Garrow J. A randomized controlled study of reviewer bias against an unconventional therapy. Journal of the Royal Society of Medicine. 2000;93(4):164-167.
14. Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008;101(10):507-514.
15. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA. 1998;280(3):237-240.
16. Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Annals of emergency medicine. 1998;32(3 Pt 1):310-317.
17. Smith R. In Search Of an Optimal Peer Review System. Journal of Participatory Medicine. 2009;1.
18. Ernst E, Saradeth T, Resch KL. Drawbacks of peer review. Nature. 1993;363(6427):296.
19. Roberts JC, Fletcher RH, Fletcher SW. Effects of peer review and editing on the readability of articles published in Annals of Internal Medicine. JAMA. 1994;272(2):119-121.
20. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. The Cochrane database of systematic reviews. 2007(2):MR000016.
21. Verma H. F1000’s Vitek Tracz: Redefining Scientific Communication. Library Journal. http://reviews.libraryjournal.com/2015/03/reference/f1000s-vitek-tracz-redefining-scientific-communication/. Published 2015.
22. Bassler D, Mueller KF, Briel M, et al. Bias in dissemination of clinical research findings: structured OPEN framework of what, who and why, based on literature review and expert consensus. BMJ open. 2016;6(1):e010024.
23. van Dijk D, Manor O, Carey LB. Publication metrics and success on the academic job market. Current biology : CB. 2014;24(11):R516-517.
24. Alberts B. Impact factor distortions. Science. 2013;340(6134):787.
25. Shanta A, Pradhan AS, Sharma SD. Impact factor of a scientific journal: Is it a measure of quality of research? J Med Phys. 2013;38(4):155-157.
26. Sahel JA. Quality versus quantity: assessing individual research performance. Sci Transl Med. 2011;3(84):84cm13.
27. Carpenter CR, Cone DC, Sarli CC. Using publication metrics to highlight academic productivity and research impact. Academic emergency medicine : official journal of the Society for Academic Emergency Medicine. 2014;21(10):1160-1172.
28. Peters DP, Ceci SJ. Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences. 1982;5(2):187-195.
29. Murtaugh PA. Journal quality, effect size, and publication bias in meta-analysis. Ecology. 2002;83(4):1162-1166.
30. Dwan K, Altman DG, Arnaiz JA, et al. Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias. PloS one. 2008;3(8):e3081.
31. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews. 2005(3).
32. Hopewell S, Clarke MJ, Stewart L, Tierney J. Time to publication for results of clinical trials. Cochrane Database of Systematic Reviews. 2005(2).
33. Duyx B, Urlings MJE, Swaen GMH, Bouter LM, Zeegers MP. Scientific citations favor positive results: a systematic review and meta-analysis. Journal of clinical epidemiology. 2017;88:92-101.
34. Jannot AS, Agoritsas T, Gayet-Ageron A, Perneger TV. Citation bias favoring statistically significant studies was present in medical research. Journal of clinical epidemiology. 2013;66(3):296-301.
35. Blumle A, Meerpohl JJ, Schumacher M, von Elm E. Fate of clinical research studies after ethical approval--follow-up of study protocols until publication. PloS one. 2014;9(2):e87184.
36. Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS medicine. 2013;10(12):e1001566; discussion e1001566.
37. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012;344:d7292.
38. Hallinan ZP, Getz KA, Bierer BE. Compliance with results reporting at ClinicalTrials.gov. The New England journal of medicine. 2015;372(24):2370.
39. Desjardins-Proulx P, White EP, Adamson JJ, Ram K, Poisot T, Gravel D. The case for open preprints in biology. PLoS biology. 2013;11(5):e1001563.