What is publication bias?
Publication bias is introduced into the literature when research results are published based on the strength or direction of their outcomes rather than the soundness of the methods used to conduct the research. Studies with positive, statistically significant results are more likely to be published than those with negative or non-significant results. A topic’s overall literature base becomes biased when the totality of the research is not published, because negative and non-significant findings cannot contribute to pooled effect estimates if they are never disseminated. Consequently, the estimated direction and magnitude of an effect deviate from the truth, which impairs our ability to understand the true relationships between variables and outcomes.
Publication bias arises in two ways:
- Authors choose not to publish their work
- Authors attempt to publish their research but it is not accepted by a journal
While the former can occur, it is less likely, since publications are a researcher's academic currency. However, researchers are incentivized to submit research with positive, significant outcomes to journals. Publishers favor this type of research because it may enhance a journal's impact factor and citation metrics, resulting in more prestige and financial gain. Hence, researchers may be biased toward submitting positive research for publication because they know their odds of acceptance improve when they submit novel work.
The push-pull dynamic between researchers and publishers creates a vicious cycle. The more positive research is published, which is typically a net positive for journals, the more researchers and publishers are incentivized to disseminate that type of material. Negative studies are filed away, never to see the light of day.
Publication bias leads to incorrect conclusions
The publication process is a bottleneck that filters which studies are “worthy” of dissemination. Publication bias is extremely detrimental in the clinical sciences, where negative and non-significant findings can be just as important as positive, significant results when all results are pooled together. Failure to disseminate complete data, regardless of their significance or direction of effect, may lead to incorrect and misleading conclusions. This is consequential for clinical decisions regarding how interventions affect morbidity, mortality, and health-related quality of life.
For example, several meta-analyses have compared the published literature on antidepressants with unpublished data submitted to the FDA. One study found that out of 74 FDA-registered studies, 31% had not been published. Separating the published and unpublished literature showed differences in effect sizes ranging from 11% to 69%. Another study, comparing reboxetine against placebo and selective-serotonin reuptake inhibitors (SSRIs), found that the published data overestimated the effectiveness of reboxetine by 115% and of SSRIs by 23%. A third study concluded from published data that SSRIs were safe to use in children; when unpublished data were added, the favorable safety profile disappeared and the risks of SSRIs outweighed the benefits.
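To see how withholding null results distorts a pooled estimate, here is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis. The effect sizes and variances are hypothetical illustrations, not the actual trial data from the studies above: dropping the unpublished null studies inflates the pooled effect.

```python
# Hypothetical illustration of publication bias in a fixed-effect
# (inverse-variance) meta-analysis. All numbers are made up.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean of study effect sizes."""
    weights = [1 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, effects)) / total

# Published studies: positive effects (e.g. standardized mean differences)
published = [0.45, 0.50, 0.40]
pub_var = [0.04, 0.05, 0.04]

# Unpublished studies: null or near-null effects that never reached print
unpublished = [0.05, 0.00]
unpub_var = [0.05, 0.04]

biased = pooled_effect(published, pub_var)
complete = pooled_effect(published + unpublished, pub_var + unpub_var)

print(f"Published only: {biased:.2f}")   # → Published only: 0.45
print(f"All studies:    {complete:.2f}")  # → All studies:    0.28
```

With only the published studies, the pooled effect appears substantially larger than it does once the unpublished null results are included, which is exactly the distortion the antidepressant meta-analyses uncovered.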
It should be mandatory for the data from all clinical research to be made available to the scientific and medical community so that the totality of the evidence can be analyzed together. As long as the decision about whether research should be disseminated rests in the hands of a few (editors and peer reviewers), the risk of publication bias will persist. This bias will remain unless the publication process is reformed into a post-publication model, in which research is submitted to a journal and formally published before peer review, or unless regulations are put in place requiring all clinical research (or at least the underlying data) to be made openly available in a repository.
A solution to prevent publication bias: preprinting research
Preprint servers are a solution to many of the problems that plague the current publication paradigm, including publication bias. They give authors an outlet to share their research with the scientific community regardless of how a publisher views their material. Authors are in complete control over the dissemination of their work when they post a preprint: they choose when to share their work and on which preprint server. There are no constraints or biases that can prevent a preprint from being published online, provided that the scope of the server matches the scope of the research and the research is ethically sound. Technically, preprints are publications, since any work that is “made public” can be considered a publication. Because preprints carry DOIs that certify when the work was made public, authors can be assured that their ideas are time-stamped, allowing them to claim primacy for their research.
These features make preprints an optimal vehicle for ensuring that all data can be made openly available to be accessed, analyzed, and pooled with the totality of the literature on a topic. By putting authors in the driver's seat to decide the fate of their research, preprints are not only a remedy for publication bias across all scientific domains; most importantly, they aid evidence-based medicine by enhancing the accuracy and precision of clinical recommendations.
Science serves humanity, and the scientific community has an obligation to seek the truth as best it can. Failing to share data publicly, when they are available, undermines this cause. Researchers and publishers need to acknowledge that when data are not openly accessible, the time, commitment, hope, and clinical costs of the research have been wasted. Patients may spend years in a study providing data to researchers. If those data are never disseminated (excluding cases of fraud or severely biased research that would harm the overall literature), the scientific community has failed to uphold its obligation to seek the truth. Until the scientific publication and peer review processes are reimagined, preprints may be the best tool the community has to reduce publication bias and optimize how research is understood.