The number of scientific journals has grown so rapidly over the past twenty years that it is "now impossible for a researcher to apprehend all of the published literature in his field" 1. This provides fertile ground for predators seeking to take advantage of the situation. The number of predatory journals is currently estimated at 13,000, with more than one hundred thousand published articles 2,3.
3 provided a detailed description of the predation phenomenon in scholarly publishing. Predatory journals are characterized by three main features: they are Open Access (OA) journals based on an "author-pays" business model, their peer review is not rigorous (or is even absent), and they display the "standard markers of scientific publications" (e.g. ISSN, DOI, etc.).
As much of the literature has shown, the publishing world cannot be categorized solely as black or white 4,5. Sometimes the line between a predatory journal and a genuine scientific journal is not clear. One of the criteria that makes it possible to decide is the quality of the peer review practiced within journals, in particular when all the other administrative characteristics surrounding publication (e.g. ISSN, DOI) are neither false nor usurped. This is the case, for example, of newly created, little-known journals with low bibliometric indicators (e.g. a low impact factor). To grow and survive in the current oligopolistic market of scholarly communication, with its many barriers to entry, these journals sometimes adopt aggressive commercial strategies. They may therefore look like predatory journals without actually being so.
This issue creates a certain anxiety within the scientific community, whose members fear for their reputation if they happen to publish in journals that prove to be predatory or to lack a scientific basis 6,7. Some have described a kind of omerta within the scientific community 8. In this regard, 9 stressed that the OA status of journals should not be a reason for distrust among researchers, since not all OA journals are predatory. (Björk and Solomon, 2012) have also emphasized that researchers should nevertheless carefully check the quality standards of journals before submitting their work.
In this context, the role of peer review is paramount and deserves special attention. How can we "assess" peer review? Does the length of reviewers' reports improve the quality of manuscripts? Or is it only a rhetorical instrument intended for publishers and authors? The analysis carried out by the Publons team is, to our knowledge, the only one to have investigated the link between the length of reviewers' reports and citation impact on a large sample of over 378,000 reviews. 10 showed the existence of a positive correlation between the average number of words in evaluation reports and the impact of journals. Nevertheless, the study concluded that it is not possible to say with certainty that longer reviews are better or worse than shorter ones: "Great reviews can be short and concise. Poor reviews can be long and in-depth, or vice-versa". The Publons analysis remains quite descriptive and does not in itself confirm the positive link observed between the two variables. For that, an econometric model is necessary to neutralize the effects of the variables that affect citation impact on the one hand, and the length of the reviewers' reports on the other.
In this study, we hypothesize that reviewers' reports actually improve the quality of publications. The interest a publication receives within the scientific community, reflected in a high number of citations, indicates its quality to some extent. The length of reviewers' reports, in turn, is representative of the number of modifications and improvements that authors must make to their manuscript before eventual acceptance for publication. In other words, there should be a positive relationship between the length of reviewers' reports and the quality of scientific publications.
To test this hypothesis, we use, on the one hand, the data provided by Publons (https://publons.com/) for the length of reviewers' comments and, on the other hand, the data from the Web of Science (WoS) database for the calculation of bibliometric indicators. We estimated an econometric model to analyze the link between the two variables: the length of reviewers' reports and citation impact.
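As an illustration, such a specification could take the following form (a sketch only; the variable names and controls below are our own illustration, not necessarily the exact model estimated here):

\ln(1 + c_i) = \beta_0 + \beta_1 \ln(w_i) + \gamma' X_i + \varepsilon_i

where c_i denotes the citations received by publication i, w_i the average word count of its reviewers' reports, and X_i a vector of control variables (e.g. discipline, publication year, number of authors). A positive and significant \beta_1 would be consistent with our hypothesis.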
The article is organized as follows. First, we present a literature review on the link between peer review (in general) and the quality of publications. We then present some descriptive statistics on the database used (Publons), followed by the results section. Finally, the discussion and conclusion return to the results obtained, their scope, and their implications.
Literature review
Peer review is a topic that has generated a great deal of ink in the scientometrics literature and the sociology of science. We present here only the work related to our research question, namely the impact of the peer review process on the quality of publications, as approximated by citation indicators.
Peer review processes are generally long, with disparities across disciplines 11–14, especially for accepted publications 15. Over the past ten years, their duration has lengthened because of the inflation in the number of publications submitted to journals. On the other hand, the acceptance rate has increased by around 50% 16. In addition to the constraints related to the number of publications submitted each week to journals, the fact that referees are not paid for their reviewing activity can contribute to slowing down the process 17–19. The increasing technicality of publications can also lengthen evaluation times in certain disciplines (Azar, 2007).
Whatever the reasons that may affect the evaluation times of scientific publications, the final objectives of the process are multiple and revolve around improving the effective quality of publications. 20 outlined eight purposes of peer review: (1) assess the contribution and originality of a manuscript, (2) perform quality control, (3) improve manuscripts, (4) assess the suitability of manuscripts for the topics of the journal, (5) provide a decision-making tool for editors, (6) provide comments and peer feedback, (7) strengthen the organization of the scientific community, and (8) provide some form of accreditation for published papers.
Studies that assess the quality of the review process from the perspective of authors or editors are quite rare. 14 analyzed data from 3,500 review experiences reported by authors on SciRev.sc (https://scirev.org/). The SciRev.sc interface collects authors' accounts of their review experiences and the ratings they assign to journals based on the quality (as they perceive it) of the review process. 14 showed that, unsurprisingly, experiences with short response times tend to receive higher ratings. The same goes for experiences that resulted in a positive response from the journal. 21 arrived at similar results on a sample of 193 authors. Furthermore, 14 showed that journals in disciplines where the evaluation processes are relatively long are, on average, rated better than journals in disciplines where the processes are short. In a more recent study, 22 analyzed authors' and editors' perceptions of the quality of peer reviews in 12 journals, covering 809 manuscripts and 313 reviews and recommendations. 22 found that authors give high ratings and report a positive perception when reviewers' comments recommend acceptance, unlike comments that recommend rejection (which is to be expected). Evaluations recommending a revision, on the other hand, are of better quality according to the indicator used (the Review Quality Instrument, RQI). In addition, 22 showed a strong association between the recommendation of referees and the publication decision of editors.
Studies that cross the length of the peer review process with the impact of journals or publications are even rarer. The study by 23 of 22 ecological and interdisciplinary journals showed the existence of an inverse relationship between the acceptance delay and the impact factor of the journals. These same journals, however, have relatively low acceptance rates. The analysis by 24 of three journals (Nature, Science and Cell) over the period 2005–09 arrived at conclusions different from those of 23: for the three journals studied, the authors observed an inverse relationship between editorial time and the number of citations received.
There is room for improvement in the existing literature on assessing the length of the peer review process. The analyses of 14, 21 and 22 can be criticized insofar as they assess the perceived quality of the evaluation rather than its intrinsic quality. Perceived quality depends very much on the final decision and on the extent of the revisions requested by the referees; it is therefore highly subjective. A thorough review by referees can produce a great deal of criticism and be a source of improvement for a publication, even if the authors perceive it negatively because it delays publication (and adds a lot of additional work). To measure intrinsic quality, it is essential to cross-reference performance indicators (e.g. citations received) of journals or articles.
Likewise, the study by (Pautasso and Schäfer, 2010), which showed that high-impact journals have short turnaround times, ignores the type of decision made by the journals. In general, the number of articles submitted to high-impact journals is large, as is the rejection rate. The rejection decision usually comes a few days or weeks after submission, which reduces the average processing time across the entire review process. Taking into account the time to first response and the type of response are therefore important elements of the analysis. It can thus be assumed that turnaround times for accepted papers are longer in high-impact journals.
Data
Web of Science data
The data on citation scores and the disciplinary assignment of publications were extracted from the "Observatoire des Sciences et Techniques" (OST) in-house database. It includes the five WoS indexes available from Clarivate Analytics (SCIE, SSCI, AHCI, CPCI-SSH and CPCI-S; for more information see: https://clarivate.com/webofsciencegroup/solutions/webofscience-platform/) and corresponds to WoS content indexed through the end of November 2020. We limited the analysis to original contributions only, i.e. the following document types: "Article", "Conference proceedings" and "Review".
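For illustration, the document-type restriction could be applied as in the following minimal pandas sketch; the file name wos_records.csv and its column names are hypothetical and stand in for the actual OST schema:

import pandas as pd

# Load a hypothetical flat export of the WoS/OST records.
# Columns assumed: ut, doc_type, pub_year, citations, discipline
wos = pd.read_csv("wos_records.csv")

# Keep only original contributions, as in the study.
ORIGINAL_TYPES = {"Article", "Conference proceedings", "Review"}
wos = wos[wos["doc_type"].isin(ORIGINAL_TYPES)]

print(f"{len(wos)} original contributions retained")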
Publons data
Publons is an interface created in 2012 that specializes in peer review issues. In 2017, the supplier of the WoS database, Clarivate Analytics, acquired Publons. This database currently indexes over 2 million profiles of researchers who share their peer review experiences, and Publons offers them a free service to promote their contributions. Publons has also made available more than 300,000 records describing the characteristics of referee evaluation reports in journals: average word count, the referee's country, and the journal. As part of this study, Publons made its database available to us with article identifiers (WoS UT) added. This allowed us to easily match the Publons data with the WoS database and add other variables. Our final dataset contains 58,093 distinct publications reviewed by 62,152 reviewers (Table 1). The Publons database provides the word count of each review.
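Continuing the previous sketch, the matching step could look like the following (assuming a hypothetical publons_reviews.csv with one row per reviewer report, a ut identifier, and a word_count column; these names are our own illustration, not the actual Publons export):

# Load hypothetical per-review Publons data: one row per reviewer report.
reviews = pd.read_csv("publons_reviews.csv")  # columns assumed: ut, word_count

# Average the report length when a publication has several reviews.
review_len = (
    reviews.groupby("ut", as_index=False)
           .agg(n_reports=("word_count", "size"),
                mean_words=("word_count", "mean"))
)

# Match Publons reviews to WoS records on the shared UT identifier.
dataset = wos.merge(review_len, on="ut", how="inner")
print(f"{len(dataset)} publications matched")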
Table 1
Number of reports per publication
#Reports by publication | #Publications | Freq. (%) | #Reports | Freq. (%)
1 | 54,296 | 93.464 | 54,296 | 87.360
2 | 3,552 | 6.114 | 7,104 | 11.430
3 | 229 | 0.394 | 687 | 1.105
4 | 15 | 0.026 | 60 | 0.097
5 | 1 | 0.002 | 5 | 0.008
Total | 58,093 | 100.000 | 62,152 | 100.000
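To read the table: the first Freq. (%) column is computed over publications and the second over reports. For example, the 3,552 publications with two reports account for 2 × 3,552 = 7,104 reports, i.e. 7,104 / 62,152 ≈ 11.43% of all reports in the dataset.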