The list of questionable journals is considered an effective instrument of research evaluation by universities and research institutions (Nature 2018). It serves as a filter that allows research evaluation agencies to quickly identify low-quality publications. Moreover, it offers guidance for researchers in shaping their publishing behavior and helps them recognize low-quality scholarly information. However, such lists are highly controversial. Different developers employ varying criteria, so their lists include different sets of journals, and the accuracy of these lists has been questioned (Strinzel et al. 2019; Tang and Jia 2022).
Previous studies have concluded that, among the criteria underlying Cabells' list of questionable journals, 28 need clarification and revision and 39 should be removed (Teixeira da Silva et al. 2023). Beall's list of predatory journals has likewise been criticized for its vague criteria (Teixeira da Silva and Kimotho 2021). Indeed, there appears to be a gray area between "good" and "bad" journals (Dunleavy 2022), which casts doubt on the credibility of lists of questionable journals (Teixeira da Silva and Tsigaris 2018). In contrast, some scholars have defended such lists and proposed ways to improve their criteria (Frandsen 2019; Nelhans and Bodin 2020; You et al. 2022).
Despite the criticism, lists of questionable journals continue to be widely used in evaluation practices. In December 2020, the National Science Library of the Chinese Academy of Sciences (NSLC) published a list of 65 international scientific journals, all indexed by the Web of Science (WoS), that were considered potentially lacking in academic rigor (Zhang et al. 2022). Since then, more than 40 Chinese universities and institutions have promptly adopted the list as a basis for their research evaluations (Tang and Jia 2022). Researchers who publish in journals on the list are not eligible for material or honorary awards. However, like other such lists, its release has sparked considerable controversy, leading the Multidisciplinary Digital Publishing Institute (MDPI) to issue a statement emphasizing its strict review criteria and the high recognition of its journals in the academic community (MDPI 2022). MDPI also assured readers that it would continue to communicate with the NSLC so that its journals would be removed from the list as soon as possible. After analyzing the two editions, we found a marked decrease in the number of MDPI journals, from 22 in 2020 to seven in 2021.
There is scarce previous research extensively discussing whether the stratification and mobility embodied in lists of questionable journals are sound. Put simply, the fundamental question is: can we truly trust the criteria used to decide which scholarly journals are included in or excluded from these lists? If a list of questionable journals cannot guarantee credibility and trustworthiness, it may stigmatize journals and corrupt their symbolic capital (Teixeira da Silva and Kimotho 2021), and the trustworthiness of the list itself becomes a subject of doubt and scrutiny. Addressing this matter can provide global guidance for academic review practices and, more significantly, help establish a more trustworthy and accountable system of scholarly evaluation. Consequently, we focus our study on the list of questionable journals (called the warning journal list) published by the NSLC in its 2020 and 2021 editions, given its representativeness and influence (Brainard 2023).
The warning journal list developed by the NSLC was first published in 2020 and has been updated annually thereafter (although the release of the 2022 edition was suspended for unknown reasons). For temporal continuity, our focus is on the 2020 and 2021 editions. The NSLC classifies questionable journals into three warning levels: high, middle, and low, with risk decreasing in that order. Comparing successive annual editions also yields three further groups. The first is the retained group: journals listed in both the 2020 and 2021 editions. The second is the added group: journals absent from the 2020 edition but included in the 2021 edition. The third is the excluded group: journals included in the 2020 edition but absent from the 2021 edition. Distinguishing these groups underpins the analysis in this study.
Theoretical Frameworks and Research Questions
Stratification of Academic Journals
Academic publishing serves as a means for researchers to establish priority and validate their findings, while also functioning as an incentive within the academic community (Merton 1957). Publishing articles in reputable academic journals not only enhances researchers' symbolic capital but also provides them with valuable economic resources (Beckman 2019; Carpenter, Cone, and Sarli 2014; Elliott 2013). This system, in conjunction with research evaluation mechanisms, has established a hierarchical framework. Researchers who publish more articles in highly prestigious journals gain greater recognition and may even attain esteemed academic status, while others may be regarded as having a "lower" reputation or none at all (Bornmann and Williams 2017). Conversely, when highly respected researchers publish in particular journals, the journals' standing is elevated, leading to internal differentiation among journals (Blanford 2016; Onodera and Yoshikane 2014). One measure of journal stratification is citation impact. For instance, WoS categorizes academic journals into four quartiles based on the Journal Impact Factor (JIF), with those in the top 25% considered to be of high quality and repute (Clarivate 2022). However, amid significant criticism of the JIF, alternative indicators of journal stratification have emerged, such as the CiteScore introduced by Scopus (Chapman et al. 2019). To further discern the quality and reputation of academic journals, some research institutions have implemented whitelists or ranking systems (Soteropulos and Poore 2021). Drawing on theories of social stratification (Lambert and Griffiths 2018, p. 36), the stratification of academic journals relies on distinct criteria to establish clear hierarchies; in the absence of such evident disparities, definitive distinctions become difficult to draw. Based on this, we propose the following research questions (RQ):
RQ1: Are there significant differences in key academic indicators between the low-warning-level group and the middle-warning-level group?
RQ2: Are there significant differences in key academic indicators between the low-warning-level group and the high-warning-level group?
RQ3: Are there significant differences in key academic indicators between the middle-warning-level group and the high-warning-level group?
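RQ1–RQ3 each amount to a between-group comparison of journal indicators. As a minimal illustrative sketch only, not the study's actual pipeline: the file name warning_list_2020.csv, the columns warning_level and jif, and the choice of a Mann–Whitney U test (a common non-parametric option when indicator distributions are skewed) are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of one between-group comparison (e.g., RQ1).
# File and column names are illustrative assumptions, not real data.
import pandas as pd
from scipy.stats import mannwhitneyu

journals = pd.read_csv("warning_list_2020.csv")  # one row per listed journal

low = journals.loc[journals["warning_level"] == "low", "jif"]
middle = journals.loc[journals["warning_level"] == "middle", "jif"]

# Non-parametric test, since journal indicators are typically skewed.
stat, p_value = mannwhitneyu(low, middle, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

The same comparison can be repeated for the low–high and middle–high pairs to address RQ2 and RQ3.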
Stratum Mobility of Academic Journals
Once social strata are formed, mobility within them begins (Sorokin 1927). As Schumpeter (1955) put it, strata may be conceived of as conveyances that constantly carry different "passengers" without changing their shape. The creation of academic journal lists has divided journals into various strata, but the composition of these strata is not fixed. In the warning journal list established by the NSLC, for example, some journals may move from high risk to normal, risk-free status, while others may shift directly from no risk to the high-risk category. In society, the boundaries between strata are at times quite clear (Giddens 1975, p. 106). For instance, when an individual's total income reaches a certain threshold, they ascend to the middle stratum (Madrueño-Aguilar 2017). Most academic journal lists, whether inclusion or exclusion lists, follow similar principles. For example, despite the criticism of the JIF (Nair and Adetayo 2018), when a journal's JIF reaches the top 25% within its field, the journal is regarded as entering the high-impact stratum (Miranda and Garcia-Carpintero 2019); if it fails to meet this criterion, it is regarded as slipping down to the non-high-impact stratum. The advantage of clear boundaries is that they make stratum mobility visible (Giddens 1975, p. 106): journals within a particular stratum know explicitly which criteria determine whether they move up to a higher stratum or slide down to a lower one. Based on this, we propose the following research questions:
RQ4: Are there significant differences in key academic indicators between the added group before (2020) and after (2021) inclusion?
RQ5: Are there significant differences in key academic indicators within the retained group over two consecutive years (2020 and 2021)?
RQ6: Are there significant differences in key academic indicators within the excluded group before (2020) and after (2021) exclusion?
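Unlike RQ1–RQ3, RQ4–RQ6 are paired before-and-after comparisons of the same journals across two years, for which a Wilcoxon signed-rank test is one standard choice. The sketch below is again an assumption-laden illustration: added_group.csv, jif_2020, and jif_2021 are hypothetical names, not the study's actual data.

```python
# Hypothetical sketch of a paired before/after comparison (e.g., RQ4).
# added_group.csv, jif_2020, and jif_2021 are illustrative assumptions.
import pandas as pd
from scipy.stats import wilcoxon

added = pd.read_csv("added_group.csv")  # one row per journal in the added group

# Paired test: each journal contributes its own 2020 and 2021 values.
stat, p_value = wilcoxon(added["jif_2020"], added["jif_2021"])
print(f"W = {stat:.1f}, p = {p_value:.4f}")
```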