3.1. How do the editors employ open science and peer review in practice?
In this section, we outline the SSH review processes as described by the interviewees. Although experienced readers might consider this section to add little new knowledge to the field, we consider a detailed description important in order to empirically illustrate what many only believe these processes to be (we were unable to find previous studies that systematically analysed SSH review processes). Moreover, this baseline will enable us to later pinpoint exceptions and reflect on the chief editors’ personal views.
All interviewed journals employed multiple stages for reviewing their submissions. We start by summing up the general protocol that was similar in all journals:
1. Screening. When a submission arrives at a journal, editorial body X (sometimes with Y) screens the submission and decides whether it will be reviewed. Negative decisions lead to desk rejection.
2. External review. Based on the screening and pre-review, X and/or Y look for appropriate external reviewers Z and invite them to review the submission.
3. Revisions. Based on the combined feedback of X, Y, and Z, the authors are asked to revise their submission and resubmit. Steps 2 and 3 are repeated until the submission is accepted or rejected.
4. Post-review. Accepted submissions typically undergo copyediting and various debriefing procedures related to the review process.
Sometimes the chief editors were responsible for all screening themselves, but often these decisions were informed by or entirely delegated to other parties such as assisting editors, associate editors, editorial boards, or the managing editor. The amount of personnel involved varied dramatically, from a sole editor to teams of more than ten people. The proportion of desk rejections varied even more, ranging from as low as 15% to over 85% of all submitted manuscripts. The average overall rejection rate was about 70%, with the lowest rejection rate at 33% and the highest at 95%.
Five unique reasons were identified as causing a desk rejection: lack of fit with journal scope, poor overall quality, ignoring relevant literature, narrowness or low impact, and instances where too many similar manuscripts were already in review or published. In one exceptional case, the chief editor noted that almost all desk rejections derived from the journal’s name being similar to those of journals with different profiles, which led authors to constantly submit manuscripts addressing out-of-scope topics.
The screening processes combined closed and open elements. In the most common scenario, the chief editor and/or staff screened the manuscript with the authors’ names visible. In other words, the most common methods in the screening phase were disclosed or single blind review, where the reviewer saw the authors but not always the other way around; that is, the authors might not know whether they had been (desk) accepted or rejected by a chief editor, assistant editor, associate editor, editorial board, managing editor, or someone else. The interviewees felt that double-sided anonymity at this stage would be both impossible and impractical. One interviewee, for instance, was ready to let assistants make desk decisions, but in a way that would allow the chief editor to supervise the process in an open format:
That has to be open. For example, you would never assign a reviewer who's in the same department as the author. So you need to know where the other is. You also want to avoid assigning a reviewer who was likely to have been the author's advisor. So that really can't be blind, or you'll end up just sending things to coauthors or work colleagues. I am much less concerned with the anonymity of authors than the anonymity of reviewers.
One journal practices a system that inverts the dynamic of the screening stage: a significant proportion of submissions are first informally suggested to the chief editor, who then seeks input from experts in the field to support authors in developing proposals. Only then are proposals submitted and subjected to a double blind peer review, which consequently has a very high acceptance rate. To this chief editor, nurturing promising submissions from the start strengthens the quality of the journal while guaranteeing a steady stream of high-quality publications – a desirable strategy because “we are in the business of publishing, not punishing.” This feed-forward form of curation creates less need for critical feedback (and rejection) in the later review stages: “I like to think that we do a good job prior to the submission, so the author can confidently send an article and receive a positive, constructive feedback.” The described process is reminiscent of the “registered report” article format (Chambers 2017), which was not explicitly used by any of the 12 SSH journals (not even those oriented toward psychology).
In some journals, screening was supplemented with full editorial review. In these cases, one or more persons of the editorial staff read the entire manuscript and provided (signed or anonymous) decision feedback before moving to external review:
- Sometimes we get articles from [country] that are 1.5 page – then it's a desk rejection. But if it's a full article, the editorial manager will assign it to one of the other editors. And then it's the editor's task to go through the article.
- Chief editors have a look at the first round, we can then give a desk reject right away. But if it goes further, then two members of the editorial board read the text and assess whether it's good to go to review or not. We might not desk reject, but send ‘OK, didn’t go to review yet, but if you do these changes, we’ll reconsider.’
Summary: Screening is carried out by one or more editors. The full manuscript may or may not be read, and feedback is optional. Since this phase is editorial, the main transparency issue is whether the authors know who contributes to the desk decision, and based on what criteria.
All editors participating in the study characterized the external review phase as crucial. Here we found the strictest adherence to double blind review practices. Virtually all interviewees regarded double blind external peer review as the standard of the academic community as a whole. With one exception, all journals employed strict double blind external peer review, with two to four reviewers per original submission. Other formats, e.g. book reviews, were often treated differently.
The journals had several means for recruiting reviewers. This could be done by the chief editor or another staff member, or as a collective effort. Higher volume journals had reviewer databases with hundreds of potential experts; smaller journals would mainly recruit via the personal networks of the editorial staff. All editors agreed that academic qualifications were the primary selector, supported by a whole range of other criteria.
It's because they're considered experts. It's because sometimes we know them personally. It is because they are more committed, because they're on the board. And it is also because they are familiar with the journal, the direction of the journal, the expectations and the level of quality of the journal.
Altogether, nine unique criteria were considered relevant in choosing the reviewers: age (“the issue is how do you find the really bright young people who are doing really thoughtful work”), biases (“we try to give papers to reviewers that are perhaps on different sides of a divide”), commitment (“because you know they are more committed”), diverse perspectives (“we certainly try to find both reviewers from close to the paper’s discipline and also from outside”), distance (“connection or lack of connection to the author”), expertise (this was mentioned by all interviewees in many ways), nationality (“you need someone who will pick up on the local nuances”), personal preferences (“I would be hesitant to send something to somebody who I didn't feel I knew”), and recommendations (“we also make very good use of the suggestions we receive”). Additionally, two journals were proud to have ‘harsh’ or ‘super’ reviewers who could be used for the most challenging tasks:
- I try my best to balance reviewers and also to sometimes avoid using harsh reviewers on material that I personally don't think is the strongest. So I might reserve my harshest reviewers for things that I myself find to be of very high quality.
- And then there are my sort of super reviewers. They are people whom I have just learned to trust. Who will first of all do it if they say they'll do it, but also are good at sort of sifting through, reading and picking things up. These are often people who have been journal editors or just have a track record. And if I had a structure of associate editors, these are people who would be associate editors.
The role of external review was somewhat polarized among the journals. On one side, certain journals consider external review advisory rather than decision-making. The editors of these journals define their roles as curatorial, akin to editors in book publishing. Beyond assuring high-quality publications, they see their responsibility as stimulating innovative impulses in developing fields and helping authors bring their concepts to full fruition in a collaborative process.
we don’t follow the peer review slavishly, but then again, the issue of not recognizing what the reviewer was saying has not really arisen – we end up reading every single piece submitted and everything, and then every piece that is published in the journals, we go over each one of us, more than once.
At the other extreme, however, chief editors were utterly clear about operating as nothing but ‘mediators’ between reviewers and the reviewed submissions. These journals pursued first and foremost the assurance of scientific quality, and in this picture, the external blind reviewers served as ‘objective’ measures.
- If I'm in doubt as the editor, I will send it out to reviewers who can then make that decision for me. So the idea is that not one person, in cases of doubt, will make the decision. I don't make the decision, the reviewers do.
- I tend to view myself as an umpire. I'm not qualified to make these decisions. My job really is to try and ensure that the reviewers are appropriate and that the reviews are fair, to the extent that I can. And I'm sure there are mistakes.
It is worth noting that sometimes editorial review was merged with the external review. In these scenarios, internal editorial reports could be delivered based on either double or single blind principles. None of the chief editors expressed concerns or policies regarding the disclosure of these internal processes, but we also did not inquire directly about them.
- We have an editorial board. We have this board to kind of draw on in cases if we need a second opinion or third reviewer. But then they would also work as a blinded reviewer.
- Sometimes it gets complicated and then the person I ask will be somebody on our editorial board who has been fairly helpful in the past, because you're asking them basically to follow through the entire editorial history of this. I'll send them the whole thing, like, ‘here's the history, here are the reviews, what do you think.’
Summary: For research articles, disclosed external review was a rare outlier in our data. The consensus among the chief editors was that double blind review was the fairest and most objective means of carrying out the process. Sometimes the process included internal review procedures, which could be non-blind.
With few exceptions, all research published in the interviewed journals went through revision (“I don't think I've ever experienced a situation where a text would’ve been good to go”). The revision process was conducted either between authors and editorial bodies, or via additional external (blind) reviews after the first revision. One editor explicitly stated that a manuscript that had not sufficiently improved after a revision would not benefit from further revisions and was rejected. Other editors described an iterative process that could run through as many as eight revisions. A common method was to ask the same external (blind) reviewers to review the manuscript again one or more times.
Despite collectively practising closed external review and agreeing on its previously discussed benefits, several editors voiced doubts regarding this phase in particular. These doubts derive from the nature of the revision process: these interviewees felt that initially blind processes would benefit from increased transparency after the necessary ‘gatekeeping’ had been cleared.
[blind review] is not the gold standard that people often perceive it to be, and in fact, often what’s far more useful is a sort of semi-collaborative editorial process that follows after double blind peer review. That's where the improvements are really made. This is just a kind of initial gatekeeping, and sometimes it’s useful and sometimes tokenistic.
In cases such as the above, where chief editors expressed a personal liking for disclosing external reviews or revisions, they systematically cited institutional requirements under which their journal would not be indexed as a proper scientific journal without ‘blindness’ involved. For instance:
A completely open process, I think, is far more plausible. But then the issue is also that I've been on review committees where people have said, well, if this has not gone through a double blind review, it doesn't count as much. So I still think there's a huge hurdle to overcome in terms of how we can get the academy as a whole to value anything other than that kind of traditional double blind review.
Summary: All journals include a revision stage, which is internal, external, or a combination thereof. The transparency of the revision process generally follows that of the earlier stages, yet with increased open editorial input.
In the post-review phase, most journals draw strongly on their editorial assistants for technical quality assurance such as checking the integrity and completeness of the citations.
In this phase, pragmatic differences emerge, relating primarily to the ways in which the journals are financed. Journals financed primarily or exclusively by universities reported dwindling subsidies, whereas journals with strong ties to associations or publishers appeared more stable. Relations to publishing houses were unanimously characterized as harmonious and unproblematic, except for some unease among journals facing an upcoming periodic review of viability. One chief editor, addressing manuscript transformation into PDFs with DOIs, admitted approvingly:
I think the press handles all that, I've not been involved in any of that ... We've discussed occasionally whether [volume number should increase]. Other than that – cover design, occasional changes, that's always collaborative.
Only one journal employed technical means of quality control beyond peer review. It had recently started using software for an originality check, ensuring that plagiarism of all sorts could be detected before final publication (“now we run all papers through a system to see a possible relapse”). Transparency-wise, however, perhaps the most critical question at this phase was whether author or reviewer identities could be disclosed after a positive publication decision.
When I send an acceptance notice and say ‘dear so-and-so we've accepted your article’ I send that to the reviewers as well. It seems to me at that point I can include the name of the author. I mean, we've made the decision. But sort of automatically I take it out. But I keep thinking, why am I taking out the name of the author?
Finally, some chief editors perceived the articles that they publish in the larger continuum of scientific evolution. Namely, the peer review of a publication is not something that takes place in or by the journal alone; rather, journal review is one evaluative event in an article’s life, which continues post-publication as peers read and review it in academic forums:
So if the paper is not good, it will be lost in history, it won’t get citations. If it's really influential but problematic, there will be some dialogue, there will be some criticism, there will be some contrasting results presented and so on.
Summary: The post-review phase consists mainly of technical tasks such as copyediting. Some interviewees considered a positive publication decision as a potential reason for disclosing otherwise closed identities. Public review of articles starts after publication.
3.2. What are the editors’ personal opinions about open science and peer review?
In this section we move more explicitly toward the chief editors’ subjective perspectives concerning the review process. After that, we discuss the results via our research questions.
A majority of the editors expressed, if not misgivings, then at least doubts about the universal viability of double blind peer review. The same chief editors likewise considered fully or partially disclosed open reviews problematic. For both formats, we identified seven unique but connected problems, which are presented in Table 1. In addition to these format-specific problems, five chief editors mentioned the general issue of external peer reviewers sometimes being too hasty and providing little or no feedback to the authors. In such instances, the review reports were usually discarded and new reviewers recruited.
Our space does not allow us to discuss all 14 identified problems individually, but some selected concerns are worth examining. First, we highlight the notion listed as ‘institutional discredit’, which some chief editors considered a key obstruction that makes even thinking about moving to open peer review not worth the time. Despite the fact that several world-leading journals, such as those in the Nature series, support open review formats, many interviewees regarded double blind review as a ‘gold standard’ that they could not move away from without sacrificing credibility. The pedigree of these standards appeared particularly important for new, burgeoning journals:
We did not want to be innovative or be radical in any way. We wanted to have a journal that would be regarded and identified as a very standard traditional scientific scholarly journal, because we wanted to establish [our subfield] as a typical, solid, and traditional.
Journals with a narrower regional or thematic focus likewise had a reason of their own for keeping the peer review process closed. Drawing on smaller numbers of authors and reviewers, anonymization was perceived as essential to reducing conflicts of interest when virtually all experts are acquainted. One editor argued that, particularly in small, closely knit fields, peer-to-peer accountability based on disclosure of names might quickly devolve into “interpersonal as well as disciplinary conflicts.” The same editor mentioned a frequent need to revise the language of reviewers, because their observations seemed addressed to the editors rather than the authors, and were formulated in a “language that would be shared among friends”, i.e. not always respectful. “So of course, I remove that – it’s unnecessary and insulting, and I rephrase it.” In other words, the respectful tone of the reviewers that was characterized as central to their journal’s review process is, at least in some cases, the result of an editorial process. Three other journals reported a similar policy, according to which a respectful tone was maintained through systematic editing of review reports.
For some chief editors, the author’s identity was also relevant in the decision making process. Against the majority who considered anonymous double blind processes fair and objective, a minority felt that genuine fairness meant evaluating each submission in the context of the author’s current career stage and background:
We get senior scholars, well known people, and we get graduate students. And I think that work is going to be assessed in part in relationship to the identity of the author. And so I think it's important for me, who's going to be making some decisions about that, to know that. Now that then also means that I have to be conscious, as conscious as I can about my biases and so on, and I try to do that.
This was also the only one of our journals in which disclosed review practices were strongly present. At the other extreme, chief editors considered all transparency unethical and pursued complete anonymity at the screening phase too:
We wanted to have a double blind review process, because that is, as far as one can tell, the most fair way of selecting what gets published. When I screen a manuscript, I also have them screened without any information about the authors. There might be a surname attached, but I normally will not look up who that person is before I screen it.
Considering that all but one of the 12 journals ran completely closed double blind peer review processes, a common problem was that peer reviewers sometimes wished to disclose their identities to the authors. When asked, three journals reported such instances as being somehow linked to the Peer Reviewers’ Openness Initiative. The chief editors handled these requests in opposing ways: either allowing the reviewers to sign their reports, or denying disclosure. One chief editor felt that this transparency would challenge their values:
I've pushed back on that, in the sense of we've stated very clearly that our journal is a double blind peer review journal. When they submit their work, that is the practice that we're going to function under. I believe that if there is a desire on the part of the reviewer to want to make his or her name known to the author, that's actually pushing on a value system that the author may not agree.
In line with the above, a majority of the editors described the conduct of their own journal as dependent on the context or conventions of their respective academic domain, suggesting that different fields needed different approaches to reviewing. For instance:
Ultimately we all want to publish the best of possible articles. So if one way works for an editorial board, fine. If another way works for a different journal, fine. In the end, people will read the final works published that contribute to scholarship. All the roads lead to Rome as far as I’m concerned.
One chief editor, representing the humanities, explicitly called out the entire field lagging behind and lacking proper peer review to begin with. According to them, “there's too little blind review or even peer review in humanities ... we’ve been leaning on this curatorial model way too much, which also gives editors way too much power – that’s something where we have a lot to learn from other fields.” Another chief editor, speaking on behalf of communication studies, diagnosed open research practices as a reaction to bad research practices in other fields, and since “we have not encountered that type of difficulty, we can go about having a real discussion about what are the upsides and downsides.” Meanwhile, one interviewee felt that openness, as such, was not considered relevant within the SSH:
authors seem to have very little interest in open science. I've also spent some time for an open data initiative and I'm surprised – the extent to which I don't see very many people actually interested. I just see a shrug of the shoulders. A kind of ‘Eh, this doesn't really apply to us, why would I want to do this? It’s just more work, it's more effort.’
Almost every journal also supported publication formats that were not peer reviewed, or were peer reviewed differently. The problems with these formats, particularly for scholars in early career stages, were connected to the fact that scholars have to live up to quality criteria concerning their publication styles and venues, with double blind peer review as a universal criterion.
To create something that isn't going to provide people with a line they can put on their CV, under peer reviewed journal publication, that's a very tough thing to ask of people. And this is, to me, an incredible frustration because doing things like creating a podcast or blogging – or any number of things that we can think of that have happened over the last 15 years that would be valuable contributions to the scholarly conversation – are not going to count.
Several voices echoed how the above reality contributed to their lack of motivation to create something new, or anything other than traditional peer review.
The chief editors widely agreed on some central issues, the most fundamental being a unanimous satisfaction with the work of their peer reviewers, describing this collective contribution in downright enthusiastic terms. Three journals estimated the prevalence of ‘bad’ review reports numerically, putting them at just 1%–2% of all received reports. In order to maintain high quality in their review processes and ensure that the review system would continue to work in the future, the interviewees disclosed systematic and non-systematic means by which they keep track of both internal and external reviewers.
- we’re, of course, monitoring quality. We can tell reviewers that we don't find the quality of their review high enough. We will quietly not use those reviewers if they over time display signs of lacking diligence. It's a very simple method and it also works well.
- We actually have this internal system, and most of us remember to rate the reviewer. When you go out and look for a reviewer, if you look in our system, you would see if one of the other editors had rated the reviewer very low.
The concept of quality in the above and other cases was rarely a matter of content alone, but also of reliability and speed. Reviewers who did not respect deadlines or were difficult to communicate with could likewise be classified as bad quality, even when their feedback was appropriate.
Related to the transparency of the peer review process, the interviewees listed miscellaneous elements that they considered topical. For instance, one chief editor talked about marking submission, revision, and acceptance dates in the final article as a feature that can remove doubts about the process; however, it may also turn against the journal:
I see on some journals now the notation ‘manuscript submitted on such and such a date, accepted such and such a date, published such and such a date’. I don't think that's a very revealing statistic or data point for a journal like ours where, in my opinion, so much of that timeline is outside my control. But it's part of greater transparency.
We should also add that none of the journals provided financial compensation for their external peer reviewers, who do all this work as a free service. In general, finding external reviewers was one of the core challenges of journal editing; multiple editors noted that it was not unusual to ask up to 15 people before finding two who would agree to review. The trend was occasionally described as increasing (“there's a momentum building up”), in which case the chief editors felt unequipped to solve the problem, lacking means to compensate the review work that they nonetheless needed to run the journal: “How do you reward reviewers? Because this whole gift economy depends on reviewers’ unpaid labor.” Systems like Publons were mentioned as possible solutions, with the caveat that they would not remove the underlying problem of volunteer work.
Some journals actively pursued editorial diversity, for instance, by carefully managing the board with ethnicity, gender, and regions in mind. A few expressed surprise that such diversity should even be considered. In the external review process, the defining diversity concerns were about disciplinary or methodological domains, i.e. many chief editors felt that having ‘both sides of the coin’ would benefit the review, especially in polarized topics.
A further complicating factor was journal metrics, which for some editors served as a means of self-assessing their performance, while at the other end of the spectrum such numerical values were considered flawed and irrelevant. Half of the chief editors indicated interest in the statistics, usually provided by the publisher. Only those whose journals were up for review by publishers felt that clicks, subscriptions, and other metrics mattered in practice. When asked specifically about the impact factor, replies ranged from moderate interest (“it's important for the publisher for sure – but it's also important for us”) to complete defiance (“fuck the impact factor”).
Although all chief editors except one professed to be aware of the impact factor and similar metrics, there were no attempts by publishers or affiliated academic associations to influence journal policy. On the other hand, the editors often admitted being pleased about their journal’s success, and since this success was typically validated by high journal rankings and peer recognition within the field, some felt that careful self-reflection was needed when assessing potential ‘high impact’ manuscripts.
The thing goes back to the idea of rankings and stuff. So maybe I should publish more canonical stuff if I want to get higher. I don't want to think that way, but I know that. So how is that going to affect my practice?
Again, the question was ultimately tied to the realities of academic careers and work. Even if the editors did not consider the impact factor relevant, many of their submitters did. In this way, the metrics had a direct impact on the journal’s profile and prestige, and whenever such metrics were not disclosed, the editors could receive requests to make them transparent.
We occasionally get a request from an author for what our journal impact factor is. That typically comes up when an author is up for review, promotion or tenure. And I've written letters back to them saying it’s not our job to participate in the tenure review.
To sum up, the chief editors’ personal viewpoints regarding the review process, its management, and related journal metrics were occasionally in conflict with the values they acknowledged or with how their journal operated in practice. By and large, the chief editors were aware of this and often actively pursued solutions, which were nonetheless difficult to implement.