Reflections on guest editing a Frontiers journal

In this blog post, the authors critically discuss their experience as guest editors for a Frontiers journal. They aim to foster open scholarly debate about Frontiers' publishing practices, a debate prompted by Frontiers hindering such discussion on their own pages.

The idea for this blog post emerged in the context of a special issue with the online journal Frontiers in Research Metrics and Analytics. We, a group of researchers who can broadly be associated with science & technology studies and meta-research, were invited by Frontiers to guest edit what they call a ‘Research Topic’, with the suggestion that it could focus on innovations in peer review practices. We accepted the invitation and subsequently launched a call for contributions around the topic of “Change and Innovation in Manuscript Peer Review”. The resulting collection appeared in January 2022 and contains six articles we are very proud of. They touch on topics such as the specificities of peer review in law journals, the changing role of (guest) editors amidst the increased use of editorial management systems, and mechanisms and labels to assure quality in book publishing.

We were aware of previous criticism of Frontiers' approach to scholarly publishing (see for instance here, or here for a recent example) and intensively discussed whether we should embark on this project. We came to the conclusion that the topic is important and timely, especially in the context of a journal that itself represents (and pushes) new peer review and editorial practices. That said, working with Frontiers forced us to develop a form of reflexivity about our own publishing process that we would rather have done without. More specifically, we aimed to publish our reflections on the editorial practices we encountered during our editorship as part of the introduction to the Research Topic, but Frontiers did not allow us to do so. After more than half a year of discussions - and particularly long periods of silence from Frontiers - we decided to publish our editorial as a preprint and write this blog post to inform the scientific community about our experiences.

Concerns about the editorial process

Our worries began with the organisation of the peer review process itself. Frontiers forces users into a relatively rigid workflow that foresees contacting a large number of potential reviewers for each submission. Reviewers are selected by an internal artificial intelligence algorithm: keywords are automatically attributed to the submitted manuscript based on its content and then matched against a database of potential reviewers, a technique somewhat similar to the one used for the reviewer databases of other big publishers. While the importance of the keywords for the match can be manually adjusted, the fit between submissions and the domain expertise actually required to review them is often less than perfect. This would not be a problem were the process of contacting reviewers fully under the control of the editors. Yet the numerous potential reviewers are contacted by means of a preformulated email in a quasi-automated fashion, apparently under the assumption that many of them will decline anyway. We find this problematic because it ultimately erodes the willingness of academics to donate their time to unpaid but absolutely vital community service. In addition, in some cases it resulted in reviewers being assigned to papers in our Research Topic who, in our view, were not qualified to review them. Significant amounts of emailing and back-and-forth with managing editors and Frontiers staff were required to bypass this system, retract review invitations and instead focus only on the reviewers we actually wanted to contact. As it turns out, the editorial management system is so rigidly set up that even Frontiers' own staff do not always have the ability to adjust key settings.
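To make the underlying problem more concrete, consider a deliberately simplified, hypothetical sketch of keyword-weighted reviewer matching. This is not Frontiers' actual algorithm, which we only know through its user interface; the reviewer names, keywords and functions below are invented for illustration. The point is that a score based on keyword overlap can rank reviewers highly on broad, widely shared keywords even when the specific domain expertise a manuscript requires is missing:

```python
# Hypothetical sketch of keyword-weighted reviewer matching - illustration only,
# not a description of Frontiers' actual system.

def match_score(manuscript_keywords, reviewer_keywords, weights=None):
    """Score a reviewer by the (optionally weighted) keyword overlap with a manuscript."""
    weights = weights or {}
    overlap = set(manuscript_keywords) & set(reviewer_keywords)
    return sum(weights.get(kw, 1.0) for kw in overlap)

def rank_reviewers(manuscript_keywords, reviewer_db, weights=None, top_n=10):
    """Rank reviewers in a {name: keywords} database by their match score."""
    scored = [(name, match_score(manuscript_keywords, kws, weights))
              for name, kws in reviewer_db.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

# Broad, widely shared keywords dominate the score, so a bibliometrician
# outranks a legal scholar for a manuscript on peer review in law journals.
reviewer_db = {
    "Reviewer A (bibliometrics)": ["peer review", "editorial processes", "research metrics"],
    "Reviewer B (legal scholarship)": ["law journals", "legal scholarship"],
}
print(rank_reviewers(["peer review", "editorial processes", "law journals"], reviewer_db))
# -> Reviewer A scores 2.0, Reviewer B scores 1.0
```

Manually adjusting the keyword weights shifts the ranking, but as long as invitations are then sent out quasi-automatically, the editors' substantive judgement about who is actually qualified plays little role.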

Another concern was the pacing of the review and publication process. Frontiers aims to avoid unnecessary delays in the reviewing of submissions, a goal we wholeheartedly subscribe to. Yet the intended workflow is such that reviewers have only seven days to complete their reports by default, with the possibility to extend the deadline to twenty-one days - again, however, at the cost of a cumbersome process of emailing with Frontiers staff. Also, automatically generated review invitations as described above are sent out if the editors do not themselves send out enough review invitations within three days, a window that includes weekends, holidays and (as was the case with us) summer breaks. While we see how short deadlines can contribute to fast dissemination, we feel that the current standards might jeopardize the quality of the review process.

A third element of the rigidly organised review process that we found to be a mixed blessing concerns the level of editorial control that editors retain. Editors are encouraged to accept manuscripts as soon as they receive two recommendations for publication from reviewers (regardless of how many other reviewers recommend rejection). This holds for all review rounds. Especially in combination with the factors mentioned above, i.e. potentially unqualified reviewers being invited and high demands on review speed, this potentially creates additional challenges to the quality of the editorial process.

Hindered from voicing reflections

As mentioned before, a learning experience of a questionable sort was our attempt to publish an editorial reflecting on these issues. We naturally intended to include our editorial in the very special issue we edited. However, upon submission of our draft we received a message informing us that our text was not in accordance with Frontiers' guidelines. They insisted that the text could not be published unless we removed the two paragraphs of rather critical reflections on Frontiers' editorial process. We responded that these reflections were an essential element of our editorial and closely related to the content of our Research Topic, which dealt with the impact of editorial processes on knowledge production and dissemination. In addition, we felt that being forced to erase the reflections drastically impinged on our editorial freedom. This led to several emails back and forth, involving among others Frontiers' head of research integrity and various in-house editorial staff members. When the issue could not be resolved through correspondence, we ultimately scheduled a Zoom call with Frontiers' Chief Executive Editor (CEE). We once again explained our stance regarding the appropriateness of reflecting on our editorial process in our editorial.

In our meeting, the CEE confirmed that such a reflective element was appropriate and that Frontiers was of course ‘very willing to listen to our feedback’. However, he felt that an editorial was not the right place to voice such reflections. There were concerns about "our editorial lacking context": apparently, the issues we identified were specific to our own process and in no way indicative of Frontiers' general practices. We have reason to doubt this claim.

Subsequently, the CEE promised to come up with a suggested solution in the week following our call. After four months and six reminders, we have still not heard back from Frontiers. That is why we decided to publish our editorial as a preprint (in line with Frontiers' own preprint policies) and to publish this blog post to inform the scientific community about our process. We informed Frontiers staff about the publication of the preprint and this blog post in advance, but once again received no response from their side.

Towards open scholarly debate

By writing this blog post, we aim to share our experiences as guest editors at Frontiers and to contribute to the ongoing debate about changing publishing and editorial models. We are generally in favour of improving and innovating editorial and peer review processes and find several elements of Frontiers' editorial model interesting, including the Open Identities and Open Reports formats of review and the creation of a forum in which authors, reviewers and editors can interact. However, we have concerns about other elements, which we believe affect the quality and integrity of the process and of the published record. We believe that openness about our experiences is important to support stakeholders in making informed decisions about how, where and with whom to engage in the publishing process. We much regret Frontiers' attempts to hinder an open discussion about these aspects. Although our reflections are not part of the Research Topic, where we still feel they would have fitted best, we hope our editorial/preprint and this blog post can trigger the open scholarly debate we believe to be essential.


Header image: Bench Accounting

4 Comments

Luwel

As someone who regularly does review work for Frontiers, I find your contribution very informative.
From the point of view of the (potential) reviewer, I want to add one additional remark: in the first mail inviting someone to review a manuscript, Frontiers provides only the title. As the abstract is not included in the invitation, it is often hard to judge whether you have the required domain expertise. In case of a mismatch, after accepting and perusing the manuscript, the reviewer has to contact the editorial board and explain the reasons for his or her refusal to review the paper.

Maurine Montagnat

Many thanks for sharing your experience with Frontiers. I have a very similar one. I was a scientist editor at Frontiers for a few years, I also led a "Research Topic", and I also had to fight exactly the way you did. At the end of this period I was starting to question the editorial business, and I was happy to have experienced the case of Frontiers from the "inside". In 2018 (or 2019) I decided to quit Frontiers and did so at an editorial meeting, in order to share my concerns with the other editors. In particular, the section we were working for (Cryospheric Sciences) was quite new and very successful, and the amount of the APC had suddenly increased! When I asked for a detailed explanation of the justification for the APC amount, I received a "no way"...
I gave up my participation in the Frontiers editorial board and replaced it by co-creating the first Overlay Journal in Mechanics (JTCAM, https://jtcam.episciences.org). My deep feeling is that we should invest in the creation of Commons of Knowledge (in the sense of Elinor Ostrom's concept) (see https://www.westminsterpapers.org/article/id/913/) and that refusing to serve the "capitalist" editorial system is one way to do so. I can therefore only encourage each of us to invest time in creating Diamond Open Access journals with the support of our institutions.

damir.kalpic@fer.hr

I had an unsuccessful experience with my duty as assigned editor of a paper. I found Frontiers' attempt to base the process mostly on AI algorithms rather irritating. It corresponds to the widely practiced scientometrics-based evaluation of scientists. Scientists are predominantly intelligent folks who can cunningly abuse a system that is allegedly very objective. Blindly believing that science is necessarily built in harmonised increments requires many citations, which enables the formation of artificial excellence within entire groups of participants. How would Einstein's theory "relate to previous work", which for many referees is a compulsory precondition? I believe that human judgment concerning the essence of papers should be revived and that the evaluation of scientists should at least partly depend upon their impact on society.

Marcus Oliveira

Thanks for voicing your editorial experiences at Frontiers. This is critical to foster a debate on which model of scientific publishing we want as a community. As it stands, I'm positive that Frontiers' publishing practices only benefit their profits and investors rather than science. There is plenty of evidence to support this, for instance here (https://forbetterscience.com/2019/07/11/frontiers-and-robert-jan-smits-emails-reveal-how-plan-s-was-conceived/) and here (https://forbetterscience.com/2018/11/13/did-frontiers-help-robert-jan-smits-design-plan-s/). The existence of, and strong support from our community for, such a corrupted editorial publishing system is a clear symptom that science, just like the arts, has become a big business in which publishers commercially exploit researchers with questionable policies just to increase their revenues and profits. The question is: do we want this?
Your reflections are fully in line with dozens of personal accounts I have heard from colleagues who edited or reviewed there and described the pressure they received to accelerate the reviewing process, with very questionable quality standards, in order to publish as many papers as possible in the shortest time. I'm quite worried because, from my personal point of view, Frontiers is not alone: even reputable scientific journals have caved to market pressures, as many of them have endorsed the "transformative agreements" of Plan S and are turning gold OA at a fast pace. Our culture is based on a mechanism that values prestige, and this is a direct product of where (high-impact-factor journals) rather than what we publish. As long as this quantophrenic culture dominates the scientific publishing system (https://www.scielo.br/j/aabc/a/gZ7MfbHTB3Bdc5X45Z5NhKN/abstract/?lang=en), only the Wall St bulls will benefit from these corrupted practices. We need a culture shock to bring scientific publishing back to us, but we are blinded by the shine of gold open access.
