Abstract
Is peer review an exploitative or a fairly compensated system?
According to Larivière et al. (2015), five publishers (Elsevier, Taylor & Francis, Wiley-Blackwell, Springer, and SAGE Publications) accounted for 50% of all publications in 2013, with annual profits for several of them reaching the billion-US-dollar mark. One of the successes of the publishing business model is its ability to extract labor for free, employing the “for the good of the community” rationale to squeeze as much labor out of academics at as low a cost as possible (Teixeira da Silva and Katavić, 2016). This rationale served as an excuse for for-profit publishers to claim that keeping peer review unpaid would remove financial incentives that could corrupt peers’ objective judgement, while also serving as a “good deed” for fellow academics. Peer review was touted, and promoted, as a “community service,” expected to be completed voluntarily, and this marketing gimmick has now become the mainstream publishing model. Academics can now begin to appreciate that the system is dysfunctional, exploitative, and greedy (Teixeira da Silva and Dobránszki, 2015), and that they have been exploited, all in the name of science. One of the biggest problems of traditional peer review is that the content of peer reports remains known only to the authors and to the editors who handled a manuscript. This means that excellent peer reviews and useless or unprofessional ones alike remain a black box, and wider academia and the public remain in the dark regarding their content. It is also known that, within this corrupted and gamed system, publishers have been fooled by a pool of dishonest academics and others through fake peer reviews, fake peer reviewers, and fake authorship, fraudulent forms of academic publishing (Teixeira da Silva, 2017a). The secrecy behind this quality-control step led some whistle-blowers and critics of the system, such as Retraction Watch and PubPeer, to begin to question, expose, and then shame, among others, the flawed peer review system that has falsely rewarded academics with academically flawed published papers and, in some cases, with financial rewards, as a result of a “gamed” system that encourages academics to publish in journals with a journal impact factor (JIF).
Convincing academics, on a global scale, that the value or merit of their work could be captured by a single number led to the JIF becoming the biggest “false rewards” system in academic publishing, a culture that was continued and propagated by Clarivate™ Analytics (Teixeira da Silva and Bernès, 2017) when it purchased the metric and other associated services, such as ScholarOne®, from Thomson Reuters in 2016 (Teixeira da Silva, 2017b). This is emphasized by the fact that access to proprietary lists of JIFs continues to be sold (paywalled) via Journal Citation Reports (JCR) or Web of Science, marketed as a way for academics and librarians to make the best choice of journals, or for institutions to quantify author productivity. 1 Should such lists and information not be open to academics, as Elsevier has done with CiteScore, a competing metric (Teixeira da Silva and Memon, 2017)? The so-called crisis of trust that is occasionally said to exist in science, primarily by critics and skeptics of the current peer review system, may in fact have been initiated by the large for-profit publishers seeking to expand their profit margins while exploiting academics at four levels: first, by taking authors’ copyrighted intellectual work in exchange for a PDF file; second, by failing to provide financial remuneration for peer review services, exploiting peer reviewers, the equivalent of professional advisors or consultants, for free; third, by likewise exploiting the management skills and goodwill of voluntary editors for free, or displaying their names publicly simply to flatter their egos; fourth, by introducing a new pennies-on-the-dollar “compensation” scheme, which came into force when services such as Publons, discussed next, emerged.
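For context on what that single number actually measures, the JIF for a given year is simply a citation ratio. The notation below reflects the widely documented JCR definition, not a formula taken from the sources cited above:

```latex
% Journal impact factor of a journal for year Y (standard JCR definition)
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
% where C_Y(y) = citations received in year Y by items the journal
% published in year y, and N_y = citable items published in year y.
```

For example, a journal that published 100 citable items over 2015 and 2016 and received 250 citations to them in 2017 has a 2017 JIF of 2.5, underscoring that the metric is a journal-level average, not a property of any individual paper or author.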
Peer reviewers, most of whom are academics with their own inherent biases and time constraints, are also seeking to further their own publications and are under great strain, leaving the reviewer pool stressed, over and above its undercompensated and exploited status. In such a state, peer reports become delayed, sometimes for months or, in extreme cases, years (Teixeira da Silva and Dobránszki, 2017), leading some editors to reject papers upon or soon after submission for unfair or non-academic reasons in a desperate bid to process an overwhelming volume of submissions (Teixeira da Silva et al., 2017b).
Publons, a smart business decision, or a real solution to the academic crisis of trust?
There is conflicting information regarding the start of Publons. 2 The Wikipedia page states that it was launched in 2012, 3 while a paper by Smith (2016) indicates that it was founded in 2013; usually an initiative is founded before it is launched. Publons 4 confirms, however, what Wikipedia states: “Publons was founded by Andrew Preston and Daniel Johnston in 2012 to address the static state of peer-reviewing practices in scholarly communication, with a view to encourage collaboration and speed up scientific development.” The founders aimed to achieve this by allowing peer review work to be publicly rewarded: academics create a Publons account that credits their peer review and post-publication peer review (that is, comments made on a paper after its publication) efforts. Where publishers allowed it, peer reports were made open, advancing the open peer review movement, but in most cases peer reports remain a black box; that is, the same level of secrecy involved in traditional peer review is being promoted by Publons. Consequently, one of the arguments that the first author of this paper and others made in September 2016, after the Retraction Watch co-founders Ivan Oransky and Adam Marcus, two science watchdogs (Teixeira da Silva, 2016), decided to cover the Publons “Sentinels of Science” award, 5 was that good and bad, complete and incomplete, and professional and unprofessional peer reports were apparently being rewarded equally at Publons, irrespective of their open versus closed status, or their quality. 6 Curiously, Retraction Watch was recently nominated as one of the candidates for Publons’ Sentinel Award “for outstanding advocacy, innovation or contribution to scholarly peer review” 7 while SAGE recently became an official sponsor of the Publons Peer Review Award. 8
Some of the other most salient arguments critical of Publons and its pseudo-reward system were: (a) based on data for 2016’s most prolific peer reviewer, Jonas Ranstam, whose reward worked out to 38 US cents per review, peer review was clearly the most underpaid job on the planet, with the monetary reward paid by the same exploitative oligopolistic publishers identified by Larivière et al. (2015); (b) the possibility of generating fake peer reports, for any reason, that could then be rewarded by actual or fake entities; (c) the use of “volunteers” by for-profit companies, that is, free labor in exchange for superficial rewards; (d) the possibility that peer reviewers are completing peer review during working hours (Figure 1), which might violate their work contracts, or may be an abuse of tax-payers’ money; (e) the induction of an unhealthy state of competition as peer reviewers vie to reach the top of the peer review ladder in search of “Sentinel of Science” status, and the possible corruption of academic peer review by the same managers of the JIF, Clarivate™ Analytics. The latter concern, expressed nine months earlier, became a reality when Clarivate™ Analytics purchased Publons on 1 June 2017. 9

Figure 1. Trimmed screenshots from the “Weekly review punchcard” of the first author’s Publons account, with panels (a) and (b) showing apparently identical patterns of work for the cumulative academic peer reviewer pool. It is evident that the image has simply been cloned, or that the data have not been updated, making the data misleading to the public. Most work was done during weekdays. The top page of Publons indicates the following facts: 155,000+ Researchers; 800,000+ Reviews; 25,000+ Journals. Which of the graphs, (a) or (b), was based on these data? One can only conclude that the data of the other graph are an “alternative fact” (i.e. a falsehood).
Very importantly, the data in Figure 1 show that most work was done during the week (Monday to Friday), suggesting that peer reviewers are working on reviews, mainly for for-profit publishers, possibly during working hours. Does such work violate work contracts, and what are the ethics of using tax-payers’ money to pay salaries that subsidize peer review for for-profit publishers, who then charge the same academic institutes huge fees to access the publications peer reviewed by the academics they funded (i.e. tax-payers suffer a double exploitation)? Are employers who pay academics who complete peer review for for-profit publishers, and who pay for their open access fees and/or subscription costs, aware that their academics are offering free labor, and are they cognizant of the time-table in which this activity is being performed? The deceptive use of data exemplified in Figure 1 (deceptive because there is likely no way that the data can be identical after nine months) underscores the issue of trust in Clarivate™ Analytics, already documented upon its launch (Teixeira da Silva and Bernès, 2017). The sale of Publons to Clarivate™ Analytics led some academics, on principle, to immediately cancel their Publons accounts, as indicated on Twitter. 10
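Although Publons has not published how its punchcard is generated, a figure of this kind is presumably produced by bucketing review-submission timestamps by weekday and hour. A minimal sketch of that aggregation in Python follows; the function and the sample timestamps are ours, purely for illustration:

```python
from collections import Counter
from datetime import datetime

def punchcard(timestamps):
    """Count reviews per (weekday, hour) bucket, as in a 'weekly review punchcard'."""
    return Counter((ts.strftime("%A"), ts.hour) for ts in timestamps)

# Hypothetical review-submission timestamps, for illustration only.
sample = [
    datetime(2017, 6, 5, 10),   # Monday, 10:00
    datetime(2017, 6, 5, 14),   # Monday, 14:00
    datetime(2017, 6, 10, 20),  # Saturday, 20:00
]

for (day, hour), n in punchcard(sample).most_common():
    print(f"{day} {hour:02d}:00 - {n} review(s)")
```

Two punchcards built from genuinely different nine-month windows of such timestamps would be expected to differ, which is why the apparently identical panels in Figure 1 invite suspicion.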
Curiously, Elsevier is not a Publons partner or sponsor (Figure 2). This fact might explain why Elsevier requires the permission of its journal editors before peer reviewers can openly post their reports on Publons, as discussed next. Schiermeier (2017) described the experience of Jon Tennant, whose post was turned down by Publons when he wanted to receive credit for his pro bono peer review work. Tennant challenged Elsevier, tweeting: “You never said my peer review was confidential. I challenged Elsevier over who owns peer review.” 11 Like the majority of peer reviewers, Tennant did not expect to be denied sharing his post on the basis of the confidentiality of peer review, especially given that reviewers do not sign confidentiality agreements, nor do they transfer the copyright of their peer review reports to publishers or journals. In other words, unlike authors, who are required by traditional subscription journals to transfer their copyright to publishers, reviewers do not usually transfer the copyright of their reports during peer review. Thus, such reports remain confidential during peer review, but following publication of the paper, peers who hold the copyright of their reports are at liberty to share them with the public, for example on a platform like Publons, provided that doing so does not violate any signed or agreed-upon peer–publisher contract.

Figure 2. Why is Elsevier not a Publons partner? Screenshot (8 June 2017) from: https://publons.com/home/
The acquisition of Publons by Clarivate™ Analytics may benefit publishers and journals, as it is likely to provide an expansive pool of trustworthy peer reviewers and to weed out fake peer reviews (Curry, 2017) and peer review rings. However, some academics are skeptical as to whether the acquisition will solve the problems of flawed peer review, or whether the commodification of peer review will aggravate them. Exploring the views of commenters on media sites and Twitter threads reveals a mixed reaction. On the positive side, some academics envisage an opportunity for a system that will provide recognition for responsible peer review and highlight their expertise by turning their work into a measurable research output (van Noorden, 2014), since it will enable reviewers to create a verifiable record of their reviewing services to their disciplines, a record they can use to boost their portfolios, and, most importantly, an expectation that it could improve peer review and make it more efficient (Curry, 2017, and comments therein). On the other hand, it is reasonable to expect that many academics may be skeptical regarding how the acquisition deal will restore the shrinking pool of peer reviewers. This skepticism was manifested by the deletion of some Publons accounts upon news of the acquisition, and by questions as to whether rewarding peer review with points will encourage some academics to game the new system and provide superficial peer reviews. The latter is a serious concern if academics feel the pressure of mandatory peer review; for example, the recognition of peer review by a promotion and tenure committee may pressure academics to review. Another issue is that many researchers may be reluctant to provide their data to commercial entities out of fear of hidden plans to exploit their peer review work, that is, the reports they had uploaded to the Publons platform before the acquisition deal. Such exploitation could take the form of pay-to-access peer reports for use in the training of peer reviewers, data curation, or other as yet unknown uses. Indeed, Andrew Preston, the co-founder and CEO of Publons, hinted in a fairly recent interview with Retraction Watch 12 at the marketing of such services: “We believe that the scale of Clarivate Analytics will help us to coordinate publishers, funders, and institutions to first of all raise awareness of the issues and then build market-leading solutions. A joint effort will improve the situation for everyone.” Even though Preston made a compelling case regarding curtailing fake peer reports, he did not address whether any fake peer reports or fake peer reviewers were listed on Publons, nor did he address the fact that journals caught in citation rings, or involved in citation manipulation, were being rewarded rather than punished.
Despite the aforementioned controversial views on the Publons–Clarivate™ Analytics deal, a major advantage of this recent acquisition is the admission that Publons is helping academics showcase their expertise and that the recognition and crediting of peer review is now a reality (Masic, 2016). Academics will no longer shy away from demanding recognition for their peer review, and the de facto crediting of peer review may lead to de jure recognition of reviewers’ time, effort, and expertise. De jure recognition of peer review is very feasible if peer reviewer reports are assigned digital object identifiers (DOIs) and are linked to reviewers’ ORCID (Open Researcher and Contributor ID) accounts (Rajpert-De Meyts et al., 2016). Such integration would pave the way to citing peer review reports and would allow points, or merit credit, to be assigned to ORCID accounts automatically linked to such reports. Once again, the risk that a competitive platform may stimulate a race for “peer points” or “peer credits” cannot be ruled out, an aspect that was not pointed out by Stephen Curry or by commentators on his blog post (Curry, 2017).
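The ORCID side of that integration already exists: ORCID’s public API exposes a peer-reviews section for each iD. The following is a minimal sketch of how such a verifiable review record could be read programmatically; the field names follow our reading of the public API v3.0 JSON schema and should be treated as assumptions, and the example iD is ORCID’s own documentation example, not a real reviewer:

```python
import requests

ORCID_API = "https://pub.orcid.org/v3.0"  # ORCID public API (assumed v3.0)

def peer_review_summary(orcid_id):
    """Fetch the public peer-review activities linked to an ORCID iD."""
    resp = requests.get(
        f"{ORCID_API}/{orcid_id}/peer-reviews",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    summaries = []
    # Assumed v3.0 nesting: group -> peer-review-group -> peer-review-summary
    for group in resp.json().get("group", []):
        for pg in group.get("peer-review-group", []):
            for s in pg.get("peer-review-summary", []):
                summaries.append({
                    "role": s.get("reviewer-role"),
                    "type": s.get("review-type"),
                    "completed": s.get("completion-date"),
                    "source": ((s.get("source") or {})
                               .get("source-name") or {}).get("value"),
                })
    return summaries

if __name__ == "__main__":
    # ORCID's documentation example iD, used purely for illustration.
    for review in peer_review_summary("0000-0002-1825-0097"):
        print(review)
```

If review reports also carried DOIs, each summary could cite, and be cited by, a persistent record, which is precisely the integration envisaged by Rajpert-De Meyts et al. (2016).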
Can peer review be properly compensated, and is Publons the right way forward?
Yogendra Kumar Mishra, who was once one of the top Publons reviewers, reported that the time he spent reviewing an article would fall between three and 12 hours (van Noorden, 2014). Using Mishra’s reported experience as a baseline, and assuming that seven hours is a realistic average time required to peer review a given article and that an academic is expected to review four papers per month, a rough estimate of the time a reviewer spends on peer review is about 28 hours per month. Our example illustrates how time, as a metric, can be used to estimate and compensate peer review. Certainly, the time metric as a measure of output depends on other factors, including expertise: an expert reviewer can be expected to spend less time reviewing a well-written paper in their field, or may spend more time harshly scrutinizing submissions close to their area of expertise, an inference we make based on the findings of Gallo et al. (2016). 16 Another metric that can be considered is the added value of a given reviewer’s report to each manuscript, notwithstanding the subjectivity of such a metric compared with the time metric. Although the time metric can be used to measure and financially compensate responsible peer reviewers, such an approach is discouraged and rarely used, out of concern over the inherent conflict of interest in receiving money for peer review (Rajpert-De Meyts et al., 2016). A curiously self-serving trio of blog posts 17 by representatives of the for-profit publishing industry, which gave global academics the current exploitative pro bono peer review model, seemed to fail to find any enlightening solutions, offering instead superficial “badges” and online icons as a “fair” system of compensation for hours of sometimes arduous work.
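To make the arithmetic above explicit, here is a back-of-the-envelope sketch in Python. The hours-per-review and reviews-per-month figures are the assumptions stated above; the hourly rate is purely hypothetical, chosen only to illustrate a consultant-style valuation:

```python
# Back-of-the-envelope estimate of uncompensated peer-review labor.
HOURS_PER_REVIEW = 7      # assumed average, within Mishra's reported 3-12 hour range
REVIEWS_PER_MONTH = 4     # assumed workload for an active reviewer
HOURLY_RATE_USD = 100     # hypothetical consultant-style rate, illustration only

hours_per_month = HOURS_PER_REVIEW * REVIEWS_PER_MONTH   # 28 hours/month
value_per_month = hours_per_month * HOURLY_RATE_USD      # 2,800 USD/month
value_per_year = value_per_month * 12                    # 33,600 USD/year

print(f"Estimated peer-review time: {hours_per_month} hours/month")
print(f"Implied uncompensated value: ${value_per_month:,}/month (${value_per_year:,}/year)")
```

At that hypothetical rate, the 38-US-cent-per-review figure cited earlier amounts to roughly 0.05% of the consultant-equivalent value of a single seven-hour review.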
To compensate or reward good peer reviewers, journals and publishers have used various incentives, including free access to a number of articles, waivers of article processing charges, certificates of excellence or badges, discounts on books, acknowledgment of peer reviewers in meetings and editorials, publication of reviewer lists in appendices to journal issues, and appointment of the best reviewers to editorial boards (Gasparyan et al., 2015; Rajpert-De Meyts et al., 2016).
The current climate of busy academia is placing additional pressure on authors to increase their output, shrinking the pool of responsible reviewers in the face of an explosive increase in submitted manuscripts (Fox et al., 2017; Kovanis et al., 2016), a general decrease in research funding and, most importantly, the emergence of open access journals that charge exorbitant article processing charges (Al-Khatib and Teixeira da Silva, 2017), including some charged by Publons sponsors. In this economic context, we highlight Publons’ older mission statement, 18 namely “to speed up science and give the experts involved in peer-review the recognition they deserve for their effort” 19 (Rajpert-De Meyts et al., 2016), a mission that encouraged institutions like The British Institute of Radiology to officially partner with Publons to recognize reviewers. 20 We urge publishers and Clarivate™ Analytics to explore, through valid unbiased studies, available measures and incentives, including monetary compensation (Diamandis, 2015), to give peer reviewers the just and fair recognition they deserve for their efforts.
