The Rise of the Peer Review Bots

This content has been archived. It may no longer be accurate or relevant.

From Plagiarism Today:

Back in June, behavior scientist Jean-François Bonnefon tweeted about a rejection he had received from an unnamed scientific journal. What made the rejection interesting was that it didn’t come from a human being, but from a bot.

An automated plagiarism-detection tool had determined his paper had a “high level of textual overlap with previous literature” and summarily un-submitted it. However, according to Bonnefon, the sections flagged were elements such as the authors’ affiliations and methods, which will logically be very similar to previous works.

. . . .

Increasingly, the use of bots in peer review is moving past spelling, grammar and plagiarism checking. As the number of journals and submissions continues to rise, publishers are seeking more and more help from technology.

. . . .

The academic publishing industry has been undergoing some insane growth over the past decade. According to one estimate at University World News, there are approximately 30,000 journals publishing some 2 million articles per year right now. That says nothing of the number of articles submitted. All of this comes as global scientific output is estimated to be doubling every nine years.

This rapid growth has put a huge burden on editors and peer reviewers alike. With only 20% of scientists performing peer reviews, this explosive growth is being shouldered by a relatively small group of researchers, many of whom have extremely short careers.

With those workloads increasing, publishers have been regularly turning to technology to help make the peer review process faster and more effective. Some key examples include:

  1. The peer review platform Scholar One teaming with UNSILO to produce an AI that can interpret natural language, extract the key findings and compare those findings to other papers in its database.
  2. Statcheck, an automated tool that can analyze the statistics in a paper and spot any abnormalities.
  3. ScienceIE, a competition for teams to create algorithms that could extract the basic facts and ideas from scientific papers.
  4. Publisher Elsevier’s EVISE, an AI system that, in addition to checking a work for plagiarism, suggests potential peer reviewers and handles correspondence.
  5. Artificial Intelligence Review Assistant (AIRA), which combines other available tools to automatically detect plagiarism, ethical concerns and other potential issues in a paper so they can be flagged for further review. It also suggests peer reviewers.

These are just some of the systems, developed or in development, that aim to automate pieces of the peer review process.

While none of these tools aim to replace human peer reviewers and, instead, seek to help them, they all still sit as a front line between the person submitting the paper and the peer reviewer. In all cases, a paper that runs afoul of these bots will face an uphill climb to getting published.

This creates a series of uncomfortable questions. What if the bots are wrong? How will researchers respond? And how will it impact science?

. . . .

YouTube has become well known as a place where videos are automatically (or semi-automatically) claimed or removed without cause. Some of this is just poor implementation of Content ID, but much of it is due to poor data and poor matching.

But, more to the point, these bots have changed YouTube culture. YouTubers often spend as much time trying to avoid Content ID or other YouTube bots as they do making their videos. Those who make a living on YouTube are constantly in a fight to avoid demonetization and copyright claims. That has an impact on what they create and how.

Though YouTubers are well known for loudly protesting YouTube’s policies, they have also been very quick to adapt. As YouTube’s policies and practices have changed, creators have moved swiftly to ensure that their latest uploads aren’t caught in the filters.

It’s an ongoing cycle of YouTube making a change, either in policy, matching or content matched, and YouTubers moving to get around it.

With YouTube, this cycle is very quick because YouTubers typically know what the bots said about their videos almost instantly. With academic publishing that iteration period will be much longer because of the delay in getting the feedback. Still, there’s no reason to assume the same cycle won’t repeat.

. . . .

Even without the aid of advanced bots, editors and peer reviewers have sought out ways to streamline the process and make it easier to determine what research is valuable and what is not worthy of publication.

In recent years, one of the tools that has become both the most used and the most controversial is the p value. To oversimplify greatly, a p value (or p-value) is a measure of probability: it estimates how likely you would be to get a result at least as extreme as the one found in a study if the null hypothesis were true. A lower p value is taken as stronger evidence against the null hypothesis, and in many fields only papers with a p value of less than .05 are considered “publishable”.

To be clear, the use of the p value in this manner is very controversial. However, that hasn’t stopped many publications from using the p value as a gatekeeping tool.

Researchers have responded with what is known as p value hacking, or p-hacking. This can be done many different ways, including excluding data points that raise the p value, limiting reported findings to those with an appropriately low p value, and so forth.
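As a toy illustration (not from the article), here is a minimal sketch of how a p value works and how the exclusion tactic can game it: an exact two-sided p value for a coin-flip experiment, where dropping a few inconvenient data points as “outliers” pushes an otherwise “unpublishable” result below the .05 threshold. The function and the numbers are hypothetical, for demonstration only.

```python
from math import comb

def two_sided_p(n, k, p=0.5):
    """Exact two-sided p value for k successes in n Bernoulli(p) trials:
    the total probability of outcomes at least as far from the mean as k,
    assuming the null hypothesis (here, a fair coin) is true."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    dev = abs(k - n * p)
    return sum(pmf(i) for i in range(n + 1) if abs(i - n * p) >= dev)

# 60 heads in 100 flips of a supposedly fair coin: p is just over .05
print(round(two_sided_p(100, 60), 4))   # ~0.057 -- "unpublishable"

# "p-hack" by excluding three tails as "outliers": 60 heads in 97 flips
print(round(two_sided_p(97, 60), 4))    # now under the .05 cutoff
```

Nothing about the underlying coin changed between the two calls; only the researcher’s choice of which data to keep did, which is exactly why this kind of manipulation is hard to detect after the fact.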

Link to the rest at Plagiarism Today

PG notes that a great many academic publishers are money machines that endlessly repeat how valuable they are for scientists, researchers, etc., around the world as they regularly increase their subscription charges.

The OP caused PG to wonder whether trade publishers do any sort of check for plagiarism before they release a book.

4 thoughts on “The Rise of the Peer Review Bots”

  1. PG notes that a great many academic publishers are money machines that endlessly repeat how valuable they are for scientists, researchers, etc., around the world as they regularly increase their subscription charges.

    Many of the scientific journals no longer have subscription fees. They have gone to an open access model where they don’t charge people to read them but do charge a large fee (several thousand dollars large) to authors of published articles. If anything, this system is even more vulnerable to abuse: subscribers need to feel that what you’re putting out is of value, while for the authors, the value is being able to put the article down on their CV with people actually reading it being a secondary consideration at best.

    • Z – Thanks for your comment. I had heard about this “innovation,” but was not aware of how widely it had spread.

      I hadn’t thought about it, but are the domain experts who vet the submitted articles for the open access journals still not paid a meaningful amount by the journal for the time they spend on this task?

      As you undoubtedly know, this model is very close to the business model of sleazy vanity presses in the pay-to-publish fiction and general non-fiction world.

      • PG, unless things have changed recently scholarly peer reviewers are rewarded by the personal satisfaction that they are advancing knowledge, as the work itself is unpaid. It is normally anonymous so they cannot even claim credit for the articles they review (“I really improved XXX’s latest article” is frowned upon) though I guess that at times it provides a useful excuse for getting out of other work.

        Of course, the predatory journals only talk about peer review, they don’t need to do it.

In Physics the whole publication process is about prestige on the way to getting tenure, as almost everything of note is posted on arxiv.org well ahead of any journal appearance.

    • Maybe some disciplines have many that are open access, but in science and healthcare fields (where the highest priced journals are) it is only *some* that have moved to fully open access.

      Most journals have a mixture, where some articles are free to view, some are free after a period of time, and some require a subscription. Then they charge the authors to publish, and require extra money to make it open access. Then they increase the subscription costs even more, especially for libraries, to “offset” the “hosting costs” of the open access articles. Plus, many of them still restrict printing/copying on the “open access” articles so they can charge people for a pdf copy.

      PG, absolutely this is not just very close, it is the business model of sleazy vanity presses, and they are really proud of it.

      There are some fully open access journals that are really good, and I have seen some move to also having open peer review, so that people can see the article, and see who reviewed it and what they said. That way each article is its own academic dialogue.

Comments are closed.