From Plagiarism Today:
Back in June, behavioral scientist Jean-François Bonnefon tweeted about a rejection he had received from an unnamed scientific journal. What made the rejection interesting was that it didn’t come from a human being, but from a bot.
An automated plagiarism-detection tool had determined his paper had a “high level of textual overlap with previous literature” and summarily un-submitted it. However, according to Bonnefon, the flagged sections were elements such as the authors’ affiliations and the methods, which are logically going to be very similar to previous works.
. . . .
Increasingly, the use of bots in peer review is moving past spelling, grammar and plagiarism checking. As the number of journals and submissions continues to rise, publishers are seeking more and more help from technology.
. . . .
The academic publishing industry has been undergoing some insane growth over the past decade. According to one estimate at University World News, there are approximately 30,000 journals publishing some 2 million articles per year right now. This says nothing of the number of articles submitted. It comes as global scientific output is estimated to be doubling every nine years.
This rapid growth has put a huge burden on editors and peer reviewers alike. With only 20% of scientists performing peer reviews, this explosive growth is being shouldered by a relatively small group of researchers, many of whom have extremely short careers.
With those workloads increasing, publishers have been regularly turning to technology to help make the peer review process faster and more effective. Some key examples include:
- The peer review platform Scholar One teaming with UNSILO to produce an AI that can interpret natural language, extract the key findings and compare those findings to other papers in its database.
- Statcheck, an automated tool that can analyze the statistics in a paper and spot any abnormalities (a minimal sketch of this kind of check appears after this list).
- ScienceIE, a competition for teams to create algorithms that could extract the basic facts and ideas from scientific papers.
- Publisher Elsevier has developed EVISE, an AI system that, in addition to checking a work for plagiarism, suggests potential peer reviewers and handles correspondence.
- Artificial Intelligence Review Assistant or AIRA, which combines other available tools to automatically detect plagiarism, ethical concerns and other potential issues in a paper so they can be flagged for further review. It also suggests peer reviewers.
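To make the Statcheck item above concrete, here is a minimal sketch of that kind of consistency check, assuming a Python environment with SciPy installed: it recomputes the two-tailed p value implied by a reported t statistic and its degrees of freedom, then flags a mismatch with the p value the paper reports. This is not Statcheck’s actual code; the function name, tolerance and example figures are illustrative assumptions.

```python
# Sketch of a statcheck-style consistency check (illustrative, not
# Statcheck's real implementation): recompute the p value implied by a
# reported t statistic and degrees of freedom, then compare it to the
# p value the paper reports.
from scipy import stats

def check_reported_t_test(t_stat, df, reported_p, tolerance=0.0005):
    """Return (recomputed_p, consistent) for a two-tailed t test."""
    recomputed_p = 2 * stats.t.sf(abs(t_stat), df)  # two-tailed p value
    return recomputed_p, abs(recomputed_p - reported_p) <= tolerance

# Example: a paper reports "t(28) = 2.10, p = .02".
p, ok = check_reported_t_test(t_stat=2.10, df=28, reported_p=0.02)
print(f"recomputed p = {p:.4f}, consistent with report: {ok}")
# The recomputed p is roughly .045, so the reported .02 would be flagged.
```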
These are just some of the systems, either already developed or in development, that aim to automate pieces of the peer review process.
While none of these tools aims to replace human peer reviewers, seeking instead to assist them, they all sit as a front line between the person submitting the paper and the peer reviewer. In every case, a paper that runs afoul of these bots will face an uphill climb to publication.
This creates a series of uncomfortable questions. What if the bots are wrong? How will researchers respond? And how will it impact science?
. . . .
YouTube has become well known as a place where videos are automatically (or semi-automatically) claimed or removed without cause. Some of this is just poor implementation of Content ID, but much of it is due to poor data and poor matching.
But, more to the point, these bots have changed YouTube culture. YouTubers often spend as much time trying to avoid Content ID or other YouTube bots as they do making their videos. Those who make a living on YouTube are constantly in a fight to avoid demonetization and copyright claims. That has an impact on what they create and how.
Though YouTubers are well known for loudly protesting YouTube’s policies, they have also been very quick to adapt. As YouTube policies and practices have changed, YouTubers have quickly adapted to ensure that their latest uploads aren’t caught in the filters.
It’s an ongoing cycle: YouTube makes a change, whether in policy, in matching or in the content matched, and YouTubers move to get around it.
With YouTube, this cycle is very quick because YouTubers typically know what the bots said about their videos almost instantly. With academic publishing that iteration period will be much longer because of the delay in getting the feedback. Still, there’s no reason to assume the same cycle won’t repeat.
. . . .
Even without the aid of advanced bots, editors and peer reviewers have sought out ways to streamline the process and make it easier to determine what research is valuable and what is not worthy of publication.
In recent years, one of the tools that has become both the most used and the most controversial is the p value. To extremely oversimplify it, a p value (or p-value) is a measure of probability: it estimates how likely you would be to get data at least as extreme as a study’s if the null hypothesis were true. A lower p value is therefore read as stronger evidence against the null hypothesis, and in many fields only papers with a p value of less than .05 are considered “publishable”.
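For readers who have never computed one, here is a minimal sketch in Python (using SciPy; the dataset is invented purely for illustration) of a one-sample t test producing a p value against the null hypothesis that the true mean is zero.

```python
# Minimal p value illustration: a one-sample t test against the null
# hypothesis that the population mean is zero. Data are invented.
from scipy import stats

measurements = [0.8, 1.2, 0.3, 1.9, 0.7, 1.1, 0.4, 1.5]

t_stat, p_value = stats.ttest_1samp(measurements, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Under the common threshold, a p value below .05 would count as
# "statistically significant" and, for many journals, as publishable.
```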
To be clear, the use of the p value in this manner is very controversial. However, that hasn’t stopped many publications from using the p value as a gatekeeping tool.
Researchers have responded to this with what is known as p value hacking or p value manipulation. This can be done in many different ways, including excluding data that raises the p value, limiting reported findings to those with an appropriately low p value, and so forth.
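As an illustration of the first tactic (a sketch for demonstration only, not anyone’s documented method), here is how selectively excluding inconvenient data points can drag a p value below the .05 threshold; the data and function name are invented for this example.

```python
# Illustration of p value hacking by selective exclusion: keep dropping
# the observation that most weakens the result until p < .05.
# Invented data; for demonstration only.
from scipy import stats

def hack_until_significant(data, alpha=0.05):
    """Drop the lowest remaining point until the t test crosses alpha."""
    data = sorted(data)
    _, p = stats.ttest_1samp(data, popmean=0.0)
    while p >= alpha and len(data) > 3:
        data = data[1:]  # exclude the lowest point, which pulls the mean toward the null
        _, p = stats.ttest_1samp(data, popmean=0.0)
    return data, p

sample = [-1.4, -0.9, 0.2, 0.5, 0.8, 1.1, 1.3]  # not significant as a whole
kept, p = hack_until_significant(sample)
print(f"kept {len(kept)} of {len(sample)} points, p = {p:.4f}")
```

In this toy example, dropping just two inconvenient points turns a null result into one that clears the threshold.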
Link to the rest at Plagiarism Today
PG notes that a great many academic publishers are money machines that endlessly repeat how valuable they are for scientists, researchers, etc., around the world as they regularly increase their subscription charges.
The OP caused PG to wonder whether trade publishers do any sort of check for plagiarism before they release a book.