Depending on whom you ask, AI is either a groundbreaking technology that is going to cause major upheavals in our society or a fad that will burn itself out in due time.
. . . .
But, while there has been wide exploration of what AI means for a variety of jobs, far less attention has been paid to what it means for writing and plagiarism, in particular when looking at academic integrity.
Though the idea of robots writing school papers might seem to be the realm of science fiction, the truth is that robots are already writing content. In September 2017, the Washington Post announced that its AI reporter, dubbed Heliograf, had penned some 850 stories in the prior year, including some 300 reports from the Rio Olympics.
The year before, the AP announced that it would use AI to produce some 3,000 quarterly earnings reports, up from 300 per quarter the prior year.
In short, you’ve likely already read things written by an AI and didn’t realize it. While these dispatches are, generally, short and formulaic, the technology is moving forward and dipping its toes into more and more complicated tasks.
. . . .
Currently, there are no commercially available AI tools that students, or any writers, can use as a substitute for original work. The tools that do exist are enterprise-level systems that aid in producing short, formulaic work that doesn't really require a human author.
. . . .
The closest things that do exist are so-called automated paraphrasing tools, which semi-intelligently replace words in a text with synonyms. Though they have been touted as a threat to academic integrity, they are best known for producing low-quality work that, while able to pass inspection in a plagiarism checker, is ultimately unreadable.
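To see why the output of these tools is so often unreadable, it helps to look at the mechanism. At their core they do little more than blind word substitution. The following is a minimal sketch of that idea, not the code of any real product; the toy synonym table and the `spin` function are purely illustrative:

```python
import re

# A toy synonym table. Real "article spinners" use much larger
# thesauri, but the basic mechanism is the same: swap words
# without regard for context, tone, or idiom.
SYNONYMS = {
    "quick": "fast",
    "brown": "chestnut",
    "jumps": "leaps",
    "lazy": "idle",
}

def spin(text):
    """Replace each known word with a synonym, leaving other text intact."""
    def swap(match):
        word = match.group(0)
        repl = SYNONYMS.get(word.lower())
        if repl is None:
            return word
        # Preserve a leading capital letter.
        return repl.capitalize() if word[0].isupper() else repl
    return re.sub(r"[A-Za-z]+", swap, text)

print(spin("The quick brown fox jumps over the lazy dog."))
# → "The fast chestnut fox leaps over the idle dog."
```

Because the substitution ignores context entirely, a word with several senses gets the wrong synonym as often as the right one, which is exactly how these tools end up fooling a text-matching plagiarism checker while failing any human reader.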
But while it’s easy to make fun of these primitive tools, it’s important to remember that, just over a decade ago, they were the pinnacle of article-rewriting technology: expensive systems that spammers paid thousands of dollars per year to use.
What was once a high-priced secret can, in 2019, be found by any student with a Google search and used simply by pasting text into a form. No money or skill is required.
. . . .
Just like we didn’t jump straight into self-driving cars, we’re not going to jump straight into bots that fully automate the writing process.
Instead, the first real push into AI will likely be automated editing tools. Much like how lane assist and automated braking are a natural extension of cruise control, automated editing tools are a natural extension of spelling and grammar checkers that we have now.
To some extent, we’re already seeing this. Grammarly, for example, offers a suite of writing assistance tools, many of them going beyond regular grammar/spell checking and replicating the function of a human editor. The Hemingway Editor does something similar using the famous author as its inspiration.
As time goes on, tools such as these will become better and better substitutes for human editors. Though it’s unlikely they’ll fully replace human editors, they will be able to perform more of that function and edit works in more significant ways.
However, at this point such tools don’t raise any real ethical concerns. The changes to the work are decidedly human-driven: it is up to the author to approve or reject the proposed changes. This leaves no question as to the authorship of the work and is certainly no more dangerous from a plagiarism perspective than a human editor.
. . . .
[I]t stands to reason that the tools will become more and more automated. We’ll go from approving every suggested change to tools that, by and large, edit a work at the push of a button.
These tools could, in theory, also fact check a work. For example, if a student lists the wrong year for a big battle in their essay, it could correct the statement.
From an authorship standpoint, this is when things start to get complicated. The first time students or reporters are dealing with authorship and AI likely won’t be trying to take credit for what an AI wrote, but trying to blame the AI for mistakes the program inevitably makes.
However, this raises an interesting authorship question. If an AI significantly edits a work and those changes are not expressly approved by the original author, who is really writing the piece? Is it all the responsibility of the original author? A joint authorship? Or is the AI responsible for its mistakes?
. . . .
The endgame for AI and writing is, obviously, push-button writing: the ability to feed a bot a topic and some parameters and have it spit out a fully formed work.
. . . .
[W]hat happens when an AI bot commits plagiarism, libel or some other literary crime? Who takes responsibility?
Historically, it’s been as cut and dried as saying, “If your name appears on it, you’re the author and responsible for it.” We’ve even adopted this approach when ghostwriters are involved, holding the named author to blame. Does that work with AI?