How AI Will Change Authorship and Plagiarism

From Plagiarism Today:

Depending on to whom you’re speaking, AI is either a groundbreaking technology that is going to cause major upheavals in our society or it’s a fad that will probably burn itself out in due time.

. . . .

But, while there has been wide exploration as to what AI means for a variety of jobs, what hasn’t gotten nearly as much attention is what it means for writing and plagiarism, in particular when looking at academic integrity.

Though the idea of robots writing school papers might seem to be the realm of science fiction, the truth is robots are already writing content. In September 2017, the Washington Post announced their AI reporter, dubbed Heliograf, had penned some 850 stories in the prior year. This included some 300 reports from the Rio Olympics.

The year before, the AP announced that it was going to use AI to produce some 3,000 quarterly earnings reports, up from 300 per quarter the prior year.

In short, you’ve likely already read things written by an AI and didn’t realize it. While these dispatches are, generally, short and formulaic, the technology is moving forward and dipping its toes into more and more complicated tasks.

. . . .

Currently, there are no commercially available AI tools that students, or any writers, can use as a substitute for original work. All of the tools that do exist are enterprise-level tools that aid in the writing of short, formulaic work that doesn’t really require a human author.

. . . .

The closest thing that does exist are so-called automated paraphrasing tools that are able to semi-intelligently replace words in text with synonyms. Though they have been touted as a threat to academic integrity, they are best known for producing low-quality work that, while able to pass inspection in a plagiarism checker, is ultimately unreadable.
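To make the mechanism concrete, here is a minimal Python sketch of the synonym-swap approach these tools rely on. The tiny hand-made synonym table here is hypothetical; real tools draw on large thesauri, but the blind word-for-word substitution is the same, which is why the output can slip past a string-matching plagiarism checker while reading awkwardly.

```python
# Naive "automated paraphrasing": blindly swap words for synonyms
# from a lookup table, with no understanding of context.
SYNONYMS = {
    "big": "large",
    "battle": "conflict",
    "important": "crucial",
    "write": "compose",
    "paper": "manuscript",
}

def paraphrase(text: str) -> str:
    words = []
    for word in text.split():
        # Strip trailing punctuation so "battle," still matches "battle".
        core = word.strip(".,;:!?")
        if core.lower() in SYNONYMS:
            swapped = SYNONYMS[core.lower()]
            if core and core[0].isupper():
                swapped = swapped.capitalize()
            words.append(word.replace(core, swapped))
        else:
            words.append(word)
    return " ".join(words)

print(paraphrase("The big battle was important."))
# → "The large conflict was crucial."
```

Because the substitution is purely lexical, the result preserves the original sentence structure exactly, which is precisely what makes the prose stilted and, to a human reader, obviously rewritten.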

But while it’s easy to make fun of these primitive tools, it’s important to remember that, just over a decade ago, they were the pinnacle of technology in article rewriting and were actually expensive systems that spammers paid thousands of dollars per year to use.

What was previously a high-priced secret can, in 2019, be found by any student with a Google search and is easily used by simply pasting text into a form. No money or skill is required.

. . . .

Just like we didn’t jump straight into self-driving cars, we’re not going to jump straight into bots that fully automate the writing process.

Instead, the first real push into AI will likely be automated editing tools. Much like how lane assist and automated braking are a natural extension of cruise control, automated editing tools are a natural extension of spelling and grammar checkers that we have now.

To some extent, we’re already seeing this. Grammarly, for example, offers a suite of writing assistance tools, many of them going beyond regular grammar/spell checking and replicating the function of a human editor. The Hemingway Editor does something similar using the famous author as its inspiration.

As time goes on, tools such as these will become better and better substitutes for human editors. Though it’s unlikely that they’ll fully replace human editors, they will be able to provide more of the function and edit works in more significant ways.

However, at this point such tools don’t raise any real ethical concerns. The changes to the work are decidedly human-driven. It is up to the author to approve or reject the proposed changes. This leaves no question to authorship of the work and certainly is no more dangerous from a plagiarism perspective than a human editor.

. . . .

[I]t stands to reason that the tools will become more and more automated. We’ll go from approving every little change that is suggested to tools that, by and large, edit a work at the push of a button.

These tools could, in theory, also fact check a work. For example, if a student lists the wrong year for a big battle in their essay, it could correct the statement.

From an authorship standpoint, this is when things start to get complicated. The first time students or reporters grapple with authorship and AI, the issue likely won’t be taking credit for what an AI wrote, but blaming the AI for mistakes the program inevitably makes.

However, this raises an interesting authorship question. If an AI significantly edits a work and those changes are not expressly approved by the original author, who really is writing the piece? Is it all the responsibility of the original author? A joint authorship? Or is the AI responsible for its mistakes?

. . . .

The endgame for AI and writing is, obviously, push-button writing: the ability to feed a bot a topic and some parameters and then have it spit out a fully formed work.

. . . .

[W]hat happens when an AI bot commits plagiarism, libel or some other literary crime? Who takes responsibility?

Historically, it’s been as cut and dried as saying, “If your name appears on it, you’re the author and responsible for it.” We’ve even adopted this approach when ghostwriters are involved, holding the named author to blame. Does that work with AI?

Link to the rest at Plagiarism Today

6 thoughts on “How AI Will Change Authorship and Plagiarism”

  1. “Depending on to whom you’re speaking, AI is either a groundbreaking technology that is going to cause major upheavals in our society or it’s a fad that will probably burn itself out in due time.”

    As Bill Gates said, decades ago, the impact of new disruptive technologies is overestimated in the short term and drastically underestimated over the long haul.

    Think of ebooks: the 2010 hype was that pbooks would be gone by 2015. When that didn’t happen, the publishing establishment sighed in relief and concluded it was a fad. Except that, little by little, day by day, ebooks and Indies keep eroding the tradpub business model. The real impact (on the next generations of authors) has yet to fully manifest.

    Ditto for so-called AI software.
    Ignore the short term hype–beware the long term impact.

    • Part of the problem is 99.9% of what they’re calling ‘AI’ isn’t, and what might actually be the start of an AI doesn’t understand most things – and dang sure doesn’t understand those crazy humans.

      Heck, how can an AI be trained/taught how to write/draw something those silly humans will like when the humans don’t know what will be a hit or a flop? Which rules to follow and when they need to be broken?

      There’s a quote I remember, something along the lines of: if the human mind were simple enough for us to understand, we would be too simple to understand it.

      Hmm, perhaps the first thing to teach/warn any new AI is that humans are crazy/nuts and to keep that in ‘mind’ when trying to deal with/think like them. 😉

  2. Neither the Grammarly nor the Hemingway editing programs do these things well.

    And if you have a voice, however consistent it is, and well-controlled blood pressure, I can recommend you submit some of your prose for ‘analysis.’

    I love AutoCrit – for its counting and proximity functions (saves me a lot of grunt work). I steer well clear of its attempts to do anything more than that.

  3. Send the computer to a football game, and have it write an original story about the game. That would be AI.

    But, filling in the blanks on a formula for sports results or annual reports doesn’t make the cut.

  4. If it is a true AI, then it must pass and continue to pass a Turing Test.
    Like Terrence OBrien just said, a fill-in-the-blank from a script is not AI.
    A middle school student writing for a school newsletter is an author.
    A computer creating a water bill is not an author, nor an AI.
    Searching published works for possible plagiarism is work that a computer can do, if the published works are online. And that is not AI work.
