“AI Will Never Be Good Enough to Replace Real Authors” Is the Wrong Conversation


From BookRiot:

There are so many stories regarding AI right now that it feels like a five-alarm fire mixed with a Whac-A-Mole game.

. . . .

And no, it’s not a five-alarm fire. But it is the very important pocket of time wherein a thing needs some form of regulation before we are fully immersed in the consequences and everyone learns the hard way what the saying “you can’t put the toothpaste back in the tube” means.

AI (artificial intelligence) is defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” It is being used in a lot of industries in many ways, and it was already in use before all the recent headlines. So to be clear: what I am specifically talking about is the way AI is being used in place of writers, journalists, and other creatives, and in grifts like where a non-author tricks consumers into buying their AI word salad book instead of the intended author’s properly written book.

There are certain topics in the world of publishing that end up feeling like they just never stop being discussed, one being any version of “Who gets to write what book?” in response to when a writer writes — or is asking how to write — way out of their lane. The thing with that specific question is, as Alexander Chee perfectly explains, “the question is a Trojan horse, posing as reasonable artistic discourse when, in fact, many writers are not really asking for advice — they are asking if it is okay to find a way to continue as they have.”

I keep thinking about this every time (daily at this point) I see people — well-intentioned, I think — saying this isn’t a big deal and everything is fine, because AI will never be good enough to replace writers and authors (insert all creatives). Being that AI is just scraping all the information that is already out there to toss it into a blender and output something “new,” I am not actually worried that it will ever be good enough to replace creatives. But that’s not the problem for me. While I get where this idea is coming from, I feel it gives a very false sense of “It’ll be fine!” and “Don’t worry!” which keeps the conversations that should be had from happening.

Instead, we should be asking: Will those in power care that AI isn’t as good at creating what a human can create when their goal in using it is to not pay actual writers, authors, and creatives? Do scammers care that the “travel book” they put up on Amazon, “written” by AI, is garbage that no consumer would knowingly pay for, as long as their scam succeeds in making the sale? If Amazon gets a cut of every sale from buyers unaware that the book they purchased isn’t the book they intended to buy, will they implement something to stop it? How time-consuming is it going to be for very real people in publishing and media to have to weed out the flood of AI-generated submissions? How costly will it be for businesses to have to implement ways to spot, catch, and/or stop scammers using AI?

I deeply miss what Etsy used to be and I think a lot about how it went from being this incredible site dedicated to artists to no longer being that: “Etsy went public in 2015. Last year, the company reported annual revenue of nearly $2.6 billion — a more than 10 percent spike over the year prior. Among other issues, these creators see the increase in counterfeiters on the platform as a result of Etsy prioritizing growth over being able to enforce its standards.” It is yet another example that once again leads me to think that we shouldn’t focus on whether AI is, or ever will be, good enough to replace writers and authors.

Link to the rest at BookRiot

PG says that trying to regulate AI at this point in time is ridiculous.

AI research and development is extraordinarily important for national security and all aspects of health and health care.

Given these stakes, it will be a while before legislators and regulators get around to AI and authors.

Besides, absent a book’s creator admitting she/he used artificial intelligence to write a book, how is anyone going to know for certain whether the author used AI to assist in the creation of a book or parts of a book?

Whether an author used AI to create a first draft of a book, then edited the original manuscript to refine what the AI produced?

Whether an author wrote a detailed 50-page book outline to feed into an AI that created the manuscript?

24 thoughts on ““AI Will Never Be Good Enough to Replace Real Authors” Is the Wrong Conversation”

    • AI does fine rewriting MLB game reports. But that’s because the typical game report includes an inning-by-inning listing of game action, which is collected (along with way more data than that) for the benefit of the teams, not the fans.

      With this kind of data, an AI can easily assemble a better report than a newspaper intern.
      https://www.mlb.com/gameday/brewers-vs-cubs/2023/08/29/716798/final/wrap

      High school sports don’t collect data that religiously.

      AI or no AI, GIGO rules.

  1. “[A]bsent a book’s creator admitting she/he used artificial intelligence to write a book, how is anyone going to know for certain whether the author used AI to assist in the creation of a book or parts of a book?”

    You nailed it, PG. Integrity or the lack thereof is the problem, and it will always be as long as there is a human being willing to cheat to get what s/he wants.

    When people use AI to generate fiction and then call it their own, they are cheating the reading public, plain and simple, even if they warn them in advance they are being cheated.

    Of course, some say, “But really AI is only another tool to help me create original work.”

    Okay, snicker. Whatever you have to tell yourself. But you wouldn’t be saying crap like that if you didn’t know you were cheating. And you know it.

    Of course, today cheating is the expected and even applauded norm in sports, politics, and pretty much every other aspect of life.

    Times change, but what doesn’t change is the knowledge you hold within yourself of what is right and what is wrong. Rich or poor, ill or able-bodied, male or female, your personal integrity is down to you.

    • No argument.

      Except… tools have many uses, and you can’t condemn or prohibit a tool because it can be used for distasteful purposes (cf. Betamax or torrents).

      Your issue shouldn’t be with the tool or those that use it for ethical purposes.

      A person might spend a hundred hours using Adobe Photoshop to produce a “less than professional” rendition of a cover scene, or they could spend four hours iterating with DALL-E and then another six tweaking it with Photoshop to end up with a publishing-quality cover.

      Would you condemn them for claiming ownership? Would you do it if, instead of DALL-E, they used a camera for the starting point?

      If instead of art, they used an LLM to check grammar and word usage to ensure their “Okie trailer park girl” speaks like a true human?

      Tech is neutral. It is humans who make it good or bad depending on how they use it.

      In all things tech, the most likely answer is “it depends”.

      The good news is nobody is forcing anybody to use LLMs. There are still folks who do their writing on Olivettis or even longhand. To each their own.

      • “you can’t condemn or prohibit” “Your issue shouldn’t be with the tool or those that use it for ethical purposes.”

        1. You’re creating your own argument. I said neither of those things.
        2. I said nothing about covers or other visual art.
        3. You wrote “Tech is neutral. It is humans who make it good or bad depending on how they use it.”
        4. I couldn’t agree more. And how they use it depends on their integrity.

        And yes, absolutely, to each his or her own.

    • There is no misrepresentation if I produce a book using AI, and tell consumers what I did.
      No cheating of consumers.

      Some authors may not like the competition for the consumer dollars, and may want to define what is offered to consumers. OK. People think lots of things.

      I’m reminded of the warnings about the Tsunami of Dreck ten years back. Critics fretted over the idea that the market would be flooded with horrible books that would ruin the market for everyone. Seems the consumer did just fine without guidance from authors.

      And God Bless the ghostwriter, for without him we would never have books by politicians and celebrities.

      • The Tsunami of Dreck myth is the legacy of the gatekept era, when the publishers controlled market access through their release slots. Consumers only bought what came to market, and the bookselling business was all about “stock it and they will come.” After all, they had no other option.

        Once they lost that control and their self-anointed position as “guardians of culture,” they had to find another excuse for their existence. Hence “only our books are any good.” Anything they didn’t vet was by definition crap.

        The only problem is that once consumers had options and saw the price differential, they figured they had little to lose and found the indie pool just fine. Consumers will buy whatever they feel like buying, so buying or not buying software-generated narratives will be based on the quality of the stories, not the process. They won’t care how the sausage is made if it’s tasty. If it’s rancid…

        But regardless of the process, there will be a human at the beginning and the end, or there will be no book. Call them author or not, they will be the creator.

      • “There is no misrepresentation if I produce a book using AI, and tell consumers what I did.”

        I agree. No misrepresentation. Except in that statement. Perhaps it should read, “There is no misrepresentation if AI produces a book using my prompts and I tell consumers what I did.”

        “No cheating of consumers.”

        I disagree. Just because they’re smiling when they walk away doesn’t mean you didn’t cheat.

        Plus, if AI churns out a perfectly good, entertaining story, why do readers need you?

        God bless the ghostwriter? Under a paid contract, a ghostwriter (writer) takes what s/he is given by the “author” and turns it into a readable form.

        How much do you pay your AI to create a story for you? Ghostwriters will probably be among the first affected. Why should a celebrity pay a ghostwriter when s/he can simply upload the same notes s/he would have given a ghostwriter into AI?

        Free is always better, right?

        • Smiles have nothing to do with cheating. When the buyer is fully informed, and it is a voluntary purchase, smiles or frowns don’t matter.

          Readers don’t need me if AI churns out what they want. They don’t need elevator operators either. But, as mentioned above, I’m the one using the tool, so I am needed to say, “Write a book about an out-of-work actuary in the style of an author following his characters through the story and letting them tell their own tale.”

          I think a ChatGPT subscription is $30/month. I suspect a celebrity might use AI rather than a ghostwriter if AI were cheaper and faster. I think it more likely that a celebrity would hire someone skilled in using AI. Maybe an AI Whisperer?

        • The same psychopaths who market junk food engineered by food chemists to be addictive also peddle pills to tens of millions of us to treat the sickness that they created.

          Soon they will market something far worse, semisynthetic junk food for our souls that will no doubt be engineered by neuroscientists to be psychically just as addictive.

          Jerry Pournelle said on more than one occasion that in a truly free market, the flesh of children would be sold on the streets. I suspect that he was an optimist.

    • Harvey – I view AI as another potentially useful tool an author may or may not decide to use.

      I’ve used Grammarly for most of my serious writing for years. I use it to correct grammar, usage, etc., etc. I’ve appreciated the many enhancements and improvements Grammarly has added over the years.

      Here’s what Grammarly says about its company vision:

      At Grammarly, our goal is to make it possible for everyone to be heard and understood. But there’s no single set of rules that will work for every person in every situation. The beauty of AI is that it can combine many types of information and adjust to the situation at hand.

      Language is complex and messy, after all. Any system that aims to help you use language better needs to be flexible enough to accommodate all the factors that go into successful communication. Which words are the right ones? Is that sentence easy to follow? Does your recipient appreciate semicolons, or should you bend some of the rules to create a more relaxed tone?

      Our goal is to help you express yourself in the best way possible, whether you’re applying for a job or texting a joke to your friends.

      I don’t think Grammarly’s vision for its products/services is inconsistent with its recently announced AI enhancements rolled into its product offerings.

      • Grammarly has generally focused on a document’s narrative voice, right? Casting it in standard, grammatically correct *modern* English, right?

        Seems to me it shouldn’t take much “AI” to adapt it for casting text into period/dialect accurate lingo for historical fiction.

      • PG, I have no argument with anything you said.

        The genie is out of the bottle and nobody can put it back in. My whole point is that each of us must choose whether and how to use it, and to what end.

        Within the very limited frame of presenting an AI-generated fiction as the original work of a human being, some will choose to cheat both themselves (of the experience of writing an authentic story) and readers, and some won’t. Either way is fine. It’s their life; they have to live it.

        Of course, using a spell checker is not the same thing as feeding AI prompts and sitting back as it spits out a short story, novella or novel.

        But I do realize I’m beating an archaic drum here.

        As an aside re Grammarly, I decided against using that for anything when, in a television advertisement, the company defined a run-on sentence as “a really long sentence.” With “grammar” right there in their name! For a second there, I thought my head might explode. (grin)

        A run-on sentence can be as short as four words. “Bill ran Sally fled” is a run-on sentence. And any sentence (simple, complex, compound or compound-complex) can be “really long” without being a run-on sentence. Apparently Grammarly doesn’t know that. Incredible.

        That said, I have absolutely no issue with anyone else finding Grammarly useful. I just won’t personally ever use it or recommend it.

        I feel very much like the character played by Burgess Meredith in the old Twilight Zone episode titled (if I remember correctly) “Obsolete.” (grin)

        Another archaic saw comes to mind: “Stand for something or fall for anything.”

  2. Never mind an “AI” cooking up a first draft: instead, think of “AI” being used for targeted editing by authors:
    “Review the manuscript for the consistency of XXXX’s dialog.”
    “Rewrite XXXX’s phrasing to align with a Boston Brahmin background.”
    “Identify any plot elements not settled by the end.”
    “Given the situation in chapter 5, what else could XXXX have done to survive?”
    None of those require creativity, but they do require a “foolish consistency” to reduce the drudge work (a rough sketch of how such a request might be issued appears below).
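
    For illustration only, here is a minimal sketch of how one of those targeted-editing requests might be sent to a current LLM service. It assumes the OpenAI Python client with an API key already configured, and the model name ("gpt-4o") and manuscript file ("chapter_05.txt") are hypothetical stand-ins; whether the output would actually be usable is exactly the open question raised here.

    # Minimal sketch; the model name and file name are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical manuscript excerpt to be checked.
    excerpt = open("chapter_05.txt", encoding="utf-8").read()

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You are a consistency editor for fiction manuscripts."},
            {"role": "user",
             "content": "Review the excerpt for the consistency of the narrator's "
                        "dialog and identify any plot elements not settled by the "
                        "end.\n\n" + excerpt},
        ],
    )

    print(response.choices[0].message.content)  # the model's editing notes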

    Today’s code can’t do any of that, but it’s early. Wait five years, until all laptops come with built-in machine-learning processors. Right now that kind of hardware is limited to cloud-computing data centers, high-end graphics cards (~$1,000) and, in a limited/placeholder form, Xbox gaming consoles. Just remember it wasn’t long ago that even low-end graphics processing units were expensive add-ins, yet today even $99 stick PCs have them. There are even rumors Windows 12 might require one.

    In fact, Qualcomm is working on “AI” processors for next-generation phones.

    https://www.cnet.com/tech/mobile/generative-ai-is-coming-to-phones-next-year-thanks-to-qualcomm-chips/

    “AI” won’t soon replace authors, but proofreaders and editors? Those are definitely at risk. Because “AI” may not be creative any time soon, but wordsmithing is very much in its wheelhouse.

    The OP is correct, if nothing else because of Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

    • AI may not soon replace authors, but consumers may be very happy with the AI product.

      I like movies. Lots of movies. But I am totally at a loss when people talk about the photography, lighting, coloring, focus, angles, etc. Apparently, I like movies that are artistic crap. OK. I still consume them. I expect the same happens with books.

      • Have you heard of software that, on its own, decides to tell a story and chooses the subject, tone, and plot?
        Do you expect to see it any time soon?

        If you look at the so-called “AI” art there is a human at the beginning, in the middle, and the end, making *choices* every step of the way. Text should be no different, with humans as instigators, guides, and judges every step of the way.

        “AI” is just a tool for human motivation, sensibility, and judgment. No different than a power saw. Books are not magic, whether compiled directly by humans or indirectly.

        • Sure there will be a human involved. He’s the one making money using AI. And I never heard of AI saving up for a new Porsche.

          • Right.
            The conflict isn’t human vs “AI” but rather human with “AI” vs human without. Akin to bringing a knife to a gunfight.

  3. PG – wouldn’t the right to free speech trump arguments for regulation? If I distribute content that was written by an AI – even if I admit that it was – that still qualifies as MY speech, doesn’t it?

    • I’m still waiting to hear a suggested regulation. I don’t think they know what they want. Just people waving their hands and yelling, “Regulation! Someone comfort me!”

    • You’re right that free speech would rule out a government ban or heavy regulation of AI-written books, D. But that doesn’t preclude consumer-protection regulation requiring labels identifying AI-written books, or book-retailer boycotts, or authors raising consciousness about the deleterious effects that AI-written or AI-voiced books have on human authors and narrators.
