How Collaborating With Artificial Intelligence Could Help Writers of the Future

From The Literary Hub:

Art has long been claimed as a final frontier for automation—a field seen as so ineluctably human that AI may never master it. But as robots paint self-portraits, machines overtake industries, and natural language processors write New York Times columns, this long-held belief could be on the way out.

Computational literature or electronic literature—that is, literature that makes integral use of or is generated by digital technology—is hardly new. Alison Knowles used the programming language FORTRAN to write poems in 1967 and a novel allegedly written by a computer was printed as early as 1983. Universities have had digital language arts departments since at least the 90s. One could even consider the mathematics-inflected experiments of Oulipo as a precursor to computational literature, and they’re experiments that computers have made more straightforward. Today, indie publishers offer remote residencies in automated writing and organizations like the Electronic Literature Organization and the Red de Literatura Electrónica Latinoamericana hold events across the world. NaNoGenMo—National Novel Generation Month—just concluded its sixth year this April.

As technology advances, headlines express wonder at books co-written by AI advancing in literary competitions and automated “mournful” poetry inspired by romance novels—with such resonant lines as “okay, fine. yes, right here. no, not right now” and “i wanted to kill him. i started to cry.” We can read neo-Shakespeare (“And the sky is not bright to behold yet: / Thou hast not a thousand days to tell me thou art beautiful.”), and Elizabeth Bishop and Kafka revised by a machine. One can purchase sci-fi novels composed, designed, blurbed, and priced by AI. Google’s easy-to-use Verse by Verse promises users an “AI-powered muse that helps you compose poetry inspired by classic American poets.” If many of these examples feel gimmicky, it’s because they are. However, that doesn’t preclude AI literature that, in the words of poet, publisher, and MIT professor Nick Montfort, “challenges the way [one] read[s] and offers new ways to think about language, literature, and computation.”

. . . .

Ross Goodwin’s 1 the Road (2018) is often described as one of the first novels written completely by AI. To read it like a standard novel wouldn’t get one far, though whether that says more about this text or the traditional novel could be debated. Much of the book comprises timestamps, location data, mentions of businesses and billboards and barns—all information collected from Foursquare data, a camera, GPS, and other inputs. But the computer also generated characters: the painter, the children. There is dialogue; there are tears. There are some evocative, if confused, descriptions: “The sky is blue, the bathroom door and the beam of the car ride high up in the sun. Even the water shows the sun” or “A light on the road was the size of a door, and the wind was still so strong that the sun struck the bank. Trees in the background came from the streets, and the sound of the door was falling in the distance.” There is a non-sequitur reference to a Nazi and dark lines like “35.416002034 N, -77.999832991 W, at 164.85892916 feet above sea level, at 0.0 miles per hour, in the distance, the prostitutes stand as an artist seen in the parking lot with its submissive characters and servants.”

K Allado-McDowell, who in their role with the Artist + Machine Intelligence program at Google supported 1 the Road, argued in their introduction to the text that 1 the Road represented a kind of late capitalist literary road trip, where instead of writing under the influence of amphetamines or LSD, the machine tripped on an “automated graphomania,” evincing what they more recently described to me as a “dark, normcore-cyberpunk experience.”

To say 1 the Road was entirely written by AI is a bit disingenuous. Not because it wasn’t machine-generated, but rather because Goodwin made curatorial choices throughout the project, including the corpus the system was fed (texts like The Electric Kool-Aid Acid Test, Hell’s Angels, and, of course, On the Road), the surveillance camera mounted on the Cadillac that fed the computer images, and the route taken. Goodwin, who is billed as the book’s “writer of writer,” leans into the questions of authorship that this process raised, asking: is the car the writer? The road? The AI? Himself? “That uncertainty [of the manuscript’s author] may speak more to the anthropocentric nature of our language than the question of authorship itself,” he writes.

AI reconfigures how we consider the role and responsibilities of the author or artist. Prominent researchers of AI and digital narrative identity D. Fox Harrell and Jichen Zhu wrote in 2012 that the discursive aspect of AI (such as ascribing intentionality through words like “knows,” “resists,” “frustration,” and “personality”) is often neglected but just as pertinent as the technical underpinnings. “As part of a feedback loop, users’ collective experiences with intentional systems will shape our society’s dominant view of intentionality and intelligence, which in turn may be incorporated by AI researchers into their evolving formal definition of the key intentional terms.”

That is, interactions with and discussions about machine intelligence shape our views of human thought and action and, circularly, humanity’s own changing ideologies around intelligence again shape AI; what it means to think and act is up for debate. More recently, Elvia Wilk, writing in The Atlantic on Allado-McDowell’s work, asks, “Why do we obsessively measure AI’s ability to write like a person? Might it be nonhuman and creative?” What, she wonders, could we learn about our own consciousness if we were to answer this second question with maybe, or even yes?

This past year, Allado-McDowell released Pharmako-AI (2020), billed as “the first book to be written with emergent AI.” Divided into 17 chapters on themes such as AI ethics, ayahuasca rituals, cyberpunk, and climate change, it is perhaps one of the most coherent literary prose experiments completed with machine learning, working with OpenAI’s large language model GPT-3. Though the human inputs and GPT-3 outputs are distinguished by typeface, the reading experience slips into a linguistic uncanny valley: the certainty GPT-3 writes with, and the way its prose is at once convincingly “human” and yet just slightly off, unsettle assumptions around language, literature, and thought, an unsettling furthered by the continuity of the “I” between Allado-McDowell and GPT-3.

. . . .

But as AI “thinking” reflects new capacities for human potential, it also reflects humanity’s limits; after all, machine learning is defined by the sources that train it. When Allado-McDowell points out the dearth of women and non-binary people mentioned by both themselves and by GPT-3, the machine responds with a poem that primarily refers to its “grandfather.” Allado-McDowell intervenes: “When I read this poem, I experience the absence of women and non-binary people.” “Why is it so hard to generate the names of women?” GPT asks, a few lines later.

Why indeed. Timnit Gebru, a prominent AI scientist and ethicist, was forced out of Google for a paper that criticized the company’s approach to AI large language models. She highlighted the ways these obscure systems could perpetuate racist and sexist biases, be environmentally harmful, and further homogenize language by privileging the text of those who already have the most power and access.

Link to the rest at The Literary Hub

One of the comments in the items PG looked at in connection with this post claimed that Pharmako-AI was not the first book written by GPT-3. The commenter claimed that GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire was the first GPT-3-authored book.

While looking for GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire on Amazon, PG found Sybil’s World: An AI Reimagines Herself and Her World Using GPT-3 and discovered that there was a sequel to GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire called Sub/Urban Butoh Fu: A CYOA Chaos Magick Grimoire and Oracle (Butoh Technomancy Book 2).

19 thoughts on “How Collaborating With Artificial Intelligence Could Help Writers of the Future”

  1. Maybe it’s my background speaking, but when I see breathless presentations of book-creation “AI” I wonder what (non-academic) purpose book creation software might serve. (Not talking about glorified grammar checkers but software like the OP and PG cite.) Book mills? Straitjacket editing? None?

    Narrative books are an expression of the thoughts and feelings of the author. That can range from thoughtful meditations on weighty matters to fun entertainment to “give me money”. Software, regardless of plumbing, is mechanistic, without thoughts or feelings. (Or need for cash.) Plus, there is no shortage of humans willing to create as an outlet for their thoughts and feelings. Often for free.

    As the saying goes: “Who asked for this?”

    • Publishers who want to cut out the messy business of dealing with – and paying – authors, editors, proofreaders, etc.? The “author” can then be reduced to their proper role as celebratory publicists whose job is to sell the books. (You may be able to tell that I’ve recently reread The Silver Eggheads).

      Incidentally, having looked at the samples of the books the OP mentioned, I’d hoped to be able to say that I saw no need for real live human authors to worry about the competition. However, I’m not sure that I actually reached anything written by an AI, just a lot of confusing puffery which was enough to ensure that I didn’t buy the books (lower prices, like free, would help).

      • See, I’m aware humans have a tendency to look at the world through *their* needs and desires, and I do my best to factor that out, however imperfectly. I try to see what somebody else might see, or fail to see, because of *their* biases.

        But with this particular faux-AI concept I just can’t find any economic justification.
        First, because it requires software that understands human culture, and culture mutates constantly.
        Second, because success would render the product worthless to publishers. What makes books economically viable is, like any economic product, scarcity. Not necessarily absolute scarcity, but specific, niche scarcity. Novelty. Publishers get to charge for a given book not so much for its uniqueness (books are ubiquitous) but for how hard it is to find a match to a specific buyer. Tastes vary, which is why “the same but different” is profitable. Up to a point. Variations on a theme, any theme, quickly lose most of their value. (Seen any sparkly vampires making a splash?)
        And that is how software authors would have to work. They can’t pull stories out of nowhere; even the most creative human can’t. They’ll need a seed to work from. And then the software goes to town: “the same but different” times a million. A kaleidoscope of narratives.
        Remember the whole “devaluation of literature” anti-indie diatribe? Well, say I create a software package that cranks out romcoms by the zillion. Each different, each amusing enough to please. I could sell a copy to each tradpub, say a few hundred. Or a copy to each commercial romance writer, thousands. Or I could sell it direct to the readers, tens of millions. Hmm, what to do…

        Remember Wang word processors? They sold hardware to businesses who used it to get rid of typing pools. Soon other businesses realized the Wang systems’ value was in the software. So we got corporate word processing software. For mainframes, minicomputers, and eventually PCs. But then personal computers got cheap and the software migrated to the end users. Today you can get excellent word processing for free. When people buy word processing software they’re buying convenience, ease of use, compatibility, and support, not the basic capability. Basic word processing is, commercially speaking, worthless.

        The same goes for a successful storytelling app that could actually replace authors: it would also replace trade book publishers. I’m not sure tradpub execs are quite that clueless.

        My expectation (story fodder?) is that when good-enough storytelling software emerges it will signal the beginning of the end of trade publishing as a viable big-bucks business. Creative narrative prose will go the way of poetry, commercially speaking. People will create and people will consume, but it won’t be much of a business.

        Now, as I said, I have serious doubts it is doable at all but even if it were, the biggest tradpubs would doubtlessly order a hit on the creator so it never gets to the consumer level. 😉
        😀
        (Academics are weird.)

  2. The heart of the article: “…machine learning is defined by the sources that train it.” Yes, programming. Who is doing the programming? Who is the mommy and/or daddy that’s raising little XP7000? What human preferences, beliefs, patterns, feelings, prejudices, hatreds, loves, likes, predilections, and desires… will be programmed into little XP7000, the new machine Hemingway, or should I say, Toni Morrison? Because everything I say has to get past the censors who will either vet or kill… what I say.

    • Some folks think software is magic.
      But the dominant boundary condition is still GIGO, regardless of the category.
      As with Tay and Zo:

      https://en.m.wikipedia.org/wiki/Zo_(bot)

      As Zo proves, an enduring problem with AI writers/editors is that the mores they will be “trained” on will change every other year. This isn’t a problem for academia, but a production system will need constant updates to keep up with memes, fads, changing values, and even the language itself. Just as with games and other forms of productivity software, regular (probably monthly) updates will be required. What happens when an update makes the “AI” change its analysis of the work?

      It might be useful for transient content (sports news), but fiction and other stuff that aspires to endure may not be helped much.

      The best I would hope for this decade would be an automated proofreader that might tease out intent. But I’m not holding my breath.

      • Well, I just used the Marlowe AI (Basic free version) to run my almost-done latest novel through. (I had forgotten about it after our convo here last year.) And it was pretty interesting. It showed how much dialogue vs. narrative there is and exactly where each occurs in the text, displayed in a bar chart. Other nuggets included: which phrases are repeated and how many times; average sentence length; grade level; complexity score; most-used adverbs and how many of them; adjectives; passive verbs; and even the total number of punctuation marks. Some of this is just fun info, but some—like the adverbs—is very useful in this last editing-cleanup phase. And especially when I got to compare the parameters to one of their comp samples. Turns out that I have shorter sentences and a lower grade level than The Da Vinci Code, which I take as a compliment!
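For the curious, numbers like these can be roughed out with very ordinary text processing. Here is a minimal, hypothetical Python sketch (the manuscript_stats function and its crude heuristics are invented for illustration; this is not how Marlowe actually computes its reports):

```python
# A rough, hypothetical sketch of how report-style manuscript stats
# (adverbs, sentence length, % dialogue, punctuation) might be computed.
# It is NOT how Marlowe works internally; just plain-Python approximations.
import re
from collections import Counter

def manuscript_stats(text: str) -> dict:
    # Crude sentence split on ., !, ? (ignores abbreviations, dialogue, etc.).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    # Treat words ending in -ly as adverbs (over-counts words like
    # "only" or "family", so take the numbers with a grain of salt).
    ly_adverbs = Counter(w.lower() for w in words if w.lower().endswith("ly"))

    # Share of words inside straight double quotes as a stand-in for
    # "% dialogue" (curly quotes would need extra handling).
    quoted = re.findall(r'"([^"]*)"', text)
    dialogue_words = sum(len(re.findall(r"[A-Za-z']+", q)) for q in quoted)

    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "top_ly_adverbs": ly_adverbs.most_common(10),
        "pct_dialogue": 100 * dialogue_words / max(len(words), 1),
        "punctuation_count": sum(ch in set('.,;:!?-"\'') for ch in text),
    }

if __name__ == "__main__":
    sample = 'She ran quickly. "Suddenly," he said, "it was over." They walked home slowly.'
    print(manuscript_stats(sample))
```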

          • I think you’re missing something. Or I am.

            Example: Marlowe says I used the adverb suddenly 24 times in my latest. If the goal is to reduce “_ly” adverbs, then it’s a simple matter of finding each use (easy to do) and deciding whether to keep or rewrite. I could, of course, do this without Marlowe, but seeing it rendered graphically with charts is a little kick in the pants for me.

            • No, I don’t think so.
              I was just asking how much confidence you have in software vs yourself.
              Curiosity.

              • It’s not an either/or. I self-edit (and hire editors) and I use self-editing tools. There’s no downside to adding another tool to the toolbox, in my view. Especially when it’s free (Basic plan) and it literally turns around a “report” in about 10 mins on a full-length manuscript.

            • Harald, I understand why you find the analyses of value. What is not clear to me is why such software is supposedly imbued with “AI”. But then most of what is puffed up as AI, at least in the production environment, is just a useful bunch of algorithms with not even a smattering of intelligence.

              By the way, is the goal really to reduce “_ly” adverbs? I thought that this was just one of the many “rules” that writers have to take with a pinch of salt. If there is a verb to replace your verb+adverb combination then fine, but sometimes the adverb provides a necessary gradation. But you already know this perfectly well so I should probably not even be asking this question.

              • Why “AI”? I have no idea, but that’s what they say: “Marlowe is an artificial intelligence that helps authors improve their novels and long-form fiction….” And the website is at https://authors.ai/marlowe

                RE: “_ly” adverbs… that’s just one example, and something I’m guilty of and trying to improve on (knowing the issues involved). Other interesting conclusions/infographics include things like “% dialogue.” I know I’m light on dialogue, so when I see that The Girl with the Dragon Tattoo is 40%, The Da Vinci Code and Recursion are both 30%, and I’m at 20%, it quickly encourages me to keep working on that.

          • Hello PG. It’s a different animal. With Grammarly, you’re working through the document (at least I am) and reviewing its suggestions for fixes. With Marlowe (free version), it generates “reports” with stats of the whole document. THEN you go back to your doc and take actions.

            I guess one way to look at it is like this: Grammarly is Instance Based, and Marlowe is Stat Based. My POV, of course.

              • Correct. They call it an “analysis tool” (‘see what Marlowe thinks of your novel’).

                I thought it added an interesting side angle to the OP, especially since they keep mentioning AI all over their site.

                • They’re using “AI” for the same reason Facebook is now Meta and Google is now Alphabet: it’s a meaningless buzzword they can hype and hide behind to appear cool.
                  Particularly useful for impressing investors who’ll never understand how they do what they do.

    • From a very large perspective, the programmer teaches the machine to read; that doesn’t mean he chooses what it will read. Anyone can provide basic training. But the machine can then learn to do things the programmer doesn’t know how to do.

      I’ve trained neural nets, and I don’t know how the program reaches the results it does. I can’t do it.
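That opacity is easy to demonstrate with a toy example. Below is a minimal, purely illustrative sketch, assuming NumPy (not anyone’s actual project), of training a tiny network on XOR; the learned weight matrices solve the task, but reading them tells a human almost nothing about how:

```python
# A tiny, purely illustrative neural net. You can write every line of the
# training loop yourself and still not be able to "read" the learned rule
# out of the weights afterwards.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# A 2-4-1 network with random starting weights.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):                     # plain gradient descent
    h = sigmoid(X @ W1 + b1)               # hidden layer
    out = sigmoid(h @ W2 + b2)             # prediction
    d_out = (out - y) * out * (1 - out)    # backprop through squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # should end up close to [0, 1, 1, 0]: the net "knows" XOR
print(W1, W2)        # ...but nothing in these numbers says "XOR" to a human
```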

Comments are closed.