From The Atlantic:
Creative artificial intelligence provokes a strange mixture of contempt and dread. People say things such as “AI art is garbage” and “It’s plagiarism,” but also “AI art is going to destroy creativity itself.” These reactions are contradictory, but nobody seems to notice. AI is the bogeyman in the shadows: The obscurity, more than anything the monster has actually perpetrated, is the source of loathing and despair.
Consider the ongoing feud between the Writers Guild of America and the Alliance of Motion Picture and Television Producers. The writers are on strike, arguing, among other things, that studios should not be able to use AI tools to replace their labor. “It’s important to note that AI software does not create anything. It generates a regurgitation of what it’s fed,” the WGA has claimed. “Plagiarism is a feature of the AI process.” The AMPTP, for its part, has offered “annual meetings to discuss advancements in technology.” Neither side knows exactly what it’s talking about, but they feel they have to fight about it anyway.
So little of how we talk about AI actually comes from the experience of using it. Almost every essay or op-ed you read follows the same trajectory: I used ChatGPT to do a thing, and from that thing, I can predict catastrophic X or industry-altering Y. As with the camera, the full consequences of this technology will be worked out over a great deal of time by a great number of talents responding to a great number of developments. But at the time of writing, almost all the conversation surrounding generative AI is imaginary, rooted not in the use of the tool but in extrapolated visions.
So when Jacob Weisberg, the CEO of Pushkin Industries, called me one Friday in January and asked if I wanted to write an AI-generated novel, I said yes immediately. To be more precise, he asked if I wanted to be the producer of an AI that would “write” a novel. It was exactly the kind of opportunity I’d been looking for: a chance to dive headfirst into a practical, extended application of the new technology. The experience has been, in equal measures, phantasmagoric and grounding.
My conclusion is informed but unsatisfying. Creative AI is going to change everything. It’s also going to change nothing.
Using AI to write fiction is not unfamiliar to me. I’ve been using artificial intelligence to write short stories since 2017, when I published an early “algostory” in Wired; I also produced a 17 percent computer-generated horror story for the Los Angeles Review of Books called “The Thing on the Phone” in 2021, and the short “Autotuned Love Story,” built out of stylistic bots, for Lithub a year later. But these experiments were mostly lyrical. What Weisberg was proposing was entirely different: The novel would have to be 95 percent computer-generated, relatively short (about 25,000 words), and of excellent quality (there would be no point in creating yet another unimaginative mass of GPT text; readers could just do that themselves).
Because I was making derivative art, I would go all the way, run into the limitations, into the derivative: The plot would be a murder mystery about a writer killed by tech that is supposedly targeting writers. I called it Death of an Author. I worked out the plot during a long skate with my daughter and a walk with my son (better techniques than any machine could offer), and began taking copious notes.
The aim was a novel that would be compulsively readable, a page-turner. At first, I tried to get the machines to write like my favorite, Jim Thompson, the dime-store Dostoevsky. They couldn’t come close: The subterfuge of Thompson’s writing, a mille-feuille of irony and horror with subtle and variable significance, was too complex for me to articulate to the machine. This failure is probably due to my own weakness rather than the limitations of the AI. With Raymond Chandler, however, I had better results. I sort of know what Raymond Chandler is doing and could explain it, I thought, to a machine: driving, electric, forceful, active prose with flashes of bright beauty.
My process mostly involved the use of ChatGPT—I found very little difference between the free service and the paid one that utilizes the more advanced GPT-4 model—and Sudowrite, a GPT-based, stochastic writing instrument. I would give ChatGPT instructions such as “Write an article in the style of the Toronto Star containing the following information: Peggy Firmin was a Canadian writer who was murdered on a bridge on the Leslie Street Spit on August 14 with no witnesses.” Then I’d paste the output into Sudowrite, which gives you a series of AI-assisted options to customize text: You can expand, shorten, rephrase, and “customize” a selection. For example, you can tell Sudowrite to “make it more active” or “make it more conversational,” which I did with almost every passage in Death of an Author. But you can also give it a prompt such as “Make it more like Hemingway.”
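The two-step workflow described above — draft a passage from a factual prompt, then revise it with a style instruction — can be sketched in code. This is a minimal, hypothetical sketch, not the author’s actual tooling: ChatGPT and Sudowrite are web applications, and Sudowrite exposes no public API, so the sketch abstracts the model behind an injected `complete` callable; all function names are illustrative.

```python
# Illustrative sketch of the draft-then-revise pipeline: one prompt produces
# a draft, a second prompt applies a style note ("Make it more active",
# "Make it more like Hemingway"). The model itself is abstracted away.

def build_draft_prompt(outlet: str, facts: str) -> str:
    """Compose the kind of instruction the author gave ChatGPT."""
    return (f"Write an article in the style of the {outlet} "
            f"containing the following information: {facts}")

def build_revision_prompt(draft: str, style_note: str) -> str:
    """Compose a Sudowrite-style customization request as a plain prompt."""
    return f"Rewrite the following passage. {style_note}\n\n{draft}"

def run_pipeline(complete, outlet: str, facts: str, style_note: str) -> str:
    """`complete` is any prompt-to-text callable (e.g. a wrapper around an
    LLM API); injecting it keeps the pipeline itself testable offline."""
    draft = complete(build_draft_prompt(outlet, facts))
    return complete(build_revision_prompt(draft, style_note))

if __name__ == "__main__":
    # Demo with a stub "model" that merely upper-cases its prompt.
    echo = lambda prompt: prompt.upper()
    print(run_pipeline(echo, "Toronto Star",
                       "Peggy Firmin was murdered on the Leslie Street Spit.",
                       "Make it more active."))
```

In practice each `complete` call would hit a hosted model; the point of the sketch is only the shape of the loop — generate, then re-prompt the output with a style instruction — which is what the author repeated for almost every passage.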
Link to the rest at The Atlantic