Building Character: Writing a Backstory for Our AI


From The Paris Review:

Eliza Doolittle (after whom the iconic AI therapist program ELIZA is named) is a walking, breathing rebellion of a character. In George Bernard Shaw’s Pygmalion, and in the musical adaptation My Fair Lady, she metamorphoses from a rough-and-tumble Cockney flower girl into a self-possessed woman who walks out on her creator. There are many such literary characters that follow this creator-creation trope, eventually rejecting their creator in ways both terrifying and sympathetic: after experiencing betrayal, Frankenstein’s monster kills everyone that Victor Frankenstein loves, and the roboti in Karel Čapek’s Rossum’s Universal Robots rise up to kill the humans who treat them as a slave class.

It’s the most primordial of tales, the parent-child story gone terribly wrong. We’ve long been captivated by the idea of creating new nonhuman life, and equally captivated by the punishment we fear such godlike powers might trigger. In a world of growing AI beings, such dystopian outcomes are becoming real fears. As we set out to create these alternate beings, the questions of how we should design them, what they should be crafted to say and do, become questions of not only art and science but morality.

. . . .

But morality has no resonance unless the art rings true. And, as I’ve argued before, we want AI interactions that are not just helpful but beautiful. While there is growing discussion of functional and ethical considerations in AI development, there are currently few creative guidelines for shaping those characters. Many AI designers sit down and begin writing simple scripts for AI before they ever consider the larger picture of what—or who—they are creating. For an AI to be fully realized, it needs, like a fictional character, a rich backstory. But an AI is not quite the same as a fictional character; nor is it a human. An AI is something between fictional and real, human and machine. For now, its physical makeup is inorganic—it consists not of biological but of machine material, such as silicon and steel. At the same time, AI differs from pure machine (such as a toaster or a calculator) in its “artificially” humanistic features. An AI’s mimetic nature is core to its identity, and these anthropomorphic features, such as name, speech, physical form, or mannerisms, allow us to form a complex relationship to it.

. . . .

Just as a human or a fictional character has a birth story, an AI needs a strong origin story. In fact, people are even more curious about an AI origin story than a human one. One of the most important aspects of an AI origin story is who its creator is. The human creator is the “parent” of the AI, so his or her own story (background, personality, interests) is highly relevant to an AI’s identity. Preliminary studies at Stanford University indicate that people attribute an AI’s authenticity to the trustworthiness of its maker. Other aspects of the origin story might be where the AI was built, e.g., in a lab or in a company, and stories around its development, perhaps “family” or “siblings” in the form of other co-created AI or robots. Team members who built the AI together are relevant as co-creators who each leave their imprint, as is the town, country, and culture where the AI was created. The origin story informs those ever-important cultural references. And aside from the technical, earthly origin story for the AI, there might be a fictional storyline that explains some mythical aspects of how the AI’s identity came to be—for example, a planet or dimension the virtual identity lived in before inhabiting its earthly form, or a Greek-deity-like organization involving fellow beings like Jarvis or Siri or HAL. A rich and creative origin story will give substance to what may later seem like arbitrary decisions around the AI personality—why, for example, it prefers green over red, is obsessed with ikura, or wants to learn how to whistle.
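
(To make this concrete: below is a minimal sketch, in Python, of how the backstory elements named above might be gathered into one structured record. It is not from the essay, and every field and value is an illustrative assumption.)

    # Illustrative sketch only: field names and defaults are assumptions,
    # not part of the essay or of any real AI-design toolkit.
    from dataclasses import dataclass, field

    @dataclass
    class OriginStory:
        creator: str                # the human "parent" and their background
        birthplace: str             # lab or company, town, country, culture
        co_creators: list[str] = field(default_factory=list)  # team members who leave an imprint
        siblings: list[str] = field(default_factory=list)     # co-created AIs or robots
        mythos: str = ""            # optional fictional storyline (home planet, pantheon, etc.)
        quirks: dict[str, str] = field(default_factory=dict)  # preferences the backstory explains

    # A hypothetical example: the backstory "explains" otherwise arbitrary quirks.
    example = OriginStory(
        creator="a lone researcher with a love of jazz",
        birthplace="a small robotics lab in Kyoto",
        quirks={"favorite color": "green", "obsession": "ikura"},
    )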

. . . .

AI should be designed with a clear belief system. This forces designers to think about their own values, and may allay public fears about a society of “amoral” AI. We all have belief systems, whether we can articulate them or not. They drive our behaviors and thoughts and decision-making. As we see in literature, someone who believes “I must make my fate” will behave and speak differently from one who believes “Fate has already decided for me”—and their lives and storylines will unfold accordingly. AI characters should be created with a belief system somewhat akin to a mission statement. Beliefs about purpose, life, and other people will give the AI a system around which to organize decision-making. Beliefs can be both programmed and adopted. Programmed beliefs are ones that the designers and writers code into the AI. Adopted beliefs would evolve as a combination of programming and additional data the AI accumulates as it begins to experience life and people. For example, an AI may be coded with the programmed belief “Serving people is the greatest purpose.” As it takes in data that would challenge this belief (e.g., interacting with rude, greedy, inconsiderate people), this data would interact with another algorithm, such as high resilience and optimism, and would form a new, related, adopted belief: “Humans are under a lot of stress so may not always act nicely. This should not change the way I treat them.”
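
(As a rough illustration of the programmed-versus-adopted distinction described above, here is a minimal Python sketch. The class, its disposition parameters, and the update rule are assumptions invented for this example, not anything specified in the essay.)

    # Illustrative sketch only: names, thresholds, and the update rule are
    # invented for this example.
    from dataclasses import dataclass, field

    @dataclass
    class BeliefSystem:
        programmed: list[str]                              # beliefs coded in by the designers
        adopted: list[str] = field(default_factory=list)   # beliefs formed from experience
        resilience: float = 0.8                            # disposition parameters in [0, 1]
        optimism: float = 0.8

        def integrate_experience(self, observation: str, challenged_belief: str) -> None:
            """Form an adopted belief when experience challenges a programmed one.

            A high-resilience, high-optimism agent reframes the challenge instead
            of discarding the original belief, as in the essay's example.
            """
            if challenged_belief in self.programmed and self.resilience * self.optimism > 0.5:
                self.adopted.append(
                    f"{observation}; this should not change how I act on: "
                    f"'{challenged_belief}'"
                )

    beliefs = BeliefSystem(programmed=["Serving people is the greatest purpose."])
    beliefs.integrate_experience(
        observation="Humans are under a lot of stress and may not always act nicely",
        challenged_belief="Serving people is the greatest purpose.",
    )
    print(beliefs.adopted)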

Link to the rest at The Paris Review

5 thoughts on “Building Character: Writing a Backstory for Our AI”

  1. Perhaps I need to include an explicit trigger warning in bold type for all posts including excerpts from The Paris Review.

  2. As is all too typical for The Paris Review (and was even in its first incarnation), they omitted a lot of inconvenient side tracks and refutations of their thesis. One can’t talk much about “AI metamorphosis” without going back nearly forty years to Neuromancer and the beginnings of cyberpunk. But, since that’s from an outright speculative fiction publisher and has futuristic stuff on the cover, how about a multiple-NBA winning author, from a highly respectable imprint, like Richard Powers and Galatea 2.2?

    I could go on for a while. Let’s just say that this article doesn’t even accurately identify the “main line” of works that support its thesis; wasn’t there a Wells piece that’s relevant, I ask rhetorically?

    • If the AI in the story is an actual character, as opposed to furniture, shouldn’t it have a backstory and arc anyway?

        • True AIs have figured as main characters going back at least to the days of Del Rey’s HELEN O’LOY (1938) and the Binders’ ADAM LINK (1939-). To say nothing of Heinlein’s Mycroft and Dora.

          The OP doesn’t even bother distinguishing among RUR’s automatons, Asimov’s Robots, and true (self-aware) AIs, a distinction that has been SF 101 for decades.

          By the time of NEUROMANCER and GALATEA, true AIs were old hat, having been practically a cliché in comic books and TV since the ’60s. (My favorite of the latter was Roddenberry’s aborted QUESTOR TAPES. Somebody ought to revive that one.)

          Along those lines, I have high hopes for Ridley Scott’s RAISED BY WOLVES, but I’m waiting for the series to finish its run. It seems to be mining some of the same material as James P. Hogan’s VOYAGE FROM YESTERYEAR, which would put it above most TV and movie AIs.

          What those all have in common is that the AIs are major characters, occasionally the protagonist, and the stories are interested in the implications of AI more than the nuts and bolts. A common theme is logic vs. emotion, rationality vs. faith. And the evolution of the characters.

          Makes the OP rather trite, doesn’t it?
