
Can robots really write novels?

30 October 2012

From the BBC:

Machines can already drive trains, beat humans at chess and conduct countless other tasks. But what happens if technology starts getting more creative – can a machine ever win the Booker Prize for fiction?

In George Orwell’s fiction, by 1984 the “proles” were entertained by books produced by a machine.

In real life, robots have been capable of writing a version of love letters for over 60 years.

But how far away are books written by robots?

Well, they have already happened, in their hundreds of thousands.

Professor Philip Parker, of Insead business school, created software that has generated over 200,000 books, on topics as varied as 60-milligram containers of fromage frais and Romanian crossword guides.

Amazon currently lists over 100,000 titles under his name.

. . . .

Fiction is often criticised for being a factory process of using formula and “write by numbers” approaches. Creative writing programmes have been likened to working “from a pattern book” by Booker-nominated author Will Self.

Certain pieces of writing software provide templates that will automatically create the structure of a novel and, once the novel is written, can tell you how easy it is to read.

“No novel writing package will write your book for you,” says software firm NewNovelist.

“They certainly can help you complete your novel and make sure it is composed correctly.”

. . . .

Russian Alexander Prokopovich is said to be responsible for the first successful book to be created by robots. It was published in 2008 and was written in the style of Japanese author Haruki Murakami in a variation on Leo Tolstoy’s Anna Karenina.

. . . .

Prof Parker’s software, still in prototype, would allow characters to be decided, locations to be set, genre fixed and plot mechanisms chosen. It then creates anything from 3,000-word flash fiction to a 300,000-word novel.

He has even done public experiments with poetry.

“A computer works very well with rules and the most obvious way is poetry,” he says.

“We did a blind test between a Shakespearean sonnet and one that the computer had written. A majority of people surveyed preferred ours.

“That’s not to say it was better, Shakespeare is a genius, but it was what people preferred.” 

. . . .

“The idea of a computer being cited as joint recipient of the Nobel Prize for physics is not too unlikely. It’s obviously not a huge leap to think of something similar happening in fiction.”

Link to the rest at BBC and thanks to Brendan for the tip.

Passive Guy has already mentioned Professor Parker here and here, but the topic is always good for a post, plus the headlines make prime Twitterbait.


23 Comments to “Can robots really write novels?”

  1. For a very amusing take on this subject, read Fritz Leiber’s The Silver Eggheads. I need my Wordwooze!

  2. P.G.

    I’ve just read my first Haruki Murakami (1Q84). The thought of trying to copy that style is daunting.

    I think Alexander Reynolds was a little too swift in his condemnation; perhaps he feels he has something to worry about :)

    The poetry seemed decent enough, and journalism has been dead since "Good night, and good luck."

    brendan

  3. I continue to think this is just silly, and meant to stir up controversy. A computer did not write the fiction. The person who programmed the computer wrote the fiction. They basically wrote one book, and asked the computer to generate a bunch of stories using the same parameters.

    Professor Parker needs to own the fact that he is a writer, and stop putting it off on the computers.

    Take some responsibility for your work, Professor.

  4. Nonfiction is probably a safe bet for robots. But what would robot-written fiction look like? If you could put enough interesting elements in and let some really ingenious algorithms stir them together, I’d love to see the result.

    The robots in my book, though, listen to a different muse. I have one particularly driven AI that could probably write a stirringly beautiful epitaph for humanity, but there would be no one left to read it. Which brings up another fun question: if robots were to write speculative fiction, would theirs be as skeptical and fearful of the future of humanity and technology as ours is?

  5. “if robots were to write speculative fiction, would theirs be as skeptical and fearful of the future of humanity and technology as ours is?”

    Shad,

    Interesting question. It sent me to Amz. Got yer book. Pretty decent so far, nicely written. Like the genre.

    If you’re a robot, damn you:)

    brendan

  6. “The person who programmed the computer wrote the fiction.”

    I’d test that by asking the programmer to read the first half of each book, then tell us what happens in the second half. What if he can’t?

    • Terrence, it really is impossible for a computer to surprise the person who programmed it.

      For example, if the person who programmed it told it to pick among 12 endings, then the computer will pick among 12 endings. It will not choose a 13th ending. Computers cannot create; they follow instructions.

      I want to add that I really do think this “Professor” is just trying to get a reaction.

      First of all, he calls computers “robots”. They are computers, Mr. Parker, not robots.

      Second, saying that people prefer computer generated poems to Shakespeare? Please. Most people prefer American Idol to Shakespeare.

      He’s just trying to provoke a reaction.

      • “[I]t really is impossible for a computer to surprise the person who programmed it.”

        I’m guessing you’ve never had to debug a misbehaving complex program, then…

        If you give your story-writing program 12 possible endings and tell it to pick one, then of course it will always pick one of the 12. But you wouldn’t write a story-writing program like that, unless you had no idea what you were doing, or wanted a source of formulaic genre fiction, or were setting up a straw man to prove that computers couldn’t write fiction.

        You’d program the computer with a model of human psychology, so that it could decide what a character might do in any particular situation, and a model of one or more societies or cultures, so that it could decide which situations were likely. You’d probably also have to model one or more readers, so that the computer could judge their likely reactions and decide whether the story was worth writing.
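
        Purely as an illustration of that shape (the class names, traits and rules below are all invented, and a real system would need vastly richer models than these), a rough Python skeleton might look something like this:

        import random
        from dataclasses import dataclass

        @dataclass
        class Character:
            name: str
            traits: dict  # e.g. {"bravery": 0.8, "caution": 0.2}

            def choose_action(self, options):
                # "Model of human psychology": weight each option by how well it fits
                # the character's traits (a fuller model would also use the situation).
                weights = [0.1 + sum(self.traits.get(t, 0.0) for t in needed)
                           for _, needed in options]
                return random.choices([name for name, _ in options], weights=weights)[0]

        @dataclass
        class Society:
            likely_situations: list  # situations this culture tends to produce

            def next_situation(self):
                return random.choice(self.likely_situations)

        @dataclass
        class Reader:
            taste: dict  # e.g. {"conflict": 0.9, "retreat": 0.2}

            def worth_writing(self, events, threshold=1.5):
                # "Model of the reader": predict interest, decide whether to keep the story.
                return sum(self.taste.get(kind, 0.0) for _, kind in events) >= threshold

        def draft_story(character, society, reader, beats=5):
            events = []
            for _ in range(beats):
                situation = society.next_situation()
                action = character.choose_action([("fights back", ["bravery"]),
                                                  ("walks away", ["caution"])])
                events.append((situation, "conflict" if action == "fights back" else "retreat"))
            return events if reader.worth_writing(events) else None

        # e.g. draft_story(Character("Ana", {"bravery": 0.8}),
        #                  Society(["a border dispute", "a harvest festival"]),
        #                  Reader({"conflict": 0.9, "retreat": 0.2}))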

        A better question might be why anyone would want to spend all that time and money programming a computer to do this, when there are already millions of biological computers running better models of individuals and societies and writing stories with (in most cases) nobody offering them money to do so.

        • Steven,

          You said:

          “You’d program the computer with a model of human psychology, so that it could decide what a character might do in any particular situation, and a model of one or more societies or cultures, so that it could decide which situations were likely. You’d probably also have to model one or more readers, so that the computer could judge their likely reactions and decide whether the story was worth writing.”

          Right. That’s the same thing as programming the computer to pick among 12 endings; it’s just more complicated math. The computer is not making decisions. It is following along paths of “if this, do that”. The models of human psychology and societies/cultures are subjective factors that come from the programmer and are programmed into the computer. Which means that anything subjective or creative comes from the programmer, and any book that is generated is based on the programmer’s idea of what a book is – i.e., the programmer wrote the book, and used a computer to generate it according to their ideas.

          Computers do not write books, they generate data.

          The reason he is doing this – he likes attention. He also probably has an axe to grind, with some off-base ideas about human creativity. I think he should focus instead on writing the books he clearly wants to write.

          • Well yes, most computer programs are deterministic: they follow fixed rules and, given the same input, will always produce the same output. But many modern computer systems are so large and complex that no one person can understand all the rules and all the possible interactions between the parts, so such a system is entirely capable of surprising a person. And there’s no reason why a program can’t change its rules, if it’s designed with that ability.

            I don’t believe there’s anything special or incomprehensible about subjectivity or creativity (although I don’t assert that we understand them fully at the moment). Nor do I believe there’s anything a human mind can do that a computer will never be able to replicate. But nor do I believe there’s much point in programming computers to do what humans can do, only faster and cheaper. Better to program them to do what humans can’t, or do badly.

            • Stephen, what I find interesting is that there seems to be a group of people who will argue that there is something special in a computer beyond its algorithms. That somehow the algorithms create a certain special magic, so that the computer becomes more than its program.

              But those same people seem to argue, just as vehemently, that there is nothing ‘special’ about the human brain, that it is completely understandable, replicable and limited by its construction. There is nothing special in the human brain that is able to transcend. Computers, yes. Humans, no.

              As for the human psyche, my sense is they don’t believe it exists, despite the fact that it quite obviously does exist.

              • When I say that modern computer systems are too complex for one person to understand, I mean that in the sense that most countries’ tax codes are too complex for one person to understand – accountants and lawyers have to specialise, and if they come across a problem outside their area of expertise, they have to research it or ask someone else.

                I’m not aware of anyone who claims that algorithms “create a certain special magic”. Are you perhaps thinking of emergence – complex or large-scale patterns arising from lots of simple interactions? It happens in nature – sand dunes and crystals are some examples. It can be surprising when you see it, but I think that’s just because you can understand the rules without understanding their consequences.

                I don’t see any incompatibility between believing that computer programs can exhibit emergence and believing that there’s “nothing special” about the human brain. The brain is the most complex thing we know of, but I believe its complexity is purely the result of physical causes – atoms, molecules, cells and the interactions between them. Therefore, while we don’t currently understand it fully and can’t fully replicate its behaviour, I don’t believe we will never be able to understand or replicate it.

                • Steven, I agree that it may, at some future point, be plausible that we will understand or even be able to replicate the brain.

                  But I also want to point out that your belief that the brain is purely a result of physical causes is just that – a belief. It is not provable; it is speculation.

                  Which means that, at this point, you and I are debating beliefs. You believe the brain is limited by biology, and I am not sure of that. I believe it may be, but I also am open to the belief that it may not be, or that there may be types of biological factors that we are not yet aware of.

                  But to go back to the question of creativity, I think it would be very difficult for a programmer to program a computer to have:

                  sentience
                  emotions
                  intuition
                  insight
                  original thought
                  irrational thought (outside of its programming)
                  synergistic thought (including elements that are not in the programming)

                  I agree that we don’t fully understand creativity, but I hope you agree that all of these factors play into creativity. So, I do not think you can replicate human creativity in a computer. At least not yet.

                  I’ve found our discussion also to be very interesting, Steven!

                  • I think we’ll have to agree to disagree here. As a general rule, I try not to believe (that word again!) in the existence of anything for which there’s no credible evidence. (I admit that my standard for “credible” is somewhat flexible.) As I’m not aware of any evidence that a complete explanation of the brain’s workings will require non-physical causes, I don’t believe that it has any such causes.

                    I agree that we don’t currently understand creativity (or many other aspects of thought) well enough to program a computer to replicate them. But because I believe that the brain is ultimately just a very large number of atoms interacting, I have to believe that there will come a time when computers are able to do everything the brain can. (As long as someone is sufficiently motivated to spend the time and effort to figure out how to do it.)

  7. Hello,
    Phil Parker here. I am enjoying your conversation. Here is some clarifying background. Mila writes: “It really is impossible for a computer to surprise the person who programmed it.” Actually, being the person you refer to, I am often surprised by what the algorithms produce (in fact, 99.9% of what is produced is new to me). Let me explain why.

    We have two broad genres: literature and non-fiction. Within each there are sub-genres and sub-sub-genres, etc. Within literature there is poetry; within poetry there is haiku; within haiku there are traditional and modern forms, and so on. The more one drills down to define an area of literature at the sub-sub-sub-genre level, the more formulaic it becomes: the writing becomes more constrained and therefore easier for computer algorithms to imitate. The “genre” is very predictable, but what the algorithms produce within the genre is impossible to predict, as the store of latent knowledge they draw on is too much for any one person to know in advance.

    A good example is a simple definition. We started with definitions before we did poems. The first definition below was written by a human (a group at Princeton University), the others by a computer using graph theory and cluster analysis (applied mathematics):

    Definitions of zealously:

    1. In a zealous manner. [Human = Wordnet]
    2. In an enthusiastic, fervid, ardent or fervent manner. [Eve - graph theoretic]
    3. In a fanatical manner. [Eve - graph theoretic]
    4. In a burning or intense manner. [Eve - graph theoretic]
    5. In an eager, anxious or solicitous manner. [Eve - graph theoretic]
    6. In an energetic, active, warm or hardworking manner. [Eve - graph theoretic]
    7. In an earnest or serious manner. [Eve - graph theoretic]
    8. In an avid or ambitious manner. [Eve - graph theoretic]
    9. In a strong or strenuous manner. [Eve - graph theoretic]
    10. Adverbial inflection of the adjective zealous.[Eve - graph theoretic]

    Some people prefer the computer generated definitions as they do not rely on the root-word to explain the meaning.

    Here is how it works. The genre supplies the rules (or constraints) = “In a ADJECTIVEX manner”. The linguistic graph provides the associated strength of “belonging” to the concept. The cluster analysis ensures that the adjectives within each definition are more similar to each other than to the adjectives in other definitions. Valences on the graph indicate which definition is more “powerful”, hence the order of the definitions (the first one being more likely to best define the word, the later ones, while also relevant, covering more obscure uses). “Edge” values in the graph (e.g. the strength between any two words in our minds) indicate that there are diminishing returns to the number of definitions, so the algorithm can “judge”, as an editor would, that some words need 3 definitions while others need 6 or 7 to cover the breadth of uses (a word’s ambiguity). None of these things are predictable in advance.

    You can read more definitions created this way at http://www.totodefinition.com. Extending this to literature, you can read poems generated in a similar fashion at http://www.totopoetry.com.
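
    For anyone who likes to see the mechanics, here is a deliberately tiny Python sketch of the idea. The words, weights and clustering rule are invented toy data, not the actual Eve graph or algorithms; it is only meant to show the template + weighted graph + clustering + valence-ordering pipeline described above:

    # Toy sketch: template-driven definitions from a weighted word graph.
    # All words, weights and thresholds below are invented for illustration.

    valence = {  # assumed strength of association with the concept "zealous"
        "enthusiastic": 0.9, "fervent": 0.85, "ardent": 0.8, "fanatical": 0.6,
        "eager": 0.7, "anxious": 0.4, "energetic": 0.65, "active": 0.5,
    }

    similarity = {  # assumed pairwise "edge" strengths between adjectives
        frozenset(pair): s for pair, s in [
            (("enthusiastic", "fervent"), 0.9), (("enthusiastic", "ardent"), 0.85),
            (("fervent", "ardent"), 0.9), (("eager", "anxious"), 0.7),
            (("energetic", "active"), 0.8), (("fanatical", "fervent"), 0.4),
        ]
    }

    def sim(a, b):
        return similarity.get(frozenset((a, b)), 0.1)  # weak default link

    def cluster(words, threshold=0.6):
        """Group adjectives so each group's words are more similar to each other."""
        groups = []
        for w in sorted(words, key=valence.get, reverse=True):
            for g in groups:
                if all(sim(w, other) >= threshold for other in g):
                    g.append(w)
                    break
            else:
                groups.append([w])
        return groups

    def define(groups):
        """Render one templated definition per cluster, ordered by total valence."""
        ranked = sorted(groups, key=lambda g: sum(valence[w] for w in g), reverse=True)
        for i, g in enumerate(ranked, 1):
            adjectives = g[0] if len(g) == 1 else ", ".join(g[:-1]) + " or " + g[-1]
            article = "an" if adjectives[0] in "aeiou" else "a"
            print(f"{i}. In {article} {adjectives} manner.")

    define(cluster(valence))

    Even at this toy scale, the grouping, ordering and number of the definitions fall out of the data rather than out of anything typed in directly.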

    Mila writes: “Professor Parker needs to own the fact that he is a writer, and stop putting it off on the computers. Take some responsibility for your work, Professor.” While I take ownership of the programming and the algorithms, the actual definitions are not at all what I would write without the use of the computers. So maybe they are co-authored in an abstract sense.

    Mila also writes: “he calls computers ‘robots’. They are computers, Mr. Parker, not robots.” Actually, I call the programs algorithms; it was the journalist who used the term robot, not me, and I agree that it is not a good term to use. I recall that over the phone he agreed with me, but he went with the word robot anyway, perhaps to engage readers (my guess). As for this being a method of getting attention: this is actually a research project in applied mathematics (learning the limits of using algorithms for “discovery”). The fiction aspect is really only a hobby that journalists call me about (I am not actively marketing this). If you want to read about the core work I am involved in, you may find this of interest:
    http://gulfnews.com/news/gulf/uae/education/campus-in-abu-dhabi-helps-make-farming-easier-with-new-radio-technology-1.1041606

    This is an interesting discussion, many thanks for the comments.

    • Thanks for the detail, Phil.

      • My pleasure.
        Phil

        • Phil,

          I am so sorry. This is not the first time that carelessly hurtful words and judgements I have posted on the internet have come back to haunt me. I really regret this, and hope to learn better; if I am going to disagree, there is no reason not to do so with tact and diplomacy. And certainly not to attack someone’s character and motivation without thoroughly understanding the situation.

          I am truly sorry.

          I understand now that you are experimenting with mathematics and how it can emulate human behavior. I am sorry if I maligned your intentions as a scientist.

          I probably should stop there, since I think that is enough, but… I guess I will share with you that I still hold fast to my belief that you have written the poems and novels, not the computer. No matter how complicated the algorithms are, the computers are still simply following instructions. It is not possible to walk up to a computer, ask it to produce fiction, and then sit down and wait for it to write something without first telling it how to write it.

          I guess I see this as similar to someone who writes a symphony. It is not possible for them to play all the instruments and recreate the music themselves without an orchestra. But they still wrote the symphony.

          At any rate, I hope you’ll understand that my apology is sincere. I will be more careful in the future.

          • Mira,

            Absolutely no apologies required. This is a great discussion. I think what makes the algorithms I am working on different (compared to a composer) is that many of the programs we have created dynamically “learn” before they “create”: in essence, the programs first learn semantic webs (that I myself can never understand), and then follow rules. There are, therefore, two levels of automation: (1) learning, and (2) executing.

            The best analogy is a teacher who tells the students how poetry works, and gives them a dictionary, a thesaurus and rules. The students have, up to that moment, learned a semantic web. Combining the semantic web with the teacher’s instructions, the students do the work and create original poems. I see the algorithms as the teachers, and the semantic webs as what has been “learned” (via automation). The programs, like rows of students, then execute in batches.

            What the above has taught me is that similar programs can be applied to create “speculation engines”. Some areas of scientific research are under-funded (e.g. tropical agriculture). For example, consider the sentence “If aphids attack plantX, the likely abatement strategy will be Y.” PlantX might be any species (e.g. a cultivar of rice), and Y might be the highest-likelihood abatement strategy (e.g. misting the leaves). These are, in essence, original sentences which inform or focus attention on possibilities. Thus far the results are encouraging, in that we seem to get reasonable speculations using linguistic approaches similar to those we use for poetry. We instruct the engine in the rules of scientific inference, and then let it loose to loop across the possibilities until it discovers something that is likely to be useful (the rest is discarded automatically). The credit would go to the original scientists who established the rules or informed the engine, and to the algorithms (less so to me, as I may not even be able to comprehend what the resulting sentences mean).
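
            As a rough illustration only (the pests, plants, strategies and scores below are invented placeholders, not data from our project), the loop-and-discard idea can be sketched in a few lines of Python:

            # Toy "speculation engine": loop a sentence template over a small rule
            # base, score each candidate, and keep only the plausible ones.
            TEMPLATE = "If {pest} attack {plant}, the likely abatement strategy will be {strategy}."

            # Invented association scores; a real engine would learn these from the literature.
            pest_strategy = {("aphids", "misting the leaves"): 0.8,
                             ("aphids", "introducing ladybirds"): 0.7,
                             ("locusts", "misting the leaves"): 0.1}
            plant_strategy = {("a cultivar of rice", "misting the leaves"): 0.6,
                              ("a cultivar of rice", "introducing ladybirds"): 0.3,
                              ("cassava", "introducing ladybirds"): 0.5}

            def speculate(pests, plants, strategies, cutoff=0.3):
                """Generate every templated sentence, score it, and discard the rest."""
                kept = []
                for pest in pests:
                    for plant in plants:
                        for strategy in strategies:
                            score = (pest_strategy.get((pest, strategy), 0.0)
                                     * plant_strategy.get((plant, strategy), 0.0))
                            if score >= cutoff:
                                kept.append((score, TEMPLATE.format(
                                    pest=pest, plant=plant, strategy=strategy)))
                return [sentence for _, sentence in sorted(kept, reverse=True)]

            for sentence in speculate(["aphids", "locusts"],
                                      ["a cultivar of rice", "cassava"],
                                      ["misting the leaves", "introducing ladybirds"]):
                print(sentence)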

            Anyway, it is an intriguing area of discussion. Thanks again for the conversation.
            Cheers
            Phil

            • Phil,

              I think an apology definitely was required, but thank you for being so gracious.

              I think we may need to agree to disagree on this, since I don’t believe we will find a meeting of the minds. :) I still believe that since you told the computer how poetry or a novel works, you are the author of the work itself, utilizing a tool that expands on your vision, even if it goes in directions you did not predict. Another person might have told the computer a completely different way literature works, and the end result would have been different: the computer would have generated a different poem or novel.

              It does raise the question of whether ‘what literature or poetry is’ is static and quantifiable. Since different cultures respond to story differently, I tend to think it is not, but it would be interesting to see if there were commonalities.

              It sounds like the research you are conducting has broad applications for persuasion, which is fascinating.

              Thank you for the conversation, I found it very interesting as well, and again, thank you for your graciousness.

    • Phil,

      I’m asking not just on behalf of myself, but for a whole bunch of love-lorn fans of Terminator: The Sarah Connor Chronicles. (A cancelled TV show which had a self-aware cyborg as a major character.)

      Do you think that robots/cyborgs/gynoids might become self-aware, and that something like the character Cameron Phillips might exist?

      Or do you think that the future contains robotics rather more like those in David Levy’s “Love and Sex with Robots,” or the Japanese version, where they are domestic assistants?

      http://www.amazon.com/Love-Sex-Robots-ebook/dp/B000XUACXM/ref=sr_1_2?ie=UTF8&qid=1351783638&sr=8-2&keywords=love+and+sex+with+robots

      brendan

      • Good question; of course, time will tell.

        It may, of course, not matter, as we are creatures of perception. If I perceive the robot to be self-aware, and react and/or interact with it accordingly, then in my mind it is self-aware. Children often have compassion for their toys, as the castaway in Cast Away did for Wilson. Whatever the robot then says is, to me, from a self-aware entity (whether it is or not becomes irrelevant).

        We did some experiments with folks, asking them, for example, to describe the poet of a computer generated poem. If not told ahead of time, one has trouble knowing if a computer algorithm was used. One therefore assumes there is a soul behind the curtain. The descriptions people offer are of humans with creativity and emotions (e.g. the poet is an angry person, or is clever, well read, etc.). You can do this experiment yourself using poems from my poetry site = http://www.totopoetry.com

        We also did experiments on peripheral cues and signals. If one sees a 1-minute video clip with text and narration, perhaps with the first text word being “Chinese”, and is asked how many times the word Chinese was orally mentioned in the audio, people think that it is mentioned 4 or 5 times when, in fact, the word Chinese was never spoken. In other words, people’s minds create (or are creative) far beyond what is on the printed page. Creativity, therefore, is in part in the reader’s mind. The interaction of algorithm and reader therefore generates creativity, whether or not the algorithm is deemed to be creative.

        Just a thought.

