A Neural Network Wrote the Next ‘Game of Thrones’ Book Because George R.R. Martin Hasn’t


From Motherboard:

Minutes after the epic finale of the seventh season of Game of Thrones, fans of the show were already dismayed to hear that the final, six-episode season of the series isn’t set to air until spring 2019.

For readers of the A Song of Ice and Fire novel series on which the TV show is based, disappointment stemming from that estimated wait time is laughable. The fifth novel in the seven-novel series, A Dance with Dragons, was published in 2011, and author George R.R. Martin has been laboring over The Winds of Winter since, with no release date in sight. With no new source material, producers of the TV series have been forced to move the story forward themselves since late in season six.

Tired of the wait and armed with technology far beyond the grand maesters of Oldtown, full-stack software engineer Zack Thoutt is training a recurrent neural network (RNN) to predict the events of the unfinished sixth novel. Read the first chapter of the book here.

“I’m a huge fan of Game of Thrones, the books and the show,” said Thoutt, who had just completed a Udacity course on artificial intelligence and deep learning and used what he learned to do the project. “I had worked with RNNs a bit in that class and thought I’d give working with the books a shot.”

. . . .

“It is trying to write a new book. A perfect model would take everything that has happened in the books into account and not write about characters being alive when they died two books ago,” Thoutt said. “The reality, though, is that the model isn’t good enough to do that. If the model were that good authors might be in trouble. The model is striving to be a new book and to take everything into account, but it makes a lot of mistakes because the technology to train a perfect text generator that can remember complex plots over millions of words doesn’t exist yet.”

. . . .

“I start each chapter by giving it a prime word, which I always used as a character name, and tell it how many words after that to generate,” Thoutt said. “I wanted to do chapters for specific characters like in the books, so I always used one of the character names as the prime word … there is no editing other than supplying the network that first prime word.”
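That priming loop can be sketched in miniature. The snippet below stands in for Thoutt's word-level RNN with a simple bigram (Markov) model, which is a much weaker technique, but the interface he describes is the same: hand the generator one character name and a word count, and it samples the rest. The tiny corpus and the function names here are purely illustrative, not taken from Thoutt's code.

```python
import random

# Tiny stand-in corpus; Thoutt trained on the actual novels (millions of words).
corpus = (
    "tyrion drank his wine and tyrion smiled "
    "jon drew his sword and jon rode north "
    "tyrion smiled and drank his wine"
).split()

# Build a bigram table: for each word, every word observed to follow it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(prime_word, n_words, seed=0):
    """Start from a prime word (a character name) and sample n_words more."""
    rng = random.Random(seed)
    out = [prime_word]
    for _ in range(n_words):
        successors = follows.get(out[-1])
        if not successors:      # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(generate("tyrion", 8))
```

An RNN replaces the lookup table with a learned hidden state, so it can condition on far more than the previous word, but as Thoutt notes, even that state is nowhere near enough to track a plot across millions of words.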

George R.R. Martin isn’t going to be calling for writing tips anytime soon, but Thoutt’s network writes mostly readable sentences, and its output is packed with some serious twists.

Link to the rest at Motherboard

PG predicts AI-written books will be common within two years.

He doesn’t know if they will be very good, but it will be interesting to watch the technology develop.

PG also predicts that AI-written books won’t put good human authors out of business.

21 thoughts on “A Neural Network Wrote the Next ‘Game of Thrones’ Book Because George R.R. Martin Hasn’t”

  1. I can’t help thinking that if J.K. Rowling had taken so long between books, the Harry Potter series would have been completed in at least a dozen different, very satisfying ways by fanfic authors early enough that when she finally published the last book, the demand would have greatly diminished.

    As for AIs, the question isn’t “Can a computer write a novel?”, it’s “Can computers consistently write novels that people will want to pay to read?” And to that question, I think the answer is and will always be: no. Art is a way for humans to connect to other humans. Machines will never be able to satisfy that need.

  2. Given enough monkeys, keyboards, and time we can have them write ‘anything’ – but now we can skip buying the bananas and having to clean up after them.

    The ‘real’ trick would be to then have another AI decide what of that output humans would like to see/read. And such an AI could then go through trad-pubs’ slush piles in seconds and recommend the top 10/20 to publish (maybe even suggest what price and contract the writer might be willing to sign …)

  3. I’ve used neural nets for years in commodity trading. They can be very good, and are the basis of today’s high-speed trading, where most trades are determined and entered by the net.

    The most interesting thing about them is we don’t know how they make decisions. A net is fed gobs of training data, and it determines the criteria it will use to make decisions.

    It doesn’t come up with rules we can understand. The neural layers interact with each other based on the training data, but the net doesn’t generate the if/then criteria we are used to.

    The programmer knows how the program is written, but doesn’t know how that program built the net from the training data. He can’t duplicate it.
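A toy example makes the point concrete. The single "neuron" below, written from scratch in plain Python (it is not anyone's trading code), learns a simple pattern by gradient descent; afterward, everything it "knows" lives in three floating-point numbers, with no if/then rules anywhere to inspect.

```python
import math
import random

random.seed(0)

# Training data: output 1 only when both inputs are high (logical AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

# Two weights and a bias, randomly initialized: the net's entire "memory".
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))   # sigmoid activation

# Plain gradient descent on squared error.
for _ in range(10000):
    for x, y in data:
        p = predict(x)
        grad = (p - y) * p * (1 - p)
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        b -= 0.5 * grad

# The learned "knowledge" is just these three numbers -- no rules to read off.
print(w, b)
```

The trained net classifies every example correctly, yet nothing in `w` or `b` says "both inputs must be high"; that criterion is implicit in how the numbers combine. Scale this up to millions of weights and the commenter's point follows: the programmer wrote every line of the training loop, but cannot read the decision procedure back out of the result.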

    And Joe could produce a novel every three or four seconds. Or thousands per day…

    • One of the active areas of research and development is making neural nets explain themselves. The opacity of neural net decisions is seen as a defect and an obstacle to adoption. For instance, in medicine, doctors have been extremely resistant to neural net decisions because they can’t be explained.

      The path, or paths, a neural net takes to reach a conclusion is not inherently unknowable, but extremely complex, and humans don’t have the time or capacity to tease out the decision criteria. But perhaps, with the aid of a computer, they can.

      In the realm of speculation, a neural net that could explain itself would be a step toward artificial consciousness. My guess is that a deep learning tool could help an author, but not replace an author. I leap to that conclusion after reading about deep learning systems that carry on conversations that degenerate into private languages. Without a human, I suspect stories told by machines would turn into incomprehensible private codes.

      The difference between self-driving cars, for example, and story generators is that a safe drive that reaches its destination is easily distinguished from a drive that crashes or ends in the wrong place, but a successful story does not have such easily determined criteria. You may say that a successful book has lots of sales, but that’s an elusive criterion. A crash is a crash, but a book that sold in 2016 may not sell in 2018. What can the machine do with that?

      • When the US Army started with neural nets, they tried training one to distinguish US tanks from Soviet tanks in aerial photographs. It worked great in training, but failed miserably in practice.

        Puzzled researchers tried to analyze what the net was doing. Then someone noticed all the pics of Soviet tanks were taken on a cloudy day, and the American ones were taken on a sunny day.

        The net had simply decided US tanks had shadows, and Soviet tanks didn’t.

      • > In the realm of speculation, a neural net that could explain itself would be a step toward artificial consciousness.

        “If you can’t explain it simply, you don’t understand it well enough.” – Albert Einstein

        • I am not sure how Einstein’s comment applies. He assumed that the universe is simple. He is likely right. But on another level, Brownian motion is impossibly complex. Are we talking about the basic principles of the universe, or why my dog sat on the power strip and turned off my computer? They are not the same.

      • “One of the active areas of research and development is making neural nets explain themselves.”

        I had a light bulb moment when I read this sentence. Could the neural net be doing something similar to the sub-conscious of a human brain? Without the layer of conscious logic we impose on our thoughts to make communication possible?

        • The human unconscious may be similar, but I would hesitate to say that a neural net’s decision is the same as the human unconscious. We know exactly how a neural net works, but the results are hard to understand. I suspect the human unconscious is much more complex than today’s neural nets, but I agree about the importance of the layer of conscious logic.

          • Not the same, no. I suspect much of our subconscious processing is emotional, and I doubt a neural net will ever be able to simulate emotions. It was more the odd lack of linear logic.

          • In the markets today, nets are being trained with data that they had a significant hand in generating.

            Suppose I am training a wheat net using the last three years’ trade data. A great deal of the last three years’ trades were bid/offered and covered by nets.

            No individual net is responsible for all those trades, but the aggregate result of all their trades makes up the trade history, which then becomes the training data.

            I keep wondering what we would see if we eliminated all human control, and let the nets trade each day, then add that day’s trading data to their accumulated training data. Convergence? Chaos? Alternating patterns and chaos? No trades at all?

            This isn’t science fiction. It’s happening every day.

            • Fascinating! I haven’t done any development on neural nets for quite a few years, but at one point, I was experimenting with developing neural nets to predict data center failures. The project was promising but I couldn’t justify the hardware costs at that time and I had to drop it.

              In those days, you designed in some rules to prevent the net from falling into boring states, like a Game of Life that degenerates into a repeating pattern. These rules were constraints on the learning system itself, not part of the net. I would imagine the end state of the trading system would depend on the interaction of the constraints built into each net used in the market. I am not sure what kinds of constraints are built into state-of-the-art learning systems, so this is just speculation, but it’s great speculation.

              Where is Isaac Asimov when we need him?

  4. It would be really interesting to get one of those AIs to write the first draft of my next book—and then go through and clean it up. Perhaps then I could produce a novel every three or four weeks instead of every three or four months.

    • That would be great as long as you’re not depending on Australia for your sales. Apparently, they’re only expected to read for one hour per annum.

      • Yes, I loved that one.
        And in the article: “In children, reading can help with literacy”. Wow! That was unexpected!

  5. It’s as good as some of the stuff I’ve found on Amazon.

    Maybe it will rank up there with “The Eye of Argon.”

Comments are closed.