Generating Music With Artificial Intelligence


PG’s earlier post on using artificial intelligence to write fiction generated some interesting comments.

From Medium:

I started playing piano when I was five years old. I used to practice for about an hour every day and let me tell you, an hour felt like forever. I didn’t stop, though, and I kept on practicing because I really liked music.

Fast forward a few years and I started doing some really advanced stuff. My hands were literally flying all over the keyboard and I could play with my eyes closed. Just kidding. I wasn’t actually that good but I hope that for a second you thought I was a piano prodigy or something.

I loved almost every aspect of playing the piano. The sound of the music, the feel of the keys… everything except for music theory. It’s as if you took an old dude obsessed with rules and combined him with musical creativity and ingenuity. Musical grammar, rules to follow when analyzing and writing music, key signatures and time signatures: it’s all a bunch of random stuff floating across the page that you need to remember.

. . . .

But wait, a ton of data? Lots of rules and patterns? Sequences and sequences of notes? This sounds like a perfect job for (dramatic piano music) machine learning!

Unfortunately, it’s not that easy.

. . . .

A super quick overview of Recurrent Neural Nets:

  • Vanilla neural networks are bad at sequential or temporal data; they also require fixed input sizes
  • Recurrent Neural Networks solve this problem by passing information from each iteration to the next, so what the network learned from earlier inputs carries through each run
  • By taking the output of one forward pass and feeding it in as the next input, you can generate completely new sequences of data. This is known as sampling.
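The sampling loop described in the last bullet can be sketched in a few lines. This is a minimal illustration, not the OP's actual model: the weights are random stand-ins for a trained network, and the tiny vocabulary of four "notes" is an assumption made purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, chosen only for illustration: 4 possible "notes",
# an 8-unit hidden state. Random weights stand in for a trained model.
vocab_size, hidden_size = 4, 8
Wxh = rng.standard_normal((hidden_size, vocab_size)) * 0.1
Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
Why = rng.standard_normal((vocab_size, hidden_size)) * 0.1

def step(x, h):
    """One RNN step: update the hidden state, return a distribution
    over the next note. The hidden state is what carries information
    from earlier inputs forward through the sequence."""
    h = np.tanh(Wxh @ x + Whh @ h)
    y = Why @ h
    p = np.exp(y - y.max())
    p /= p.sum()          # softmax over possible next notes
    return h, p

def sample(seed_idx, length):
    """Sampling: feed each generated note back in as the next input."""
    x = np.zeros(vocab_size)
    x[seed_idx] = 1.0     # one-hot encoding of the seed note
    h = np.zeros(hidden_size)
    out = [seed_idx]
    for _ in range(length - 1):
        h, p = step(x, h)
        idx = int(rng.choice(vocab_size, p=p))
        out.append(idx)
        x = np.zeros(vocab_size)
        x[idx] = 1.0      # the sampled note becomes the next input
    return out

melody = sample(seed_idx=0, length=8)
print(melody)
```

With a trained network and a real note vocabulary (e.g. MIDI pitches), the same loop is what produces an entirely new piece from a short seed.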

. . . .

After doing some research and learning more about using Recurrent Neural Networks to generate music, I found that it works pretty well. And it’s actually super sick.

. . . .

But it still doesn’t have that oomph to it, if you get what I mean. I don’t think this will be replacing the Mozart on my Spotify playlist any time soon. Although it’s super cool that this piece of music was generated entirely by a neural network, given the context, I think most people would be able to tell that it was composed either by a machine or by me.

Link to the rest at Medium

The OP includes a recording of the output of Alex’s neural network, the one that lacked oomph.

Alex eventually located a more advanced version of what he was trying to do called The MAESTRO Dataset and Wave2Midi2Wave.

Here’s an example of what this more sophisticated neural network system did, starting with a piece composed by Domenico Scarlatti. The entire recording below was created and synthesized via computer.

5 thoughts on “Generating Music With Artificial Intelligence”

  1. Not bad, but I wonder how well it would manage without Scarlatti’s input. The whole piece reminds me of what a snide contemporary said about Vivaldi: he didn’t write 600 concertos, but wrote the same concerto 600 times.

    I imagine a neural net could grind out another 600 Vivaldi concertos a lot faster than Vivaldi could, but the results probably would not be any more worth hearing than Vivaldi’s worst and most hackneyed pieces.

  2. This guy needs to read “The MacAuley Circuit” by Robert Silverberg, published in 1958. It’s about a virtuoso who thinks he’s been made obsolete by an AI music generator, but realizes it’s just a new type of instrument that can take music to a whole new level with his expert input.

    • No, although my discovery of the OP arose from the AI fiction discussion.

      I don’t necessarily believe that, if AI music is possible, AI fiction is automatically possible as well.

      • My browsers must be on the blink. 😉

        I agree; music is a very different creature from fiction.
        As Silverberg pointed out, a music converter black box is more of an instrument than a creator. At least until it no longer requires an input.

        Conceptually the OP isn’t very different from a Moog.
        Just software instead of hardware.
        (In both cases humans set the parameters of the sound stream.)
