Google’s AI Invents Sounds Humans Have Never Heard Before

This content has been archived. It may no longer be accurate or relevant.

From Wired:

Jesse Engel is playing an instrument that’s somewhere between a clavichord and a Hammond organ—18th-century classical crossed with 20th-century rhythm and blues. Then he drags a marker across his laptop screen. Suddenly, the instrument is somewhere else between a clavichord and a Hammond. Before, it was, say, 15 percent clavichord. Now it’s closer to 75 percent. Then he drags the marker back and forth as quickly as he can, careening through all the sounds between these two very different instruments.

“This is not like playing the two at the same time,” says one of Engel’s colleagues, Cinjon Resnick, from across the room. And that’s worth saying. The machine and its software aren’t layering the sounds of a clavichord atop those of a Hammond. They’re producing entirely new sounds using the mathematical characteristics of the notes that emerge from the two. And they can do this with about a thousand different instruments—from violins to balafons—creating countless new sounds from those we already have, thanks to artificial intelligence.

. . . .

Engel and Resnick are part of Google Magenta—a small team of AI researchers inside the internet giant building computer systems that can make their own art—and this is their latest project. It’s called NSynth, and the team will publicly demonstrate the technology later this week at Moogfest, the annual art, music, and technology festival, held this year in Durham, North Carolina.

The idea is that NSynth, which Google first discussed in a blog post last month, will provide musicians with an entirely new range of tools for making music. Critic Marc Weidenbaum points out that the approach isn’t very far removed from what orchestral conductors have done for ages—“the blending of instruments is nothing new,” he says—but he also believes that Google’s technology could push this age-old practice into new places. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead,” he says.

. . . .

Magenta is part of Google Brain, the company’s central AI lab, where a small army of researchers are exploring the limits of neural networks and other forms of machine learning.

. . . .

NSynth begins with a massive database of sounds. Engel and team collected a wide range of notes from about a thousand different instruments and then fed them into a neural network. By analyzing the notes, the neural net—several layers of calculus run across a network of computer chips—learned the audible characteristics of each instrument. Then it created a mathematical “vector” for each one. Using these vectors, a machine can mimic the sound of each instrument—a Hammond organ or a clavichord, say—but it can also combine the sounds of the two.

In addition to the NSynth “slider” that Engel recently demonstrated at Google headquarters, the team has also built a two-dimensional interface that lets you explore the audible space between four different instruments at once. And the team is intent on taking the idea further still, exploring the boundaries of artistic creation. A second neural network, for instance, could learn new ways of mimicking and combining the sounds from all those instruments. AI could work in tandem with AI.

Link to the rest at Wired
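The blending Wired describes, one learned vector per instrument plus a slider (or a two-dimensional pad) that mixes them, can be sketched in a few lines. This is a toy illustration, not Google’s code: the vectors and function names below are made up, and NSynth’s real embeddings are learned by its neural network from recorded notes rather than written by hand.

```python
def slider(a, b, t):
    """One-dimensional slider: blend fraction t of vector b into vector a.

    t = 0.15 corresponds to the '15 percent' position in the article;
    t = 0.75 to the '75 percent' position.
    """
    return [(1 - t) * ai + t * bi for ai, bi in zip(a, b)]


def pad(tl, tr, bl, br, x, y):
    """Two-dimensional pad over four corner instruments.

    Bilinear interpolation: blend along x across the top and bottom
    pairs, then along y between the two results.
    """
    top = slider(tl, tr, x)
    bottom = slider(bl, br, x)
    return slider(top, bottom, y)


# Hypothetical 4-dimensional embeddings; in NSynth these would be the
# vectors the neural net derives for each instrument's sound.
clavichord = [0.9, 0.1, 0.4, 0.0]
hammond = [0.2, 0.8, 0.1, 0.7]

mostly_clavichord = slider(clavichord, hammond, 0.15)
mostly_hammond = slider(clavichord, hammond, 0.75)
```

Dragging the marker back and forth, as Engel does, just sweeps `t` between 0 and 1; the blended vector at each position is decoded back into sound, which is why the result is a new timbre rather than two instruments layered on top of each other.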

What does this have to do with writing?

PG isn’t completely certain, but perhaps it could work as a writing prompt for a sci-fi novel.

The music of the spheres is a long-held idea that the relationships among various stars, planets, moons, and other celestial bodies create a form of music.

From Wikipedia:

The music of the spheres is an ancient philosophical concept that regards proportions in the movements of celestial bodies—the Sun, Moon, and planets—as a form of musica (the Medieval Latin term for music). This “music” is not usually thought to be literally audible, but a harmonic, mathematical or religious concept. The idea continued to appeal to thinkers about music until the end of the Renaissance, influencing scholars of many kinds, including humanists. Further scientific exploration has determined specific proportions in some orbital motion, described as orbital resonance.

Perhaps musica machina will become another form of relationship between machines and humans.

Or perhaps PG’s Diet Coke intake is out of sync with the universe today.

5 thoughts on “Google’s AI Invents Sounds Humans Have Never Heard Before”

  1. Lovecraftian: new instrument plays music that opens the spheres and releases things Man Was Not Meant To Know.

    Urban Fantasy: new instrument summons previously unknown type of eldritch spirit. Fans go wild, tour group forms and makes fortune.

    PNR: new instrument allows new sounds that accidentally solve communication problems between humans and [mythical or legendary creatures]. Romance ensues.

    • I can’t bring their names to mind at the moment – but I think they’ve all been done (perhaps not a “new instrument” as the central factor, though).

      Music has been intertwined with literature for a very long time.

Comments are closed.