
This Week in AI, February 15th, 2018

16 February 2018

PG chose this item because of its title. It frames something that sounds like magic to many people, Artificial Intelligence, as if it is commonplace and prosaic, something like “First Visitors This Season Arrive from Alpha Centauri.”

From Udacity:

Alex Irpan, a software engineer at Google, wrote an excellent article on the current difficulties of getting deep reinforcement learning to work. For example, even after weeks of tuning hyperparameters and exploration-exploitation rates, these models are still highly sensitive to initial conditions. A 30% failure rate is seen as “working.”
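Irpan’s point about sensitivity to initial conditions can be felt even on a toy problem. The sketch below is my own illustration, not from his article: it runs the same epsilon-greedy bandit learner with identical hyperparameters and changes nothing but the random seed, then reports how often the learner actually identifies the best arm.

```python
import random

def run_bandit(seed, n_arms=10, steps=200, epsilon=0.1):
    """Epsilon-greedy learner on a noisy 10-armed bandit.
    Returns True if the final greedy choice is the truly best arm."""
    rng = random.Random(seed)
    true_means = [rng.gauss(0, 1) for _ in range(n_arms)]
    best_arm = max(range(n_arms), key=lambda a: true_means[a])
    q = [0.0] * n_arms          # estimated value of each arm
    counts = [0] * n_arms
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)          # explore
        else:
            arm = max(range(n_arms), key=lambda a: q[a])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        q[arm] += (reward - q[arm]) / counts[arm]  # incremental mean
    return max(range(n_arms), key=lambda a: q[a]) == best_arm

# Identical algorithm, identical hyperparameters -- only the seed varies.
results = [run_bandit(seed) for seed in range(50)]
success_rate = sum(results) / len(results)
print(f"{success_rate:.0%} of runs found the best arm")
```

Even this trivial learner fails on some seeds; deep RL stacks far more sources of randomness (initialization, replay sampling, environment noise) on top of each other, which is how a 30% failure rate comes to count as “working.”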

Irpan argues that most attempts with deep RL fail, but no one talks about it publicly; we only see the few cases where the problems are simplified enough to be feasible. He’s optimistic, though. This is still a new field – the breakthrough Atari DQN paper was published only three years ago – so there is plenty of room for more research and advancement.

Link to the rest at Udacity


4 Comments to “This Week in AI, February 15th, 2018”

  1. I recently read an amazing article on Medium – about the human brain and how it functions. The title of the article is:

    ‘Your Cortex Contains 17 Billion Computers’

    The article goes on to explain, in layperson’s terms, why the title is not a ridiculous exaggeration.

    Mark Humphries, the author of the article, wasn’t thinking about AI when he wrote about 17 billion computers in the brain, but as a sci-fi writer, my brain immediately made the connection. How can any computer we are capable of imagining /right now/ possibly have the computing ‘power’ to compete against 17 billion?

    Maybe one day we’ll discover a different way of making computers. Maybe they’ll be so big they span the entire planet, and maybe then they’ll come close to that lump of grey matter between our ears. But it won’t happen soon.

    If anyone’s interested, the article is here:


    • A new way of making computers is already waiting in the wings: quantum computing. No one knows where it will go. I equate the state of quantum computing today to the state of binary digital computing a century ago, when Turing was contemplating its limits and von Neumann was thinking about making computers programmable. We don’t even know how best to talk about quantum algorithms yet.

      IBM has a public quantum computing simulator and thousands of developers are in a conversation on what to do with quantum computing and how to do it. Microsoft has a subsidiary building quantum computers.

      When I consider how isolated the pioneers of binary digital computing were, the level of collaboration and its potential staggers me. One possibility is that the cryptography that digital currencies rely on will become trivial to crack at will.

      Who knows? Quantum computing may fizzle like Google Glass, or it may soar like the cloud. Exciting times, these.

      • I know of quantum computing but I have no idea how it works. Perhaps it will trump our 17 billion, in which case we’ll all be out of work. 🙂

  2. Only if we let it.

    One thing people keep forgetting in all these “tech doomsday” scenarios is that it is humans who are the primary actors in all of them and it takes willful action to set them in motion.

    It’s a lot like the “killer robots!” scenarios that have been around forever. The only way you get a rampaging killer robot is if someone is stupid enough to build a killer robot in the first place. Which, alas, the Russians are actually doing.

    As for quantum computing, here:


    There are several ways it can be made to work, which is why it is so fuzzy a “technology”. There are actually as many different types of quantum computers as there are research efforts. It’s all in the approach taken.

    In some ways, a quantum computer is closer to an analog computer (or a slide rule) than to the digital computers that have dominated computing for the past 80 years or so.

    Think of it like writing a story by the “what happens next” method: you define the setting, the characters, and a trigger event. Then you figure out what happens, which defines a new set of conditions that triggers new actions and reactions in a chain until you reach the end. That is pretty much how digital computing works: a chain of calculations and branching decisions that sequentially computes “what happens next.” (With lots of fancy tricks and parallelism added in to speed execution.)

    Quantum computing can, theoretically, look at the initial scenario and figure out all possible outcomes simultaneously and discard all but the most likely, “correct” answer. Like reading the opening of a book and jumping to the epilogue.
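    The “all possible outcomes at once” intuition above can at least be sketched classically: an n-qubit register is described by 2^n amplitudes, and applying a Hadamard gate to every qubit of |0…0⟩ spreads the state evenly over all 2^n classical bitstrings. A minimal sketch follows (plain Python simulating the amplitudes, not real quantum hardware; it deliberately omits the interference step a real algorithm needs to concentrate probability on the correct answer):

```python
import math

def uniform_superposition(n_qubits):
    """Amplitudes after a Hadamard on every qubit of |0...0>:
    all 2^n classical bitstrings get equal amplitude 1/sqrt(2^n)."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = uniform_superposition(3)   # 8 basis states represented "at once"
probs = [a * a for a in state]     # Born rule: probability = |amplitude|^2
print(len(state), sum(probs))      # 8 outcomes, probabilities summing to 1
```

    Measuring this state just picks one of the eight outcomes uniformly at random, which is no better than guessing; the art of quantum algorithm design is choreographing interference so that the wrong answers cancel and the right one dominates before you measure.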

    It is doubtful anybody will be balancing a checkbook with a quantum computer but the technology should be a godsend to scientists and engineers dealing with system simulations.

    In the early days of computing, some analog computers were built to model specific systems, so that varying a parameter or two would show how the system responded to the changes. It was faster than running calculations by hand. Over time, digital computers became so much faster that building digital models is far more effective; in fact, most professions these days have dedicated model-building software that gets configured to simulate entire classes of systems. That is the most likely near-term use of quantum computers: simulating chemical reactions or weather systems or entire galaxies.

    Hard things.
    ‘Cause easy things can already be handled by digital computers fast and easy and dirt cheap. 🙂
