AI Simply Needs a Kill Switch


From The Wall Street Journal:

The best lesson for artificial intelligence may be Thursday’s “rapid unscheduled disassembly” of SpaceX’s Starship rocket, aborted four minutes after launch. With ChatGPT prompting speculation about mankind’s destruction, you should know that techies have obsessed seemingly forever over what’s known as the Paper Clip Theory—the idea that if you told an artificial-intelligence system to maximize the production of paper clips, it would fill the whole world with paper clips. Another version of the theory, Strawberry Fields Forever, has AI using every piece of available dirt to grow strawberries. Scary, right? So are “Halloween” movies.

Not to be outdone, decision theorist (huh?) Eliezer Yudkowsky recently wrote in Time magazine that the “most likely result of building a superhumanly smart AI” is that “literally everyone on Earth will die.” Literally everyone! That’s ludicrous, as is most clickbait these days. Sam Altman, CEO of ChatGPT creator OpenAI, told podcaster Lex Fridman, “There is some chance of that.” C’mon now.

Apparently, Pandora’s box has opened and is spewing its evils, which ignores all the good uses of large language models that will transform software coding, the workplace, education and more. Sadly, our geniuses in government appear to be the remaining few who still read Time magazine. So bring on the regulators to shut the box heroically.

Earlier this month, the Commerce Department initiated the process of regulating artificial intelligence, with Assistant Secretary Alan Davidson suggesting, “We know that we need to put some guardrails in place to make sure that they are being used responsibly.” Bad idea. Guardrails are for children bowling at birthday parties. AI is in its infancy, and we don’t yet know how it will change industries and society. Don’t freeze it now.

If the U.S. regulates AI, research will just move somewhere that doesn’t regulate it, maybe the Bahamas, where the unkempt coders of the future could keep cranking away. Or worse, China. Google CEO Sundar Pichai told CBS’s “60 Minutes” that he wants global AI regulation. Elon Musk and a group of AI experts wrote an open letter calling for an immediate six-month pause of “giant AI experiments.” Isn’t it interesting that those who need to catch up are pushing for a pause?

We don’t need onerous new regulations. There are already laws on the books for false advertising, copyright infringement and human endangerment. Do we really need bureaucrats who still use machines programmed in outdated Cobol to create regulations for a nascent technology they don’t understand? But to assuage worries, I would recommend one tiny rule for AI development: Include a kill switch.

Link to the rest at The Wall Street Journal

7 thoughts on “AI Simply Needs a Kill Switch”

  1. The Techlord of Mars and the Rationalist Harry Potter Fanficcer are worrying about “literal AI” scenarios seen in Colossus: The Forbin Project and the Terminator franchise. ChatGPT, Stable Diffusion, etc., are, strictly speaking, expert systems with no sapience or autonomy. Expert systems are only a threat to the dumb, either in the sense of taking their jobs or in the “Sorcerer’s Apprentice” sense of people not knowing when to second-guess the expert system, or how to shut it off.

  2. Yawn.
    More clueless pontificating.

    Simple fact:

    We’re already up to our ears in “AI”.

    Lost in all the pearl-clutching over GPT apps is that LLMs are not the only form of machine-learning software. They are neither the first commercially useful form nor likely to be the most important any time soon.

    That title belongs to machine vision and the subsets of image analysis and pattern recognition. Both are extremely useful in the sciences and image processing.

    Those fancy cellphone cameras that can spit out beautiful pictures despite phone shake, dubious lighting, and misaligned lenses? “AI,” all of it.

    Ditto for the software that found the remnants of yet another human variant in ancient North Africa, or the versions that identify buried cities in the jungles or under the sands, or that identify proteins potentially useful for drugs. Or… pick your subject, look closely, and anything that relies on software is already evolving through machine learning. After all, what is big data for?

    And these systems are already supplementing and/or replacing humans all over. Right down to your supermarket or membership stores. I ran into them last month at SAMS. And that’s in PR. Folks just don’t notice ’cause they look like floor cleaners a lot of the time. Which are also driven by “AI”.

    Here, check this:
    https://m.youtube.com/watch?v=LscylAtY98U&pp=ygUEY25iYw%3D%3D

    A tedious, almost impossible job to get right can be assigned to a human at $30K a year or a robot at $400 a month. Hard choice, huh? And it does the floors while it’s at it.

    Pretty much every advanced control system in use these days is based on one form of “AI” or another. Anti-skid systems and self-parking on cars? No car elves under the hood, just semiconductors and software. Alexa and Siri and Cortana? Software and semiconductors.

    Want to see something awesome? Look online for videos of the SpaceX boosters self-landing on barges at sea after launching astronauts to the space station. And guess who controls the ship all the way to docking? One chance to guess. For triple the fun, look up the video of the Falcon Heavy simultaneously landing *three* boosters, one at sea and two side by side at Cape Canaveral.

    Last week, SpaceX launched the first full test version of the in-development Starship system. As almost always happens on first launches of an entirely new system, several things went wrong and it had a “rapid unscheduled disassembly,” a.k.a. “kaboom.” Not a system failure, though, because the humans pushed a button. Like all well-designed systems, their rockets all come with a kill switch.

    Indeed, all the above mentioned uses come with kill switches.

    No idiot politician or legal eagle had to order them, because all software comes with a kill switch, variously known as “reset,” “three-finger salute,” “power off,” or “pull the plug.”
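    In software, that kill switch is usually explicit in the code itself: a flag the running task checks so it can be stopped cleanly from outside. A minimal Python sketch of the pattern (the names `worker` and `stop` are illustrative, not from any particular system):

    ```python
    import threading
    import time

    def worker(stop: threading.Event) -> None:
        # Long-running task that checks its kill switch between units of work.
        while not stop.is_set():
            # ... do one unit of work here ...
            stop.wait(timeout=0.01)  # sleep briefly, but wake at once if stopped

    stop = threading.Event()
    t = threading.Thread(target=worker, args=(stop,))
    t.start()

    time.sleep(0.05)   # let it run for a bit
    stop.set()         # flip the kill switch
    t.join(timeout=1)
    print(t.is_alive())  # the worker has exited
    ```

    The hardware versions (reset button, power switch) are just blunter implementations of the same idea.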

    We’ve been living with various forms of “AI” since the last century.
    (Look up SAGE – Semi-Automatic Ground Environment – for Skynet’s grandpa and Colossus’ daddy. And a real thing, from the 1950s.)

    All these folks are doing is showing off their ignorance because the new software variant is encroaching on their ivory towers. And the things they fret over have long since been dealt with in the real world. Kill switch and all.

    Ctrl-Alt-Del.

    What’s that old saying? “Better to remain silent and be thought a fool than to speak and to remove all doubt.” Abraham Lincoln.

    A lot of the latter going around.

    —-
    (The quote really is not Lincoln’s. It is actually a Bible verse. Two for that matter. Proverbs 17:28 and Proverbs 12:23. “Even fools who keep silent are considered wise; when they close their lips, they are deemed intelligent.” Proverbs 17:28. “One who is clever conceals knowledge, but the mind of a fool broadcasts folly.” Proverbs 12:23.)

  3. There are already laws on the books for false advertising, copyright infringement and human endangerment.

    And exactly how often do those sorts of things deter actual misconduct? Hmm, does “human endangerment” include, say, broadcasting election denialism as fact, as a WSJ corporate affiliate just settled a matter for money? How about the — putting it as nicely as I can, since this is a family-friendly publication — dubious history of both the publication itself and its affiliates in all aspects of “copyright infringement,” as both alleged infringer and alleged rightsholder?

    My point is only that “civil liability” can be part of a picture for regulating unwanted activity. It can be an extremely important part. It’s never a complete answer because someone can always decide that a horribly destructive activity is merely “the price of doing business” — ask the Sackler family, or any tobacco heirs, or… Yes, there are significant risks from going too far the other direction, too; I just think the OP’s perspective is, well, warped by its institutional imperative of knowing the price of everything and the cost of nothing.

Comments are closed.