The Real Threat From A.I. Isn’t Superintelligence. It’s Gullibility.

From Slate:

The rapid rise of artificial intelligence over the past few decades, from pipe dream to reality, has been staggering. A.I. programs have long been chess and Jeopardy! champions, but they have also conquered poker, crossword puzzles, Go, and even protein folding. They power the social media, video, and search sites we all use daily, and very recently they have leaped into a realm previously thought unimaginable for computers: artistic creativity.

Given this meteoric ascent, it’s not surprising that there are continued warnings of a bleak Terminator-style future of humanity destroyed by superintelligent A.I.s that we unwittingly unleash upon ourselves. But when you look beyond the splashy headlines, you’ll see that the real danger isn’t how smart A.I.s are. It’s how mindless they are—and how delusional we tend to be about their so-called intelligence.

Last summer an engineer at Google claimed that the company’s latest A.I. chatbot was a sentient being because … it told him so. This chatbot, similar to the one Facebook’s parent company recently released publicly, can indeed give you the impression you’re talking to a futuristic, conscious creature. But this is an illusion—it is merely a calculator that chooses words semi-randomly based on statistical patterns from the internet text it was trained on. It has no comprehension of the words it produces, nor does it have any thoughts or feelings. It’s just a fancier version of the autocomplete feature on your phone.

Chatbots have come a long way since early primitive attempts in the 1960s, but they are no closer to thinking for themselves than they were back then. There is zero chance a current A.I. chatbot will rebel in an act of free will—all they do is turn text prompts into probabilities and then turn these probabilities into words. Future versions of these A.I.s aren’t going to decide to exterminate the human race; they are going to kill people when we foolishly put them in positions of power that they are far too stupid to have—such as dispensing medical advice or running a suicide prevention hotline.
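
To make "turn text prompts into probabilities and then turn these probabilities into words" concrete, here is a minimal Python sketch of autocomplete-style generation. It uses a toy bigram table with made-up probabilities; real chatbots operate at vastly larger scale, but the basic move, weighted dice rolls over next-word probabilities, is the same.

```python
import random

# Toy "language model": for each word, a made-up table of next-word
# probabilities, standing in for patterns counted from internet text.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "barked": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_word(word):
    """Pick the next word semi-randomly, weighted by its probability."""
    candidates = BIGRAM_PROBS.get(word)
    if not candidates:
        return None  # no pattern recorded for this word: stop
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def complete(prompt, max_words=8):
    """Autocomplete-style generation: no thoughts, just weighted sampling."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(complete("the"))  # e.g. "the dog ran away"
```

Nothing in that loop understands English; it only looks up frequencies and rolls dice, which is the point the excerpt is making.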

It’s been said that TikTok’s algorithm reads your mind. But it’s not reading your mind—it’s reading your data. TikTok finds users whose viewing histories are similar to yours and selects videos for you that they’ve watched and interacted with favorably. It’s impressive, but it’s just statistics. Similarly, the A.I. systems used by Facebook, Instagram, and Twitter don’t know what information is true, what posts are good for your mental health, or what content helps democracy flourish—all they know is what you and others like you have done on the platform in the past, and they use this data to predict what you’ll likely do there in the future.
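
The "just statistics" above can be made concrete too. Below is a minimal Python sketch of user-based collaborative filtering, the find-similar-users-and-recommend-what-they-liked logic the paragraph describes. The histories and scoring here are invented for illustration; this is not TikTok's actual system.

```python
from collections import Counter

# Invented watch histories: user -> set of video IDs they engaged with.
HISTORIES = {
    "you":   {"v1", "v2", "v3"},
    "alice": {"v1", "v2", "v4", "v5"},
    "bob":   {"v2", "v3", "v4"},
    "carol": {"v7", "v8"},
}

def similarity(a, b):
    """Jaccard overlap: how much two users' histories coincide."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user, k=3):
    """Score videos the user hasn't seen by how similar their fans are."""
    seen = HISTORIES[user]
    scores = Counter()
    for other, history in HISTORIES.items():
        if other == user:
            continue
        sim = similarity(seen, history)
        if sim == 0:
            continue  # ignore users with no overlap at all
        for video in history - seen:
            scores[video] += sim
    return [video for video, _ in scores.most_common(k)]

print(recommend("you"))  # e.g. ['v4', 'v5'] -- pure overlap counting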

Don’t worry about superintelligent A.I.s trying to enslave us; worry about ignorant and venal A.I.s designed to squeeze every penny of online ad revenue out of us.

And worry about police agencies that gullibly think A.I.s can anticipate crimes before they occur—when in reality all they do is perpetuate harmful stereotypes about minorities.

The reality is that no A.I. could ever harm us unless we explicitly provide it the opportunity to do so—yet we seem hellbent on putting unqualified A.I.s in powerful decision-making positions where they could do exactly that.

Part of the reason we ascribe far greater intelligence and autonomy to A.I.s than they merit is that their inner workings are largely inscrutable. They involve lots of math, lots of computer code, and billions of parameters. This complexity blinds us, and our imagination fills in what we don’t see with more than is actually there.

Link to the rest at Slate

3 thoughts on “The Real Threat From A.I. Isn’t Superintelligence. It’s Gullibility.”

  1. Computers do what they do – very fast. Program them to do something ‘stupid’ and you will get ‘stupid’ very fast.

    You’re more likely to teach an ape to think than a machine. But both need to be supervised, because of the buttons humans have built that can be pushed.

  2. The thing the excerpt gets right – “AI” isn’t intelligence, it’s data interpolation or statistics (take your pick). Interpolation works well within the conditions it has already seen, but can’t extrapolate to new ones (heck, a lot of people can’t, either).

    When things change in a major way, AI isn’t reliable, just as many pollsters have a tough time when voting coalitions are shifting around and likely voters don’t want to talk to pollsters.

    • re: pollsters – And not just in the US. (As they just learned in Brazil, among other countries.)
      People have been taught not to trust organizations (mostly by those very orgs), so most have learned to keep their private thoughts private.
