AI risk ≠ AGI risk


From The Road to AI We Can Trust:

Is AI going to kill us all? I don’t know, and you don’t either.

But Geoff Hinton has started to worry, and so have I. I’d heard about Hinton’s concerns through the grapevine last week, and he acknowledged them publicly yesterday.

Amplifying his concerns, I posed a thought experiment:

Soon, hundreds of people, even Elon Musk, chimed in.

. . . .

It’s not often that Hinton, Musk, and I are even in partial agreement. Musk and I also both signed a letter of concern from the Future of Life Institute [FLI] earlier this week, which is theoretically embargoed until tomorrow but is easy enough to find.

I’ve been getting pushback and queries ever since I posted the Hinton tweet. Some thought I had misinterpreted the Hinton tweet (given my independent sourcing, I am quite sure I didn’t); others complained that I was focusing on the wrong set of risks (either too much on the short term, or too much on the long term).

One distinguished colleague wrote to me asking “won’t this [FLI] letter create unjustified fears of imminent AGI, superintelligence, etc?” Some people were so surprised by my amplifying Hinton’s concerns that a whole Twitter thread popped up speculating about my own beliefs:

My beliefs have not in fact changed. I still don’t think large language models have much to do with superintelligence or artificial general intelligence [AGI]; I still think, with Yann LeCun, that LLMs are an “off-ramp” on the road to AGI. And my scenarios for doom are perhaps not the same as Hinton’s or Musk’s; theirs (from what I can tell) seem to center mainly on what happens if computers rapidly and radically improve themselves, which I don’t see as an immediate possibility.

But here’s the thing: although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. A company called Adept.AI just raised $350 million to do exactly that: to allow large language models to access, well, pretty much everything (aiming to “supercharge your capabilities on any software tool or API in the world” with LLMs, despite their clear tendencies towards hallucination and unreliability).

Lots of ordinary humans, perhaps of above-average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access.

Link to the rest at The Road to AI We Can Trust