AN open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks it may pose can be properly studied.
It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilization.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” states the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal considered a pioneer of modern AI, historian Yuval Noah Harari, Skype cofounder Jaan Tallinn, and Twitter CEO Elon Musk.
The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.
Microsoft and Google did not respond to requests for comment on the letter. The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.
The letter comes as AI systems make increasingly bold and impressive leaps. GPT-4 was only announced two weeks ago, but its capabilities have stirred up considerable enthusiasm and a fair amount of concern. The language model, which is available via ChatGPT, OpenAI’s popular chatbot, scores highly on many academic tests, and can correctly solve tricky questions that are generally thought to require more advanced intelligence than AI systems have previously demonstrated. Yet GPT-4 also makes plenty of trivial, logical mistakes. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be prompted to say hateful or potentially harmful things.
Part of the concern expressed by the signatories of the letter is that OpenAI, Microsoft, and Google have begun a profit-driven race to develop and release new AI models as quickly as possible. At such a pace, the letter argues, developments are happening faster than society and regulators can come to terms with them.
The pace of change—and scale of investment—is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its search engine Bing as well as other applications. Although Google developed some of the AI needed to build GPT-4, and previously created powerful language models of its own, until this year it chose not to release them due to ethical concerns.
Link to the rest at Wired
PG acknowledges that capitalism is often a bit messy. However, in PG’s stridently humble opinion, the speed at which capitalist enterprises can discover, develop, and enhance technology easily offsets its messiness.
Let’s assume, arguendo, that the development of artificial intelligence is governed by a distinguished panel of extremely intelligent academic persons. Only the most esteemed professors of engineering, the arts, and the humanities would be selected for this panel.
How quickly would AI develop?
If AI has the potential to improve military effectiveness in major ways, how quickly would military AI systems be developed?
PG suggests that a mad scramble of capitalist enterprises will do a better job of both developing AI applications and discovering potential pitfalls than any organized authoritative body would.
As for the apocalyptic potential of AI gone wild, PG has lived most of his life in a world in which large numbers of nuclear weapons have been set for hair-trigger release by the United States, Russia, China, and a bunch of other nations.
Which is more likely to do horrible damage to humanity, AI or nuclear weapons? The fact that nuclear weapons have been around for a relatively long time and a holocaust hasn’t happened is no guarantee that they will continue to be unused.
If all democratic nations band together to place tight control on AI research and applications, who will develop weaponized AI first? 1. China, 2. Russia or 1. Russia, 2. China?
By its very nature, disruptive technology shakes things up and disturbs the status quo. This happened with the telegraph, the telephone, radio, television, automobiles, airplanes, computers, etc., etc., etc.
PG posits that, at the time these technologies and a great many more were discovered, thinking people worried about the adverse consequences they might bring. In every case, PG is certain that some thinking people could point out harm that might be caused by the improper use of any of these technologies. But society pushed forward regardless.
If you want to examine and/or sign the Letter promoting the pause in AI development, you can find it here.
3 thoughts on “In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT”
I’m not sure how one bans any kind of software development.
Can humans control “AI”?
I feel inclined to invoke both Clarke’s First and Second Laws:
Clarke’s First Law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
Clarke’s Second Law: The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
Methinks the letter signers need to get over themselves.
They’re buying too much into the hype.
Like Goldman Sachs is “concerned” that “AI” might affect 300 million jobs.
PCs have affected a billion jobs.
The Internet, billions of *people* in thousands of ways.
And? Did the world come to an end?
GPT software is just that: software. The next evolution of data processing.
It works with words. Not ideas. Not actions. It has no initiative or agency. Those are still the prerogative of humans.
Now, if anybody develops software that can initiate a conversation on its own, set its own agenda, *and* runs its hardware independently then maybe you might want to think of pulling out the power cord.
Until then AI is just a tool and fodder for SF&F.
Spooked as these folks are, nobody that matters is taking their letter seriously.
Not OpenAI (whose CEO pointedly rejected the idea), not Microsoft (who see GPT code as their way to disrupt Google while Google is weak), not Google (who are rushing to show something vaguely comparable to GPT and embarrassing themselves repeatedly in the process), and not the IdiotPoliticians™ the “eminences” think might get a handle on the tech in six months!
Right. They haven’t figured out the internet in 25 years and they’re going to figure out GPT in six months in the middle of the Identity wars, an emerging economic crisis, with an ongoing shooting war and two more on the horizon?
Yup, they’re going to drop everything else to figure out how to “tame” Chatbots in six months?
Is it a new era? Maybe. Maybe not.
But somebody needs to remind them of Bill Gates’s most quotable line (Gates’s Law? Amara’s Law?): people overestimate the short-term impact of new tech and underestimate the long-term impact once reality sets in.
In the 1960s, Roy Amara told colleagues that he believed that “we overestimate the impact of technology in the short-term and underestimate the effect in the long run.” For this reason, variations on that phrase are often known as Amara’s Law. However, Bill Gates made a similar statement (possibly paraphrasing Amara), so it’s also known as Gates’s Law.
Technologies’ impacts compound over time. But it takes time. The world isn’t ending in 2023. And it’s not ending over “AI”. Not soon.
All this hyperventilation will peter out in about the same six months they want to wait. By then we’ll be up to our necks in bank closures or shooting wars or famines or even worse political infighting.
And by then the immediate impact of chatbots will only be a concern of Google and a dozen startups on the way to becoming roadkill in the next age of computing. None of which should expect the mythical “even break” the letter wants. In that arena Microsoft is in charge (for now) and they don’t give even breaks to roadkill.
The tech world is Darwinian.
Anybody remember “Internet Time”?
GPT-3.5 came out last fall. GPT-4 a couple of weeks back.
Well, GPT-5 is due later this year.
And the hype wave is already building.
As expected, before the “prompt engineers” fully figure out Version 4, Version 5 will be out.
Another Red Queen Race is upon us.