AI Doomers Take Center Stage at the UK’s AI Summit

From Bloomberg via Yahoo Finance:

A fierce debate over how much to focus on the supposed existential risks of artificial intelligence defined the kickoff of the UK’s AI Safety Summit on Wednesday, highlighting broader tensions in the tech community as lawmakers propose regulations and safeguards.

Tech leaders and academics attending the Summit at Bletchley Park, the former home of secret World War II code-breakers, disagreed over whether to prioritize immediate risks from AI — such as fueling discrimination and misinformation — versus concerns that it could lead to the end of human civilization.

Some attendees openly worried so-called AI doomers would dominate the proceedings — a fear compounded by news that Elon Musk would appear alongside British Prime Minister Rishi Sunak shortly after the billionaire raised the specter of AI leading to “the extinction of humanity” on a podcast. On Wednesday, the UK government also unveiled the Bletchley Declaration, a communique signed by 28 countries warning of the potential for AI to cause “catastrophic harm.”

“I hope that it doesn’t get dominated by the doomer, X-risk, ‘Terminator’-scenario discourse, and I’ll certainly push the conversation towards practical, near-term harms,” said Aidan Gomez, co-founder and chief executive officer of AI company Cohere Inc., ahead of the summit.

Top tech executives spent the week trading rhetorical blows over the subject. Meta Platforms Inc.’s chief AI scientist Yann LeCun accused rivals, including DeepMind co-founder Demis Hassabis, of playing up existential risks of the technology in an attempt “to perform a regulatory capture” of the industry. Hassabis then hit back in an interview with Bloomberg on Wednesday, calling the criticisms preposterous.

On the summit’s fringes, Ciaran Martin, the former head of the UK’s National Cyber Security Center, said there’s “genuine debate between those who take a potentially catastrophic view of AI and those who take the view that it’s a series of individual, sometimes-serious problems, that need to be managed.”

“While the undertones of that debate are running through all of the discussions,” Martin said, “I think there’s an acceptance from virtually everybody that the international, public and private communities need to do both. It’s a question of degree.”

In closed-door sessions at the summit, there were discussions about whether to pause the development of next-generation “frontier” AI models and the “existential threat” this technology may pose “to democracy, human rights, civil rights, fairness, and equality,” according to summaries published by the British government late Wednesday.

Between seminars, Musk was “mobbed” and “held court” with delegates from tech companies and civil society, according to a diplomat. But during a session about the risks of losing control of AI, he quietly listened, according to another attendee, who said the seminar was nicknamed the “Group of Death.”

Matt Clifford, a representative of the UK Prime Minister who helped organize the summit, tried to square the circle and suggest the disagreement over AI risks wasn’t such a dichotomy.

“This summit’s not focused on long-term risk; this summit’s focused on next year’s models,” he told reporters on Wednesday. “How do we address potentially catastrophic risks — as it says in the Bletchley Declaration — from those models?” he said. “The ‘short term, long term’ distinction is very often overblown.”

By the end of the summit’s first day, there were some signs of a rapprochement between the two camps. Max Tegmark, a professor at the Massachusetts Institute of Technology who previously called to pause the development of powerful AI systems, said “this debate is starting to melt away.”

Link to the rest at Yahoo Finance

3 thoughts on “AI Doomers Take Center Stage at the UK’s AI Summit”

    • And this is their response:

      https://www.msn.com/en-us/news/politics/why-biden-s-ai-executive-order-only-goes-so-far/ar-AA1jeG3F?cvid=a0d2383bf69d45eda8ee70834a1bf31d&ei=6

      Note:
      “…the requirements will apply to models that are trained using an amount of computational power above a set threshold of 100 million billion billion operations. No AI models have yet been trained using this much computing power. OpenAI’s GPT-4, the most capable publicly available AI model, is estimated by research organization Epoch to have been trained with five times less than this amount.”

      …and:

“A Biden Administration official said that the threshold was set such that current models wouldn’t be captured but the next generation of state-of-the-art models likely would, according to Scharre, who also attended the briefing.”

      “Computational power is a “crude proxy” for the thing policymakers are really concerned about—the model’s capabilities—says Scharre. But Kaushik points out that setting a compute threshold could create an incentive for AI companies to develop models that achieve similar performance while keeping computational power under the threshold, particularly if the reporting requirements threaten to compromise trade secrets or intellectual property.”
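The quoted threshold — “100 million billion billion operations” — works out to 10^26, and the article’s Epoch estimate puts GPT-4 about five times below it. A quick back-of-the-envelope sketch of that arithmetic (all figures are the article’s estimates, not official numbers):

```python
# "100 million billion billion" operations = 1e8 * 1e9 * 1e9 = 1e26
threshold = 100 * 10**6 * 10**9 * 10**9
assert threshold == 10**26

# Per the quoted Epoch estimate, GPT-4's training compute was roughly
# five times below the reporting threshold.
gpt4_estimate = threshold // 5  # ~2e25 operations

print(f"EO threshold:   {threshold:.1e} ops")
print(f"GPT-4 (est.):   {gpt4_estimate:.1e} ops")
print(f"Headroom:       {threshold // gpt4_estimate}x")
```

So, as Kaushik’s point suggests, a lab could in principle train a model with substantially more compute than GPT-4 and still sit under the reporting line.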

TL;DR: The EO won’t change anything in the real world, but it will look good to the supportive media. And it’ll help Ethan Hunt.
