How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI

From The Wall Street Journal:

Over the past few years, the social movement known as effective altruism has divided employees and executives at artificial-intelligence companies across Silicon Valley, pitting believers against nonbelievers.

The blowup at OpenAI showed its influence—and the triumphant return of chief executive Sam Altman revealed hard limits, capping a bruising year for the divisive philosophy.

Coming just weeks after effective altruism’s most prominent backer, Sam Bankman-Fried, was convicted of fraud, the OpenAI meltdown delivered another blow to the movement, which believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age—and that failure to craft them correctly could have apocalyptic consequences.

OpenAI, which released ChatGPT a year ago, was formed in part on the principles of effective altruism, a broad social and moral philosophy that influences the AI research community in Silicon Valley and beyond. Some followers live in private group homes, where they can brainstorm ideas, engage in philosophical debates and relax playing a four-person variant of chess known as Bughouse. The movement includes people devoted to animal rights and climate change, drawing ideas from rationalist philosophers, mathematicians and forecasters of the future.

Supercharged by hundreds of millions of dollars in tech-titan donations, effective altruists believe a headlong rush into artificial intelligence could destroy mankind. They favor safety over speed for AI development. The movement, which includes people who helped shape the generative-AI boom, is insular and multifaceted but shares a belief in doing good in the world—even if that means simply making a lot of money and giving it to worthy recipients.

Altman, who was fired by the board Friday, clashed with the company’s chief scientist and board member Ilya Sutskever over AI-safety issues that mirrored effective-altruism concerns, according to people familiar with the dispute.

Voting with Sutskever, who led the coup, were board members Tasha McCauley, a tech executive and board member for the effective-altruism charity Effective Ventures, and Helen Toner, an executive with Georgetown University’s Center for Security and Emerging Technology, which is backed by a philanthropy dedicated to effective-altruism causes. They made up three of the four votes needed to oust Altman, people familiar with the matter said. The board said he failed to be “consistently candid.”

The company announced Wednesday that Altman would return as chief executive and Sutskever, McCauley and Toner would be replaced. Emmett Shear, a tech executive who favored a slowdown in AI development and had been recruited as interim CEO, was out.

Altman’s dismissal had triggered a company revolt that threatened OpenAI’s future. More than 700 of about 770 employees had called for Altman’s return and threatened to jump ship to Microsoft, OpenAI’s biggest investor. Sutskever said Monday he regretted his vote.

“OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence,” venture capitalist and OpenAI investor Vinod Khosla wrote in an opinion piece for The Information.

Altman toured the world this spring warning that AI could cause serious harm. He also called effective altruism an “incredibly flawed movement” that showed “very weird emergent behavior.”

The effective-altruism community has spent vast sums promoting the idea that AI poses an existential risk. But it was the release of ChatGPT that drew broad attention to how quickly AI had advanced, said Scott Aaronson, a computer scientist at the University of Texas at Austin, who works on AI safety at OpenAI. The chatbot’s surprising capabilities worried people who had previously brushed off concerns, he said.

The movement has spread among the armies of tech-industry scientists, investors and executives racing to create AI systems to mimic and eventually surpass human ability. AI can bring global prosperity, but it first must be prevented from wreaking havoc, according to those in the movement.

. . . .

Google and other companies are trying to be the first to roll out AI systems that can match the human brain. They largely regard artificial intelligence as a tool to advance work and economies at great profit.

The movement’s high-profile supporters include Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, the billionaire co-founder of Skype, who have pledged billions of dollars to effective-altruism research. Before his fall, Bankman-Fried had also pledged billions. Elon Musk has called the writings of effective altruism’s co-founder William MacAskill “a close match for my philosophy.”

Marc Andreessen, the co-founder of venture-capital firm Andreessen Horowitz, and Garry Tan, chief executive of the startup incubator Y Combinator, have criticized the movement. Tan called it an insubstantial “virtue signal philosophy” that should be abandoned to “solve real problems that create human abundance.”

The urgent fear among effective altruists that AI will destroy humanity “clouds their ability to take in critique from outside the culture,” said Shazeda Ahmed, a researcher who led a Princeton University team that studied the movement. “That is never good for any community trying to solve any trenchant problem.”

The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future.

. . . .

One fall day last year, thousands of paper clips in the shape of OpenAI’s logo arrived at the company’s San Francisco office. No one seemed to know where they were from, but everybody knew what they meant.

The paper clip has become a symbol of doom in the AI community. The idea is that an artificial-intelligence system told to build as many paper clips as possible might destroy all of humanity in its drive to maximize production.

The prank was pulled by an employee at crosstown rival Anthropic, which itself sprang from divisions over AI safety.

Dario Amodei, OpenAI’s top research scientist, split from the company in early 2021, joined by several other executives. They started Anthropic, an AI research company friendly to effective altruists.

Bankman-Fried had been one of Anthropic’s largest investors and supported the company’s mission, which favored AI safety over growth and profits. 

. . . .

The fear of futuristic AI systems hasn’t stopped even those worried about safety from trying to build artificial general intelligence, or AGI—advanced systems that match or outdo the human brain.

At OpenAI’s holiday party last December, Sutskever addressed hundreds of employees and their guests at the California Academy of Sciences in San Francisco, not far from the museum’s dioramas of stuffed zebras, antelopes and lions.

“Our goal is to make a mankind-loving AGI,” said Sutskever, the company’s chief scientist.

“Feel the AGI,” he said. “Repeat after me. Feel the AGI.”

Effective altruists say they can build safer AI systems because they are willing to invest in what they call alignment: making sure employees can control the technology they create and ensuring it comports with a set of human values. So far, no AI company has said what those values should be.

At Google, the merging this year of its two artificial intelligence units—DeepMind and Google Brain—triggered a split over how effective-altruism principles are applied, according to current and former employees.

DeepMind co-founder Demis Hassabis, who has long hired people aligned with the movement, is in charge of the combined units.

Google Brain employees say they have largely ignored effective altruism and instead explore practical uses of artificial intelligence and the potential misuse of AI tools, according to people familiar with the matter.

One former employee compared the merger with DeepMind to a forced marriage, “making many people squirm at Brain.”

. . . .

Arjun Panickssery, a 21-year-old AI safety researcher, lives with other effective altruists at Andromeda House, a five-bedroom, three-story home a few blocks from the University of California, Berkeley campus.

They host dinners, and visitors are sometimes asked to reveal their P(doom)—estimates of the chances of an AI catastrophe. 

Berkeley, Calif., is an epicenter of effective altruism in the Bay Area, Panickssery said. Some houses designate “no-AI” zones to give people an escape from constant talk about artificial intelligence. 

Open Philanthropy’s then-CEO Holden Karnofsky had once lived with two senior OpenAI executives, according to Open Philanthropy’s website. Since 2015, Open Philanthropy, a nonprofit that supports effective-altruism causes, has given away $327 million to AI-related causes, including $30 million to OpenAI, its website shows.

When Karnofsky was engaged to Daniela Amodei, now Anthropic’s president, they were roommates with Amodei’s brother Dario, now Anthropic’s CEO.

In August 2017, Karnofsky and Daniela Amodei married in an effective-altruism-themed ceremony. Wedding guests were encouraged to donate to causes recommended by Karnofsky’s effective-altruism charity, GiveWell, and to read a 457-page tome by German philosopher Jürgen Habermas beforehand.

“This is necessary context for understanding our wedding,” the couple wrote on a website for the event.

. . . .

The effective-altruism movement dates back roughly two decades, to when a group of Oxford University philosophers and those they identified as “super-hardcore do-gooders” were looking for a marketing term to promote their utilitarian version of philanthropy.

Adherents believe in maximizing the amount of good they do with their time. They can earn as much money as possible, then give much of it away to attack problems that government and traditional nonprofits are ignoring or haven’t solved. They focus on ideas that deliver the biggest impact or help the largest number of people per dollar spent. 

Bankman-Fried, who was convicted this month, said he was building his fortune only to give most of it away.

. . . .

The gatherings and events, held around the world, are often closed to outsiders. Organizers of a recent effective-altruism conference in New York declined the request of a Wall Street Journal reporter to attend, saying in an email that there was “a high bar for admissions.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Intelligence is the ability to learn from your mistakes. Wisdom is the ability to learn from the mistakes of others.

Author Unknown

He’s an intelligent man, but it takes something more than intelligence to act intelligently.

Fyodor Dostoevsky, Crime and Punishment

Hubris is interesting, because you get people who are often very clever, very powerful, have achieved great things, and then something goes wrong – they just don’t know when to stop.

Margaret MacMillan

Hubris and science are incompatible.

Douglas Preston

Hubris calls for nemesis, and in one form or another it’s going to get it, not as a punishment from outside but as the completion of a pattern already started.

Mary Midgley

8 thoughts on “How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI”

  1. What’s really sad is that “effective altruism” is just a variation on “maximizing profit”: They both presume that the ends justify the means. Both of them pretend, though, that “their” ends are the only just/rightful ones.

    Methinks we’ve fought a few wars over that.

    • And a few more to come, fairly soon, too.
      (Keep an eye on Guyana.)

      However, the OpenAI mess wasn’t about how to maximize profit. Remember that the *staff* threatened to quit en masse, leaving behind *stock options*. Rather, the board was upset that ChatGPT made *any* money, that it was successful, and that the tech is rapidly moving into the mainstream of the enterprise market. In effect they wanted to minimize profits and the return to the investors paying for their precious research.

      They wanted to eat their cake and have it too. That might work in China where they pay with endless monopoly money (or in Berkeley) but not in silivalley.

  2. I’m grateful for the few years in Silicon Valley that made all the subsequent news articles I read more grounded. Not happy, however, that I aged out of employment considerations there so very very quickly.

    Plus I hated the “go left, young man” attitude about the place. Couldn’t find a sane historically literate person much of anywhere.

    • It is actually sad to see a place so dependent on sound market economics being run by such clueless ideologues.

      Corporate publishing is somewhat understandable given their roots but the silivalley crowd are supposed to be worldly enough to understand basic economics. No wonder Seattle and Austin are taking over the lead in high tech moving forward.

      By all reports, at least half the OpenAI board should never have been there to start with. At least Summers and Taylor know their way around tech company boardrooms but it remains to be seen who else joins them long term. Hopefully somebody who understands the difference between control system AIs and symbol manipulation AIs.

    • I suspect that part of the reason is the founder effect–the more conservative tech types stayed home, the liberal ones packed up and left and went to Silicon Valley, and so when the neutrals went there they ended up taking after the people around them and going left.

  3. Good exposé of EA, the *other* cult infiltrating the California business world along with DEI.

    However, it skips over the two direct events preceding the “Days of OpenAI” soap opera (one of which happened Monday) and the entire role of Microsoft CEO Nadella, all of which are critical to understanding what OpenAI is likely to become.

    First, though, a bit of history:
    OpenAI was founded as a not-for-profit research group by a pair of academics, Altman, and Musk. Musk put in $100M. After a few years they discovered the project required *big* money. Musk offered to buy the group but was rejected; instead, they set up a “capped profit” subsidiary to solicit venture capital, with the promise to split any profits up to 100x investment with the money guys and the rest going to fund more research. That is the OpenAI behind the GPT products. It has gone through several rounds of funding, giving investors shares in the subsidiary and, like rational silivalley startups, stock options for the staff.

    By the time ChatGPT was released, Microsoft had chipped in $3B. ChatGPT exploded, which presented a problem and an opportunity: there was a market for their tech that could help fund the ongoing research, but running the thing burns money like crazy ($0.05 in datacenter costs per query, at retail). Early this year, MS offered up $10B for 49% of OpenAI revenues in return for full privileged access to the tech, source code, and data. (Not widely discussed is that an unspecified part of that $10B isn’t money, but rather free/discounted datacenter support–all GPT software runs exclusively on AZURE.)
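    To put the capped-profit arithmetic in perspective, here’s a quick back-of-the-envelope sketch in Python. (The capped_payout function and the round numbers are made up for illustration; they are not OpenAI’s actual terms.)

        # Toy model of a "capped profit" split, with assumed round numbers.
        # Illustrative only -- not OpenAI's actual terms.
        def capped_payout(investment, total_profit, cap_multiple=100):
            # Investors keep profits up to cap_multiple x their investment;
            # anything above the cap flows back to the nonprofit parent.
            cap = investment * cap_multiple
            to_investors = min(total_profit, cap)
            to_nonprofit = max(total_profit - cap, 0.0)
            return to_investors, to_nonprofit

        # A hypothetical $1B investment capped at 100x, against $250B in profit:
        investors, nonprofit = capped_payout(1e9, 250e9)
        print(f"investors get ${investors / 1e9:.0f}B, nonprofit gets ${nonprofit / 1e9:.0f}B")
        # -> investors get $100B, nonprofit gets $150B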

    MS is combining the OpenAI tech with their in-house tech (and META’S LLAMA tech; they’re not relying solely on OpenAI) to put generative AI into everything they do. So OpenAI is important to MS.

    Second, a few weeks ago, one of the EA board members published a paper that implied OpenAI cut corners to get ChatGPT out and that the reason ANTHROPIC’s CLAUDE chatbot was six months behind was that they had delayed to bake in more security. Altman confronted her for badmouthing the very company she was supposed to oversee. The back and forth reached the point that Altman tried to get the rest of the board to force her out the way they had forced out the two non-EA board members. He failed.

    Last week, OpenAI announced they had developed a framework to allow non-programmers to craft custom GPT models off their own data, something no other generative model supports (particularly not ANTHROPIC). This made the EA crowd testy. Out of nowhere, last Friday, they fired him from the board and as CEO, claiming that he was less than consistently candid. No explanation, no proof. No warning to the investors. They also kicked the OpenAI president, an Altman supporter, off the board.

    Investors were furious.

    Saturday, the investors and staff asked for explanations. None given. Execs and employees started quitting. Under pressure the board agreed to meet with Altman to discuss how he might come back. No agreement.
    Sunday morning, MS announced they were hiring Altman as CEO of a new MS AI research lab, along with the OpenAI expatriate execs and staff who chose to follow him. The board received a letter signed by 717 of the 750 employees promising to resign and go to Microsoft with Altman unless the board reinstated Altman and resigned.

    (Microsoft was effectively buying OpenAI for zero with no regulatory interference possible.)

    The board dug in.
    They were willing to let the company collapse for their “principles” even if it meant handing the keys to their tech to Microsoft for free.
    The non-MS investors threatened to sue the board members personally.
    No dice.
    Monday, MS said they wanted to work with Altman and his people; whether at OpenAI or at Microsoft made no difference.
    Tuesday…something happened…
    Wednesday, Altman and the Pres were back, and a new “starter” board is in to pick 4 more members (9 total). Doubtful any will be EA. Also, a promise of governance changes, presumably to do away with the schizoid “nonprofit/capped profit” structure.

    Now, as of Tuesday, two things came out, one verified: the EA board offered to merge OpenAI with Anthropic, giving away their tech and skeletal remains to a competitor. For “some” reason the Anthropic EA crew decided they wanted nothing to do with the mess and ramifications. Imagine that.

    Unverified in the tech media, just online: the EA board was contacted Monday by the Manhattan DA who sent Bankman-Fried to jail, demanding proof of whatever Altman had done to merit getting fired. Presumably to open a case against him…or the board members. They rushed to consult with their lawyers, replied they had nothing, and…well, they’re gone.

    If anybody needs further proof of what EA is about (clueless virtue signaling akin to Google’s infamous “don’t be evil” promise), no more evidence is necessary than last week’s drama.

    For more details (this is just a summary) THE ATLANTIC and the NYT have detailed exposés, naming names, with the full story. Unlike Enron, this is unlikely to end up in a movie because nobody would buy protagonists as stupid as the OpenAI board.

    Oh, one thing: Microsoft stock jumped and stayed high through and after the whole mess.
    And Nadella came out looking like a strong, decisive CEO who crafted a win-win for his company.

    https://www.msn.com/en-us/money/companies/msft-ceo-satya-nadella-s-response-to-the-openai-board-debacle-is-a-masterclass-on-taking-fast-decisive-action/ar-AA1ki3JP

    They pay him big bucks but he earned a year’s worth in five days.

      • Yes, you dodged a bullet there.
        I spent a month one weekend there and couldn’t get out fast enough.
        Total waste of good geography, that state.
