- A former OpenAI employee says the company is going down the same path as the Titanic with its safety decisions.
- William Saunders warned of the kind of arrogance that surrounded the Titanic, which was considered “unsinkable.”
- Saunders, who was at OpenAI for three years, was critical of the company’s governance.
A former safety employee at OpenAI said his former employer is following in the footsteps of the White Star Line, the company that built the Titanic.
“I really didn’t want to end up working for the Titanic of AI, and that’s why I resigned,” said William Saunders, who worked for three years as a member of the technical staff on OpenAI’s superalignment team.
He spoke on an episode of tech YouTuber Alex Kantrowitz’s podcast, which was released on July 3.
“In my three years at OpenAI, I’ve asked myself a few times: Is OpenAI’s path more like the Apollo program or more like the Titanic?” he said.
The software engineer’s concerns largely stem from OpenAI’s plans to reach artificial general intelligence — the point at which AI can teach itself — while also launching paid products.
“They’re on track to change the world, but when they release something, their priorities are more like those of a product company. And I think that’s what’s most troubling,” Saunders said.
Apollo vs. Titanic
As Saunders spent more time at OpenAI, he felt leaders were making decisions that were more akin to “building the Titanic, prioritizing getting newer, shinier products out the door.”
He prefers an approach like that of the Apollo space program, which he describes as an example of an ambitious project that “focused on careful prediction and risk assessment” while pushing scientific boundaries.
“Even when a major problem occurs, like Apollo 13, they have enough redundancy, and are able to adapt to the situation to bring everyone back safely,” he said.
The Titanic, on the other hand, was built by the White Star Line as it raced its rivals to build ever-larger ocean liners, Saunders said.
Saunders worries that, like the Titanic’s builders, OpenAI may be relying too heavily on its current safety measures and research.
“A lot of work went into making the ship safe and building watertight compartments so they could say it was unsinkable,” he said. “But at the same time, there weren’t enough lifeboats for everyone. So when the disaster happened, a lot of people died.”
To be sure, the Apollo missions were conducted against the backdrop of the Cold War space race with the Soviet Union. They also involved serious casualties, including three NASA astronauts who died in an electrical fire during a ground test in 1967.
Explaining his analogy further in an email to Business Insider, Saunders wrote: “Yes, the Apollo program had its tragedies. It is impossible to develop AGI or any new technology without risk. What I would like to see is companies taking all reasonable steps to prevent these risks.”
OpenAI needs more ‘lifeboats,’ says Saunders
Saunders told BI that a “major disaster” for AI could take the form of a model capable of launching large-scale cyberattacks, persuading people en masse in a coordinated campaign, or helping to build biological weapons.
In the short term, OpenAI should invest in additional “lifeboats,” such as delaying the release of new language models so teams can research their potential harms, he said in his email.
While on the superalignment team, Saunders led a group of four staffers dedicated to understanding how AI language models behave — something he says humans have little knowledge of.
“If in the future we build AI systems that are as intelligent or more intelligent than most humans, we will need techniques to be able to tell whether these systems are hiding their abilities or motivations,” he wrote in an email.
In his interview with Kantrowitz, Saunders added that company staff often discussed theories about how AI could become a “very transformative” force in just a few years.
“I think when companies talk about this, they have an obligation to work hard to prepare for it,” he said.
But he is disappointed with OpenAI’s actions so far.
In his email to BI, he said: “While there are employees at OpenAI doing good work to understand and prevent risks, I do not see adequate prioritization of this work.”
Saunders left OpenAI in February. The company then disbanded its superalignment team in May, days after announcing GPT-4o, its most advanced publicly available AI product.
OpenAI did not immediately respond to a request for comment sent by Business Insider outside regular business hours.
Tech companies like OpenAI, Apple, Google, and Meta have been involved in an AI arms race, sparking a frenzy of investment in what is widely predicted to be the next major industry disruptor akin to the internet.
The rapid pace of development has prompted some employees and experts to warn that better corporate governance is needed to avoid future disasters.
In early June, a group of former and current employees at Google DeepMind and OpenAI — including Saunders — published an open letter warning that current industry oversight standards are insufficient to protect against a catastrophe for humanity.
Meanwhile, OpenAI co-founder and former chief scientist Ilya Sutskever, who co-led the company’s superalignment division, stepped down in May.
He went on to found another startup, Safe Superintelligence Inc., which he said would focus on AI research while ensuring “safety always comes first.”