Early last year, a hacker gained access to the internal messaging systems of OpenAI, the creator of ChatGPT, and stole details about the design of the company’s AI technologies.
The hacker gleaned details from an online forum where employees discussed OpenAI’s latest technologies, according to two people familiar with the incident, but was unable to access the systems where the company houses and builds its artificial intelligence.
OpenAI executives disclosed the incident to employees at a town hall meeting at the company’s San Francisco offices in April 2023 and informed its board, according to the two people, who discussed sensitive company information on condition of anonymity.
But executives decided not to share the information publicly because no customer or partner information was stolen, the two people said. Executives did not view the incident as a national security threat because they believed the hacker was an individual with no known ties to a foreign government. The company did not notify the FBI or any other law enforcement agency.
For some OpenAI employees, the news raised concerns that foreign adversaries like China could steal AI technology that, while now primarily intended for work and research, could eventually endanger U.S. national security. It also raised questions about how seriously OpenAI takes security and exposed fractures within the company over the risks of artificial intelligence.
After the breach, Leopold Aschenbrenner, a technical program manager at OpenAI responsible for ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI’s board, saying the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.
Mr. Aschenbrenner said OpenAI fired him this spring for leaking other information outside the company and has argued that his firing was politically motivated. He alluded to the breach in a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security was not robust enough to protect against the theft of key secrets if foreign actors infiltrated the company.
“We understand the concerns Leopold raised while he was at OpenAI, and they did not lead to his departure,” OpenAI spokeswoman Liz Bourgeois said. Referring to the company’s efforts to create artificial general intelligence, a machine capable of doing everything the human brain can do, she added: “While we share his commitment to building safe general intelligence, we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, including this incident, which we addressed and shared with our board before he joined the company.”
Fears that a hack of a U.S. tech company might have ties to China aren’t unreasonable. Last month, Microsoft President Brad Smith testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a massive attack on federal government networks.
However, under federal and California law, OpenAI cannot bar people from working at the company based on their nationality, and policy researchers have said that excluding foreign talent from U.S. projects could significantly hamper AI progress in the United States.
“We need the brightest minds to work on this technology,” Matt Knight, OpenAI’s chief security officer, told The New York Times in an interview. “There are some risks involved, and we need to figure out what those are.”
(The Times has sued OpenAI and its partner, Microsoft, alleging copyright infringement over news content related to AI systems.)
OpenAI isn’t the only company building increasingly powerful systems using evolving AI technology. Some, including Meta, which owns Facebook and Instagram, freely share their designs with the world as open-source software. They argue that the dangers posed by current AI technologies are minimal, and that sharing code allows engineers and industry researchers to identify and fix problems.
Today’s artificial intelligence systems can help spread false information online, including in the form of text, still images and, increasingly, video. They are also starting to eliminate some jobs.
Companies like OpenAI and its competitors Anthropic and Google are adding safeguards to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread misinformation or cause other problems.
But there’s not much evidence that current AI technologies pose a significant risk to national security. Studies by OpenAI, Anthropic, and others over the past year have found that AI is not significantly more dangerous than search engines. Daniela Amodei, Anthropic’s co-founder and president, has said its latest AI technology wouldn’t pose a major risk if its designs were stolen or shared freely with others.
“If someone else owned it, could that be extremely damaging to a large part of society? Our answer is, ‘No, probably not,’” she told the Times last month. “Could it accelerate the process for a bad actor in the future? Maybe. It’s really speculative.”
Yet researchers and tech executives have long feared that AI could one day help create new biological weapons or help break into government computer systems. Some even believe it could destroy humanity.
Several companies, including OpenAI and Anthropic, have already begun locking down their technical operations. OpenAI recently created a safety and security committee to study how to manage the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He was also appointed to OpenAI’s board of directors.
“We started investing in security long before ChatGPT,” Knight said. “We’re looking not only to understand and anticipate risks, but also to build resilience.”
Federal officials and state lawmakers are also pushing for government regulations that would bar companies from releasing certain AI technologies and would impose multimillion-dollar fines if those technologies cause harm. But experts say those dangers won’t materialize for years, if not decades.
Chinese companies are building their own systems that are almost as powerful as leading American systems. By some measures, China has eclipsed the United States as the top producer of AI talent, with the country generating nearly half of the world’s top AI researchers.
“It’s not crazy to think that China will soon be ahead of the United States,” said Clement Delangue, CEO of Hugging Face, a company that hosts many open-source AI projects around the world.
Some researchers and national security officials say that the mathematical algorithms at the heart of today’s AI systems, while not dangerous now, could become dangerous, and they are calling for tighter controls on AI labs.
“Even though the worst-case scenarios are relatively unlikely, if they have a high impact, it’s our responsibility to take them seriously,” Susan Rice, a former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, said at an event in Silicon Valley last month. “I don’t think it’s science fiction, as many like to claim.”