A hacker gained access to artificial intelligence developer OpenAI’s internal messaging systems and “stole details” of its technology, it has emerged.
The data breach occurred in early 2023, but the company chose not to make it public or notify authorities because it did not consider the incident a threat to national security.
Sources close to the matter told The New York Times that the hacker retrieved details about AI technologies from an online forum where employees discussed OpenAI’s latest systems.
However, the hacker did not penetrate the systems where the company houses and builds its artificial intelligence, the sources said.
OpenAI executives disclosed the incident to employees at a meeting at the company’s San Francisco offices in April 2023. The board of directors was also informed.
However, sources told the newspaper that executives decided not to share the news publicly because no customer or partner information was stolen.
The incident was not considered a national security threat because OpenAI officials believed the hacker was an individual with no known ties to a foreign government. As a result, OpenAI executives reportedly did not notify the FBI or other law enforcement agencies.
But for some employees, the news reportedly sparked fears that foreign adversaries such as China could steal AI technology that might ultimately endanger U.S. national security.
It also raised questions about how seriously OpenAI treated security and exposed fractures within the company over the risks of artificial intelligence.
Following the breach, Leopold Aschenbrenner, OpenAI’s technical program manager responsible for ensuring that future AI technologies do not cause serious harm, sent a memo to the company’s board of directors.
Aschenbrenner argued that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.
He also said that OpenAI’s security was not strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
Aschenbrenner later claimed that OpenAI fired him this spring for leaking other information outside the company, and argued that his firing was politically motivated. He alluded to the leak in a recent podcast, but the details of the incident have not been previously reported.
“We appreciate the concerns Leopold raised while he was at OpenAI, and this did not lead to his separation,” OpenAI spokeswoman Liz Bourgeois told The New York Times.
“While we share his commitment to building a safe AGI, we disagree with many of the assertions he has since made about our work.
“This includes his characterizations of our security, including this incident, which we addressed and shared with our board of directors before he joined the company.”