The bill, authored by state Sen. Scott Wiener, a Democrat from San Francisco, has been condemned by tech industry leaders who say it could alienate engineers looking to develop AI tools in the state and add bureaucratic red tape that could squeeze out less-advanced startups.
Opponents of the bill argue that developers could face prison time if their technology is used to harm people, a claim Wiener strongly denies.
After a California Senate committee approved the bill earlier this month, Alice Friend, Google’s head of AI and emerging technology policy, wrote to the committee chairman arguing that the bill’s provisions are “technically infeasible” and would “punish developers even when they behave responsibly.”
Wiener said the legislation is needed to forestall the most extreme potential risks of AI and build trust in the technology. He said passage of the legislation is urgent given Republicans’ promise to repeal President Biden’s 2023 executive order, which uses the Defense Production Act to require AI companies to share information about safety testing with the government.
“This commitment by Republicans makes it even more important for California to act to advance safe AI innovation,” Wiener said on X last week.
The bill has put Sacramento at the epicenter of the fight over government regulation of AI, and it also highlights the limits of Silicon Valley’s enthusiasm for government oversight, even as key leaders such as OpenAI CEO Sam Altman have publicly urged policymakers to act.
Nicole Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, said that by making previously voluntary initiatives mandatory, Wiener’s bill goes beyond what tech industry leaders are willing to accept.
“This signals the need for big tech companies to be more accountable, but it has not been welcomed by the industry,” Turner Lee said.
Dylan Hoffman, TechNet’s executive director for California and the Southwest, said Friend’s letter, along with earlier letters from Meta and Microsoft, shows the “weight and importance” companies are placing on the issue. “It’s a very unusual step for a company to come out from behind the scenes of an industry trade group and put their name on a letter.”
Spokespeople for Google, OpenAI and Meta declined to comment. “Microsoft has not taken a position on this bill and will continue to support federal law as the primary means of regulating the issues it addresses,” said Robin Hines, senior director of government relations at Microsoft.
Even before Wiener introduced his bill in February, California had established itself as the de facto tech legislature in the U.S. After years of debate in the Legislature, California passed the nation’s most far-reaching digital privacy law in 2018, and the California Department of Motor Vehicles is the key regulator of self-driving cars.
On AI, Biden’s October 2023 executive order was Washington’s most ambitious effort to regulate the fast-growing technology, but Republicans have announced plans to repeal it if Trump wins on Nov. 5, which would leave states to take the lead on stricter AI regulation.
According to TechNet, an industry group whose members include OpenAI and Google, more than 450 AI-related bills are under consideration in state legislatures across the U.S. this year. More than 45 of them are in California, though many have died or are stuck in committee.
But Wiener’s bill is the most visible and controversial of them all. It would require any AI company training models with a certain amount of computing power to test those models for potential “catastrophic” risks, such as aiding in the development of chemical or biological weapons, hacking critical infrastructure or blacking out the power grid. Companies would submit safety reports to a new government agency, the Frontier Model Division, which would have the power to update which AI models are covered by the law, a provision critics say could create even more uncertainty.
The bill would also task the state with building a public cloud computing system, allowing researchers and startups to develop AI without relying on the costly cloud services of big tech companies.
Dan Hendrycks, founder of the nonprofit Center for AI Safety, served as an adviser on the bill. Last year, his group circulated a letter, signed by prominent AI researchers and executives, warning that AI could pose threats to humanity on the scale of nuclear war and pandemics.
Others argue that such risks are overstated and likely years away, if they materialize at all, and skeptics of the bill point out that there is no standard way to test for such risks even if they were imminent.
“Size is the wrong metric,” said Oren Etzioni, an AI researcher and founder of TrueMedia.org, a nonprofit that works to detect AI deepfakes, arguing that smaller models falling outside the bill’s computing threshold could prove more dangerous than the ones it covers.
The focus on “catastrophic” risks has frustrated some AI researchers, who say AI poses more specific harms, such as injecting racist or sexist bias into tech tools or providing new avenues for tech companies to siphon off people’s personal data. Other bills pending in the California Assembly also aim to address such issues.
The bill’s focus on catastrophic risks has led Yann LeCun, Meta’s chief AI scientist, to call Hendrycks the leader of a “doomsday cult.”
“The idea that taking society-wide risks from AI seriously makes you the leader of a ‘doomsday cult’ is absurd,” Hendrycks said.
Hendrycks recently founded a company called Gray Swan, which develops software to assess the safety and security of AI models. On Thursday, the tech news site Pirate Wires published a story alleging that the company poses a conflict of interest for Hendrycks because it could win business helping companies comply with the law if the bill passes.
“Critics accuse me of some elaborate money-making scheme, but in fact I have spent my professional life advocating for AI safety issues,” Hendrycks said. “I disclosed any theoretical conflicts of interest as early as possible. Any profit I make from this small startup is only a small fraction of the economic interests that drive the actions of those who oppose this bill.”
Hendrycks has recently come under fire from Silicon Valley critics, but corporate leaders who oppose the legislation have issued similar warnings about the dangers of powerful AI models. Senior AI executives from Google, Microsoft and OpenAI signed a statement circulated by Hendrycks’s group in May 2023 warning that AI posed a “risk of extinction” to humanity. At a congressional hearing that same month, Altman said his worst fear was that AI technology would “cause significant harm to the world.”
Last year, OpenAI joined with fellow startup Anthropic, Google and other tech companies to form an industry group to develop safety standards for new and powerful AI models. Last week, the Information Technology Industry Council, a trade group whose members include Google and Meta, released a set of best practices for “high-risk AI systems,” including proactive testing.
Yet those same companies oppose the idea of writing such promises into law.
In a June 20 letter organized by the startup incubator Y Combinator, startup founders argued against imposing special oversight on projects that use large amounts of computing power. “Such specific metrics may not fully capture the capabilities and risks associated with future models,” the letter said. “Avoiding overregulation of AI is crucial.”
Startup leaders also worry that the bill would make it harder for companies to develop and release “open source” technology that anyone can use and modify. In a March post on X, Republican vice-presidential candidate JD Vance said open source was key to building politically unbiased alternatives to OpenAI’s and Google’s technology.
Wiener has responded to industry criticism by amending the bill, including adding language specifying that open source developers are not liable for safety issues caused by third-party modifications to their technology. But industry critics say those changes don’t go far enough.
Meanwhile, other bills pending in the California Legislature have received less attention from the tech industry.
Assemblymember Rebecca Bauer-Kahan, a Democrat who represents the eastern Bay Area suburbs, has authored several AI bills pending in the Legislature, including one that would require companies to test AI models for bias. Another of her bills would prohibit developers from using children’s personal information to train AI models without parental consent, potentially challenging the tech industry practice of harvesting training data from websites.
Other California lawmakers have introduced AI bills that would require tech companies to publish summaries explaining the data used to develop their AI models, create tools to detect AI-generated content, and apply digital watermarks to make AI-generated content identifiable, something some companies, including Google, are already trying to do.
“We’d be happy to see the federal government take the lead here,” Bauer-Kahan said, “but when the federal government won’t act and pass legislation like this, we feel Sacramento needs to step in.”