California lawmakers have advanced a bill that would regulate powerful artificial intelligence systems
SACRAMENTO, Calif. — California lawmakers voted Tuesday to advance legislation that would require artificial intelligence companies to test their systems and add safeguards to prevent them from potentially being manipulated to take down the state’s power grid or help make chemical weapons — scenarios that experts say could be possible in the future as the technology evolves at breakneck speed.
The bill, the first of its kind, aims to reduce the risks created by AI. It is fiercely opposed by venture capitalists and technology companies, including Meta, the parent company of Facebook and Instagram, and Google. They say the regulation targets developers and should instead focus on those who use and exploit AI systems for nefarious purposes.
Democratic Sen. Scott Wiener, the bill’s author, said the proposal would provide reasonable safety standards by preventing “catastrophic harm” from extremely powerful AI models that could be created in the future.
The requirements would apply only to systems whose training requires more than $100 million in computing power. As of July, no current AI model had reached that threshold.
Wiener criticized the opposition campaign during a legislative hearing Tuesday, saying it was spreading inaccurate information about his measure. The bill does not create new criminal charges for AI developers whose models are exploited to cause societal harm, provided they have tested their systems and taken steps to mitigate the risks, Wiener said.
“This bill will not send any AI developers to jail,” Wiener said. “I would ask people to stop pretending that.”
Under the bill, only the state attorney general could prosecute violations.
Democratic Gov. Gavin Newsom has touted California as one of the first states to adopt and regulate AI, saying the state could soon deploy generative AI tools to address highway congestion, make roads safer and provide tax advice. At the same time, his administration is considering new rules against AI-related discrimination in hiring practices. He declined to comment on the bill, but warned that overregulation could put the state in a “perilous position.”
A growing coalition of tech companies argues that the requirements would discourage companies from developing large AI systems or from keeping their technology open source.
“The bill will make the AI ecosystem less secure, jeopardize the open source models that startups and small businesses rely on, rely on standards that don’t exist, and introduce regulatory fragmentation,” Rob Sherman, Meta’s vice president and deputy chief privacy officer, wrote in a letter to lawmakers.
Opponents want to wait for more guidance from the federal government. Supporters of the bill said California can’t wait, citing the hard lessons they learned by not acting soon enough to rein in social media companies.
The proposal, backed by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices.
State lawmakers also considered two ambitious measures Tuesday to better protect Californians from the potential dangers of AI. One would combat discrimination related to automation when companies use AI models to review resumes and apartment rental applications. The other would prohibit social media companies from collecting and selling data from people under 18 without their consent or that of their guardians.