California Sen. Scott Wiener (D-San Francisco) is widely known for his relentless push for housing and public safety legislation, and his legislative record has made him one of the tech industry’s favorites.
But his bill, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (better known as SB 1047), has drawn the industry’s ire. The bill would require companies that train “frontier models” costing more than $100 million to conduct safety testing and to be able to shut their models down in the event of a safety incident, and venture capital giants Andreessen Horowitz and Y Combinator have publicly denounced it.
I spoke with Wiener this week about SB 1047 and its critics. Our conversation is below (condensed for length and clarity).
Kelsey Piper: I want to present some of the objections to SB 1047 that I’ve heard and give you a chance to respond to them. One concern is that the bill would prohibit a model from being publicly deployed or made available if it poses an unreasonable risk of serious harm.

What is an unreasonable risk? Who decides what’s reasonable? Much of Silicon Valley is deeply skeptical of regulators and doesn’t trust that this discretion will be exercised rather than abused.
Senator Scott Wiener: To me, SB 1047 is a light-touch bill in a lot of ways. It’s a serious bill, it’s a big bill. I think it’s an impactful bill, but it’s not hardcore. The bill doesn’t require licensing. There are people, including some CEOs, who have argued there should be a licensing requirement. I rejected that.
There are people who think there should be strict liability, which is the rule in most product liability cases. I rejected that, too. [AI companies] don’t need to get permission from an agency to release a [model]. They have to perform the safety testing they say they’re doing or will do, and if that testing reveals significant risks (and we define those risks as catastrophic), then they have to take steps to mitigate those risks. We seek to mitigate the risk, not eliminate it.
There is already a legal standard today: if a developer releases a model and that model is used in a way that harms someone or something, the developer can be sued, most likely under a negligence standard of whether they acted with reasonable care. That existing liability is much broader than anything this bill creates. Under the bill, only the Attorney General can sue, whereas under tort law anyone can sue. Model developers already face far broader potential liability than this.
Yes, and I’ve seen some opposition to this bill that seems to rest on a misunderstanding of tort law, like the claim that “this is like making engine manufacturers liable for car accidents.”
Yes, exactly. If someone crashes their car and something about the engine design caused the crash, the engine manufacturer can be sued. The manufacturer would have to be proven negligent in some way.
I’ve talked to startup founders, VCs, and big tech companies about this, and I’ve never heard a rebuttal to the reality that liability exists today, and that the liability that exists today is profoundly broad.
And you can certainly hear the contradictions. Some of the opponents say, “This is all science fiction, the safety obsessives are part of a cult, none of it is real, the models just aren’t capable enough.” Of course that’s not true. These are powerful models with enormous potential to make the world a better place. I’m very optimistic about AI. I’m not a pessimist in any way. And then they say, “We can’t possibly be liable if these catastrophes happen.”
Another objection to this bill is that open source developers have benefited enormously from Meta putting Llama [Meta’s generously licensed, sometimes called open source, AI model] out there, and they’re understandably afraid that this bill will make Meta less willing to do releases in the future for fear of liability. Of course, if a model really were that risky, no one would want it released, but the concern is that these fears could make companies overly conservative.
Not just with regard to Llama but with open source generally: we take the criticism from the open source community very seriously. We’ve engaged with people in the open source community and made fixes in direct response to them.
The shutdown provision [a provision in the bill that requires model developers to have the capability to enact a full shutdown of a covered model, to be able to “unplug it” if things go south] was consistently high on people’s list of concerns.

We added an amendment to make clear that once a model is no longer in a developer’s possession, the developer is not responsible for being able to shut it down. People who open source a model are not responsible for shutting it down.

And the other thing we did was make a fix around people who fine-tune models. If you make more than minimal changes to a model, significant changes, at some point it effectively becomes a new model and the original developer is no longer liable. There are a few other smaller fixes, but those are the big ones we made in direct response to the open source community.
Another objection I’ve heard is, “Why are you focusing on this instead of all the more pressing problems facing California?”
Whenever I’m working on an issue, I hear people say, “Isn’t there something more important to do?” Well, I work tirelessly on housing. I work tirelessly on mental health and addiction treatment. I work tirelessly on public safety. I have bills on auto theft and people selling stolen property on the street. I’m also working on bills to encourage AI innovation and make sure we do it in a responsible way.
As a policymaker, I have been a strong supporter of tech and of our often-under-attack tech environment. I supported California’s net neutrality law, which promotes an open and free internet.
But I’ve also seen that with technology we can fail to get ahead of very obvious problems. We saw it with data privacy. We finally passed a data privacy law here in California, and for the record, everyone who opposed it said the same thing: that it would kill innovation and that no one would want to work here.
My goal here is to leave plenty of space for innovation while also promoting the responsible training, deployment, and release of these models. The argument that a bill will kill innovation and drive companies out of California is one we hear with almost every bill. But it’s important to understand that this bill doesn’t just apply to people who develop models in California; it applies to anyone doing business in California. So even if you’re in Miami, unless and until you’re going to cut yourself off from California, you have to do this.
I want to talk about one of the interesting things about the debate over this bill, which is that it’s wildly popular everywhere except Silicon Valley. It passed the state Senate 32-1, with bipartisan support. One poll found that 77 percent of Californians support it, and more than half strongly support it.

But the people who hate it all seem to be based in San Francisco. How did this end up being your bill?
In some ways I’m the best author for this bill, as the representative of San Francisco, because I’m surrounded by and immersed in AI. The genesis of the bill was that I started talking with leading AI technologists and startup founders. It was early 2023, and I held a series of salons and dinners with AI folks, where some ideas began to take shape. So in some ways I’m the best author because I have access to incredibly brilliant people in tech. In other ways I’m the worst author, because the people who are unhappy are in San Francisco.
What I struggle with as a reporter is conveying to people who aren’t in San Francisco, who aren’t part of these conversations, that AI is real and that the stakes are really, really big.
It’s very exciting, because when you start to imagine — could we find a cure for cancer? Could there be highly effective treatments for various viruses? Could there be breakthroughs in clean energy that nobody has imagined? There are so many exciting possibilities.
But with every powerful technology comes risk. [This bill] is not about eliminating risk. Life is full of risk. It’s about how we can at least make sure that our eyes are wide open, that we understand the risks, and that if there are ways to mitigate the risks, we take them.
That’s all we’re asking with this bill, and I think the vast majority of people will support that.
This story was originally published in the Future Perfect newsletter. Sign up here!