Thanks for the great post!
Also it’s California, so there’s some chance this happens, seriously please don’t do it, nothing is so bad that you have to resort to a ballot proposition, choose life
Why are you saying this? In what sense "nothing is so bad"?
The reason people with libertarian sensibilities and a general distrust of the government's track record, especially its track record in tech regulation, are making an exception in this case is AI's strong potential for catastrophic and existential risks.
So why shouldn't people who generally dislike the mechanism and track record of California ballot propositions make an exception here as well?
The whole point of all this effort around SB 1047 is that "nothing is so bad" is an incorrect statement.
And especially given that you are correctly saying:
Thus I reiterate the warning: SB 1047 was probably the most well-written, most well-considered and most light touch bill that we were ever going to get. Those who opposed it, and are now embracing the use-case regulatory path as an alternative thinking it will be better for industry and innovation, are going to regret that. If we don’t get back on the compute and frontier model based path, it’s going to get ugly.
There is still time to steer things back in a good direction. In theory, we might even be able to come back with a superior version of the model-based approach, if we all can work together to solve this problem before something far worse fills the void.
But we’ll need to work together, and we’ll need to move fast.
Sure, there is still a bit of time for a normal legislative effort (this time in close coordination with Newsom, otherwise he will just veto it again), but if you really think that, should the normal route fail, the ballot route would still be counterproductive, you need to make a much stronger case for that.
Especially given that the ballot measure would probably pass by a large margin, with flying colors...
failure to enforce copyright endangers that copyright
I believe that's true of trademarks, but not copyright.
I've been following Zvi's coverage of SB 1047 and I don't think I have a good understanding of why large companies like OpenAI and Meta are so against it. I don't understand why they would spend so much capital to even get Nancy Pelosi to oppose it. I can't find a good source describing their opposition. Obviously we can't trust what the companies themselves are saying because, as Zvi has pointed out, they're mostly distorting the facts or making bogus claims.
But beyond the public PR there probably still are actual reasons why they oppose it so strongly - my assumption is that:
- SB 1047 will force large AI companies to steer large model development and "safety certification" in ways that sound good to non-technical readers of media headlines, so model development would pick up a large PR-driven but technically useless component.
- SB 1047 will be a compliance nightmare, where large companies have to add significant friction to model development and "safety" work for not much real-world benefit beyond what they think they would be doing anyway.
Anyone have a source for a good analysis of why large companies are actually so against this?
My expectation is that it's the same reason there was so much outcry about the completely toothless Executive Order. To quote myself:
The EO might be a bit more important than seems at a glance. My sense is that the main thing it does isn't its object-level demands, but the fact that it introduces concepts like "AI models" and "model weights" and "compute thresholds" and "datacenters suitable for training runs" and so on into the framework of the legislation.
That it doesn't do much with these variables is a secondary matter. What's important is that it defines them at all, which should considerably lower the bar for defining further functions on these variables, i. e. new laws and regulations.
I think our circles may be greatly underappreciating this factor, accustomed as we are to thinking in such terms. But to me, at least, it seems a bit unreal to see actual government documents talking about "foundation models" in terms of how many "floating-point operations" are used to "train" them.
Coming up with a new fire safety standard for restaurants and passing it is relatively easy if you already have a lot of legislation talking about "restaurants" — if "a restaurant" is a familiar concept to the politicians, if there's extant infrastructure for tracking the existence of "restaurants" nationwide, etc. It is much harder if your new standard needs to front-load the explanation of what the hell a "restaurant" even is.
By analogy, it's not unlike academic publishing, where any conclusion (even the extremely obvious ones) that isn't yet part of some paper can't be referred to.
Similarly, SB 1047 would've introduced a foundational framework on which other regulations could've later been built. Keeping the-government-as-a-system unable to even comprehend the potential harms AGI Labs could unleash is a goal in itself.
If we cannot do compute governance, and we cannot do model-level governance, then I do not see an alternative solution. I only see bad options, a choice between an EU-style regime and doing essentially nothing.
It feels like this and some other parts of the post imply that the EU AI Act does not apply compute governance and model-level governance, which seems false as far as I can tell (semantics?).
From a summary of the act: "All providers of GPAI models that present a systemic risk* – open or closed – must also conduct model evaluations, adversarial testing, track and report serious incidents and ensure cybersecurity protections."
*"GPAI models present systemic risks when the cumulative amount of compute used for its training is greater than 1025 floating point operations (FLOPs)"
I only see bad options, a choice between an EU-style regime and doing essentially nothing.
What issues do you have with the EU approach? (I assume you mean the EU AI Act.)
Thoughtful/informative post overall, thanks.
It’s over, until such a future time as either we are so back, or it is over for humanity.
Gavin Newsom has vetoed SB 1047.
Newsom’s Message In Full
Quoted text is him, comments are mine.
It is worth pointing out here that mostly the ‘certain safeguards and policies’ was ‘have a policy at all, tell us what it is and then follow it.’ But there were some specific things that were required, so Newsom is indeed technically correct here.
Cue the laugh track. No, that’s not why California leads, but sure, whatever.
He signed a bunch of other AI bills. It is quite the rhetorical move to characterize those bills as ‘thoughtful’ in the context of SB 1047, which (like or hate its consequences) was by far the most thoughtful bill, was centrally a transparency bill, and was clearly an accountability bill. What you call ‘fair’ is up to you I guess.
Yes. This is indeed the key question. Do you target the future more capable frontier models that enable catastrophic and existential harm and require they be developed safely? Or do you let such systems be developed unsafely, and then put restrictions on what you tell people you can do with such systems, with no way to enforce that on users let alone on the systems themselves? I’ve explained over and over why it must be the first one, and focusing on the second is the path of madness that is bad for everyone. Yet here we are.
Bold mine. Read that again. The problem, according to Newsom, with SB 1047 was that it did not put enough restrictions on smaller AI models, and this could lead to a ‘false sense of security.’ He claims he is vetoing the bill because it does not go far enough.
Do you believe any of that? I don’t. Would a lower threshold (or no threshold!) on size have made this bill more likely to be signed? Of course not. A more comprehensive bill would have been more likely to be vetoed, not less likely.
Centrally the bill was vetoed, not because it was insufficiently comprehensive, but rather because of one or more of the following:
You can say Newsom genuinely thought the bill would do harm, whether or not you think this was the result of lies told by various sources. Sure. It’s possible.
You can say Newsom was the subject of heavy lobbying, which he was, and did a political calculation and did what he thought was best for Gavin Newsom. Sure.
I do not buy for a second that he thought the bill was ‘insufficiently comprehensive.’
If it somehow turns out I am wrong about that, I am going to be rather shocked, as for rather different reasons will be everyone who is celebrating that this bill went down. It would represent far more fundamental confusions than I attribute to Newsom.
No, the bill does not restrict ‘basic functions.’ It does not restrict functions at all. The bill only restricts whether the model is safe to release in general. Once that happens, you’re in the clear. Whereas if you regulate by function, then yes, you will put a regulatory burden on even the most basic functions, that’s how that works.
More importantly, restricting on the basis of ‘function’ does not work. That is not how the threat model works. If you have a sufficiently generally capable model it can be rapidly put to any given ‘function.’ If it is made available to the public, it will be used for whatever it can be used for, and you have very little control over that even under ideal conditions. If you open the weights, you have zero control; telling rival nations, hackers, terrorists or other non-state actors they aren’t allowed to do something doesn’t matter. You lack the ability to enforce such restrictions against future models smarter than ourselves, should they arise and become autonomous, as many would inevitably make them. I have been over this many times.
Bold mine. The main thing SB 1047 would have done was to say ‘if you spend $100 million on your model, you have to create, publish and abide by some chosen set of safety protocols.’ So it’s hard to reconcile this statement with thinking SB 1047 is bad.
Newsom clearly wants California to act without the federal government. He wants to act to create ‘proactive guardrails,’ rather than waiting to respond to harm.
The only problem is that he’s buying into an approach that fundamentally won’t work.
This also helps explain his signing other (far less impactful) AI bills.
Again, he’s clearly going to be signing a bunch of bills, one way or another. It’s not going to be this one, so it’s going to be something else. Be careful what you wish for.
Newsom’s Explanation Does Not Make Sense
His central point is not Obvious Nonsense. His central point at least gets to be Wrong: He is saying AI regulation should be based on not putting restrictions on frontier model development, and instead it should focus on restricting particular uses.
But again, if you care about catastrophic risks: That. Would. Not. Work.
He doesn’t understand, decided to act as if he doesn’t understand, or both.
The Obvious Nonsense part is the idea that we shouldn’t require those training big models to publish their safety and security protocols – the primary thing SB 1047 does – because this doesn’t impact small models and thus is insufficiently effective and might give a ‘false sense of security.’
This is the same person who warned he was primarily worried about the ‘chilling effect’ of SB 1047 on the little guy.
Now he says that the restrictions don’t apply to the little guy, so he can’t sign the bill?
He wants to restrict uses, but doesn’t want to find out what models are capable of?
What the hell?
‘Falls short.’ ‘Isn’t comprehensive.’ The bill wasn’t strong enough, says Newsom, so he decided nothing was a better option, weeks after warning about that ‘chilling effect.’
If I took his words to have meaning, I would then notice I was confused.
Sounds like we should have had our safety requirements also apply to the less expensive models made by ‘little tech’ then, especially since those people were lying to try and stop the bill anyway? Our mistake.
Well, his, actually. Or it would be, if he cared about not dying. So could go either way.
The idea that he’d have signed the bill if it was more ‘comprehensive’?
That’s rather Obvious Nonsense, more commonly known as bullshit.
When powerful people align with other power or do something for reasons that would not sound great if said out loud, and tell you no, they don’t give you a real explanation.
Instead, they make up a reason for it that they hadn’t mentioned to you earlier.
Often they’ll do what Newsom does here, by taking a concern they previously harped on and turning it on its head. Here, power expresses concern you’ll hurt the little guy, so you exempt the little guy? Power says the response is not sufficiently comprehensive. Veto. Everything else that took place, all the things you thought mattered? They’re suddenly irrelevant, except insofar as they didn’t offer a superior excuse.
To answer the question about whether there is one person who is willing to say they think Newsom’s words are genuine, the answer is yes. That person was Dean Ball, for at least some of the words. I did not see any others.
Newsom’s Proposed Path of Use Regulation is Terrible for Everyone
So what does Newsom say his plan is now?
It sure looks like he wants use-based regulation. Oh no.
Jam tomorrow, I suppose.
Given he’s centrally consulting Dr. Fei-Fei Li, together with his aim of targeting particular uses, it sounds like a16z did get to Newsom in the end, he has been subject to regulatory capture (what a nice term for it!), and we have several indications here that he does indeed intend to pursue the worst possible path of targeting use cases of AI rather than the models themselves.
For a relatively smart version of the argument that you should target use cases, here is Timothy Lee, who does indeed realize the risk the use-based regulations will be far more onerous, although he neglects to consider the reasons it flat out won’t work, citing ‘safety is not a model property,’ the logic of which I’ll address later but to which the response is essentially ‘not with that attitude, and you’re not going to find it anywhere else in any way you’d find remotely acceptable.’
In other ways, such calls make no sense. If you’re proposing, as he suggests here, ‘require safety in your model if you restrict who can use it, but if you let anyone use and modify the model in any way and use it for anything with no ability to undo or restrict that, then we should allow that, nothing unsafe about that’ then any reasonable person must recognize that proposal as Looney Tunes. I mean, what?
A lot of the usual suspects are saying similar things, renewing their calls for going down exactly that path of targeting wrong mundane use, likely motivated in large part by ‘that means if we open the weights then nothing that happens as a result of it would be our responsibility or our fault’ and in large part by ‘that’ll show them.’
Do they have any idea what they are setting up to happen? Where that would inevitably go, and where it can’t go, even on their own terms? Tim has a glimmer, as he mentions at the end of his post. Dean Ball knows and has now chosen to warn about it.
Alas, most have no idea.
This is shaping up to be one of the biggest regulatory own goals in history.
Such folks often think they are being clever or are ‘winning,’ because if we focus on ‘scientifically proven’ harms then we won’t have to worry about existential risk concerns. The safety people will be big mad. That means things are good. No, seriously, you see claims like ‘well the people who advocate for safety are sad SB 1047 failed, which means we should be happy.’ Full zero sum thinking.
Let’s pause for a second to notice that this is insane. The goal is to be at the production possibilities frontier between (innovation, or utility, or progress) and preventing catastrophic and existential harms.
Yes, we can disagree about how best to do that, whether a given policy will be net good, or how much we should value one against the other. That’s fine.
But if you say ‘those who care about safety think today made us less safe, And That’s Wonderful,’ then it seems like you are kind of either an insane person or a nihilistic vindictive f***, perhaps both?
That’s like saying that you know the White Sox must have a great offense, because look at their horrible pitching. And saying that if you want the White Sox to score more runs next year, you should want to ensure they have even worse pitching.
(And that’s the charitable interpretation, where the actual motivation isn’t largely rage, hatred, vindictiveness and spite, or an active desire for AIs to control the future.)
Instead, I would implore you, to notice that Newsom made it very clear that the regulations are coming, and to actually ask: If use-based regulations to reduce various mundane and catastrophic risks do come, and are our central strategy – even if you think those risks are fake or not worth caring about – what will that look like? Are you going to be happy about it?
If the ‘little guy’ or fans of innovation think this would go well for them, I would respond: You have not met real world use-based risk-reduction regulations, or you have forgotten what that looks like.
It looks like the EU AI Act. It looks like the EU. Does that help make this clear?
There would be a long and ever growing list of particular things you are not allowed to permit an AI to do, and that you would be required to ensure your AI did do. It will be your responsibility, as a ‘deployer’ of an AI model, to ensure that these things do and do not happen accordingly, whether or not they make any sense in a given context.
This laundry list will make increasingly little sense. It will be ever expanding. It will be ill defined. It will focus on mundane harms, including many things that California is Deeply Concerned about that you don’t care about even a little, but that California thinks are dangers of the highest order. The demands will often not be for ‘reasonable care’ and instead be absolute, and they will often be vague, with lots of room to expand them over time.
You think open models are going to get a free pass from all this, when anyone releasing one is very clearly ‘making it available’ for use in California? What do you think will happen once people see the open models being used to blatantly violate all these use restrictions being laid down?
All the things such people were all warning about with SB 1047, both real and hallucinated, in all directions? Yeah, basically all of that, and more.
Kudos again to Dean Ball in particular for understanding the danger here. He is even more apprehensive than I am about this path, and writes this excellent section explaining what this kind of regime would look like.
I certainly agree this is no way to run a civilized economy. I also know that one of the few big civilized economies, the EU, is running in exactly this way across the board. If all reasonable people knew this, that would rule out a large percentage of SB 1047 opponents, as well as Gavin Newsom, as potentially sensible people.
It would be one thing if that approach greatly reduced existential risk at great economic cost, and there was no third option available. Then we’d have to talk price and make a tough decision.
Newsom’s Proposed Path of Use Regulation Doesn’t Prevent X-Risk
Instead, it does more damage, without the benefits. What does such an approach do about actual existential risks from AI, by any method other than ‘be so damaging that the entire AI industry is crippled’?
It does essentially nothing, plausibly making us actively less safe. The thing that is dangerous is not any particular ‘use’ of the models. It is creating or causing there to exist AI entities that are highly capable and especially ones that are smarter than ourselves. This approach lets that happen without any supervision or precautions, indeed encourages it.
That is not going to cut it. Once the models exist, they are going to get deployed in the ways that are harmful, and do the harmful things, with or without humans intending for that to happen (and many humans do want it to happen). You can’t take the model back once that happens. You can’t take the damage back once that happens. You can’t un-exfiltrate the model, or get it back under control, or get the future back under control. Danger and ability to cause catastrophic events are absolutely model properties. The only ones who can possibly hope to prevent this from happening without massive intrusions into everything are the developers of the model.
If you let people create, and especially if you allow them to open the weights of, AI models that would be catastrophically dangerous if deployed in the wrong ways, while telling people ‘if you use it the wrong way we will punish you’?
Are you telling me that has a snowball’s chance in hell of having them not deploy the AI in the wrong ways? Especially once the wrong way is as simple as ‘give it a maximizing instruction and access to the internet?’ When they’re as or more persuasive than we are? When on the order of 10% of software engineers would welcome a loss of human control over the future?
Whereas everyone who wants to do anything actually useful with AI, the same as people who want to do most any other useful thing in California, would now face an increasing set of regulatory restrictions and requirements that cripple the ability to collect mundane utility.
All you are doing is disrupting using AI to actually accomplish things. You’re asking for the EU AI Act. And then that, presumably, by default and in some form, is what you are going to get if we go down this path. Notice the other bills Newsom signed (see section below) and how they start to impose various requirements on anyone who wants to use AI, or even wants to do various tech things.
It won’t come back ‘stronger,’ in this scenario. It will come back ‘wrong.’
Also note that one of SB 1047’s features was that only relative capabilities were targeted (even more so before the limited duty exception was forcibly removed by opponents), whereas a regime where ‘small models are dangerous too’ is central to the thinking will by default hold models, and any attempt to ‘deploy’ them, to absolute standards of capability, rather than asking whether they cause or materially enable something that couldn’t otherwise have been done, or asking whether your actions were reasonable.
Note how other bills didn’t say ‘take reasonable care to do or prevent X,’ they mostly said ‘do or prevent X’ full stop and often imposed ludicrous standards.
Newsom Says He Wants to Regulate Small Entrepreneurs and Academia
Nancy Pelosi is very much not on the same page as Gavin Newsom.
Pelosi did not stop to actually parse Newsom’s statement. But that cannot surprise us, since she also did not stop to parse SB 1047, a bill that would not have impacted ‘small entrepreneurs’ or ‘academia’ in any way at all.
Whereas Newsom specifically called out the need to check ‘deployers’ of even small models for wrong use cases, an existential threat to both groups.
If that’s the way things go, as is reasonably likely, then you are going to wish, so badly, that you had instead helped steer us towards a compute-based and model-based regime that outright didn’t apply to you, that was actually well thought out and debated and refined in detail, back when you had the chance and the politics made that possible.
What If Something Goes Really Wrong?
Then there’s the question of what happens if a catastrophic event did occur. In which case, things plausibly spin out of control rather quickly. Draconian restrictions could result. It is very much in the AI industry’s interest for such events to not happen.
That’s all independent of the central issue of actual existential risks, which this veto makes more likely.
I am saying, even if you don’t think the existential risks are that big a deal, that you should be very worried about Newsom’s statement, and where all of this is heading.
So if you are pushing the rhetoric of use-based regulation, I urge you to reconsider. And to try and steer things towards a regulatory focus on the model layer and compute thresholds, or the development of other new ideas that can serve similar purposes, ‘while you have the chance.’
Could Newsom Come Around?
None of this means Newsom couldn’t come around in the future.
There are scenarios where this could work out well next year. Here are some of them:
Newsom clearly wants California to ‘lead’ on AI regulation, and pass various proactive bills in advance of anything going wrong. He is going to back and sign some bills, and those bills will be more impactful than the ones he signed this session. The question is, will they be good bills, sir?
Here is Scott Wiener’s statement on the veto. He’s not going anywhere.
Here’s Dan Hendrycks.
Timing is Everything
I have seen exactly one person make the claim that Newsom isn’t bullshitting, and that Newsom’s words have meaning.
That person is Dean Ball, who also pointed out the detail that Newsom vetoed at a time designed to cause maximum distraction.
Why would Newsom want to make his veto as quiet as possible, especially if he wanted to dispel any possible ‘chilling effect’?
Because the bill was very popular, so he didn’t want people to know he vetoed it.
SB 1047 Was Popular
There were various people who chimed in to support SB 1047 in the last few days. I did not stop to note them. Nor did I note the latest disingenuous arguments trotted out by bill opponents. It’s moot now and it brings me great joy to now ignore all that.
We should note, however: For the record, yes, the bill was very popular. AIPI collaborated with Dean Ball to craft a more clearly neutral question wording, including randomizing argument order.
Most striking is that these results closely mirror the previous AIPI poll results, which we now know were not substantially distorted by question wording. They previously found +39 support, 59%-20%. The new result is +37 support, 62%-25%, well within the margin of error versus the old results from AIPI.
The objection that this is a low-salience issue where voters haven’t thought about it and don’t much care is still highly valid. And you could reasonably claim, as Ball says explicitly, that voter preferences shouldn’t determine whether the bill is good or not.
We should look to do more of this adversarial collaborative polling in the future. We should also remember it when estimating the ‘house effect’ and ‘pollster rating’ of AIPI on such issues, and when we inevitably once again see claims that their wordings are horribly biased even when they seem clearly reasonable.
Also, this from Anthropic’s Jack Clark seems worth noting:
Jack Clark is engaging in diplomacy and acting like Newsom was doing something principled and means what he says in good faith. That is indeed the right move for Jack Clark in this spot.
I’m not Jack Clark.
What Did the Market Have to Say?
Gavin Newsom did not do us the favor of vetoing during market hours. So we cannot point to the exact moment he vetoed and measure the impact on various stocks, such as Nvidia, Google, Meta and Microsoft.
That would have been the best way to test the impact of SB 1047 on the AI industry. If SB 1047 was such a threat, those stocks should have gone up on news of the veto. If they did not go up, then the veto was not impactful.
There is the claim that the veto was obvious given Newsom’s previous comments, and thus priced in.
There are two obvious responses.
Then, when the market opened on Monday the 30th, after the veto, again there was no major price movement. This is complicated by potential impact from Spruce Pine, and potential damage to our supply chains for quartz for semiconductors there, but it seems safe to say that nothing major happened here.
The combined market reaction, in particular the performance of Nvidia, is incompatible with SB 1047 having a substantial impact on the general ecosystem. You can in theory claim that Google and Microsoft benefit from a bill that exclusively puts restrictions on a handful of big companies. And you can claim Meta’s investors would actually be happy to have Zuckerberg think better of what they think is his open model folly. But any big drop in AI innovation and progress would hurt Nvidia.
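For those who want to run this eyeball-level market test themselves, here is a minimal sketch. It assumes the third-party yfinance package for prices and uses the S&P 500 as a crude benchmark rather than a proper event-study factor model; treat it as a sanity check, not a rigorous estimate.

```python
# Minimal sketch of the market test: compare NVDA's move on the first trading
# day after the veto (Monday 2024-09-30) against a broad market benchmark.
# Assumes the yfinance package is installed; ^GSPC (S&P 500) is a crude proxy.
import yfinance as yf

def close_to_close_return(ticker: str) -> float:
    # Friday 2024-09-27 close through Monday 2024-09-30 close.
    closes = yf.Ticker(ticker).history(start="2024-09-27", end="2024-10-01")["Close"]
    return float(closes.iloc[-1] / closes.iloc[0] - 1.0)

nvda = close_to_close_return("NVDA")
spx = close_to_close_return("^GSPC")

# A veto that mattered a lot to the AI ecosystem should show up as a large
# NVDA move relative to the market; a small gap is consistent with
# 'nothing major happened here'.
print(f"NVDA: {nvda:+.2%} | S&P 500: {spx:+.2%} | excess: {nvda - spx:+.2%}")
```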
If you think that this was not the right market test, what else would be a good test instead? What market provides a better indication?
What Newsom Did Sign
The one that most caught my eye was his previous decision to sign AB 2013, requiring training data transparency. Starting on January 1, 2026, before making a new AI system or modification of an existing AI system publicly available for Californians to use, the developer or service shall post documentation regarding the data used to train the system. The bill is short, so here’s the part detailing what you have to post:
This is not the most valuable transparency we could get. In particular, you get the information on system release, not on system training, so once it is posted the damage will typically be largely done from an existential risk perspective.
However this is potentially a huge problem.
In particular: You have to post ‘the sources or owners of the data sets’ and whether you had permission from the owners to use those data sets.
Right now, the AI companies use data sources they don’t have the rights to, and count on the ambiguity involved to protect them. If they have to admit (for example) ‘I scraped all of YouTube and I didn’t have permission’ then that makes it a lot easier to cause trouble in response. It also makes it a lot harder, in several senses, to justify not making such trouble, as failure to enforce copyright endangers that copyright, which is (AIUI, IANAL, etc) why owners often feel compelled to sue when violations are a little too obvious and prominent, even if they are fine with a particular use.
The rest of it seems mostly harmless, for example I presume everyone is going to answer #2 with something only slightly less of a middle finger than ‘to help the system more accurately predict the next token’ and #9 with ‘Yes we cleaned the data, so that bad data wouldn’t corrupt the system.’
What is a ‘substantial modification’ of a system? If you fine-tune a system, does that count? My assumption would mostly be yes, and you mostly just mumble ‘synthetic data’ as per #12?
Everyone’s favorite regulatory question is, ‘what about open source’? The bill does not mention open source or open models at all, instead laying down rules everyone must follow if they want to make a model available in California. Putting something on the open internet for download makes it available in California. So any open model will need to be able to track and publish all this information, and anyone who modifies the system will have to do so as well, although they will have the original model’s published information to use as a baseline.
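To make concrete what developers, including anyone releasing an open model, would likely end up publishing, here is a hypothetical sketch of a per-dataset disclosure record. The field names are illustrative stand-ins, not the statutory list from AB 2013.

```python
# Hypothetical sketch of the kind of per-dataset disclosure an AB 2013-style
# transparency rule would have developers publish before release.
# Field names are illustrative, not the statutory list.
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetDisclosure:
    source_or_owner: str          # who owns the data / where it came from
    purpose: str                  # why it was included in training
    permission_obtained: bool     # did the developer have the rights?
    contains_personal_info: bool
    cleaning_applied: str         # how the data was cleaned or filtered
    synthetic: bool               # whether the data was synthetically generated

disclosures = [
    DatasetDisclosure(
        source_or_owner="Example video platform (hypothetical)",
        purpose="Help the system more accurately predict the next token",
        permission_obtained=False,   # the awkward admission discussed above
        contains_personal_info=True,
        cleaning_applied="Deduplication and quality filtering",
        synthetic=False,
    ),
]

# Post this documentation publicly before making the system available in California.
print(json.dumps([asdict(d) for d in disclosures], indent=2))
```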
What else we got? We get a few bills that regularize definitions, I suppose. Sure.
Otherwise, mostly a grab bag of ‘tell us this is AI’ and various concerns about deepfakes and replicas.
I covered that one above.
Paths Forward
Wait till next year, as they say. This is far from over.
This raises the importance of maintaining the Biden Executive Order on AI. This at least gives us a minimal level of transparency into what is going on. If it were indeed repealed, as Trump has promised to do on day one, we would be relying for even a minimum of transparency only on voluntary commitments from top AI labs – commitments that Meta and other bad actors are unlikely to make and honor.
The ‘good’ news is that Gavin Newsom is clearly down for regulating AI.
The bad news is that he wants to do it in the wrong way, by imposing various requirements on those who deploy and use AI. That doesn’t protect us against the threats that matter most. Instead, it can only protect us against the mostly mundane harms that we can address over time as the situation changes.
And the cost of such an approach, in terms of innovation and mundane utility, risks being extremely high – exactly the ‘little guys’ and academics who were entirely exempt from SB 1047 would likely now be hit the hardest.
If we cannot do compute governance, and we cannot do model-level governance, then I do not see an alternative solution. I only see bad options, a choice between an EU-style regime and doing essentially nothing.
The stage is now potentially set for the worst possible outcomes.
There will be great temptation for AI notkilleveryoneism advocates to throw their lot in with the AI ethics and mundane harm crowds.
I am 110% with Dean Ball here.
Especially: The safety community that exists today, that is concerned with existential risks, really is mostly techno-optimists. This is a unique opportunity, while everyone on all sides is a techno-optimist, and also rather libertarian, to work together to find solutions that work. That window, where the techno-optimist non-safety community has a dancing partner that can and wants to actually dance with them, is going to close.
From the safety side’s perspective, deciding who to work with going forward, one can make common cause with those who have concerns different from yours – if others want to put up stronger precautions against deepfakes and voice clones and copyright infringement or other mundane AI harms than I think is ideal, or make those requests more central, there has to be room for compromise when doing politics. If you also get what you need. One cannot always insist on a perfect bill.
What we must not do is exactly what so many people lied and said SB 1047 was doing – which is to back a destructive bill exactly because it is destructive. We need to continue to recognize that imposing costs is a cost, doing damage is damaging, destruction is to be avoided. Some costs may be necessary along the way, but the plan cannot be to destroy the village in order to save it.
Even if we successfully work together to have those who truly care about safety insist upon only backing sensible approaches, events may quickly be out of our hands. There are a lot more generic liberals, or generic conservatives, than there are heterodox deeply wonky people who care deeply about us all not dying and the path to accomplishing that.
There is the potential for those other crowds to end up writing such bills entirely without the existential risk mitigations and have that be how all of this works, especially if opposition forces continue to do their best to poison the well about the safety causes that matter and those who advocate to deal with them.
Alternatively, one could dream that now that Newsom’s concerns have been made clear, those concerned about existential risks might decide to come back with a much stronger bill that indeed does target everyone. That is what Newsom explicitly said he wants, maybe you call his bluff, maybe it turns out he isn’t fully bluffing. Maybe he is capable of recognizing a policy that would work, or those who would support such a policy. There are doubtless ways to use the tools and approaches Newsom is calling for to make us safer, but it isn’t going to be pretty, and those who opposed SB 1047 are really, really not going to like them.
Meanwhile, the public, in the USA and in California, really does not like AI, is broadly supportive of regulation, and that is not going to change.
Also it’s California, so there’s some chance this happens, seriously please don’t do it, nothing is so bad that you have to resort to a ballot proposition, choose life.
Thus I reiterate the warning: SB 1047 was probably the most well-written, most well-considered and most light touch bill that we were ever going to get. Those who opposed it, and are now embracing the use-case regulatory path as an alternative thinking it will be better for industry and innovation, are going to regret that. If we don’t get back on the compute and frontier model based path, it’s going to get ugly.
There is still time to steer things back in a good direction. In theory, we might even be able to come back with a superior version of the model-based approach, if we all can work together to solve this problem before something far worse fills the void.
But we’ll need to work together, and we’ll need to move fast.