Not make “any materially false or misleading statement” about catastrophic risk from its frontier models, its management of catastrophic risk, or its compliance with its frontier AI framework.
The risk that any statement might be deemed "materially false or misleading" gives AI companies an incentive to avoid talking about catastrophic risk at all.
There's an exception for statements "made in good faith and reasonable under the circumstances"; I would guess it's pretty hard to prove the contrary in court?
I don't know what works in court, but not saying anything (in interviews, posts on their site, and such) is probably even safer, unless the sky is already on fire or something. It seems like a step in an obviously wrong direction: a friction that gets worse when the things an AI company representative would've liked to say happen to run sufficiently contrary to prevailing discourse. Like with COVID-19.
If that were the only provision of the bill, then yes, that would be a problem, but the bill requires them to publish summaries of (1) their up-to-date framework for assessing and mitigating catastrophic risks and (2) their assessments of catastrophic risks for specific models.
Whether this particular provision is a problem doesn't depend on what else is in the bill, even provisions that would compensate for it.
California Governor Gavin Newsom signed SB 53 on September 29. I think it’s a pretty great step, though I certainly hope future legislation builds on it.
I wrote up my understanding of what the text actually does; I welcome any corrections (and certainly further analysis of what this all means for AI safety)!
The law requires major AI companies to:
Note that violations are punishable by fines of up to $1 million per violation, enforced by the California Attorney General, and that the bill would not apply if Congress preempts state AI legislation.
“Large frontier developers” are defined as developers of models trained with >10^26 FLOP that also had >$500M in revenue in the previous calendar year. They must do the following.