cfoster0

Comments
Other proponents of the bill (longform, 1-3h)

[...]

Charles Foster

Note: I wouldn't personally call myself a proponent, but I'm fine with Michaël putting me in that bucket for the sake of this post.

I’m not sure if you intended the allusion to “the tendentious assumption in the other comment thread that courts are maximally adversarial processes bent on misreading legislation to achieve their perverted ends”, but if it was aimed at the thread I commented on… what? IMO it is fair game to call out as false the claim that

It only counts if the $500m comes from "cyber attacks on critical infrastructure" or "with limited human oversight, intervention, or supervision....results in death, great bodily injury, property damage, or property loss."

even if deepfake harms wouldn’t fall under this condition. Local validity matters.


I agree with you that deepfake harms are unlikely to be direct triggers for the bill’s provisions, for similar reasons as you mentioned.

cfoster0

If you read the definition of critical harms, you’ll see the $500m doesn’t have to come in one of those two forms. It can also be “Other grave harms to public safety and security that are of comparable severity”.

I was trying to write a comment to explain my reaction above, but this comment said everything I would have said, in better words.

cfoster0

OK, in case this wasn't clear: if you are a Californian and think this bill should become law, don't let my comment excuse you from heeding the above call to action. Contacting your representatives will potentially help move the needle.

cfoster0

Unfortunately, due to misinformation and lobbying by big tech companies, SB 1047 is currently stalled in the Assembly Appropriations Committee.

This is extremely misleading. Any bill that would have a non-negligible fiscal impact (the threshold is only $150,000; see https://apro.assembly.ca.gov/welcome-committee-appropriations/appropriations-committee-rules) must be placed in the Appropriations Committee “Suspense File” until after the budget is prepared. That is the status of SB 1047 and many, many other bills. It has nothing to do with misinformation or lobbying; it is part of the standard process. I believe all the bills that make it out of the Suspense File will be announced at the hearing this Thursday.

More on this: https://calmatters.org/newsletter/california-bills-suspense-file/

What's the evidence that this document is real / written by Anthropic?

Axios first reported on the letter, quoting from it but not sharing it directly:

https://www.axios.com/2024/07/25/exclusive-anthropic-weighs-in-on-california-ai-bill

The public link is from the San Francisco Chronicle; this is also visible in the page's metadata, which cites the letter as “Contributed by San Francisco Chronicle (Hearst Newspapers)”.

https://www.sfchronicle.com/tech/article/wiener-defends-ai-bill-tech-industry-criticism-19596494.php

Left the following comment on the blog:

I appreciate that you’re endorsing these changes in response to the two specific cases I raised on X (unlimited model retraining and composition with unsafe covered models). My gut sense is still that ad-hoc patching in this manner just isn’t a robust way to deal with the underlying issue*, and that there are likely still more cases like those two. In my opinion it would be better for the bill to adopt a different framework with respect to hazardous capabilities from post-training modifications (something closer to “Covered model developers have a duty to ensure that the marginal impact of training/releasing their model would not be to make hazardous capabilities significantly easier to acquire.”). The drafters of SB 1047 shouldn’t have to anticipate every possible contingency in advance; that’s just bad design.

* In the same way that, when someone notices that their supposedly-safe utility function for their AI has edge cases that expose unforeseen maxima, introducing ad-hoc patches to deal with those particular noticed edge cases is not a robust strategy to get an AI that is actually safe across the board.

You want to learn an embedding of the opportunities you have in a given state (or for a given state-action), rather than just its potential rewards. Rewards are too sparse of a signal.

More formally, let's say that instead of the Q function, we consider what I would call the Hope function, which, given a state-action pair (s, a), gives you a distribution over the states it expects to visit, weighted by the rewards it will get. This can still be phrased using the Bellman equation:

Hope(s, a) = r · s' + γ · Hope(s', a')

(treating s' as a one-hot vector over next states, with r the reward received there and γ the discount factor)

The "successor representation" is somewhat close to this. It encodes the distribution over future states a partcular policy expects to visit from a particular starting state, and can be learned via the Bellman equation / TD learning.

cfoster0

On reflection these were bad thresholds, should have used maybe 20 years and a risk level of 5%, and likely better defined transformational. The correlation is certainly clear here, the upper right quadrant is clearly the least popular, but I do not think the 4% here is lizardman constant.

Wait, what? Correlation between what and what? 20% of your respondents chose the upper right quadrant (transformational/safe). You meant the lower left quadrant, right?
