I guess I'll write non-frontpage-y quick takes as posts instead then :(
I'd like to be able to see such quick takes on the homepage, the same way I can see personal blogposts there (even though logged-out users can't).
Are you hiding them from everyone? Can I opt into seeing them?
I failed to find a way to import to Slack without doing it one by one.
Bores's campaign knows, at least for people who donate via certain links. For example, the link in this post is https://secure.actblue.com/donate/boresai?refcode=lw (note the refcode=lw) rather than https://secure.actblue.com/donate/boresweb.
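To make the mechanism concrete, here's a minimal sketch (Python, purely illustrative; the refcode parameter is the only detail taken from the links above) of how a referral code gets read back out of a donation URL on the receiving side:

```python
from urllib.parse import urlparse, parse_qs

# Referral attribution works by tagging the link with a query parameter;
# whoever receives the traffic can read it back out.
url = "https://secure.actblue.com/donate/boresai?refcode=lw"
refcode = parse_qs(urlparse(url).query).get("refcode", [""])[0]
print(refcode)  # -> "lw": this donor arrived via the LessWrong link
```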
I'm annoyed that Tegmark and others don't seem to understand my position: you should try for great global coordination but also invest in safety in more rushed worlds, and a relatively responsible developer shouldn't unilaterally stop.
(I'm also annoyed by this post's framing for reasons similar to Ray.)
Part of this is thinking about donation opportunities, like Bores. Hopefully I'll have more to say publicly at some point!
Recently I've been spending much less than half of my time on projects like AI Lab Watch. Instead I've been thinking about projects in the "strategy/meta" and "politics" domains. I'm not sure what I'll work on in the future, but sometimes people incorrectly assume I'm on top of lab-watching stuff; I want people to know I'm not owning the lab-watching ball. I think lab-watching work is better than AI-governance-think-tank work for the right people on current margins, and at least one more person should do it full-time; DM me if you're interested.
Good point. I think compute providers can steal model weights, as I said at the top. I think they currently have more incentive to steal architecture and training algorithms, since those are easier to use without getting caught, so I focused on "algorithmic secrets."
Separately, are Amazon and Google incentivized to steal architecture and training algorithms? Meh. I think it's very unlikely: even if they're perfectly ruthless, their reputation is very important to them (plus they care about some legal risks). I think habryka thinks it's more likely than I do. This is relevant to Anthropic's security prioritization: security from compute providers might not be among the lowest-hanging fruit. And I think Fabien thinks it's relevant to ASL-3 compliance, and I agree that ASL-3 probably wasn't written with insider threat from compute providers in mind. But I'm not sure incentives are relevant to ASL-3 compliance: the standard doesn't say that actors are only in scope if they seem incentivized to steal stuff; the scope is based on actors' capabilities.
I agree that whether Anthropic has handled insider threat from compute providers is a crux. My guess is that Anthropic and humans-at-Anthropic wouldn't claim to have handled this (outside of the implicit claim for ASL-3); they would say something more like "that's out of scope for ASL-3" or "oops."
Separately, I just unblocked you. (I blocked you because I didn't like this thread in my shortform, not directly to stifle dissent. I have not blocked anyone else. I mention this because hearing that disagreement is being hidden or blocked should make readers suspicious, but that suspicion is mostly unwarranted in this case.)
Edit: also, man, I tried to avoid "condemnation" and I think I succeeded. I was just making an observation. I don't really condemn Anthropic for this.
I think "Overton window" is a pretty load-bearing concept for many LW users and AI people — it's their main model of policy change. Unfortunately there's lots of other models of policy change. I don't think "Overton window" is particularly helpful or likely-to-cause-you-to-notice-relevant-stuff-and-make-accurate-predictions. (And separately people around here sometimes incorrectly use "expand the Overton window" to just mean with "advance AI safety ideas in government.") I don't have time to write this up; maybe someone else should (or maybe there already exists a good intro to the study of why some policies happen and persist while others don't[1]).
Some terms: policy windows (and "multiple streams"), punctuated equilibrium, policy entrepreneurs, path dependence and feedback (yes, this is a real concept in political science; e.g. policies that cause interest groups to depend on them are less likely to be reversed), gradual institutional change, and framing/narrative/agenda-setting.
Related point: https://forum.effectivealtruism.org/posts/SrNDFF28xKakMukvz/tlevin-s-quick-takes?commentId=aGSpWHBKWAaFzubba.
I liked the book Policy Paradox in college. (Example claim: perceived policy problems are strategically constructed through political processes; how issues are framed, e.g. individual vs. collective responsibility, determines which solutions seem appropriate.) I asked Claude for suggestions for a shorter intro but didn't find them helpful.
I guess I think that if you work on government stuff and you [don't have a poli sci background / aren't familiar with concepts like "multiple streams"], you should read Policy Paradox (although the book isn't about that particular concept).