Great post! I agree with a lot of the reasoning and am also quite worried about insufficient preparedness for short timelines.
On security, you say:
> Model weights (and IP) are secure: By this, I mean SL4 or higher...
I think it's worth explicitly stating that if AI companies only manage to achieve SL4, we should expect OC5 actors to successfully steal model weights, conditional on their deciding that doing so is a top-level priority.
However, this implication doesn't really jibe with the rest of your comments regarding security and the number of frontier actors. It seems to me that a lot of pretty reasonable plans or plan-shaped-objects, yours included, rely to an extent on the...