I don’t believe that datacenter security is actually a problem (see another argument).
Sorry, is your claim here that securing datacenters against highly resourced attacks from state actors (e.g. China) is going to be easy? This seems like a crazy claim to me.
(The link you cite isn't about this claim; it's about AI-enabled cyber attacks not being a big deal because cyber attacks in general aren't that big of a deal. I think I broadly agree with that, but I think stealing/tampering with model weights is a big deal.)
The China Problem: Plan B's 13% risk doesn't make sense if China (DeepSeek) doesn't slow down and is only 3 months behind. The real risk is probably the same as for E (75%) unless there is a pivotal act.
What about the US trying somewhat hard to buy lead time, e.g., by sabotaging Chinese AI companies?
The framework treats political will as a background variable rather than a key strategic lever.
I roughly agree with this. It's useful to condition on (initial) political will when making a technical plan, but I agree raising political will is important, and one issue with this perspective is that it might incorrectly make raising political will less salient.
Not a crux for me ~at all. Some upstream views that make me think "AI takeover but humans stay alive" is more likely and also make me think avoiding AI takeover is relatively easier might be a crux.
I expect a roughly 5.5-month doubling time over the next year or two, though somewhat lower seems pretty likely. The proposed timeline I gave that is consistent with Anthropic's predictions requires <1-month doubling times (and this is prior to >2x AI R&D acceleration, at least given my view of what you get at that level of capability).
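To make the gap concrete, here's a toy compounding calculation (my own illustrative numbers, not from the thread): over two years, a 5.5-month doubling time and a 1-month doubling time imply wildly different growth in time horizons.

```python
# Illustrative sketch: cumulative growth under different doubling times.
# The 24-month window and the comparison are my assumptions for illustration.

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Multiplicative growth over `months` given a fixed doubling time."""
    return 2 ** (months / doubling_time_months)

two_years = 24
print(growth_factor(two_years, 5.5))  # ~20x over two years
print(growth_factor(two_years, 1.0))  # 2^24, roughly 17 million x
```

The point is just that "<1-month doubling times" isn't a modest speedup over 5.5 months; it's many orders of magnitude more growth over the same window.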
I'd guess SWE-bench Verified has an error rate around 5% or 10%. They didn't have humans baseline the tasks; they just had humans look at them and judge whether they seem possible.
Wouldn't you expect things to look logistic substantially before full saturation?
Wouldn't you expect this if we're close to saturating SWE-bench (and some of the tasks are impossible)? Like, you eventually cap out at the max performance for SWE-bench, and this doesn't correspond to an infinite time horizon on literally SWE-bench (you'd need to include longer tasks).
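A toy model of the point above (all numbers are my own hypothetical choices, not SWE-bench data): if a benchmark has a longest task and a few unsolvable tasks, the measured score flattens once the agent's true time horizon passes the longest task, even while the horizon keeps doubling.

```python
import math

# Hypothetical benchmark: task lengths uniform in log-space from 0.1h to 8h,
# with 5% of tasks broken/impossible. An agent solves tasks shorter than its
# true time horizon. These parameters are assumptions for illustration.

MAX_TASK_HOURS = 8.0
IMPOSSIBLE_FRACTION = 0.05

def measured_success(true_horizon_hours: float) -> float:
    """Benchmark score as a function of the agent's true time horizon."""
    lo, hi = math.log(0.1), math.log(MAX_TASK_HOURS)
    solvable = min(max((math.log(true_horizon_hours) - lo) / (hi - lo), 0.0), 1.0)
    return (1 - IMPOSSIBLE_FRACTION) * solvable

# Score rises with horizon, then pins at 0.95 once horizon > longest task,
# no matter how much further the true horizon doubles.
for h in [0.5, 2.0, 8.0, 32.0, 128.0]:
    print(f"horizon {h:6.1f}h -> score {measured_success(h):.2f}")
```

So a score curve that bends over near the cap is exactly what you'd see from saturation plus some impossible tasks, without the underlying capability trend slowing at all.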
I agree probably more work should go into this space. I think it is substantially less tractable than reducing takeover risk in aggregate, but much more neglected right now. I think work in this space has the capacity to be much more zero sum (among existing actors, avoiding AI takeover is zero sum with respect to the relevant AIs) and thus can be dodgier.
Seems relevant post-AGI/ASI (when human labor is totally obsolete and AIs have massively increased energy output), maybe around the point when you're starting to build stuff like Dyson swarms or other massive space-based projects. But yeah, IMO probably irrelevant in the current regime (for the next >30 years without AGI/ASI), and current human work in this direction probably doesn't transfer.
I think the case in favor of space-based datacenters is that energy efficiency of space-based solar looks better: you can have perfect sun 100% of the time and you don't have an atmosphere in the way. But, this probably isn't a big enough factor to matter in realistic regimes without insane amounts of automation etc.
In addition to getting more energy from a given panel area, you also get that energy 100% of the time (no issues with night or clouds). But yeah, I agree, and I don't see how you get 50x efficiency even if transport to space (and assembly/maintenance in space) were free.
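A back-of-envelope check on the magnitude (my numbers, as a rough sketch): take the solar constant above the atmosphere, delivered continuously, against peak surface irradiance times a typical ground capacity factor.

```python
# Rough sketch of space vs. ground average power per m^2 of panel.
# Capacity factor and irradiance figures are my assumed ballpark values.

SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere, available ~100% of the time
GROUND_PEAK = 1000             # W/m^2 at the surface, clear sky, sun overhead
GROUND_CAPACITY_FACTOR = 0.25  # night, clouds, and sun angle at a good site

space_avg = SOLAR_CONSTANT
ground_avg = GROUND_PEAK * GROUND_CAPACITY_FACTOR
print(space_avg / ground_avg)  # ~5.4x, nowhere near 50x
```

Even with generous assumptions for space and a merely decent ground site, the per-area advantage comes out around 5x, which is why 50x looks implausible to me.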
I'm just literally assuming that Plan B involves a moderate amount of lead time via the US having a lead or trying pretty hard to sabotage China, this is part of the plan/assumptions.