Some AI safety methods/mechanisms can be tacked onto many kinds of AI systems. But separately, some paths to powerful AI are safer or more alignable than others: for example, language model (LM) agents[1] and whole brain emulation (WBE).

WBE seems very unlikely to appear before strong de novo AI. But other relatively safe paths may be competitive (i.e., they need not cost much extra or sacrifice much capability relative to unsafe paths). This has important implications: AI developers should prioritize those paths, and in particular should differentially publish research on them, to boost others pursuing them.[2]

Which paths to powerful AI are relatively safe and potentially competitive, and thus should be boosted?

This question is a more focused successor to "Which possible AI systems are relatively safe?"

  1. ^

    Paul Christiano says: "My guess is that if you hold capability fixed and make a marginal move in the direction of (better LM agents) + (smaller LMs) then you will make the world safer. It straightforwardly decreases the risk of deceptive alignment, makes oversight easier, and decreases the potential advantages of optimizing on outcomes."

  2. ^

    There's a quote on differential technological development that I'm forgetting, roughly: if there's an unsafe path and a safer path, and the unsafe path is ahead (in terms of capabilities), we should rush to make progress on the safer path so that it pulls ahead and even non-safety-motivated researchers switch to it.
