Summary: A proposal meant to contribute to producing [value-aligned machine intelligence] is 'advanced-safe' if it is robust to advanced agent scenarios where the AI becomes much smarter than its human developers.
A proposal for a value-alignment methodology, or some aspect of such a methodology, is allegedly 'advanced-safe' if it is claimed to be robust to advanced agent scenarios, e.g., scenarios in which the agent becomes much smarter than its human developers.
Much of the reason to worry about the value alignment problem for cognitively powerful agents is that problems like EdgeInstantiation or UnforeseenMaximums don't materialize before an agent is advanced, or don't materialize in the same way or as severely. The problems of dealing with minds smarter than our own, doing things we didn't imagine, seem qualitatively different from designing a toaster oven not to burn down a house, or even from designing a general AI system that is dumber than human. This means that the concept of 'advanced safety' is importantly different from the concept of robustness for pre-advanced AI.
We have observed in practice that many proposals for 'AI safety' do not seem to have been thought through against advanced agent scenarios, so there seems to be a practical urgency to emphasizing the concept and the difference.
Key problems of advanced safety that are new, or qualitatively different, compared to pre-advanced AI safety include problems of the kind mentioned above, such as EdgeInstantiation and UnforeseenMaximums.
Non-advanced-safe methodologies may conceivably be useful if a KnownAlgorithmNonrecursiveAgent can be created that (a) is powerful enough to be relevant and (b) can be known not to become advanced. Even here there may be grounds for worrying that such an agent might find unexpectedly strong strategies in some particular subdomain - that it might exhibit flashes of domain-specific advancement that break a non-advanced-safe methodology.
As an extreme case, an 'omni-safe' methodology allegedly remains value-aligned, or fails safely, even if the agent suddenly becomes omniscient and omnipotent (acquires delta probability distributions on all facts of interest and has all describable outcomes available as direct options). Thinking about the 'Omni' scenario is meant to highlight any step on which we've presumed, in a non-failsafe way, that the agent cannot obtain definite knowledge of some fact or cannot gain access to some strategic option.
E.g., if a new regime has suddenly been entered, posing new EdgeInstantiation problems, then perhaps newly available strategic options should not be taken, with the agent acting very conservatively pending programmer consultation. An 'omni-safe' proposal would try to make this a rule that applies regardless of what sort of new regime opened up, rather than trying to imagine the probable limits of new regimes and designing a rule that operates only inside those guessed limits.
Similarly, rather than design an AI that is meant to be continuously monitored for unexpected power gains by programmers who then have one minute to press a pause button - a design which implicitly assumes that no such new regime can open, and catastrophe follow, in a timespan so short that a programmer wouldn't have one minute to think - an omni-safe proposal would design the AI to detect the new power and pause and wait, rather than having the methodology fail catastrophically if the new power were gained too quickly. Even if it seemed extremely unreasonable that some amount of cognitive power could be gained in less than a minute, especially when no such sharp power gain had previously occurred even in the course of a day, etcetera, the omni-safe way of thinking would say to simply not build an agent that is unsafe if these kinds of background variables have 'unreasonable' settings.
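As a purely illustrative sketch of this design difference (not drawn from any particular proposal), the pause rule can be written as a check the agent itself applies before acting, with no timing assumption about how quickly power might be gained; the names RegimeGuard, baseline_capability, and conservative_threshold below are hypothetical:

```python
# Hypothetical sketch: an agent-side rule that pauses on any unexpected capability
# gain, instead of relying on programmers noticing the gain within a fixed window.

from dataclasses import dataclass


@dataclass
class RegimeGuard:
    baseline_capability: float      # capability level the designers actually vetted
    conservative_threshold: float   # how much unexpected gain triggers a pause

    def should_pause(self, current_capability: float) -> bool:
        # The rule never reasons about *how fast* the gain happened; any move past
        # the vetted regime triggers conservative behavior, so safety does not rest
        # on guesses about the speed of capability growth.
        return (current_capability - self.baseline_capability
                > self.conservative_threshold)


def choose_action(guard: RegimeGuard, current_capability: float,
                  planned_action: str) -> str:
    if guard.should_pause(current_capability):
        # Fail safe: suspend newly available strategic options and wait for
        # programmer consultation, however the new regime was entered.
        return "pause_and_await_programmers"
    return planned_action


# Example: even a sudden, 'unreasonable' capability jump triggers the same pause.
guard = RegimeGuard(baseline_capability=1.0, conservative_threshold=0.5)
print(choose_action(guard, current_capability=100.0, planned_action="proceed"))
```

The only point of the sketch is that the trigger condition contains no reference to elapsed time or to the presumed limits of the new regime.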
Running thought experiments against the 'omni' scenario reflects the proposal that a good agent design just shouldn't fail unsafely no matter what knowledge or options it acquires. Why should it, if value alignment and corrigibility have otherwise been handled correctly? Why try to guess what facts an advanced agent can't figure out or what strategic options it can't have? Why make that guess a load-bearing proposition that kills us if we're wrong? Why design an agent that we expect will hurt us if it knows too much or can do too much?
The idea is not so much that we can't upper-bound the speed or the power of an advanced agent, as that any problem highlighted by the omni scenario must reflect some kind of underlying flaw in a proposed methodology. Suppose NASA found that an alignment of four planets would cause a rocket's program to crash and the engines to explode. They wouldn't say, "Oh, we're not expecting any alignment like that for the next hundred years, so we're still safe." They'd say, "Wow, that sure was a major bug in the program." A correctly designed program just shouldn't make the rocket explode under any conditions. If any specific scenario exposes a behavior like that, it shows that some general case is not being handled correctly.