Ben Cottier's Comments

Ben Cottier's Shortform

RE: https://futureoflife.org/2020/05/15/on-the-future-of-computation-synthetic-biology-and-life-with-george-church/

Church's views on AI seem far from my own and from most views in the AI risk community, and they really intrigued me. It would be great to try to distil and summarise these views so we can update on them properly.

Ben Cottier's Shortform

Model of the threat and interventions for mesa-optimization

  • Consider a chain model
    • Base optimizer
    • -> Mesa optimizer
      • Produced through optimization of base objective in training environment
    • -> Misalignment (base objective != mesa objective)
      • Different kinds of misalignment
        • Proxy: mesa objective is a proxy for the base objective in the training environment
          • Side-effect: optimizing mesa objective happens to optimize base objective
          • Instrumental: optimizing base objective happens to optimize mesa objective
        • Approximate: objectives differ due to approximation error, caused by limits of representation in the mesa-optimizer's model
        • Suboptimal: optimizing mesa objective happens to optimize base objective due to a flaw
      • What makes this particularly concerning relative to the "standard" alignment problem? It's that the misalignment may be particularly difficult to detect (see next step). The mesa-optimizer has been "screened" by the base optimizer during training; it has to align with the base objective to some extent, at least in the training environment. This means it is either aligned or deceptively misaligned. The base probability of a misaligned mesa-optimizer is therefore lower than that of a misaligned system in general, but conditional on it arising, it could carry greater risk.
    • -> Deployment
      • Not enough evidence of risk emerges from training and testing in a controlled environment
      • The mesa-optimizer is deceptive, in order to increase its chance of deployment
    • -> Distribution shift
      • Deployment environment differs from training environment or controlled testing environment
    • -> Unanticipated behaviour
      • The distribution shift causes the misalignment to manifest in different behaviour
      • There is a "standard" problem of robustness to distribution shift that applies to any machine learning model, not just mesa-optimizers. What makes a mesa-optimizer particularly concerning? It's that optimization will tend to make the unanticipated behaviour more adversarial and influence-seeking. It can freely pursue an objective rather than merely perform a specified task.
    • -> Catastrophe
      • The unanticipated behaviour is so bad that it leads to a permanent, huge loss of value.
      • Ways this doesn't happen:
        • The behaviour is counter-productive: perhaps self-destructive, or not directed or consistent enough to lead to anything bad (depends on how sensitive the domain is to random error)
        • Intervention from other actors (depends on decisive strategic advantage)
  • Interventions
    • Base -> Mesa: prevent mesa-optimizer arising
      • Alternative methods of subsystem generation beyond search
    • Mesa -> Misalignment: prevent mesa-optimizer from being misaligned
      • Standard alignment methods
      • Specific methods for base optimizer to align mesa optimizer
    • Misalignment -> Deployment: don't deploy misaligned mesa-optimizer
      • Transparency
      • Interpretability
    • Deployment -> Distribution shift: ensure the deployment environment does not differ significantly from the training or controlled testing environment
      • The converse (making the training environment match the anticipated deployment environment) is probably more tractable, in which case the intervention comes before Base -> Mesa
    • Distribution shift -> Unanticipated behaviour: prevent unanticipated behaviour by detecting that inputs are out of distribution, then deferring or failing gracefully (see the sketch after this list)
      • Out-of-distribution detection
      • Well-calibrated uncertainty estimation
      • Corrigibility
    • Unanticipated behaviour -> Catastrophe: prevent catastrophe
      • Internal
        • Impact measurement
      • External
        • Shut down
        • Stop or steer the behaviour in various ways, depending on the nature of the behaviour
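
To make the "detect the shift, then defer" intervention above concrete, here is a minimal sketch in Python. It assumes a model that outputs softmax probabilities and uses the maximum softmax probability as a crude confidence score, with the acceptance threshold calibrated on held-out in-distribution data; the data, threshold, and coverage target are illustrative placeholders rather than anything from the original model write-up.

```python
import numpy as np

def confidence(probs: np.ndarray) -> np.ndarray:
    """Maximum softmax probability per example (higher = more confident)."""
    return probs.max(axis=-1)

def calibrate_threshold(val_probs: np.ndarray, target_coverage: float = 0.95) -> float:
    """Pick a threshold so roughly target_coverage of in-distribution inputs are accepted."""
    return float(np.quantile(confidence(val_probs), 1.0 - target_coverage))

def act_or_defer(probs: np.ndarray, threshold: float):
    """Act on the model's output when confident; otherwise defer (fail gracefully)."""
    if float(confidence(probs)) < threshold:
        return "defer"          # hand off to a human or a safe fallback policy
    return int(probs.argmax())  # proceed with the model's chosen action

rng = np.random.default_rng(0)
# Synthetic stand-in for validation-set softmax outputs (confident, in-distribution).
val_probs = rng.dirichlet(alpha=[10.0, 1.0, 1.0], size=1000)
threshold = calibrate_threshold(val_probs)
# A diffuse output, as might appear on an out-of-distribution input at deployment.
ood_output = np.array([0.4, 0.35, 0.25])
print(act_or_defer(ood_output, threshold))  # expected: "defer"
```

The same structure works with better-calibrated uncertainty estimates (e.g. ensembles or temperature scaling) in place of the raw max-softmax score; the point is only that the system recognises it is off-distribution and defers rather than freely pursuing its objective.
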
Will AI undergo discontinuous progress?

Thanks, that makes sense. To clarify: I realise there are references/links throughout, but I had forgotten that the takeoff speeds post was basically making the same claim as that quote, so I was expecting a reference more from the biology side. There are other places too where I'm curious what informed you, e.g. the progress of guns, though that's easier to read up on myself.

Gary Marcus: Four Steps Towards Robust Artificial Intelligence
A team of people including Smolensky and Schmidhuber have produced better results on a mathematics problem set by combining BERT with tensor products (Smolensky et al., 2016), a formal system for representing symbolic variables and their bindings (Schlag et al., 2019), creating a new system called TP-Transformer.

Notable that the latter paper was rejected from ICLR 2020, partly for unfair comparison. It seems unclear at present whether TP-Transformer is better than the baseline transformer.
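
For anyone unfamiliar with tensor products as a way of representing symbolic variables and their bindings, here is a minimal sketch (my own illustration, not code from either paper). Fillers (symbols) are bound to roles (structural positions) via outer products and summed into a single tensor; with orthonormal role vectors, unbinding by contracting with a role recovers the filler exactly. The dimensions and example bindings are arbitrary.

```python
import numpy as np

d_filler, d_role = 8, 4
rng = np.random.default_rng(0)

# Filler vectors: one per symbol we want to store.
fillers = {name: rng.normal(size=d_filler) for name in ["x", "y", "3"]}

# Role vectors: one per structural position. Orthonormal rows of an
# identity matrix keep unbinding exact.
roles = dict(zip(["arg1", "arg2", "value"], np.eye(d_role)))

# Binding: the tensor product representation is the sum of outer products
# filler_i (x) role_i, encoding all the variable bindings in one tensor.
bindings = [("x", "arg1"), ("y", "arg2"), ("3", "value")]
tpr = sum(np.outer(fillers[f], roles[r]) for f, r in bindings)

# Unbinding: contracting the tensor with a role vector recovers the filler
# bound to that role.
recovered = tpr @ roles["arg1"]
assert np.allclose(recovered, fillers["x"])
```

TP-Transformer builds role-filler structure of this kind into the transformer architecture; the sketch only shows the basic binding and unbinding idea.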

Will AI undergo discontinuous progress?

I think this is a good analysis, and I'm really glad to see this kind of deep dive on an important crux. The most clarifying thing for me was connecting old and new arguments - they seem to have more common ground than I thought.

One thing I would appreciate is more in-text references. There are a bunch of claims here, about e.g. history and evolution, with no explicit reference. Maybe it seems like common knowledge, but I wasn't sure whether to believe some things, e.g.

Evolution was optimizing for fitness, and driving increases in intelligence only indirectly and intermittently by optimizing for winning at social competition. What happened in human evolution is that it briefly switched to optimizing for increased intelligence, and as soon as that happened our intelligence grew very rapidly but continuously.

Could you clarify? I thought biological evolution always optimizes for inclusive genetic fitness.

Clarifying some key hypotheses in AI alignment

Thanks! Comments are much appreciated.

Why the arrow from "agentive AI" to "humans are economically outcompeted"? The explanation makes it sound like it should point to "target loading fails"??

It's been a few months and I didn't write in detail why that arrow is there, so I can't be certain of the original reason. My understanding now: humans getting economically outcompeted means AI systems are competing with humans, and therefore optimising against humans on some level. Goal-directedness enables/worsens this.

Looking back at the linked explanation of the target loading problem, I understand it as more "at the source": coming up with a procedure that makes AI actually behave as intended. As Richard said there, one can think of it as a more general version of the inner-optimiser (mesa-optimiser) problem. This is why e.g. there's an arrow from "incidental agentive AGI" to "target loading fails". Pointing the arrow at "target loading fails" might also make sense, but to me the connection isn't strong enough to justify spending the diagram's "clutter budget" on it.

Suggestion: make the blue boxes without parents more apparent? e.g. a different shade of blue? Or all sitting above the other ones? (e.g. "broad basin of corrigibility" could be moved up and left).

Changing the design of those boxes sounds good. I don't want to move them because the arrows would get more cluttered.