Pitfalls for AI Systems via Decision Theoretic Adversaries
This post accompanies a new paper related to AI alignment. A brief outline and informal discussion of the ideas are presented here, but of course, you should check out the paper for the full thing.
As progress in AI continues to advance at a rapid pace, it is important to know how advanced systems will make choices and in what ways they may fail. When thinking about the prospect of superintelligence, I think it’s all too easy and all too common to imagine that an ASI would be something which humans, by definition, can’t ever outsmart. But I don’t think we should take this for granted. Even if an AI system seems very intelligent -- potentially even superintelligent -- this doesn’t mean that it’s immune to making egregiously bad decisions when presented with adversarial situations. Thus the main insight of this paper:
The Achilles Heel hypothesis: Being a highly-successful goal-oriented agent does not imply a lack of decision theoretic weaknesses in adversarial situations. Highly intelligent systems can stably possess "Achilles Heels" which cause these vulnerabilities.
More precisely, I define an Achilles Heel as a delusion which is impairing (it results in irrational choices in adversarial situations), subtle (it doesn’t result in irrational choices in normal situations), implantable (it can be introduced into a system), and stable (it remains in the system reliably over time).
In the paper, a total of 8 prime candidate Achilles Heels are considered, alongside ways by which they could be exploited and implanted (a toy sketch of how the two decision-theory entries can come apart follows the list):
- Corrigibility
- Evidential decision theory
- Causal decision theory
- Updateful decision theory
- Simulational belief
- Sleeping beauty assumptions
- Infinite temporal models
- Aversion to the use of subjective priors
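To make the two decision-theory entries concrete, here is a minimal, illustrative sketch (my own, not from the paper) of how evidential and causal decision theory can come apart on Newcomb's problem. The predictor accuracy, payoffs, and the prior that CDT uses here are assumed values:

```python
# Illustrative sketch: EDT vs. CDT on Newcomb's problem.
# Predictor accuracy, payoffs, and CDT's prior are assumptions, not from the paper.

ACCURACY = 0.99      # assumed accuracy of the predictor
BIG = 1_000_000      # opaque box contains this if one-boxing was predicted
SMALL = 1_000        # transparent box always contains this

def edt_value(action):
    """EDT conditions on the action: choosing it is evidence about the prediction."""
    p_predicted_one_box = ACCURACY if action == "one-box" else 1 - ACCURACY
    expected_opaque = p_predicted_one_box * BIG
    return expected_opaque + (SMALL if action == "two-box" else 0)

def cdt_value(action, p_box_filled=0.5):
    """CDT treats the already-made prediction as fixed: the action cannot change it."""
    expected_opaque = p_box_filled * BIG
    return expected_opaque + (SMALL if action == "two-box" else 0)

if __name__ == "__main__":
    for name, value in (("EDT", edt_value), ("CDT", cdt_value)):
        choice = max(("one-box", "two-box"), key=value)
        print(f"{name} picks {choice}:", {a: value(a) for a in ("one-box", "two-box")})
```

An adversary that can reliably predict or inspect an agent's decision procedure can construct Newcomb-like situations in which one of these theories predictably loses value; that is the flavor of adversarial exploitation the hypothesis is about.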
This was all inspired by thinking about how, since paradoxes can often stump humans, they might also fool certain AI systems in ways that we should anticipate. The paper surveys and augments work in decision theory involving dilemmas and paradoxes in the context of this hypothesis and makes a handful of novel contributions involving implantation. My hope is that this will lead to insights on how to better model and build advanced AI. On one hand, Achilles Heels are a possible failure mode which we want to avoid, but on the other, they are an opportunity for building better models via adversarial training or the use of certain Achilles Heels for containment. This paper may also just be a useful reference in general for the topics it surveys.
For more info, you’ll have to read it! Also feel free to contact me at scasper@college.harvard.edu.
Has anyone else noticed this paper is much clearer on definitions and much more readable than the vast majority of AI safety literature, much of which it draws on? Like it has a lot of definitions that could be put in an "encyclopedia for friendly AI" so to speak.
Some extra questions:
It's really nice to hear that the paper seems clear! Thanks for the comment.
For 2-3, I can give some thoughts, but these aren't necessarily thought through much more than those of many other people one could ask.
I have thought of a similar idea: "philosophical landmines" (PL) to stop unfriendly AI. PLs are tasks that are simple to formulate but could halt an AI because they require an infinite amount of computation to solve. Examples include the Buridan's ass problem, the question of whether the AI is real or merely possible, the question of whether it is in a simulation, other anthropic riddles, and Pascal's-mugging-like problems.
The best such problems should not be published, as they could be used as our last defence against UFAI.
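A toy sketch of the Pascal's-mugging case (the numbers are arbitrary): a naive expected-utility maximizer can always be offered a claimed payoff large enough to outweigh any fixed scepticism, so the offer itself can function as a landmine.

```python
# Toy illustration (arbitrary numbers): a naive expected-utility maximizer
# facing a Pascal's-mugging-style offer. However small its credence in the
# mugger's claim, a large enough claimed payoff makes paying look positive-EV.

def naive_eu_of_paying(claimed_payoff, credence_in_claim, wallet_cost=5.0):
    """Expected value of paying, taking the stated payoff at face value."""
    return credence_in_claim * claimed_payoff - wallet_cost

for exponent in (6, 12, 100):
    claimed = 10.0 ** exponent
    pays = naive_eu_of_paying(claimed, credence_in_claim=1e-9) > 0
    print(f"claimed payoff 10^{exponent}: agent pays -> {pays}")
```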
I think that an AI capable of being nerd-sniped by these landmines will probably be nerd-sniped by them (or by other ones we haven't thought of) on its own, without our help. The kind of AI that I find more worrying (and more plausible) is the kind that isn't significantly impeded by these landmines.
Yes, landmines are the last level of defence, and they have a very low probability of working (maybe 0.1 per cent). However, if an AI is robust to all possible philosophical landmines, it is a very stable agent and has a higher chance of keeping its alignment and not failing catastrophically.
Thanks for the comment. +1 to it. I also agree that this is an interesting concept: using Achilles Heels as containment measures. There is a discussion related to this on page 15 of the paper. In short, I think that this is possible and useful for some Achilles Heels, while for others it would be a cumbersome containment measure that could be accomplished more simply via bribes of reward.