Epistemic status: trying to vaguely gesture at vague intuitions. A similar idea was explored here under the heading "the intelligibility of intelligence", although I hadn't seen it before writing this post. As of 2020, I consider this follow-up comment to be a better summary of the thing I was trying to convey with this post than the post itself. The core disagreement is about how much we expect the limiting case of arbitrarily high intelligence to tell us about the AGIs whose behaviour we're worried about.
There’s a mindset which is common in the rationalist community, which I call “realism about rationality” (the name being intended as a parallel to moral realism). I feel like my skepticism about agent foundations research is closely tied to my skepticism about this mindset, and so in this essay I try to articulate what it is.
Humans ascribe properties to entities in the world in order to describe and predict them. Here are three such properties: "momentum", "evolutionary fitness", and "intelligence". These are all pretty useful properties for high-level reasoning in the fields of physics, biology and AI, respectively. There's a key difference between the first two, though. Momentum is very amenable to formalisation: we can describe it using precise equations, and even prove things about it. Evolutionary fitness is the opposite: although nothing in biology makes sense without it, no biologist can take an organism and write down a simple equation to define its fitness in terms of more basic traits. This isn't just because biologists haven't figured out that equation yet. Rather, we have excellent reasons to think that fitness is an incredibly complicated "function" which basically requires you to describe that organism's entire phenotype, genotype and environment.
In a nutshell, then, realism about rationality is a mindset in which reasoning and intelligence are more like momentum than like fitness. It's a mindset which makes the following ideas seem natural:
- The idea that there is a simple yet powerful theoretical framework which describes human intelligence and/or intelligence in general. (I don't count brute force approaches like AIXI for the same reason I don't consider physics a simple yet powerful description of biology).
- The idea that there is an “ideal” decision theory.
- The idea that AGI will very likely be an “agent”.
- The idea that Turing machines and Kolmogorov complexity are foundational for epistemology.
- The idea that, given certain evidence for a proposition, there's an "objective" level of subjective credence which you should assign to it, even under computational constraints.
- The idea that Aumann's agreement theorem is relevant to humans.
- The idea that morality is quite like mathematics, in that there are certain types of moral reasoning that are just correct.
- The idea that defining coherent extrapolated volition in terms of an idealised process of reflection roughly makes sense, and that it converges in a way which doesn’t depend very much on morally arbitrary factors.
- The idea that having contradictory preferences or beliefs is really bad, even when there’s no clear way that they’ll lead to bad consequences (and you’re very good at avoiding dutch books and money pumps and so on).
To be clear, I am neither claiming that realism about rationality makes people dogmatic about such ideas, nor claiming that they're all false. In fact, from a historical point of view I’m quite optimistic about using maths to describe things in general. But starting from that historical baseline, I’m inclined to adjust downwards on questions related to formalising intelligent thought, whereas rationality realism would endorse adjusting upwards. This essay is primarily intended to explain my position, not justify it, but one important consideration for me is that intelligence as implemented in humans and animals is very messy, and so are our concepts and inferences, and so is the closest replica we have so far (intelligence in neural networks). It's true that "messy" human intelligence is able to generalise to a wide variety of domains it hadn't evolved to deal with, which supports rationality realism, but analogously an animal can be evolutionarily fit in novel environments without implying that fitness is easily formalisable.
Another way of pointing at rationality realism: suppose we model humans as internally-consistent agents with beliefs and goals. This model is obviously flawed, but also predictively powerful on the level of our everyday lives. When we use this model to extrapolate much further (e.g. imagining a much smarter agent with the same beliefs and goals), or base morality on this model (e.g. preference utilitarianism, CEV), is that more like using Newtonian physics to approximate relativity (works well, breaks down in edge cases) or more like cavemen using their physics intuitions to reason about space (a fundamentally flawed approach)?
Another gesture towards the thing: a popular metaphor for Kahneman and Tversky's dual process theory is a rider trying to control an elephant. Implicit in this metaphor is the localisation of personal identity primarily in the system 2 rider. Imagine reversing that, so that the experience and behaviour you identify with are primarily driven by your system 1, with a system 2 that is mostly a Hansonian rationalisation engine on top (one which occasionally also does useful maths). Does this shift your intuitions about the ideas above, e.g. by making your CEV feel less well-defined? I claim that the latter perspective is just as sensible as the former, and perhaps even more so - see, for example, Paul Christiano's model of the mind, which leads him to conclude that "imagining conscious deliberation as fundamental, rather than a product and input to reflexes that actually drive behavior, seems likely to cause confusion."
These ideas have been stewing in my mind for a while, but the immediate trigger for this post was a conversation about morality which went along these lines:
R (me): Evolution gave us a jumble of intuitions, which might contradict when we extrapolate them. So it’s fine to accept that our moral preferences may contain some contradictions.
O (a friend): You can’t just accept a contradiction! It’s like saying “I have an intuition that 51 is prime, so I’ll just accept that as an axiom.”
R: Morality isn’t like maths. It’s more like having tastes in food, and then having preferences that the tastes have certain consistency properties - but if your tastes are strong enough, you might just ignore some of those preferences.
O: For me, my meta-level preferences about the ways to reason about ethics (e.g. that you shouldn’t allow contradictions) are so much stronger than my object-level preferences that this wouldn’t happen. Maybe you can ignore the fact that your preferences contain a contradiction, but if we scaled you up to be much more intelligent, running on a brain orders of magnitude larger, having such a contradiction would break your thought processes.
R: Actually, I think a much smarter agent could still be weirdly modular like humans are, and work in such a way that describing it as having “beliefs” is still a very lossy approximation. And it’s plausible that there’s no canonical way to “scale me up”.
I had a lot of difficulty in figuring out what I actually meant during that conversation, but I think a quick way to summarise the disagreement is that O is a rationality realist, and I’m not. This is not a problem, per se: I'm happy that some people are already working on AI safety from this mindset, and I can imagine becoming convinced that rationality realism is a more correct mindset than my own. But I think it's a distinction worth keeping in mind, because assumptions baked into underlying worldviews are often difficult to notice, and also because the rationality community has selection effects favouring this particular worldview even though it doesn't necessarily follow from the community's founding thesis (that humans can and should be more rational).
Although I don't necessarily subscribe to the precise set of claims characterized as "realism about rationality", I do think this broad mindset is mostly correct, and the objections outlined in this essay are mostly wrong.
This seems entirely wrong to me. Evolution definitely should be studied using mathematical models, and although I am not an expert in that, AFAIK this approach is fairly standard. "Fitness" just refers to the expected behavior of the number of descendants of a given organism or gene. Therefore, it is perfectly definable modulo the concept of a "descendant". The latter is not as unambiguously defined as "momentum" but under normal conditions it is quite precise. The actual structure and dynamics of biological organisms and their environment are very complicated, but this does not preclude the abstract study of evolution, i.e. understanding which sorts of dynamics are possible in principle (for general environments) and in which way they depend on the environment etc. Applying this knowledge to real-life evolution is not trivial (and it does require a lot of complementary empirical research), as is the application of theoretical knowledge in any domain to "messy" real-life examples, but that doesn't mean such knowledge is useless. On the contrary, such knowledge is often essential to progress.
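As a concrete instance of the kind of mathematical model being gestured at (a standard population-genetics formula, not something stated in the comment): for a haploid population with two alleles $A$ and $a$ at frequencies $p$ and $1-p$, the frequency change under selection in one generation is

```latex
\[
p' = \frac{p\, w_A}{\bar{w}},
\qquad
\Delta p = p' - p = \frac{p\,(w_A - \bar{w})}{\bar{w}},
\qquad
\bar{w} = p\, w_A + (1 - p)\, w_a ,
\]
```

where $w_A$ and $w_a$ are the fitnesses of the two alleles, understood as expected numbers of descendants. The point is that fitness is well-defined as an expectation even though computing it for a real organism would require modelling its whole phenotype and environment.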
I wonder whether the OP also doesn't count all of computational learning theory? Also, physics is definitely not a sufficient description of biology, but on the other hand, physics is still very useful for understanding biology. Indeed, it's hard to imagine we would achieve the modern level of understanding of chemistry without understanding at least non-relativistic quantum mechanics, and it's hard to imagine we would make much headway in molecular biology without chemistry, thermodynamics et cetera.
Once again, the OP uses the concept of "messiness" in a rather ambiguous way. It is true that human and animal intelligence is "messy" in the sense that brains are complex and many of the fine details of their behavior are artifacts of either fine details in limitations of biological computational hardware, or fine details in the natural environment, or plain evolutionary accidents. However, this does not mean that it is impossible to speak of a relatively simple abstract theory of intelligence. This is because the latter theory aims to describe mindspace as a whole rather than describing a particular rather arbitrary point inside it.
The disagreement here seems to revolve around the question of, when should we expect to have a simple theory for a given phenomenon (i.e. when does Occam's razor apply)? It seems clear that we should expect to have a simple theory of e.g. fundamental physics, but not a simple equation for the coastline of Africa. The difference is, physics is a unique object that has a fundamental role, whereas Africa is just one arbitrary continent among the set of all continents on all planets in the universe throughout its lifetime and all Everett branches. Therefore, we don't expect a simple description of Africa, but we do expect a relatively simple description of planetary physics that would tell us which continent shapes are possible and which are more likely.
Now, "rationality" and "intelligence" are in some sense even more fundumental than physics. Indeed, rationality is what tells us how to form correct beliefs, i.e. how to find the correct theory of physics. Looking an anthropic paradoxes, it is even arguable that making decisions is even more fundumental than forming beliefs (since anthropic paradoxes are situations in which assigning subjective probabilities seems meaningless but the correct decision is still well-defined via "functional decision theory" or something similar). Therefore, it seems like there has to be a simple theory of intelligence, even if specific instances of intelligence are complex by virtue of their adaptation to specific computational hardware, specific utility function (or maybe some more general concept of "values"), somewhat specific (although still fairly diverse) class of environments, and also by virtue of arbitrary flaws in their design (that are still mild enough to allow for intelligent behavior).
This line of thought would benefit from more clearly delineating descriptive versus prescriptive. The question we are trying to answer is: "if we build a powerful goal-oriented agent, what goal system should we give it?" That is, it is fundamentally a prescriptive rather than descriptive question. It seems rather clear that the best choice of goal system would be in some sense similar to "human goals". Moreover, it seems that if possibilities A and B are such that it is ill-defined whether humans (or at least, those humans that determine the goal system of the powerful agent) prefer A or B, then there is no moral significance to choosing between A and B in the target goal system. Therefore, we only need to determine "human goals" to the precision to which they are actually well-defined, not to absolute precision.
The question is not whether it is "fine". The question is, given a situation in which intuition A demands action X and intuition B demands action Y, what is the morally correct action? The answer might be "X", it might be "Y", it might be "both actions are equally good", or it might be even "Z" for some Z different from both X and Y. But any answer effectively determines a way to remove the contradiction, replacing it by a consistent overarching system. And, if we actually face that situation, we need to actually choose an answer.
Well, it would manifest as a failure to create a complete and deterministic theory of computable physics. If your physics doesn't describe absolutely everything, hypercomputation can hide in places it doesn't describe. If your physics is stochastic (like quantum mechanics, for example) then the random bits can secretly follow a hypercomputable pattern. A sort of "hypercomputer of the gaps". Like I wrote before, there actually can be situations in which we gradually…