Twitter user @atroyn claims that recursive self-improvement is impossible because of Kolmogorov complexity. Quoting most of[1] the argument here:

here is an argument against the possibility of recursive self improvement of any 'intelligent' computer program, based on kolmogorov complexity.

intelligence is the ability to make correct predictions about the state of the world given available information.

each program which makes predictions about the world has a kolmogorov complexity corresponding to the length of the shortest string which can express that program

for a given program p call this complexity k.

(unfortunately k(p) is in general uncomputable, the proof reduces to the halting problem, but that's not important here)

more intelligence (in our definition) implies the ability to predict more of the world more accurately, i.e. to express more of the world's complexity - this implies that a more intelligent program p2 necessarily has more complexity than a less intelligent p1

to see that this is necessarily so, note that if we could predict the world equally accurately as p1's prediction with a program p0 with k0 < k1, then we have a contradiction since k1 was supposed to be the minimal expression of intelligence at that level

in order to get recursive self improvement, you need a program p1 which is capable of emitting p2 which is better able to predict the world than p1 - i.e., we need p1 to emit p2 such that k2 > k1

but this is a contradiction.

[...]

The mistake here is the assumption that a program that models the world better necessarily has a higher Kolmogorov complexity. Originally, Kolmogorov complexity measured the complexity of bit strings. But we're talking about predictors here, things that observe the world and spit out probability distributions over observed outcomes. In the context of predictors, Kolmogorov complexity measures the complexity of a function from observations to predictions.

In the case of ideal Bayesian reasoning, we can nail down such a function just by specifying a prior, e.g. the Solomonoff prior. (Plus, an approximation scheme to keep things computable, I guess.) This doesn't take a very large program to implement. But a non-ideal reasoner will screw up in many cases, and there's information contained in the exact way it screws up for each set of observations. Such reasoners can have an almost arbitrarily high Kolmogorov complexity, and they're all worse than the ideal Bayesian program.
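
To make this concrete, here is a toy sketch (the predictors and the sequence are made up for illustration): a few-line predictor that nails a simple world, next to a bloated predictor whose extra description length consists entirely of memorized junk. The bigger program is strictly worse.

```python
import random

# The "world": a deterministic alternating bit sequence 0, 1, 0, 1, ...
world = [t % 2 for t in range(10_000)]

def short_predictor(history):
    """A few lines of code: predict that the next bit flips the last one."""
    return 1 - history[-1] if history else 0

# A bloated predictor: the same rule plus a huge table of arbitrary hard-coded
# "exceptions". Its shortest description is far longer, and every exception is a mistake.
random.seed(0)
EXCEPTIONS = {t: random.randint(0, 1) for t in random.sample(range(10_000), 2_000)}

def bloated_predictor(history):
    t = len(history)
    if t in EXCEPTIONS:
        return EXCEPTIONS[t]           # memorized noise
    return short_predictor(history)    # otherwise fall back to the simple rule

def accuracy(predict):
    hits = sum(predict(world[:t]) == world[t] for t in range(1, len(world)))
    return hits / (len(world) - 1)

print("short predictor  :", accuracy(short_predictor))    # ~1.00
print("bloated predictor:", accuracy(bloated_predictor))  # ~0.90
```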

In other words, the successor program has Kolmogorov complexity less than or equal to that of its predecessor, but so what? That doesn't imply that it's worse.

(Also, Kolmogorov complexity doesn't care at all about how much time a program takes to run, but in the real world runtime is an important consideration, and a target for self-improvement.)
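
A minimal sketch of that runtime point, using a stand-in example rather than an actual AI: the two functions below have identical input-output behaviour, so as "predictors" they compute the same function and their Kolmogorov complexities differ by at most a small constant, yet one is exponentially slower. Turning the slow one into the fast one is exactly the kind of improvement Kolmogorov complexity can't see.

```python
import time
from functools import lru_cache

def fib_slow(n):
    # exponential-time recursion
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # same recurrence, memoized: linear time
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

for f in (fib_slow, fib_fast):
    start = time.perf_counter()
    result = f(32)
    print(f.__name__, result, f"{time.perf_counter() - start:.4f}s")
```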

That concludes this post: without the assumption that a better predictor must have a higher Kolmogorov complexity, the whole argument falls apart.


  1. The rest of the thread briefly touches on the issue of how an AI could know that its successor would necessarily be an improvement. The discussion there is somewhat doomed, since it's done with the goal of showing that the successor has Kolmogorov complexity less than or equal to that of the original, which is uninteresting: we can see right away that it must be true, assuming the original writes the successor before observing the world at all. But there's an interesting version of the question, which asks about the set of axioms used by the systems to reason about the world, rather than the Kolmogorov complexity. See this paper by Yudkowsky and Herreshoff for details. ↩︎

Comments:

For an argument like this, the author needs to immediately show that it doesn't 'prove too much', i.e. that it doesn't also imply that evolution is impossible, or that a child learning with a growing brain is impossible...

I had a kinda different take (copied from twitter)

This tweet is an interesting argument but I think a central flaw is ignoring how long the program has to run to produce the prediction.

For example, “AlphaZero-after-1-game-of-self-play” and “AlphaZero-after-10²⁰-games-of-self-play” have essentially the same Kolmogorov complexity. After all, they have the exact same source code, apart from like 4 characters that specify the number of games to play. But there’s a real sense in which the latter is better at Go than the former. Specifically, it’s better in the sense of “I don’t want to sit around while it does 10²⁰ games of self-play, I want to play Go right now.”

Another way to think about this argument: Suppose AI_1 builds AI_2, and then AI_2 does X. Well, it’s also true that “AI_1 did X”—specifically, “AI_1 did X” by building AI_2.

In a certain sense, this is true! But that’s a pretty weird way to think about things! AI_2 does in fact exist here!

K-complexity asks us to forget about AI_2, by focusing the discussion on what happens given infinite time, as opposed to how it happens and how long it takes.
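
As a toy numerical version of the AlphaZero point above (a stand-in bandit learner rather than a Go engine, not from the tweets): the two runs below share every line of source code except the single practice-budget constant, so their Kolmogorov complexities are essentially identical, but one plays much better right now.

```python
import random

ARM_MEANS = [0.2, 0.5, 0.8]   # hidden payoff probabilities; arm 2 is best

def train(n_practice, seed=0):
    """Epsilon-greedy practice, then return the arm the agent would pick right now."""
    rng = random.Random(seed)
    counts, totals = [0] * 3, [0.0] * 3

    def estimate(a):
        return totals[a] / counts[a] if counts[a] else 0.0

    for _ in range(n_practice):                       # "self-play" / practice rounds
        arm = rng.randrange(3) if rng.random() < 0.1 else max(range(3), key=estimate)
        reward = 1.0 if rng.random() < ARM_MEANS[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return max(range(3), key=estimate)

# Identical source code apart from one constant, very different skill right now:
print("arm chosen after 1 practice round       :", train(1))          # usually not arm 2
print("arm chosen after 100,000 practice rounds:", train(100_000))    # reliably arm 2
```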

Then after sleeping on it I tweeted again:

I think the core true point in atroyn’s argument is: there is a-priori-unpredictable complexity in the world that can’t be deduced from an armchair, but rather has to be observed, and making a “more intelligent successor” does not substitute for that.

If you flip a coin and don’t tell me, then I don’t know whether it’s heads or tails. And I also can’t make a “more intelligent successor” that knows whether it’s heads or tails.

This is entirely true! But I claim people talking about recursive self-improvement are not making that mistake.

For example:

  • There’s an “overhang” of possible logical inferences that an AI could make on the basis of its existing knowledge, but doesn’t (e.g. if I tell an AI the axioms of math, it doesn’t instantaneously prove every possible theorem),
  • There’s an “overhang” of possible input data that an AI could download and scrutinize, but doesn’t (e.g. as of this writing, I believe no AI has watched all 100,000 years of YouTube)
  • There’s an “overhang” of possible plans that an AI could execute but doesn’t (e.g. an early AGI is unlikely to be simultaneously doing every remote job on the planet, while also starting a zillion new ambitious projects in parallel).

So an AI could self-improve in a way that allows it to go farther and faster on those metrics.

An obvious example is tweaking the assembly code to make the same AI run faster.

I also want to put self-replication into this category: going from “one instance of an AI” to “a million instances of the same AI running in parallel and collaborating” (e.g. by buying or stealing additional compute). If you think about it, I claim that should totally count as “self-improvement”, because after all one AI system is creating a more powerful AI “system”. The latter “system” is composed of many instances of the original AI, but so what? It should still count, IMO.

I think the OP here is also valid (and complementary).

Ilio:

Each of these points looks valid, but there’s a much simpler refutation: « Any good enough intelligence is smart enough to distribute part of its cognition to external devices. »

Application: either my code includes Wikipedia and whoever might change Wikipedia just before I consult it, or its Kolmogorov complexity does not fully capture my capabilities. In a sense, this shows the impact of placing too much confidence in a debatable picture of our capabilities and limitations as a single agent working from some cockpit.

(Reaction to the first sentence: "Is this going to be an argument that would imply that humans can't improve their own intelligence?")

Yeah, his first wrong statement in the argument is "a more intelligent program p2 necessarily has more complexity than a less intelligent p1".  I would use an example along the lines of "p1 has a hundred data points about the path of a ball thrown over the surface of the Moon, and uses linear interpolation; p2 describes that path using a parabola defined by the initial position and velocity of the projectile and the gravitational pull at the surface of the Moon".  Or "rigid projectiles A and B will collide in a vacuum, and the task is to predict their paths; p1 has data down to the atom about projectile A, and no data at all about projectile B; p2 has the mass, position, and velocity of both projectiles".  Or, for that matter, "p1 has several megabytes of incorrect data which it incorporates into its predictions".
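
A quick numerical check of the first example (all the constants below except lunar gravity are made up for illustration): the parabola program, described by three numbers and one formula, beats linear extrapolation from a hundred stored data points, even though the table-based program takes far more bits to write down.

```python
g_moon = 1.62          # lunar surface gravity, m/s^2 (real value; the rest is invented)
v0, y0 = 20.0, 0.0     # assumed initial vertical velocity (m/s) and height (m)

def true_height(t):
    return y0 + v0 * t - 0.5 * g_moon * t ** 2

# p1: a hundred observed data points, extrapolated linearly from the last two
samples = [(0.1 * i, true_height(0.1 * i)) for i in range(100)]   # t in [0, 9.9]

def p1_predict(t):
    (t1, y1), (t2, y2) = samples[-2], samples[-1]
    return y2 + (y2 - y1) / (t2 - t1) * (t - t2)

# p2: just the physics, i.e. the same short formula that generated the data
def p2_predict(t):
    return y0 + v0 * t - 0.5 * g_moon * t ** 2

t_future = 20.0
print("true height at t=20s:", true_height(t_future))           # ~76 m
print("p1 (interpolation)  :", round(p1_predict(t_future), 1))  # drifts far off (~159 m)
print("p2 (physics)        :", p2_predict(t_future))            # identical to the truth
```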

It seems he may have confused himself into assuming that p1 is the most intelligent possible program of Kolmogorov complexity k1.  (He later says "... then we have a contradiction since k1 was supposed to be the minimal expression of intelligence at that level".  Wrong; k1 was supposed to be the minimal expression of that particular intelligence p1, not the minimal expression of some set of possible intelligences.)  Then it would follow that any more intelligent (i.e. better-predicting, by his definition) program must be more complex.

Viliam:

Seems to me that the usage of Kolmogorov complexity in this context is a red herring. Complexity of what: the program alone, or the program and the data it gets? The former is irrelevant, because the entire idea is that an intelligence at human level or higher can observe the environment and learn from it. The latter, assuming that we can make an unlimited number of observations and experiments, is potentially unlimited.

Mathematically speaking, a universal program (an interpreter that can simulate an arbitrary program described in its data) has a constant Kolmogorov complexity, and yet can simulate a program with arbitrarily high Kolmogorov complexity. (The extra complexity is in the data describing the simulated program.)
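
A minimal sketch of that point (the toy instruction set is invented for illustration): the interpreter below is one fixed, short program, yet the "program" handed to it as data can encode behaviour of arbitrarily high complexity.

```python
def universal(program, data):
    """Interpret a list of (op, arg) pairs acting on a single accumulator."""
    acc, pc = data, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        elif op == "jnz":          # jump to instruction `arg` if acc is nonzero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc

# All the complexity lives in this data, not in `universal` itself.
program = [("add", 3), ("mul", 7), ("add", -1)]
print(universal(program, 5))   # (5 + 3) * 7 - 1 = 55
```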

If we taboo "Kolmogorov complexity", it seems to me that the argument reduces to: "a machine cannot self-improve, because it can only build the machines it could simulate, in which case what's the point of actually building them?". Which, in some sense yes (assuming unlimited computing power and time), but the machine that is actually built can hypothetically run much faster than the simulated one.

As has been observed by other commenters, the argument fails to take into account runtime limitations - in the real world programs can self-improve by finding (provably) faster programs that (provably) perform the same inferences that they do, which most people would consider self-improvement. However, the argument may be onto something: it is indeed true that a program p cannot output a program q with K(q) > K(p) by more than a constant (there is a short program which simulates any input program and then runs its output). Here K(p) is the length of the shortest program with the same behavior as p - in this case we seem to require p to both output another program q and learn to predict a sequence. It is also true that a high level of Kolmogorov complexity is required to eventually predict all sequences up to a high level of complexity: https://arxiv.org/pdf/cs/0606070.pdf.
The real world implications of this argument are probably lessened by the fact that predictors are embedded, and can improve their own hardware or even negotiate with other agents for provably superior software. 
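
In symbols, the "by more than a constant" claim above is the standard bound (a sketch; the constant is just the length of a fixed "run the given program and execute its output" wrapper):

```latex
% If p halts and outputs a program q, then wrapping p in a fixed
% "simulate the input and run its output" routine describes q, so
K(q) \le K(p) + c \quad \text{for a constant } c \text{ independent of } p \text{ and } q.
```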

The mistake here is the assumption that a program that models the world better necessarily has a higher Kolmogorov complexity.

Perfect. A Turing machine doing Levin Search or running all possible Turing machines is the first example that came to my mind when I read Anton's argument against RSI-without-external-optimization-bits.

Another good example is the Gödel machine.

I think he’s saying “suppose p1 is the shortest program that gets at most loss L. If p2 gets a loss lower than L, then we must require a longer string than p1 to express p2, and p1 therefore cannot express p2”.

This seems true, but I don’t understand its relevance to recursive self improvement.

The mistake here is the assumption that a program that models the world better necessarily has a higher Kolmogorov complexity.

I think Anton assumes that we have the simplest program that predicts the world to a given standard, in which case this is not a mistake. He doesn't explicitly say so, though, so I think we should wait for clarification.

But it's a strange assumption; I don't see why the minimum complexity predictor couldn't carry out what we would interpret as RSI in the process of arriving at its prediction.

The thing about the Pareto frontier of Kolmogorov complexity vs prediction score is that most programs aren't on it. In particular, it seems unlikely that p_1, the seed AI written by humans, is going to be on the frontier. Even p_2, the successor AI, might not be on it either. We can't equivocate between all programs that get the same prediction score; differences between them will be observable in the way they make predictions.

I don’t disagree with any of what you say here - I just read Anton as assuming we have a program on that frontier.