For an argument like this, the author needs to immediately show that it doesn't 'prove too much' — i.e., that it doesn't also imply that evolution is impossible, that a child learning with a growing brain is impossible, etc.
I had a kinda different take (copied from twitter)
This tweet is an interesting argument but I think a central flaw is ignoring how long the program has to run to produce the prediction.
For example, "AlphaZero-after-1-game-of-self-play" and "AlphaZero-after-10²⁰-games-of-self-play" have essentially the same Kolmogorov complexity. After all, they have the exact same source code, apart from like 4 characters that specify the number of games to play. But there's a real sense in which the latter is better at Go than the former. Specifically, it's better in the sense of "I don't want to sit around while it does 10²⁰ games of self-play, I want to play Go right now."
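A toy sketch of this point (not actual AlphaZero; `train` and its `skill` counter are made up for illustration): two program texts that differ by only a few characters, and hence have essentially the same Kolmogorov complexity, even though one must run astronomically longer before it is any good.

```python
def train(num_games):
    """Hypothetical self-play loop; 'skill' stands in for playing strength."""
    skill = 0
    for _ in range(num_games):
        skill += 1  # each game of self-play improves the model a little
    return skill

weak_source = "train(1)"         # "AlphaZero after 1 game"
strong_source = "train(10**20)"  # "AlphaZero after 10^20 games"

# The two source strings differ by only a handful of characters, so their
# Kolmogorov complexities differ by at most a small constant...
print(abs(len(strong_source) - len(weak_source)))

# ...but actually *running* the second to completion is hopelessly expensive,
# which is exactly what K-complexity ignores.
```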
Another way to think about this argument: Suppose AI_1 builds AI_2, and then AI_2 does X. Well, it's also true that "AI_1 did X": specifically, "AI_1 did X" by building AI_2.
In a certain sense, this is true! But thatâs a pretty weird way to think about things! AI_2 does in fact exist here!
K-complexity asks us to forget about AI_2, by focusing the discussion on what happens given infinite time, as opposed to how it happens and how long it takes.
Then after sleeping on it I tweeted again:
I think the core true point in atroyn's argument is: there is a-priori-unpredictable complexity in the world that can't be deduced from an armchair, but rather has to be observed, and making a "more intelligent successor" does not substitute for that.
If you flip a coin and don't tell me, then I don't know whether it's heads or tails. And I also can't make a "more intelligent successor" that knows whether it's heads or tails.
This is entirely true! But I claim people talking about recursive self-improvement are not making that mistake.
For example:
- There's an "overhang" of possible logical inferences that an AI could make on the basis of its existing knowledge, but doesn't (e.g. if I tell an AI the axioms of math, it doesn't instantaneously prove every possible theorem),
- There's an "overhang" of possible input data that an AI could download and scrutinize, but doesn't (e.g. as of this writing, I believe no AI has watched all 100,000 years of YouTube),
- There's an "overhang" of possible plans that an AI could execute but doesn't (e.g. an early AGI is unlikely to be simultaneously doing every remote job on the planet, while also starting a zillion new ambitious projects in parallel).
So an AI could self-improve in a way that allows it to go farther and faster on those metrics.
An obvious example is tweaking the assembly code to make the same AI run faster.
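A toy analogue of "same AI, faster code": two programs with identical input-output behavior but very different runtimes. Rewriting the slow one into the fast one is a real improvement, yet one that Kolmogorov complexity is blind to.

```python
from functools import lru_cache

def fib_slow(n):
    # naive recursion: exponential time
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # same input-output function, memoized: linear time
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# Behaviorally identical, so describing one describes the other (up to a
# constant), but only one is usable for large n.
assert all(fib_slow(n) == fib_fast(n) for n in range(20))
```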
I also want to put self-replication into this category: going from "one instance of an AI" to "a million instances of the same AI running in parallel and collaborating" (e.g. by buying or stealing additional compute). If you think about it, I claim that should totally count as "self-improvement", because after all one AI system is creating a more powerful AI "system". The latter "system" is composed of many instances of the original AI, but so what? It should still count, IMO.
I think the OP here is also valid (and complementary).
Each of these points looks valid, but there's a much simpler refutation: "Any sufficiently good intelligence is smart enough to distribute part of its cognition to external devices."
Application: either my code includes Wikipedia (and whoever might change Wikipedia just before I consult it), or its Kolmogorov complexity does not fully capture my capabilities. In a sense, this shows the cost of placing too much confidence in a debatable picture of our capabilities and limitations as a single agent working from some cockpit.
(Reaction to the first sentence: "Is this going to be an argument that would imply that humans can't improve their own intelligence?")
Yeah, his first wrong statement in the argument is "a more intelligent program p2 necessarily has more complexity than a less intelligent p1". I would use an example along the lines of "p1 has a hundred data points about the path of a ball thrown over the surface of the Moon, and uses linear interpolation; p2 describes that path using a parabola defined by the initial position and velocity of the projectile and the gravitational pull at the surface of the Moon". Or "rigid projectiles A and B will collide in a vacuum, and the task is to predict their paths; p1 has data down to the atom about projectile A, and no data at all about projectile B; p2 has the mass, position, and velocity of both projectiles". Or, for that matter, "p1 has several megabytes of incorrect data which it incorporates into its predictions".
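A rough sketch of the parabola example above (the numbers are made up): p1 is mostly a big lookup table of observed data points, while p2 stores only the initial position, velocity, and lunar surface gravity. p2's description is far shorter, yet it generalizes to unseen times, so "more intelligent" here means *lower* Kolmogorov complexity.

```python
G_MOON = 1.62  # m/s^2, gravitational acceleration at the Moon's surface

def p2_height(h0, v0, t):
    # p2: the parabola from first principles -- just three parameters
    return h0 + v0 * t - 0.5 * G_MOON * t * t

# p1's "program" is mostly a table of (time, height) samples plus
# interpolation code; here we fake the samples using the true law:
table = [(t / 10, p2_height(0.0, 5.0, t / 10)) for t in range(100)]

# Description lengths: p2 needs ~3 numbers; p1 needs 100 pairs.
print(len(table), "data points vs 3 parameters")
```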
It seems he may have confused himself into assuming that p1 is the most intelligent possible program of Kolmogorov complexity k1. (He later says "... then we have a contradiction since k1 was supposed to be the minimal expression of intelligence at that level". Wrong; k1 was supposed to be the minimal expression of that particular intelligence p1, not the minimal expression of some set of possible intelligences.) Then it would follow that any more intelligent (i.e. better-predicting, by his definition) program must be more complex.
Seems to me that the usage of Kolmogorov complexity in this context is a red herring. Complexity of what: the program alone, or the program and the data it gets? The former is irrelevant, because the entire idea is that an intelligence at human level or higher can observe the environment and learn from it. The latter, assuming that we can make an unlimited number of observations and experiments, is potentially unlimited.
Mathematically speaking, a universal program (an interpreter that can simulate an arbitrary program described in its data) has a constant Kolmogorov complexity, and yet can simulate a program with arbitrarily high Kolmogorov complexity. (The extra complexity is in the data describing the simulated program.)
If we taboo "Kolmogorov complexity", it seems to me that the argument reduces to: "a machine cannot self-improve, because it can only build the machines it could simulate, in which case what's the point of actually building them?". Which, in some sense yes (assuming unlimited computing power and time), but the machine that is actually built can hypothetically run much faster than the simulated one.
As has been observed by other commenters, the argument fails to take into account runtime limitations - in the real world programs can self improve by finding (provably) faster programs that (provably) perform the same inferences that they do, which most people would consider self improvement. However the argument may be onto something: it is indeed true that a program p cannot output a program q with K(q) > K(p) by more than a constant (there is a short program which simulates any input program and then runs its output). Here K(p) is the length of the shortest program with the same behavior as p - in this case we seem to require p to both output another program q and learn to predict a sequence. It is also true that a high level of Kolmogorov complexity is required to eventually predict all sequences up to a high level of complexity: https://arxiv.org/pdf/cs/0606070.pdf.
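The "constant" in that invariance claim can be exhibited directly. Below is a sketch (again with `exec` standing in for a universal machine): a fixed-length wrapper runs any program p, captures the program q that p outputs, and then runs q. Since "wrapper + p" is itself a description of q's behavior, K(q) ≤ K(p) + c where c is the wrapper's length.

```python
def run_and_capture(p_source):
    # Simulate p; by convention here, p stores its output (q's source) in
    # a variable named `output`.
    ns = {}
    exec(p_source, ns)
    return ns["output"]

def wrapper(p_source, x):
    # The fixed-size wrapper: run p to obtain q, then run q on input x.
    q_source = run_and_capture(p_source)
    ns = {}
    exec(q_source, ns)
    return ns["main"](x)

# p is a program whose output is (the source code of) another program q:
p = 'output = "def main(x):\\n    return x * 2"'
print(wrapper(p, 21))  # 42
```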
The real world implications of this argument are probably lessened by the fact that predictors are embedded, and can improve their own hardware or even negotiate with other agents for provably superior software.
The mistake here is the assumption that a program that models the world better necessarily has a higher Kolmogorov complexity.
Perfect. A Turing machine doing Levin Search or running all possible Turing machines is the first example that came to my mind when I read Anton's argument against RSI-without-external-optimization-bits.
I think he's saying "suppose p1 is the shortest program that gets at most loss L. If p2 gets a loss lower than L, then we must require a longer string than p1 to express p2, and p1 therefore cannot express p2".
This seems true, but I don't understand its relevance to recursive self-improvement.
The mistake here is the assumption that a program that models the world better necessarily has a higher Kolmogorov complexity.
I think Anton assumes that we have the simplest program that predicts the world to a given standard, in which case this is not a mistake. He doesn't explicitly say so, though, so I think we should wait for clarification.
But it's a strange assumption; I don't see why the minimum complexity predictor couldn't carry out what we would interpret as RSI in the process of arriving at its prediction.
The thing about the Pareto frontier of Kolmogorov complexity vs prediction score is that most programs aren't on it. In particular, it seems unlikely that p_1, the seed AI written by humans, is going to be on the frontier. Even p_2, the successor AI, might not be on it either. We can't equivocate between all programs that get the same prediction score; differences between them will be observable in the way they make predictions.
I don't disagree with any of what you say here - I just read Anton as assuming we have a program on that frontier.
Twitter user @atroyn claims that recursive self-improvement is impossible because of Kolmogorov complexity. Quoting most of[1] the argument here:
The mistake here is the assumption that a program that models the world better necessarily has a higher Kolmogorov complexity. Originally, Kolmogorov complexity measured the complexity of bit strings. But we're talking about predictors here, things that observe the world and spit out probability distributions over observed outcomes. In the context of predictors, Kolmogorov complexity measures the complexity of a function from observations to predictions.
In the case of ideal Bayesian reasoning, we can nail down such a function just by specifying a prior, eg. the Solomonoff prior. (Plus, an approximation scheme to keep things computable, I guess.) This doesn't take a very large program to implement. But a non-ideal reasoner will screw up in many cases, and there's information contained in the exact way it screws up for each set of observations. Such reasoners can have an almost arbitrarily high Kolmogorov complexity, and they're all worse than the ideal Bayesian program.
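A toy version of that ideal-Bayesian predictor, over a tiny hand-picked hypothesis class (the hypotheses and their "description lengths" are invented for illustration): weight each hypothesis by 2^(-length), a crude Solomonoff-style prior, discard the ones contradicted by the observations, and predict the next bit by posterior-weighted vote. The predictor itself is a short program; everything interesting comes from conditioning on data.

```python
# name: (bit generator at position i, description length in bits -- made up)
hypotheses = {
    "zeros": (lambda i: 0, 5),
    "ones":  (lambda i: 1, 5),
    "alt":   (lambda i: i % 2, 8),
}

def predict_next(observed_bits):
    """Posterior-weighted probability that the next bit is 1."""
    weights = {}
    for name, (gen, length) in hypotheses.items():
        # prior 2^-length, zeroed out if the hypothesis contradicts the data
        consistent = all(gen(i) == b for i, b in enumerate(observed_bits))
        weights[name] = 2.0 ** (-length) if consistent else 0.0
    total = sum(weights.values())
    n = len(observed_bits)
    return sum(w * hypotheses[name][0](n) for name, w in weights.items()) / total

print(predict_next([0, 1, 0, 1]))  # only "alt" survives, which predicts 0 next
```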
In other words, the successor program has Kolmogorov complexity less than or equal to that of its predecessor, but so what? That doesn't imply that it's worse.
(Also, Kolmogorov complexity doesn't care about how much time a program takes to run at all, but in the real world it's an important consideration, and a target for self-improvement.)
That concludes this post: without the assumption that higher Kolmogorov complexity is better, the whole argument falls apart.
The rest of the thread briefly touches on the issue of how an AI could know that its successor would necessarily be an improvement. The discussion there is kind of doomed since it's done with the goal of showing that the successor has lower or equal Kolmogorov complexity than the original, which is uninteresting, though we can see right away that it must be true, assuming that the original writes the successor before observing the world at all. But there's an interesting version of the question, which asks about the set of axioms used by the systems to reason about the world, rather than the Kolmogorov complexity. See this paper by Yudkowsky and Herreshoff for details. ↩︎