The Kolmogorov complexity of a superintelligence

by Thomas · 26th Jun 2011 · 1 min read · 30 comments

Tags: Kolmogorov Complexity · Personal Blog

How complex must the simplest self-improving "seed" of a superintelligence be?

Give your (wild) estimate, or a pure guess, in bits.

Do you reckon 1,000 bits is enough? Is at least 1,000,000 required? More than 10^20 bits?

I wonder what your approximate numbers are.

Thank you!
30 comments, sorted by top scoring

cousin_it · 14y · 7 points

The K-complexity of a superintelligence can't be higher than the K-complexity of a program that searches for a superintelligence, enumerating all programs in order and asking them hard questions to check their abilities. I don't see any reason why the latter should be over a megabyte, probably much less.
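
A minimal sketch of that search program, in Python, assuming some fixed computable test battery; the verifier `passes_hard_questions` is a hypothetical placeholder, and a real version would have to time-limit candidates so that non-halting programs are rejected:

```python
# Sketch of the bound: K(superintelligence) <= |this program| + |test battery|.
from itertools import count, product

def passes_hard_questions(program: bytes) -> bool:
    """Hypothetical verifier: run `program` (under a time limit) on a
    fixed battery of hard questions with checkable answers, and accept
    only if every answer checks out."""
    raise NotImplementedError  # placeholder for the real test battery

def all_programs():
    """Yield every finite byte string in length-lexicographic order."""
    for n in count(0):
        for tup in product(range(256), repeat=n):
            yield bytes(tup)

def find_superintelligence() -> bytes:
    """Return the first enumerated program that passes every test."""
    for candidate in all_programs():
        if passes_hard_questions(candidate):
            return candidate
```

The length of this wrapper, plus the test battery, is what bounds the K-complexity of whatever the search finds.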

MrMind · 14y · 2 points

It must be, because this verification procedure also produces infinitely many false positives: for example, all the hash tables that happen to contain the correct answers by chance. That is, the procedure doesn't produce more information than simply asking "Are you a superintelligence?".

Zetetic · 14y · 1 point

> I don't see any reason why the latter should be over a megabyte, probably much less.

By contrast, it seems to me that the verification procedure for a friendly superintelligence could be quite long. Do you agree? It would definitely be longer than the one for an arbitrary superintelligence, but I haven't a clue by how much; only that we could be adding some pretty heavy constraints.

cousin_it · 14y · 0 points

Yes, I agree. An upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains, but you could probably compress that a lot.

wedrifid · 14y · 5 points

> Yes, I agree. An upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains, but you could probably compress that a lot.

False. An upper bound on the complexity of a Friendly superintelligence is [(total information content of all brains) + (minimum complexity of what it takes to specify a superintelligence whose defined goal is to derive its objective from a set of agents, according to some mechanism that represents what it would mean to be 'friendly' to them)].

I.e., just having all the information content of all human brains is not enough. You can't avoid defining Friendliness in the search program.
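
A rough formalization of wedrifid's bound (my notation, not from the thread): if B is the pooled information content of human brains and F is the machinery that derives a Friendly objective from the agents described in B, then subadditivity of Kolmogorov complexity gives, up to an additive constant,

\[
K(\mathrm{FAI}) \;\le\; K(B) + K(F \mid B) + O(1),
\]

and wedrifid's point is that the K(F | B) term cannot simply be dropped.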

Wei Dai · 14y · 2 points

I agree with wedrifid here. We don't seem to have a valid argument showing that "an upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains". I would like to point out that if the K-complexity of a friendly superintelligence is greater than that, then there is no way for us to build one except by luck (i.e., most Everett branches are doomed to fail to build a friendly superintelligence) or by somehow exploiting uncomputable physics.

cousin_it · 14y · 0 points

Technically you can cheat by using the information in human brains to create upload-based superintelligences along the lines of Stuart's proposal, make them do the research for you, etc., so it seems likely to me that the upper bound should hold... but I appreciate your point and agree that my comment was wrong as stated.

wedrifid · 14y · 3 points

When talking about upper bounds, we cannot afford to cheat by just saying humans can probably figure it out. That isn't an upper bound; it is an optimistic prediction about human potential. Moreover, we still need a definition of Friendliness built in, so we can tell whether the thing the human researchers come up with is Friendliness or some other thing with that name. (Even an extremely 'meta' reference to a definition is fine, but it still requires more bits to point to which part of the humans is able to define Friendliness.)

Upper bounds are hard. But yes, I know your understanding of the area is solid and your ancestor comment serves as the definitive answer to the question of the post. I disagree only with the statement as made.

Vladimir_Nesov · 14y · 0 points

> False.

You are most likely not disagreeing with the intended meaning, so using words like "false" to motivate the clarification you were making is wrong.

wedrifid · 14y · 1 point

> You are most likely not disagreeing with the intended meaning, so using words like "false" to motivate the clarification you were making is wrong.

No, Vladimir. My disagreement was with the statement, and the statement was indeed false. That doesn't mean I disagree with cousin_it's overall philosophy. It just means I am calling a single false claim false.

You are wrong.

Paul Crowley · 14y · 2 points

I'm confused: a Friendly AI doesn't start with information about the insides of brains; it only starts with enough information to correctly identify human beings, so that it knows whose preferences to infer its values from.

Zetetic · 14y · 0 points

> but you could probably compress that a lot.

This definitely fits my intuition. It seems like CEV-style friendliness, for example, could at least be reduced to something on the order of the complexity of a single "averaged" human brain. This is all very vague, of course; I'm mostly just trying to feel out some intuitions on this.

wedrifid · 14y · -4 points

> It seems like CEV-style friendliness, for example, could at least be reduced to something on the order of the complexity of a single "averaged" human brain.

That doesn't sound like it passes the adequately-friendly-to-wedrifid test. I'd thermite such an AI rather than press the on button.

Zetetic · 14y · -2 points

That seems difficult to determine without unpacking what I mean by "averaged"; how did you come to that conclusion? (I'm wondering if you have a clearer concept of what it would mean to "average" over brains; my intuition is nothing more than a vague feeling that I should study certain topics and ask certain questions.) I don't even know enough to accurately convey the intuition behind my use of "averaged"; I was just hoping it might elicit a response that would give me some useful information and, in turn, help me understand CEV better.

wedrifid · 14y · 0 points

Unfortunately, nobody has a good answer to your question; at least, none that would satisfy me. The aggregation problem is the hardest part and hasn't been answered adequately yet. Without such an answer, CEV remains basically undefined. (This is something that troubles me.)

Zetetic · 14y · 0 points

I see. Thanks; that is enlightening. I had gotten the impression that this was the case, but wasn't sure.

Miller · 14y · 1 point

Interesting. How does the program determine hard questions (and their answers) without qualifying as generating them itself? That is, the part about enumerating other programs seems superfluous.

[Edit, added seconds later] OK, I see: perhaps it could ask something like "predict what's going to happen 10 seconds from now", and then wait to see whether the prediction is correct after the universe runs for that long. In that case, the parent program isn't computing the answer itself.

cousin_it · 14y · 1 point

You don't need to be able to generate the answer to your hard problem yourself, only to check that the superintelligence's offered answer is correct. These two abilities are equivalent if computing resources are unlimited, but you could run the superintelligence for a limited time... This line of thought seems to lead into the jungle of complexity theory and you should probably take my comments with a grain of salt :-)
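
A toy illustration (my example, not from the thread) of checking versus generating: verifying a claimed factorization takes one multiplication, while finding the factors may be infeasible for the verifier. Hard questions of this shape let the search program test candidates it could never answer itself.

```python
def verify_factoring_answer(n: int, p: int, q: int) -> bool:
    """Accept the answer iff p and q are nontrivial factors of n."""
    return 1 < p < n and 1 < q < n and p * q == n

# The verifier never needs to factor n itself:
assert verify_factoring_answer(15, 3, 5)       # correct answer accepted
assert not verify_factoring_answer(15, 1, 15)  # trivial "answer" rejected
```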

gwern · 14y · 5 points

Legg's 2006 "Is There an Elegant Universal Theory of Prediction?" may be relevant.

(The answer, BTW, is 'no'; it seems to be in the usual Gödelian-limit vein of thought: "In this paper, it is shown that although highly powerful algorithms exist, they are necessarily highly complex.")

timtyler · 14y · 0 points

The paper doesn't seem very quantitative. It is not obvious from it whether a human needs a thousand bits, a million bits, a trillion bits, or whatever.

gwern · 14y · 0 points

It would be quite impressive if it were able to...

My point was that Legg has shown, as I understand it, that any prediction algorithm powerful enough to predict most/all of the universe (as one would expect a fearsome AGI to be able to do) will be at least as complex as the universe it's predicting.

gjm · 14y · 4 points

It seems likely that the answer depends on how rapidly and how reliably you want your "seed" to be able to turn into an actual working superintelligence. For instance:

- Something along the lines of AIXI might have a very short program indeed, but not do anything useful unless you gave it more computing power and time than the entire universe can offer. (It is, AIUI, an open question whether that's actually finite. Let's suppose it is.)

- A lots-of-humans simulator might well produce a superintelligence, but its inhabitants might instead wipe themselves out in a (simulated) nuclear war, or just turn out not to be smart enough to make a working AI, or something.

timtyler · 14y · 4 points

Some entertain the hypothesis that the K-complexity of the entire universe may be under 1,000 bits, though nobody knows whether that is true.

A few bacteria will have created the explosion that leads to any future superintelligence. Much of the resulting complexity came not from them but from their environment, though. Also, that took a while. It seems as though time limits would be needed if you are looking for a different kind of answer.

wedrifid · 14y · 7 points

> Some entertain the hypothesis that the K-complexity of the entire universe may be under 1,000 bits, though nobody knows whether that is true.

More can be said about a single apple than about all the apples in the world.

Handing me an entire universe and saying "here you go, there is a superintelligence in there somewhere, find it yourself" does not qualify as giving me a superintelligence (in the information sense). In fact, I need the same amount of information to find a superintelligence in the "1,000-bit universe" as the minimum K-complexity of the superintelligence itself.

The K-complexity of the entire universe is basically irrelevant.

timtyler · 14y · 2 points

> In fact, I need the same amount of information to find a superintelligence in the "1,000-bit universe" as the minimum K-complexity of the superintelligence itself.

Well, perhaps minus 1,000 bits. Those 1,000 bits might be screening off a lot of universes with no superintelligences in them, so they could matter a great deal. If so, that's a smaller space to search, by a factor of 2^1,000.
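
In symbols (my notation, not from the thread): if U is the universe and p is a pointer that locates the superintelligence SI inside it, then

\[
K(\mathrm{SI}) \;\le\; K(U) + K(p \mid U) + O(1),
\qquad\text{so}\qquad
K(p \mid U) \;\ge\; K(\mathrm{SI}) - K(U) - O(1);
\]

with K(U) around 1,000 bits, locating the superintelligence costs at most about 1,000 bits less than describing it outright.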

paulfchristiano · 14y · 0 points

A description of a superintelligence might be quite easy to find in our universe; for example, if the entire universe were tiled with AIs for most of its lifetime, it might not require much information to point to one of them once you have described the universe. If the Kolmogorov complexity of the universe were really only 1,000 bits, this could easily beat cousin_it's suggestion.

wedrifid · 14y · -2 points

This is an upper bound, not a lower bound. That roughly means that whenever you don't have proof to the contrary, you assume the worst.

paulfchristiano · 14y · 0 points

What? cousin_it gave an upper bound. timtyler and I are pointing out a plausible way the value could be significantly below that upper bound.

wedrifid · 14y · -1 point

It is true that the grandparent does not fit this context. Mistake!

Manfred · 14y · 0 points

If what you really want is a program that can run on a real supercomputer and reach superhuman intelligence in less than a month, that's asking for more than just low complexity. But I'd guess it could still fit on a floppy disk.
