Surprisingly, I haven’t seen anything written about the ultimate limits of intelligence. This is a short post where I ponder that question.

Superintelligent agents are often portrayed as extremely powerful oracles, capable of predicting the long-term outcomes of actions and thus steering the future toward their goals. One problem with this picture is that the future also contains the agent itself, its future actions, and their consequences. Since no system can contain a perfect model of itself, there should be some maximum achievable precision and quality of future prediction, unless that future depends neither on the agent’s own reactions to events nor on the actions of any other agents of comparable capacity.
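To make the self-reference problem concrete, here is a minimal toy sketch of my own (not from the post): a miniature diagonal argument showing that no fixed, deterministic predictor can be right about a future that reads the prediction and then defies it. The names `predictor` and `make_contrarian_world` are hypothetical illustrations.

```python
# Toy sketch (illustrative, not from the post): a miniature diagonal argument.
# No fixed, deterministic predictor can be right about a "world" whose next
# bit is defined as the opposite of whatever the predictor forecasts.

def predictor():
    """Any fixed, deterministic forecast of the next bit (0 here; try 1)."""
    return 0

def make_contrarian_world(forecast_fn):
    """Build a world step that observes the forecast and then defies it."""
    def world_step():
        forecast = forecast_fn()    # the world reads the prediction...
        return 1 - forecast         # ...and does the opposite
    return world_step

world = make_contrarian_world(predictor)
assert world() != predictor()       # the forecast is wrong either way
print("prediction:", predictor(), "actual:", world())
```

Whatever bit the predictor outputs, the contrarian world returns the other one, so perfect prediction fails as soon as the predicted system can condition on the prediction itself.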

This should put some kind of hard limit on the capabilities of any agent, although it might not be possible to state it mathematically the way we can state the Bekenstein bound (a limit on information capacity) or Landauer's principle (a limit on the energy cost of erasing information). I also have a gut feeling that this self-referential problem constrains maximum intelligence far more tightly than Landauer's principle or any other physics-based bound.
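For a sense of scale, here is a rough back-of-the-envelope sketch of the two physics bounds named above, using my own illustrative parameters (a roughly brain-sized 0.1 m, 1.5 kg system at room temperature), not figures from the post:

```python
import math

# Physical constants (SI units)
k_B  = 1.380649e-23     # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c    = 2.99792458e8     # speed of light in vacuum, m/s

# Landauer's principle: minimum energy dissipated to erase one bit at temperature T.
T = 300.0                                   # room temperature, K
landauer_j_per_bit = k_B * T * math.log(2)  # ~2.9e-21 J per bit
print(f"Landauer limit at {T:.0f} K: {landauer_j_per_bit:.2e} J per bit erased")

# Bekenstein bound: maximum bits storable in a sphere of radius R with total energy E.
R = 0.1                  # radius in metres (roughly brain-sized, purely illustrative)
m = 1.5                  # mass in kg (roughly brain-like, purely illustrative)
E = m * c**2             # total mass-energy, J
bekenstein_bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Bekenstein bound for R={R} m, m={m} kg: {bekenstein_bits:.2e} bits")
```

Both numbers are astronomically permissive compared with anything biology or current hardware achieves, which is part of why a self-referential limit, if it exists, could bind much earlier than the physics-based ones.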

I don’t think this kind of intelligence limit has much to do with AI safety: the result of any runaway self-improvement loop is still going to run circles around the combined brainpower of humanity. Unless there is a way to significantly lower this bound by making future prediction harder? For example, a world with many agents of comparable stature might be much more difficult to predict, thus limiting the potential destructive capability of any single agent.

If anyone knows of any literature that covers this topic, please let me know in the comments! Also, if any theologians are present, I’m wondering how this kind of limit would apply to theoretical models of God. Another immovable object vs. unstoppable force paradox?
 

Comments (5)
O O (11 months ago):

The fact that this was completely ignored is a little disappointing. This is a very important question that would help put upper bounds on value drift, but it seems that answering it limits the imagination when it comes to ASI. Has there ever been an answer to it?

I have a feeling that larger brains have a harder coordination problem between their subcomponents, especially once you hit information-transfer limits. This would put some hard limits on how far you can scale intelligence, but I may be wrong.

A Fermi estimate of the upper bounds of intelligence may eliminate some problem classes that alignment arguments tend to include.

"The fact that this was completely ignored is a little disappointing."

You seem to be replying to your previous shortform post, but these replies do not naturally show up below each other. If you want to thread them, it is probably better to reply to yourself.

O O (11 months ago):

That is very weird and probably a bug. This isn’t supposed to be on my shortform 😅

This appears to be someone else's shortform, which was edited so that the shortform container doesn't look like a shortform container anymore.

No. This is how the ShortForm is supposed to work. The comments on the ShortForm "post" are like tweets.