Suppose that your current estimate of the probability of an AI takeoff coming in the next 10 years is some probability x. As technology is constantly becoming more sophisticated, presumably your probability estimate 10 years from now will be some y > x. And 10 years after that, it will be z > y. My question is, does there come a point in the future where, assuming that an AI takeoff has not yet happened despite much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect the inflection point in your estimate?

[-]gwern210

This sounds like a probability search problem in which you don't know for sure there exists anything to find - the hope function.

I worked through this in #lesswrong with nialo. It's interesting to work with various versions of this. For example, suppose you had a uniform distribution for AI's creation over 2000-2100, and you believe its creation to be 90% possible. It is of course now 2011, so how much do you believe it is possible now, given its failure to appear between 2000 and now? We could write that in Haskell as let fai x = (100-x) / ((100 / 0.9) - x) in fai 11, which evaluates to ~0.889 - so one's faith hasn't been much damaged.

One of the interesting things is how slowly one's credence in AI being possible declines. If you run the function fai 50*, it's 81%. fai 90** = 47%! But then by fai 98 it has suddenly shrunk to 15% and so on for fai 99 = 8%, and fai 100 is of course 0% (since now one has disproven the possibility).

* no AI by 2050

** no AI by 2090, etc.
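To make the arithmetic easy to replay, here is the same uniform-case hope function as a tiny standalone Haskell program (a sketch; the 2000-2100 range and the 0.9 prior are the assumptions above):

    -- Hope function for a uniform prior over 2000-2100, with prior probability
    -- 0.9 that AI is possible at all. `x` is years elapsed since 2000 with no AI.
    faiUniform :: Double -> Double
    faiUniform x = (100 - x) / ((100 / 0.9) - x)

    main :: IO ()
    main = mapM_ (print . faiUniform) [11, 50, 90, 98, 99, 100]
    -- ~0.889, ~0.818, ~0.474, ~0.153, ~0.083, 0.0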

EDIT: Part of the interestingness is that one of the common criticisms of AI is 'look at them, they were wrong about AI being possible in 19xx, how sad and pathetic that they still think it's possible!' The hope function shows that unless one is highly confident about AI showing up in the early part of a time range, the failure of AI to show up ought to damage one's belief only a little bit.


That blog post is also interesting from a mind projection fallacy viewpoint:

"What I found most interesting was, the study provides evidence that people seem to reason as though probabilities were physical properties of matter. In the example with the desk with the eight drawers and an 80% chance a letter is in the desk, many people reasoned as though “80% chance-of-letter” was a fundamental property of the furniture, up there with properties like weight, mass, and density.

Many reasoned that the odds the desk has the letter, stay 80% throughout the fruitless search. Thus, they reasoned, it would still be 80%, even if they searched seven drawers and found no letter. And these were people with some education about probability! One problem is people were tending to overcompensate to avoid falling into the Gambler’s Fallacy. They were educated, well-learned people, and they knew that the probability of a fair coin falling heads remains 50%, no matter how many times in a row heads have already been rolled. They seemed to generalize this to the letter search. There’s an important difference, though: the coin flips are independent of each other. The drawer searches are not.

In a followup study, when the modified questions were posed, with two extra “locked” drawers and a 100% initial probability of a letter, miraculously the respondents’ answers showed dramatic improvement. Even though, formally, the exercises were isomorphic."

For a non-uniform distribution we can use the similar formula (1.0 - p(before 2011)) / (1.0/0.9 - p(before 2011)), which is analogous to adding an extra blob of (uncounted) probability density (such that if the AI is "actually built" anywhere within the distribution, including the uncounted bit, the prior probability of 0.9 is the ratio (counted) / (counted + uncounted)), and then cutting off the part where we know the AI has not been built.

For a normal(mu = 2050, sigma=10) distribution, in Haskell this is let ai year = (let p = cumulative (normalDistr 2050 (10^2)) year in (1.0 - p) / (1.0/0.9 - p))¹. Evaluating on a few different years:

  • P(AI|not by 2011) = 0.899996
  • P(AI|not by 2030) = 0.8979
  • P(AI|not by 2050) = 0.8181...
  • P(AI|not by 2070) = 0.16995
  • P(AI|not by 2080) = 0.012
  • P(AI|not by 2099) = 0.0000043
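For reproducibility, the same calculation as a small self-contained program (a sketch only; note that in recent versions of the statistics package, normalDistr takes the mean and the standard deviation, so sigma is passed as 10 rather than 10^2):

    import Statistics.Distribution (cumulative)
    import Statistics.Distribution.Normal (normalDistr)

    -- Hope function for a normal prior over the year of AI's creation,
    -- with prior probability 0.9 that AI is possible at all.
    aiNormal :: Double -> Double -> Double -> Double
    aiNormal mu sigma year =
      let p = cumulative (normalDistr mu sigma) year
      in  (1.0 - p) / (1.0 / 0.9 - p)

    main :: IO ()
    main = mapM_ (print . aiNormal 2050 10) [2011, 2030, 2050, 2070, 2080, 2099]
    -- ~0.899996, ~0.8979, ~0.818, ~0.170, ~0.012, ~4.3e-6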

This drops off far faster than the uniform case, once 2050 is reached. We can also use this survey as an interesting source for a distribution. The median estimate for P=0.5 is 2050, which gives us the same mu, and the median for P=0.1 was 2028, which fits with sigma ~ 17 years². We also have P=0.9 by 2150, suggesting our prior of 0.9 is in the ballpark. Plugging the same years into the new distribution:

  • P(AI|not by 2011) = 0.899
  • P(AI|not by 2030) = 0.888
  • P(AI|not by 2050) = 0.8181...
  • P(AI|not by 2070) = 0.52
  • P(AI|not by 2080) = 0.26
  • P(AI|not by 2099) = 0.017

Even by 2030 our confidence will have changed little.
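Plugging the survey-fitted parameters into the aiNormal sketch above reproduces these numbers (sigma of 17 being the rough fit mentioned earlier):

    -- Same hope function as before, reusing aiNormal with the survey-fitted parameters.
    surveyFit :: Double -> Double
    surveyFit = aiNormal 2050 17

    -- mapM_ (print . surveyFit) [2030, 2070, 2099]  -- ~0.888, ~0.52, ~0.017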

¹Using Statistics.Distribution.Normal from Hackage.

²Technically, the survey seems to have asked about unconditional probabilities, not conditional on that AI is possible, whereas the latter is what we want. We may want then to actually fit a normal distribution so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9, which would be a bit harder (we can't just use 2050 as mu).
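As a sketch of that suggested correction (assuming the two quantile constraints pin down the normal exactly), the fit can be done with the standard normal quantile function:

    import Statistics.Distribution (quantile)
    import Statistics.Distribution.Normal (standard)

    -- Fit mu and sigma so that cdf(2028) = 0.1/0.9 and cdf(2050) = 0.5/0.9,
    -- as the footnote above suggests.
    fitted :: (Double, Double)
    fitted =
      let z1    = quantile standard (0.1 / 0.9)  -- ~ -1.22
          z2    = quantile standard (0.5 / 0.9)  -- ~  0.14
          sigma = (2050 - 2028) / (z2 - z1)      -- ~ 16.2
          mu    = 2050 - z2 * sigma              -- ~ 2047.7
      in  (mu, sigma)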

This drops off far faster than the uniform case, once 2050 is reached.

The intuitive explanation for why the normal distribution drops off faster is that it makes such strong predictions about the region around 2050: once you've reached 2070 with no AI, you've 'wasted' most of your possible drawers, to continue the original blog post's metaphor.

To get a visual analogue of the probability mass, you could map the normal curve onto a uniform distribution, something like 'if we imagine each year at the peak corresponds to 30 years in a uniform version, then it's like we were looking at the period 1500-2100 AD, so 2070 is very late in the game indeed!' To give a crude ASCII diagram, the mapped normal curve would look like this, where every space/column represents one equal chance to make AI:

2000/2001/2002/... 2040  2041       2042         2043             2044              2045                2046                   2047                    2048                          2049                              2050 etc.
[-][anonymous]10

Cool! Would it be easy for you to repeat this replacing the normal distribution with an exponential distribution? I think that's a more natural way to model "waiting for something".

You're right, the probability should drop off in a kind of exponential curve, since an AI only "gets created at year X" if it hasn't been made before X. I did some thinking, and I think I can do one better. We can model the creation of the AI, for the most part, as a succession of "technological breakthroughs", i.e. insights, unpredictable in advance, about algorithms or processors or whatever that allow the project to "proceed to the next step".

Each step can have an exponential distribution for when it will be completed, all (for simplicity) with the same average, set so that the average time for the final distribution will be 50 years (from the year 2000). The final distribution is then P(totaltime = x) = P(sum{time for each step} = x) which is just the repeated convolution of the distribution for each step. The density distribution turns out to be fairly funky:

pdf(x) = a^n x^(n-1) e^(-a x) / (n-1)!,

where n is the number of steps involved and a is the parameter to the exponential distributions, which for our purposes is n/50 so that the mean of the pdf is 50 years. For n=1 this is of course just the exponential distribution. For n=5 we get a distribution something like this. The cumulative distribution function is actually a bit nicer in a way, just:

cdf(x) = γ(n, a x) / (n-1)!

where γ(s, x) is the lower incomplete gamma function, or Γ(s) - Γ(s, x), which is normalized by (n-1)!. Ok, let's plug in some numbers. The relevant Haskell expression is let ai n x = (let a = (n/50); p = incompleteGamma n (a*x) in (1.0 - p)/(1.0/0.9 - p))¹ where x is years since 2000 without AI. Then for the simple exponential case:

  • P(AI|not by 2011, n=1) = 0.88
  • P(AI|not by 2030, n=1) = 0.83
  • P(AI|not by 2050, n=1) = 0.77
  • P(AI|not by 2070, n=1) = 0.69
  • P(AI|not by 2080, n=1) = 0.65
  • P(AI|not by 2099, n=1) = 0.55

We seem to lose confidence at a somewhat constant gradual rate. By 2150 our probability is still 0.30 though, and it's only by 2300 that it drops to ~2%. Perhaps I need to just cut off the distribution by 2100. Anyway, for n=5 we have more conclusive results:

  • P(AI|not by 2011, n=5) = 0.8995
  • P(AI|not by 2030, n=5) = 0.88
  • P(AI|not by 2050, n=5) = 0.80
  • P(AI|not by 2070, n=5) = 0.61
  • P(AI|not by 2080, n=5) = 0.47
  • P(AI|not by 2099, n=5) = 0.22
  • P(AI|not by 2150, n=5) = 0.0077

So we don't change our confidence much until 2050, then quickly lose confidence, as that area contains "most of the drawers", metaphorically. We will have pretty much disproved strong AI by 2150.

¹ Using the Statistics package again, specifically Statistics.Math. Note that incompleteGamma is already normalized so we don't need to divide by (n-1)!.
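For anyone wanting to rerun this, here is a self-contained sketch. (Assumption: in current library versions the normalized lower incomplete gamma function lives in Numeric.SpecFunctions from the math-functions package rather than Statistics.Math; the formula is the same.)

    import Numeric.SpecFunctions (incompleteGamma)  -- normalized lower incomplete gamma

    -- Hope function for an n-step model: n exponential "breakthrough" steps,
    -- each with rate a = n/50, so the mean time to AI is 50 years from 2000.
    -- `x` is years since 2000 without AI; 0.9 is the prior that AI is possible.
    aiErlang :: Double -> Double -> Double
    aiErlang n x =
      let a = n / 50
          p = incompleteGamma n (a * x)   -- the cdf, i.e. γ(n, a·x)/(n−1)!
      in  (1.0 - p) / (1.0 / 0.9 - p)

    main :: IO ()
    main = mapM_ (print . aiErlang 5) [11, 30, 50, 70, 80, 99, 150]
    -- ~0.90, ~0.88, ~0.80, ~0.61, ~0.47, ~0.22, ~0.008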

[-][anonymous]10

This is great. The fact that P(AI) is dropping off faster for large n than for small n is a little counterintuitive, right?

Isn't it just the law of large numbers?

This isn't even related to the law of large numbers, which says that if you flip many coins you expect to get close to half heads and half tails. This is as opposed to flipping 1 coin, where you always get either 100% heads or 100% tails.

I personally expected that P(AI) would drop off roughly linearly as n increased, so this certainly seems counter-intuitive to me.

Incidentally, I've tried to apply the hope function to my recent essay on Folding@home: http://www.gwern.net/Charity%20is%20not%20about%20helping#updating-on-evidence

[-]XiXiDu100

AI can beat humans at chess, autonomously generate functional genomics hypotheses, discover laws of physics on its own, create original, modern music, identify pedestrians and avoid collisions, answer questions posed in natural language, transcribe and translate spoken language, recognize images of handwritten, typewritten or printed text, produce human speech, traverse difficult terrain...

There seems to be a lot of progress in computer science, but it doesn't tell us much about the probability, let alone the timescale, of artificial general intelligence undergoing explosive recursive self-improvement. Do we even know what evidence we are looking for? Would we recognize it?

How can we tell when we know enough to build a seed AI that can sprout, from within a box, superhuman skills that are not hardcoded, skills as diverse as social engineering? How do we even tell such a thing is possible in principle? What evidence could convince someone that such a thing is possible or impossible?

Just imagine you emulated a grown-up human mind and it wanted to become a pick-up artist: how would it do that with just an Internet connection? It would need some sort of avatar, at least, and would then have to wait for the environment to provide a lot of feedback.

So even if we're talking about the emulation of a grown-up mind, it will be really hard to acquire some capabilities. Then how is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI that lacks all of the hard-coded capabilities of a human toddler going to do it?

Can we even attempt to imagine what is wrong about a boxed emulation of a human toddler that makes it unable to become a master of social engineering in a very short time?

Can we imagine what is missing that would enable one of the existing expert systems to quickly evolve vastly superhuman capabilities in its narrow area of expertise?

If we are completely clueless about how a seed AI with the potential of becoming superhumanly intelligent could be possible, how could we possibly update our probability estimates by looking at technological progress? There is a lot of technological progress, even in the field of AI, but it doesn't seem to tell us much about AI going FOOM.

[-]knb10

Good points. That list of AI abilities alone is worth the upvote.

There is another question in this context. What would increase (or decrease) the probability of the S event? Clearly the absence of another Earth despite Kepler's observations increases it. What about Watson's success? The fact that we have no 10 GHz processors?

Which are the relevant facts or events that have an influence? Can we say in a year or two: yes, since WolframAlpha manages its own code, a self-optimizer is highly likely in the next 10 years?

I think one cannot do these estimates without pondering such questions.

[-][anonymous]10

I'd like to suggest a way to organize our thinking about this, but it doesn't quite directly bear on your question. Your question is: how should our confidence in the singularity ever occurring change as time goes on? A related and easier question is: if we grant that a singularity is 100% likely to occur eventually, how much danger should we feel at different times in the future? I'm ready to drop some jargon about this easier question: I think we should be considering the relative probability

(1) P(still alive at time t + 1 | still alive at time t)

which is the ratio P(still alive at time t+1) / P(still alive at time t), since surviving to t+1 implies surviving to t. Taking logarithms and considering a small unit of time, this is approximately

(2) exp(- pdf(t) / (1 - cdf(t)))

where pdf and cdf are the probability density function and cumulative distribution function of "when a singularity will occur." The quantity in the exponent, pdf(t) / (1 - cdf(t)), is what I have seen called the "failure rate" or "hazard rate" of the distribution. In particular, you can recover the distribution from the hazard rate.

For instance, you might think (1) is constant, meaning something like "an attack could come at any time, but we have no reason to expect one time over another." In that case you are dealing with an exponential distribution. Or you might think that (1) is low now but will asymptotically approach some constant as time goes on, in which case you might be dealing with a gamma distribution. These gamma distributions are interesting in that they have a peak (or mode) in the future, which is what I at first thought you might be getting at by "inflection point."
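A small sketch of that distinction, using the hazard rate directly (the particular exponential and gamma parameters here are illustrative choices, not anything from the post):

    import Statistics.Distribution (ContDistr, cumulative, density)
    import Statistics.Distribution.Exponential (exponential)
    import Statistics.Distribution.Gamma (gammaDistr)

    -- Hazard rate h(t) = pdf(t) / (1 - cdf(t)); expression (2) above is exp (-h(t)).
    hazard :: ContDistr d => d -> Double -> Double
    hazard d t = density d t / (1 - cumulative d t)

    main :: IO ()
    main = do
      -- Exponential: constant hazard ("an attack could come at any time").
      mapM_ (print . hazard (exponential (1 / 50))) [10, 50, 100]
      -- Gamma (shape 5, scale 10): hazard starts low and rises toward an asymptote.
      mapM_ (print . hazard (gammaDistr 5 10))      [10, 50, 100]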

I find it impossible to predict without knowing specifics about the future scenario. As we get closer to creating an AI, we are almost guaranteed to find out more difficulties associated with it.

Maybe in 10 years we will find some unforeseen problem that we have no idea how to resolve, in which case of course my probability estimate would significantly drop.

Or, if we have not seen any significant progress in the field, I predict my estimate would remain constant for the first 30 years, then decrease every year progress is not being made.

If there is a continuous stream of progress that doesn't also reveal huge new barriers, then I don't believe it would ever go down. But I find it hard to imagine any scenario that presents continual progress and doesn't show any major roadblocks, yet still has not managed to develop AI more than 200 years from now.

[-][anonymous]00

That is not called an "inflection point". It is called a "maximum".

There seem to be two separate questions, when we will have artificial intelligence that approximates human intelligence, and when an AI takeoff will occur. If we get the first and the second doesn't happen shortly thereafter then we should strongly reduce our estimates for the second happening at all. But the second cannot happen until the first has happened. Moreover, if the two are extremely tied up (that is they will likely occur very close to each other), then the only reason we haven't observed such an event already might be that such an event is likely to wipe out the species that triggers it, and there's just survivorship bias. (This seems to be the sort of anthropic reasoning that just makes me feel very confused so I'm not sure this is a reasonable observation.)

I would say that if we don't have human-like AI in the next fifty years, and there's no obvious temporary barrier preventing technological improvement (e.g. a global collapse of civilization, or at least enough bad stuff to prevent almost any research), then I'd start seriously thinking that people like Penrose have a point. Note that this doesn't mean that there's anything like a soul (Penrose's ideas for example suggest that an appropriately designed quantum computer could mimic a conscious entity), although that idea might also need to be on the table. I don't consider any of those sorts of hypotheses at all likely right now, but I'd say 50 years is about where the good rationalist should recognize their confusion.

But the second cannot happen until the first has happened.

Do you mean to say that only something that approximates human intelligence can initiate an "AI takeoff"? If so, can you summarize your reasons for believing that?

Do you mean to say that only something that approximates human intelligence can initiate an "AI takeoff"? If so, can you summarize your reasons for believing that?

So this is a valid point that betrays a possibly unjustified leap in logic on my part. I think the thought process (although honestly I haven't thought about it that much) is something to the effect that any optimizer powerful enough to self-optimize for a substantial take-off is going to have to predict and interact with its environment well enough that it will need to effectively solve the natural language problem and talk to humans (we are, after all, a major part of its environment until/unless it decides that we are redundant). But the justification for this is to some extent just weak intuition, and the known sample of mind-space is very small, so intuitions informed by such experience should be suspect.

(nods) Yeah, agreed.

I would take it further, though. Given that radically different kinds of minds are possible, the odds that the optimal architecture for supporting self-optimization for a given degree of intelligence happens to be something approximately human seem pretty low.

On the other hand, is there any way to think about the odds of humans inventing a program capable of self-optimization which doesn't resemble a human mind?

I'm not sure.

I think if I had a better grasp of whether and why I think humans are (aren't) capable of building self-optimizing systems at all, I would have a better grasp of the odds of them being of particular types.

Do you mean to say that only something that approximates human intelligence can initiate an "AI takeoff"?

What reasons do I have to believe that some abstract optimization process could sprout capabilities like social engineering without them being hardcoded or a result of time-expensive interactions with its environment?

I admit I have no clue about machine learning or computer science and mathematics in general. So maybe I should ask, what reasons do I have to believe that Eliezer Yudkowsky has good reasons to believe that some algorithm could undergo explosive recursive self-improvement?

All I can imagine is that something might be able to look at a lot of data, like YouTube videos, and infer human language and social skills like persuasion. That sounds interesting, but...phew...is that even possible for a computationally bounded agent? I have absolutely no clue!

what reasons do I have to believe that Eliezer Yudkowsky has good reasons to believe that some algorithm could undergo explosive recursive self-improvement?

I approach the question this way: consider the set S of algorithms capable of creating intelligent systems.

Thus far, the only member of S we know about is natural selection... call that S1.

There are several possibilities:

  1. Human minds aren't in S at all. That means humans can't produce any AI.
  2. Human minds are in S... call us S2.... but are not significantly better at creating intelligences than natural selection is: S2 <= S1. That means humans can't produce superhuman AI.
  3. S2 > S1. That means humans can produce superhuman AI.

Given 1 or 2, recursive self-improvement isn't gonna happen.
Given 3: now consider a superhuman AI created by humans. Is it a member of S?

Again, three possibilities: not in S, S3 > S2, or S3 <= S2.
I can't see why a human-created superhuman AI would necessarily be incapable of doing any particular thing that human intelligences can do, so (S3 > S2) seems pretty likely given (S2 > S1).

Lather, rinse, repeat: each generation is smarter than the generation before.

So it seems to me that, given superhuman AI, self-optimizing AI is pretty likely. But I don't know how likely superhuman AI -- or even AI at all -- is. We may just not be smart enough to build intelligent systems.

I wouldn't count on it, though. We're pretty clever monkeys.

As for "explosive"... well, that's just asking how long a generation takes. And, geez, I dunno. How long does it take to develop a novel algorithm for producing intelligence? Maybe centuries, in which case the bootstrapping process will take millenia. Maybe minutes, in which case we get something orders of magnitude smarter than us by lunchtime.

Of course, at some point returns presumably diminish... that is, there's a point where each more-intelligent generation takes too long to generate. But it would be remarkable if humans happened to be anywhere near the top of that slope today.

An argument that is often mentioned is the relatively small difference between chimpanzees and humans. But that huge effect, the increase in intelligence, seems like an outlier rather than the rule. Take for example the evolution of echolocation: it seems to have been a gradual process with no obvious quantum leaps. The same can be said about eyes and other features exhibited by biological agents.

Is it reasonable to assume that such quantum leaps are the rule, based on a single case study?

Maybe the fact that those other examples aren't intelligence supports the original argument that intelligence works in quantum leaps.

You can even take examples from within humanity, the smartest humans are capable of things far beyond the dumbest (I doubt even a hundred village idiots working together could do what Einstein managed), and in this case there is not even any difference in brain size or speed.

Maybe the fact that those other examples aren't intelligence supports the original argument that intelligence works in quantum leaps.

Why didn't it happen before then? Are there animals that are vastly more intelligent than their immediate predecessors? I don't see any support for the conclusion that what happened between us and our last common ancestor with the great apes is something that happens often.

You can even take examples from within humanity, the smartest humans are capable of things far beyond the dumbest...

I don't think this is much supported. You would have to account for differences in upbringing, education, culture, and environment, and a lot of dumb luck. And even the smartest humans are dwarfs standing on the shoulders of giants. Sometimes the time is simply ripe, thanks to the previous discoveries of unknown unknowns.

Why didn't it happen before then? Are there animals that are vastly more intelligent than their immediate predecessors? I don't see any support for the conclusion that what happened between us and our last common ancestor with the great apes is something that happens often.

Sure. But that isn't so much evidence for intelligence not being a big deal as it is that there might be very few paths of increasing intelligence which are also increasing fitness. Intelligence takes a lot of resources, and most life-forms don't exist in nutrition-rich and calorie-rich environments.

But there is other evidence to support your claim. There are other species that are almost as intelligent as humans (e.g. dolphins and elephants) that have not done much with it. So one might say that the ability to make tools is a useful one also and that humans had better toolmaking appendages. However, even this isn't satisfactory, since even separate human populations have remained close to stasis for hundreds of thousands of years, and the primary hallmarks of civilization such as writing and permanent settlements only arose a handful of times.

You would have to account for differences in upbringing, education, culture, and environment, and a lot of dumb luck.

I don't think this is relevant to most of Benelliot's point. Upbringing, education, culture, and environment all impact eventual intelligence for humans because we are very malleable creatures. Ben's remark commented on the difference between smart and dumb humans, not the difference between those genetically predisposed to be smarter or dumber (which seems to be what your remark is responding to).

Take for example the evolution of echolocation, it seems to have been a gradual progress with no obvious quantum leaps. The same can be said about eyes and other features exhibited by biological agents.

Yes, but these are features produced by evolution. Intelligent design doesn't work very much the same way, and any AI would likely start with much of human knowledge already given.

Yes, but these are features produced by evolution.

There is a significant difference between intelligence and evolution if you apply intelligence to the improvement of evolutionary designs. But when it comes to unknown unknowns, what difference is there between intelligence and evolution? The only difference then seems to be that intelligence is goal-oriented, can think ahead and jump fitness gaps. Yet the critical similarity is that both rely on dumb luck when it comes to genuine novelty. And where else but when it comes to the dramatic improvement of intelligence does it take the discovery of novel unknown unknowns?

A basic argument supporting the risks from superhuman intelligence is that we don't know what it could possibly come up with. That is why we call it a 'Singularity'. But why does nobody ask how it knows what it could possibly come up with?

It seems to be an unquestioned assumption that intelligence is kind of a black box, a cornucopia that can sprout an abundance of novelty. But this implicitly assumes that if you increase intelligence you also decrease the distance between discoveries. I don't see that...

These seem like mainly valid points. However,

The only difference then seems to be that intelligence is goal-oriented, can think ahead and jump fitness gaps

seems to merit a response of "So, other than that, Mrs. Lincoln, how was the play?" Those are all very large differences. Let me add to the list: Intelligence can engage in direct experimentation. Intelligence can also observe and incorporate solutions that other optimizing agents (intelligent or not) have used for similar situations. All of these seem to be distinctions that make intelligence very different from evolution. It isn't an accident that the technologies which have been most successful for humans, such as writing, are technologies which augment many of these different advantages that intelligence has over evolution.

It isn't an accident that the technologies which have been most successful for humans such as writing are technologies which augment many of these different advantages that intelligence has over evolution.

I agree. To be clear, my confusion is mainly about the possibility of explosive recursive self-improvement. I have a hard time accepting that it is very likely (e.g. easily larger than a 1% probability) that such a thing is practically and effectively possible, or at least that we will be able to come up with an algorithm capable of quickly surpassing a human set of skills without huge amounts of hard-coded intelligence. I am skeptical that we will be able to approach such a problem quickly, rather than by a slow and incremental evolution gradually approaching superhuman intelligence.

As I see it, the more abstract a seed AI is, the closer it is to something like AIXI, the more time it will need to reach human level intelligence, let alone superhuman intelligence. The less abstract a seed AI is, the more work we will have to put into painstakingly hard-coding it to be able to help us improve its intelligence even further. And in any case, I don't think that dramatic quantum leaps in intelligence are a matter of speed improvements or the accumulation of expert systems. It might very well need some genuine novelty in the form of the discovery of unknown unknowns.

What is intelligence? Take a chess computer, it is arguably intelligent. It is a narrow form of intelligence. But what is it that differentiates narrow intelligence from general intelligence? Is it a conglomerate of expertise, some sort of conceptual revolution or a special kind of expert system that is missing? My point is, why haven't we seen any of our expert systems come up with true novelty in their field, something no human has thought of before? The only algorithms that have so far been capable of achieving this have been of evolutionary nature, not what we would label artificial intelligence.

Intelligence can also observe and incorporate solutions that other optimizing agents (intelligent or not) have used for similar situations.

Evolution was able to come up with altruism, something that works two levels above the individual and one level above society. So far we haven't been able to show such ingenuity by incorporating successes that are not evident from an individual or even societal position.

Your point is a good one, I am just saying that the gap between intelligence and evolution isn't that big here.

Let me add to the list, intelligence can engage in direct experimentation.

Yes, but evolution makes better use of dumb luck by being blindfolded. This seems to be a disadvantage but actually allows it to discover unknown unknowns that are hidden where no intelligent, rational agent would suspect them and therefore would never find them given evidence based exploration.

A minor quibble:

Yes, but evolution makes better use of dumb luck by being blindfolded. This seems to be a disadvantage but actually allows it to discover unknown unknowns that are hidden where no intelligent, rational agent would suspect them and therefore would never find them given evidence based exploration

"Never" is a very strong word, and it isn't obvious that evolution will actually find things that intelligence would not. The timescale that evolution gets to work at is much longer than what intelligence has had so far. If intelligence had as much time to fiddle, it might be able to do everything evolution can (indeed, intelligence can even co-opt evolution by means of genetic algorithms). But this doesn't impact your main point, insofar as if intelligence were to need those sorts of time scales then one obviously wouldn't have an intelligence explosion.

I want to expand on my last comment:

Is it clear that the discovery of intelligence by evolution had a larger impact than the discovery of eyes? What evidence do we have that increasing intelligence itself outweighs its cost compared to adding a new pair of sensors?

What I am asking is how we can be sure that it would be instrumental for an AGI to increase its intelligence rather than using its existing intelligence to pursue its terminal goal? Do we have good evidence that the resources that are necessary to increase intelligence outweigh the cost of being unable to use those resources to pursue its terminal goal directly?

My main point regarding the advantage of being "irrational" was that if we would all think like perfect rational agents, e.g. closer to how Eliezer Yudkowsky thinks, we would have missed out on a lot of discoveries that were made by people pursuing “Rare Disease for Cute Kitten” activities.

How much of what we know was actually the result of people thinking quantitatively and attending to scope, probability, and marginal impacts? How much of what we know today is the result of dumb luck versus goal-oriented, intelligent problem solving?

What evidence do we have that the payoff of intelligent, goal-oriented experimentation yields enormous advantages over evolutionary discovery relative to its cost? What evidence do we have that any increase in intelligence does vastly outweigh its computational cost and the expenditure of time needed to discover it?

Evolution acting on intelligent agents has been able to do quite a bit of that for millions of years, though - for example via the topic I am forbidden to mention.

I do not doubt that humans can create superhuman AI, but I don't know how likely self-optimizing AI is. I am aware of the arguments. But all those arguments rather seem to me like theoretical possibilities, just like universal Turing machines could do everything a modern PC could do and much more. But in reality that just won't work because we don't have infinite tapes, infinite time...

Applying intelligence to itself effectively seems problematic. I might just have to think about it in more detail. But intuitively it seems that you need to apply a lot more energy to get a bit more complexity. That is, humans can create superhuman intelligence, but you need a lot of humans working on it for a long time and a lot of luck stumbling upon unknown unknowns.

It is argued that the mind-design space must be large if evolution could stumble upon general intelligence. I am not sure how valid that argument is, but even if that is the case, shouldn't the mind-design space shrink dramatically with every iteration, and therefore demand a lot more time to stumble upon new solutions?

Another problem I have is that I don't get why people here perceive intelligence to be something proactive with respect to itself. No doubt there exists some important difference between evolutionary processes and intelligence. But if you apply intelligence to itself, this difference seems to diminish. How so? Because intelligence is no solution in itself, it is merely an effective searchlight for unknown unknowns. But how do we know that the brightness of the light increases in proportion to the distance between unknown unknowns? To have an intelligence explosion, the light would have to reach out farther with each generation by more than the distance between unknown unknowns grows... I just don't see that as a reasonable assumption.

I do not doubt that humans can create superhuman AI, but I don't know how likely self-optimizing AI is.

What appears to be a point against the idea:

While we have proven that very powerful prediction algorithms which can learn to predict these sequences exist, we have also proven that, unfortunately, mathematical analysis cannot be used to discover these algorithms due to problems of Godel incompleteness.

This is from: Is there an Elegant Universal Theory of Prediction?

Lemma 3.3: [sequences] can have arbitrarily high Kolmogorov complexity but nevertheless can be predicted by trivial algorithms.

This is from your link.

But if it can be predicted by a trivial algorithm, it has LOW Kolmogorov complexity.

Check with definition 2.4. In the technical sense used in the document, a predictor is not defined as being something that outputs the sequence - it is defined as something that eventually learns how to predict the sequence - making at most a finite number of errors.

Strings with high Kolmogorov complexity being "predicted" by trivial algorithms is quite compatible with this notion of "prediction".

So, above the last wrongly predicted output, the whole sequence is as complex as the (improved) predictor?

Here's an example from the paper that helps illustrate the difference: if the sequence is a gigabyte of random data repeated forever, it can be predicted with finitely many errors by the simple program "memorize the first gigabyte of data and then repeat it forever", though the sequence itself has high K-complexity.
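A scaled-down sketch of that kind of predictor, just to make the notion of "finitely many errors" concrete (this is an illustration, not the paper's formal Definition 2.4):

    -- Guess the next symbol by assuming the sequence is its first `n` symbols
    -- repeated forever. On such a sequence it errs at most `n` times (while
    -- memorizing), even though the sequence's Kolmogorov complexity can be on
    -- the order of `n`. The predicting program itself is trivial; the observed
    -- history is handed to it as input.
    predictNext :: Int -> [Int] -> Maybe Int
    predictNext n history
      | length history < n = Nothing                                  -- still memorizing
      | otherwise          = Just (block !! (length history `mod` n)) -- replay the block
      where block = take n history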

[-]Thomas-20

No, it has not. The algorithm for copying the first GB forever is small, and the Kolmogorov complexity is just over 1 GB.

For the entire sequence.

Yes, but the predictor's complexity is much lower than 1GB.

The paper also gives an example of a single predictor that can learn to predict any eventually periodic sequence, no matter how long the period.

[-]Thomas-10

The predictor should remember what happened. It has learned. Now it's 1 GB heavy.

It looks like you just dislike the definitions in the paper and want to replace them with your own. I'm not sure there's any point in arguing about that.
