by [anonymous]
1 min read · 13th Mar 2023 · 3 comments


Synopsis as tweeted by the author: "Some of my friends are very invested in predicting when AGI is supposed to arrive. The history of technology development shows that you can't time things like this."


I mostly agree with the things in that link, but I also want to say that at the end of the day, I think it’s fine and healthy in general for someone to describe their beliefs in terms of a probability distribution even when they have very little to go on. So in this particular case, if someone says “I think AGI will probably (>80%) come in the next 10 years”, I would say “my own beliefs are different from that”, but I would not describe that person as “overconfident”, per se. It’s not like there’s a default timeline probability distribution, and this default is mostly >10 years, and you need great personal confidence & swagger to overrule that default. There is no default! The link agrees that the arguments for long timelines are just as sketchy as the arguments for short timelines. It’s still appropriate for people to do the best they can to form probabilistic beliefs.

[-][anonymous] · 1y

This article is extremely wrong and kind of a waste of text.

Breaking down the author's points:

1. There were past AI hype predictions, such as the Lighthill Report. But those predictions were made when they did not have robust, publicly usable systems, deep benchmarks of human performance, a history of large and steady gains, or very large budgets.
2. "the speculative engineers of spaceflight also produced many other possible designs which have not actually been built". But those were eye-wateringly expensive machines that would have cost the entire GDP of a superpower to build, with no ROI. These facts were known at the time of the speculation.
3. "Can existing RLHF techniques with much more compute suffice to build a recursively self-improving agent which bootstraps to AGI?" Yes: GPT-4 already uses RBRMs (rule-based reward models), a form of RL from machine feedback rather than purely human feedback. This is easily capable of self-improvement.
4. "The distance is short because I personally can't think of obstacles": except the distance cannot be far, because of instrumental convergence and AI criticality.

I think maybe this was mistitled. It seems to make a solid argument against certainty in AI timelines, but it does not argue against making the attempt, or against taking seriously the probability distribution across attempts.

It cites the accuracy of some predictions of spaceflight, then notes that others were never implemented. It could well be that there are multiple viable ways to build a rocket, a steam engine, or an AGI.

Von Braun would weep at our lack of progress on spaceflight. But we did not progress because there don't actually seem to be near-term economic incentives. There probably are for AGI.

Timelines are highly uncertain, but dismissing the possibility of short timelines makes as little sense as dismissing long timelines.