It's been over two and a half years since Paul put his blog post on takeoff speeds online. Among other things, it argues that the "fast takeoff" undergone by humans is not very strong evidence that AIs will also undergo a fast takeoff, because evolution wasn't "optimising for" humans taking over the world.

I think this argument has been fairly influential - possibly disproportionately influential, given its brevity. I find it moderately persuasive, but not entirely so, and I'm currently working on a post explaining why. What I'm wondering is: have there been other critiques or responses to this argument? Because it currently seems to me like there's been very little public engagement with it.

4 Answers

riceissa

Oct 30, 2020

There was "My Thoughts on Takeoff Speeds" by tristanm.

Max Ra

Nov 16, 2021

Thanks for asking, I just read the post and was also interested in other people's thoughts.

My thoughts while reading:

  1. Is the emergence of humans really a good example of a significantly discontinuous jump? I spontaneously imagined that the first humans weren't actually performing much better than other apes, and that it took a long period of cultural development before humans started clearly dominating via their increased strategizing/planning/coordinating capabilities.
  2. Paul seemed unconvinced of the potential for major insights (or a "secret sauce") into how to design discontinuously superior AIs. He wondered about analogous examples where major insights led to significant technological advances. This is probably covered well by the AI Impacts project on discontinuous technological developments, which found 10 relatively clear instances; e.g. the bridge length discontinuity was "based on a new theory of bridge design".
  3. Regarding his argument for why recursive self-improvement doesn't lead to fast takeoff: "Summary of my response: Before there is AI that is great at self-improvement there will be AI that is mediocre at self-improvement." I had the thought that there might be a "capability overhang" regarding self-improvement, because ML might currently underrate the progress that can be made here and instead spend its time on other applications. I also personally find it plausible that a stable recursively self-improving architecture might be a candidate for a major insight that somebody has someday.

Søren Elverlin

Nov 03, 2020

The AISafety.com Reading Group discussed this blog post when it was published. There is a fair bit of commentary here: https://youtu.be/7ogJuXNmAIw

4 comments

I agree; the argument has been surprisingly influential & there has been surprisingly little critique/pushback, at least in public. I intended to write a critique myself but never got around to it; now it's climbing the ranks in my list of priorities because of exchanges like this. I'd love to give feedback on your version if you want! Could even collaborate.


Ditto for me!

I am also interested in this.

[comment deleted]