
I expect losses of technological capability to be recovered with high probability.


On what timescale?

I find the focus on x-risks as defined by Bostrom (those from which Earth-originating intelligent life will never, ever recover) way too narrow. A situation in which 99% of humanity dies and the rest reverts to hunting and gathering for a few millennia before recovering wouldn't look much brighter than that -- let alone one in which humanity goes extinct but in (say) a hundred million years the descendants of (say) elephants create a new civilization. In particular, I can't see why we would prefer the latter to (say) a civilization emergi...

JoshuaZ: Why? This is highly non-obvious. To reach our current technological level, we had to use a lot of non-renewable resources. There's still a lot of coal and oil left, but the remaining coal and oil is harder to reach and much more technologically difficult to use reliably. That trend will only continue. It isn't obvious that if something set the tech level back to, say, 1600, we'd have the resources to return to our current technological level.

against "AI risk"

by Wei_Dai, 11th Apr 2012, 91 comments


Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like

  • bio/nano-tech disaster
  • Malthusian upload scenario
  • highly destructive war
  • bad memes/philosophies spreading among humans or posthumans and overriding our values
  • upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support

Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and the opportunities must be AI-related, before the analysis even begins? Given our current state of knowledge, I don't see how we can reach such conclusions with any confidence, even after a thorough analysis.

SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on one particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how, but here are some likely possibilities"?)