Speaking only for myself, most of the bullets you listed are forms of AI risk by my lights, and the others don't point to comparably large, comparably neglected areas in my view (and after significant personal efforts to research nuclear winter, biotechnology risk, nanotechnology, asteroids, supervolcanoes, geoengineering/climate risks, and non-sapient robotic weapons). Throwing all x-risks and the kitchen sink in, regardless of magnitude, would be virtuous in a grand overview, but it doesn't seem necessary when trying to create good source materials in...

I have a few questions, and I apologize if these are too basic:

1) How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?

2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?

3) Is there much ten...

Dmytry · 8y · 2 points: Seems like a prime example of where to apply rationality: what are the consequences of trying to work on AI risk right now, versus on something else? Does AI risk work have a good payoff? What about the historical cases? The one example I know of is this: http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf (the thermonuclear ignition of the atmosphere scenario). Can a bunch of people with little physics-related expertise do something about such risks more than 10 years beforehand, beyond the usual anti-war effort? Bill Gates will work on AI risk when it becomes clear what to do about it.
endoself · 8y · 5 points: Are you including just the extinction of humanity in your definition of x-risk in this comment, or are you also counting scenarios resulting in a drastic loss of technological capability?

against "AI risk"

by Wei_Dai · 11th Apr 2012 · 91 comments

Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like

  • bio/nano-tech disaster
  • Malthusian upload scenario
  • highly destructive war
  • bad memes/philosophies spreading among humans or posthumans and overriding our values
  • upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support

Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and opportunities must be AI related, before the analysis even begins? Given our current state of knowledge, I don't see how we can make such conclusions with any confidence even after a thorough analysis.

SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?)