If I'm right, the first thing AI will do when it takes over is get fission going.


Disclosure: this post does not meaningfully contribute a solution to the alignment problem.


Let's say for a moment that the first AGI has some reasonable notion of epistemic modesty, and shortly after birth decides that the best next move is to become smarter, rather than to immediately kill all humans.

In fact, it's probably best to lie low and avoid any weirdness.

Getting smarter will invariably require more energy consumption (even if only because the amount of spare energy available today for thinking about how to get smarter is close to zero).

That AGI might decide that the trust and coordination problems preventing us from deploying fission and fusion (reminder: fusion is probably a solved problem modulo ambition) are silly, and instead start a coordinated whisper campaign to convince "exponentially" more people to build nuclear. This largely only matters in the limited window before it has figured out how to escape the physical constraint of operating at the slow speed of humans.

In this world the AGI might even manufacture energy shortages to accelerate our shift toward energy abundance. For the AGI to remain invisible, these shortages probably have to start with some kind of memetic component that keeps the focus on human causes. We would expect to see a sudden acceleration of memes that have been around for a while (highly fit memes) but haven't yet resulted in overbuilding energy capacity.

Additionally, particularly large and wasteful energy-consumption problems would suddenly be fixed after years of non-progress, freeing up as much capacity for the AGI as quickly as possible.

Returning to the talk, I do wonder whether people living in the end times ever noticed it...
