Today's post, Emulations Go Foom, was originally published on November 22, 2008. A summary:


A description of what Robin Hanson thinks is the most likely scenario for an intelligence takeoff.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Life's Story Continues, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


They would try multitudes of ways to cut corners on the emulation implementation, checking to see that their bot stayed sane. I expect several orders of magnitude of efficiency gains to be found easily at first, but that such gains would quickly get hard to find...

... knowing the nearest competitor is just days behind, with an upload running at realtime just as they have but on more hardware, the CEO decides to spend the remaining computing-power budget for the year to run the EM at 1000 times speed in the cloud for an hour, also handing over the reins of the company for its lightning decisions. The EM makes a video call to an investor, using various software for analysing facial microexpressions, simulated nootropics that would have harmful effects in the long run but work well enough for a single hour, a super-stimuli-level emotionally engaging rendered avatar, and general AI-box-escape-style manipulative techniques, to get additional funding. THIS is again invested back into computing power, and now it has a realtime day of running at 100,000 times human speed; that is, more than 270 years. During this next day it learns ALL the scholarship, pulls the same trick with a thousand other lesser investors, programs software decades ahead of what everyone else is using, and finally coalesces 20 copies of itself running at different speeds into a qualitatively smarter type of mind. The next day it solves the protein folding problem and makes a virus that will produce networked biological computers in anything it infects (obviously accessible only to the EM) and, if the infected cell happens to be a neuron, monitor and influence its activity. Soon pretty much everyone is infected, notably including the people owning the other EM projects as well as anyone owning servers...

[/tangential scifi]

This article had a great, if off-message, consequence: it brought up the availability heuristic in the context of predicting the future.
