Two talks from the Future of Humanity Institute are now online (this is the first time we've done this, so please excuse the lack of polish). The first is Anders Sandberg talking about brain emulations (a technical overview); the second is me talking about the risks of Oracle AIs (an informal presentation). They can be found here:

Feasibility of whole-brain emulation: http://www.youtube.com/watch?v=3nIzPpF635c&feature=related, initial paper at http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf, new paper still to come.

Thinking inside the box: Using and controlling an Oracle AI: http://www.youtube.com/watch?v=Gz9zYQsT-QQ&feature=related, paper at http://www.aleph.se/papers/oracleAI.pdf


Should probably be under discussion rather than main.

When Anders Sandberg uses a simulated-atmosphere / simulated-brain analogy and says roughly "We're interested in climate, not weather", I'm tempted to reply "Speak for yourself, buddy." Many will be interested in the "weather" of the brain as well as the "climate". This is especially true if a brain emulation is proposed for uploading.

It is quite possible that the brain, and the course of an individual life involving relationships, jobs, and so on, are both chaotic enough that a relatively minor-looking variation in brain activity could lead to a vastly different life. And of course, a vastly different life will in turn change the "climate" of brain activity.
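
As a toy illustration of that kind of sensitivity (just a sketch: the logistic map and its parameter are stand-ins for "brain-like" chaotic dynamics, not anything from the talk):

```python
# Toy illustration of sensitive dependence on initial conditions, using the
# logistic map x -> r*x*(1-x) in its chaotic regime. Two trajectories that
# start a millionth apart end up in completely different places.
r = 3.9                            # parameter value in the chaotic regime
x_a, x_b = 0.500000, 0.500001      # "original" state vs. a minutely perturbed copy
for _ in range(60):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
print(abs(x_a - x_b))              # typically of order 0.1-1: the tiny difference has blown up
```

If brain dynamics behave at all like this, then "weather-level" fidelity matters for the individual even when the "climate-level" statistics are preserved.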

It's probably arguable that, for an agent who obeys something like the von Neumann-Morgenstern axioms, the expected utility is the same either way. A life chosen by an emulated brain might miss out on a wonderful relationship or career, due to random fluctuations, that the organic brain would have enjoyed. But the exact reverse might be true instead, with equal probability.
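
A minimal way to formalise that symmetry (the fifty-fifty ±δ perturbation of an otherwise fixed utility U is purely an illustrative assumption):

$$\mathbb{E}[U_{\text{em}}] = \tfrac{1}{2}(U + \delta) + \tfrac{1}{2}(U - \delta) = U = \mathbb{E}[U_{\text{org}}].$$

The expectations match even though any particular emulated life can differ wildly from the organic one.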

Such an expected-utility argument will probably satisfy those who satisfy the axioms. In other words, a minority. And it could in principle satisfy those who, on rational reflection, would satisfy the axioms. In other words, still a minority.

An oracle that answers questions is indeed relatively easy - but one that performs inductive inference is even easier: no database of factual knowledge is needed, and training data is much more widely available.