Normal Ending: Last Tears (6/8)

Why did the SuperHappies adopt the Babyeaters' ethics? I thought that they exterminated them. Or is 6/8 an alternative to 5/8 rather than its sequel?

It might be better to number the sections 1, 2, 3, 4, 5A, 6A, 5B, 6B.

Sustained Strong Recursion

Eliezer's hard takeoff scenario for "AI go FOOM" is that the AI takes off in a few hours or weeks. Let's say that the AI has to increase in intelligence by a factor of 10 for it to count as "FOOM". If there is no increase in resources, then intelligence has to double anywhere from once an hour to once every few days just through recursion or cascades. If intelligence doubles once a day, then this corresponds to an annual growth factor of 2^365, roughly 10^110. This is quite a large number. It seems more likely t...
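The compounding here is easy to check; a quick sketch of the arithmetic, assuming intelligence doubles once a day for a year:

```python
import math

# Doubling once a day for a year gives a growth factor of 2**365.
annual_factor = 2 ** 365

# Express it as a power of ten: 2**365 is about 10**110.
exponent = 365 * math.log10(2)
print(f"2**365 ~= 10**{exponent:.1f}")
```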

Sustained Strong Recursion

I've been wondering how much of Moore's law was due to increasing the amount of human resources being devoted to the problem. The semiconductor industry has grown tremendously over the past fifty years, with more and more researchers all over the world being drawn into the problem. Jed, do you have any intuition about how much this has contributed?

Recursive Self-Improvement

Eliezer: If "AI goes FOOM" means that the AI achieves super-intelligence within a few weeks or hours, then it has to be at the meta-cognitive level or the resource-overhang level (taking over all existing computer cycles). You can't run off to Proxima Centauri in that time frame.

Recursive Self-Improvement

One source of diminishing returns is upper limits on what is achievable. For instance, Shannon proved that there is an upper bound on the error-free capacity of a communication channel: no amount of intelligence can squeeze more error-free capacity out of a channel than that. There are also limits on what is learnable through induction alone, even with unlimited resources and unlimited time (cf. "The Logic of Reliable Inquiry" by Kevin T. Kelly). These sorts of limits indicate that an AI cannot improve its meta-cognition exponentially forever; at some point, the improvements have to level off.
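Shannon's bound can be made concrete with the Shannon-Hartley formula C = B·log2(1 + S/N) for an additive-white-Gaussian-noise channel (the channel parameters below are illustrative, not from the comment):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit for an AWGN channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-grade channel at 30 dB SNR (linear SNR = 1000):
print(round(shannon_capacity(3000, 1000)))  # about 29902 bits/s
```

No coding scheme, however clever, can exceed this rate with arbitrarily low error probability.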

...Recursion, Magic

Perhaps, in analogy with Fermi's pile, there is a certain critical mass of intelligence that is necessary for an AI to go FOOM. Can we figure out how much intelligence is needed? Is it reasonable to assume that it is more than the effective intelligence of all of the AI researchers working in AI? Or more conservatively, the intelligence of one AI researcher?

A Premature Word on AI

I think that Eliezer and Robin are both right. General AI is going to take a few big insights AND a lot of small improvements.

Einstein's Speed

One way to evaluate a Bayesian approach to science is to see how it has fared in other domains where it is already being applied. For instance, statistical approaches to machine translation have done surprisingly well compared to rule-based approaches. However, a paper by Franz Josef Och (one of the founders of statistical machine translation) shows that probabilistic approaches do not always perform as well as non-probabilistic (but still statistical) approaches. Basically, maximizing the likelihood of a machine translation system produces results that...

That's an interesting notion. I don't see how Bayesian reasoning is restricted to trying to maximize the likelihood of the 'best' theory. One of its crowning achievements is to avoid talking just about the best theory and to use the full ensemble at all times. You're perfectly free to ask any question of the ensemble, including 'Which response minimizes some error function?'
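A toy sketch of that last question, with a made-up ensemble of candidate outputs (the probabilities and the error function are illustrative): the maximum-likelihood answer and the minimum-expected-error answer need not coincide.

```python
# A made-up posterior over candidate outputs (a toy "ensemble").
candidates = {"x y z": 0.30, "a b c": 0.25, "a b d": 0.25, "a b e": 0.20}

def error(hyp: str, ref: str) -> int:
    """Toy error function: count of mismatched words."""
    return sum(h != r for h, r in zip(hyp.split(), ref.split()))

# Most likely single candidate (the 'best theory').
map_choice = max(candidates, key=candidates.get)

# Candidate minimizing expected error under the full ensemble.
def expected_error(hyp: str) -> float:
    return sum(p * error(hyp, ref) for ref, p in candidates.items())

mbr_choice = min(candidates, key=expected_error)
print(map_choice, mbr_choice)
```

Here the most probable candidate stands alone, while a cluster of similar candidates dominates the expected-error criterion, so the two picks differ.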

The Dilemma: Science or Bayes?

I'm not a physicist, I'm a programmer. If I tried to simulate the Many-Worlds Interpretation on a computer, I would rapidly run out of memory keeping track of all of the different possible worlds. How does the universe (or universe of universes) keep track of all of the many worlds without violating a law of conservation of some sort?
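The memory worry can be quantified for a naive simulation: a dense statevector over n entangled two-state systems needs 2^n complex amplitudes (the 16-byte complex size below is an assumption):

```python
def statevector_bytes(n_systems: int) -> int:
    """Memory for a dense statevector: 2**n amplitudes at 16 bytes each."""
    return (2 ** n_systems) * 16

for n in (10, 30, 50):
    print(n, statevector_bytes(n))
# 30 two-state systems already need ~17 GB; 50 need ~18 petabytes.
```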

And you can simulate the single-world interpretation on a computer without running out of resources? Infinity squared = infinity, and if the universe is continuous, it can be said (in a mathematical sense) that it takes no more processing power to do one universe than many. Besides, you have to calculate all of the worlds anyway just to get a single world.

This comment is old, but I think it indicates a misunderstanding about quantum theory and the MWI, so I deemed it worth replying to. I believe the confusion lies in what "world" means, and to whom. In my opinion Everett's original "Relative-State Formalism" is a much better descriptor of the interpretation, but no matter.

The distinct worlds which are present after a quantum-conditional operation are distinct only from the perspective of an observer who has become entangled with the superposition. To an external observer, the system is still in a single state, albeit one that is a superposition of "classical" states. For example, consider Schrodinger's cat. What MWI suggests is that quantum superposition extends even to the macroscopic level of an entire cat. However, the evil scientist standing outside the box considers the cat to be in the state (Dead + Alive) / sqrt(2), which is a single pure state of the cat system. Now consider the wavefunction of the universe, which I suppose must exist if we take MWI to its logical end. The universe has many subsystems, each of which may be in a superposition of states according to external observers. But no matter how subsystems divide into superpositions, the overall state of the universe is a single pure state.

In sum: for the universe to "keep track of worlds" requires no more work than for there to exist a wavefunction which describes the state of the universe.
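A minimal numerical sketch of the point above: the external observer's (Dead + Alive)/sqrt(2) is one normalized vector in the {Dead, Alive} basis, not two separate records.

```python
import math

# Basis states of the cat, as seen by the external observer.
dead = [1.0, 0.0]
alive = [0.0, 1.0]

# The single pure state (Dead + Alive) / sqrt(2).
cat = [(d + a) / math.sqrt(2) for d, a in zip(dead, alive)]

# One vector, unit norm -- no extra bookkeeping per "world".
norm = math.sqrt(sum(x * x for x in cat))
print(cat, norm)
```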

Configurations and Amplitude

If a photon hits two full mirrors at right angles, then its amplitude is multiplied by i*i = -1. Does it matter whether the second mirror turns the photon back towards its source, or causes the photon to continue in the direction it was going originally? Do you get -1 in both cases?
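Under the post's rule that each full-mirror reflection multiplies the amplitude by i, the multiplication itself doesn't depend on direction, only on the number of reflections; a one-line check:

```python
# Each full-mirror bounce multiplies the photon's amplitude by i.
amplitude = 1 + 0j
for _ in range(2):  # two reflections, whatever the geometry
    amplitude *= 1j
print(amplitude)  # (-1+0j)
```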

Does the Singularity Institute have plans for what to do if an unfriendly AI appears from nowhere? (Not that you should make such plans public.)