Imagine that an ultra-intelligent machine emerges from an intelligence explosion. The AI (a) finds no trace of extraterrestrial intelligence, (b) calculates that many star systems should have given birth to starfaring civilizations, meaning mankind hasn't passed through most of the Hanson/Grace great filter, and (c) realizes that with trivial effort it could immediately send out self-replicating von Neumann machines that could make the galaxy more to its liking.

Based on my admittedly limited reasoning abilities and information set, I would guess that the AI would conclude that the zoo hypothesis is probably the solution to the Fermi paradox. And because stars don't appear to have been "turned off," either free energy is not a limiting factor (so the laws of thermodynamics are incorrect), or we are being fooled into thinking that stars unnecessarily "waste" free energy (perhaps because we are in a computer simulation).


You are stating "I think ultra-intelligent machine will believe X", but this simply means that you believe X, so why the talk about ultra-intelligent machines? It serves no purpose.

(It looks like LW version of the "All reasonable/rational/Scottish people believe X" dark side rhetoric is "Ultra-intelligent machines will believe X".)

(b) is counterfactual today. Nobody can calculate how many nearby star systems should have given birth to starfaring civilizations, since nobody knows p(origin of life). We can't even make life from plausible inorganic materials yet. We are clueless, and thus highly uncertain.

OK. I think that if an ultra-intelligent AI determines that (a), (b), and (c) are correct, then the zoo hypothesis is probably the solution to the Fermi paradox. This last sentence "serves a purpose" because (a), (b), and (c) seem somewhat reasonable, and thus after reading my post a reader should give a higher weight to the zoo hypothesis being true.

So you are using the ultra-intelligent AI as a kind of Omega, then? To establish that (a), (b), and (c) are definitely true?

This has the major assumption that the AI will conclude that it simply isn't the first to pass the great filter. I suspect that a strong AI in that sort of context would have good reason to think otherwise.

It's not a direct assumption: (a) and (b) together imply that the AI is extremely unlikely to be the first to have passed the great filter. But if the AI believes that no other explanation, including the zoo hypothesis, has a non-trivial probability of being correct, then the AI would conclude that mankind probably is the first to have passed the great filter.

Why don't you explain your reasoning for your conclusion from (a), (b), and (c)? Merely saying "I would guess that" is not persuasive.