Jun 30, 2018
TL;DR: The Great Filter from the Fermi paradox could be escaped by choosing a random strategy. However, if all civilizations acted randomly, this could be the actual cause of the Fermi paradox. Using a meta-random strategy solves this.
"Death in Damascus" is a decision theory problem about attempting to escape an omniscient agent who is able to predict your behavior.
It goes like this: “You are currently in Damascus. Death knocks on your door and tells you, ‘I am coming for you tomorrow.’ You value your life at $1,000 and would like to escape Death. You have the option of staying in Damascus or paying $1 to flee to Aleppo. If you and Death are in the same city tomorrow, you die; otherwise, you survive. Although Death tells you today that you will meet tomorrow, he made his prediction of whether you’ll stay or flee yesterday and must stick to that prediction no matter what. Unfortunately for you, Death is a perfect predictor of your actions. All of this information is known to you.”
It was explored in the article “Cheating Death in Damascus,” which suggested a possible solution: use a true random number generator to choose between staying in Damascus and fleeing to Aleppo, which gives a 0.5 chance of survival.
The Fermi paradox is a type of “Death in Damascus” problem. The Fermi paradox is the observation that, for unknown reasons, other civilizations are not observable. One proposed explanation is a Great Filter that kills all young civilizations, and for us, such a filter lies ahead. This would mean that all the civilizations before us made the same mistake, which resulted in their demise, and since we are a typical civilization, we will make the same mistake too. However, we don’t know what this universal mistake is. Maybe we should not experiment with hadron colliders. Maybe AI always goes rogue, kills everybody, and later self-terminates (AI by itself can’t explain the Fermi paradox, as it would spread through the universe). But maybe the decision not to create AI is the fatal one, since only AI can manage the risks of synthetic biology and other catastrophic risks.
In other words, whatever rational strategy we adopt is exactly the strategy that killed all previous civilizations; if we escape to Aleppo, Death will meet us there. In the original problem, Death is omniscient; in the case of the Fermi paradox, omniscience is replaced by our typicality and by mediocrity reasoning: because we are typical, we will make all the same mistakes.
In an attempt to cheat Death, that is, to escape the typical Great Filter, we (assuming here that some form of global policy coordination is solved) could adopt a random strategy in the future. For example, we could use a random generator to choose which technologies to develop and which to abandon. In that case, we have a chance of not developing the one dangerous technology that is the universal killer.
But what if this random strategy is the filter itself? That is, what if abandoning some technologies leaves our civilization impaired and contributes to its extinction? In that case, we could implement a meta-random strategy: first, we flip a fair coin to choose whether to try the random-abandonment strategy at all, or to go ahead without any “anthropic updates.”
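As a toy sketch of the two levels of randomization described above (the technology list is purely hypothetical, and Python’s `secrets` module merely stands in for a true random generator):

```python
import secrets  # cryptographic randomness, standing in for a "true" random generator

# Hypothetical list of candidate technologies; the names are illustrative only.
TECHNOLOGIES = ["advanced_AI", "synthetic_biology", "new_collider", "nanotech"]

def meta_random_strategy(technologies):
    """Meta-level coin flip decides whether to randomize at all;
    if yes, each technology is independently kept or abandoned at random."""
    if secrets.randbelow(2) == 0:
        # Meta-coin says: proceed normally and develop everything.
        return list(technologies)
    # Meta-coin says: apply the random-abandonment strategy.
    return [t for t in technologies if secrets.randbelow(2) == 0]

developed = meta_random_strategy(TECHNOLOGIES)
print("Technologies we develop:", developed)
```

The point of the meta-coin is that, across many civilizations, at least half would behave non-randomly, which matters for the argument below.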
Now let’s try to estimate the success probability of the random strategy. If this strategy were very effective (for example, if it saved 1 in 10 civilizations, while the total number of civilizations that reached our level of sophistication in the observable universe is 100), we would still expect to observe 10 civilizations; and as those civilizations would observe each other, they would not implement the strategy, since there is no Fermi paradox for them. So if the strategy were very effective, there would be no Fermi paradox, and no need for such a strategy. Thus, the strategy makes sense only if it gives a survival probability of about 1/N, where N is the total number of civilizations in the past light cone that perished because of a late Great Filter. In other words, if we expect that the past light cone held 100 civilizations, all of which met their demise, we should make around 7 random binary choices in our strategy (as 2^7 = 128), and we will, at best, have a 0.01 chance of survival, which is better than 0 but still very small (assuming that a wrong random choice will destroy us).
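The arithmetic in this estimate can be checked directly: with N filtered civilizations in the past light cone, the strategy needs about log2(N) binary choices, and the best-case survival probability is 1/N. A minimal sketch (the function name is mine, and N = 100 is just the example figure from the text):

```python
import math

def random_strategy_odds(n_civilizations):
    """Return the number of binary choices needed so that at most one in
    n_civilizations follows the same path, and the resulting best-case
    survival probability (1 / n_civilizations)."""
    n_choices = math.ceil(math.log2(n_civilizations))
    survival_probability = 1 / n_civilizations
    return n_choices, survival_probability

choices, p = random_strategy_odds(100)
print(choices, p)  # prints: 7 0.01
```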
Now we can use the same logic not to escape the Great Filter, but to explain the observed Fermi paradox. If almost all civilizations tried random strategies, most of them perished precisely because they adopted non-optimal behavior. Thus, the Fermi paradox becomes a self-fulfilling prophecy. Why would a civilization agree to such a seemingly reckless gamble? Because it replaces an unknowable probability of survival with a small but fixed one. However, in the case of meta-random cheating, this is not an explanation of the Fermi paradox, since at least half of civilizations would not attempt any cheating at all.
Of course, this is an oversimplification, as it ignores other explanations of the Fermi paradox, such as Rare Earth, which are more favorable for our survival (but presumably less likely if we accept Grace’s version of the Doomsday argument).