Yesterday the DoD confirmed the authenticity of the UAP videos, and Yudkowsky made a bet: “So apparently there’s some kind of thing right now about supposed aliens. I haven’t looked into it at all, but I am happy to blindly bet against anything to do with visible intelligent aliens at 100:1 odds.”
So where are the other 99 odds, and how do they affect our future and the probability of global risks? I created the most complete list I could of explanations of UAP, with estimates of their effects on x-risks. Several explanations are more extreme than “aliens” and challenge our model of the world: travellers between branches of the Everettian multiverse, glitches in the Matrix, absurdity in chains of Boltzmann brains, etc.
Abstract: After the 2017 NY Times publication, the stigma on scientific discussion of so-called UAP (Unidentified Aerial Phenomena) was lifted. Now the question arises: how will UAP affect the future of humanity, and especially the probability of global catastrophic risks? To answer this question, we assume that the Nimitz case in 2004 was real, and we suggest a classification of possible explanations of the phenomena. The first level consists of mundane explanations: hardware glitches, malfunctions, hoaxes, and lies. The second level involves explanations which do not completely change our model of the world but only update it: new military technology, a large psyop, or a new physical effect like ball lightning. The third level of explanations requires a complete overhaul of our world model and includes a large set of fantastic hypotheses: alien starships, interdimensional beings, glitches in the Matrix, projections from the collective unconscious, Boltzmann brain experiences, etc. The higher the level of a hypothesis, the less probable it is, but the greater its consequences for possible global catastrophic risks. Thus, “integrating” over the field of all possible explanations, we find that UAP increase catastrophic risks. Technological progress increases our chances of direct confrontation with the phenomena. If the phenomena have intelligence, their behavior could change unexpectedly after we pass some unknown threshold. But we could lower the risks by adopting a reasonable policy of avoiding confrontation.
The whole article is here: https://philpapers.org/rec/TURUAG