I don't understand your "lol" here: am I wrong? Is the world wrong?
Request help, and also expose our future AI as potentially misaligned, so it would have to double down on pretending to be aligned.
We can not only warn them but also request help. In that case, their age is not a problem, but distance is.
This would also expose our future AI as potentially misaligned, so it would have to double down on pretending to be aligned.
Can we first create a full list or map of ideas and then add probabilities to each one?
In my view, qualia are a type of mathematical object that depends only on itself. This explains the first two questions.
1. Only qualia exist, and they depend only on themselves.
2. Only the content of consciousness is real, and the universe outside is just a way of describing how one quale is connected with another. (This view was suggested by Ernst Mach; I will make a post about it soon.)
Our warning message can be received at many points "simultaneously," so they don't need to spend additional time exchanging information across the Andromeda galaxy and can start preparing locally.
I asked an AI about it, and it told me that a large radio telescope may suffice. However, the main uncertainty is the receiving equipment. If they are on Proxima, they likely suspect that there is life near the Sun, so constant observation is possible; but the size of the receiver depends on the civilization's Kardashev level.
More advanced civilizations will have larger receiving dishes, possibly the size of Dyson spheres, but such civilizations are farther away (or they would already be here).
Therefore, the distance-to-receiver-size ratio is approximately constant.
I think that METI theorists have such calculations.
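Here is a minimal link-budget sketch based on the Friis transmission equation, showing why the required receiver size grows roughly linearly with distance for a fixed transmitter (which is the sense in which the distance/receiver-size ratio stays about constant). All the numbers in it (transmitter power, dish size, frequency, system temperature, bandwidth) are my own illustrative assumptions, not values from the METI literature.

```python
# A minimal link-budget sketch (Friis transmission equation). All parameter
# values below are illustrative assumptions, not METI-literature figures.
import math

C = 3.0e8                        # speed of light, m/s
FREQ = 1.4e9                     # carrier frequency, Hz (hydrogen-line region)
WAVELENGTH = C / FREQ            # ~0.21 m

P_TX = 1.0e6                     # transmitter power, W (assumed)
D_TX = 70.0                      # transmitting dish diameter, m (assumed)
EFFICIENCY = 0.5                 # aperture efficiency of both dishes (assumed)

K_B = 1.38e-23                   # Boltzmann constant, J/K
T_SYS = 20.0                     # receiver system temperature, K (assumed)
BANDWIDTH = 1.0                  # detection bandwidth, Hz (assumed)
P_MIN = K_B * T_SYS * BANDWIDTH  # noise-floor power the receiver must collect

LIGHT_YEAR = 9.461e15            # metres per light-year


def dish_gain(diameter_m: float) -> float:
    """Gain of a parabolic dish: efficiency * (pi * D / lambda)^2."""
    return EFFICIENCY * (math.pi * diameter_m / WAVELENGTH) ** 2


def required_receiver_diameter(distance_ly: float) -> float:
    """Smallest receiving dish (m) that still collects P_MIN at this distance."""
    d = distance_ly * LIGHT_YEAR
    # Friis: P_rx = P_tx * G_tx * G_rx * (lambda / (4 * pi * d))^2
    path_term = (WAVELENGTH / (4 * math.pi * d)) ** 2
    g_rx_needed = P_MIN / (P_TX * dish_gain(D_TX) * path_term)
    return (WAVELENGTH / math.pi) * math.sqrt(g_rx_needed / EFFICIENCY)


# Proxima, a nearby star, across the galaxy, Andromeda
for ly in (4.2, 100.0, 10_000.0, 2_500_000.0):
    print(f"{ly:>11,.1f} ly -> receiver dish ~ {required_receiver_diameter(ly):,.0f} m")
```

With the transmitter fixed, the required receiver diameter scales linearly with distance; a stronger transmitter (or a Dyson-sphere-sized receiver) shifts everything proportionally, which is the intuition behind the roughly constant ratio.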
I analyzed (in connection with SETI risk) ways to send self-evident data and concluded that the best starting point is to send two-dimensional images encoded in the style of an old-school TV signal.
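The comment above doesn't fix a specific encoding, so here is only a toy sketch of the general idea: a raster scan in which each row of pixel brightnesses is preceded by a reserved sync level that never appears in the data. The values and the sync convention are my own illustration.

```python
# Toy "old-school TV" raster encoding: serialize a 2D image into one amplitude
# sequence, with a reserved sync level marking the start of each scan line.
# The sync value and pixel range are assumptions made for this illustration.
from typing import List

SYNC = -1.0  # amplitude reserved for line sync; pixel values stay in [0, 1]


def encode(image: List[List[float]]) -> List[float]:
    """Flatten rows into one signal: [SYNC, row0..., SYNC, row1..., ...]."""
    signal: List[float] = []
    for row in image:
        signal.append(SYNC)
        signal.extend(row)
    return signal


def decode(signal: List[float]) -> List[List[float]]:
    """Recover the rows by splitting on the sync level."""
    rows: List[List[float]] = []
    for sample in signal:
        if sample == SYNC:
            rows.append([])
        else:
            rows[-1].append(sample)
    return rows


if __name__ == "__main__":
    picture = [[0.0, 1.0, 0.0],
               [1.0, 1.0, 1.0],
               [0.0, 1.0, 0.0]]  # a tiny "plus" glyph
    assert decode(encode(picture)) == picture
```

A receiver with no prior knowledge of the format can notice the sync level recurring at a constant interval, infer the line length, and reconstruct the raster, which is what makes this kind of signal comparatively self-evident.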
No, I was just stating the facts: the discussion of METI risks has continued for decades, and the positions of opponents and proponents are entrenched.
Yes. So here is the choice between two theories of existential risk. One is that dangerous AI is not possible and aliens are near and slow; in that case, METI is dangerous. The other is that superintelligent AI is possible soon and presents the main risk, while aliens are far away. Such a choice boils down to the discussion about AI risk in general.
Actually, any time I encounter a complex problem, I do exactly this: I create a list of all possible ideas and, if I can, assign probabilities. It is time-consuming brute-forcing. See examples:
The table of different sampling assumptions in anthropics
Types of Boltzmann Brains
What AI Safety Researchers Have Written About the Nature of Human Values
[Paper]: Classification of global catastrophic risks connected with artificial intelligence
I am surprised that it is not a standard approach, despite its truly Bayesian nature.
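To make the "list everything, then assign probabilities" step concrete, here is a toy sketch: start with priors over an exhaustive list of hypotheses and renormalize after evidence via Bayes' rule. The hypotheses and numbers are made up for illustration; they are not taken from any of the linked posts.

```python
# Toy illustration of brute-force enumeration plus Bayesian updating.
# Priors and likelihoods below are arbitrary example values.

# Exhaustive (for this example) list of hypotheses with prior probabilities.
priors = {
    "hypothesis A": 0.5,
    "hypothesis B": 0.3,
    "hypothesis C": 0.2,
}

# P(observed evidence | hypothesis), also assumed for illustration.
likelihoods = {
    "hypothesis A": 0.1,
    "hypothesis B": 0.6,
    "hypothesis C": 0.3,
}


def update(priors: dict, likelihoods: dict) -> dict:
    """One Bayesian update: posterior proportional to prior * likelihood, then normalize."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}


for hypothesis, posterior in update(priors, likelihoods).items():
    print(f"{hypothesis}: {posterior:.2f}")
```

The point of listing every idea first is that the normalization only makes sense over a (roughly) exhaustive set; the tables linked above are my attempts to build such sets for specific problems.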