Sounds reasonable, but we also know that for Earth p=1. So it is not something impossible that can never be repeated. What could such a factor be?
This problem disappears if we assume a very large universe with different regions. All regions are real and are like houses. I look at this more formally here: https://www.lesswrong.com/posts/KhwLtJXoAhqfmguzh/sia-becomes-ssa-in-the-multiverse
The self-indication assumption in anthropics implies that we should find ourselves in the universe with the highest concentration of observers (as most of them are there). This shifts the distribution of some random variables in the Drake equation upward, especially the chances of abiogenesis and of interstellar panspermia, which can compensate for the rarity of abiogenesis.
Also, an 84 per cent chance of no aliens in the Milky Way galaxy means that they almost certainly exist in the Virgo supercluster with its 2000 galaxies.
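A quick sanity check of that (my own arithmetic, assuming the 0.84 per-galaxy probability is independent and identical across galaxies):

```python
# If each of the ~2000 Virgo supercluster galaxies independently has a 0.84
# chance of containing no civilization, the chance that all of them are empty
# is vanishingly small.
p_empty_galaxy = 0.84
n_galaxies = 2000

p_all_empty = p_empty_galaxy ** n_galaxies
print(p_all_empty)       # ~1e-152
print(1 - p_all_empty)   # probability that at least one galaxy has aliens: ~1.0
```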
One of the top bloggers in my bubble said that he tries to write the most surprising next sentence when he is working on his posts.
I have had a tetrachromatic experience with a mind machine which flickers different colors into each eye. It overflows some stacks in the brain and creates new colors.
It is unlikely that we live in an untypical civilization. Also, capitalism is in some sense an extension of Darwinian evolution to economic agents, so there is nothing surprising in it.
We are typical, so it is unlikely that aliens will be better.
Writing is uploading for the poor and lowtech bros
I think that there is a small instrumental value in preserving humans. They could be exchanged with an alien friendly AI, for example.
Also, it should be noted that the value of human atoms is very small: these atoms constitute around 10^-20 of all atoms in the Solar system. Any small positive utility of human existence would outweigh the atoms' usefulness.
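A back-of-envelope check of that order of magnitude (my own rough numbers, by mass rather than atom count, which should give a similar order):

```python
# Rough assumed numbers: ~8e9 people at ~60 kg each; the Solar system's mass
# is dominated by the Sun (~2e30 kg).
human_mass = 8e9 * 60            # ~5e11 kg
solar_system_mass = 2e30         # kg

print(human_mass / solar_system_mass)   # ~2e-19, i.e. in the 10^-20..10^-19 range
```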
We also use ants for entertainment - selling ant farms for kids https://www.amazon.com/Nature-Gift-Store-Live-Shipped/dp/B00GVHEQV0
List of cognitive biases affecting judgment of global risks https://www.researchgate.net/publication/366862337_List_of_cognitive_biases_affecting_judgment_of_global_risks/related
We could use such examples to estimate the logical probability that the Goldbach conjecture is false, e.g. as the share of eventually disproved conjectures among all conjectures (somehow normalised by their complexity and initial plausibility).
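A minimal sketch of that base-rate idea, with placeholder counts (not real survey data) and Laplace's rule of succession as the estimator:

```python
# Estimate P(a conjecture of this kind is false) from how many comparable
# conjectures were eventually disproved. The counts below are placeholders.
def p_false_laplace(disproved: int, total: int) -> float:
    """Laplace's rule of succession: (k + 1) / (n + 2)."""
    return (disproved + 1) / (total + 2)

print(p_false_laplace(disproved=3, total=100))   # ~0.04 with these made-up counts
```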
If this is true, then increasing the Earth's amplitude two times will also suffice to double the value.
And maybe we can do it by performing observations less often (if we think that measurement is what causes world splitting – there are different interpretations of MWI). In that case meditation would be really good: fewer observations, less world splitting, more amplitude in our timeline.
If MWI is true, Earth is constantly doubling, so there is no reason to "double Earth"
I linked the MIRI paper because it has a good introduction to logical probability.
Yes. But what also works here is not the randomness of the distribution of primes, but the number of attempts (to get a sum of two primes) which is implied by any sufficiently large number (about N/2). Only very large gaps in the prime distribution would be sufficient to break the statistical argument. There is a postulate that no such gaps exist: https://en.wikipedia.org/wiki/Bertrand%27s_postulate
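A rough illustration of that counting argument (the standard heuristic, ignoring correlations between primes):

```python
import math

# For an even N there are ~N/2 candidate pairs (a, N - a), and each side is
# "prime" with probability ~1/ln(N) under a crude random model, so the expected
# number of Goldbach representations grows roughly like N / (2 * ln(N)**2).
def expected_representations(n: int) -> float:
    return n / (2 * math.log(n) ** 2)

for n in (10**4, 10**8, 10**12):
    print(n, expected_representations(n))   # grows quickly with n
```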
Yes. I thought about finding another example of such a pseudo-rule, but haven't found one yet.
There is a theory that the whole world is just a naturally running prediction process, described in the article "Law without law": https://arxiv.org/pdf/1712.01826.pdf
An LLM predicts the next steps of some story, but there is no agent-like mind inside the LLM which has plans for how the story will develop. It is like a self-evolving illusion, without a director who plans how it will go.
Yes, GPT creates a character, say, a virtual Elon Musk. But there is no other person who is creating Elon Musk; that is, there is no agent-like simulator who may have a plan to torture or reward EM. So we can't say that the simulator is good or bad.
A simulation without simulators doesn't have a problem with theodicy. Current GPTs can be seen as such simulator-less simulations.
That is why we need Benevolent AI, not Aligned AI. We need an AI which can calculate what is actually good for us.
My son started to speak at 6. Now he is 16 and speaks 3 languages in a normal school.
Grabby aliens without red dwarfs
Robin Hanson's grabby aliens theory predicts that the nearest grabby aliens are 1 billion light years away, but this strongly depends on the habitability of red dwarfs (https://grabbyaliens.com/paper).
In this post, the author combines anthropics and Fermi, that is, the idea that we live in the universe with the highest concentration of aliens, limited by their invisibility, and gets an estimate of around 100 "potentially visible" civilizations per observable universe, which at first approximation gives a 1 billion ly distance between them.
...“That civilisation
How can I convert "percents" of progress into multipliers? That is, progress= a*b, but percents assume a+b.
For example, if progress is 23 times, and 65 percent of it is a, how many times is a?
You would do it in log space (or geometrically). For your example, the answer would be 23^0.65 ≈ 7.7.
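A one-line check of that conversion (my own illustration; the 23× and 65% come from the question above):

```python
import math

# Work in log space: a takes 65% of log(23), so a = 23 ** 0.65.
total_progress = 23.0
share_of_a = 0.65

a = math.exp(share_of_a * math.log(total_progress))   # same as 23 ** 0.65
b = total_progress / a
print(a, b)   # ~7.7 and ~3.0; note that a * b == 23, not a + b
```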
Actually, my mental imagery is of low quality, but visual memory works better for me than audio in n-back.
I also had an n-back boost using visualisation, see my shortform.
I think that they are also meant to find the most important problem of all.
N-back hack. (Infohazard!)
There is a way to increase one's performance in N-back, but it is almost cheating, and N-back will stop being a measure of one's short-term memory.
The idea is to imagine writing all the numbers on a chalkboard in a row, as they come in.
Like 3, 7, 19, 23.
After that, you just read the needed number from the string, which is located N positions back.
You don't need to have a very strong visual memory or imagination to get a boost in your N-back results.
I tried it a couple of times and got bored with N-back.
Kasparov was asked: how are you able to calculate all possible outcomes of the game? He said: I don't. I just have a very good understanding of the current situation.
Yes, a SETI attack works only if the speed of civilizational travel is something like 0.5c. In that case a message at light speed covers 8 times more volume than physical travel.
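The factor of 8 is just the cube of the speed ratio (my arithmetic, assuming spherical expansion at c for the message versus 0.5c for ships):

```python
# After time t the message reaches radius c*t and the ships reach 0.5*c*t,
# so the reachable volumes differ by the cube of the speed ratio.
message_speed = 1.0   # in units of c
travel_speed = 0.5    # in units of c

print((message_speed / travel_speed) ** 3)   # 8.0
```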
And yes, it will also be destructive, but in a different manner: not bombs, but AIs and self-replicating nanobots will appear.
There is a greater chance of observing self-replicating SETI messages than those that destroy planets
I feel that there is one more step in my thinking:
Repeat
Yes, the more remote a person is, the larger the number of other people who can affect them from the same distance, and typically the share of my impact is very small, unless I am in a very special position which could affect a future person.
For example, suppose I am planting a landmine which will self-liquidate either in 100 or in 10,000 years, and while self-liquidating it will likely kill a random person. If I discount future people, I will choose 10,000 years, even if it will kill more people in the future. However, if I think that humanity will likely be extinct by then, it may still be a reasonable bet.
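A toy expected-harm comparison for the landmine example (the probabilities are made up for illustration):

```python
# The later option can win on expected deaths alone if humanity is unlikely to
# still be around, without any intrinsic discounting of future people.
p_humans_alive_100y = 0.9
p_humans_alive_10000y = 0.1
deaths_if_triggered = 1

print(p_humans_alive_100y * deaths_if_triggered,      # expected harm of the 100-year mine
      p_humans_alive_10000y * deaths_if_triggered)    # expected harm of the 10,000-year mine
```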
One needs to be near a bridge to use artillery, and destroying it still requires high-precision strikes with expensive guided missiles; maybe 100 of them were used against Antonov's bridge.
The best targets for tactical nukes are bridges. It is very difficult to destroy a bridge with conventional artillery: Antonov's bridge still stands, as does the Crimea bridge. A tactical nuke in the 0.1-1 kt range would completely destroy a bridge.
Other possible targets are bunkers and large factories.
There is also a winner's curse risk: if a person is too good, s/he could have some secret disadvantage or may leave me quickly, as s/he will have many better options than me. This puts a cap on how far above the median I should look. Therefore the first few attempts have to establish the median level of the people available to me.
Another problem is that any trial run has a cost, like years and money spent. If I search for too long, I will spend less time with my final partner.
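A toy Monte Carlo of that trade-off (entirely my own toy model: candidates drawn from a normal distribution, the first k "trials" used only to estimate the median, then commit to the first later candidate above it; time left with the partner shrinks as k grows):

```python
import random

# Toy model: longer exploration gives a better estimate of the median but
# leaves fewer "years" with the chosen partner. All numbers are illustrative.
def simulate(k_trials: int, n_candidates: int = 30, runs: int = 10_000) -> float:
    total = 0.0
    for _ in range(runs):
        candidates = [random.gauss(0, 1) for _ in range(n_candidates)]
        estimated_median = sorted(candidates[:k_trials])[k_trials // 2]
        chosen = next((c for c in candidates[k_trials:] if c > estimated_median),
                      candidates[-1])
        time_left = n_candidates - k_trials     # crude proxy for time with the partner
        total += chosen * time_left             # crude proxy for total value
    return total / runs

for k in (3, 5, 10, 15):
    print(k, round(simulate(k), 2))
```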
My mind generated a list of possible benchmarks after reading your suggestions:
Wireheading benchmark – the tendency of an agent to find unintended shortcuts to its reward function or goal. See my comment on the post.
Unboxing benchmark – the tendency of an agent to break out of the simulation. Could be tested in simulations of progressive complexity.
Hidden thoughts benchmark – the tendency of an agent to hide its thoughts.
Incorrigibility benchmark – the tendency of the agent to resist changes.
Unstoppability benchmark –...
If we take the third-person view, there is no update until I am over 120 years old. This approach is more robust as it ignores differences between perspectives and is thus more compatible with Aumann's theorem: insiders and outsiders will have the same conclusion.
Imagine that there are two worlds:
1: 10 billion people live there;
2: 10 trillion people live there.
Now we get information that there is a person from one of them who has a survival chance of 1 in a million (but no information on how he was selected). This does not help choose between worlds as suc...
The surprise here depends on the probability of survival. If half of the people on Earth were Bobs, and the other half were Alices, then a 0.01 chance of survival means that around 40 million Bobs will survive. There is no surprise that some of them survive, neither for Bob nor for Alice.
For example, if you survive until 100 years old, it is not evidence for quantum immortality.
If, however, the survival chance is 1 in 10^12, then even for the whole Earth there would likely be no surviving Bobs under the Copenhagen interpretation. So the existence of a Bob is evidence against it.
For example, if I naturally survive until 150 years old by pure chance, it is evidence for MWI.
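A quick expected-count comparison for the two cases above (my arithmetic, assuming roughly 8 billion people with half of them being Bobs):

```python
# Expected number of surviving Bobs under a single-world (Copenhagen-style) reading.
n_bobs = 4e9

for p_survival in (1e-2, 1e-12):
    print(p_survival, n_bobs * p_survival)
# 0.01  -> ~4e7 expected survivors: survivors are unsurprising
# 1e-12 -> ~4e-3 expected survivors: even one survivor is evidence against a single world
```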
I encountered the Lebowski theorem as an argument which explains the Fermi paradox: all advanced civilizations or AIs wirehead themselves. But here I am not convinced.
For example, if a civilization consists of many advanced individuals and many of them wirehead themselves, then the remaining ones will be under the pressure of Darwinian evolution, and eventually only those survive who find ways to perform space exploration without wireheading. Maybe they will be limited specialized minds with very specific ways of thinking – and this could explain absurdity...
I sent my above comment to the following competition and recommend that you send your post too: https://ftxfuturefund.org/announcing-the-future-funds-ai-worldview-prize/
Yes, very good formulation. I would add "and most AI alignment failures are types of the meta Lebowski rule".
Meta: I was going to write a post "Subtle wireheading: gliding on the surface of the outer world" which describes most AI alignment failures as forms of subtle wireheading, but I will put its draft here.
Typically, it is claimed that advanced AI will be immune to wireheading, as it will know that manipulating its own reward function is wireheading, and thus will not perform it but will instead try to reach goals in the outer world.
However, even acting in the real world, such an AI will choose the way which requires the least effort to create maximum utility, therefore simultaneo...
I think that anthropics beats illusionism. If there are many universes, in some of them consciousness (=qualia) is real, and because of anthropics I will find myself only in such universes.
BTW, if the blackmailer is a perfect predictor of me, he is running my simulation. Thus there is a 0.5 chance that I am in this simulation. Thus, it may still be reasonable to stick to my rules and not pay, as the simulation will be turned off anyway.