Agree that an AI will run a lot of past simulations to predict possible variants of world history, and even to try to solve the Fermi paradox and/or predict the behaviour of alien AIs. But this could be outweighed by an FAI which tries to get most measure into its hands, for example to cure past sufferings via indexical uncertainty for any possible mind.
Yes, indeed, "measure monsters" could fight to get the biggest share of measure over desirable observers, thus effectively controlling them. Here I assume that "share of measure" equals the probability of finding oneself in that share under SIA. An example of such a "measure monster" may be a Friendly AI which wants to prevent most people from ending up in the hands of an Evil AI, so it creates as many copies of people as it can.
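A toy calculation of this, assuming SIA weighting by raw copy counts (the agent names and all numbers below are purely illustrative, not taken from any actual model):

```python
# Toy SIA calculation: under copy-count weighting, the probability of
# finding yourself under a given agent is proportional to the number of
# your copies that agent runs. Numbers are purely illustrative.

def sia_shares(copies_by_agent):
    """Return each agent's share of measure under SIA (copy-count weighting)."""
    total = sum(copies_by_agent.values())
    return {agent: n / total for agent, n in copies_by_agent.items()}

shares = sia_shares({"FAI": 10**9, "Evil AI": 10**6})
# The FAI's extra copies dominate the measure: its share is ~0.999.
print(shares)
```

The point is just that whichever agent can afford to run more copies "wins" almost all of the measure, regardless of which agent came first.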
Alternatively, a very strong and universal Great Filter Doomsday argument is true: Earth is the biggest possible concentration of observers in the universe and will go extinct soon. Larger civilizations are extremely rare.
But I think that you want to say that the SIA prediction that we are already inside a "measure monster" is false, since we should then observe many more observers, maybe a whole Galaxy densely packed with them.
You are most likely in the post-singularity civilization. But in a simulation which it created. So no SIA refutation here.
SIA can be made to deal in densities, as one must when infinities are involved.
I didn't get what you mean here.
One may also confuse the density of observers with the number of observers. You are more likely to be in the region with the highest number of observers, but not necessarily the one with the highest density. A region can have the biggest number of observers not because its density is higher, but because it is larger in size.
For example, Tokyo is the densest city, but most people live in rural India.
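The arithmetic behind this can be made explicit (the density and area figures below are rough, illustrative values, not real demographic data):

```python
# Density vs. total count: under self-location reasoning you expect to
# find yourself in the region with the most observers, not the densest
# one. Figures below are rough illustrative values only.

regions = {
    "Tokyo":       {"density": 6_000, "area": 2_200},      # people/km^2, km^2
    "rural India": {"density": 400,   "area": 2_000_000},
}

counts = {name: r["density"] * r["area"] for name, r in regions.items()}
total = sum(counts.values())
p_self = {name: n / total for name, n in counts.items()}
# "rural India" wins on total count despite a ~15x lower density,
# because its area is ~1000x larger.
print(p_self)
```

So a "measure monster" need not look dense from the inside; it can win on sheer size.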
The SSA/SSI reference class of simulation-paranoid observers is huge
It is huge only if we include simulated beings, and thus assume that the simulation hypothesis is true. If not, it is only a few thousand LW and Bostrom readers.
There is a computer-independent version of the simulation argument: it says that illusions are computationally cheaper than most real things and thus more frequent. Examples: movies, dreams.
The idea of simulation is a type of such infohazard, as a person may spend a lot of time guessing whether he is real and, if not, what type of simulation he lives in.
Mueller, in his article "Law without law: from observer states to physics via algorithmic information theory", suggested using Solomonoff induction to go directly from one observer-state to another.
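A rough sketch of the idea (my gloss, not Mueller's exact notation): the probability of the next observer-state $y$ given the current state $x$ is taken to be the conditional algorithmic probability

```latex
P(y \mid x) \;=\; \frac{M(xy)}{M(x)}, \qquad
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

where $M$ is a universal semimeasure: the sum of $2^{-|p|}$ over all programs $p$ whose output on a universal machine $U$ begins with $x$. No external world enters the formula; transitions between observer-states are weighted directly by their algorithmic simplicity.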
Thus he probably escapes the "world and claws" problem, but ends up with a variant of Egan's dust theory in a mathematical world.
How could it be connected with GPT-like language models?
AI may find ways to satisfy such people without causing harm to society. It could give them the option to hunt robots, and even to believe that this actually harms the ruling ASI.
In Telegram channels related to cryonics.