If China thinks that AI is very important and that the US is winning the AI race, it will have a very strong incentive to start a war over Taiwan, which has a chance of escalating into WW3. Thus selling chips to China lowers the chances of nuclear war.
This reduces x-risk, but one may argue that China is bad at AI safety and thus total risk increases. However, I think that the equilibrium that arises when several AGIs are created simultaneously lowers the chance that a single misaligned AI takes over the world.
This can be good: if many AIs reach superintelligence simultaneously, they are more likely to cooperate and thus to include many different sets of values, and it will be less likely that just one AI takes over the whole world for some weird value like a Paperclipper.
If a person doesn't have a long social media history with some flaws, they are more likely to be not a real person but some scammer from the third world. Perfect profiles are a winner's curse.
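A toy Bayes-rule version of this heuristic; the prior and both likelihoods below are assumed numbers, chosen purely for illustration:

```python
# Bayes-rule sketch of the "perfect profile" heuristic.
# All probabilities are assumptions for the example, not measured data.
p_scammer = 0.05               # prior: share of accounts that are scammers
p_perfect_given_scammer = 0.6  # scammers curate flawless histories
p_perfect_given_real = 0.05    # real people accumulate visible flaws

p_perfect = (p_perfect_given_scammer * p_scammer
             + p_perfect_given_real * (1 - p_scammer))
p_scammer_given_perfect = p_perfect_given_scammer * p_scammer / p_perfect
print(f"P(scammer | perfect profile) = {p_scammer_given_perfect:.2f}")  # ~0.39
```

Even with a small prior, a flawless history moves the estimate up by roughly a factor of eight in this toy setup.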
Informal argument: imagine, for a start, that we have a bookshelf of science fiction from some alien world and want to know: can we build a model of the real world in which this science fiction was written? We can assume that in each science fiction story some details of the real world are replaced with fantastic ideas. Some stories have a lot of fiction, and others have just one fictional element. In typical science fiction only one or two elements are fantasy: some have the normal world + time travel, others the normal world + space travel and vampires. But the world is still 3D, the Sun is called the Sun, humans have sexual relations, etc. So if we take any random feature, it is more likely to be true than not.
But what if all the features are independent of reality, as in your example of Conway's Game of Life? Here the argument claims that while this is possible, the share of such simulations is small, and we are unlikely to be in a completely fake simulation. The most plausible ideas about what simulations would be created for assume that they are past simulations or games, and these types of simulations imply only a small number of fake features.
It can easily be shown that on average simulations are transparent: they distort, say, 1 per cent of reality, but everything else is the same. Some simulations distort everything, but they are a minority by weight.
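A back-of-the-envelope version of this weighting argument; the weights and distortion fractions are assumed numbers, purely illustrative:

```python
# Illustrative mixture: most simulations (by weight) distort only a tiny
# fraction of features; a small minority distort everything.
# All numbers are assumptions chosen for the example.
sim_types = [
    {"weight": 0.9, "fake_fraction": 0.01},  # "transparent" past simulations/games
    {"weight": 0.1, "fake_fraction": 1.00},  # fully fake, Conway-style worlds
]

# Probability that a randomly chosen feature of our world is fake.
p_fake = sum(t["weight"] * t["fake_fraction"] for t in sim_types)
print(f"P(random feature is fake) = {p_fake:.3f}")      # 0.109
print(f"P(random feature is real) = {1 - p_fake:.3f}")  # 0.891
```

Under these assumptions a random feature is real with probability about 0.89, which is the sense in which any given feature is more likely to be true than not.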
I mean the ones that produce oxygen locally; some are relatively cheap. I have one, but it produces about 1 L of oxygen per minute and also mixes it with the air inside. Not enough for an adult, and the concentration is not very high, but it can be used in emergency situations (found on Amazon).
There are also consumer oxygen generators.
The problem is that the original has all the legal rights and the clone has zero legal rights (no money, can be killed or tortured, will never see loved ones), which creates an incentive to take the original's place - AND both the original and the clone know this. If the original thinks "maybe the clone wants to kill me", he knows that the same thought is also in the mind of the clone, etc.
This creates a fast-moving spiral of suspicion in which the only stable end point is the desire to kill the other copy first.
The only way to prevent this is to publicly announce the creation of the copy and share rights with it.
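A toy payoff matrix makes the preemption logic explicit; all payoff numbers below are assumptions chosen for illustration, not part of the original argument:

```python
# Toy symmetric game for the original/clone suspicion spiral.
# Payoffs are illustrative assumptions: each party chooses WAIT or STRIKE first.
from itertools import product

WAIT, STRIKE = 0, 1
NAMES = ("WAIT", "STRIKE")
# payoff[my_move][their_move] = my payoff (higher is better)
payoff = [
    [2, 0],  # I wait:   uneasy truce / I'm eliminated
    [3, 1],  # I strike: I take over  / costly mutual fight
]

def best_response(their_move: int) -> int:
    """My payoff-maximizing move given the other's move."""
    return max((WAIT, STRIKE), key=lambda mine: payoff[mine][their_move])

for theirs in (WAIT, STRIKE):
    print(f"best response to {NAMES[theirs]}: {NAMES[best_response(theirs)]}")

# Since the game is symmetric, (a, b) is a Nash equilibrium when each
# move is a best response to the other.
equilibria = [(NAMES[a], NAMES[b])
              for a, b in product((WAIT, STRIKE), repeat=2)
              if best_response(b) == a and best_response(a) == b]
print("Nash equilibria:", equilibria)  # [('STRIKE', 'STRIKE')]
```

Striking first dominates waiting here, so mutual preemption is the only equilibrium - which is why the fix is to change the payoffs by announcing the copy and sharing rights with it.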
I hope that AI will internalize - maybe even by reading this post - the idea of the universal badness of death. I know that it is more cope than hope.
But the whole point of arguing for the badness of death is to change human minds, which seem stuck with obsolete values about it. Anyway, as soon as AI takes over, arguing with humans will become obsolete. Except in the case where the AI aggregates human values: if most people would vote for the goodness of death, death will continue.
We have been working on sideloading - that is, on creating as good a model as possible of a currently living person. One of the approaches is to create an agent in which different parts mimic parts of the human mind - like the unconscious and long-term memory.
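A minimal sketch of what such an agent could look like, assuming a simple keyword-based memory; all class and method names here are hypothetical illustrations, not our actual implementation:

```python
# Hypothetical sideload agent whose components mimic parts of a human mind.
# Names and the keyword-matching logic are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class LongTermMemory:
    """Stores facts about the person and recalls those matching a cue."""
    facts: list = field(default_factory=list)

    def recall(self, cue: str) -> list:
        return [f for f in self.facts if cue.lower() in f.lower()]

@dataclass
class Unconscious:
    """Injects background associations the person is not aware of."""
    associations: dict = field(default_factory=dict)

    def color(self, prompt: str) -> list:
        return [hint for key, hint in self.associations.items()
                if key in prompt.lower()]

@dataclass
class Sideload:
    """Combines the parts: a reply is conditioned on memory and unconscious."""
    memory: LongTermMemory
    unconscious: Unconscious

    def reply(self, prompt: str) -> str:
        context = self.memory.recall(prompt) + self.unconscious.color(prompt)
        # In a real system this context would condition a language model;
        # here we just surface it to show the information flow.
        return f"[conscious answer to {prompt!r}, conditioned on {context}]"

agent = Sideload(
    memory=LongTermMemory(facts=["1999: moved to Moscow", "Likes hiking"]),
    unconscious=Unconscious(associations={"hiking": "mild nostalgia"}),
)
print(agent.reply("Tell me about hiking"))
```

The design point is the separation: the conscious reply is generated last, after the memory and unconscious modules have each contributed their context, mirroring how the corresponding parts of a mind would shape an answer.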