There are also consumer oxygen generators.
The problem is that the original has all the legal rights while the clone has none (no money, can be killed or tortured, will never see loved ones), which creates an incentive for the clone to take the original's place - AND both the original and the clone know this. If the original thinks "maybe the clone wants to kill me", he knows that the same thought is also in the clone's mind, and so on.
This creates a fast-moving spiral of suspicion, in which the only stable end point is the desire to kill the other copy first.
The only way to prevent this is to announce the creation of the copy publicly and to share rights with it.
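To make the spiral concrete, here is a toy 2x2 preemption game. The payoff numbers are my own illustrative assumptions, not anything from the comment above; the point is only that with such asymmetric stakes, striking first strictly dominates waiting, so mutual preemption is the unique equilibrium.

```python
from itertools import product

# Toy preemption game: each player (original, clone) chooses Wait or Strike.
# Payoff numbers are illustrative assumptions, not from the original comment.
actions = ["Wait", "Strike"]

def payoff(me, other):
    if me == "Strike" and other == "Wait":
        return 1    # eliminate the rival, keep (or take) the legal identity
    if me == "Wait" and other == "Strike":
        return -10  # killed by the rival
    if me == "Strike" and other == "Strike":
        return -5   # mutual fight, bad for both
    return 0        # both wait - stable only if neither fears the other

# Find Nash equilibria: profiles where neither player gains by deviating.
for a, b in product(actions, actions):
    if payoff(a, b) == max(payoff(x, b) for x in actions) and \
       payoff(b, a) == max(payoff(x, a) for x in actions):
        print(f"Equilibrium: ({a}, {b})")  # prints only (Strike, Strike)
```

Publicly sharing rights changes the payoffs - waiting no longer risks losing everything - which is exactly what removes the dominance of striking first.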
I hope that AI will internalize - maybe even by reading this post - the idea of the universal badness of death. I know that this is more cope than hope.
But the whole point of arguing for the badness of death is to change human minds, which seem stuck with obsolete values about it. Anyway, as soon as AI takes over, arguing with humans will become obsolete - except in the case where the AI aggregates human values: if most people vote for the goodness of death, death will continue.
https://www.lesswrong.com/posts/iBg6AAG72wqyosxAk/the-badness-of-death-in-different-metaethical-theories
Actually, I have a metaethical classification in my - again, not yet published - article about the badness of death.
It needs a large revision, as a lot of the work can now be done by LLMs. Levenchuk is now making 1M-size prompts which teach an LLM "systems thinking".
Yes, but processing 3D and higher dimensions is difficult, and representing them on paper is also difficult. Therefore, several 2D slices of the mental hyperspace can work.
You can have a look at the last version, but it is in Russian.
I create a two-dimensional matrix of the two most important characteristics, which I hope will capture most of the variability, and use them as the x and y axes. For example, for AI risk these could be the number of AIs and the AIs' IQ (or time from now). It is Descartes' method.
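A minimal sketch of this in Python; the specific axis labels below are hypothetical illustrations, not values from the comment:

```python
from itertools import product

# Two axes that (hopefully) capture most of the variability; each cell of
# the Cartesian product is a prompt to look for a distinct scenario.
num_ais = ["one singleton", "a few competing AIs", "many AIs"]
ai_iq = ["sub-human", "human-level", "strongly superhuman"]

for x, y in product(num_ais, ai_iq):
    print(f"{x:22} x {y}")
```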
There are other tricks to collect more ideas for the list: reading the literature, asking a friend, brainstorming, money prizes.
I created a more general map of methods of thinking but haven't finished it yet.
Actually, any time I encounter a complex problem, I do exactly this: I create a list of all possible ideas and - if I can - probabilities. It is time-consuming brute-forcing (a minimal sketch follows the examples below). See examples:
The table of different sampling assumptions in anthropics
What AI Safety Researchers Have Written About the Nature of Human Values
[Paper]: Classification of global catastrophic risks connected with artificial intelligence
I am surprised that this is not a standard approach, despite its truly Bayesian nature.
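A minimal sketch of the procedure (the hypothesis names, priors, and likelihoods are placeholders I made up): enumerate every option you can find, attach rough probabilities, and update them on evidence.

```python
# Brute-force Bayesian bookkeeping over an exhaustive list of hypotheses.
# Names and numbers are placeholders, not from the linked posts.
priors = {"hypothesis A": 0.5, "hypothesis B": 0.3, "hypothesis C": 0.2}

# Rough likelihood of some observed evidence under each hypothesis.
likelihood = {"hypothesis A": 0.1, "hypothesis B": 0.6, "hypothesis C": 0.3}

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.2f}")
```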
I mean the ones that produce oxygen locally; some are relatively cheap on Amazon. I have one, but it produces only about 1 L of oxygen per minute and also mixes it with the air inside. That is not enough for an adult, and the concentration is not very high, but it can be used in emergency situations.
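Rough arithmetic on why 1 L/min is marginal for an adult; the physiology figures below are standard rules of thumb that I am adding, not numbers from the comment:

```python
# An adult at rest consumes roughly 0.25 L of pure O2 per minute, so 1 L/min
# of ~90% O2 covers consumption in principle - the problem is dilution.
vo2_rest = 0.25       # L/min, approximate resting O2 consumption of an adult
device_flow = 1.0     # L/min of enriched output (from the comment)
device_purity = 0.90  # assumed typical concentrator purity

pure_o2 = device_flow * device_purity  # ~0.9 L/min of pure O2

# Clinical rule of thumb: each 1 L/min of supplemental flow raises the
# inspired O2 fraction by about 4 percentage points above the 21% in air.
fio2 = 0.21 + 0.04 * device_flow
print(f"Delivered O2: {pure_o2:.2f} L/min, approx. inspired fraction: {fio2:.0%}")
```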