The synthetic population bomb:
In a world of abundance, it will be easy to create digital people, and those people will have as much of a claim to resources as anyone else. If we create people faster than we extract new resources, then the pie grows but each person's slice shrinks. In the end, we live in a technological wonderland but own fewer and fewer atoms.
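The pie arithmetic can be made concrete with a toy model. The growth rates below are purely illustrative assumptions, not claims from the discussion; the only point is that when population grows faster than resources, the total grows while the per-capita share shrinks:

```python
# Toy model: total resources grow 2% per period, population 5% per period.
# (Both rates are illustrative assumptions.)
resources, population = 100.0, 10.0
initial_share = resources / population

for period in range(10):
    resources *= 1.02
    population *= 1.05

share = resources / population  # each person's slice of the pie

print(resources > 100.0)          # True: the pie grew
print(share < initial_share)      # True: each slice shrank
```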
it will be easy to create digital people, and those people will have as much of a claim to resources as anyone else
In such a world, it seems necessary to require that the resources a new person will need are provisioned in advance, as a precondition of it being OK to create them.
then the pie grows but each person's slice shrinks
If specifically those who decide to create new people have to provide for them, then only their slices of the pie shrink. This way, the incentives work against Malthusian redistribution.
Good points. It's still pretty fraught though:
Funding your own existence doesn't lead to Malthusian issues, if it's not at the expense of those who didn't consent to this externality. The reachable universe, though, won't be getting any more resources, so property rights should be denominated in shares of the reachable universe. This prevents a race to claim everything by minting new entities at the expense of those who are less immediately grabby (which can be a subtle long-term alignment issue even if blatant takeover is averted).
Funding your own existence doesn't lead to Malthusian issues, if it's not at the expense of those who didn't consent to this externality.
Had to think about this for a while, but I'm assuming this means that by funding your own existence through consensual economic transactions, you're necessarily generating as much value as you consume, so you're not reducing the amount of pie available for everyone else.
That seems vaguely reasonable. I guess an important thing is then that new people aren't able to extract any resources through purely political means, which goes to your point about having hard limits on who provides for the new people.
Somehow it still feels suspicious. Like, if today we introduce a trillion initially-broke self-funding humans into the world, who all need a place to live, surely that makes it harder for me to rent an apartment, right? I'll admit that I know very little about economics.
I often wonder if there is a No-Alignment Theorem saying you can't always control the actions of an intelligent entity. Maybe something with the flavor of the undecidability of the halting problem or Gödel's incompleteness theorems, where the issue stems from the fact that an intelligent entity can model itself and reflect from a distance on the goals you've given it.
I doubt such a theorem exists, but it's fun to think about. It would also require a mathematical formulation of an intelligent entity, which seems to be quite a ways off. And even if such a theorem does exist, it would almost certainly be irrelevant to alignment in practice, the same way Gödel's incompleteness theorems do not affect the day-to-day work of mathematicians.
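For reference, the "flavor" being invoked is the standard diagonalization behind the halting problem's undecidability: any purported decider can be fed a program built to do the opposite of whatever the decider predicts. This is a textbook sketch, not anything specific to alignment:

```python
def diagonalize(halts):
    """Given any purported halting decider halts(f, x) -> bool,
    construct a program that the decider must get wrong."""
    def contrary(x):
        if halts(contrary, x):
            # Decider says we halt, so loop forever instead.
            while True:
                pass
        # Decider says we loop forever, so halt instead.
        return None
    return contrary

# Demo with a (necessarily wrong) stub decider that always answers
# "doesn't halt": the contrary program promptly halts, refuting it.
fake_decider = lambda f, x: False
c = diagonalize(fake_decider)
c(0)  # halts, so fake_decider was wrong about (c, 0)
```

The hypothetical No-Alignment Theorem would presumably swap "predicts halting" for "predicts obedience to a goal", with the entity's self-model playing the role of `contrary`.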