This is a draft written by FSH, a Postdoc at Hong Kong University, as part of the Center for AI Safety Philosophy Fellowship. It is meant to solicit feedback.
Abstract:
If a future AI system can enjoy far more well-being than a human per unit of resources, what would be the best way to allocate resources between these future AIs and our future descendants? On total utilitarianism, the answer is obvious: one should give everything to the AIs. However, it turns out that every welfarist axiology on the market gives this same recommendation. Without resorting to deontological normative theories that suggest that we ought not always to create the world with the most value, or...