There are many different ideas about how utilitarians should value the number of future people. Unfortunately, it is difficult to take all of them into account when deciding among public policies, charities, etc. Arguments about principles like total utilitarianism, average utilitarianism, critical-level utilitarianism, etc. often come from a "global" perspective:
- Does the principle imply that we should have a very large population with a very low quality of life? (Repugnant Conclusion)
- If average utility is negative, does the principle imply that it's good to add additional people with slightly less negative utility? (Sadistic Conclusion)
- Is adding additional people valuable when the population is small, but less valuable when it is large? If so, how large does a population have to be to be considered "large"? ("diminishing marginal value" of people)
What these thought experiments have in common is that they aren't very helpful for making decisions. For instance, simply adding the condition "avoid the Repugnant Conclusion" to a cost-benefit analysis isn't very useful, since it gives no concrete estimate of the value of additional lives. In this post, I'll give a heuristic that lets total, average, and critical-level utilitarianism be analyzed the same way for most decisions. For simplicity, I'll assume that everyone is identical; if people aren't identical, you need to explicitly normalize utility functions before comparing them, but as long as you do that, the heuristic is still valid.
Suppose you have N people with utilities u1, ..., uN, and average utility uavg. Total utilitarianism (TU) would maximize the objective function wTU(N, uavg) = N*uavg. Average utilitarianism (AU) would maximize wAU(N, uavg) = uavg, and critical-level utilitarianism (CLU) would maximize wCLU(N, uavg) = N*(uavg − u0) for some "critical utility" u0. The interpretation is that only lives with utility above u0 are worth living.
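For concreteness, the three objective functions can be written as a short sketch (the function and variable names are mine, chosen to match the notation above):

```python
def w_TU(N, u_avg):
    """Total utilitarianism: the sum of everyone's utility."""
    return N * u_avg

def w_AU(N, u_avg):
    """Average utilitarianism: the average utility alone."""
    return u_avg

def w_CLU(N, u_avg, u0):
    """Critical-level utilitarianism: total utility in excess of u0."""
    return N * (u_avg - u0)

# Sanity check: with u0 = 0, CLU reduces to TU.
print(w_CLU(100, 5.0, u0=0.0) == w_TU(100, 5.0))  # True
```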
It is easy to use CLU in a cost-benefit analysis: creating an additional person with utility u is exactly as valuable as raising the utility of an existing person from u0 to u. For example, if utility is estimated using income, and $1000/year is the income level corresponding to u0, then creating a person with an income of $2000/year is about as good as doubling the income of someone making $1000/year. TU is the special case of CLU with u0 = 0, but if there is disagreement about what "zero utility" means, you can translate each proposed zero point into an income level to gauge the magnitude of the disagreement - a disagreement between $400 and $500/year is much less serious than one between $400 and $40,000/year.
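Here is the income example as code, assuming (purely for illustration; the post doesn't commit to a particular utility-of-income function) that utility is logarithmic in income:

```python
import math

def u(income):
    # Assumed for illustration: utility is logarithmic in annual income.
    return math.log(income)

u0 = u(1000)  # critical level set at a $1000/year income

# CLU value of creating a new person earning $2000/year:
new_person = u(2000) - u0

# Value of doubling an existing $1000/year income to $2000/year:
doubling = u(2000) - u(1000)

print(new_person == doubling)  # True -- and this holds for any utility
                               # function, since both equal u(2000) - u(1000)
```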
In general, AU is not a special case of CLU: CLU's objective function is affected by pure changes in population, while AU's is not (∂wCLU/∂N != 0, unless uavg = u0). However, for small changes in N and uavg, AU is equivalent to CLU with u0 = uavg. So although AU and CLU are very different "globally", they are equivalent "locally" with the right choice of u0.
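The local equivalence can be spelled out with a first-order expansion (a sketch in the post's notation):

```latex
dw_{CLU} = \frac{\partial w_{CLU}}{\partial N}\,dN
         + \frac{\partial w_{CLU}}{\partial u_{avg}}\,du_{avg}
         = (u_{avg} - u_0)\,dN + N\,du_{avg}
```

Setting u0 = uavg eliminates the dN term, leaving dwCLU = N*duavg = N*dwAU: for small changes, CLU's objective moves in proportion to AU's, so the two rank the options identically.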
How small is a small change? Define the relative value of two choices as r = (change in w under Choice 1)/(change in w under Choice 2). If r > 1, Choice 1 is better, and if r < 1, Choice 2 is better. The discrepancy between AU and CLU is then indicated by rAU / rCLU: if AU favors Choice 1 more than CLU does, this ratio will be larger. As it turns out, rAU / rCLU ≈ 1 - (ΔN / N) to first order, where ΔN is the difference between the population changes under the two choices. So if the population is 1% higher under Choice 1 than under Choice 2, the discrepancy is only 1%, and as long as r is not extremely close to 1, AU and CLU will agree on which choice is better.
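A quick numerical check of the first-order formula, using made-up numbers (a choice that adds 1% to a 7-billion population versus one that leaves it unchanged):

```python
N, u_avg = 7_000_000_000, 1.0  # assumed baseline population and average utility
u0 = u_avg                     # choose CLU's critical level to match AU locally

def w_AU(n, u):  return u
def w_CLU(n, u): return n * (u - u0)

# Each choice changes (N, u_avg) by (dN, du):
dN1, du1 = 70_000_000, 0.010   # Choice 1: +1% population, +0.010 avg utility
dN2, du2 = 0,          0.005   # Choice 2: population unchanged

def r(w):
    # Relative value: (change in w under Choice 1) / (change under Choice 2).
    d1 = w(N + dN1, u_avg + du1) - w(N, u_avg)
    d2 = w(N + dN2, u_avg + du2) - w(N, u_avg)
    return d1 / d2

discrepancy = r(w_AU) / r(w_CLU)
print(discrepancy, 1 - (dN1 - dN2) / N)  # both come out near 0.99
```

With these numbers, AU gives r = 2 and CLU gives r ≈ 2.02: a 1% discrepancy, exactly as the formula predicts, and far too small to flip the comparison.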
But 1% of the world population is 70 million people, and virtually no policy will have an effect that large. So when applying population ethics to real decisions, I think it's best to act as if CLU is true, and to frame disagreements as disagreements about the right value of u0 and the income level that corresponds to it. That way, it's much easier to see the practical implications of your viewpoint, and people who disagree in principle may find that they agree in practice about what u0 should be, and therefore about how to choose the best policy/charity/cause/etc. The main exception is existential risk prevention, where success would change the population by a very large amount.
PDF with detailed derivations (uses slightly different notation): https://drive.google.com/file/d/0B-zh2f7_qtukMFhNYkR4alRsSFk/edit?usp=sharing