People want different things, and the possible disagreement-resolving mechanisms include the various flavors of utilitarianism.
In this view, the fundamental issue is whether you want the new entity to be directly counted in the disagreement-resolving mechanism. If the new entity is ignored (except for its impact on the utility functions of pre-existing entities, including moral viewpoints if preference utility is used*), then there's no need to be concerned with average vs. total utilitarianism.
A general policy of always including the new entity in the disagreement-resolving mechanism would be extremely dangerous (utility monsters). Maybe it can be considered safe to include them under limited circumstances, but the Repugnant Conclusion indicates to me that new entities being similar to existing entities is NOT sufficient to make it safe to always include them.
(*) hedonic utility is extremely questionable imo - if you were the only entity in the universe and immortal, would it be evil not to wirehead?
Right - it's a little misleading to cast the decision procedure as if it were some person-independent thing. If you make a decision based on how happy you think the puppy will be, it's not because some universal law forced you to against your will; it's because you care how happy the puppy will be.
If there's some game-theory thing going on where you cooperate with puppies in exchange for them cooperating back (much like how Bertrand and Cedric are cooperating with each other), that's another reason to care about the puppy's preferences, but I don't think actual puppies are that sophisticated.
Sure, but there's still a meaningful question whether you'd prefer many moderately happy puppies or few very happy puppies to exist. Maybe tomorrow you'll think of a compelling intuition one way or the other.
Sure. But it will be my intuition, and not some impersonal law. This means it's okay for me to want things like "there should be some puppies, but not too many," which makes perfect sense as a preference about the universe, but practically no sense in terms of population ethics.
I don't know why the trade-off between population size and average utility feels like it needs to be mathematically justified; that function seems to be as much determined by arbitrary evolutionary selection as the rest of our utility functions.
Well, it would be nice if we happened to live in a universe where we could all agree on an agent-neutral definition of the best actions to take in each situation. It seems that we don't live in such a universe, and that our ethical intuitions are indeed sort of arbitrarily created by evolution. So I agree we don't need to mathematically justify these things (and maybe it's impossible), but I wish we could!
I just thought of an argument that pulls toward average utilitarianism. Imagine I'm about to read a newspaper that will tell me the average happiness of people on Earth: is it 8000 or 9000 "chocolate equivalent units" per person? I'd much rather read the number 9000 than 8000. In contrast, if the newspaper is about to tell me whether the Earth's population is 8 or 9 billion people, I don't feel any strong hopes either way.
Of course there's selfish value in living in a more populous world: more people = more ideas. But I suspect the difficulty of finding good ideas rises exponentially with their usefulness, so the benefit you derive from a larger population could be merely logarithmic.
If I understand your second point, you're suggesting that part of the reason our intuition says large populations are better is that larger populations tend to make the average utility higher. I like that! It would be interesting to try to estimate at what human population level average utility would be highest. (In hunter/gatherer or agricultural times, probably a very low level. Today, probably a lot higher?)
I personally think that something more akin to minimum utilitarianism is more in line with my intuitions. That is, to a first-order approximation, define utility as (soft)min(U(a), U(b), U(c), U(d), ...) where a, b, c, d, ... are the sentients in the universe. This utility function mostly captures my intuitions as long as we have reasonable control over everyone's outcomes, utilities are comparable, and the number of people involved isn't too crazy.
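To make that concrete, here's a rough sketch of one way such an aggregate could be computed. The particular soft minimum (a Boltzmann-weighted average of the individual utilities, with a temperature knob) is my own assumption; the comment doesn't pin down a form, and the example numbers are borrowed from the thread's puppy scenario.

```python
import math

def softmin(utilities, temperature=1.0):
    """Smooth stand-in for min(): each utility is weighted by exp(-u/T),
    so the worst-off sentient dominates as temperature -> 0."""
    weights = [math.exp(-u / temperature) for u in utilities]
    total = sum(weights)
    return sum(u * w for u, w in zip(utilities, weights)) / total

# "Yes puppy" column from the thread's example: Bertrand 14, Cedric 8, puppy 5.
print(softmin([14, 8, 5], temperature=1.0))   # ~5.1, dominated by the puppy
print(softmin([14, 8, 5], temperature=10.0))  # ~7.7, blending toward the average
```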
I think I have a pretty simple solution to this: treat 0 as the point at which each being is neither happy nor unhappy. Negative numbers are fine. You can still take the sum. In the example, this amounts to subtracting 10 from everyone, which gives 0 in the dogless state and -3 with the dog. Thus: no puppy.
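In code, that zero-point shift looks something like this (the utilities are the ones quoted elsewhere in the thread for the puppy table; the neutral point of 10 is this comment's choice):

```python
NEUTRAL = 10                      # the "neither happy nor unhappy" point

no_dog  = [10, 10]                # Bertrand, Cedric
yes_dog = [14, 8, 5]              # Bertrand, Cedric, dog

def shifted_total(utilities):
    """Sum of utilities measured relative to the neutral point."""
    return sum(u - NEUTRAL for u in utilities)

print(shifted_total(no_dog))      # 0
print(shifted_total(yes_dog))     # -3 -> no puppy under this accounting
```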
The puppy example seems pretty simple. Nonexistent things don't have preferences, so that cell in the table is "n/a". This gives the same result as just ignoring the heisenpuppy's value, but it doesn't have to (for instance, 14+2+5 > 10+10, so even if Cedric only liked dogs a tiny amount, it'd be a net benefit to bring the dog into existence, but it wouldn't be if the dog's happiness were unconsidered).
The rat king is similar to https://plato.stanford.edu/entries/repugnant-conclusion/ , but it would be much stronger if you showed the marginal decision, without the implication that if rats have a litter, that decision will continue to apply to all rats regardless of situation. Say there are 100 rats that are marginally happy because there's just enough food. Should they add 1 to their population?
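One way to frame that marginal question numerically. The per-rat utility curve below is entirely made up for illustration (log of the food share, offset so 100 rats sit just above neutral); a flatter or steeper curve could flip the answer, which is sort of the point of asking it marginally.

```python
import math

FOOD = 100.0                       # fixed food supply, split evenly

def per_rat_utility(population):
    # Hypothetical curve, not from the thread: log of each rat's food
    # share, offset so 100 rats are "marginally happy" (just above zero).
    return math.log(FOOD / population) + 0.01

for n in (100, 101):
    u = per_rat_utility(n)
    print(f"{n} rats: {u:+.4f} each, total {n * u:+.4f}")
```

Under this (arbitrary) curve the 101st rat drags both the average and the total down, so the marginal decision comes out "no" without committing to any blanket rule about litters.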
I think the simulator is too far removed from decisions we actually make. It's not a very good intuition pump because we don't have instinctive answers to the question. Alternatively, it's just an empirical question - try both and see which one is better.
Your realistic examples don't extrapolate from the simple ones. The chicken question hinges on weighting, not on existential ethics. Very few people claim it's correct because it's best for the chickens (though some will argue that it is better for the chickens, that's not their motivation). There are lots who argue that human pleasure is far more important than chicken suffering.
The parenthood question is murky because of massive externalities, and because of a flaw in your premises - the impact on others (even excluding the potential child) is greater than the impact on the parents. Also, nobody's all that utilitarian - in real decisions, people prioritize themselves.
Can you clarify which answer you believe is the correct one in the puppy example? Or, even better: the current utility for the dog in the "yes puppy" case is 5 -- for what values do you believe it is correct to have, or not have, the puppy?
Given the setup (which I don't think applies to real-world situations, but that's the scenario given) that they aggregate preferences, they should get a dog whether or not they value the dog's preferences. 10 + 10 < 14 + 8 if they think of the dog as an object, and 10 + 10 < 14 + 8 + 5 if they think the dog has intrinsic moral relevance.
It would be a more interesting example if the "get a dog" utilities were 11 and 8 for C and B. In that case, they should NOT get a dog if the dog doesn't count in itself. And they SHOULD get a dog if it counts.
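A quick check of those numbers (the no-dog utilities of 10 each and the dog's 5 are carried over from the original example):

```python
no_dog_existing  = [10, 10]        # Cedric, Bertrand without the dog
yes_dog_existing = [11, 8]         # the modified "get a dog" utilities for C and B
dog_if_it_exists = 5

# Counting only pre-existing entities: 19 < 20 -> do NOT get the dog.
print(sum(yes_dog_existing), "vs", sum(no_dog_existing))

# Counting the would-be dog as well: 24 > 20 -> DO get the dog.
print(sum(yes_dog_existing) + dog_if_it_exists, "vs", sum(no_dog_existing))
```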
But, of course, they're ignoring a whole lot of options (rows in the decision matrix). Perhaps they should rescue an existing dog rather than bringing another into the world.
I like your concept that the only "safe" way to use utilitarianism is if you don't include new entities (otherwise you run into trouble). But I feel like they have to be included in some cases. E.g., if I knew that getting a puppy would make me slightly happier, but the puppy would be completely miserable, surely that's the wrong thing to do?
(PS thank you for being willing to play along with the unrealistic setup!)
Cedric and Bertrand want to see a movie. Bertrand wants to see Muscled Dudes Blow Stuff Up. Cedric wants to see Quiet Remembrances: Time as Allegory. There's also Middlebrow Space Fantasy. They are rational but not selfish - each cares about the other's happiness as much as his own. What should they see?