My uninformed paraphrase/summary of "the person-affecting view" is: "classical utilitarianism + indifference to creating/destroying people".
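To make that paraphrase concrete, here is one naive way it could be written down (my notation, purely illustrative): fix the set $P_0$ of people who exist at the time of the decision, and sum welfare only over them,

$$U_{\text{PA}}(w) \;=\; \sum_{i \in P_0} u_i(w),$$

where $u_i(w)$ is person $i$'s welfare in world $w$. People who might be created later never enter the sum (this captures the "indifference to creating" half; the "destroying" half is harder to write down, since people in $P_0$ do appear in the sum).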

These views seem problematic (e.g. see Hilary Greaves's interview on the 80,000 Hours podcast) and difficult to support.

Indifference methods (e.g. see Stuart Armstrong's paper) seem like they might offer a way to formalize the person-affecting view rigorously.

If we have a policy, we can always reverse-engineer a corresponding reward function (see our Reward Modelling agenda, bottom of page 6).
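As a toy illustration of that claim (this is the standard trivial construction, not necessarily the one in the Reward Modelling agenda): any deterministic policy is optimal for the reward function that pays 1 for taking the policy's action and 0 for anything else.

```python
# Toy sketch (illustrative, not the Reward Modelling agenda's construction):
# any deterministic policy is trivially optimal for the reward
# "1 if the action matches the policy's choice in this state, else 0".

def reward_from_policy(policy):
    """policy: dict mapping state -> the action the policy takes there."""
    def reward(state, action):
        return 1.0 if policy.get(state) == action else 0.0
    return reward

# Hypothetical two-state example.
policy = {"s0": "left", "s1": "right"}
R = reward_from_policy(policy)
assert R("s0", "left") == 1.0
assert R("s0", "right") == 0.0
```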

So while there might still be highly counter-intuitive bullets to bite, this might provide a mathematically coherent/consistent way of cashing out person-affecting views.

What do you think? Does it work?

And is that even an open problem, or an interesting result to people in ethics?



Charlie Steiner

Nov 12, 2019


Hilary Greaves sounds like a really interesting person :)

So, you could use these methods to construct a utility function corresponding to the person-affecting viewpoint from your current world, but that wouldn't protect the resulting utility function from critique. Greaves brings up the Pareto principle: this person-affecting utility function would be indifferent between some outcomes even when one is a strict improvement over the other, which seems undesirable.
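To make the Pareto worry concrete (my numbers and notation, purely illustrative, not Greaves's): suppose the person-affecting utility function sums welfare only over the people who exist now, Alice and Bob, and compare two futures that both also contain a new person, Carol, whose welfare is 1 in world $A$ and 9 in world $B$, with Alice and Bob at welfare 5 in both.

$$U_{\text{PA}}(A) = u_{\text{Alice}}(A) + u_{\text{Bob}}(A) = 5 + 5 = 10, \qquad U_{\text{PA}}(B) = 5 + 5 = 10.$$

Carol's welfare never enters the sum, so the function is exactly indifferent between $A$ and $B$, even though $B$ is at least as good for everyone and strictly better for Carol, i.e. a Pareto improvement.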

I think the more fundamental problem there is intransitivity. You might be able to define a utility function that captures the person-affecting view as it looks to you today, but a copy of you one day later (or one world over) would say "hang on, I didn't agree to that." They'd make their own utility function, anchored to a different set of people. And so you end up fighting with yourself, until one of you can self-modify to actually give up the person-affecting view and just keep the utility function created by its past self.

A more reflective self might try something clever, like bargaining among all the selves they expect to plausibly be (and who will follow the same reasoning), and taking actions that benefit those selves, confident that the other selves will keep their end of the bargain.
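A minimal sketch of what that bargain could look like (illustrative only; it assumes each anticipated self can be summarized as a utility function over actions plus a disagreement payoff, and uses a Nash-style bargaining rule, which the comment doesn't commit to):

```python
import math

# Illustrative Nash-style bargaining between anticipated selves.
# Each self is (utility_function, disagreement_payoff); choose the action
# maximizing the product of every self's gain over its disagreement payoff.
def nash_bargain(actions, selves):
    def product_of_gains(action):
        gains = [u(action) - d for u, d in selves]
        if any(g <= 0 for g in gains):  # some self would rather walk away
            return -math.inf
        return math.prod(gains)
    return max(actions, key=product_of_gains)

# Hypothetical example: today-me is nearly indifferent about a would-be
# person's welfare; tomorrow-me (once they exist) is not.
today = (lambda a: {"ignore": 1.0, "help": 0.9}[a], 0.0)
tomorrow = (lambda a: {"ignore": 0.1, "help": 2.0}[a], 0.0)
print(nash_bargain(["ignore", "help"], [today, tomorrow]))  # -> help
```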

My general feeling about population ethics, though, is that it's aesthetics. This was a really important realization for me, and I think most people who think about population ethics don't think about the problem the right way. People don't inherently have utility; utility isn't a fluid stored in the gall bladder; it's something evaluated by a decision-maker when they think about possible ways the world could be. This means it's okay to have a preferred standard of living for future people, to have nonlinear terms on population and "selfish" utility, etc.

Can you give a concrete example of why the utility function should change?

Charlie Steiner
You mean, why I expect a person-affecting utility function to be different if evaluated today vs. tomorrow? Well, suppose that today I consider the action of creating a person, and am indifferent to creating them. Since this is true for all sorts of people, I am indifferent to creating them one way vs. another (e.g. happy vs. sad). If they are to be created inside my guest bedroom, this means I am indifferent between certain ways the atoms in my guest bedroom could be arranged. Then, if this person gets created tonight and is around tomorrow, I'm no longer indifferent between the arrangement that is them sad and the arrangement that is them happy.

Yes, you could always reverse-engineer a utility function over world-histories that encompasses both of these. But this doesn't necessarily solve the problems that come to mind when I say "change in utility functions": for example, I might take bets about the future that appear lose/lose when I have to pay them off, or take actions that modify my own capabilities in ways I later regret.

I dunno - were you thinking of some specific application of indifference that could sidestep some of these problems?
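To put illustrative numbers on the guest-bedroom example (the numbers are mine, purely for concreteness):

$$U_{\text{today}}(\text{happy}) = U_{\text{today}}(\text{sad}) = 0, \qquad U_{\text{tomorrow}}(\text{happy}) = 9, \quad U_{\text{tomorrow}}(\text{sad}) = 1.$$

Today's function is exactly indifferent between the two atom-arrangements, so a commitment made today to bring about the "sad" arrangement in exchange for any small payment looks like free money; by tomorrow's function the same commitment is a loss of 8, which is the kind of regret-at-payoff-time problem mentioned above.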