Hilary Greaves sounds like a really interesting person :)
So, you could use these methods to construct a utility function corresponding to the person-affecting viewpoint from your current world, but this wouldn't protect the resulting utility function from critique. She brings up the Pareto principle: this person-affecting utility function would be indifferent between some pairs of outcomes where one is a strict improvement over the other, which seems undesirable.
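To make the Pareto worry concrete, here's a toy sketch (my own construction, not Greaves' formalism): a person-affecting utility that only counts the people who already exist ends up indifferent between two worlds even when one is at least as good for everyone and strictly better for someone. The names and welfare numbers are made up for illustration.

```python
# A toy "person-affecting" utility: sum welfare only over people
# who exist in the current world.
current_people = {"alice", "bob"}

def person_affecting_utility(world):
    """Sum welfare, counting only currently-existing people."""
    return sum(w for person, w in world.items() if person in current_people)

world_a = {"alice": 5, "bob": 3}               # no new person is created
world_b = {"alice": 5, "bob": 3, "carol": 10}  # carol is created with a good life

# Both worlds score 8: the view is indifferent, even though world_b
# leaves alice and bob just as well off and is strictly better for carol.
print(person_affecting_utility(world_a))  # 8
print(person_affecting_utility(world_b))  # 8
```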
I think the more fundamental problem there is intransitivity. You might be able to define a utility function that captures the person-affecting view as you hold it now, but a copy of you one day later (or one world over) would say "hang on, I didn't agree to that." They'd construct their own utility function, with priorities on a different set of people. And so you end up fighting with yourself, until one of you can self-modify to actually give up the person-affecting view and just keep the utility function created by their past self.
A more reflective self might try something clever, like bargaining among all the selves they expect to plausibly become (and who will follow the same reasoning), then taking actions that benefit those selves, confident that their other selves will keep their end of the bargain.
My general feeling about population ethics, though, is that it's aesthetics. This was a really important realization for me, and I think most people who think about population ethics don't think about the problem the right way. People don't inherently have utility; utility isn't a fluid stored in the gall bladder. It's something a decision-maker evaluates when they think about possible ways for the world to be. This means it's okay to have a preferred standard of living for future people, to have nonlinear terms on population and "selfish" utility, etc.
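A minimal sketch of what "nonlinear terms" could mean here, with entirely arbitrary weights I chose for illustration: a decision-maker scores worlds with diminishing returns on population, a term for average welfare, and a "selfish" term, instead of summing welfare linearly across people.

```python
import math

def my_world_score(population, avg_welfare, my_welfare):
    # Logarithmic (diminishing) returns on population, a weight on the
    # general standard of living, and a "selfish" term. Nothing forces
    # these weights on anyone: they express one decision-maker's
    # preferences over ways the world could be.
    return math.log(population) + 2.0 * avg_welfare + 0.5 * my_welfare

# Doubling the population no longer doubles the score; the gain from
# 1,000 -> 2,000 people is only log(2), holding welfare fixed.
print(my_world_score(1_000, avg_welfare=5.0, my_welfare=7.0))
print(my_world_score(2_000, avg_welfare=5.0, my_welfare=7.0))
```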
My uninformed paraphrase/summary of "the person-affecting view" is: "classical utilitarianism + indifference to creating/destroying people".
These views seem problematic (e.g. see Hilary Greaves' interview on 80k), and difficult to support.
Indifference methods (e.g. see Stuart Armstrong's paper) seem like they might be a way to formalize the person-affecting view in a rigorous way.
If we have a policy, we can always reverse-engineer a corresponding reward function under which it is optimal (see our Reward Modelling agenda, bottom of page 6).
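One trivial version of this observation (a sketch of the general point, not the construction in the Reward Modelling agenda): reward exactly the actions the policy takes, and the policy is optimal under that reward by construction.

```python
def reward_from_policy(policy):
    """Build a reward function under which `policy` is trivially optimal:
    reward 1 for doing what the policy does, 0 for anything else."""
    def reward(state, action):
        return 1.0 if action == policy(state) else 0.0
    return reward

# Example with a hypothetical two-state policy:
policy = {"s0": "left", "s1": "right"}.get
reward = reward_from_policy(policy)
print(reward("s0", "left"))   # 1.0
print(reward("s0", "right"))  # 0.0
```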
So while there might still be highly counter-intuitive bullets that need to be bitten, this might provide a way of cashing out person-affecting views in a way that is mathematically coherent/consistent.
What do you think? Does it work?
And is that even an open problem, or an interesting result to people in ethics?