To be fair (cough), your argument that '5 people means the pie should be divided into 5 equal parts' assumes several things...
1) Each person, by virtue of merely being there, is entitled to pie.
2) Each person, by virtue of merely being there, is entitled to the same amount of pie as every other person.
While this division of the pie may be preferable for the health of the collective psyche, it is still a completely arbitrary (cough) way to divide the pie. There are several other meaningful, rational, logical ways to divide the pie. (I believe I suggested one in a previous post.) Choosing to divide the pie into 5 equal parts simply asserts the premise 'existence = equal right' as the dominant principle by which to guide the division of the pie.
You have to remove all other considerations (including hunger, health, and any existing social relationships such as parent-child) in order to allow the 'existence = equal right' principle to be an acceptable way to divide the pie. This doesn't make that principle the 'bedrock' of morality. Quite the contrary. It says that this principle only dominates when all other factors are ignored.
It's the easiest thing in the world to armchair-quarterback the events of the past. The real challenge is to understand why so many people felt it was a necessary thing to do, though I'm sure it was as abhorrent to them as it is to you.
The truth is that you can't say what you would have done, because you weren't there. You can guess, and of course your guess will have all the smug self-righteous moral overtones of someone who has never held life and death in his hands and had to decide.
Relatively new here (hi) and without adequate ability to warp spacetime so that I may peruse all that EY has written on this topic, but am still wondering - Why pursue the idea that morality is hardwired, or that there is an absolute code of what is right or wrong?
Thou shalt not kill - well, except if someone is trying to kill you.
To be brief - it seems to me that 1) Morality exists in a social context. 2) Morality is fluid, and can change/has changed over time. 3) If there is a primary moral imperative that underlies everything we know about morality, it seems that that imperative is SURVIVAL, of self first, kin second, group/species third.
Empathy exists because it is a useful survival skill. Altruism is a little harder to explain.
But what justifies the assumption that there IS an absolute (or even approximate) code of morality that can be hardwired and impervious to change?
The other thing I wonder about when reading EY on morality is - would you trust your AI to LEARN morality and moral codes in the same way a human does? (See Kohlberg's Levels of Moral Reasoning.) Or would you presume that SOMETHING must be hardwired? If so, why?
(EY - Do you summarize your views on these points somewhere? Pointers to said location very much appreciated.)
Why not divide the pie according to who will ultimately put the pie to the best use? If X and Y intend to take a nap after eating the pie, but Z is willing to plant a tree, wouldn't the best outcome for the pie favor Z getting more?
Before you dismiss the analogy, consider this - what if the pie was $1800.00 that none of the three had earned? What if the $1800.00 had been BORROWED with a certain expectation of its utility? Should X, Y, and Z each get $600.00, even though there is no stipulation as to what each of them must DO with that money? If X intends to save his portion, and Y intends to pay down debt, but Z will spend the money though it may not be in HIS best interests to do so, should he still only get an equal portion, even though his actions with his share best accomplish the purpose of the money?
If we return to pie, you may now see that pie represents potential action (as one of the earlier commenters who mentioned carbs noted). Instead of arguing for division based on merit for PAST actions/attributes (as mentioned by another commenter), why not argue for division based on merit of INTENDED actions? Who provides the best return on the invested carbs? Why assume that 'fair' division should reflect mere existence? Why can't 'fairness' include an evaluation of potential return?
This may simply deflect the argument of 'fairness' to one wherein 'best return' must be determined with regard to each individual and the group as a whole. If Y gets no shade from the tree Z plants, then perhaps her 'best return' might be a contented nap.
The ratio of productive, beneficial action to input (pie), calculated across time (a tree yields benefits for longer than an immediate nap), seems to be another 'fair' way to divide the pie.
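As a toy sketch of that rule (all names, benefit numbers, and the discounting scheme are invented for illustration, not anything proposed in the original post), one could divide the pie in proportion to each person's time-discounted expected benefit:

```python
# Hypothetical sketch: allocate pie in proportion to each person's
# expected benefit per slice, discounted over time.
# Every number here is made up purely for illustration.

def divide_pie(pie, returns, discount=0.9):
    """returns maps person -> list of benefits at t = 0, 1, 2, ..."""
    # Discounted value of each person's intended use of the pie.
    value = {
        p: sum(b * discount**t for t, b in enumerate(stream))
        for p, stream in returns.items()
    }
    total = sum(value.values())
    return {p: pie * v / total for p, v in value.items()}

# Z plants a tree (smaller benefit per step, but long-lived);
# X and Y nap (a one-time immediate benefit).
shares = divide_pie(
    pie=1.0,
    returns={"X": [1.0], "Y": [1.0], "Z": [0.5, 0.5, 0.5, 0.5, 0.5]},
)
# Z's longer stream of benefits earns Z the larger share.
```

Under these assumed numbers the tree-planter gets more pie than either napper, which is exactly the intuition above; of course the whole dispute is about who gets to choose the benefit figures and the discount rate.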