The ants and the grasshopper
One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food. “Did you not store up food during the summer?” the ants ask. “No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.” The ants, disgusted, turn away and go back to work.

One winter a grasshopper, starving and frail, approaches a colony of ants drying out their grain in the sun, to ask for food. “Did you not store up food during the summer?” the ants ask. “No”, says the grasshopper. “I lost track of time, because I was singing and dancing all summer long.” The ants are sympathetic. “We wish we could help you”, they say, “but it sets up the wrong incentives. We need to conditionalize our philanthropy to avoid procrastination like yours leading to a shortfall of food.” And they turn away and go back to their work, with a renewed sense of purpose.

...And they turn away and go back to their work, with a flicker of pride kindling in their minds, for being the types of creatures that are too clever to help others when it would lead to bad long-term outcomes.

...“Did you not store up food during the summer?” the ants ask. “Of course I did”, the grasshopper says. “But it was all washed away by a flash flood, and now I have nothing.” The ants express their sympathy, and feed the grasshopper abundantly. The grasshopper rejoices, and tells others of the kindness and generosity shown to it. The ants start to receive dozens of requests for food, then hundreds, each accompanied by a compelling and tragic story of accidental loss. The ants cannot feed them all; they now have to assign additional workers to guard their doors and food supplies, and rue the day they ever gave food to the grasshopper.

...The ants start to receive dozens of
Yes, as per my other reply to Wei, I was just using the dictatorship example as the closest real-world example. Even when agents have the same utility function, with no indexical components, I claim that they might still face a trust bottleneck, because they have different beliefs. In particular, they might not be certain that the other agent has the same values as they do (even if in fact it does), and would therefore, all else equal, prefer to have resources themselves rather than give them to the other agent, which recreates a bunch of standard problems like prisoner's dilemmas. (This is related to the epistemic prisoner's dilemma, but more general.)
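To make that concrete, here's a toy numerical sketch; the grab/defer framing and all the payoff numbers are illustrative assumptions on my part, not anything established above. Each agent discounts resources held by the other agent by the probability p that the other actually shares its values. For high p there's no dilemma, but once p drops low enough, grabbing the contested resources dominates for both agents even though mutual deferring is better for each of them:

```python
# Toy model: two agents with (in fact) identical, non-indexical values, but each
# only assigns probability p to the other agent sharing those values. Resources
# the other agent ends up holding are therefore worth p times as much in
# expectation as resources you hold yourself.
#
# Illustrative payoff assumptions: a pot of 2 units is contested; each agent can
# "grab" (fight for it, at a cost) or "defer"; mutual grabbing destroys most of
# the pot.

def expected_payoffs(p, cost=0.2, conflict_share=0.4):
    """Return {(my_move, their_move): my expected utility} for one agent.

    My utility = (resources I hold) + p * (resources the other agent holds),
    since I only credit their holdings insofar as they share my values.
    """
    return {
        ("defer", "defer"): 1.0 + p * 1.0,                       # split the pot 1-1
        ("grab",  "defer"): (2.0 - cost) + p * 0.0,              # I take it all, minus the cost of grabbing
        ("defer", "grab"):  0.0 + p * (2.0 - cost),              # they take it all
        ("grab",  "grab"):  conflict_share + p * conflict_share, # conflict destroys most of the pot
    }

def grab_dominates(p):
    """True if 'grab' is the strictly better reply to both of the other agent's moves."""
    u = expected_payoffs(p)
    return (u[("grab", "defer")] > u[("defer", "defer")] and
            u[("grab", "grab")]  > u[("defer", "grab")])

if __name__ == "__main__":
    for p in (1.0, 0.5, 0.2):
        u = expected_payoffs(p)
        print(f"p={p:.1f}: grab dominates = {grab_dominates(p)}, "
              f"mutual defer = {u[('defer', 'defer')]:.2f}, "
              f"mutual grab = {u[('grab', 'grab')]:.2f}")
```

With these numbers, at p = 1 (certainty of shared values) deferring is clearly best, but at p = 0.2 grabbing strictly dominates for both agents while leaving them both worse off than mutual deferring, which is exactly the prisoner's-dilemma structure recreated by mistrust alone.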