The people of Omelas also can't take their own happiness seriously without the suffering child:
> Yet it is their tears and anger, the trying of their generosity and the acceptance of their helplessness, which are perhaps the true source of the splendor of their lives. Theirs is no vapid, irresponsible happiness. They know that they, like the child, are not free. They know compassion. It is the existence of the child, and their knowledge of its existence, that makes possible the nobility of their architecture, the poignancy of their music, the profundity of their science. It is because of the child that they are so gentle with children. They know that if the wretched one were not there sniveling in the dark, the other one, the flute-player, could make no joyful music as the young riders line up in their beauty for the race in the sunlight of the first morning of summer.
The story emphasizes the avoidance of guilt:
> One thing I know there is none of in Omelas is guilt. ... To exchange all the goodness and grace of every life in Omelas for that single, small improvement: to throw away the happiness of thousands for the chance of happiness of one: that would be to let guilt within the walls indeed.
My reading of this: the inhabitants need to avoid guilt, and so need to justify torturing the child to themselves and each other; and while torturing the child accomplishes nothing materially, stopping would be to admit that it was never necessary, which would invite guilt. It's a kind of terrible punishment-of-nonpunishers equilibrium.
The ones who walk away are the ones who recognize all of this and are no longer willing to participate in the collective illusion (hence, they leave alone).
Importantly (though, I think, consistently with both this and the standard reading), The Wind's Twelve Quarters introduces "The Day Before the Revolution", a story about an anarchist revolutionary in the same world as The Dispossessed, with the line: "This story is about one of the ones who walked away from Omelas."
I like this interpretation, but 'criticism of [something like zero-sum] bias rationalized as utilitarianism' ≠ 'criticism of utilitarianism'.[1]
This feels really important to me, in a way that's much less like 'the good name of utilitarianism must be defended' than like 'zero-sum bias is a sneaky evil bastard, don't ever let it get away with hiding behind other names'.
> That's not a rational reason for a shutdown if you're not longtermist. (edit: - and older, like most decision-makers, so shutdown probably means you personally die).
This reads as if 'longtermism' and 'not caring at all about future generations or people who would outlive you' are the only possibilities.
> Those are decent odds if you only care about yourself and your loved ones.
This assumes none of your loved ones are younger than you.
If someone believes a pause would meaningfully reduce extinction risk but also reduce their chance of personal immortality, they don't have to be a 'longtermist' (or utilitarian, altruist, scope-insensitive, etc.) to prefer the pause; they just have to care enough about some posterity.
(This isn't a claim about whether decision-makers do or don't have the preferences you're ascribing. I'm saying the dichotomy between those preferences and 'longtermism' is false, and also, like Haiku's sibling comment, that I don't think those preferences describe most humans, even though 'longtermism' doesn't either; this is important.)
Or maybe there wouldn't be a lot of worlds where the merger was totally fine and beneficial: if you don't have enough discernment to tell founded from unfounded fears, you'll fall into adverse selection and probably get screwed over. (Some domains are like that; I don't know if this one is.)
(As a sort-of-aside, the US government continuing to control large proportions of the resources of the future — any current institution being locked in forever like that — strikes me as similarly lame and depressing. (A really good future should be less familiar.))
> The second is a need to build up a model of exactly how the code works, and looking hard to fill any gaps in my understanding.
Yep. One concrete thing this sometimes looks like is 'debugging' things that aren't bugs: if some code works when it looks like it shouldn't, or a tool works without me passing information I'd have expected it to need, or whatever, I need to understand why, by the same means I would use to figure out why a bug is happening.
Nobody likes rules that are excessive or poorly chosen, or bad applications of rules. I like rules that do things like[1]:
not a complete list ↩︎
Besides uncertainty, there's the problem of needing to pick cutoffs between tiers in a ~continuous space of 'how much effect does this have on a person's life?', with things slightly on one side or the other of a cutoff being treated very differently.
> Intuitively, tiers correspond to the size of effect a given experience has on a person's life:
I agree with the intuition that this is important, but I think that points toward just rejecting utilitarianism (as in utility-as-a-function-purely-of-local-experiences, not consequentialism).
It could be incapacitation. Incapacitation and deterrence are both "affecting the other's behavior" in a sense, but the examples in the OP suggest you mean deterrence. (Meanwhile, PeteG's sibling comment seems to consider 'affecting behavior' to mean only incapacitation.)
(... maybe you're reserving "punishment" to mean only deterrence, and so saying that if A punishes B by killing them, that's by definition done to affect B's behavior? I don't understand what's going on in this thread.)