I endorse and operate by Crocker's rules.
I have not signed any agreements whose existence I cannot mention.
"I'd add X to my priorities if I had 3x more time per day" is not how I'd understand "X is my 3rd-tier priority", so the extra-time phrasing would require additional explanation, whereas talking about it in terms of clones requires much less.
Yeah, fair. "Nth serial clone's priority"?
Maybe something with Harry Potter's Time-Turner?
This seems oriented around modern examples, but I'd also be curious to hear pre-modern examples.
I thought that writing (as well as basic math) might technically fit your criteria, since it remained inaccessible to the majority for millennia. But maybe there weren't many incentives to spread it, and/or the common folk wouldn't have benefited from it that much, given everything else about society at the time.
(Perhaps some ambitiously altruistic prince at some point tried to teach his peasants to write, but they were like "I don't have time for this because I need to take care of my cows and my rice".)
You can think about it in terms of clones: e.g. instead of "I'd do it if I had 2x more time", you say "if I had a clone to delegate things to, the clone would do that thing" (equivalent in terms of work hours).
So you can say "that's my 1st/2nd/3rd/nth clone's priority".
In which case, what they care about (their "actual" domain of utility/preference) is not [being in a specific city], but rather something more like "trajectories".
I'm writing this mostly because I'm finding it horrifying that it appears the LessWrong consensus reality is treating "nonconsent" stuff as a cornerstone of how it views women.
Can you elaborate/give examples of this?
This post has also inspired some further thinking and conversations and refinement about the type of agency/consequentialism which I'm hoping to write up soon.
Did you end up publishing this?
Can you give a sample of those weird versions of AI safety?