I operate by Crocker's rules.
Post the rules?
Indeed you can use causal pathways like culture to increase the chances of people deontologically deciding to cooperate, or of people using UDT, but the latter is only useful if UDT cooperates. According to UDT, to decide what to do, compare the possible worlds conditional not on "I decide to cooperate/defect", but on "UDT cooperates/defects".
Of course CDT can't be convinced in the moment that deciding to vote for your party changes the expected tallies by any more than one. But even CDT would agree that the CDT party loses against the UDT party, and that it should build UDT rather than CDT into its AI if that AI will be playing Prisoner's Dilemmas against its copies.
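The copies-playing-Prisoner's-Dilemmas argument can be made concrete in a toy calculation. This is only a sketch with the standard hypothetical PD payoff numbers, not anyone's actual formalization of CDT or UDT:

```python
# Toy Prisoner's Dilemma against an exact copy.
# PAYOFF[(my_move, their_move)] = my payoff; standard hypothetical values.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_move():
    # CDT holds the copy's move fixed and notes that D dominates C
    # (5 > 3 against a cooperator, 1 > 0 against a defector), so it defects.
    return "D"

def udt_move():
    # UDT knows the copy runs the same decision procedure, so it compares
    # the world "UDT outputs C" (both cooperate) with the world
    # "UDT outputs D" (both defect), and outputs the better one.
    return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"

# Each agent faces its own copy, so both sides make the same move.
cdt_score = PAYOFF[(cdt_move(), cdt_move())]  # mutual defection
udt_score = PAYOFF[(udt_move(), udt_move())]  # mutual cooperation
```

With these payoffs the UDT party outscores the CDT party 3 to 1, which is the sense in which even CDT should prefer to build UDT into an AI that will face its own copies.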
By CDT, deciding (to respect the wishes of the dead) intervenes on the universe by flipping a switch in your brain. By another decision theory, it intervenes on mindspace by flipping a switch in the neighborhood of your mind, which takes effect in the present, past and future.
There are people who have preferences about the world after they die; by respecting the wishes of the dead, they can acausally increase the chances that their own wishes will be respected.
Ah, I got it. Just as I update towards the ghost pool being smaller when I get picked, I should update towards the ghost pool being larger when I find myself as a ghost in the first place. These updates cancel out, and I should buy the ability.
In the multiplayer roleplaying game SS13 (Space Station 13), one player gets the rare and coveted role of wizard, and can now choose his purchases of magic abilities and equipment. First he buys an apprentice, choosing me randomly from all ghosts (players without a role, observing the game) who would like the job. Next he considers picking an ability that will let him spawn additional ghost roles during play. Let's say there's a 50% chance of an extra ghost; in that case it's barely worth it, so he picks it.
But let's suppose I were the one with the choice. If over the years I get asked 4 times whether I want to play Apprentice, I'll get picked 3 times out of 4, and in 2 of those 3 games there are no extra ghosts. So I shouldn't buy the ability. But we're on the same team, so this doesn't make sense! What's going on?
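The cancelation claimed above can be checked with a toy Monte Carlo model. Every specific here is an assumption chosen to reproduce the numbers in the story: a fair coin decides whether the extra ghost exists, the "no extra" world has one ghost, the "extra" world has two, the wizard picks one ghost uniformly, and the two counting rules correspond to "I am one fixed persistent player" versus "I am a uniformly random ghost-instance":

```python
import random

random.seed(0)
TRIALS = 100_000

my_games = []        # (extra_exists, I_was_picked) for a fixed persistent player
ghost_instances = [] # one entry per ghost-instance, any of which could be "me"

for _ in range(TRIALS):
    extra = random.random() < 0.5    # 50% chance the ability spawned an extra ghost
    pool = 2 if extra else 1         # assumed pool sizes: 1 ghost vs 2 ghosts
    picked = random.randrange(pool)  # wizard picks one ghost uniformly
    my_games.append((extra, picked == 0))  # "I" am always ghost 0
    for g in range(pool):
        ghost_instances.append((extra, picked == g))

# The count from the story: I get picked ~3/4 of the time, and among those
# picks the extra ghost exists only ~1/3 of the time.
my_picks = [extra for extra, p in my_games if p]
p_picked = len(my_picks) / TRIALS                      # ~ 0.75
p_extra_given_my_pick = sum(my_picks) / len(my_picks)  # ~ 1/3

# The resolution: among all picked ghost-instances, the extra ghost exists
# half the time -- the update towards a larger pool from being a ghost at
# all cancels the update towards a smaller pool from being picked.
picks = [extra for extra, p in ghost_instances if p]
p_extra_given_pick = sum(picks) / len(picks)           # ~ 1/2
```

Under the second counting rule the conditional probability of the extra ghost returns to the 50% prior, so the ability is worth buying after all.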
GPT-2 works by deterministically computing the probability distribution over the next token, then sampling from it. It is plausible that the probability it assigns to 6 is no larger than 80%, but it's simple enough to postprocess the distribution, rounding every probability larger than 50% up to 100%. (This isn't always done because, when completing a list prefix of length 4, it would then always produce an infinite list: the probability of another "," is more than 50%.)
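The postprocessing step can be sketched like so. This assumes nothing about GPT-2's internals; the hypothetical `sharpen` helper just rewrites a probability vector, snapping any entry above the threshold to certainty:

```python
def sharpen(probs, threshold=0.5):
    """If some token's probability exceeds the threshold, make it certain;
    otherwise return the distribution unchanged."""
    top = max(range(len(probs)), key=lambda i: probs[i])
    if probs[top] > threshold:
        # One token clears the threshold: give it all the probability mass.
        return [1.0 if i == top else 0.0 for i in range(len(probs))]
    return list(probs)
```

So `sharpen([0.8, 0.1, 0.1])` becomes `[1.0, 0.0, 0.0]`, and the list-completion failure mode is exactly that a comma token with probability above 50% would then be emitted forever.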
I have difficulty doing things on my own, but talking to people is easy. Would anyone like to regularly talk in ways that further AI safety research?
I'm just talking about the recent, surprisingly coherent text autocomplete engines like GPT-3. (They wouldn't need help staying motivated/curious, but they could use instruction in concept manipulation.)
Instructions like these may well teach Transformers to generate useful insights.