This reminds me of the nested time machines discussed by gwern: https://gwern.net/review/timecrimes
Precommitments play the role of time loops, and they can propagate almost indefinitely through time and space. For example, anyone who is going to become a mayor can pre-pre-pre-commit to never open any video from a mafia boss, etc.
I've thought about this before too, and I no longer feel confused about it. It helps to reduce this into a decision problem. The decision problem could 'be about' programs deciding anything, in principle; it doesn't need to be 'agents deciding whether to blackmail'.
I'll show decision structures symmetric to your examples, then give some more examples that might help.
| Language of post's examples | Language for decision problems |
|---|---|
| Crime boss, Mayor | Program C, Program M |
| Crime boss does not blackmail mayor / Crime boss blackmails mayor | C outputs 0 / C outputs 1 |
| Mayor does not give in to blackmail / Mayor gives in to blackmail | M outputs 0 / M outputs 1 |
1. Your first example: M is a more advanced conditioner
C runs: if [M outputs 1 if C outputs 1], output 1; else, output 0
M runs: if C runs "If [M outputs 1 if C outputs 1], output 1; else, output 0", output 0; else, <doesn't occur, unspecified>
Outcome: Both output 0
2. Your second example
C runs: output 1[1]
M runs: <unspecified>
Outcome: unspecified
When put like this, it seems clear to me that there's no paradox here.
Below are examples not from the post. The last one where both try to condition is most interesting.
3. C is commit-rock[2], M is conditioner
C runs: output 1
M runs: if C runs "If [M outputs 1 if C outputs 1], output 1; else, output 0", output 0; else, output 1
Outcome: both output 1
4. Both are commit-rocks
C runs: output 1
M runs: output 0
Outcome: C outputs 1, M outputs 0
5. Both condition
C runs: run M. if M outputs 1 when C outputs 1, output 1; else, output 0
M runs: run C. if C outputs 0 when M outputs 0, output 0; else, output 1
Outcome: The programs each run the other recursively and never halt, as coded.
Again, there is no paradox here.
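To make this concrete, here's a rough Python sketch (mine, not from the post) of examples 1 and 3; "conditioning on the other program" is modeled as comparing a stand-in source string, which is an illustrative assumption rather than real program inspection.

```python
# Rough sketch of examples 1 and 3. "Conditioning on what program the other runs"
# is modeled here as comparing a stand-in source string -- an illustrative
# assumption, not real program inspection.

C_CONDITIONER_SRC = "if [M outputs 1 if C outputs 1], output 1; else, output 0"

def M_conditioner(c_source):
    # M: "if C runs the conditioner program above, output 0; else, output 1"
    # (example 1 leaves the else-branch unspecified; example 3 uses "output 1").
    return 0 if c_source == C_CONDITIONER_SRC else 1

def C_conditioner(m):
    # C: "if [M outputs 1 if C outputs 1], output 1; else, output 0".
    # Here M's answer depends only on C's source, so the "if C outputs 1"
    # hypothetical has no extra effect.
    return 1 if m(C_CONDITIONER_SRC) == 1 else 0

def C_rock():
    # C: commit-rock, unconditionally "output 1".
    return 1

# Example 1: both condition, M is the more advanced conditioner -> both output 0.
print(C_conditioner(M_conditioner), M_conditioner(C_CONDITIONER_SRC))  # 0 0

# Example 3: C is a commit-rock, M is the conditioner -> both output 1.
print(C_rock(), M_conditioner("output 1"))  # 1 1

# Example 5, where each program actually runs the other, would recurse forever
# here, matching "never halt, as coded".
```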
To directly answer the question in the title, I think a commitment "to not give in to blackmail" and a commitment "to blackmail" are logically symmetric, because what a decision problem is about (what the 0s and 1s correspond to in real life) is arbitrary. (Also, separately, there is no "commitment" primitive.)
[1] I know in your second example you want the Crime boss's decision to be conditional on the Mayor in some way, but it's not specified how, so I'm going to just leave it like this, with this footnote.
[2] In some posts about decision dilemmas, the example of "a rock with the word defect written on it" is used to make it clear that the decision to defect was not conditional on the other player.
Thanks, that's an interesting way to think about pre-commitments.
However, I'm not sure I understand what your conclusion is. Do you believe that actors cannot protect themselves from blackmail with pre-commitments?
Commitments are computations that have influence in the world, that some things in the world are listening to. You can defeat a commitment by denying it influence. Blackmail is in part a commitment to enforce, and there is an opposing commitment to ignore blackmail. These commitments come into conflict, since ignorers trigger enforcement by blackmailers, which leads to more direct conflict between the bearers of the opposing commitments, with implications for influence of those commitments.
If a faction that gives influence to one of these commitments is more powerful than the faction giving influence to the other commitment, it locally won't go well for those who join the other faction. The game then shifts to the dynamics of how these factions are built and coordinated.
It all depends on what you mean by "sufficiently intelligent / coherent actors". For example, in this comment Eliezer says that it should mean actors that “respond to offers, not to threats”, but in 15 years no one has been able to cash out what this actually means, AFAIK.
As far as I can tell from Eliezer's writing (mostly Planecrash), a threat is when someone will (counterfactually) purposefully minimize someone else's utility function.
So releasing blackmail material would be a threat, but building a road through someone else's home (if doing so offers slightly more utility than going around) wouldn't be?
Actors could pre-commit to ignore any counterfactuals where someone purposefully minimizes their utility function, but then again would-be blackmailers could pre-commit to ignore such pre-commitments.
Maybe pre-committing to ignore threats is a kind of "pre-commitment Schelling point" that works if everyone does it? If all actors coordinated (even by just modeling other actors, without communication) to pre-commit to ignore threats, would the would-be extorters accept that?
Yeah, but what does "purposefully minimize someone else’s utility function" mean? The source code just does stuff. What does it mean for it to be "on purpose"?
I believe "on purpose" in this case means, doing something conditional on the other actor's utility function disvaluing it.
So if you build an interstellar highway through someone's planet because that is the fastest route, you are not "purposefully minimizing their utility function", even if they strongly disvalue it. If you build it through their planet only if they disvalue that, and would instead have built it around the planet if that were what they disvalued, then you are "purposefully minimizing their utility function".
If you do so to prevent them from having a planet, or to make them react in some (useful to you) way, and would have done so even if they hadn't disvalued their planet being destroyed, then you are not "purposefully minimizing their utility function", I think?
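Here's a toy way to state that condition in code (my own illustration; the names and the two-action comparison are made up): an action counts as "purposeful minimization" if it's conditional on the other actor's disvalue, i.e. you would have acted differently had they been indifferent.

```python
def purposefully_minimizes(action_taken, action_if_they_were_indifferent):
    # "On purpose" here: the choice is conditional on the other actor's disvalue,
    # i.e. you'd have acted differently if they didn't care.
    return action_taken != action_if_they_were_indifferent

# Highway built through the planet because it's the fastest route either way:
print(purposefully_minimizes("through", "through"))  # False -- not a threat
# Highway routed through the planet only because they disvalue that:
print(purposefully_minimizes("through", "around"))   # True -- a threat
```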
Let's talk about a specific example: the Ultimatum Game. According to EY the rational strategy for the responder in the Ultimatum Game is to accept if the split is "fair" and otherwise reject in proportion to how unfair he thinks the split is. But the only reason to reject is to penalize the proposer for proposing an unfair split -- which certainly seems to be "doing something conditional on the other actor’s utility function disvaluing it". So why is the Ultimatum Game considered an "offer" and not a "threat"?
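For concreteness, here's a small sketch of that responder strategy as I read it; the specific acceptance-probability formula (scaled so that unfair proposals don't pay in expectation) is my own guess at how "reject in proportion to how unfair the split is" might be cashed out, not something EY states.

```python
import random

# One possible reading (an assumption, not EY's exact formula): accept fair splits,
# and accept an unfair split with probability chosen so the proposer expects no more
# than they would get from a fair offer, so unfairness doesn't pay in expectation.

def responder_accepts(total, offered_to_me, fair_share=None):
    fair_share = total / 2 if fair_share is None else fair_share
    if offered_to_me >= fair_share:
        return True  # fair (or generous) offers are always accepted
    proposer_take = total - offered_to_me
    # Proposer's expected take: proposer_take * p_accept == fair_share
    p_accept = fair_share / proposer_take
    return random.random() < p_accept

# Example: a 9/1 split of 10 is accepted ~5/9 of the time, so the proposer's
# expected take is 9 * 5/9 = 5 -- no better than offering a fair 5/5 split.
print(sum(responder_accepts(10, 1) for _ in range(100_000)) / 100_000)
```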
Good question.
I can't tell whether saying that you will reject unfair splits would be a threat by the definition in my above comment. For it to be a threat, you would have to only do it if the other person cares about the thing being split. But in the Ultimatum Game both players by definition care about it, so I have a hard time thinking about what you would do if someone offered you an unfair split of something they don't care about (how can a split even be unfair if only one person values the thing being split?).
As I understand it, an actor can prevent blackmail[1] by (rational) actors if they credibly pre-commit to never give in to blackmail.
Example: A newly elected mayor has many dark secrets, and lots of people are already planning on blackmailing them. To preempt any such blackmail, they livestream themselves being hypnotized and implanted with the suggestion to never give in to blackmail. Since in this world hypnotic suggestions are unbreakable, all (rational) would-be blackmailers give up, since any attempt at blackmail would be guaranteed to fail.
In general, pre-committing in such examples is about reducing the payoff matrix to just [blackmail, refuse] and [don't blackmail, refuse], which makes not blackmailing the optimal choice for the would-be blackmailer.
Of course, sufficiently intelligent / coherent actors wouldn't need an external commitment mechanism, and a sufficiently intelligent and informed opposition would be able to infer the existence of such a pre-commitment. Moreover, I believe I've heard that if a sufficiently intelligent / coherent actor notices that it would be better off if it had pre-committed, it can just act as if it had (post-commit?).
However, what if the would-be blackmailer also tries to limit the possible outcomes?
Example: The anti-blackmail hypnosis is so successful that soon every newly elected mayor does it. A new candidate is likely to win the next election. They know that the local crime boss has a lot of dirt on them, but they aren't worried about blackmail, as they will just do the anti-blackmail hypnosis on their first day in office. On the evening of the election they are sent a video of the crime boss being hypnotized into blackmailing the new mayor even if they have been anti-blackmail hypnotized.
This cuts the payoff matrix down to [blackmail, refuse] and [blackmail, give in]. Giving in to the blackmail is now optimal for the new mayor, and doing the anti-blackmail hypnosis just locks them into [blackmail, refuse].
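To illustrate the two reductions concretely, here's a small sketch (my addition) that computes best responses using the example payoff numbers from the footnote; the function names and numbers are just for illustration.

```python
# Payoffs as (target, blackmailer), using the example numbers from the footnote below.
PAYOFFS = {
    ("blackmail", "give in"):       (-10, 20),
    ("blackmail", "refuse"):        (-100, -1),
    ("don't blackmail", "give in"): (0, 0),
    ("don't blackmail", "refuse"):  (0, 0),
}

def blackmailer_best(target_choice):
    # Blackmailer's best move, given the target's (committed) response.
    return max(["blackmail", "don't blackmail"],
               key=lambda b: PAYOFFS[(b, target_choice)][1])

def target_best(blackmailer_choice):
    # Target's best move, given the blackmailer's (committed) move.
    return max(["give in", "refuse"],
               key=lambda t: PAYOFFS[(blackmailer_choice, t)][0])

# Target pre-commits to "refuse": the matrix shrinks to the refuse column,
# and not blackmailing becomes optimal for the blackmailer.
print(blackmailer_best("refuse"))  # -> don't blackmail

# Blackmailer pre-commits to "blackmail": the matrix shrinks to the blackmail row,
# and giving in becomes optimal for the target.
print(target_best("blackmail"))    # -> give in
```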
So how does this work out between sufficiently intelligent / coherent actors? Does the first one to (credibly and transparently) pre-commit win?
And what if actors are able to post-commit (if that even is a thing and I didn't misunderstand the concept)? An actor could act as if they had pre-committed to ignore the opposition's pre-commitment (to ignore pre-commitments to never give in to blackmail), but then the opposition could act as if they had pre-committed to ignore that pre-commitment?
(This comment thread seems to discuss the same question but did not resolve it for me.)
[1] By blackmail I mean a scenario where the would-be blackmailer's choices are blackmail or don't blackmail and the target's choices are give in or refuse, with a payoff matrix like this:
|  | give in | refuse |
|---|---|---|
| blackmail | target: -10, blackmailer: 20 | target: -100, blackmailer: -1 |
| don't blackmail | target: 0, blackmailer: 0 | target: 0, blackmailer: 0 |