# 6

Epistemic status: pretty unsure about this, but I'm unlikely to be thinking much about multiverse-wide cooperation for a while, and the thought seemed plausibly worth making public.

Content note: I make no attempt to explain multiverse-wide cooperation in this post. Fortunately, there are two nice overviews of multiverse-wide cooperation here and here.

Oesterheld (pg. 51) gives five preconditions for agents who can benefit from multiverse-wide superrational cooperation (MSR), and has suggested that one intervention worth exploring is making people more consequentialist (pg. 78). Insofar as I understand his argument, I take him to be saying: 'in worlds where acausal decision theorists are more consequentialist, we have an increased ability to enter into multiverse-wide acausal trades which are beneficial from the perspective of both parties. We should thus increase the number of consequentialists, so that more trades of this kind are made.'

I think another way of creating benefits is to make causal decision theorists more Kantian. That is, I think a Kantian CDT agent would have reason to engage in MSR, as they meet all five preconditions outlined, and would, I think, engage in all the positive-sum acausal trades outlined in Oesterheld's paper. So making CDT agents Kantian also increases the number of multiverse-wide trades made which are beneficial from the perspective of both parties.

Let me be clear on what I'm saying: I think many people's associations with Kant will be with certain object-level views they take him to endorse, like the impermissibility of lying. That is not what I mean to refer to. I am instead referring to someone who endorses the following claim:

Kantian Claim: The proper object of rational evaluation is not located at the level of acts, but rather at the level of maxims, which are of the form: 'I will do A in contexts C in order to achieve E'. The test for such maxims is whether they can be willed, without contradiction, to be a universal law. This is sometimes called the first formulation of the categorical imperative.

The canonical example is 'promising'. Suppose I am deliberating over whether to break a promise; I cannot (it is usually assumed) coherently will promise-breaking to be a universalisable maxim. This is because, in a world where the maxim were universalised, one would be unable to act on it so as to achieve its own purpose: if such a maxim were universal, there could be no institution of promise-making.

Oesterheld offers examples, like counterfactual mugging, where he says CDT agents would not precommit to cooperate with agents in other parts of the multiverse.

Omega decides to play a game of heads or tails with you. You are told that if the coin comes up tails, Omega will ask you to give it $100. If it comes up heads, Omega will predict whether you would have given $100 if the coin had come up tails. If Omega predicts that you would have given it the money, it gives you $10,000; otherwise, you receive nothing. Omega then flips the coin. It comes up tails, and you are asked to pay $100.

Do you pay? Oesterheld claims yes, as long as the outcome of the coin-flip is unknown. He also claims that CDT agents would answer no, unless Omega predicted based on a future version of that CDT agent. I claim that the Kantian CDT agent should answer yes, as long as they too are uncertain about the outcome of the coin-flip. The maxim 'in contexts C where I have some choice between acts a1 and a2, which deliver, with certainty, outcomes o1 and o2 respectively, and where I prefer o1 to o2, I will choose a1' can be universalised without contradiction, and delivers strictly better causal consequences than any other maxim. The Kantian CDT agent should thus pay.
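Evaluated at the level of policies, the arithmetic behind 'the Kantian CDT agent should pay' can be sketched as follows. The payoffs come from the example above; the function name and framing are mine:

```python
# Ex-ante expected value of each *policy* in counterfactual mugging,
# evaluated at the level of maxims (policies) rather than individual acts.
def expected_value(pays: bool) -> float:
    """Expected utility, before the coin flip, of committing to a policy.

    Omega flips a fair coin. Tails: you are asked for $100, which you
    hand over only if your policy is to pay. Heads: Omega pays $10,000
    iff it predicts that your policy is to pay.
    """
    tails_outcome = -100 if pays else 0
    heads_outcome = 10_000 if pays else 0
    return 0.5 * tails_outcome + 0.5 * heads_outcome

print(expected_value(pays=True))   # 4950.0
print(expected_value(pays=False))  # 0.0
```

Act-level CDT evaluation, by contrast, sees only the tails branch after the flip, where paying causes a sure loss of $100.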

I think the point generalises: Kantians care about furthering their ends, and so wherever consequentialist EDT agents find deals to have high expected utility, a Kantian CDT agent will too. This is because any scenario in which an act A has positive evidential (but not causal) expected utility for consequentialist agents will be one in which agents who perform A do better, overall, at furthering their ends in situations of type C. Kantians will, then, precommit to performing A in C, as Kantians evaluate the rationality of maxims rather than simple acts. If we believe there are positive gains from multiverse-wide cooperation, then Kantian outreach to CDT agents is a way of helping us achieve gains from multiverse-wide acausal trade.

# Comments
> I cannot (it is usually assumed) coherently will promise-breaking to be a universalisable maxim.

This move is hiding a lot of work within your similarity clustering algorithm. "Promise keeping" and "promise breaking" both describe a wide set of different actions taken in a wide set of situations. Within the Kantian imperative scheme, you are forced to make a single decision over all these different situations. So what chose this set of actions, and why this set and not some other?

Suppose a particularly nasty gang all have gang tattoos, and the gang works based on promises to kill people. Meanwhile, nice people promise to do nice things. The maxim "if you have a gang tattoo, break your promises; otherwise keep them" might have nicer consequences than everyone always breaking their promises, or always keeping them. But then introduce a gang that doesn't have tattoos, and a few reformed gang members promising to do nice things. Soon the ideal maxim becomes an enumeration of the ethical action in every conceivable situation. You get a giant lookup table of ethics, and while you can express ethics in the form of a giant lookup table, you can express anything in that form. Saying "the decisions this agent makes can be described in terms of a giant lookup table over all conceivable situations" is true for all agents, so it doesn't distinguish a particular subset of agents.
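The slide from maxim to lookup table can be illustrated with a sketch. All names here are hypothetical, purely to make the commenter's point concrete:

```python
from typing import NamedTuple

class Situation(NamedTuple):
    has_gang_tattoo: bool
    promise_is_harmful: bool

def simple_maxim(s: Situation) -> str:
    # One decision over the whole "promising" cluster: always keep.
    return "keep"

def patched_maxim(s: Situation) -> str:
    # One exception bolted on for the tattooed gang.
    return "break" if s.has_gang_tattoo else "keep"

# The limit of this process: one entry per conceivable situation,
# i.e. a giant lookup table rather than anything recognisable as a maxim.
lookup_table = {
    s: ("break" if s.promise_is_harmful else "keep")
    for s in (Situation(g, h) for g in (True, False) for h in (True, False))
}
```

Any agent's behaviour can be written as such a table, which is the point: the table form alone doesn't pick out Kantians.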

I think that actual human Kantians are offloading a lot of work to the brain's invisible black boxes; to say properly what a Kantian agent is, you need to figure out what the black box is doing. (This is a problem of coming up with a sensible technical definition that is similar to common usage.)

I agree with you that choosing the appropriate set of actions is a non-trivial task, and I've said nothing here about how Kantians would choose an appropriate class of actions.

I am unclear on the point of your gang examples. You point out that the ideal maxim changes depending on features of the world. The Kantian claim, as I understand it, says that we should implement a particular decision-theoretic strategy, by focusing on maxims rather than acts. This is a distinctively normative claim. The fact that, as we gain more information, the maxims might become increasingly specific seems true, but unproblematic. Likewise, I think it's true that we can describe any agent's decisions in terms of a lookup table over all conceivable situations. However, this just seems to indicate that we are looking at the wrong level of resolution. It's also true that I can describe all agents' behaviour (in principle) in terms of fundamental physics. But this isn't to say that there are no useful higher-level descriptions of different agents.

When you say that actual human Kantians offload work to invisible black boxes, do you mean that Kantians, when choosing an appropriate set of actions to make into a maxim, are offloading that clustering of acts into a black box? If so, then I think I agree, and would also like a more formal account of what's going on in this case. However, I think a good first step towards such a formal account is looking at more qualitative instances of behaviour from Kantians, so we know what it is we're trying to capture more formally.

> 'in worlds where acausal decision theorists are more consequentialist, we have an increased ability to enter into multiverse-wide acausal trades which are beneficial from the perspective of both parties. We should thus increase the number of consequentialists, so that more trades of this kind are made.'

This only holds to the extent that creating consequentialists has no other downsides, and that they are trading for things we want.

Suppose omega told me that there are gazillions of powerful agents in other universes, that are willing to fill their universe with paperclips in exchange for making one small staple in this universe. This would not encourage me to build a paperclip maximizer. A paperclip maximizer in this universe would be able to gain enormous amounts of paperclips from multiversal cooperation, but I don't particularly want paperclips, so while it benefits both parties, it doesn't benefit me.

If we are making a friendly AI, we might prefer it to be able to partake in multiverse wide trades.

This was my reconstruction of Caspar's argument, which may be wrong. But I took the argument to be that we should promote consequentialism in the world as we find it now, where Omega (fingers crossed!) isn't going to present me with claims of this sort, and people do not, in general, explicitly optimise for things we greatly disvalue. In this world, if people are more consequentialist, then there is a greater potential for positive-sum trades with other agents in the multiverse. As agents in this world have some overlap with our values, we should encourage consequentialism: consequentialist agents we can causally interact with will get more of what they want, and so we will get more of what we want.