Tags: Acausal Trade, Evolution, Solomonoff induction, UDASSA, AI

Help me understand: how do multiverse acausal trades work?

by Aram Ebtekar
1st Sep 2025
2 min read

7 comments, sorted by top scoring
quetzal_rainbow

Problem 1 is the wrong objection.

CDT agents are not capable of cooperating in the Prisoner's Dilemma, so they are selected out. EDT agents are not capable of refusing to pay in XOR blackmail (or, symmetrically, of paying in Parfit's hitchhiker), so they are selected out.

I think you will be interested in this paper.
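
Not from the comment, but a minimal sketch of that selection argument, with a hypothetical payoff matrix: in a twin Prisoner's Dilemma, a CDT-style agent defects against its own copy and collects the lower payoff, which is the sense in which it gets selected out.

```python
# Toy twin Prisoner's Dilemma: each agent plays against an exact copy of
# itself, so both copies necessarily choose the same move. Payoffs are
# hypothetical, chosen only to have the usual PD ordering.
PAYOFF = {  # (my move, twin's move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def cdt_move() -> str:
    # CDT treats the twin's move as causally fixed, so it takes the
    # dominant action: defect.
    return "D"

def reflective_move() -> str:
    # An agent that models the twin as running the same computation picks
    # the action that is best when mirrored: cooperate.
    return "C"

for name, policy in [("CDT", cdt_move), ("reflective", reflective_move)]:
    move = policy()
    print(name, "scores", PAYOFF[(move, move)])  # the twin plays the same move
# CDT scores 1 per round, the reflective agent scores 3, so a population of
# CDT agents grows more slowly and is selected out.
```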

Zach Stein-Perlman

It's certainly not clear to me that acausal trade works, but I don't think these problems are correct.

  1. Consider a post-selection state: a civilization has stable control over a fixed amount of resources in its universe
  2. idk, but it feels possible (and it's just a corollary of the problem of modeling the distribution of other civilizations that want to engage in acausal trade)
Cole Wyeth

I don’t think it really works, for similar reasons: https://www.lesswrong.com/posts/y3zTP6sixGjAkz7xE/pitfalls-of-building-udt-agents

I also share your intuition that there is no objective prior on the mathematical multiverse. Additionally, I am not convinced we should care about (other universes in) the mathematical multiverse.

the gears to ascension

It seems easier to imagine trading across Everett branches, assuming one thinks they exist at all. They come from a similar starting point but can end up very different, which reduces the severity of problem 2.

Yair Halberstadt

I think that it's good to think concretely about what multiverse trading actually looks like, but problem 1 is a red herring: Darwinian selective pressure is irrelevant where there's only one entity, and ASIs should ensure that, at least over a wide swathe of the universe, there is only one entity. At the boundaries between two ASIs, if defense is simpler than offense, there'll be plenty of slack for non-selective preferences.

My bigger problem is that multiverse acausal trade requires that agent A in universe 1 can simulate that universe 2 exists, with agent B, which will in turn simulate agent A in universe 1. This is not theoretically impossible (if, for example, the amount of available compute increases without bound in both universes, or if it's possible to prove facts about the other universe without needing to simulate the whole thing), but it does seem incredibly unlikely, and almost certainly not worth the cost required to search for such an agent.
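
As a toy illustration (my construction, not something from the comment) of how mutual prediction can be much cheaper than simulating the other party's whole universe: "clique bot" style agents cooperate exactly when the other agent's source code is identical to their own, so verifying the partner reduces to a syntactic check.

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    # Cooperate iff the opponent is literally running this same program;
    # an exact source-code check stands in for an infeasible full simulation.
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def defect_bot(opponent_source: str) -> str:
    return "D"

print(clique_bot(inspect.getsource(clique_bot)))  # C: two clique bots cooperate
print(clique_bot(inspect.getsource(defect_bot)))  # D: no match, no cooperation
```

The catch, which matches the worry above, is that anything weaker than exact equality requires genuinely reasoning about (or simulating) the other agent, and searching for a suitable partner may cost more than the trade is worth.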

MinusGix

A core element is that you expect acausal trade to take place among far more intelligent agents, such as AGI or even ASI, and that they'll be using approximations.

Problem 1: There isn't going to be much Darwinian selection pressure against a civilization that can rearrange stars and terraform planets. I'm of the opinion that such pressure has mostly stopped mattering now, and will matter even less over time, as long as we don't end up in an "everyone has an AI and competes in a race to the bottom" scenario. I don't think it is that odd that an ASI could resist selection pressures: it operates on a faster time-scale and can apply more intelligent optimization than evolution can, towards the goal of keeping itself and whatever civilization it manages stable.

Problem 2: I find it somewhat plausible that there are some sufficiently pinned-down variables that could get us to a more objective measure. However, I don't think that is needed, and most presentations of this don't go for an objective distribution.
So, to me, using a UTM that is informed by our own physics and reality is fine. This presumably results in more of a 'trading nearby' sense: the typical example is trading across branches, but it holds in more generality. You have more information about how those nearby universes look anyway.

The downside here is that whatever true distribution there is, you're not trading directly against it. But if it is too hard for an ASI in our universe to manage, then presumably many agents aren't managing to acausally trade against the true distribution regardless.

Trevor Hill-Hand

There's no Darwinian selective pressure to favor agents who engage in acausal trades.

I think I would make this more specific: there's no external pressure from that other universe, sort of by definition. So for acausal trade to still work, you're left only with internal pressure.

The question becomes, "Do one's own thoughts provide this pressure in a usefully predictable way?"

Presumably it would have to happen necessarily, or be optimized away; perhaps as a natural side effect of having intelligence at all, for example. I think that would be a similar argument to "Do natural categories exist?"


While I'm intrigued by the idea of acausal trading, I confess that so far I fail to see how such trades make sense in practice. Here I share my (unpolished) musings, in the hope that someone can point me to a stronger (mathematically rigorous?) defense of the idea. Specifically, I've heard the claim that AI Safety should consider acausal trades over a Tegmarkian multiverse, and I want to know if there is any validity to this.

Basically, I, in Universe A, want to trade with some agent that I imagine to live in some other Universe B, who similarly imagines me. Suppose I really like the idea of filling the multiverse with triangles. Then maybe I can do something in A that this agent likes; in return, it goes on to make triangles in B.
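
Just to make the bookkeeping explicit (all numbers below are hypothetical, not from the post): the trade only makes sense if my credence that such a partner exists, models me correctly, and reciprocates, multiplied by the value I place on triangles appearing in B, exceeds what the favor costs me in A.

```python
# Hypothetical numbers for the triangle trade described above.
p_reciprocation = 0.01     # credence that a partner in B exists, models me, and follows through
value_of_triangles = 50.0  # how much I value the triangles it would build in B
cost_in_A = 1.0            # resources I give up in A doing the thing it likes

expected_gain = p_reciprocation * value_of_triangles - cost_in_A
print("expected gain:", expected_gain)         # -0.5 with these numbers
print("trade worthwhile?", expected_gain > 0)  # False
```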

Problem 1: There's no Darwinian selective pressure to favor agents who engage in acausal trades. Eventually, natural selection will just eliminate agents who waste even a small fraction of their resources on these trades, rendering the concept irrelevant to a descriptive theory of rationality or morality. To the extent that we do value multiverse happiness, it should be treated as a misgeneralization of more useful forms of morality, persisting only because acausal trades never occurred to our ancestors.
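
A minimal sketch of this selection pressure, under an assumed exponential-growth model with made-up numbers: if acausal traders divert even a small fraction of their resources to trades that only pay off in other universes, their share of Universe A shrinks toward zero.

```python
# Two lineages growing exponentially in Universe A (hypothetical parameters).
growth = 1.10    # per-generation growth factor from in-universe resources
epsilon = 0.02   # fraction of resources diverted to acausal trades

non_traders, traders = 1.0, 1.0
for _ in range(500):
    non_traders *= growth
    traders *= growth * (1 - epsilon)  # the diverted fraction buys nothing in A

share = traders / (traders + non_traders)
print(f"acausal traders' population share after 500 generations: {share:.1e}")
```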

Defense 1a: OK, maybe instead of inducing the agent to make triangles in B, I induce it to build copies of me in B. Then surely, on a multiverse scale, I'm being selected for? Well, not quite: selection in the long term is not about sheer numbers but about survival vs. extinction, and here I'm still going extinct in Universe A, which likely also makes my trades worthless to B.

Defense 1b: OK, even if caring about acausal trades is a misgeneralization in evolutionary terms, since we do care about the multiverse, shouldn't we ensure that the ASI does too? Maybe a sufficiently powerful ASI can forever resist selection pressures, but this sounds highly speculative to me.

Problem 2: A more critical issue is that for every Universe B that rewards us for doing X, there's another Universe C that rewards us for not doing X. How do we reason about which of B or C to assign more weight? Solomonoff induction? One of my research projects (please stay tuned!) is a rigorous defense of Solomonoff induction, but the defense I have in mind merely argues that Solomonoff induction predicts better than other algorithms. It stops short of treating it as an objective measure over possible worlds. If anything, it actually suggests the opposite: my argument presents probabilistic beliefs as essentially emergent properties of successful predictors. Since these multiverse beliefs are irrelevant to prediction, the idea of a probability measure over universes seems ill-defined. Moreover, Solomonoff induction requires a reference UTM, and my previous paper suggests this depends on the laws of physics. Such a universe-dependent measure lacks objective meaning in a true multiverse setting.
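
To make the reference-machine point concrete (a toy sketch with invented description lengths, not the argument from the paper): the 2^-length prior over the same two "universes" can rank them in opposite orders under two different reference machines, and the invariance theorem only bounds the disagreement by a machine-dependent constant.

```python
# Invented shortest-description lengths (in bits) for two universes under two
# hypothetical reference machines.
lengths_machine_1 = {"universe_B": 10, "universe_C": 20}
lengths_machine_2 = {"universe_B": 20, "universe_C": 10}

def normalized_prior(lengths):
    # Weight each universe by 2^-(description length), then normalize.
    weights = {u: 2.0 ** -k for u, k in lengths.items()}
    total = sum(weights.values())
    return {u: w / total for u, w in weights.items()}

print(normalized_prior(lengths_machine_1))  # B gets ~0.999 of the weight
print(normalized_prior(lengths_machine_2))  # C gets ~0.999 of the weight
```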

So what do you think: does multiverse trading really work?