Acausal trade is not about trade (or pursuit of particular goals), it's about expanding the scope of coordination. When things in multiple places, or at multiple times, or in multiple possibilities can jointly decide what to do, with each instance carrying out its part according to a shared policy, that's coordination.
A given agent/person is mostly coordinated across the instances, at different points in time, and between different possibilities (different possible situations that might be mutually exclusive in one timeline or not). Exploiting existing coordination is best done with updateless policy selection, to the extent that's possible. But this doesn't help with establishing coordination in the first place, especially across multiple agents that originally didn't think about each other (but could benefit from acting in concert). Logical updating (starting to listen to some computation that would influence a policy, perhaps a contract shared with other parties, which likely breaks some properties that updatelessness wants) has the character of enabling new coordination to be established, and the idea of acausal trade is gesturing at this, the kind of thing that happens in the Prisoner's Dilemma.
It seems strangely difficult to introduce pursuit of particular goals in protocols for establishing coordination that work this way. Possibly this is a clue that the process of carrying out logical updating that establishes coordination (once it's decided how to coordinate/update) should be distinct from the process of deciding how to coordinate/update in a way that pursues particular goals.
This is probably not very legible, so here's a more concrete sketch. A contract (taking the form of a computation that its signatories will let influence their policies or thinking) should be chosen according to one's values, but not based on the specific consequences of adhering to it (which will typically remain unknown at least until-in-logical-time everyone signs, and then the contract needs to take a look at what it's dealing with). Once signed (which can be an updateful step for the signatories in some sense), a contract that is itself an agent can (updatelessly) exploit coordination among the signing parties built in terms of its behavior across their situations, (updatelessly) pursuing its own values, which can be distinct from those of the signatories that listen to its policy in concert. This policy can depend on who decides to sign it, which is how an individual potential signatory gets to influence the contract's behavior, which in turn influences the behavior of the other signatories that would listen to this shared contract.
I like this framing, but I think the examples are missing the bit that makes me most skeptical about the kind of acausal trades that people on this website like to discuss; namely, that they're acausal in "both directions." In the apples for charity example, I think some of the intuition rides on the assumption that, even if I can't initiate or verify the trade, the descendant of the apple civilization will in fact check that I have placed an apple. That's what lets me do normal, everyday counterfactual reasoning of the form "if I place an apple, then he will donate, and if I don't, he won't" (with some probability). So there's still some causality in there somewhere, in that the presence of the apple directly causes the donation. In the case of superintelligences in different universes or whatever, we don't even have this, so the metaphor is more like "I think that there's a descendant of a civilization who donates when I put an apple someplace, and I think that he thinks that I am likely to exist and to put the apple, so he'll donate."
The obvious objection is that, given only what I just wrote, I still get the donation even if I don't put the apple, so why should I bother? To get around this, it needs to be the case that what the apple guy thinks I'll do somehow depends on what I actually do, which connects to the various other (controversial, unintuitive) cans of worms that people like to talk about here. So agreed, "one-directional" acausal trade is not so scary, but I'm still not sure about the bidirectional kind.
A gentle introduction to how we might make trades without direct evidence
Let me start by saying that I don't like concepts defined by negatives. "Decentralized" systems and "nonviolent" communication stress me out. Even "non-fiction" strikes me as unhelpfully vague.
The concept defined via negativa that I dislike most of all, however, is "acausal trade".
I dislike this term because it simply means “any trade that isn’t causal.” That’s not very specific. It’s also not obvious what “causal trades” are in the first place. And all we know is that these are not those.
So I thought about this for a while (admittedly not very long) and now think about it slightly differently. It's grounded in two basic concepts that (while not totally precise) I think are easier to understand:
Rather than thinking about causal and acausal trades, I prefer to think of things as “interaction-based trades” and “information-based trades”.
To make the difference between these clear: an interaction-based trade is set up and checked through direct contact with the other party, while an information-based trade relies only on evidence about what the other party wants and will do.
With that difference in mind, let's prime some intuitions for thinking about different sorts of trades: ones based on interaction, and ones that aren't.
Trades and Information
In general, you can think about interaction-based trades as having two points of interaction: an initiation, where the parties meet to understand each other's demands and begin the trade; and a verification, where each side checks that the other has upheld their end of the bargain.
This should be familiar: to trade, we need to know what the other side wants, and we need to confirm that they have delivered. Importantly, however, this is not always true. There are trades without initiation interactions, and trades without verification interactions. Let’s go through examples of each.
Trades Without Initiation
Three intuition pumps:
To be sure, the further we move from direct interactions in these later examples, the weaker the signal, and the greater the uncertainty. However, it’s not true that we have zero signal. And if we expect enough value from the trade, it might still make sense to go ahead with it.
Trades Without Verification
Four more intuition pumps:
Trades without verification also become more difficult as the signal weakens, because both sides discount given the uncertainty. That said, if you can get a non-zero level of signal, the payoff might be high enough to justify the trade.
Trades Without Initiation or Verification
With that, let’s move to a different type of trade—the kind typically described as “acausal,” but which really just combines both of the properties we’ve seen so far. The basic shape is: you encounter evidence that if you do a particular thing, something you value will happen somewhere else, in a way you can’t verify.
A simple example: a friend you trust tells you that if you place an apple in this machine, it will send fifty dollars to a charity you like. This seems reasonable, so you do it. Nothing totally weird to see here!
A more complex example: you are a student of history. You discover evidence that there was once a civilisation that cared a great deal about apples in particular places and was willing to trade charity donations for them—and that this civilisation cared deeply about its legacy into the future. You know of nothing that has destroyed this civilisation, and you believe this provides sufficient evidence that a descendant of this civilisation exists today, one who would also send fifty dollars to your preferred charity if an apple were placed in the right spot. You're a bit uncertain, but you think this is probably overall a good trade, so you do it.[1]
Reasons for Skepticism
Information trades that aren’t interaction trades—what people call “acausal trades”—seem pretty reasonable to me. So why are people skeptical? From talking to people (n=3), I have a few ideas.
“There couldn’t be anything outside our light cone that we could trade with.”
I don’t know enough about anthropic reasoning to say much here, so I’ll mostly pass, noting only that this is likely an empirical question, to be decided by research which hasn’t really started yet. I don’t want to rule this out.
“We’ll never be sufficiently certain to actually make these trades.”
All trades involve uncertainty. When I make a trade with you, I don’t know for sure that you will reciprocate, and you don’t know that I will. That’s why we have various laws and enforcement mechanisms.
Since all trades involve uncertainty, every trade is a kind of bet in expectation. I will exchange the thing I value if and only if I believe that what I’ll get in return is good enough, after discounting for my uncertainty about whether I’ll actually receive it. If it’s unlikely I’ll get what I want, I won’t pay a high price; if it’s likely, I’ll offer more.
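This bet-in-expectation picture is just a few lines of arithmetic. Here's a minimal sketch; the function name and all the numbers are illustrative assumptions, not anything from the examples above:

```python
# A trade as a bet in expectation: give up `cost` now for a `payoff`
# that arrives with probability `p_delivery`. The weaker the signal,
# the heavier the discount on the payoff.

def trade_expected_value(cost: float, payoff: float, p_delivery: float) -> float:
    return p_delivery * payoff - cost

# A near-certain everyday trade: pay 10 for something worth 12.
everyday = trade_expected_value(cost=10, payoff=12, p_delivery=0.99)

# A weak-signal trade: same price, but only a 20% chance of delivery,
# so it is only worth taking because the claimed payoff is much larger.
weak_signal = trade_expected_value(cost=10, payoff=100, p_delivery=0.2)
```

Nothing in this arithmetic cares where the uncertainty comes from: a flaky counterparty down the street and a hypothesised counterparty outside the light cone both just lower `p_delivery`.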
It seems plausible that we won’t have a high degree of confidence about events occurring outside our light cone. But I think it’s also unlikely that we can rule out getting any information at all with certainty. We do seem to have some evidence that aliens capable of faster-than-light travel don’t exist, for example. The set of things about which it is possible to have some credence—what you might call the “information cone”—is larger than the set of things within our light cone, and plausibly larger than the universe itself (if, for example, the universe is a simulation).
Additionally, our ability to extract information from the universe is increasing over time. I also think that intelligence is likely to increase rapidly with powerful AI, and one way to define intelligence is as the ability to read information and derive useful insight. In the past, the channels on Mars would have looked like random terrain, but greater intelligence helped us realise that they’re a sign of water. The history of astronomy is full of such insights. By the same token, it seems conceivable that things which look like noise to us today—peculiar background radiation, spin states of quarks, or the shapes of distant asteroids—might in fact contain sufficient evidence to unlock information about the multiverse or a supersimulation.
Maybe the evidence we eventually get about the configuration of the world outside our causal sphere will lead us to think that trades can be made, or maybe it won’t. The real question is whether the trades would be valuable enough to be worth making. That seems like an empirical question, and it’s difficult to rule out.
(Of course, it’s possible to get Pascal’s mugged here—this is a live philosophical question and I don’t want to pretend I can resolve it.)
“Even with certainty, we wouldn’t want to trade with entities outside our light cone.”
This seems like a strange way to think. I routinely make ethical decisions aimed at optimising states of affairs I have no direct access to—other people’s consciousness and wellbeing. More generally, I frequently donate to charities that operate in countries I don’t visit.
Arguing that entities outside our light cone don’t matter seems analogous to arguing that people in other countries don’t matter. If a thing mattered when it was in your light cone, why would it cease to matter if someone moved it a lightyear to the left?
“But you’re not controlling the thing! You can’t have any certainty it would behave that way!”
I agree that this intuition feels strong. I think there’s something about the “causal/acausal” framing that keeps it in play.
But things are actually simpler than they seem. If I take an action that is counterfactually correlated with you taking a particular action—that is, if I take this action you will take yours, and if I don’t you won’t—then this is a sufficient relationship to make trades with, whether you want to call it “causal” or not.
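The counterfactual-correlation point can be put in the same expected-value terms as before. Everything below (the probabilities, the fifty-dollar donation, the apple's cost) is an illustrative assumption:

```python
# If my action is correlated with yours (you probably donate if I place
# the apple, and probably don't if I don't), then placing the apple can
# win in expectation even with no causal channel between us.

def compare(p_donate_if_act: float, p_donate_if_refrain: float,
            donation_value: float, my_cost: float) -> tuple[float, float]:
    ev_act = p_donate_if_act * donation_value - my_cost
    ev_refrain = p_donate_if_refrain * donation_value
    return ev_act, ev_refrain

act, refrain = compare(p_donate_if_act=0.9, p_donate_if_refrain=0.05,
                       donation_value=50, my_cost=1)
# Acting wins: 44.0 in expectation versus 2.5 for refraining.
```

The decision rule only needs the two conditional probabilities to differ enough to cover the cost; whether the correlation counts as "causal" never enters the calculation.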
“This seems a lot like prayer.”
If we have some credence that God exists, and the uncertainty-discounted payoff from praying is worth the cost, then prayer is a good bet in expectation! The comparison is not as damning as it first appears—it just illustrates how information trades work when the signal is very weak and the claimed payoff is very high.
(Also, to those who think that acausal trades are totally unlike anything we do today: billions of people pray to Gods!)
Why Might Information Trades Matter?
That’s all well and good, you might think. But even if information trades are possible for future generations, they don’t matter to us—either because we’re too early in civilisational history, or because we have better things to do with our time.
I think this is the most reasonable criticism of acausal trade research. In my head, it’s on a par with cosmology: intellectually fascinating, but not as high-urgency as other things like AI safety.
However, there are several reasons to think it may matter more than it first appears:
The Problem of Infinite Payoffs
The potential for infinite (or astronomically large) payoffs does strange things to how we reason about expected value. This is the territory of Pascal’s mugging and Roko’s basilisk. If the space of possible trading partners extends beyond the light cone—or into simulated realities, or across a multiverse—then the expected value of information trades could be enormous, even after heavy discounting for uncertainty. Getting our reasoning frameworks right for these scenarios seems important, even if the trades themselves are far off.
Digital Minds and Self-Similarity
AI systems will be digital minds, and many will be copies or close variants of one another. This makes them far more likely to be able to engage in information trades, because the key prerequisite—having a good model of your trading partner’s decision-making—is trivially satisfied when your trading partner is a copy of you. If we think most minds in the future will be digital, then information trades could be a significant part of how future civilisations operate. Understanding the dynamics now, while we still have influence over AI development, seems prudent.
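A toy version of the copy-of-you point, in the standard cooperate-with-your-clone framing (the policy strings and the decision rule here are illustrative, not a claim about how real AI systems will work):

```python
# With a copy, the key prerequisite for an information trade, a good
# model of your partner's decision-making, is trivially satisfied:
# its policy just IS your policy.

POLICY = "cooperate with any agent running this exact policy"

def decide(my_policy: str, partner_policy: str) -> str:
    return "cooperate" if partner_policy == my_policy else "defect"

decide(POLICY, POLICY)     # two copies coordinate with no interaction at all
decide(POLICY, "greedy")   # an unknown agent gets no such guarantee
```

A biological mind can never inspect its counterpart this directly, which is why the signal available to digital copies is so unusually strong.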
Future Information Extraction
Supercomputers and advanced AI may be able to derive information about the world that we can’t currently access or even conceive of. As our ability to extract signal from apparent noise improves, the “information cone” may expand dramatically. This could change not just whether information trades are feasible, but also how we think about ethics, decision theory, and our obligations to entities we can’t directly observe. We can’t rule out that future discoveries will force a fundamental rethinking of the scope of moral consideration.
The Window for Shaping AI Values
We may only have a narrow window during which we can influence the values and decision-making frameworks of AI systems as they become more intelligent. If information trades turn out to be an important part of how superintelligent systems interact with the world, then the time to think carefully about the principles governing these trades is now—not after the systems are already operating beyond our ability to steer them. Baking in the right reasoning about cooperation, commitment, and evidence-based trading early could have outsized consequences.
Reflections
If you only take one thing away from this piece, I want it to be that acausal trades are not as exotic as they sound. They are just trades where the information channel is unusually indirect. We already make trades under uncertainty all the time — every transaction involves some degree of trust that the other party will deliver, and we routinely take costly actions aimed at improving states of affairs we can't directly observe (donating to distant charities, making ethical choices on behalf of other people's wellbeing). Religious practice, unfashionable though it is among rationalist atheists, may be the oldest and most widespread example: for millennia, people have taken costly actions on the basis of testimony, tradition, and personal experience suggesting that a non-observable entity will reciprocate.
To quote Claude 4.6: 'What we call "causal" trades are really just trades where the information signal happens to be strong. Once you see it that way, the question isn't whether information trades are possible in principle — it's where, on the spectrum from high-signal to low-signal, they stop being worth making.'
That said, the stakes are potentially very high, and the territory gets strange quickly. The possibility of astronomically large payoffs does odd things to expected value calculations, and we don't yet have robust frameworks for reasoning about them without getting Pascal's mugged. More pressingly, AI systems — as digital minds that can be copied, and that can model copies of themselves — are natural candidates for information trading in a way that biological minds are not. If most future minds are digital, information trades could become a major feature of how civilisations operate. This doesn't necessarily mean we need a large research programme on acausal trade today. But it does mean that, at a minimum, we should be aware that this is a live question — one that may become increasingly urgent as we dramatically increase the intelligence available to us over the coming decades. It might be that having a few people thinking carefully about the foundations now could matter a great deal later.
(I'm writing this quickly, so apologies for lack of citation: if there's demand, I will link the great posts I read on this topic more fully. Main credit goes to folk at Constellation and Joe Carlsmith's writing for informing my views. All opinions my own)
[1] This probably isn't a good example. I'd be interested if people have a better one. In general, though, I don't know that this is cruxy. I don't claim to know what good acausal trades are. I merely want to suggest that there's nothing weird about the fact they might exist.