Division of labour in this world depends on trade — physical movement of goods — to get the goods from where they are made to where they are wanted. I do not see how I benefit from things I want being made in another universe, that cannot be sent here, nor how they in their universe benefit from me making stuff in mine.
I agree that this depends on people caring about goods in different universes. I care a bunch about goods in different universes and I expect many others will too.
(Actually there might be galaxy-brained decision-theoretic arguments nevertheless, including based on the prospect that we're in a simulation, but the basic case depends on caring about different universes.)
I believe this post has received three or four strong-downvotes. Downvoters, I'd appreciate if you DMed me why. I know some people are sensitive about infohazards on related topics but I think this post is fine and I'm interested in hearing if not.
There's a specific problem with acausal trade that I want to point out. Acausal cooperation does make sense, from a certain point of view, but I think acausal trade is taking it a bit too far. Correct me if I'm wrong on anything here or if I've misunderstood the post.
Take two parties, each producing a bundle of goods from which it derives a direct reward.
Here we are acausal and, for whatever reason, care what happens outside our light cone, so each party also passes the goods bundle produced by the other party through a value function.
Under acausal trade, each party's reward then becomes its direct reward plus the value it assigns to the other party's bundle.
Can you see the problem? For any value function, the other party's bundle is fixed from our perspective, so the acausal term is a constant that nothing we do can move.
Trade is temporal; it's dynamic. It takes place only when there is a future in which reward may differ depending on what we do. But with acausal trade there is no future reward, just the background expectation of what other light cones are doing, and because of acausality there is nothing we can do to change this.
If we knew lowering our own reward would raise the other party's, there would still be no channel by which that sacrifice reaches the other light cone; their bundle is what it is regardless of our choice.
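In symbols (a sketch; the notation is mine): let each party $i$ choose a bundle $g_i$, with direct reward $R_i(g_i)$ and a value function $v_i$ over the other party's bundle.

```latex
% Reward under acausal trade (j denotes the other party):
R'_i = R_i(g_i) + v_i(g_j), \qquad j \neq i.
% Since g_j lies outside party i's light cone, v_i(g_j) is constant
% in everything party i controls, so
\arg\max_{g_i} R'_i = \arg\max_{g_i} R_i(g_i).
```

That is, the acausal term drops out of the optimization entirely.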
I think you can simulate a bunch of other universes, determine what values are held by people in other universes who are into acausal cooperation (roughly speaking), then cooperate with those values and expect that people in other universes will cooperate with you in proportion to your propensity to cooperate. Like in Newcomb's problem, you get to choose the output of your decision procedure, and that determines both what you do and what good predictors will predict you'll do. Maybe I misunderstand your point; maybe we don't disagree.
Ah okay. I think this reveals a larger disagreement actually.
You are assuming we already value acausal trade.
I'm saying we should look at the basic reward function as the dynamics of a microstate within a larger sociophysical system. My point is we can use game theory to lay out the acausal landscape, then use those dynamics to estimate whether societies will converge to your assumption in the first place.
When I say 'converge' I don't mean convergence of opinion, either. I'm not saying people are inherently selfish and only care about the reward functions within their light cone. I mean that societies at scale simply aren't capable of acausal trade, by natural sociological selection.
Look inside the light cone at the social configuration space: treat each possible society as a microstate whose probability is weighted by its aggregate reward.
So depending on the sign of the Lagrange multiplier, societies with higher aggregate reward are either more or less likely to exist. I argue the sign is such that higher-aggregate-reward societies are the more likely ones.
So if we take my previous comment, lowering individual reward lowers the social aggregate reward, resulting in a rarer society. Societies that engage in complete acausal specialization will be very rare.
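One way to write that down (again a sketch, with my own notation): give each social configuration a Boltzmann-style weight in its aggregate reward.

```latex
% Probability of a social configuration s with aggregate reward R(s),
% with Lagrange multiplier \beta:
P(s) = \frac{e^{\beta R(s)}}{\sum_{s'} e^{\beta R(s')}}.
% For \beta > 0, higher-aggregate-reward societies dominate, and a
% society that sacrifices aggregate reward for acausal specialization
% is exponentially suppressed.
```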
Suppose you reflect a bunch and land on a linear[1] utility function. Suppose in our universe, the most efficient way to increase your utility is creating many "widgets." We'll launch von Neumann probes to claim resources in distant galaxies. With your share of the lightcone, you can cause there to be (say) more widgets than otherwise. That sounds like a lot, but when you think about it, it's too bad that our universe wasn't better suited for producing widgets — it could have been that you could create so many widgets that we'd need new notation to express such vast numbers, or that there was a structure profoundly more efficient than widgets for increasing utility.
Good news! There are probably many other universes out there — not just Everett branches but also very different universes, many not made of atoms. Some are much better suited for increasing your utility than this one, while this one has some comparative advantage. The gains from trade are astronomical. You can't actually communicate with other universes, but you can (given mature technology) simulate them to cooperate with them acausally. So we'll simulate other universes to figure out what everyone[2] values, then determine what goods our universe can produce to best promote those values (perhaps relative to other universes). And then we'll fill the universe with goods accordingly.
In particular, say our universe is comparatively advantaged in producing "diamonds." If diamond-lovers lived in universes comparatively advantaged in increasing your utility (whether by producing tons of widgets or by producing goods much more efficient at increasing your utility than widgets), that would present large gains from trade. And trade doesn't need to be one-to-one; it can be like we produce diamonds in order to get credit from diamond-lovers—we couldn't get more credit from others by producing anything else—and we spend that credit to get utility somewhere in the multiverse. But actually the market analogy and comparative advantage aren't quite right; it might be more like everyone adopts a compromise/aggregate utility function and then pursues that, rather than trading.[3]
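A toy calculation of these gains from trade (the goods, efficiencies, and numbers here are all made up for illustration): universe A's agent values only widgets but A is far better at producing diamonds, and vice versa for universe B.

```python
# Toy model (made-up numbers): two universes with opposite comparative
# advantages. Universe A is efficient at diamonds, universe B at widgets.
# The agent in A values only widgets; the agent in B values only diamonds.

RESOURCES = 100  # units of resources each universe controls

# Output per unit of resources spent on each good.
efficiency = {
    "A": {"widgets": 1, "diamonds": 10},
    "B": {"widgets": 10, "diamonds": 1},
}

def total(good, a_produces, b_produces):
    """Multiverse-wide quantity of `good` given each universe's choice."""
    out = 0
    if a_produces == good:
        out += efficiency["A"][good] * RESOURCES
    if b_produces == good:
        out += efficiency["B"][good] * RESOURCES
    return out

# "Defect": each agent naively produces the good it values itself.
naive = (total("widgets", "widgets", "diamonds"),   # agent A's utility
         total("diamonds", "widgets", "diamonds"))  # agent B's utility

# "Cooperate" acausally: each universe specializes in its comparative
# advantage, producing the good the *other* universe's agent values.
specialized = (total("widgets", "diamonds", "widgets"),
               total("diamonds", "diamonds", "widgets"))

print(naive)        # (100, 100)
print(specialized)  # (1000, 1000)
```

With these numbers each agent ends up 10x better off under specialization, even though no goods ever cross between the universes.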
Producing what you prefer is much like defecting in the prisoner's dilemma. If everyone instead produced goods based on a combination of what everyone values and what their universe is efficient at, everyone would be better off. Fortunately, we can simulate others in order to only cooperate with the people who cooperate, and thus we can incentivize everyone to cooperate. So I think if people mostly get acausal trade right, then our universe will mostly be tiled[4] with "diamonds."
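The conditional-cooperation logic can be sketched in code (a toy, with made-up prisoner's dilemma payoffs and a crude depth-limited stand-in for "simulating" the other party):

```python
# Toy sketch: agents can simulate each other, so each can condition its
# move on what the other *would* do. To avoid the infinite regress of
# mutual simulation, the conditional cooperator defaults to "C" when its
# simulation budget runs out.

PAYOFF = {  # (my move, their move) -> my payoff; a standard PD matrix
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def defector(opponent, depth):
    return "D"  # defects no matter what the opponent would do

def conditional_cooperator(opponent, depth):
    if depth == 0:
        return "C"  # optimistic default at the recursion limit
    # "Simulate" the opponent playing against me, with less budget.
    return "C" if opponent(conditional_cooperator, depth - 1) == "C" else "D"

def play(a, b, depth=3):
    return PAYOFF[(a(b, depth), b(a, depth))], PAYOFF[(b(a, depth), a(b, depth))]

print(play(conditional_cooperator, conditional_cooperator))  # (3, 3)
print(play(conditional_cooperator, defector))                # (1, 1)
```

Two conditional cooperators reach mutual cooperation, while a defector gets defected against, so cooperation can't be exploited: that's the incentive structure described above.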
On acausal cooperation in general, see Nguyen and Aldred 2024a.[5]
Why are there gains from trade with other universes? Here I focus on how different universes have different comparative advantages;[6] I think that's most important. Additionally, the production possibilities frontier between different goods within one universe may be better than linear.[7] And disagreements about the prior, including attitudes about infinities and weights on various universes, may create opportunities for trade.
Epistemic status: seems correct. I asked some experts about my thesis; comments in footnote.[8] This idea isn't novel; for example, Nguyen and Aldred 2024b observes that different universes have different comparative advantages. I wrote this post because I hadn't heard my thesis—the cosmic endowment will basically be used for whatever our universe is good at producing—before, nor read anything on what acausal cooperation actually entails about the cosmic endowment. But after writing the post, comments suggest the thesis might be banal among experts.
In addition to commenting below, if you have minor nitpicks or confusions you can comment on this doc.
At least beyond some low threshold.
Everyone who does acausal cooperation.
(Maybe everyone, across universes, will converge to the same preferences, in which case trade is moot.)
You might make deals that look good from behind the veil of ignorance, even after the veil has been lifted. (I think this is related to "updatelessness.")
Such trades are a generalization of nuances like "everyone should coordinate to disincentivize coercion" and "everyone should reward people who opted to grow the pie rather than grab power."
All that is disanalogous to the "individuals have resources and trade in a market" picture — and if we're doing "maximize an aggregate utility function" rather than trade, comparative advantage may not matter (especially if the utility function is linear-ish). Setting all that aside, I think the market analogy is helpful because some economics principles translate. For example, if Alice highly values just putting a specific QR code somewhere in your universe, that doesn't mean you can capture any surplus by creating that QR code, since other acausal-cooperators in your universe would also be interested in trading and your competition would drive the price down to the cost of creating the QR code.
Oesterheld 2017 also discusses updatelessness in this context.
But note there may be some diversity within "diamonds," either because some people value diversity or for practical reasons like different parts of the universe being suited for different goods.
People distinguish acausal trade, which involves reciprocity, from ECL, which just involves correlation. I don't think the difference matters here. I think you'll still simulate lots of other universes in order to do ECL well.
Nuance: one weird source of comparative advantage is that some people may have preferences that are not straightforwardly scope-sensitive — instead they want a whole small universe to be used in some particular way, or they want a small structure to be placed in many universes. Perhaps the 90% of universes that aren't huge and aren't great for producing particular goods will mostly be used for weird stuff like that — that's their comparative advantage, if 10% of universes are dramatically better for producing many goods or high-quality goods.
In particular, maybe if several agents each create some compromise good, rather than naively creating whatever most efficiently gives them utility, they're all better off. Discussion of acausal cooperation usually focuses on this consideration, but I think it's less important than specialization between universes. There could be compromises because (A) many preferences themselves are compatible with a wide variety of goods — preferences like "I dislike suffering" and "I want all sentient beings to have somewhat happy and meaningful lives, but I don't need them to be extremely happy/meaningful and I don't need there to be lots of beings." Or there could be compromises because (B) there's opportunity for compromise even between apparently rigid preferences like "maximize paperclips" and "maximize staples" and "maximize happiness." I intuit that (B) is mostly false; others have intuited that it's importantly true.
Separately, maybe there's more than one kind of input resource and different goods use different sets of inputs.
Anthony DiGiovanni said (off the cuff, missing context):
Emery Cooper said (off the cuff, missing context):
Another expert said they hadn't fully considered it.