[ Question ]

What would be a good name for the view that the value of our decisions is primarily determined by how they affect causally-disconnected regions of the multiverse?

by Natália Mendonça · 1 min read · 9th Aug 2020 · 18 comments


Effective Altruism · World Optimization

I’m looking for a word similar to “longtermism” to refer to the view that the most important determinant of the value of our decisions is how they affect regions of the multiverse[1] that are causally-disconnected from ours (henceforth “causally-disconnected regions”), since those regions are very big and can contain far more value-bearing locations than causally-connected regions.[2]

(Affecting causally-disconnected regions is possible if there is some subjunctive dependence between our decisions and outcomes in those regions; for example, if there are copies of us simulated in them.)

This is related to views posed by Wei Dai in this post (though note that he doesn’t use any terminology like “causally-disconnected region”; my usage of that term is influenced by Mati Roy). I’m interested in this because it seems to me that many of the intuitions that would lead someone to support longtermism would also lead them to support this view, as Wei Dai indicated in the last paragraph of this comment.

80,000 Hours used to call a cluster of ideas related to caring about the long-term future the “long-term value thesis,” so I might start calling this the “causally-disconnected value thesis”; however, that sounds a bit too long and cumbersome, which is why I’m asking this question.

[1] If you don’t think there is a multiverse, just interpret my usage of “multiverse” as referring to the same thing as “universe.”

[2] Note that this definition is more analogous to how William MacAskill defines strong longtermism than to how he defines longtermism.


4 Answers

Any of the 2×2 combinations of Acausal/Nonlocal with Consequentialism/Longtermism.

"Acausalism" works, but might be confused with the idea that acausal dependence matters at all, or with other philosophical doctrines that deny causality in some sense.

I'm not sure whether being located in a place is a different thing from that place subjunctively depending on your behavior.

Some more ideas: "outofreachism" (closest to "longtermism"), "extrauniversalism", "subjunctive dependentism" (hardest to strawman), "elsewherism", "spooky axiology at a distance"

Elsewherism strikes me as the most usable of these options for aesthetic reasons. Spooky Axiology at a Distance is the name of my new prog rock band.

Er, isn't "affect causally disconnected" an oxymoron?

Thanks for your comment :) The definition of causality I meant to use in the question is physical causality, which doesn’t refer to things like affecting what happens in causally-disconnected regions of the multiverse that simulate your decision-making process. I’m going to edit the question to make that clearer.

11 comments

I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions). In that sense longtermism (regular or strong, or very strong, or extra-super-duper-strong) is definitely interested in the causally connected (to you now) parts of the Universe. A causally disconnected part would be caring now about something already beyond the cosmological horizon, which is different from something that will eventually go beyond the horizon. You can also be interested in modeling those causally disconnected parts, like what happens to someone falling into a black hole, because falling into a black hole might happen in the future, and so you in effect are interested in the causally connected parts.

I still don't understand what you mean by "causally-disconnected" here. In physics, it's anything in your future light cone (under some mild technical assumptions).

I think you mean to say “causally-connected,” not “causally-disconnected”?

I’m referring to regions outside of our future light cone.

A causally disconnected part would be caring now about something already beyond the cosmological horizon

Yes, that is what I’m referring to.

Ah, okay. I don't see any reason to be concerned about something that we have no effect on. Will try to explain below.

Regarding "subjunctive dependency" from the post linked in your other reply:

I agree with a version of "They are questions about what type of source code you should be running", formulated as "what type of an algorithm results in max EV, as evaluated by the same algorithm?" This removes the contentious "should" part, which implies that you have the option of running some other algorithm (you don't, you are your own algorithm).

The definition of "subjunctive dependency" in the post is something like "the predictor runs a simplified model of your actual algorithm that outputs the same result as your source code would, with high fidelity" and therefore the predictor's decisions "depend" on your algorithm, i.e. you can be modeled as affecting the predictor's actions "retroactively".

Note that you, an algorithm, have no control over what that algorithm is; you just are it, even if your algorithm comes equipped with routines that "think" about themselves. If you postulate that the predictor is an algorithm as well, then the question of decision theory in the presence of predictors becomes something like "what type of agent algorithm results in max EV when immersed in a given predictor algorithm?" In that approach the subjunctive dependency is not a very useful abstraction, since the predictor algorithm is assumed to be fixed. In which case there is no reason to consider causally disconnected parts of the agent's universe.
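To make that framing concrete, here is a minimal sketch (my own illustration, not something from the linked posts) of an agent algorithm "immersed in" a fixed predictor algorithm, using a Newcomb-style payoff. The function names and the specific payoffs are assumptions chosen purely for illustration:

```python
# Toy Newcomb-style setup: the "predictor algorithm" is fixed and simply
# runs the agent's algorithm to decide whether to fill the opaque box.
# We then ask which agent algorithm gets the highest payoff when immersed
# in that predictor.

def predictor(agent):
    """Fill the opaque box iff simulating the agent says it will one-box."""
    return agent() == "one-box"

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def payoff(agent):
    box_filled = predictor(agent)   # predictor simulates the agent
    choice = agent()                # the agent then actually chooses
    opaque = 1_000_000 if box_filled else 0
    transparent = 1_000
    return opaque if choice == "one-box" else opaque + transparent

for agent in (one_boxer, two_boxer):
    print(agent.__name__, payoff(agent))  # one_boxer: 1000000, two_boxer: 1000
```

Under this framing the comparison is just over which agent function scores higher given the fixed predictor, with no talk of the agent "retroactively affecting" the predictor.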

Clearly your model is different from the above, since you seriously think about untestables and unaffectables.

From my understanding of the definition of causality, any action made in this moment cannot affect anywhere that is causally-disconnected from where and when we are. After all, if it could, then that region, by definition, wouldn't be causally disconnected from us.

Are you referring to multiple future regions that are causally connected to the Earth at the current moment but are causally disconnected from each other?

Things outside of your future light cone (that is, things you cannot physically affect) can “subjunctively depend” on your decisions. If beings outside of your future light cone simulate your decision-making process (and base their own decisions on yours), you can affect things that happen there. It can be helpful to take into account those effects when you’re determining your decision-making process, and to act as if you were all of your copies at once.

Those were some of my takeaways from reading about functional decision theory (described in the post I linked above) and updateless decision theory.
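As a toy illustration of that takeaway (a sketch of my own, not taken from those posts): if an identical copy of your decision function is instantiated in a region you cannot physically reach, then choosing that function fixes the outcome there too, even though no signal passes between the regions. The region names and payoffs below are made up for illustration:

```python
# Two causally-disconnected regions each contain a copy running the same
# decision function; the payoff in each region depends only on that copy's
# local choice. A single choice of decision function therefore fixes the
# total outcome, even though neither copy can signal the other.

def total_value(decision_fn, regions=("here", "far away")):
    return sum(10 if decision_fn(region) == "cooperate" else 1
               for region in regions)

def always_cooperate(region):
    return "cooperate"

def always_defect(region):
    return "defect"

print(total_value(always_cooperate))  # 20: both copies cooperate
print(total_value(always_defect))     # 2
```

Neither copy sends anything to the other; the far region's outcome "depends" on you only in the sense that it is computed from the same decision function you are running.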

A far-off decision maker can't have direct evidence of your existence, as then you would be a cause of that evidence.

A far-off observer can see a process that it can predict will result in you, and the things it does may be co-causes, along with that process, of the future shared between you. I still think the verb "affect" is wrong here.

Say there is a pregnant mother, and her friend moves to another country and lives there in isolation for 18 years, but, knowing there is likely to be a person, sends a birthday gift with a card reading "happy 18th birthday." Nothing that you do in your childhood or adulthood can affect what the card says, if the far-off country is sufficiently isolated. The event of you opening the box will be a product both of how you lived your childhood and of what the sender chose to put in the box. Even if the gift sender wanted to reward better persons with better gifts, the choice would need to be based on what kind of baby you were, not what kind of adult you are.

And, maybe crucially, adult you will have a past that is not the past of baby you. The gift giver has no hope of taking a stance toward this data.

I don't think I've seen any existing terms covering this. How about "acausal dominance" or "acausal supremacy"?

I think “acausal-focused” works well as an adjective, compare to “suffering-focused”. As a noun, perhaps “acausal-focused altruism”?

I'm not sure if this is quite it, but it does get at the "acausal trade" framing often taken when discussing these issues.

Does this view lead to any different behaviors or moral beliefs than regular longtermism? Acausal motivations (except in contrived situations, where the agent has forbidden knowledge about the unreachable portions) seem to be simply amplifications of what's right ANYWAY, taking a thousands- to millions-of-years view in one's own light-cone.

quasi-causalism (cf. Leslie's use of "quasi-causation" in "Ensuring Two Bird Deaths with One Throw")