bokov · 10

The closest I can come to examples might be ones where the two-box outcome is so much worse than the one-box outcome that I have nothing to lose by choosing the path of hope.

E.g., picking one box, even though I and everybody else know I'm a two-boxer, if I believe that in this case two-boxing will kill me.

Or, cooperating when unilateral defection, unilateral cooperation, and mutual defection all have results vastly worse than mutual cooperation.
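A toy sketch of the payoff structure I have in mind (all numbers are made up for illustration):

```python
# Hypothetical payoffs as (my payoff, their payoff); every number is invented.
PAYOFFS = {
    ("C", "C"): (100, 100),     # mutual cooperation: vastly better
    ("C", "D"): (-1000, -995),  # unilateral cooperation: catastrophic
    ("D", "C"): (-995, -1000),  # unilateral defection: also catastrophic
    ("D", "D"): (-995, -995),   # mutual defection: catastrophic
}

def best_response(their_move):
    """My payoff-maximizing move, holding their move fixed."""
    return max(("C", "D"), key=lambda m: PAYOFFS[(m, their_move)][0])

# Defecting "wins" by a negligible margin when they defect (-995 vs. -1000),
# while mutual cooperation gains ~1100. Almost nothing to lose by choosing
# the path of hope.
print(best_response("C"), best_response("D"))  # C D
```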

Are these on the right track?

bokov · 10

Because, based on the behavior of people here whose intelligence and ideas I have come to respect, this is an important topic.

Clearly I completely lack the background to understand the full theoretical argument. I also lack the background to understand the full theoretical arguments behind general relativity and quantum uncertainty. Yet there are many real-world practical examples that I do understand and can work backwards from to get a roughly correct intuition about those ideas.

Every example I have seen for CDT falling short has been a hypothetical scenario that almost certainly never happened.

But if the only scenarios where CDT is a dominated strategy are hypothetical ones, I wouldn't expect smart people on LW to spend so much time and energy on them.

bokov · 10

Thank you for responding to my post despite its negative rating.

Can you, as a human, give any practical real-world examples, not relying on nonexistent tech, where anything outperforms non-naive CDT?

By non-naive I mean CDT that isn't myopically trying to maximize the immediate payoff, but rather trying to maximize the long-term value to the player, taking into account future interactions, reputation, uncertainty about causal relationships, etc.
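As a concrete toy illustration of what I mean by non-naive, here is a CDT agent in an iterated prisoner's dilemma that weighs reputation; the payoffs are the standard PD values, but the discounting and the assumption that the opponent mirrors my current move in later rounds are simplifications I made up:

```python
# Standard one-shot PD payoffs to the row player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def long_run_value(my_move, discount=0.9, horizon=50):
    """Immediate payoff against a cooperator, plus discounted future
    rounds in which I cooperate and the opponent mirrors the move I
    chose now (a crude stand-in for reputation effects)."""
    immediate = PAYOFF[(my_move, "C")]
    future = sum(PAYOFF[("C", my_move)] * discount**t for t in range(1, horizon))
    return immediate + future

# Naive CDT grabs the 5 now; non-naive CDT sees that defection poisons
# every future round and cooperates instead.
```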

bokov · 10

In other words, what Putin has already been doing more and more, but with a specific deadline attached?

bokov · 10

Perhaps we should brainstorm leading indicators of nuclear attack.

bokov · 10

I always found that aspect weak. It is clearly and sadly evident that utility pessimization (I assume roughly synonymous with coercion?) is effective and stable, both on Golarion and Earth. Yet half the book seems to be gesturing at what a suboptimal strategy it is without actually spelling out how you can defeat an agent who pursues such a strategy (without having magic and some sort of mysterious meta-gods on your side).

bokov · 10

Update:

I went and read the background material on acausal trade and narrowed down further where I'm confused. It's this paragraph:

> Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands about whom we know next to nothing. We are even disturbed by the suffering of long-dead historical people, and wish that, counterfactually, the suffering had not happened. We even care about entities that we are not sure exist. For example:  We might be concerned by news report that a valuable archaeological artifact was destroyed in a distant country, yet at the same time read other news reports stating that the entire story is a fabrication and the artifact never existed. People even get emotionally attached to the fate of a fictional character.

My problem is the lack of evidence that genuine caring about entities with which one can never interact really is "quite common even for humans today", after factoring out indirect benefits/costs and social signalling.

How common, sincerely felt, and motivating should caring about such entities be for acausal trade to work? 

Can you still use acausal trade to resolve various game-theory scenarios with agents whom you might later contact while putting zero priority on agents that are completely causally disconnected from you? If so, then why so much emphasis on permanently un-contactable agents? What does it add?

bokov · 40

> Acausally separate civilizations should obtain our consent in some fashion before invading our local causal environment with copies of themselves or other memes or artifacts.

Aha! Finally, there it is, a statement that exemplifies much of what I find confusing about acausal decision theory.

1. What are acausally separate civilizations? Are these civilizations we cannot directly talk to and so we model their utility functions and their modelling of our utility functions etc. and treat that as a proxy for interviewing them?

2. Are these civilizations we haven't met yet but might someday, or are these ones that are impossible for us to meet even in theory (parallel universes, far future, far past, outside our Hubble volume, etc.)? Because other acausal stuff I've read seems to imply the latter in which case...

2a. If I don't care what civilizations do (to include "simulating" me) unless it's possible for me or people I care about to someday meet them, do I have any reason to care about acausal trade?

3. Can you give any specific examples of what it would be like for an acausally separate civilization to invade our local causal environment which do NOT depend in any way on simulations?

4. I heard that acausal decision theory has practical applications in geopolitics, though unfortunately without any real-world examples. Do you know of any concrete examples of using acausal trade or acausal norms to improve outcomes when dealing with ordinary physical people with whom you cannot directly communicate?


I realize you probably have better things to do than educating an individual noob about something that seems to be common knowledge on LW. For what it's worth, I might be representative of a larger group of people who are open to the idea of acausal decision theory but who cannot understand existing explanations. You seem like an especially down-to-earth and accessible proponent of acausal decision theory, and you seem to care about it enough to have written extensively about it. So if you can help me bridge the gap to fully getting what it's about, it may help both of us become better at explaining it to a wider audience. 

bokov · 10

What is meant by 'reflecting'?

  • reflecting on {reflecting on whether to obey norm x, and if that checks out, obeying norm x} and if that checks out, obeying norm x

Is this the same thing as saying "Before I think about whether to obey norm x, I will think about whether it's worth thinking about it and if both are true, I will obey norm x"? 
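In case it clarifies my question, here is the naive way I would transcribe my paraphrase as code (all the names and the always-True checks are stand-ins I made up, not anything from the post):

```python
# Made-up stand-ins for the two checks; nothing here is from the post.
def worth_thinking_about(norm):
    return True  # stand-in: the outer reflection "checks out"

def obeying_checks_out(norm):
    return True  # stand-in: the inner reflection also "checks out"

def obey(norm):
    return f"obeying {norm}"

def reflect_then_obey(norm):
    """My attempted unrolling of the bullet: first check whether it's
    worth thinking about norm x at all, then check whether obeying it
    checks out, and only if both pass, obey it."""
    if worth_thinking_about(norm) and obeying_checks_out(norm):
        return obey(norm)
    return None
```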
 

bokov · 32

I've been struggling to understand acausal trade and related concepts for a long time. Thank you for a concise and simple explanation that almost gets me there, I think...

Am I roughly correct in the following interpretation of what I think you are saying?

Acausal norms amount to extrapolating the norms of people/aliens/AIs/whatever whom we haven't met yet and know nothing about, other than what can be inferred from the possibility of someday meeting them. If we can identify norms that are likely to generalize to any intelligent being capable of contact and negotiation, and that are not contingent on any specific culture/biology/happenstance, then we can pre-emptively obey those norms to maximize the probability of a good outcome when we do meet these people/aliens/AIs/whatever?
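Assuming I have that right, a toy sketch of "pre-emptively obey the generalizable norm": pick the norm with the best expected first-contact outcome across hypothetical counterpart types (every type, weight, and payoff here is invented by me):

```python
# Hypothetical counterpart types we might someday meet, with guessed
# probabilities; all numbers are invented for illustration.
WEIGHTS = {"trader": 0.5, "warrior": 0.3, "isolationist": 0.2}
# OUTCOMES[norm][type] = first-contact payoff, given we obeyed the norm.
OUTCOMES = {
    "keep_agreements":   {"trader": 10, "warrior": 2, "isolationist": 0},
    "exploit_strangers": {"trader": -5, "warrior": 1, "isolationist": 0},
}

def expected_outcome(norm):
    """Expected first-contact payoff of having pre-committed to a norm."""
    return sum(WEIGHTS[t] * OUTCOMES[norm][t] for t in WEIGHTS)

# The norm that generalizes across counterpart types wins in expectation.
best_norm = max(OUTCOMES, key=expected_outcome)
print(best_norm)  # keep_agreements
```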
