All of Caspar42's Comments + Replies

Extracting Money from Causal Decision Theorists

Sorry for taking some time to reply!

>You might wonder why am I spouting a bunch of wrong things in an unsuccessful attempt to attack your paper.

Nah, I'm a frequent spouter of wrong things myself, so I'm not too surprised when other people make errors, especially when the stakes are low, etc.

Re 1,2: I guess a lot of this comes down to convention. People have found that one can productively discuss these things without always giving the formal models (in part because people in the field know how to translate everything into formal models). That said, if y... (read more)

How to formalize predictors

As I mentioned elsewhere, I don't really understand...

>I think (1) is a poor formalization, because the game tree becomes unreasonably huge

What game tree? Why represent these decision problems as any kind of trees or game trees in particular? At least some problems of this type can be represented efficiently, using various methods to represent functions on the unit simplex (including decision trees)... Also: Is this decision-theoretically relevant? That is, are you saying that a good decision theory doesn't have to deal with (1) because it is cumbersome to wr... (read more)

2cousin_it3moI guess I just like game theory. "Alice chooses a box and Bob predicts her action" can be viewed as a game with Alice and Bob as players, or with only Alice as player and Bob as the shape of the game tree, but in any case it seems that option (2) from the post leads to games where solutions/equilibria always exist, while (1) doesn't. Also see my other comment about amnesia, it's basically the same argument. It's fine if it's not a strong argument for you.
Extracting Money from Causal Decision Theorists

On the more philosophical points. My position is perhaps similar to Daniel K's. But anyway...

Of course, I agree that problems that punish the agent for using a particular theory (or using float multiplication or feeling a little wistful or stuff like that) are "unfair"/"don't lead to interesting theory". (Perhaps more precisely, I don't think our theory needs to give algorithms that perform optimally in such problems in the way I want my theory to "perform optimally" in Newcomb's problem. Maybe we should still expect our theory to say something about them, in... (read more)

Extracting Money from Causal Decision Theorists

Let's start with the technical question:

>Can your argument be extended to this case?

No, I don't think so. Consider the following class of problems: the agent can pick any distribution over actions, and the final payoff is determined only as a function of the implemented action and some finite number of samples generated by Omega from that distribution. Note that the expectation is continuous in the distribution chosen. It can therefore be shown (using e.g. Kakutani's fixed-point theorem) that there is always at least one ratifiable distribution. See Theorem 3 at https://... (read more)
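To make this concrete, here is a minimal sketch (my own toy code, not the paper's proof) for the single-sample version of the adversarial offer: the seller's prediction is one sample from the buyer's chosen distribution over {buy B1, buy B2, don't buy}, and she fills the box she does not predict. A grid search finds the ratifiable distribution, i.e. one under which every action in its support is causally optimal given that distribution:

```python
import numpy as np

def evs(p):
    """Causal expected payoffs of the three pure actions, given that the
    seller's prediction is an independent sample from p = (p1, p2, p_no)."""
    p1, p2, _ = p
    return np.array([3 * (1 - p1) - 1,   # buy B1: pay 1, win 3 unless predicted
                     3 * (1 - p2) - 1,   # buy B2
                     0.0])               # don't buy

def is_ratifiable(p, tol=1e-9):
    u = evs(p)
    # ratifiable: every action the agent actually plays is optimal given p itself
    return all(u[i] >= u.max() - tol for i in range(3) if p[i] > tol)

grid = [np.array([i, j, 20 - i - j]) / 20 for i in range(21) for j in range(21 - i)]
print([tuple(map(float, p)) for p in grid if is_ratifiable(p)])
# -> [(0.5, 0.5, 0.0)]: mixing evenly over the two boxes is the (here unique)
#    ratifiable distribution, as guaranteed by the fixed-point argument.
```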

2cousin_it3moThanks! That's what I wanted to know. Will reply to the philosophical stuff in the comments to the other post.
Extracting Money from Causal Decision Theorists

Excellent - we should ask THEM about it.

Yes, that's the plan.

Some papers that express support for CDT:

In case you just ... (read more)

Extracting Money from Causal Decision Theorists

Note that while people on this forum mostly reject orthodox, two-boxing CDT, many academic philosophers favor CDT. I doubt that they would view this problem as out of CDT's scope, since it's pretty similar to Newcomb's problem.

How does this CDT agent reconcile a belief that the seller's prediction likelihood is different from the buyer's success likelihood?

Good question!

0Dagon3moExcellent - we should ask THEM about it. Please provide a few links to recent (say, 10 years - not textbooks written long ago) papers or even blog posts that defends CDT and/or advocates 2-boxing in this (or other Newcomb-like) scenarios.
Extracting Money from Causal Decision Theorists

I agree with both of Daniel Kokotajlo's points (both of which we also make in the paper in Sections IV.1 and IV.2): Certainly for humans it's normal to not be able to randomize; and even if it was a primarily hypothetical situation without any obvious practical application, I'd still be interested in knowing how to deal with the absence of the ability to randomize.

Besides, as noted in my other comment, insisting on the ability to randomize doesn't get you that far (cf. Sections IV.1 and IV.4 on Ratificationism): even if you always have access to some nuclea... (read more)

Extracting Money from Causal Decision Theorists

I think some people may have their pet theories which they call CDT and which require randomization. But CDT as it is usually/traditionally described doesn't ever insist on randomizing (unless randomizing has a positive causal effect). In this particular case, even if a randomization device were made available, CDT would either uniquely favor one of the boxes or be indifferent between all distributions over the available options. Compare Section IV.1 of the paper.

What you're referring to are probably so-called ratificationist variants of C... (read more)

7cousin_it3moThis is a bit unsatisfying, because in my view of decision theory you don't get to predict things like "the agent will randomize" or "the agent will take one box but feel a little wistful about it" and so on. This is unfair, in the same way as predicting that "the agent will use UDT" and punishing for it is unfair. No, you just predict the agent's output. Or if the agent can randomize, you can sample (as many times as you like, but finitely many) from the distribution of the agent's output. A bit more on this here [https://www.lesswrong.com/posts/YZTtYmNX86EKWdApq/how-to-formalize-predictors], though the post got little attention. Can your argument be extended to this case?
Extracting Money from Causal Decision Theorists

Yeah, basically standard game theory doesn't really have anything to say about the scenarios of the paper, because they don't fit the usual game-theoretical models.

By the way, the paper has some discussion of what happens if you insist on having access to an unpredictable randomization device; see Section IV.1 and the discussion of Ratificationism in Section IV.4. (The latter may be of particular interest because Ratificationism is somewhat similar to Nash equilibrium. Unfortunately, the section doesn't explain Ratificationism in detail.)

Extracting Money from Causal Decision Theorists

>I think information "seller's prediction is accurate with probability 0,75" is supposed to be common knowledge.

Yes, correct!

>Is it even possible for a non-trivial probabilistic prediction to be a common knowledge? Like, not as in some real-life situation, but as in this condition not being logical contradiction? I am not a specialist on this subject, but it looks like a logical contradiction. And you can prove absolutely anything if your premise contains contradiction.

Why would it be a logical contradiction? Do you think Newcomb's problem also requi... (read more)

1AVoropaev3moYes, you are right. Sorry. Okay, it probably isn't a contradiction, because the situation "Buyer writes his decision and it is common knowledge that an hour later Seller sneaks a peek into this decision (with probability 0.75) or into a random false decision (0.25). After that Seller places money according to the decision he saw." seems similar enough and can probably be formalized into a model of this situation. You might wonder why am I spouting a bunch of wrong things in an unsuccessful attempt to attack your paper. I do that because it looks really suspicious to me for the following reasons:
1. You don't use language developed by logicians to avoid mistakes and paradoxes in similar situations.
2. Even for something written in more or less basic English, your paper doesn't seem to be rigorous enough for the kinds of problems it tries to tackle. For example, you don't specify what exactly is considered common knowledge, and that can probably be really important.
3. Your result looks similar to something you will try to prove as a stepping stone to proving that this whole situation with boxes is impossible. "It follows that in this situation two perfectly rational agents with the same information would make different deterministic decisions. Thus we arrived at contradiction and this situation is impossible." In your paper agents are rational in different ways (I think), but it still looks similar enough for me to become suspicious.
So, while my previous attempts at finding error in your paper failed pathetically, I'm still suspicious, so I'll give it another shot. When you argue that Buyer should buy one of the boxes, you assume that Buyer knows the probabilities that Seller assigned to Buyer's actions. Are those probabilities also a part of common knowledge? How is that possible? If you try to do the same in Newcomb's problem, you will get something like "Omniscient predictor predicts that player will pick the box A
Extracting Money from Causal Decision Theorists

>Then the CDTheorist reasons:

>(1-0.75) = .25

>.25*3 = .75

>.75 - 1 = -.25

>'Therefore I should not buy a box - I expect to lose (expected) money by doing so.'

Well, that's not how CDT as it is typically specified reasons about this decision. The expected value 0.25*3=0.75 is the EDT expected amount of money in box $B_i$, for both $i=1$ and $i=2$. That is, it is the expected content of box $B_i$, conditional on taking $B_i$. But when CDT assigns an expected utility to taking box $B_i$, it doesn't condition on taking... (read more)
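To spell out the two calculations side by side, here is a minimal sketch (my own illustrative code; the prior q1, q2 over the seller's prediction is an assumption the CDT agent has to supply):

```python
PRICE, PRIZE, ACC = 1.0, 3.0, 0.75   # box price, prize, seller's accuracy

# Beliefs about what the seller predicted: "buyer takes B1" with prob q1,
# "buyer takes B2" with prob q2 (q1 + q2 <= 1). She fills the box she does
# NOT predict the buyer to take.
q1, q2 = 0.5, 0.5                    # assumed prior; any q1 + q2 <= 1 works

# CDT: the contents are causally fixed, so use the unconditional probability
# that the box is filled. At least one of q1, q2 is <= 0.5, so at least one
# of these is >= 0.5 * 3 - 1 = 0.5 > 0, and CDT buys.
cdt_ev_b1 = (1 - q1) * PRIZE - PRICE
cdt_ev_b2 = (1 - q2) * PRIZE - PRICE

# EDT: condition the contents on the act; by the accuracy assumption the box
# you buy is empty with probability 0.75, whatever q1 and q2 are.
edt_ev_buy = (1 - ACC) * PRIZE - PRICE

print(cdt_ev_b1, cdt_ev_b2, edt_ev_buy)   # 0.5 0.5 -0.25
```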

Extracting Money from Causal Decision Theorists

>If I win I get $6. If I lose, I get $5.

I assume you meant to write: "If I lose, I lose $5."

Yes, these are basically equivalent. (I even mention rock-paper-scissors bots in a footnote.)

Predictors exist: CDT going bonkers... forever

Apologies, I only saw your comment just now! Yes, I agree, CDT never strictly prefers randomizing. So there are agents who abide by CDT and never randomize. As our scenarios show, these agents are exploitable. However, there could also be CDT agents who, when indifferent between some set of actions (and when randomization is not associated with any cost), do randomize (and choose the probability according to some additional theory -- for example, you could have the decision procedure: "follow CDT, but when indifferent between multiple actions, choose a dis... (read more)

In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy

Sorry for taking an eternity to reply (again).

On the first point: Good point! I've now finally fixed the SSA probabilities so that they sum up to 1, which they really should if this is to be a version of EDT.

>prevents coordination between agents making different observations.

Yeah, coordination between different observations is definitely not optimal in this case. But I don't see an EDT way of doing it well. After all, there are cases where given one observation, you prefer one policy and given another observation you favor another policy. So I ... (read more)

Predictors exist: CDT going bonkers... forever

>Caspar Oesterheld and Vince Conitzer are also doing something like this

That paper can be found at https://users.cs.duke.edu/~ocaspar/CDTMoneyPump.pdf . And yes, it is structurally essentially the same as the problem in the post.

3Stuart_Armstrong1yCool! I notice that you assumed there were no independent randomising devices available. But why would the CDT agent ever opt to use a randomising device? Why would it see that as having value?
Pavlov Generalizes

Not super important but maybe worth mentioning in the context of generalizing Pavlov: the strategy Pavlov for the iterated PD can be seen as an extremely shortsighted version of the law of effect (LoE), which basically says: repeat actions that have worked well in the past (in similar situations). Of course, the LoE can be applied in a wide range of settings. For example, in their reinforcement learning textbook, Sutton and Barto write that the LoE underlies all of (model-free) RL.
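For concreteness, a minimal sketch of Pavlov read as a one-step law of effect (win-stay, lose-shift; the payoff numbers are the usual illustrative T=5, R=3, P=1, S=0 and the aspiration level is my own choice):

```python
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ASPIRATION = 2   # "worked well" = payoff above this level

def pavlov(my_last, their_last):
    if my_last is None:                            # first round: cooperate
        return "C"
    if PAYOFF[(my_last, their_last)] > ASPIRATION:
        return my_last                             # win-stay: repeat what worked
    return "D" if my_last == "C" else "C"          # lose-shift: switch action
```

Unlike a full RL agent, this rule looks back exactly one round and keeps no value estimates, which is the sense in which it is an "extremely shortsighted" law of effect.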

4abramdemski2ySomewhat true, but without further bells and whistles, RL does not replicate the Pavlov strategy in Prisoner's Dilemma, so I think looking at it that way is missing something important about what's going on.
CDT=EDT=UDT

> I tried to understand Caspar’s EDT+SSA but was unable to figure it out. Can someone show how to apply it to an example like the AMD to help illustrate it?

Sorry about that! I'll try to explain it some more. Let's take the original AMD. Here, the agent only faces a single type of choice -- whether to EXIT or CONTINUE. Hence, in place of a full policy we can just condition on the agent's choice when computing our SSA probabilities. Now, when using EDT+SSA, we assign probabilities to being a specific instance in a specific possible history of the world. For example, ... (read more)
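To make the computation concrete, here is the resulting expected utility for the standard AMD payoffs (0 for exiting at the first intersection X, 4 for exiting at the second intersection Y, 1 for continuing twice), written out under my reading of EDT+SSA; since there is only one possible observation, the SSA weights within each history are just uniform over its instances:

$$EU(p) \;=\; (1-p)\cdot 1\cdot 0 \;+\; p(1-p)\left(\tfrac{1}{2}\cdot 4+\tfrac{1}{2}\cdot 4\right) \;+\; p^{2}\left(\tfrac{1}{2}\cdot 1+\tfrac{1}{2}\cdot 1\right) \;=\; 4p(1-p)+p^{2},$$

which is maximized at $p=2/3$, the planning-optimal probability of continuing.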

6Wei_Dai2yThanks, I think I understand now, and made some observations about EDT+SSA [https://www.greaterwrong.com/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy/comment/kuY5LagQKgnuPTPYZ] at the old thread. At this point I'd say this quote from the OP is clearly wrong: In fact UDT1.0 > EDT+SSA > CDT+SIA, because CDT+SIA is not even able to coordinate agents making the same observation [https://www.lesswrong.com/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt#Ya8msDGzRdR8yw4br] , while EDT+SSA can do that but not coordinate agents making different observations [https://www.greaterwrong.com/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy/comment/kuY5LagQKgnuPTPYZ] , and UDT1.0 can (probably) coordinate agents making different observations (but seemingly at least some of them require UDT1.1 [https://www.lesswrong.com/posts/g8xh9R7RaNitKtkaa/explicit-optimization-of-global-strategy-fixing-a-bug-in] to coordinate).
Dutch-Booking CDT

>Caspar Oesterheld is working on similar ideas.

For anyone who's interested, Abram here refers to my work with Vincent Conitzer which we write about here.

ETA: This work has now been published in The Philosophical Quarterly.

Reflexive Oracles and superrationality: prisoner's dilemma

My paper "Robust program equilibrium" (published in Theory and Decision) discusses essentially NicerBot (under the name ϵGroundedFairBot) and mentions Jessica's comment in footnote 3. More generally, the paper takes strategies from iterated games and transfers them into programs for the corresponding program game. As one example, tit for tat in the iterated prisoner's dilemma gives rise to NicerBot in the "open-source prisoner's dilemma".

Idea: OpenAI Gym environments where the AI is a part of the environment

I list some relevant discussions of the "anvil problem" etc. here. In particular, Soares and Fallenstein (2014) seem to have implemented an environment in which such problems can be modeled.

Announcement: AI alignment prize winners and next round

For this round I submit the following entries on decision theory:

Robust Program Equilibrium (paper)

The law of effect, randomization and Newcomb’s problem (blog post) (I think James Bell's comment on this post makes an important point.)

A proof that every ex-ante-optimal policy is an EDT+SSA policy in memoryless POMDPs (IAFF comment) (though see my own comment to that comment for a caveat to that result)

2cousin_it3yAccepted
Causal Universes

(RobbBB seems to refer to what philosophers call the B-theory of time, whereas CronoDAS seems to refer to the A-theory of time.)

In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy

Since Briggs [1] shows that EDT+SSA and CDT+SIA are both ex-ante-optimal policies in some class of cases, one might wonder whether the result of this post transfers to EDT+SSA. I.e., in memoryless POMDPs, is every (ex ante) optimal policy also consistent with EDT+SSA in a similar sense? I think it is, as I will try to show below.

Given some existing policy $\pi$, EDT+SSA recommends that upon receiving observation $o$ we should choose an action from $\arg\max_a \sum_{s_1,\ldots,s_n}\sum_{i=1}^n SSA(s_i \text{ in } s_1,\ldots,s_n \mid o, \pi_{o\to a})\,U(s_n)$ (For notational simplicity, I'll assume that poli... (read more)

5Wei_Dai2yI noticed that the sum inside $\arg\max_a \sum_{s_1,\ldots,s_n}\sum_{i=1}^n SSA(s_i \text{ in } s_1,\ldots,s_n \mid o, \pi_{o\to a})\,U(s_n)$ is not actually an expected utility, because the SSA probabilities do not add up to 1 when there is more than one possible observation. The issue is that conditional on making an observation, the probabilities for the trajectories not containing that observation become 0, but the other probabilities are not renormalized. So this seems to be part way between "real" EDT and UDT (which does not set those probabilities to 0 and of course also does not renormalize). This zeroing of probabilities of trajectories not containing the current observation (and renormalizing, if one was to do that) seems at best useless busywork, and at worst prevents coordination between agents making different observations. In this formulation of EDT, such coordination is ruled out in another way, namely by specifying that conditional on $o\to a$, the agent is still sure the rest of $\pi$ is unchanged (i.e., copies of itself receiving other observations keep following $\pi$). If we remove the zeroing/renormalizing and say that the agent ought to have more realistic beliefs conditional on $o\to a$, I think we end up with something close to UDT1.0 (modulo differences in the environment model from the original UDT). (Oh, I ignored the splitting up of probabilities of trajectories into SSA probabilities and then adding them back up again, which may have some intuitive appeal but ends up being just a null operation. Does anyone see a significance to that part?)
1Caspar422yElsewhere [https://www.lesswrong.com/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt#rnRrXZrTzReM93PdH] , I illustrate this result for the absent-minded driver.
9Caspar423yCaveat: The version of EDT provided above only takes dependences between instances of EDT making the same observation into account. Other dependences are possible because different decision situations may be completely "isomorphic"/symmetric even if the observations are different. It turns out that the result is not valid once one takes such dependences into account, as shown by Conitzer [2]. I propose a possible solution in https://casparoesterheld.com/2017/10/22/a-behaviorist-approach-to-building-phenomenological-bridges/ . Roughly speaking, my solution is to identify with all objects in the world that are perfectly correlated with you. However, the underlying motivation is unrelated to Conitzer's example. [2] Vincent Conitzer: A Dutch Book against Sleeping Beauties Who Are Evidential Decision Theorists. Synthese, Volume 192, Issue 9, pp. 2887-2899, October 2015. https://arxiv.org/pdf/1705.03560.pdf
In memoryless Cartesian environments, every UDT policy is a CDT+SIA policy

Caveat: The version of EDT provided above only takes dependences between instances of EDT making the same observation into account. Other dependences are possible because different decision situations may be completely "isomorphic"/symmetric even if the observations are different. It turns out that the result is not valid once one takes such dependences into account, as shown by Conitzer [2]. I propose a possible solution in https://casparoesterheld.com/2017/10/22/a-behaviorist-approach-to-building-phenomenological-bridges/ . Roughly speaking, my solution

... (read more)
Prisoner's dilemma tournament results

I tried to run this with racket and #lang scheme (as well as #lang racket) but didn't get it to work (though I didn't try for very long), perhaps because of backward compatibility issues. This is a bit unfortunate because it makes it harder for people interested in this topic to profit from the results and submitted programs of this tournament. Maybe you or Alex could write a brief description of how one could get the program tournament to run?

The Absent-Minded Driver

I wonder what people here think about the resolution proposed by Schwarz (2014). His analysis is that the divergence from the optimal policy also goes away if one combines EDT with the halfer position a.k.a. the self-sampling assumption, which, as shown by Briggs (2010), appears to be the right anthropic view to combine with EDT, anyway.

A model I use when making plans to reduce AI x-risk

I think this is a good overview, but most of the views proposed here seem contentious and the arguments given in support shouldn't suffice to change the mind of anyone who has thought about these questions for a bit or who is aware of the disagreements about them within the community.

Getting alignment right accounts for most of the variance in whether an AGI system will be positive for humanity.

If your values differ from those of the average human, then this may not be true/relevant. E.g., I would guess that for a utilitarian current average human val... (read more)

4AlexMennen3yAre those probabilities, or weightings for taking a weighted average? And if the latter, what does that even mean?
6Ben Pace3yYeah, that sounds right to me. Most of the value is probably spread between that and breaking out of our simulation, but I haven't put much thought into it. There are other crucial considerations too (e.g. how to deal with an infinite universe). Thanks for pointing out the nuanced ways that what I said was wrong, and I'll reflect more on what true sentiment my intuitions are pointing to (if the sentiment is indeed true at all).
Prediction Markets are Confounded - Implications for the feasibility of Futarchy

The issue with this example (and many similar ones) is that to decide between interventions on a variable X from the outside, EDT needs an additional node representing that outside intervention, whereas Pearl-CDT can simply do(X) without the need for an additional variable. If you do add these variables, then conditioning on that variable is the same as intervening on the thing that the variable intervenes on. (Cf. section 3.2.2 "Interventions as variables" in Pearl's Causality.)
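Here is a toy numerical illustration of that point (the graph and numbers are mine): with a confounder U of X and Y, conditioning on X gives the "wrong" number, but once an explicit intervention node I is added that overrides X's mechanism, conditioning on I reproduces P(Y | do(X)):

```python
from itertools import product

P_U1 = 0.5                                    # confounder: U -> X, U -> Y, X -> Y
def p_x1(u): return 0.8 if u else 0.2         # X's natural mechanism
def p_y1(x, u): return 0.4 * x + 0.5 * u + 0.05

def joint(i):
    """Joint over (u, x, y); i in {'none', 0, 1} is the intervention node."""
    for u, x, y in product([0, 1], repeat=3):
        pu = P_U1 if u else 1 - P_U1
        if i == 'none':
            px = p_x1(u) if x else 1 - p_x1(u)
        else:
            px = 1.0 if x == i else 0.0       # I overrides X's mechanism
        py = p_y1(x, u) if y else 1 - p_y1(x, u)
        yield (u, x, y), pu * px * py

def prob(event, i='none'):
    return sum(pr for w, pr in joint(i) if event(w))

# EDT-style conditioning on X = 1 in the unintervened model:
p_y_given_x1 = prob(lambda w: w[1] == 1 and w[2] == 1) / prob(lambda w: w[1] == 1)
# Conditioning on the intervention node I = 1 (equals P(Y=1 | do(X=1))):
p_y_given_i1 = prob(lambda w: w[2] == 1, i=1)
print(p_y_given_x1, p_y_given_i1)             # 0.85 vs 0.70
```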

Niceness Stealth-Bombing

This advice is very similar to Part 1, ch. 3; Part 3, ch. 5; Part 4, ch. 1 and 6 in Dale Carnegie's classic How to Win Friends and Influence People.

Writing Down Conversations

Another classic on this topic by a community member is Brian Tomasik's Turn Discussions Into Blog Posts.

The expected value of the long-term future

I looked at the version 2017-12-30 10:48:11Z.

Overall, I think it's a nice, systematic overview. Below are some comments.

I should note that I'm not very expert on these things. This is also why the additional literature I mention is mostly weakly related stuff from FRI, the organization I work for. Sorry about that.

An abstract would be nice.

Locators in the citations would be useful, i.e. "Beckstead (2013, sect. XYZ)" instead of just "Beckstead (2013)" when you talk about some specific section of the Beckstead paper. (Cf. sectio... (read more)

Announcing the AI Alignment Prize

You don't mention decision theory in your list of topics, but I guess it doesn't hurt to try.

I have thought a bit about what one might call the "implementation problem of decision theory". Let's say you believe that some theory of rational decision making, e.g., evidential or updateless decision theory, is the right one for an AI to use. How would you design an AI to behave in accordance with such a normative theory? Conversely, if you just go ahead and build a system in some existing framework, how would that AI behave in Newcomb-... (read more)

3cousin_it3yCaspar, thanks for the amazing entry! Acknowledged.
Superintelligence via whole brain emulation

I wrote a summary of Hanson's The Age of Em, in which I focus on the bits of information that may be policy-relevant for effective altruists. For instance, I summarize what Hanson says about em values and also have a section about AI safety.

Intellectual Hipsters and Meta-Contrarianism

Great post, obviously.

You argue that signaling often leads to a distribution of intellectual positions following this pattern: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with simple arguments

I think it's worth noting that the pattern of positions often looks different. For example, there is: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with surprising and even more sophisticated and hard-to-understand arguments

In fact, I think many of yo... (read more)

Are causal decision theorists trying to outsmart conditional probabilities?

I agree that in situations where A only has outgoing arrows, p(s | do(a)) = p(s | a), but this class of situations is not the "Newcomb-like" situations.

What I meant to say is that the situations where A only has outgoing arrows are all not Newcomb-like.

Maybe we just disagree on what "Newcomb-like" means? To me what makes a situation "Newcomb-like" is your decision algorithm influencing the world through something other than your decision (as happens in the Newcomb problem via Omega's prediction). In smoking lesion, this d

... (read more)
Principia Compat. The potential Importance of Multiverse Theory

Yes, the paper is relatively recent, but in May I published a talk on the same topic. I also asked on LW whether someone would be interested in giving feedback a month or so before actually publishing the paper.

Do you think your proof/argument is also relevant for my multiverse-wide superrationality proposal?

0MakoYass4yI watched the talk, and it triggered some thoughts. I have to passionately refute the claim that superrationality is mostly irrelevant on earth. I'm getting the sense that much of what we call morality really is superrationality struggling to understand itself and failing under conditions in which CDT pseudorationality dominates our thinking. We've bought so deeply into this false dichotomy of rational xor decent. We know intuitively that unilateralist violent defection is personally perilous, that committing an act of extreme violence tears one's soul and transports one into a darker world. This isn't some elaborate psychological developmental morph or a manifestation of group selection, to me the clearest explanation of our moral intuitions is that humans' decision theory supports the superrational lemma; that the determinations we make about our agent class will be reflected by our agent class back upon us. We're afraid to kill because we don't want to be killed. Look anywhere where an act of violence is "unthinkable", violating any kind of trust that wouldn't, or couldn't have been offered if it knew we were mechanically capable of violating it, I think you'll find reflectivist[1] decision theory is the simplest explanation for our aversion to violating it. Regarding concrete applications of superrationality; I'm fairly sure that if we didn't have it, voting turnout wouldn't be so high (in places where it is high. The USA's disenfranchisement isn't the norm). There's a large class of situations where the individual's causal contribution is so small as to be unlikely to matter. If they didn't think themselves linked by some platonic thread to their peers, they would have almost no incentive to get off their couch and put their hand in. They turn out because they're afraid that if they don't, the defection behavior will be reflected by the rest of their agent class and (here I'll allude to some more examples of what seems to be applied superrationality) the kic
Are causal decision theorists trying to outsmart conditional probabilities?

So, the class of situations in which p(s | do(a)) = p(s | a) that I was alluding to is the one in which A has only outgoing arrows (or all the values of A’s predecessors are known). (I guess this could be generalized to: p(s | do(a)) = p(s | a) if A d-separates its predecessors from S?) (Presumably this stuff follows from Rule 2 of Theorem 3.4.1 in Causality.)

All problems in which you intervene in an isolated system from the outside are of this kind and so EDT and CDT make the same recommendations for intervening in a system from the outside. (That’s simil... (read more)
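For the special case mentioned above where A has no parents, a short derivation (standard truncated-factorization reasoning, written out here for convenience):

$$P(s \mid do(a)) = \sum_{\,v \text{ consistent with } a,\,s}\;\prod_{V \neq A} P(v \mid pa_V), \qquad P(s, a) = \sum_{\,v \text{ consistent with } a,\,s} P(a \mid pa_A)\prod_{V \neq A} P(v \mid pa_V).$$

If A has no parents, $P(a \mid pa_A) = P(a)$ is a constant factor, so $P(s,a) = P(a)\,P(s \mid do(a))$ and hence $P(s \mid a) = P(s \mid do(a))$.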

1IlyaShpitser4yI agree that in situations where A only has outgoing arrows, p(s | do(a)) = p(s | a), but this class of situations is not the "Newcomb-like" situations. In particular, classical smoking lesion has a confounder with an incoming arrow into a. Maybe we just disagree on what "Newcomb-like" means? To me what makes a situation "Newcomb-like" is your decision algorithm influencing the world through something other than your decision (as happens in the Newcomb problem via Omega's prediction). In smoking lesion, this does not happen, your decision algorithm only influences the world via your action, so it's not "Newcomb-like" to me.
A survey of polls on Newcomb’s problem

I claim that one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)

Note that Omega isn't necessarily a perfect predictor. Most one-boxers would also one-box if Omega is a near-perfect predictor.

Aside from "lizard man", what are the other reasons that lead to two-boxing?

I think I could pass an intellectual Turing test (the main arguments in either direction aren't very sophisticated), but maybe it's easiest to just read, e.g., p. 151ff. of James Joyce's The Foundations of Causal Decision... (read more)

0Dagon4yI wish the polls that started this thread ever included those options. [pollid:1209]
Naturalized induction – a challenge for evidential and causal decision theory

I hadn’t seen these particular discussions, although I was aware of the fact that UDT and other logical decision theories avoid building phenomenological bridges in this way. I also knew that others (e.g., the MIRI people) were aware of this.

I didn't know you preferred a purely evidential variant of UDT. Thanks for the clarification!

As for the differences between LZEDT and UDT:

  • My understanding was that there is no full formal specification of UDT. The counterfactuals seem to be given by some unspecified mathematical intuition module. LZEDT, on the other
... (read more)
Naturalized induction – a challenge for evidential and causal decision theory

Yes, I share the impression that the BPB problem implies some amount of decision theory relativism. That said, one could argue that decision theories cannot be objectively correct, anyway. In most areas, statements can only be justified relative to some foundation. Probability assignments are correct relative to a prior, the truth of theorems depends on axioms, and whether you should take some action depends on your goals (or meta-goals). Priors, axioms, and goals themselves, on the other hand, cannot be justified (unless you have some meta-priors, meta-ax... (read more)

0entirelyuseless4yI agree that any chain of justification will have to come to an end at some point, certainly in practice and presumably in principle. But it does not follow that the thing at the beginning which has no additional justification is not objectively correct or incorrect. The typical realist response in all of these cases, with which I agree, is that your starting point is correct or incorrect by its relationship with reality, not by a relationship to some justification. Of course if it is really your starting point, you will not be able to prove that it is correct or incorrect. That does not mean it is not one or the other, unless you are assuming from the beginning that none of your starting points have any relationship at all with reality. But in that case, it would be equally reasonable to conclude that your starting points are objectively incorrect. Let me give some examples: An axiom: a statement cannot be both true and false in the same way. It does not seem possible to prove this, since if it is open to question, anything you say while trying to prove it, even if you think it true, might also be false. But if this is the way reality actually works, then it is objectively correct even though you cannot prove that it is. Saying that it cannot be objectively correct because you cannot prove it, in this case, seems similar to saying that there is no such thing as reality -- in other words, again, saying that your axioms have no relationship at all to reality. A prior: if there are three possibilities and nothing gives me reason to suspect one more than another, then each has a probability of 1/3. Mathematically it is possible to prove this, but in another sense there is nothing to prove: it really just says that if there are three equal possibilities, they have to be considered as equal possibilities and not as unequal ones. In that sense it is exactly like the above axiom: if reality is the way the axiom says, it is also the way this prior says, even though no on
Naturalized induction – a challenge for evidential and causal decision theory

No, I actually mean that world 2 doesn't exist. In this experiment, the agent believes that either world 1 or world 2 is actual and that they cannot be actual at the same time. So, if the agent thinks that it is in world 1, world 2 doesn't exist.

Are causal decision theorists trying to outsmart conditional probabilities?

(Sorry again for being slow to reply to this one.)

"Note that in non-Newcomb-like situations, P(s|do(a)) and P(s|a) yield the same result, see ch. 3.2.2 of Pearl’s Causality."

This is trivially not true.

Is this because I define "Newcomb-ness" via disagreement about the best action between EDT and CDT in the second paragraph? Of course, the difference between P(s|do(a)) and P(s|a) could be so small that EDT and CDT agree on what action to take. They could even differ in such a way that CDT-EV and EDT-EV are the same.

But it seems that instead of comparing ... (read more)

0IlyaShpitser4yI guess:
(a) p(s | do(a)) is in general not equal to p(s | a). The entire point of causal inference is characterizing that difference.
(b) I looked at section 3.2.2 and did not see anything there supporting the claim.
(c) We have known since the 90s that p(s | do(a)) and p(s | a) disagree on classical decision theory problems, the standard smoking lesion being one -- but in general on any problem where you shouldn't "manage the news." So I got super confused and stopped reading.
As cousin_it said somewhere at some point (and I say in my youtube talk), the confusing part of Newcomb is representing the situation correctly, and that is something you can solve by playing with graphs, essentially.
Naturalized induction – a challenge for evidential and causal decision theory

I apologize for not replying to your earlier comment. I do engage with comments a lot. E.g., I recall that your comment on that post contained a link to a ~1h talk that I watched after reading it. There are many obvious reasons that sometimes cause me not to reply to comments, e.g. if I don't feel like I have anything interesting to say, or if the comment indicates lack of interest in discussion (e.g., your "I am not actually here, but ... Ok, disappearing again"). Anyway, I will reply to your comment now. Sorry again for not doing so earlier.

Naturalized induction – a challenge for evidential and causal decision theory

I just remembered that in Naive TDT, Bayes nets, and counterfactual mugging, Stuart Armstrong made the point that it shouldn't matter whether you are simulated (in a way that you might be the simulation) or just predicted (in such a way that you don't believe that you could be the simulation).

Splitting Decision Theories

Interesting post! :)

I think the process is hard to formalize because specifying step 2 seems to require specifying a decision theory almost directly. Recall that causal decision theorists argue that two-boxing is the right choice in Newcomb’s problem. Similarly, some would argue that not giving the money in counterfactual mugging is the right choice from the perspective of the agent who already knows that it lost, whereas others argue for the opposite. Or take a look at the comments on the Two-Boxing Gene. Generally, the kind of decision problems that put

... (read more)
1somni4yI agree! I think that it is hard for humans working with current syntactic machinery to specify things like:
* what their decision theory will return for every decision problem
* what split(DT_1,DT_2) looks like
Right now I think doing this requires putting all decision theories on a useful shared ontology, the way that UTMs put all computable algorithms on a useful shared ontology, which allowed people to make proofs about algorithms in general. This looks hard and possibly requires creating new kinds of math. I am making the assumption here that the decision theories are rescued [https://arbital.com/p/rescue_utility/] to the point of being executable philosophy [https://arbital.com/p/executable_philosophy/]. DTs need to be specified this much to be run by an AI. I believe that the fuzzy concepts inside people's heads can in principle be made to work mathematically and then run on a computer, in a similar way that the fuzzy concept of "addition" was ported to symbolic representations and then circuits in a pocket calculator.
A survey of polls on Newcomb’s problem

Yeah, I also think the "fooling Omega" idea is a common response. Note however that two-boxing is more common among academic decision theorists, all of whom understand that Newcomb's problem is set up such that you can't fool Omega. I also doubt that the fooling-Omega idea is the only (or even the main) cause of two-boxing among non-decision theorists.

1Dagon4yActually, it would be interesting to break down the list of reasons people might have for two-boxing, even if we haven't polled for reasons, only decisions. From https://en.wikipedia.org/wiki/Newcomb%27s_paradox [https://en.wikipedia.org/wiki/Newcomb%27s_paradox], the outcomes are:
* a: Omega predicts two-box, player two-boxes, payout $1000
* b: Omega predicts two-box, player one-boxes, payout $0
* c: Omega predicts one-box, player two-boxes, payout $1001000
* d: Omega predicts one-box, player one-boxes, payout $1000000
I claim that one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing), and reason that d > a. And further I think that two-boxers believe that all 4 are possible (b and c being "tricking Omega") and reason that c > d and a > b, so two-boxing dominates one-boxing. Aside from "lizard man", what are the other reasons that lead to two-boxing?
Multiverse-wide Cooperation via Correlated Decision Making

Thanks for the comment!

W.r.t. moral reflection: Probably many agents put little intrinsic value on whether society engages in a lot of moral reflection. However, I would guess that, as a whole, the set of agents having a decision mechanism similar to mine does care about this significantly and positively. (Empirically, disvaluing moral reflection seems to be rare.) Hence, (if the basic argument of the paper goes through) I should give some weight to it.

W.r.t. moral pluralism: Probably even fewer agents care about this intrinsically. I certainly don’t care ab... (read more)

Principia Compat. The potential Importance of Multiverse Theory

I recently published a different proposal for implementing acausal trade as humans: https://foundational-research.org/multiverse-wide-cooperation-via-correlated-decision-making/ Basically, if you care about other parts of the universe/multiverse and these parts contain agents that are decision-theoretically similar to you, you can cooperate with them via superrationality. For example, let's say I give most moral weight to utilitarian considerations and care less about, e.g., justice. Probably other parts of the universe contain agents that reason about dec... (read more)

3MakoYass4yAye, I've been meaning to read your paper for a few months now. (Edit: Hah. It dawns on me it's been a little less than a month since it was published? It's been a busy less-than-month for me I guess.) I should probably say where we're at right now... I came up with an outline of a very reductive proof that there isn't enough expected anthropic measure in higher universes to make adhering to Life's Pact profitable (coupled with a realization that patternist continuity of existence isn't meaningful to living things if it's accompanied by a drastic reduction in anthropic measure). Having discovered this proof outline makes compat uninteresting enough to me that writing it down has not thus far seemed worthwhile. Christian is mostly unmoved by what I've told him of it, but I'm not sure whether that's just because his attention is elsewhere right now. I'll try to expound it for you, if you want it.