Cavalcanti (2010) describes another problem with causal decision theory:

I apply some of the lessons from quantum theory, in particular from Bell’s theorem, to a debate on the foundations of decision theory and causation. By tracing a formal analogy between the basic assumptions of causal decision theory (CDT)—which was developed partly in response to Newcomb’s problem—and those of a local hidden variable theory in the context of quantum mechanics, I show that an agent who acts according to CDT and gives any nonzero credence to some possible causal interpretations underlying quantum phenomena should bet against quantum mechanics in some feasible game scenarios involving entangled systems, no matter what evidence they acquire. As a consequence, either the most accepted version of decision theory is wrong, or it provides a practical distinction, in terms of the prescribed behaviour of rational agents, between some metaphysical hypotheses regarding the causal structure underlying quantum mechanics.

 

[anonymous]

I have mixed feelings about this article. On the one hand, its main point is that causal decision theory hasn't been reconciled with quantum mechanics yet. That's hardly new. It does strengthen the case that ignoring quantum effects in a decision theory is a bad idea (in terms of getting Dutch-booked). To a causalist, quantum effects are essentially black swans, after all, and black swans are bad.

Now, they do raise an interesting question: roughly speaking, what kinds of black swans should an agent be "comfortable" with ignoring? The example they consider is having a cup of tea while knowing there's a nonzero probability that doing so will destroy the universe. Their counterarguments are not terribly great -- they offer 1) a "pre-judgement" process in which we determine that no further hypotheses will affect the decision substantially, and 2) a mostly specious argument from symmetry that the probability of not drinking the cup of tea destroying the universe is comparable to the probability that drinking it will have the same effect.

Concerning the first claim, even if there is, for practical reasons, a pre-judgement process, it doesn't (at least in humans) operate this way. I've seen this pre-judgement process alluded to in decision theory papers before, but I don't think it's appreciated how horribly uncomputable such a process would have to be to work as described. At the end of the day, black swans are still a problem, and some proportion of them are existential risks.

Concerning the second claim, there is no ground from which to assume such symmetry. The probability of the first event could be, for all we know, 10^-32, and that of the second 10^-10; or vice versa. So a lack of knowledge about those probabilities doesn't imply that the two are comparable.

It's an interesting paradox. How do you reduce, avoid, or insure against something you can't quantify over?

Concerning the second claim, there is no ground from which to assume such symmetry. The probability of the first event could be, for all we know, 10^-32, and that of the second 10^-10; or vice versa. So a lack of knowledge about those probabilities doesn't imply that the two are comparable.

But if we don't know which one's which, aren't our subjective probabilities of each destroying the world equal anyway?
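A minimal sketch of that point, with made-up numbers (nothing here is from the paper): if we're genuinely ignorant about which act carries which tiny probability, averaging over that ignorance makes the two terms cancel in an expected-utility comparison, even though the underlying probabilities may differ wildly.

```python
# Sketch with made-up numbers: two candidate probabilities, one for "drinking the tea
# destroys the universe" and one for "not drinking it does", but we don't know which is which.
p_small, p_large = 1e-32, 1e-10

# With no information, each assignment gets prior 0.5.
prior = 0.5

# Subjective probability that DRINKING destroys the universe, averaged over our ignorance:
p_drink = prior * p_small + prior * p_large
# Subjective probability that NOT drinking destroys the universe:
p_not_drink = prior * p_large + prior * p_small

print(p_drink == p_not_drink)  # True -- the terms cancel in the expected-utility comparison,
                               # even though the underlying values differ by 22 orders of magnitude.
```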

[anonymous]

I may have misread the original section:

It can be argued against this conclusion that one usually assumes that we are allowed to ignore extremely unlikely hypotheses in our decisions. Consider, say, the hypothesis that having a cup of tea would result in the destruction of the universe. Surely, the argument goes, we don’t need to consider all logically possible hypotheses? My response to this criticism is that we don’t consider all possible hypotheses because we make a pre-judgement that no further hypotheses would change our decisions and that further considerations would only introduce unnecessary complications in the calculations. Most tea drinkers attribute an exceedingly small probability for the destruction of the universe conditional on their drinking tea. But if a tea drinker were to give any appreciable probability to this hypothesis, it would certainly be irrational for them to have that cup of tea.

Further, in a situation like the referee’s example, not only would these kinds of unlikely hypotheses have negligible effects on the decisions but there would also usually be equally arbitrary competing hypotheses pulling the decision the other way: The hypothesis that NOT having a given cup of tea will lead to the destruction of the universe is just as (un)likely as the one that having that cup of tea will do so and precisely cancels the effect of the first.

It does not sound as if the author is assuming an uninformative prior with respect to the universe-destroying capabilities of tea, but that would explain the symmetry argument.

I'm ambivalent about the paper.

On the one hand, I find any writing about Newcomb's Problem distasteful, and that's the central point of this paper. But it seems somewhat less diseased than most writing on it, though Cavalcanti isn't as explicit as he could be about why that's the case.

The gist of it is this: BDT (I think it's typically called EDT here) accepts correlation as causation, whereas CDT will stubbornly reject correlation as causation if the source of the causation is deemed improper. On the CDTer's model, Omega can't predict what they will do -- because he just can't -- and so they'll two-box, never mind the evidence that Omega does.

Cavalcanti explains how you can actually construct this (sort of) with QM: create a game around a sequence of entangled particles, where people who believe the particles are entangled can win a bunch of money and people who refuse to believe it can't. If someone playing the game subscribes to both CDT and a misunderstanding of QM, they'll lose the chance to win money at the game.
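A rough sketch of the kind of game this points at (my own toy illustration with standard CHSH settings, not the paper's exact setup): quantum mechanics predicts a CHSH correlation of 2√2 for entangled photon pairs, while any local-hidden-variable ("properly causal") account caps it at 2, so an agent who insists on the causal bound will take losing bets against the quantum prediction.

```python
import math

# Toy illustration (not the paper's exact game): CHSH correlations for polarization-entangled
# photons in the Bell state (|HH> + |VV>)/sqrt(2); outcomes are +/-1 (pass / blocked).

def E(a, b):
    """Quantum correlation between polarizers set at angles a and b (radians)."""
    return math.cos(2 * (a - b))

# Standard CHSH settings: 0 and 45 degrees for one side, 22.5 and 67.5 for the other.
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8

S_quantum = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
S_local_causal_bound = 2.0  # Bell/CHSH ceiling for any local hidden-variable model

print(f"quantum prediction:  S = {S_quantum:.3f}")  # ~2.828 = 2*sqrt(2)
print(f"local-causal bound:  S <= {S_local_causal_bound}")
# An agent who bets that S can't exceed 2 will, over enough rounds, lose money to one
# who simply accepts the quantum prediction.
```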

This is where things get sticky, though. The simple explanation of the entanglement is that even though there are two particles, there's one wavefunction. If you believe there's one wavefunction, then you'll believe you can win money playing the game. Both BDTers and CDTers can walk away rich (this is explanation 2 in his paper). But when we step to the original Newcomb's Problem, the analogy requires that the firm reality of one wavefunction be replaced by a firm reality of one possibility (Omega predicts with P=1), which isn't quite the case.

So we're back where we started. The BDTer will one-box if Omega predicts their actions with probability P > .5005, and the CDTer will reason that their actions can't change the past. When you throw QM into the mix, there's nothing new to legitimize Omega's predictive ability from the CDTer's point of view.
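For reference, the .5005 figure is just the expected-value crossover assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one):

```python
# Assuming the standard Newcomb payoffs: $1,000,000 in the opaque box, $1,000 in the clear one.
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    """Evidential expected value of one-boxing when Omega predicts correctly with probability p."""
    return p * BIG

def ev_two_box(p):
    """Evidential expected value of two-boxing: the opaque box is full only if Omega mispredicted."""
    return p * SMALL + (1 - p) * (BIG + SMALL)

# One-boxing wins when p*BIG > p*SMALL + (1-p)*(BIG+SMALL), i.e. p > (BIG+SMALL) / (2*BIG).
threshold = (BIG + SMALL) / (2 * BIG)
print(threshold)                             # 0.5005
print(ev_one_box(0.51) > ev_two_box(0.51))   # True -- just above the threshold, one-box wins
```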

Although, it does raise the amusing question: what does Omega do when he predicts that you will use an unentangled quantum RNG to determine whether you'll one-box or two-box?

All physics is time reversible; causes and effects are merely a human abstraction.

Elevate this abstraction to the level of 'the one true model of the world', a model the agent takes as absolutely true without having arrived at it via the evidence, and the agent becomes incompatible with worlds where it doesn't hold -- which incidentally includes our own world, where causality is just an approximation that works for warm systems with many degrees of freedom.

All physics is time reversible; causes and effects are merely a human abstraction.

The differential equations are time-reversible, but the boundary conditions (ridiculously low entropy in the past) aren't.

Yes, and that's why it made sense for us to make this abstraction.

With regard to Bell's theorem, the issue is that even in MWI, which is local, there's still the problem that if you are randomly orienting a polarizer, you either do so based on some quantum-mechanical 'randomness', MWI style, or it was predetermined. You don't have agents deciding with their immaterial souls which way to turn the polarizer. It doesn't work like decision theory, where an agent can decide to turn the polarizer randomly or to set it to a specific angle; there are causes behind the agent's decision. It is very difficult to reason about this stuff.

edit: to expand on that. What happens in the Bell's-theorem experiment (the simplest one, where two photons go into two polarizers and onto detectors) under MWI is that a distribution of photon angles gets emitted and hits the polarizers, and the resulting distribution of detector states has a probability of agreement given by the squared cosine of the angle between the polarizers. Under MWI that happens because, when the observers finally talk to each other, they fork each other, changing the perceived probabilities of events that 'already happened' -- a huge mess in which it is very difficult to show how the probabilities emerge (I'm not sure there is a satisfactory account of that yet, even).

While MWI may be the most compact description, it surely is not the most computationally efficient; it is much more computationally efficient to drop the belief in causality and accept the result that the agreement goes as the squared cosine of the angle between the detectors as a fact you don't derive from some causal explanation -- you just observed it (and duly noted that the angle of one detector has to affect the results at the other detector for there to be such a correlation). And indeed that is how most humans operate.
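To make that concrete (my own toy sketch, not from the paper): a naive "predetermined polarization" model, where each photon pair carries one shared random polarization and each detector clicks independently by Malus's law, predicts a flatter agreement curve than the quantum one -- which is roughly why you can't keep the predetermined-causes picture and still get the observed correlations.

```python
import math
import random

# Toy sketch (my own, not from the paper): agreement probability between two polarization
# detectors as a function of the angle between them, comparing
#   (a) the quantum prediction for the Bell state (|HH> + |VV>)/sqrt(2), and
#   (b) a naive "predetermined causes" model: each pair carries one shared random polarization
#       angle and each detector clicks independently by Malus's law.

def quantum_agreement(delta):
    return math.cos(delta) ** 2

def naive_causal_agreement(delta, trials=100_000):
    agree = 0
    for _ in range(trials):
        lam = random.uniform(0.0, math.pi)                      # shared "real" polarization
        pass_a = random.random() < math.cos(0.0 - lam) ** 2     # Malus's law at detector A (angle 0)
        pass_b = random.random() < math.cos(delta - lam) ** 2   # Malus's law at detector B (angle delta)
        agree += (pass_a == pass_b)
    return agree / trials

for deg in (0, 22.5, 45, 67.5, 90):
    d = math.radians(deg)
    print(f"{deg:5.1f} deg: quantum {quantum_agreement(d):.3f}, naive causal {naive_causal_agreement(d):.3f}")

# At 0 degrees the quantum prediction is perfect agreement (1.0), while this naive model only
# reaches ~0.75; Bell's theorem says no local predetermined-outcome model matches the whole curve.
```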

Causality is something we came up with to make reasoning easier; there's no need to hold onto it when it doesn't make problems easier.

[anonymous]

Poor Omega. Its job got outsourced and replaced with Bob.