I have read lots of LW posts on this topic, and everyone seems to take this for granted without giving a proper explanation. So if anyone could explain this to me, I would appreciate that.

This is a simple question that is in need of a simple answer. Please don't link to pages and pages of theorycrafting. Thank you.

 

Edit: Since posting this, I have come to the conclusion that CDT doesn't actually play Newcomb. Here's a disagreement with that statement:

If you write up a CDT algorithm and then put it into a Newcomb's problem simulator, it will do something. It's playing the game; maybe not well, but it's playing.

And here's my response:

The thing is, an actual Newcomb simulator can't possibly exist because Omega doesn't exist. There are tons of workarounds, like using coin tosses as a substitute for Omega and ignoring the results whenever the coin was wrong, but that is something fundamentally different from Newcomb.

You can only simulate Newcomb in theory, and it is perfectly possible to simply not play a theoretical game if you reject the theory it is based on. In theoretical Newcomb, CDT doesn't care about the rule that Omega is always right, so CDT does not play Newcomb.

If you're trying to simulate Newcomb in reality by replacing Omega with someone who has merely been proven right empirically, you replace Newcomb with a problem that consists of little more than a simple calculation of priors and payoffs, and that's hardly the point here.

 

Edit 2: Clarification regarding backwards causality, which seems to confuse people:

Newcomb assumes that Omega is omniscient, which, more importantly, means that the decision you make right now determines whether Omega has already put money in the box or not. Obviously this is backwards causality, and therefore not possible in real life, which is why Nozick doesn't spend too much ink on it.

But if you rule out the possibility of backwards causality, Omega can only base its prediction of your decision on all your actions up to the point where it has to decide whether to put money in the box or not. In that case, if you take two people who have so far always acted (decided) identically, but one will one-box while the other will two-box, Omega cannot make different predictions for them. And no matter what prediction Omega makes, you don't want to be the one who one-boxes.
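For concreteness, here is a minimal sketch of this argument; the prediction rule and the history format are made-up assumptions, not part of the problem:

```python
# If Omega predicts purely from past actions, two agents with identical pasts
# get identical predictions, and whatever that prediction is, the two-boxer
# walks away with $1,000 more.
def omega_predict(past_actions):
    # Any fixed rule over the observable history will do; this one is invented.
    return "one-box" if all(a == "cooperate" for a in past_actions) else "two-box"

shared_past = ["cooperate", "defect", "cooperate"]   # identical histories
prediction = omega_predict(shared_past)              # hence identical predictions

box_a = 1_000_000 if prediction == "one-box" else 0  # contents fixed by the prediction
print("one-boxer gets:", box_a)
print("two-boxer gets:", box_a + 1_000)
```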

 

Edit 3: Further clarification on the possible problems that could be considered Newcomb:

There are four types of Newcomb problems:

  1. Omniscient Omega (backwards causality) - CDT rejects this case, which cannot exist in reality.
  2. Fallible Omega, but still backwards causality - CDT rejects this case, which cannot exist in reality.
  3. Infallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.
  4. Fallible Omega, no backwards causality - CDT correctly two-boxes. To improve payouts, CDT would have to have decided differently in the past, which is not decision theory anymore.

That's all there is to it.

 

Edit 4: Excerpt from Nozick's "Newcomb's Problem and Two Principles of Choice":

Now, at last, to return to Newcomb's example of the predictor. If one believes, for this case, that there is backwards causality, that your choice causes the money to be there or not, that it causes him to have made the prediction that he made, then there is no problem. One takes only what is in the second box. Or if one believes that the way the predictor works is by looking into the future; he, in some sense, sees what you are doing, and hence is no more likely to be wrong about what you do than someone else who is standing there at the time and watching you, and would normally see you, say, open only one box, then there is no problem. You take only what is in the second box. But suppose we establish or take as given that there is no backwards causality, that what you actually decide to do does not affect what he did in the past, that what you actually decide to do is not part of the explanation of why he made the prediction he made. So let us agree that the predictor works as follows: He observes you sometime before you are faced with the choice, examines you with complicated apparatus, etc., and then uses his theory to predict on the basis of this state you were in, what choice you would make later when faced with the choice. Your deciding to do as you do is not part of the explanation of why he makes the prediction he does, though your being in a certain state earlier, is part of the explanation of why he makes the prediction he does, and why you decide as you do.

I believe that one should take what is in both boxes. I fear that the considerations I have adduced thus far will not convince those proponents of taking only what is in the second box. Furthermore I suspect that an adequate solution to this problem will go much deeper than I have yet gone or shall go in this paper. So I want to pose one question. I assume that it is clear that in the vaccine example, the person should not be convinced by the probability argument, and should choose the dominant action. I assume also that it is clear that in the case of the two brothers, the brother should not be convinced by the probability argument offered. The question I should like to put to proponents of taking only what is in the second box in Newcomb's example (and hence not performing the dominant action) is: what is the difference between Newcomb's example and the other two examples which make the difference between not following the dominance principle, and following it?

[-][anonymous]12y110

CDT acts to physically cause nice things to happen. CDT can't physically cause the contents of the boxes to change, and fails to recognize the non-physical dependence of the box contents on its decision, which is a result of the logical dependence between CDT and Omega's CDT simulation. Since CDT believes its decision can't affect the contents of the boxes, it takes both in order to get any money that's there. Taking both boxes is in fact the correct course of action for the problem CDT thinks it's facing, in which a guy may have randomly decided to leave some money around for it. CDT doesn't think that it will always get the $1 million; it is capable of representing a background probability that Omega did or didn't do something. It just can't factor out a part of that uncertainty, the part that's the same as its uncertainty about what it will do, into a causal relation link that points from the present to the past (or from a timeless platonic computation node to both the present and the CDT sim in the past, as TDT does).

Or, to put it in a different light: people who talked about causal decision theories historically were pretty vague, but basically said that causality was that thing by which you can influence the future but not the past or events outside your light cone. So when we build more formal versions of CDT, we make sure that's how it reasons, and we keep that sense of the word causality.

[This comment is no longer endorsed by its author]

Omniscient Omega doesn't entail backwards causality, it only entails omniscience. If Omega can extrapolate how you would choose boxes from complete information about your present, you're not going to fool it no matter how many times you play the game.

Imagine a machine that sorts red balls from green balls. If you put in a red ball, it spits it out of Terminal A, and if you put in a green ball, it spits it out of Terminal B. If you showed a completely colorblind person how you could predict which terminal a ball would come out of before putting it into the machine, it might look to them like backwards causality, but only forwards causality is involved.

If you know that Omega can predict your actions, you should condition your decisions on the knowledge that Omega will have predicted you correctly.

Humans are predictable enough in real life to make this sort of reasoning salient. For instance, I have a friend who, when I ask her questions such as "you know what happened to me?" or "You know what I think is pretty cool?" or any similarly open ended question, will answer "Monkeys?" as a complete non sequitur, more often than not (it's functionally her way ... (read more)

0Andreas_Giger12y
I agree if you say that a more accurate statement would have been "omniscient Omega entails either backwards causality or the absence of free will." I actually assign a rather high probability to free will not existing; however, discussing decision theory under that assumption is not interesting at all.

Regardless of the issue of free will (which I don't want to discuss because it is obviously getting us nowhere), if Omega makes its prediction solely based on your past, then your past suddenly becomes an inherent part of the problem. This means that two-boxing-You either has a different past than one-boxing-You and therefore plays a different game, or that Omega makes the same prediction for both versions of you, in which case two-boxing-You wins.
2Desrtopa12y
Two-boxing-you is a different you than one-boxing-you. They make different decisions in the same scenario, so something about them must not be the same. Omega doesn't make its decision solely based on your past; it makes the decision based on all information salient to the question. Omega is an omniscient perfect reasoner. If there's anything that will affect your decision, Omega knows about it.

If you know that Omega will correctly predict your actions, then you can draw a decision tree which crosses off the outcomes "I choose to two box and both boxes contain money" and "I choose to one box and the other box contains no money," because you can rule out any outcome that entails Omega having mispredicted you.

Probability is in the mind. The reality is that either one or both boxes already contain money, and you are already going to choose one box or both, in accordance with Omega's prediction. Your role is to run through the algorithm to determine what is the best choice given what you know. And given what you know, one boxing has higher expected returns than two boxing.
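As a minimal sketch of that pruned decision tree (the outcome labels and table layout are just for illustration):

```python
# Full outcome table with the standard payoffs.
outcomes = [
    ("one-box", "predicted one-box", 1_000_000),
    ("one-box", "predicted two-box", 0),
    ("two-box", "predicted one-box", 1_001_000),
    ("two-box", "predicted two-box", 1_000),
]
# Cross off every outcome in which Omega mispredicted the choice.
consistent = [o for o in outcomes if o[1] == f"predicted {o[0]}"]
# Pick the best remaining outcome.
best_action, _, best_payoff = max(consistent, key=lambda o: o[2])
print(best_action, best_payoff)   # one-box 1000000
```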

This is a simple question that is in need of a simple answer.

Because $1,000 is greater than $0, $1,001,000 is greater than $1,000,000 and those are the kind of comparisons that CDT cares about.
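That comparison, spelled out as a minimal sketch (the payoff table uses the standard numbers; the dictionary layout is just for illustration):

```python
# CDT treats the contents of the boxes as a fixed fact about the world and
# compares actions state by state.
PAYOFF = {
    ("one-box", "predicted one-box"): 1_000_000,
    ("one-box", "predicted two-box"): 0,
    ("two-box", "predicted one-box"): 1_001_000,
    ("two-box", "predicted two-box"): 1_000,
}

for state in ("predicted one-box", "predicted two-box"):
    # In either fixed state of the boxes, two-boxing pays exactly $1,000 more.
    assert PAYOFF[("two-box", state)] - PAYOFF[("one-box", state)] == 1_000
```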

Please don't link to pages and pages of theorycrafting. Thank you.

You haven't seemed to respond to the 'simple' thus far and have instead defied it aggressively. That leaves you either reading the theory or staying confused.


If you ask a mathematician to find 0x + 1 for x = 3, they will answer 1. If you then ask the mathematician to find the 10th root of the factorial of the eighth Mersenne prime, multiplied by zero, plus one, they will answer 1. You may protest they didn't actually calculate the eighth Mersenne prime, find its factorial, or calculate the tenth root of that, but you can't deny they gave the right answer.

If you put CDT in a room with a million dollars in Box A and a thousand dollars in Box B (no Omega, just the boxes), and give it the choice of either A or bo... (read more)

0Andreas_Giger12y
You're completely right, except that (assuming I understand you correctly) you're implying CDT only thinks it's playing "room with money", while in reality it would be playing Newcomb. And that's the issue; in reality Newcomb cannot exist, and if in theory you think you're playing something, you are playing it. Does that make sense?
5shokwave12y
Perfect sense. Theorising that CDT would lose because it's playing a different game is uninteresting as a thought experiment; if I theorise that any decision theory is playing a different game, it will also lose. This is not a property of CDT but of the hypothetical.

Let's turn to the case of playing in reality, as it's the interesting one. If you grant that Newcomb paradoxes might exist in reality, then there is a real problem: CDT can't distinguish between free-money boxes and Newcomb paradoxes, so when it encounters a Newcomb situation it underperforms. If you claim Newcomb cannot exist in reality, then this is not a problem with CDT. I (and hopefully others, though I shan't speak for them) would accept that this is not a problem with CDT if it is shown that Newcomb's is not possible in real life - but we are arguing against you here because we think Newcomb is possible. (Okay, I did speak for them.)

I disagree on two points: one, I think a simulator is possible (that is, Omega's impossibility comes from other powers we've given it; we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction), and two, I don't think the priors-and-payoffs approach to an empirical predictor is correct (for game-theoretic reasons which I can explicate if you'd like, but if it's not the point of contention it would only distract).
-4Andreas_Giger12y
No, CDT can in fact distinguish very well. It always concludes that the money is there, and it is always right, because it never encounters Newcomb.

To clarify: You are talking about actual Newcomb with an omniscient being, yes? Because in that case, I think several posters have already stated they deem this impossible, and Nozick agrees. If you're talking about empirical Newcomb, that certainly is possible, but it is impossible to do better than CDT without choosing differently in other situations, because if you've acted like CDT in the past, Omega is going to assume you are CDT, even if you're not.

I agree on the "we can remove those powers and weaken Omega to a fits-in-reality definition without losing prediction" part, but this will change what the "correct" answer is. For example, you could substitute Omega with a coin toss and repeat the game if Omega is wrong. This is still a one-time problem, because Omega is a coin and therefore has no memory, but CDT, which would two-box in empirical Newcomb, one-boxes in this case and takes the $1,000,000.

I don't think this is the point of contention, but after we've settled that, I would be interested in hearing your line of thought on this.
5Emile12y
How about the version where agents are computer programs, and Omega runs a simulation of the agent facing the choice, observes its behavior, and fills the boxes accordingly? I see no violation of causality in that version.
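A rough sketch of that setup, assuming deterministic agents written as ordinary functions (all names here are illustrative, not any particular formalism):

```python
# Omega predicts by running a copy of the agent's own code, then fills the
# boxes based on that simulated choice; only forwards causality is involved.
def omega_fill_boxes(agent):
    predicted = agent()                 # run the simulated copy
    box_a = 1_000_000 if predicted == "one-box" else 0
    box_b = 1_000
    return box_a, box_b

def play_newcomb(agent):
    box_a, box_b = omega_fill_boxes(agent)
    choice = agent()                    # the real run faces already-filled boxes
    return box_a if choice == "one-box" else box_a + box_b

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"
print(play_newcomb(one_boxer))          # 1000000
print(play_newcomb(two_boxer))          # 1000
```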

This is a simple question that is in need of a simple answer.

And this is an Open Thread, which exists precisely for this kind of question.

[-]TimS12y40

Newcomb's problem has sequential steps - that's the key difference between it and problems like Prisoner's Dilemma. By the time the decision-agent is faced with the problem, the first step (where Omega examines you and decides how to seed the box) is already done. Absent time travel, nothing the agent does now will affect the contents of the boxes.

Consider the idea of the hostage exchange - the inherent leverage is in favor of the person who receives what they want first. It takes fairly sophisticated analysis to decide that what happened before should aff... (read more)

-1Andreas_Giger12y
But Omega figuring out your decision is time travel. That's the whole point of Newcomb, and why you need a "timeless" decision theory to one-box. As soon as you're talking about reality (hostages, empirical evidence, no time travel, ...) you're talking about weak Newcomb, which is not an issue here. Also, Newcomb becomes a very different problem if you repeat it, similar to PD.
2TimS12y
Newcomb's problem is not particularly interesting if one assumes the mechanism is time travel. If Omega really (1) wants to reduce the amount it spends and (2) can send information backward in time (i.e. time travel), no decision theory can do well. The fact that Eliezer's proposed decision theory is called "timeless" doesn't actually mean anything - and it hasn't really been formalized anyway. In short, try thinking about the problem with time travel excluded. What insights there are to gain from the problem are most accessible from that perspective.
-4Andreas_Giger12y
This statement is clearly false. Any decision theory that gives time-travelling Omega enough reason to believe that you will one-box will do well. I don't think this is possible without actually one-boxing, though. You can substitute "timeless" with "considering violation of causality, for example time travel"; "timeless" is just shorter. Without time travel, this problem either ceases to exist or becomes a simple calculation.
2Randaly12y
No; Timeless Decision Theory does not violate causality. It is not a physical theory that postulates new time-travelling particles or whatever; almost all of its advocates believe in full determinism, in fact. (Counterfactual mugging is an equivalent problem.) Newcomb's Problem has never included time travel; every standard issue arises in the standard, non-time-travel version. In particular, if one allows for backward causation (i.e. for one's decision to causally affect what's in the box), then the problem becomes trivial.
0Andreas_Giger12y
I didn't say (or mean) that it violated causality. I meant it assigned a probability p>0 to violation of causality being possible. I may be wrong on this, since I only read enough about TDT to infer that it isn't interesting or relevant to me. Actual Newcomb includes an omniscient being, and omniscience is impossible without time travel / violation of causality. If you say that Omega makes its prediction purely based on the past, Newcomb becomes trivial as well.
8wedrifid12y
It intrinsically says nothing about causality violation. All the "zero is not a probability" and lack-of-infinite-certainty issues are independent of the decision theory; the decision theory just works with whatever your map contains.
2Randaly12y
Actual Newcomb doesn't include an omniscient being; I quote from Wikipedia: Except that this is false, so nevermind. Also, actual knowledge of everything aside from the Predictor is possible without time travel. It's impossible in practice, but this is a thought experiment. You "just" need to specify the starting position of the system, and the laws operating on it.
-1Andreas_Giger12y
Well, the German Wikipedia says something entirely different, so may I suggest you actually read Nozick? I have posted a paragraph from the paper in question here. Translation from German Wiki: "An omniscient being..." What does this tell us? Exactly, that we shouldn't use Wikipedia as a source.
1Randaly12y
Oops, my apologies.
1wedrifid12y
Omega makes its prediction purely based on the past (and present). That being the case, which decision would you say is trivially correct? Based on what you have said so far, I can't predict which way your decision would go.
-1Andreas_Giger12y
Ruling out backwards causality, I would two-box, and I would get $1000 unless Omega made a mistake. No, I wouldn't rather be someone who one-boxes in Newcomb, because if Omega makes its predictions based on the past, one-boxing would only lead to me losing $1000, because Newcomb is a one-time problem. I would have to choose differently in other decisions for Omega to change its prediction, and that is something I'm not willing to do.

Of course, if I'm allowed to communicate with Omega, I would try to convince it that I'll be one-boxing (while still two-boxing), and if I can increase the probability of Omega predicting that I'll one-box enough to justify actually precommitting to one-boxing (by use of a lie detector or whatever), then I would do that. However, in reality I would probably get some satisfaction out of proving Omega wrong, so the payoff matrix may not be that simple. I don't think this is in any way relevant to the theoretical problem, though.
[-]see12y40

CDT calculates it this way: At the point of decision, either the million-dollar box has a million or it doesn't, and your decision now can't change that. Therefore, if you two-box, you always come out ahead by $1,000 over one-boxing.

-1Andreas_Giger12y
So what you're saying is that CDT refuses the whole setup and then proceeds to solve a completely different problem, correct?
3see12y
Well, Nozick's formulation in 1969, which popularized the problem in philosophy, went ahead and specified that "what you actually decide to do is not part of the explanation of why he made the prediction he made". Which means smuggling in a theory of unidirectional causality into the very setup itself, which explains how it winds up called "Newcomb's Paradox" instead of Newcomb's Problem.
2wedrifid12y
No.
0fubarobfusco12y
No, it's just not aware that it could be running inside Omega's head.
0drethelin12y
Another way of putting it is that CDT doesn't model entities as modeling it.

I am not sure what you mean by "substitute Newcomb with a problem that consists of little more than simple calculation of priors and payoffs". If you mean that the decision algorithm should choose the option correlated with the highest payoffs, then that's Evidential Decision Theory, and it fails on other problems - e.g. the Smoking Lesion.

0Andreas_Giger12y
If Omega makes its prediction based on the past instead of the future, CDT two-boxes and gets $1,000. However, that is a result not of the decision CDT is making, but of the decisions it has made in the past. If Omega plays this game with e.g. TDT, and you substitute TDT with CDT without Omega noticing, CDT two-boxes and takes $1,001,000. Vice versa, if you substitute CDT with TDT, it gets nothing. If Omega makes its prediction based on the future, CDT assigns a probability of 0 to being in that situation, which is correct, since this is purely theoretical.

Here is my take on the whole thing, fwiw.

The issue is assigning probability to the outcome (Omega predicted the player one-boxing whereas the player two-boxed), as it is the only one where two-boxing wins. Obviously any decision theorist who assigns a non-zero probability to this outcome hasn't read the problem statement carefully enough, specifically the part that says that Omega is a perfect predictor.

EDT calculates the expected utility by adding, for all outcomes (probability of outcome given specific action)*payoff of the outcome. In the Newcomb case the con... (read more)
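A minimal sketch of the expected-utility sum being described, assuming a perfect predictor so that the conditional probability of a correct prediction given either action is 1 (the payoff names are illustrative):

```python
payoff = {("one-box", "full"): 1_000_000, ("one-box", "empty"): 0,
          ("two-box", "full"): 1_001_000, ("two-box", "empty"): 1_000}

def edt_expected_utility(action):
    # P(box A is full | action), assuming a perfect predictor.
    p_full = 1.0 if action == "one-box" else 0.0
    return p_full * payoff[(action, "full")] + (1 - p_full) * payoff[(action, "empty")]

print(edt_expected_utility("one-box"))   # 1000000.0
print(edt_expected_utility("two-box"))   # 1000.0
```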


It seems like you hadn't understood what's going on in this problem until very recently, when people explained it to you, and now you've come up with an answer to the problem that most people familiar with the subject material are objecting to.

How high is the prior for your hypothesis, that your posterior is still high after so much evidence pointing the other way?

[-]TrE12y00

What, exactly, is your goal in this conversation? What would an explanation of why CDT two-boxes have to look like in order for you to accept it?

0Andreas_Giger12y
We've already established that some of the disagreement comes from whether Newcomb includes backwards causality or not, with most posters agreeing that Newcomb including backwards causality is not realistic or interesting (see the excerpt from Nozick that I edited into my top-level post) and the focus instead shifting onto weak (empirical) Newcomb, where Omega makes its predictions without looking into the future.

Right now, most posters also seem to be of the opinion that the answer to Newcomb is not to just one-box, but to precommit to one-boxing before Omega can make its decision, for example by choosing a different decision theory before encountering Newcomb. I argued that this ("meta-Newcomb") is a problem fundamentally different from both Newcomb and weak Newcomb.

The question of whether a CDT agent should change strategies (precommit) in meta-Newcomb seems to depend on whether such a strategy can be proven to never perform worse than CDT in non-Newcomb problems. The last sentence is my personal assessment; the rest should be general consensus by now.

You don't need to perfectly simulate Omega to play Newcomb. I am not Omega, but I bet that if I had lots of money and decided to entertain my friends with a game of Newcomb's boxes, I would be able to predict their actions with better than 50.1% accuracy.

Clearly CDT (assuming for the sake of the argument that I'm friends with CDT) doesn't care about my prediction skills, and two-boxes anyway, earning a guaranteed $1000 and a 49.9% chance of a million, for a total of $500K in expectation.

On the other hand, if one of my friends one-boxes, then he gets a 50.1% chance of a million, for a total of $501K in expectation.

Not quite as dramatic a difference, but it's there.
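For what it's worth, the arithmetic checks out; a quick sketch with the same numbers:

```python
# Expected payoffs against a predictor that is right 50.1% of the time.
accuracy = 0.501

ev_two_box = 1_000 + (1 - accuracy) * 1_000_000   # $1,000 plus the million only if mispredicted
ev_one_box = accuracy * 1_000_000                 # the million only if predicted correctly

print(round(ev_two_box))   # 500000  (~$500K)
print(round(ev_one_box))   # 501000  (~$501K)
```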

0Andreas_Giger12y
It's not a question of whether Omega is fallible or not; it's a question of whether Omega's prediction (no matter how incorrect) depends on the decision you are going to make (backwards causality) or only on decisions you have made in the past (no backwards causality). The first case is uninteresting since it cannot occur in reality, and in the second case it is always better to two-box, no matter the payouts or the probability of Omega being wrong:

  * If Omega is 100% sure you're one-boxing, you should two-box.
  * If Omega is 75% sure you're one-boxing, you should two-box.
  * If Omega is 50% sure you're one-boxing, you should two-box.
  * If Omega is 25% sure you're one-boxing, you should two-box.
  * If Omega is 0% sure you're one-boxing, you should two-box.
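Here is a sketch of what that list amounts to, assuming Omega's credence is fixed independently of the actual choice, which is the premise of the no-backwards-causality case:

```python
# If Omega's credence p that you one-box does not depend on your actual choice,
# two-boxing always adds exactly $1,000 in expectation.
def expected_value(action, p_predicted_one_box):
    box_a = p_predicted_one_box * 1_000_000        # expected contents of box A
    return box_a + (1_000 if action == "two-box" else 0)

for p in (1.0, 0.75, 0.5, 0.25, 0.0):
    assert expected_value("two-box", p) - expected_value("one-box", p) == 1_000
```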
3Kindly12y
What if Omega makes an identical copy of you, puts the copy in an identical situation, and uses the copy's decision to predict what you will do? Is "whatever I decide to do, my copy will have decided the same thing" a valid argument?
0Andreas_Giger12y
No, because if Omega tells you that, then you have information that your copy doesn't, which means that it's not an identical situation; and if Omega doesn't tell you, then you might just as well be the copy itself, meaning that either you can't be predicted or you're not playing Newcomb. If Omega tells both of you the same thing, it lies to one of you; and in that case you're not playing Newcomb either.
1Kindly12y
Could you elaborate on this? That's certainly the situation I have in mind (although certainly Omega can tell both of you "I have made a copy of the person that walked into this room to simulate; you are either the copy or the original" or something to that effect). But I don't see how either one of "you can't be predicted or you're not playing Newcomb" makes sense.
0Andreas_Giger12y
If you're the copy that Omega bases its prediction of the other copy on, how does Omega predict you?
-1wedrifid12y
Unless you like money, in which case you should one box.
-2Andreas_Giger12y
If Omega is 100% sure you're one-boxing, you can one-box and get $1,000,000 or you can two-box and get $1,001,000. You cannot make the argument that one-boxing is better in this case unless you argue that your decision affects Omega's prediction, and that would be backwards causality. If you think backwards causality is a possibility, that's fine and you should one-box; but then you still have to agree that under the assumption that backwards causality cannot exist, two-boxing wins.
2wedrifid12y
Backwards causality cannot exist. I still take one box. I get the money. You don't. Your reasoning fails. On a related note: The universe is (as far as I know) entirely deterministic. I still have free will.
0Vladimir_Nesov12y
It's not completely clear what "backward causality" (or any causality, outside the typical contexts) means, so maybe it can exist. Better to either ignore the concept in this context (as it doesn't seem relevant) or taboo/clarify it.
-2wedrifid12y
The meaning of what Andreas was saying was sufficiently clear. He means "you know, stuff like flipping time travel and changing the goddamn past". Trying to taboo causality and sending everyone off to read Pearl would be a distraction. Possibly a more interesting distraction than another "CDT one boxes! Oh, um.... wait... No, Newcomb's doesn't exist. Err... I mean CDT two boxes and it is right to do so, so there!" conversation, but not an overwhelmingly relevant one.
0Vladimir_Nesov12y
We are in a certain sense talking about determining the past; the distinction is between shared structure (as in, the predictor has your source code) and time machines. The main problem seems to be unwillingness to carefully consider the meaning of implausible hypotheticals, and continued distraction by the object-level dispute doesn't seem to help. (The "changing" vs. "determining" point should probably be discussed in the context of the future, where implausibility and fiction are less of a distraction.)
0Andreas_Giger12y
If backwards causality cannot exist, would you say that your decision can affect the prediction that Omega made before you made your decision?
-1wedrifid12y
No. Both the prediction and my decision came about due to past states of the universe (including my brain). They do not influence each other directly. I still take one box and get $1,000,000, and that is the best possible outcome I can expect.

Because academic decision theorists say that CDT two boxes. A real causal decision theorist would, of course, one box. But the causal decision theorists in academic decision theorists' heads two box, and when people talk about causal decision theory, they're generally talking about the version of causal decision theory that is in academics' heads. This needn't make any logical sense.

9wedrifid12y
Only in the "No True Scottsman" sense. What Will calls CDT is an interesting decision theory and from what little I've seen of Will talking about it it may also be a superior decision theory, but it doesn't correspond to the decision theory called CDT. The version of Causal Decision Theory that is in academics' heads is CDT, the one that is in Will's head needs a new name.
1Will_Newsome12y
(Fair enough. My only real problem with causal decision theory being called causal decision theory is that at best it's a strange use of the word "causal", breaking with thousands of years of reasonable philosophical tradition. That's my impression anyway—but there's like a billion papers on Newcomb's problem, and maybe one of them gives a perfectly valid explanation of the terminology.)
1wedrifid12y
I'm not familiar with the philosophical tradition that would be incompatible with the way CDT uses 'causality'. It quite possibly exists and my lack of respect for philosophical tradition leaves me ignorant of such.
0JonathanLivengood12y
From my perspective, it's a shame that you have little regard for philosophical tradition. But as someone who is intimately familiar with the philosophical literature on causation, it seems to me that the sense of "causal" in causal decision theory, while imprecise, is perfectly compatible with most traditional approaches. I don't see any reason to think the "causal" in "causal decision theory" is incompatible with regularity theories, probabilistic theories, counterfactual theories, conserved quantity theories, agency/manipulation/intervention theories, primitivism, power theories, or mechanism theories. It might be a tense relation between CDT and projectivist theories, but I suspect that even there, you will not find outright incompatibility. For a nice paper in the overlap between decision theory and the philosophy of causation and causal inference, you might take a look at the paper Conditioning and Intervening (pdf) by Meek and Glymour if you haven't seen it already. Of course, Glymour's account of causation is not very different from Pearl's, so maybe you don't think of this as philosophy.
0wedrifid12y
That was my impression (without sufficient confidence that I wished to outright contradict on facts.)