See also: Does Evidential Decision Theory really fail Solomon's Problem?, What's Wrong with Evidential Decision Theory?

It seems to me that the examples usually given of decision problems where EDT makes the wrong decisions are really examples of performing Bayesian updates incorrectly. The basic problem seems to be that naive EDT ignores a selection bias when it assumes that an agent who has just performed an action should be treated as a random sample from the population of all agents who have performed that action. Said another way, naive EDT agents make unjustified assumptions about which reference classes they should put themselves into when considering counterfactuals. A more sophisticated Bayesian agent should make neither of these mistakes, and correcting them should not in principle require moving beyond EDT, just applying it less naively.

Elaboration

Recall that an EDT agent attempts to maximize conditional expected utility. The main criticism of EDT is that naively computing conditional probabilities leads to the conclusion that you should perform actions which would be good news to learn about, as opposed to actions which cause good outcomes (which is what CDT attempts to do instead). For a concrete example of the difference, let's take the smoking lesion problem:

Smoking is strongly correlated with lung cancer, but in the world of the Smoker's Lesion this correlation is understood to be the result of a common cause: a genetic lesion that tends to cause both smoking and cancer. Once we fix the presence or absence of the lesion, there is no additional correlation between smoking and cancer.

Suppose you prefer smoking without cancer to not smoking without cancer, and prefer smoking with cancer to not smoking with cancer. Should you smoke?

In the smoking lesion problem, smoking is bad news, but it doesn't cause a bad outcome: learning that someone smokes, in the absence of further information, increases your posterior probability that they have the lesion and therefore cancer, but choosing to smoke cannot in fact alter whether you have the lesion / cancer or not. Naive EDT recommends not smoking, but naive CDT recommends smoking, and in this case it seems that naive CDT's recommendation is correct and naive EDT's recommendation is not. 
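To make the disagreement concrete, here is a minimal numerical sketch (the joint distribution and utilities below are invented purely for illustration): naive EDT scores an action by conditioning on it, as if it were an observation about which population the agent belongs to, while CDT scores it by intervening, leaving the prior over the lesion untouched.

```python
# Smoking lesion, toy numbers: the lesion causes both smoking (in the historical
# population) and cancer; smoking itself has no effect on cancer.
P_lesion = 0.5
P_smoke_given_lesion = {True: 0.9, False: 0.1}   # P(smoke | lesion) among past agents
P_cancer_given_lesion = {True: 0.9, False: 0.1}  # P(cancer | lesion)

def utility(smoke, cancer):
    return (1.0 if smoke else 0.0) + (-10.0 if cancer else 0.0)

def expected_utility_given_lesion(smoke, lesion):
    p_c = P_cancer_given_lesion[lesion]
    return p_c * utility(smoke, True) + (1 - p_c) * utility(smoke, False)

def edt_value(smoke):
    # Treat the action as evidence: first update the probability of the lesion.
    p_smoke = sum((P_lesion if l else 1 - P_lesion) *
                  (P_smoke_given_lesion[l] if smoke else 1 - P_smoke_given_lesion[l])
                  for l in (True, False))
    return sum((P_lesion if l else 1 - P_lesion) *
               (P_smoke_given_lesion[l] if smoke else 1 - P_smoke_given_lesion[l]) / p_smoke *
               expected_utility_given_lesion(smoke, l)
               for l in (True, False))

def cdt_value(smoke):
    # Treat the action as an intervention: the lesion keeps its prior probability.
    return sum((P_lesion if l else 1 - P_lesion) * expected_utility_given_lesion(smoke, l)
               for l in (True, False))

print("EDT:", edt_value(True), edt_value(False))  # -7.2 vs -1.8: EDT says don't smoke
print("CDT:", cdt_value(True), cdt_value(False))  # -4.0 vs -5.0: CDT says smoke
```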

The naive EDT agent's reasoning process involves considering the following counterfactual: "if I observe myself smoking, that increases my posterior probability that I have the lesion and therefore cancer, and that would be bad. Therefore I will not smoke." But it seems to me that in this counterfactual, the naive EDT agent -- who smokes and then glumly concludes that there is an increased probability that they have cancer -- is performing a Bayesian update incorrectly, and that the incorrectness of this Bayesian update, rather than any fundamental problem with making decisions based on conditional probabilities, is what causes the naive EDT agent to perform poorly. 

Here are some other examples of this kind of Bayesian update, all of which seem obviously incorrect to me. They lead to silly decisions because they are silly updates. 

  • "If I observe myself throwing away expensive things, that increases my posterior probability that I am rich and can afford to throw away expensive things, and that would be good. Therefore I will throw away expensive things." (This example requires that you have some uncertainty about your finances -- perhaps you never check your bank statement and never ask your boss what your salary is.)
  • "If I observe myself not showering, that increases my posterior probability that I am clean and do not need to shower, and that would be good. Therefore I will not shower." (This example requires that you have some uncertainty about how clean you are -- perhaps you don't have a sense of smell or a mirror.)
  • "If I observe myself playing video games, that increases my posterior probability that I don't have any work to do, and that would be good. Therefore I will play video games." (This example requires that you have some uncertainty about how much work you have to do -- perhaps you write this information down and then forget it.) 

Selection Bias

Earlier I said that in the absence of further information, learning that someone smokes increases your posterior probability that they have the lesion and therefore cancer in the smoking lesion problem. But when a naive EDT agent is deciding what to do, they have further information: in the counterfactual where they're smoking, they know that they're smoking because they're in a counterfactual about what would happen if they smoked (or something like that). This information should screen off inferences about other possible causes of smoking, which is perhaps clearer in the bulleted examples above. If you consider what would happen if you threw away expensive things, you know that you're doing so because you're considering what would happen if you threw away expensive things and not because you're rich. 

Failure to take this information into account is a kind of selection bias: a naive EDT agent considering the counterfactual in which it performs some action treats itself as a random sample from the population of similar agents who have performed such actions, but it is not in fact such a random sample! The sampling procedure, which consists of actually performing an action, is undoubtedly biased.
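A toy simulation of the selection-bias point (all parameters invented): in a subpopulation whose smoking is driven by the lesion, observing that an agent smokes is strong evidence of the lesion; in a subpopulation whose smoking is decided by a process independent of the lesion (a stand-in for "I am smoking because I am considering this counterfactual"), it is no evidence at all.

```python
import random
random.seed(0)

def simulate(n=200_000, p_lesion=0.5):
    counts = {"lesion-driven": [0, 0], "independent": [0, 0]}  # [smokers, smokers with lesion]
    for _ in range(n):
        lesion = random.random() < p_lesion
        # Half the agents smoke (or not) because the lesion nudges them; the other
        # half decide by a process that ignores the lesion entirely.
        group = "lesion-driven" if random.random() < 0.5 else "independent"
        if group == "lesion-driven":
            smokes = random.random() < (0.9 if lesion else 0.1)
        else:
            smokes = random.random() < 0.5
        if smokes:
            counts[group][0] += 1
            counts[group][1] += lesion
    for group, (smokers, with_lesion) in counts.items():
        print(group, "P(lesion | smokes) =", round(with_lesion / smokers, 2))

simulate()
# lesion-driven: P(lesion | smokes) ≈ 0.9  (smoking is strong evidence of the lesion)
# independent:   P(lesion | smokes) ≈ 0.5  (smoking carries no information about the lesion)
```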

Reference Classes

Another way to think about the above situation is that a naive EDT agent chooses inappropriate reference classes: when an agent performs an action, the appropriate reference class is not all other agents who have performed that action. It's unclear to me exactly what it is, but at the very least it's something like "other sufficiently similar agents who have performed that action under sufficiently similar circumstances." 

This is actually very easy to see in the smoker's lesion problem because of the following observation (which I think I found in Eliezer's old TDT writeup): suppose the world of the smoker's lesion is populated entirely with naive EDT agents who do not know whether or not they have the lesion. Then the above argument suggests that none of them will choose to smoke. But if that's the case, then where does the correlation between the lesion and smoking come from? Any agents who smoke are either not naive EDT agents or know whether they have the lesion. In either case, that makes them inappropriate members of the reference class any reasonable Bayesian agent should be using.

Furthermore, if the naive EDT agents collectively decide to become slightly less naive and restrict their reference class to each other, they now find that smoking no longer gives any information about whether they have the lesion or not! This is a kind of reflective inconsistency: the naive recommendation not to smoke in the smoker's lesion problem has the property that, if adopted by a population of naive EDT agents, it breaks the correlations upon which the recommendation is based. 

The Tickle Defense

As it happens, there is a standard counterargument in the decision theory literature to the claim that EDT recommends not smoking in the smoking lesion problem. It is known as the "tickle defense," and runs as follows: in the smoking lesion problem, what an EDT agent should be updating on is not the action of smoking but an internal desire, or "tickle," prompting it to smoke, and once the presence or absence of such a tickle has been updated on, it screens off any information gained by updating on the act of smoking or not smoking. So EDT + Tickles smokes on the smoking lesion problem. (Note that this prescription also has the effect of breaking the correlation claimed in the setup of the smoking lesion problem among a population of EDT + Tickles agents who don't know whether they have the lesion or not. So maybe there's just something wrong with the smoking lesion problem.)
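A quick numerical check of the screening-off step, under the assumption (numbers invented) that the lesion influences smoking only through the tickle:

```python
# Tickle-defense check (numbers invented): the lesion influences smoking only
# through the tickle, so conditioning on the tickle screens smoking off from
# the lesion, and hence from cancer.
from itertools import product

P_lesion = 0.5
P_cancer_given_lesion = {True: 0.9, False: 0.1}
P_tickle_given_lesion = {True: 0.9, False: 0.1}
P_smoke_given_tickle = {True: 0.8, False: 0.2}   # smoking depends only on the tickle

def joint(lesion, cancer, tickle, smoke):
    return ((P_lesion if lesion else 1 - P_lesion) *
            (P_cancer_given_lesion[lesion] if cancer else 1 - P_cancer_given_lesion[lesion]) *
            (P_tickle_given_lesion[lesion] if tickle else 1 - P_tickle_given_lesion[lesion]) *
            (P_smoke_given_tickle[tickle] if smoke else 1 - P_smoke_given_tickle[tickle]))

def p_cancer(**given):
    # P(cancer | given), by brute-force enumeration of the joint distribution.
    num = den = 0.0
    for lesion, cancer, tickle, smoke in product((True, False), repeat=4):
        values = dict(lesion=lesion, cancer=cancer, tickle=tickle, smoke=smoke)
        if all(values[k] == v for k, v in given.items()):
            p = joint(lesion, cancer, tickle, smoke)
            den += p
            num += p if cancer else 0.0
    return num / den

print(p_cancer(smoke=True), p_cancer(smoke=False))               # ≈ 0.69 vs ≈ 0.31: smoking alone is evidence
print(p_cancer(tickle=True, smoke=True), p_cancer(tickle=True))  # both ≈ 0.82: the tickle screens it off
```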

The tickle defense is good in that it encourages ignoring less information than naive EDT, but it strikes me as a patch covering up part of a more general problem, namely the problem of how to choose appropriate reference classes when performing Bayesian updates (or something like that). So I don't find it a satisfactory rescuing of EDT. It doesn't help that there's a more sophisticated version known as the "meta-tickle defense" that recommends two-boxing on Newcomb's problem.

Sophisticated EDT?

What does a more sophisticated version of EDT, taking the above observations into account, look like? I don't know. I suspect that it looks like some version of TDT / UDT, where TDT corresponds to something like trying to update on "being the kind of agent who outputs this action in this situation" and UDT corresponds to something more mysterious that I haven't been able to find a good explanation of yet, but I haven't thought about this much. If someone else has, let me know.

Here are some vague thoughts. First, I think this comment by Stuart_Armstrong is right on the money:

I've found that, in practice, most versions of EDT are underspecified, and people use their intuitions to fill the gaps in one direction or the other.

A "true" EDT agent needs to update on all the evidence they've ever observed, and it's very unclear to me how to do this in practice. So it seems that it's difficult to claim with much certainty that EDT will or will not do a particular thing in a particular situation.

CDT-via-causal-networks and TDT-via-causal-networks seem like reasonable candidates for more sophisticated versions of EDT in that they formalize the intuition above about screening off possible causes of a particular action. TDT seems like it better captures this intuition in that it better attempts to update on the cause of an action in a hypothetical about that action (the cause being that TDT outputs that action). My intuition here is that it should be possible to see causal networks as arising naturally out of Bayesian considerations, although I haven't thought about this much either. 

AIXI might be another candidate. Unfortunately, AIXI can't handle the smoking lesion problem because it models itself as separate from the environment, whereas a key point in the smoking lesion problem is that an agent in the world of the smoking lesion has some uncertainty about its innards, regarded as part of its environment. Fully specifying sophisticated EDT might involve finding a version of AIXI that models itself as part of its environment. 

Comments (128)

Suggestion: credit and link the cartoon.

Paul Crowley (4 points, 11y):
Saturday Morning Breakfast Cereal, 2013-06-25
Qiaochu_Yuan (2 points, 11y):
I can link to it, but I'm not sure where the credit would go. Alt text?
David_Gerard (2 points, 11y):
Footnote?

I keep hoping my "toxoplasmosis problem" alternative to the Smoking Lesion will take off!

The toxoplasmosis problem is a scenario that demonstrates a failing of EDT and a success of CDT. Toxoplasma gondii is a single-celled parasite carried by a significant fraction of humanity. It affects mammals in general and is primarily hosted by cats. Infection can have a wide range of negative effects (though most show no symptoms). It has also been observed that infected rats will be less afraid of cats, and even attracted to cat urine. Correlations have

...
arundelo (4 points, 11y):
Other alternatives to the Smoking Lesion Problem: Eliezer has one with chewing gum and throat abscesses (PDF). "I have avoided [the Smoking Lesion] variant because in real life, smoking does cause lung cancer." (According to that same document this class of problem is known as Solomon's Problem.) orthonormal proposes the Aspirin Paradox.
Richard_Kennaway (4 points, 11y):
The toxoplasmosis version has the drawback that in the real world there is presumably also a causal link from adoring cats to getting infected, which has to be disregarded for The Toxoplasmosis Problem, just as the real causal effect of smoking on cancer must be disregarded in The Smoking Lesion.
Qiaochu_Yuan (1 point, 11y):
I like the toxoplasmosis problem but I wanted to stick to a more established example for the sake of familiarity.

Look, HIV patients who get HAART die more often (because people who get HAART are already very sick). We don't get to see the health status confounder because we don't get to observe everything we want. Given this, is HAART in fact killing people, or not?

EDT does the wrong thing here. Any attempt to not handle the confounder properly does the wrong thing here. If something does handle the confounder properly, it's not EDT anymore (because it's not going to look at E[death|HAART]). If you are willing to call such a thing "EDT", then EDT can m...

twanvl (4 points, 11y):
According to the wikipedia page, EDT uses conditional probabilities. I.e. V(HAART) = P(death|HAART)U(death) + P(!death|HAART)U(!death). The problem is not with this EDT formula in general, but with how these probabilities are defined and estimated. In reality, they are based on a sample, and we are making a decision for a particular patient, i.e. V(HAART-patient1) = P(death-patient1|HAART-patient1)U(death-patient1) + P(!death-patient1|HAART-patient1)U(!death-patient1). We don't know any of these probabilities exactly, since you will not find out whether the patient dies until after you give or not give him the treatment. So instead, you estimate the probabilities based on other patients. A completely brain-dead model would use the reference class of all people, and conclude that HAART kills. But a more sophisticated model would include something like P(patient1 is similar to patient2) to define a better reference class, and it would also take into account confounders.
IlyaShpitser (0 points, 11y):
Ok -- the data is as I describe above. You don't get any more data. What is your EDT solution to this example?
twanvl (4 points, 11y):
You didn't give any data, just a problem description. Am I to assume that a bunch of {A0, L0, A1, Y} records are available? And you say that the policy for giving A1 is known; is the information that this decision is based on (health status) also available? In any case, you end up with the problem of estimating a causal structure from observational data, which is a challenging problem. But I don't see what this has to do with EDT vs another DT. Wouldn't this other decision theory face exactly the same problem?
IlyaShpitser (2 points, 11y):
You have (let's say infinitely many to avoid dealing with stats issues) records for { A0, L0, A1, Y }. You know they come from the causal graph I specified (complete with an unobserved confounder for health status on which no records exist). You don't need to learn the graph, you just need to tell me whether HAART is killing people or not and why, using EDT.
twanvl (2 points, 11y):
There is no single 'right answer' in this case. The answer will depend on your prior for the confounder. As others have noted, the question "is HAART killing people?" has nothing to do with EDT, or any other decision theory for that matter. The question that decision theories answer is "should I give HAART to person X?"
IlyaShpitser (2 points, 11y):
I think I disagree with both of these assertions. First, there is the "right answer," and it has nothing to do with priors or Bayesian reasoning. In fact there is no model uncertainty in the problem -- I gave you "the truth" (the precise structure of the model and enough data to parameterize it precisely so you don't have to pick or average among a set of alternatives). All you have to do is answer a question related to a single parameter of the model I gave you. The only question is which parameter of the model I am asking you about. Second, it's easy enough to rephrase my question to be a decision theory question (I do so here: http://lesswrong.com/lw/hwq/evidential_decision_theory_selection_bias_and/9cdk).
twanvl (0 points, 11y):
To quote your other comment: You put the patient on HAART if and only if V(HAART) > V(!HAART), where
V(HAART) = P(death|HAART)U(death) + P(!death|HAART)U(!death),
V(!HAART) = P(death|!HAART)U(death) + P(!death|!HAART)U(!death).
In these formulas HAART means "(decide to) put this patient on HAART" and death means "this patient dies". For concreteness, we can assume that the utility of death is low, say 0, while the utility of !death is positive. Then the decision reduces to P(!death|HAART) > P(!death|!HAART). So if you give me P(!death|HAART) and P(!death|!HAART) then I can give you a decision.
IlyaShpitser (4 points, 11y):
Ok. This is wrong. The problem is P(death|HAART) isn't telling you whether HAART is bad or not (due to unobserved confounding). I have already specified that there is confounding by health status (that is, HAART helps, but was only given to people who were very sick). What you need to compare is for various values of A1, and A0.
twanvl (0 points, 11y):
Note that I defined HAART as "put this patient on HAART", not the probability of death when giving HAART in general (maybe I should have used a different notation). If I understand your model correctly then
  A0 = is HAART given at time t=0 (boolean)
  L0 = time to wait (seconds, positive)
  A1 = is HAART given (again) at time t=L0 (boolean)
with the confounding variable H1, the health at time t=L0, which influences the choice of A1. You didn't specify how L0 was determined: is it fixed or does it also depend on the patient's health? Your formula above suggests that it depends only on the choice A0.
Now a new patient comes in, and you want to know whether you should pick A0=true/false and A1=true/false. Now for the new patient x, you want to estimate P(death[x] | A0[x],A1[x]). If it was just about A0[x], then it would be easy, since the assignment was randomized, so we know that A0 is independent of any confounders. But this is not true for A1; in fact, we have no good data with which to estimate A1[x], since we only have samples where A1 was chosen according to the health-status based policy.
Vaniver (2 points, 11y):
Yes, you should have. The notation "P(!death|HAART)" means "find every record with HAART, and calculate the percentage of them with !death." This is how EDT as an epistemic approach generates numbers to use for decisions. Why am I specifying "as an epistemic approach"? Because EDT and CDT ask for different sorts of information with which to make decisions, and thus have different natural epistemologies. CDT asks for "P(!death| do(HAART))", which is not the sort of information EDT asks for, and thus not the sort of information an EDT model has access to. To go back to an earlier statement: IlyaShpitser is asking you how you would calculate those from empirical data. The EDT answer uses the technical notation you used before, and it's the suboptimal way to do things. Really? My impression is that the observational records are good enough to get some knowledge. (Indeed, they must be good enough; lives are on the line, and saying "I don't have enough info" will result in more deaths than "this is the best I can do with the existing info.")
twanvl (0 points, 11y):
EDT does not answer this question; at least, the definition of EDT I found on wikipedia makes no mention of it. Can you point me to a description of EDT that includes the estimation of probabilities? I should have said "to estimate the effect of A1[x]". Sure, you can do something to make an estimate. But as I understand it, estimating causal models (which is what you need to estimate A1[x]) from observational data is a hard problem. That is why clinical trials use randomization, and studies that don't randomize try very hard to control for all possible confounders.
Vaniver (0 points, 11y):
I don't think you're interpreting the wikipedia article correctly. It states that the value of an action is the sum over the conditional probability times the desirability. This means that the decision-relevant probability of an outcome O_j given we do action A is P(O_j|A), estimated from the observational data. (You can slice this observational data to get a more appropriate reference class, but no guidance is given on how to do this. Causal network discovery formalizes this process, commonly referred to as 'factorization,' as well as making a few other technical improvements.) Yes, but it's a problem we have good approaches for.
twanvl (0 points, 11y):
Agreed. Where does it say how P(O_j|A) is estimated? Or that observational data comes into it at all? In my understanding you can apply EDT after you know what P(O_j|A) is. How you determine that quantity is outside the scope of decision theories.
Vaniver (2 points, 11y):
It seems like there are two views of decision theories. The first view is that a decision theory eats a problem description and outputs an action. The second view is that a decision theory eats a model and outputs an action. I strongly suspect that IlyaShpitser holds the first view (see here), and I've held both in different situations. Even when holding the second view, though, the different decision theories ask for different models, and those models must be generated somehow. I take the view that one should use off-the-shelf components for them, unless otherwise specified, and this assumption turns the second view into the first view. I should note here that the second view is not very useful practically; most of a decision analysis class will center around how to turn a problem description into a model, since the mathematics of turning models into decisions is very simple by comparison. When EDT is presented with a problem where observational data is supplied, EDT complains that it needs conditional probabilities, not observational data. The "off-the-shelf" way of transforming that data into conditional probabilities is to conditionalize on the possible actions within the observational data, and then EDT will happily pick the action with the highest utility weighted by conditional probability. When CDT is presented with the same problem, it complains that it needs a causal model. The "off-the-shelf" way of transforming observational data into a causal model is described in Causality, and so I won't go into it here, but once that's done CDT will happily pick the action with the highest utility weighted by counterfactual probability. Can we improve on the "off-the-shelf" method for EDT? If we apply some intuition to the observational data, we can narrow the reference class and get probabilities that are more meaningful. But this sort of patching is unsatisfying. At best, we recreate the causal model discovered by the off-the-shelf methods used by CDT, and now
IlyaShpitser (2 points, 11y):
In fact, in the example I gave, I fully specified everything needed for each decision theory to output an answer -- I gave a causal model to CDT (because I gave the graph under standard interventionist semantics), and a joint distribution over all observable variables to EDT (infinite sample size!). I just wanted someone to give me the right answer using EDT (and explain how they got it). EDT is not allowed to refer to causal concepts like "confounder" or "causal effect" when making a decision (otherwise it is not EDT).
[anonymous] (0 points, 11y):
IlyaShpitser is asking you to calculate those.
Qiaochu_Yuan (2 points, 11y):
Well, of course I can't give the right answer if the right answer depends on information you've just specified I don't have. Again, I think there is a nontrivial selection bias / reference class issue here that needs to be addressed. The appropriate reference class for deciding whether to give HAART to an HIV patient is not just the set of all HIV patients who've been given HAART precisely because of the possibility of confounders. In actual problems people want to solve, people have the option of acquiring more information and working from there. It's plausible that with enough information even relatively bad decision theories will still output a reasonable answer (my understanding is that this kind of phenomenon is common in machine learning, for example). But the general question of what to do given a fixed amount of information remains open and is still interesting.

Well, of course I can't give the right answer if the right answer depends on information you've just specified I don't have.

I think there is "the right answer" here, and I think it does not rely on observing the confounder. If your decision theory does then (a) your decision theory isn't as smart as it could be, and (b) you are needlessly restricting yourself to certain types of decision theories.

The appropriate reference class for deciding whether to give HAART to an HIV patient is not just the set of all HIV patients who've been given HAART precisely because of the possibility of confounders.

People have been thinking about confounders for a long time (earliest reference known to me to a "randomized" trial is the book of Daniel, see also this: http://ije.oxfordjournals.org/content/33/2/247.long). There is a lot of nice clever math that gets around unobserved confounders developed in the last 100 years or so. Saying "well we just need to observe confounders" is sort of silly. That's like saying "well, if you want to solve this tricky computational problem forget about developing new algorithms and that whole computational complexity ...

William_Quixote (4 points, 11y):
For non-experts in the thread, what's the name of this area, and is there a particular introductory text you would recommend?
IlyaShpitser (4 points, 11y):
Thanks for your interest! The name of the area is "causal inference." Keywords: "standardization" (in epidemiology), "confounder or covariate adjustment," "propensity score", "instrumental variables", "back-door criterion," "front-door criterion," "g-formula", "potential outcomes", "ignorability," "inverse probability weighting," "mediation analysis," "interference", etc. Pearl's Causality book (http://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/052189560X/ref=pd_sim_sbs_b_1) is a good overview (but doesn't talk a lot about statistics/estimation). Early references are Sewall Wright's path analysis paper from 1921 (http://naldc.nal.usda.gov/download/IND43966364/PDF) and Neyman's paper on potential outcomes from 1923 (http://www.ics.uci.edu/~sternh/courses/265/neyman_statsci1990.pdf). People say either Sewall Wright or his dad invented instrumental variables also.
William_Quixote (4 points, 11y):
Thanks
endoself (7 points, 11y):
You're sort of missing what Ilya is trying to say. You might have to look at the actual details of the example he is referring to in order for this to make sense. The general idea is that even though we can't observe certain variables, we still have enough evidence to justify the causal model where HAART leads to fewer people dying, so we can conclude that we should prescribe it. I would object to Ilya's more general point though. Saying that EDT would use E(death|HAART) to determine whether to prescribe HAART is making the same sort of reference class error you discuss in the post. EDT agents use EDT, not the procedures used to assign A0 and A1 in the example, so we really need to calculate E(death|EDT agent prescribes HAART). I would expect this to produce essentially the same results as a Pearlian E(death | do(HAART)), and would probably regard it as a failure of EDT if it did not add up to the same thing, but I think that there is value in discovering how exactly this works out, if it does.
IlyaShpitser (5 points, 11y):
A challenge (not in a bad sense, I hope): I would be interested in seeing an EDT derivation of the right answer in this example, if anyone wants to do it.
[anonymous] (6 points, 11y):
Yeah, unfortunately everyone who responded to your question went all fuzzy in the brain and started philosophical evasive action.
nshepperd (0 points, 11y):
Um, since when were decision theories for answering epistemic questions? Are you trying to make some kind of point about how evidential decision theorists use incorrect math that ignores confounders?
IlyaShpitser (4 points, 11y):
??? How are you supposed to make good decisions? Well, I am trying to learn why people think EDT isn't terminally busted. I gave a simple example that usually breaks EDT as I understand it, and I hope someone will work out the right answer with EDT to show me where I am going wrong.
nshepperd (-2 points, 11y):
Use decision theory. The point is that it's not decision theory that tells you your shoelaces are undone when you look at your feet. "Are my shoelaces undone?" is a purely epistemic question, that has nothing to do with making decisions. But upon finding out that your shoelaces are undone, a decision theory might decide to do X or Y, after discovering (by making a few queries to the epistemic-calculations module of your brain) that certain actions will result in the shoelaces being tied again, that that would be safer, etc etc. You're complaining that EDT is somehow unable to solve the question of "is HAART bad" given some useless data set when that doesn't even sound like a question EDT should be trying to answer in the first place—but rather, a question you would try to answer with standard multivariate statistics.
IlyaShpitser (1 point, 11y):
Ok -- a patient comes in (from the same reference class as the patients in your data). This patient has HIV. Do you put him on HAART or not? Your utility function is minimizing patient deaths. By the way, if you do the wrong thing, you go to jail for malpractice.
nshepperd (0 points, 11y):
How about we dispense with this and you tell us if you know how to extract information about the usefulness (or not) of HAART from a data set like this?
IlyaShpitser (5 points, 11y):
Ok, first things first. Do you agree that "Do you put him on HAART or not? Your utility function is minimizing patient deaths." is in fact a kind of question EDT, or decision theories in general, should be trying to answer? In fact, I already said elsewhere in this thread that I think there is the right answer to this question, and this right answer is to put the patient on HAART (whereas my understanding of EDT is that it will notice that E[death | HAART] > E[death | no HAART], and conclude that HAART is bad). The way you get the answer is no secret either, it's what is called 'the g-formula' or 'truncated factorization' in the literature. I have been trying to understand how my understanding of EDT is wrong. If people's attempt to fix this is to require that we observe all unobserved confounders for death, then to me this says EDT is not a very good decision theory (because other decision theories can get the right answer here without having to observe anything over what I specified). If people say that the right answer is to not give HAART then that's even worse (e.g. they will kill people and go to jail if they actually practice medicine like that).
nshepperd (-2 points, 11y):
Yes. However a decision theory in general contains no specific prescriptions for obtaining probabilities from data, such as "oh, use the parametric g-formula". In general, they have lists of probabilistic information that they require. Setting that aside, I assume you mean the above to mean "count the proportion of samples without HAART with death, and compare to proportion of samples with HAART with death". Ignoring the fact that I thought there were no samples without HAART at t=0, what if half of the samples referred to hamsters, rather than humans? No-one would ever have proposed EDT as a serious decision theory if they intended one to blindly count records while ignoring all other relevant "confounding" information (such as species, or health status). In reality, the purpose of the program of "count the number of people who smoke who have the lesion" or "count how many people who have HAART die" is to obtain estimates of P(I have the lesion | I smoke) or P(this patient dies | I give this patient HAART). That is why we discard hamster samples, because there are good a priori reasons to think that the survival of hamsters and humans is not highly correlated, and "this patient" is a human.
IlyaShpitser (3 points, 11y):
Well, there is in reality A0 and A1. I choose this example because in this example it is both the case that E[death | A0, A1] is wrong, and \sum_{L0} E[death | A0,A1,L0] p(L0) (usual covariate adjustment) is wrong, because L0 is a rather unusual type of confounder. This example was something naive causal inference used to get wrong for a long time. More generally, you seem to be fighting the hypothetical. I gave a specific problem on only four variables, where everything is fully specified, there aren't hamsters, and which (I claim) breaks EDT. You aren't bringing up hamsters with Newcomb's problem, why bring them up here? This is just a standard longitudinal design: there is nothing exotic about it, no omnipotent Omegas or source-code reading AIs. I think you misunderstand decision theory. If you were right, there would be no difference between CDT and EDT. In fact, the entire point of decision theories is to give rules you would use to make decisions. EDT has a rule involving conditional probabilities of observed data (because EDT treats all observed data as evidence). CDT has a rule involving a causal connection between your action and the outcome. This rule implies, contrary to what you claimed, that a particular method must be used to get your answer from data (this method being given by the theory of identification of causal effects) on pain of getting garbage answers and going to jail.
nshepperd (0 points, 11y):
I said why I was bringing them up. To make the point that blindly counting the number of events in a dataset satisfying (action = X, outcome = Y) is blatantly ridiculous, and this applies whether or not hamsters are involved. If you think EDT does that then either you are mistaken, or everyone studying EDT is a lot less sane than they look. The difference is that CDT asks for P(utility | do(action), observations) and EDT asks for P(utility | action, observations). Neither CDT nor EDT specifies detailed rules for how to calculate these probabilities or update on observations, or what priors to use. Indeed, those rules are normally found in statistics textbooks, Pearl's Causality or—in the case of the g-formula—random math papers.
IlyaShpitser (7 points, 11y):
Ok. I keep asking you, because I want to see where I am going wrong. Without fighting the hypothetical, what is EDT's answer in my hamster-free, perfectly standard longitudinal example: do you in fact give the patient HAART or not? If you think there are multiple EDTs, pick the one that gives the right answer! My point is, if you do give HAART, you have to explain what rule you use to arrive at this, and how it's EDT and not CDT. If you do not give HAART, you are "wrong." The form of argument where you say "well, this couldn't possibly be right -- if it were I would be terrified!" isn't very convincing. I think Homer Simpson used that once :).
nshepperd (0 points, 11y):
What I meant was "if it were, that would require a large number of (I would expect) fairly intelligent mathematicians to have made an egregiously dumb mistake, on the order of an engineer modelling a 747 as made of cheese". Does that seem likely to you? The principle of charity says "don't assume someone is stupid so you can call them wrong". Regardless, since there is nothing weird going on here, I would expect (a particular non-strawman version of) EDT's answer to be precisely the same as CDT's answer, because "agent's action" has no common causes with the relevant outcomes (ETA: no common causes that aren't screened off by observations. If you measure patient vital signs and decide based on them, obviously that's a common cause, but irrelevant since you've observed them). In which case you use whatever statistical techniques one normally uses to calculate P(utility | do(action), observations) (the g-formula seems to be an ad-hoc frequentist device as far as I can tell, but there's probably a prior that leads to the same result in a bayesian calculation). You keep telling me that results in "give HAART" so I guess that's the answer, even though I don't actually have any data. Is that a satisfying answer? In retrospect, I would have said that before, but got distracted by the seeming ill-posedness of the problem and incompleteness of the data. (Yes, the data is incomplete. Analysing it requires nontrivial assumptions, as far as I can tell from reading a paper on the g-formula.)
IlyaShpitser (4 points, 11y):
See, it's things like this that make people have the negative opinion of LW as a quasi-religion that they do. I am willing to wager a guess that your understanding of "the parametric g-formula" is actually based on a google search or two. Yet despite this, you are willing to make (dogmatic, dismissive, and wrong) Bayesian-sounding pronouncements about it. In fact the g-formula is just how you link do(.) and observational data, nothing more, nothing less. do(.) is defined in terms of the g-formula in Pearl's chapter 1. The g-formula has nothing to do with Bayesian vs frequentist differences.

No. EDT is not allowed to talk about "confounders" or "causes" or "do(.)". There is nothing in any definition of EDT in any textbook that allows you to refer to anything that isn't a function of the observed joint density. So that's all you can use to get the answer here. If you talk about "confounders" or "causes" or "do(.)", you are using CDT by definition. What is the difference between EDT and CDT to you?

----------------------------------------

Re: principle of charity, it's very easy to get causal questions wrong. Causal inference isn't easy! Causal inference as a field used to get the example I gave wrong until the late 1980s. Your answers about how to use EDT to get the answer here are very vague. You should be able to find a textbook on EDT, and follow an algorithm there to give a condition in terms of p(A0,A1,L0,Y) for whether HAART should be given or not. My understanding of EDT is that the condition would be:

Give HAART at A0,A1 iff E[death | A0=yes, A1=yes] < E[death | A0=no, A1=no]

So you would not give HAART by construction in my example (I mentioned people who get HAART die more often due to confounding by health status).
nshepperd (1 point, 11y):
You're probably right. Not that this matters much. The reason I said that is because the few papers I could find on the g-formula were all in the context of using it to find out "whether HAART kills people", and none of them gave any kind of justification or motivation for it, or even mentioned how it related to probabilities involving do(). Did you read what I wrote? Since action and outcome do not have any common causes (conditional on observations), P(outcome | action, observations) = P(outcome | do(action), observations). I am well aware that EDT does not mention do. This does not change the fact that this equality holds in this particular situation, which is what allows me to say that EDT and CDT have the same answer here. Postulating "just count up how many samples have the particular action and outcome, and ignore everything else" as a decision theory is not a complicated causal mistake. This was the whole point of the hamster example. This method breaks horribly on the most simple dataset with a bit of irrelevant data. ETA: [responding to your edit] No, this is completely wrong, because this ignores the fact that the action the EDT agent considers is "I (EDT agent) give this person HAART", not "be a person who decides whether to give HAART based on metrics L0, and also give this person HAART" which isn't something it's possible to "decide" at all.
IlyaShpitser (4 points, 11y):
Thanks for this. Technical issue: In my example, A0 has no causes (it is randomized) but A1 has a common cause with the outcome Y (this common cause is the unobserved health status, which is a parent of both Y and L0, and L0 is a parent of A1). L0 is observed but you cannot adjust for it either because that screws up the effect of A0. To get the right answer here, you need a causal theory that connects observations to causal effects. The point is, EDT isn't allowed to just steal causal theory to get its answer without becoming a causal decision theory itself.
nshepperd (0 points, 11y):
Health status is screened off by the fact that L0 is an observation. At the point where you (EDT agent) decide whether to give HAART at A1 the relevant probability for purposes of calculating expected utility is P(outcome=Y | action=give-haart, observations=[L0, this dataset]). Effect of action on unobserved health-status and through to Y is screened off by conditioning on L0.
IlyaShpitser (0 points, 11y):
That's right, but as I said, you cannot just condition on L0 because that blocks the causal path from A0 to Y, and opens a non-causal path A0 -> L0 <-> Y. This is what makes L0 a "time dependent confounder" and this is why \sum_{L0} E[Y | L0,A0,A1] p(L0) and E[Y | L0, A0, A1] are both wrong here. (Remember, HAART is given in two stages, A0 and A1, separated by L0).
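A toy simulation of a data-generating process with this shape may help readers follow the exchange (all parameters invented): A0 is randomized, an unobserved health variable H drives both L0 and death, A1 is given mostly to the sick, and treatment genuinely reduces mortality. Naive conditioning on the treatments makes HAART look harmful, plain adjustment for L0 is biased, and the g-formula estimate agrees with the interventional ground truth.

```python
# Toy version of the two-stage HAART example (all numbers invented): A0 is
# randomized, unobserved health H drives both the marker L0 and death Y, and
# A1 is given mostly to the sick, even though treatment lowers the risk of death.
import random
random.seed(0)

P_L0 = {(1, 0): 0.95, (1, 1): 0.80, (0, 0): 0.30, (0, 1): 0.05}  # P(L0=1 | H, A0)

def draw(a0=None, a1=None):
    """Draw (A0, L0, A1, Y); pass a0/a1 to intervene instead of following the sick-get-treated policy."""
    h = 1 if random.random() < 0.5 else 0                         # unobserved poor health
    A0 = (1 if random.random() < 0.5 else 0) if a0 is None else a0
    L0 = 1 if random.random() < P_L0[(h, A0)] else 0
    A1 = (1 if random.random() < (0.95 if L0 else 0.05) else 0) if a1 is None else a1
    p_death = (0.9 if h else 0.1) * 0.8 ** (A0 + A1)              # treatment genuinely helps
    Y = 1 if random.random() < p_death else 0
    return A0, L0, A1, Y

data = [draw() for _ in range(500_000)]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

def naive(a0, a1):       # E[Y | A0=a0, A1=a1]
    return mean(y for A0, L0, A1, y in data if A0 == a0 and A1 == a1)

def adjust(a0, a1):      # sum_{l0} E[Y | a0, l0, a1] P(l0): adjusting for L0 as if it were baseline
    return sum(mean(y for A0, L0, A1, y in data if (A0, L0, A1) == (a0, l0, a1)) *
               mean(L0 == l0 for A0, L0, A1, y in data)
               for l0 in (0, 1))

def g_formula(a0, a1):   # sum_{l0} E[Y | a0, l0, a1] P(l0 | a0)
    return sum(mean(y for A0, L0, A1, y in data if (A0, L0, A1) == (a0, l0, a1)) *
               mean(L0 == l0 for A0, L0, A1, y in data if A0 == a0)
               for l0 in (0, 1))

def truth(a0, a1):       # E[Y | do(A0=a0, A1=a1)], by intervening in the simulation itself
    return mean(draw(a0, a1)[3] for _ in range(500_000))

for label, f in [("naive", naive), ("adjust for L0", adjust), ("g-formula", g_formula), ("do() truth", truth)]:
    print(f"{label:13s}  always treat: {f(1, 1):.2f}   never treat: {f(0, 0):.2f}")
# naive          always treat: ~0.52   never treat: ~0.20  (treatment looks harmful)
# adjust for L0  always treat: ~0.36   never treat: ~0.44  (biased)
# g-formula      always treat: ~0.32   never treat: ~0.50  (matches the interventional truth)
# do() truth     always treat: ~0.32   never treat: ~0.50
```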
nshepperd (0 points, 11y):
Okay, this isn't actually a problem. At A1 (deciding whether to give HAART at time t=1) you condition on L0 because you've observed it. This means using P(outcome=Y | action=give-haart-at-A1, observations=[L0, the dataset]) which happens to be identical to P(outcome=Y | do(action=give-haart-at-A1), observations=[L0, the dataset]), since A1 has no parents apart from L0. So the decision is the same as CDT at A1. At A0 (deciding whether to give HAART at time t=0), you haven't measured L0, so you don't condition on it. You use P(outcome=Y | action=give-haart-at-A0, observations=[the dataset]) which happens to be the same as P(outcome=Y | do(action=give-haart-at-A0), observations=[the dataset]) since A0 has no parents at all. The decision is the same as CDT at A0, as well. To make this perfectly clear, what I am doing here is replacing the agents at A0 and A1 (that decide whether to administer HAART) with EDT agents with access to the aforementioned dataset and calculating what they would do. That is, "You are at A0. Decide whether to administer HAART using EDT." and "You are at A1. You have observed L0=[...]. Decide whether to administer HAART using EDT.". The decisions about what to do at A0 and A1 are calculated separately (though the agent at A0 will generally need to know, and therefore to first calculate what A1 will do, so that they can calculate stuff like P(outcome=Y | action=give-haart-at-A0, observations=[the dataset])). You may actually be thinking of "solve this problem using EDT" as "using EDT, derive the best (conditional) policy for agents at A0 and A1", which means an EDT agent standing "outside the problem", deciding upon what A0 and A1 should do ahead of time, which works somewhat differently — happily, though, it's practically trivial to show that this EDT agent's decision would be the same as CDT's: because an agent deciding on a policy for A0 and A1 ahead of time is affected by nothing except the original dataset, which is of course the input (an
Vaniver (-2 points, 11y):
Yes, actually, they do seem to have made an egregiously dumb mistake. People think EDT is dumb because it is dumb. Full stop. The confusion is that sometimes when people talk about EDT, they are talking about the empirical group of "EDTers". EDTers aren't dumb enough to actually use the math of EDT. A "non-strawman EDT" is CDT. (If it wasn't, how could the answers always be the same?) The point of math, though, is that you can't strawman it; the math is what it is. Making decisions based on the conditional probabilities that resulted from observing that action historically is dumb; EDT makes decisions based on conditional probabilities; therefore EDT is dumb.
nshepperd (3 points, 11y):
They're not...? EDT one-boxes on Newcomb's and smokes (EDIT: doesn't smoke) on the smoking lesion (unless the tickle defense actually works or something). Of course, it also two-boxes on transparent Newcomb's, so it's still a dumb theory, but it's not that dumb.
Vaniver (0 points, 11y):
How else should I interpret "I would expect (a particular non-strawman version of) EDT's answer to be precisely the same as CDT's answer"? Huh? EDT doesn't smoke on the smoking lesion, because P(cancer|smoking)>P(cancer|!smoking).
nshepperd (0 points, 11y):
What I said was Meaning that in this particular situation (where there aren't any omniscient predictors or mysterious correlations), the decision is the same. I didn't mean they were the same generally. Er, you're right. I got mixed up there.
Vaniver (-2 points, 11y):
Okay. Do you have a mathematical description of whether they differ, or is it an "I know it when I see it" sort of description? What makes a correlation mysterious? I'm still having trouble imagining what a "non-strawman" EDT looks like mathematically, except for what I'm calling EDT+Intuition, which is people implicitly calculating probabilities using CDT and then using those probabilities to feed into EDT (in which case they're only using it for expected value calculation, which CDT can do just as easily). It sounds to me like someone insisting that a "non-strawman" formula for x squared is x cubed.
nshepperd (0 points, 11y):
A first try at formalising it would amount to "build a causal graph including EDT-agent's-decision-now as a node, and calculate expected utilities using P(utility | agent=action, observations)". For example, for your average boring everyday situation, such as noticing a $5 note on the ground and thinking about whether to pick it up, the graph is (do I see $5 on the ground) --> (do I try to pick it up) --> (outcome). To arrive at a decision, you calculate the expected utilities using P(utility | pick it up, observation=$5) vs P(utility | don't pick it up, observation=$5). Note that conditioning on both observations and your action breaks the correlation expressed by the first link of the graph, resulting in this being equivalent to CDT in this situation. Also conveniently this makes P(action | I see $5) not matter, even though this is technically a necessary component to have a complete graph.

To be actually realistic you would need to include a lot of other stuff in the graph, such as everything else you've ever observed, and (agent's state 5 minutes ago) as causes of the current action (do I try to pick it up). But all of these can either be ignored (in the case of irrelevant observations) or marginalised out without effect (in the case of unobserved causes that we don't know affect the outcome in any particular direction).

Next take an interesting case like Newcomb's. The graph is something like the below: [graph: (agent 5 minutes ago) --> (one-box?) and (agent 5 minutes ago) --> (omega fills boxes), with both feeding into the payoff]. We don't know whether agent-5-minutes-ago was the sort that would make omega fill both boxes or not (so it's not an observation), but we do know that there's a direct correlation between that and our one-boxing. So when calculating P(utility|one-box), which implicitly involves marginalising over (agent-5-minutes-ago) and (omega fills boxes), we see that the case where (agent-5-minutes-ago)=one-box and (omega fills boxes)=both dominates, while the opposite case dominates for P(utility|two-box), so one-boxing has a higher utility.
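A minimal enumeration of the calculation sketched above (prior, predictor accuracy, and payoffs all invented): marginalize over the unobserved disposition node while conditioning on the choice.

```python
# Newcomb's with the disposition node made explicit (accuracy and prior invented).
ACC = 0.99          # P(the actual choice matches the 5-minutes-ago disposition)
P_DISPOSED_TO_ONEBOX = 0.5

def expected_utility(choice):
    # Marginalize over the unobserved disposition while conditioning on the choice.
    num = den = 0.0
    for disposed_to_onebox in (True, False):
        p_d = P_DISPOSED_TO_ONEBOX if disposed_to_onebox else 1 - P_DISPOSED_TO_ONEBOX
        p_choice = ACC if disposed_to_onebox == (choice == "one-box") else 1 - ACC
        omega_fills_big_box = disposed_to_onebox       # Omega predicts from the disposition
        payoff = (1_000_000 if omega_fills_big_box else 0) + (1_000 if choice == "two-box" else 0)
        num += p_d * p_choice * payoff
        den += p_d * p_choice
    return num / den                                   # E[payoff | choice]

print(expected_utility("one-box"), expected_utility("two-box"))  # ~990,000 vs ~11,000
```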
Vaniver (-2 points, 11y):
Are we still talking about EDT? Why call it that? (I do think that a good decision theory starts off with "build a causal graph," but I think that decision theory already exists, and is CDT, so there's no need to invent it again.) I don't think that formalization of Newcomb's works, or at least you should flip the arrows. I think these are the formalizations in which perfect prediction makes sense:

1. You're deciding which agent to be 5 minutes ago, and your decision is foreordained based on that. (This is the 'submit source code to a game' option like here.)

2. There is a causal arrow from your decision to Omega filling the boxes. (This is the Laplace's Demon option, where the universe is deterministic and Omega, even though it's in the past, is fully able to perceive the future. This is also your graph if you flip the 'one-box?' to 'agent 5 minutes ago' arrow.)

In both of these causal graphs, CDT suggests one-boxing. (I think that CDTers who two-box build the wrong causal graph from the problem description.)
nshepperd (2 points, 11y):
It's not like only CDTers are allowed to use causal graphs. You can call them "bayes nets" if the word "causal" seems too icky. Joking aside, it's called EDT because it doesn't use do(·). We're just using regular boring old conditional probabilities on the obvious formalisation of the problem. As for reversing the arrows... I don't think it's entirely trivial to justify causal arrows that go backward in time. You can probably do it, with some kind of notion of "logical causality" or something. In fact, you could construct a more abstract network with "what this decision theory recommends" (a mathematical fact) as an ancestor node of both omega's predictions and the agent itself. If you optimize the resulting utility over various values of the decision theory node I imagine you'd end up with something analogous to Wei Dai's UDT (or was it ADT?). The decision theory node can be set as a parent of anywhere that decision theory is implemented in the universe, which was one of the main ideas of ADT. I'm not sure if that could really be called "causal" decision theory any more though.
IlyaShpitser (2 points, 11y):
Ugh. A Bayesian network is not a causal model. I am going to have to exit this; I am finding having to explain the same things over and over again very frustrating :(. From what I could tell following this thread you subscribe to the notion that there is no difference between EDT and CDT. That's fine, I guess, but it's a very exotic view of decision theories, to put it mildly. It just seems like a bizarre face-saving maneuver on behalf of EDT.

----------------------------------------

I have a little bit of unsolicited advice (which I know is dangerous to do), please do not view this as a status play: read the bit of Pearl's book where he discusses the difference between a Bayesian network, a causal Bayesian network, and a non-parametric structural equation model. This may also make it clear what the crucial difference between EDT and CDT is. Also read this if you have time: www.biostat.harvard.edu/robins/publications/wp100.pdf

This paper discusses what a causal model is very clearly (actually it discusses 4 separate causal models arranged in a hierarchy of "strength.")
nshepperd (3 points, 11y):
EDT one-boxes on newcomb's. Also I am well aware that not all bayes nets are causal models.
Vaniver (0 points, 11y):
The central disagreement between EDT and CDT is whether one should use conditionals or counterfactuals. Counterfactuals are much stronger, and so I don't see the argument for using conditionals. In particular, if you're just representing the joint probability distribution as a Bayes net, you don't have any of the new information that a causal graph provides you. In particular, you cannot tell the difference between observations and interventions, which leads to all of the silliness of normal EDT. The do() function is a feature. In causal graphs, information does not flow backwards across arrows in the presence of an intervention. (This is the difference between counterfactuals and conditionals.) If I make a decision now, it shouldn't impact things that might cause other people to make that decision. (Getting out of bed does not affect the time of day, even though the time of day affects getting out of bed.) When the decision "one-box?" impacts the node "5 minutes ago," it's part of that class of causal errors, as when someone hopes that staying in bed will make it still be morning when they decide to get out of bed. Your use of the graph as you described it has no internal mechanism to avoid that, and so would seek to "manage the news" in other situations. This is why EDT looks so broken, and why IlyaShpitser in particular is so interested in seeing the EDT method actually worked out. It's like the painter who insisted that he could get yellow by mixing red and white paint. In CDT, you get the causal separation between observations and interventions with the do() operator, but in EDT, you need a tube of yellow to 'sharpen it up'. I think that perfect prediction is functionally equivalent to causality flowing backwards in time, and don't think it's possible to construct a counterexample. I agree it doesn't happen in the real world, but in the hypothetical world of Newcomb's Problem, that's the way things are, and so resisting that point is fighting the hypothetic
nshepperd (1 point, 11y):
I'm not actually advocating EDT. After all, it two-boxes on transparent Newcomb's, which is a clear mistake. I'm just trying to explain how it's not as bad as it seems to be. For example: The only reason there is normally a correlation between getting out of bed and "morning time" is because people decide whether to get out of bed based on the clock, sunrise, etc. I do not think that EDT would "stay in bed in order to make it still be morning" because:

1. If you've already looked at the clock, there's no hope for you anyway. It's already a practical certainty that it won't be morning any more if you stay in bed and get up later. This screens off any possible effect that deciding could have. (Granted, there's a tiny remnant effect for "what if the laws of physics changed and time doesn't necessarily work the same ways as I'm used to". But your samples that suggest a correlation don't cover that case, and even if they did, they would show that people who stayed in bed hoping it would make morning last longer typically missed their early-morning commute.)

2. Or, if you've somehow managed to remain ignorant as to the time, the causal connection between "getting out of bed" and "morning" is broken, because you can't possibly be deciding whether to get out of bed based on the time. So the graph doesn't even have a connection between the two nodes, and staying in bed does nothing.

In general "managing the news" doesn't work because "managing" is a very different behaviour which simply doesn't correlate to the information in the same way. EDT, done right, is aware of that.
Vaniver (0 points, 11y):
I don't see why you think that EDT isn't "as bad as it seems to be," yet. I see lots of verbal explanations of why EDT wouldn't do X, but what I'm looking for is a mathematical reason why you think the language of conditionals is as strong as the language of counterfactuals, or why a decision theory that only operates in conditionals will not need human guidance (which uses counterfactuals) to avoid getting counterfactual questions wrong.
nshepperd (2 points, 11y):
Are you under the impression that I think EDT is The Best Thing Ever or something??? EDT gets stuff wrong. It gets transparent Newcomb's wrong. It does not, however, kill puppies hoping that this means that there is some sane reason to be killing puppies (because obviously people only ever kill puppies for a good reason). The reason for this should be obvious [hint: killing puppies in the hope that there's a good reason does not correlate with having good reasons]. I've explained this multiple times. I'm not going to play "person from the non-CDT tribe" any more if you still can't understand. I don't even like EDT...

ETA: Remember when I gave you a mathematical formalism and you ignored it saying "your arrows are the wrong way around" and "CDT is better, by the way"? That's frustrating. This isn't a competition.
Vaniver (2 points, 11y):
No, I think you think it's not as bad as it seems to be. Things not being as they seem is worth investigating, especially about technical topics where additional insight could pay serious dividends. I apologize for any harshness. I don't think a tribal lens is useful for this discussion, though; friends can disagree stringently on technical issues. I think that as an epistemic principle it's worth pursuing such disagreements, even though disagreements in other situations might be a signal of enmity.

It's not clear to me why you think I ignored your formalism; I needed to understand it to respond how I did. My view is that there are two options for your formalism:

1. The formalism should be viewed as a causal graph, and decisions as the do() operator, in which case the decision theory is CDT and the formalism does not capture Newcomb's problem because Omega is not a perfect predictor. Your formalism describes the situation where you want to be a one-boxer when scanned, and then switch to two-boxing after Omega has left, without Omega knowing that you'll switch. If this is impossible, you are choosing who to be 5 minutes ago, not your decision now (or your decision now causes who you were 5 minutes ago), and if this is possible, then Omega isn't a perfect predictor.

2. The formalism should be viewed as a factorization of a joint probability distribution, and decisions as observations, in which case the decision theory is EDT and the formalism will lead to "managing the news" in other situations. (If your formalism is a causal graph but decisions are observations, then the 'causal' part of the graph is unused, and I don't see a difference between it and a factorization of a joint probability distribution.)

To justify the second view: I do not see a structural difference between and whereas with the do() operator, the structural difference is clear (interventions do not propagate backwards across causal arrows). If you want to use language like "the causal conn
0nshepperd11y
The difference is that in the first case the diamond graph is still accurate, because our dataset or other evidence that we construct the graph from says that there's a correlation between 5-minutes-ago and one-box even when the agent doesn't know the state of agent-5-minutes-ago. In the second case there should be no connection from time to get-out-of-bed, because we haven't observed the time, and all our samples which would otherwise suggest a correlation involve an agent who has observed the time, and decides whether to get out of bed based on that, so they're inapplicable to this situation. More strongly, we know that the supposed correlation is mediated by "looking at the clock and deciding to get up if it's past 8am", which cannot happen here. There is no correlation until we observe the time.
1Vaniver11y
My understanding of this response is "the structural difference between the situations is that the causal graphs are different for the two situations." I agree with that statement, but I think you have the graphs flipped around; I think the causal graph you drew describes the get-out-of-bed problem, and not Newcomb's problem.

I think the following causal graph fits the "get out of bed?" situation: Time -> I get out of bed -> Late, and Time -> Boss gets out of bed -> Late. (I've added the second path to make the graphs exactly analogous; suppose I'm only late for work if I arrive after my boss, and if I manage to change what time it is, I'll change when my boss gets to work because I've changed what time it is.) This has the same form as the Newcomb's causal graph that you made; only the names have changed.

In that situation, you asserted that the direct correlation between "one-box?" and "agent 5 minutes ago" was strong and relevant to our decision, even though we hadn't observed "agent 5 minutes ago," and historical agents who played Omega's game hadn't observed "agent 5 minutes ago" when they played. I assert that there's a direct correlation between Time and I-get-out-of-bed which doesn't depend on observing Time. (The assumption that the correlation between getting out of bed and the time is mediated by looking at the clock is your addition, and doesn't need to be true; let's assume the least convenient possible world where it isn't mediated by that. I can think of a few examples and I'm sure you could as well.)

And so when we calculate P(Late | I-get-out-of-bed) using the method you recommended for Newcomb's in the last paragraph of this comment, we implicitly marginalize over Time and Boss-gets-out-of-bed, and notice that when we choose to stay in bed, this increases the probability that it's early, which increases the probability that our boss chooses to stay in bed! For the two graphs, which only differ in the names of the nodes, to give different mathematical
0nshepperd11y
You can turn any ordinary situation into the smoking lesion by postulating mysterious correlations rather than straightforward correlations that work by conscious decisions based on observations. Did you have any realistic examples in mind?
-2Vaniver11y
Yes. Suppose that the person always goes to sleep at the same time, and wakes up after a random interval. In the dark of their bedroom (blackout curtains and no clock), they decide whether to go back to sleep or get up. Historically, the later in the morning it is, the more likely they are to get up. To make the historical record analogous to Newcomb's, we might postulate that historically they have always decided to go back to sleep before 7 AM, and always decided to get up after 7 AM, and the boss's alarm is set to wake him up at 7 AM. This is not a very realistic postulation, as a stochastic relationship between the two is more realistic, but the parameters of the factorization are not related to the structure of the causal graph (and Newcomb's isn't very realistic either).

It's not obvious to me what you mean by "mysterious correlations" and "straightforward correlations." Correlations are statistical objects that either exist or don't, and I don't know what conscious decision based on observations you're referring to in the smoking lesion problem. What makes the smoking lesion problem a problem is that the lesions are unobserved. For example, in an Israeli study, parole decisions are correlated with the time since the parole board last took a break. Is that correlation mysterious, or straightforward? No one familiar with the system (board members, lawyers, etc.) expected the effects the study revealed, but there are plausible explanations for the effect (mental fatigue, low blood sugar, declining mood, all of which are replenished by a meal break).

It might be that by 'mysterious correlations' you mean 'correlations without an obvious underlying causal mechanism', and by 'straightforward correlations' you mean 'correlations with an obvious underlying causal mechanism.' It's not clear to me what the value of that distinction is. Neither joint probability distributions nor causal graphs require that the correlations or causal arrows be labeled.
3nshepperd11y
Well, the correlations in the smoking lesion problem are mysterious because they aren't caused by agents observing lesion|no-lesion and deciding whether to smoke based on that. They are mysterious because it is simply postulated that "the lesion causes smoking without being observed", without any explanation of how, and it is generally assumed that the correlation somehow still applies when you're deciding what to do using EDT, which I personally have some doubt about (EDT decides what to do based only on preferences and observations, so how can its output be correlated to anything else?).

Straightforward correlations are those where, for example, people go out with an umbrella if they see rain clouds forming. The correlation is created by straightforward decision-making based on observations. Simple statistical reasoning suggests that you only have reason to expect these correlations to hold for an EDT agent if the EDT agent makes the same decisions in the same situations. Furthermore, these correlations tend to pose no problem for EDT because the only time an EDT agent is in a position to take an action correlated to some observation in this way ("I observe rain clouds, should I take my umbrella?"), they must have already observed the correlate ("rain clouds"), so EDT makes no attempt to influence it ("whether or not I take my umbrella, I know there are rain clouds already").

Returning to the smoking lesion problem, there are a few ways of making the mystery go away. You can suppose that the lesion works by making you smoke even after you (consciously) decide to do something else. In this case the decision of the EDT agent isn't actually smoke | don't-smoke; rather, you get to decide a parameter of something else that determines whether you smoke. This makes the lesion not actually a cause of your decision, so you choose-to-smoke, obviously. Alternatively, I was going to analyse the situation where the lesion makes you want to smoke (by altering your decisio
-2Vaniver11y
No mathematical decision theory requires verbal explanations to be part of the model that it operates on. (It's true that when learning a causal model from data, you need causal assumptions; but when a problem provides the model rather than the data, this is not necessary.)

Do you have doubt that this is how EDT, as a mathematical algorithm, operates, or do you have some doubt that this is a wise way to construct a decision-making algorithm? If the second, this is why I think EDT is a subpar decision theory: it sees the world as a joint probability distribution, and does not have the ability to distinguish correlation and causation, which means it cannot know whether or not a correlation applies for any particular action (and so assumes that all do). If the first, I'm not sure how to clear up your confusion. There is a mindset that programming cultivates, which is that the system does exactly what you tell it to, with the corollary that your intentions have no weight.

The trouble with LCPW (the Least Convenient Possible World) is that it's asymmetric; Eliezer claims that the LCPW is the one where his friend has to face a moral question, and Eliezer's friend might claim that the LCPW is the one where Eliezer has to face a practical problem. The way to break the asymmetry is to try to find the most informative comparison. If the hypothetical has been fought, then we learn nothing about morality, because there is no moral problem. If the hypothetical is accepted despite faults, then we learn quite a bit about morality. The issues with EDT might require 'edge cases' to make obvious, but in the same way that the issues with Newtonian dynamics might require 'edge cases' to make obvious.
0nshepperd11y
What I'm saying is that the only way to solve any decision theory problem is to learn a causal model from data. It just doesn't make sense to postulate particular correlations between an EDT agent's decisions and other things before you even know what EDT decides! The only reason you get away with assuming graphs like lesion -> (CDT Agent) -> action for CDT is that the first thing CDT does when calculating a decision is break all connections to parents by means of do(...).

Take Jiro's example. The lesion makes people jump into volcanoes: 100% of them, and no-one else. Furthermore, I'll postulate that all of them are using the decision theory "check whether I have the lesion; if so, jump into a volcano; otherwise don't". Should you infer the causal graph lesion -> (EDT decision: jump?) -> die, with a perfect correlation between lesion and jump? (Hint: no, that would be stupid, since we're not using jump-based-on-lesion decision theory, we're using EDT.)

In programming, we also say "garbage in, garbage out". You are feeding EDT garbage input by giving it factually wrong joint probability distributions.
7IlyaShpitser11y
Ok, what about cases where there are multiple causal hypotheses that are observationally indistinguishable: a -> b -> c vs. a <- b <- c? Both models imply the same joint probability distribution p(a,b,c), with a single conditional independence (a independent of c given b), and they cannot be told apart without experimentation. That is, you cannot call p(a,b,c) "factually wrong", because the correct causal model implies it. But the wrong causal model implies it too! To figure out which is which requires causal information. You can give it to EDT and it will work -- but then it's not EDT anymore.

I can give you a graph which implies the same independences as my HAART example but has a completely different causal structure, and the procedure you propose here: http://lesswrong.com/lw/hwq/evidential_decision_theory_selection_bias_and/9d6f will give the right answer in one case and the wrong answer in another.

The point is, EDT lacks a rich enough input language to avoid getting garbage inputs in lots of standard cases. Or, more precisely, EDT lacks a rich enough input language to tell when input is garbage and when it isn't. This is why EDT is a terrible decision theory.
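For concreteness, here is the algebra behind "observationally indistinguishable" (an editorial addition, using only Bayes' rule): both chains factorize into the same joint distribution, so no amount of observational data can separate them, even though do(b) acts on them differently.

```latex
% Both chains imply the same joint, so data alone cannot distinguish them.
\begin{align*}
a \to b \to c:&\quad p(a,b,c) = p(a)\,p(b\mid a)\,p(c\mid b) = \frac{p(a,b)\,p(b,c)}{p(b)},\\
a \leftarrow b \leftarrow c:&\quad p(a,b,c) = p(c)\,p(b\mid c)\,p(a\mid b) = \frac{p(b,c)\,p(a,b)}{p(b)}.
\end{align*}
% Both encode exactly the independence a \perp c \mid b, yet an intervention
% do(b) affects a in the second graph and not in the first.
```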
-1Vaniver11y
I think there are a couple of confusions this sentence highlights. First, there are approaches to solving decision theory problems that don't use causal models. Part of what has made this conversation challenging is that there are several different ways to represent the world, and so even if CDT is the best / natural one, it needs to be distinguished from other approaches. EDT is not CDT in disguise; the two are distinct formulas / approaches.

Second, there are good reasons to modularize the components of the decision theory, so that you can treat learning a model from data separately from making a decision given a model. An algorithm to turn models into decisions should be able to operate on an arbitrary model, where it sees a -> b -> c as isomorphic to Drunk -> Fall -> Death. To tell an anecdote: when my decision analysis professor would teach that subject to petroleum engineers, he quickly learned not to use petroleum examples. Say something like "suppose the probability of striking oil by drilling a well here is 40%" and an engineer's hand will shoot up, asking "what kind of rock is it?". The kind of rock is useful for determining whether or not the probability is 40% or something else, but the question totally misses the point of what the professor is trying to teach. The primary example he uses is choosing a location for a party subject to the uncertainty of the weather.

I'm not sure how to interpret this sentence. The way EDT operates is to perform the following three steps for each possible action in turn:

1. Assume that I saw myself doing X.
2. Perform a Bayesian update on this new evidence.
3. Calculate and record my utility.

It then chooses the possible action which had the highest calculated utility. One interpretation is you saying that EDT doesn't make sense, but I'm not sure I agree with what seems to be the stated reason. It looks to me like you're saying "it doesn't make sense to assume that you do X until you know what you decide!", when
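To make the three-step loop above concrete, here is a minimal sketch in Python (an editorial illustration with made-up numbers, not code from either commenter): condition on the action, take the conditional expectation of utility, and pick the maximizer. On smoking-lesion-style historical data this reproduces the naive-EDT "don't smoke" verdict discussed earlier in the thread.

```python
def edt_choose(joint, utility, actions, outcomes):
    """Return the action with the highest conditional expected utility
    E[U | action], computed from a joint distribution P(action, outcome)."""
    best_action, best_eu = None, float("-inf")
    for a in actions:
        p_a = sum(joint[(a, o)] for o in outcomes)            # P(action = a)
        if p_a == 0:
            continue                                          # action never observed
        eu = sum(joint[(a, o)] / p_a * utility[(a, o)] for o in outcomes)
        if eu > best_eu:
            best_action, best_eu = a, eu
    return best_action

# Hypothetical smoking-lesion numbers: the unobserved lesion makes smoking and
# cancer correlated in the historical data, so the conditional expected utility
# of smoking looks bad even though smoking is preferred outcome-for-outcome.
joint = {
    ("smoke", "cancer"): 0.15, ("smoke", "no_cancer"): 0.15,
    ("abstain", "cancer"): 0.07, ("abstain", "no_cancer"): 0.63,
}
utility = {
    ("smoke", "cancer"): -90.0, ("smoke", "no_cancer"): 10.0,
    ("abstain", "cancer"): -100.0, ("abstain", "no_cancer"): 0.0,
}
print(edt_choose(joint, utility, ["smoke", "abstain"],
                 ["cancer", "no_cancer"]))   # -> "abstain" (the naive-EDT answer)
```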
1pengvado11y
Ideal Bayesian updates assume logical omniscience, right? Including knowledge about logical fact of what EDT would do for any given input. If you know that you are an EDT agent, and condition on all of your past observations and also on the fact that you do X, but X is not in fact what EDT does given those inputs, then as an ideal Bayesian you will know that you're conditioning on something impossible. More generally, what update you perform in step 2 depends on EDT's input-output map, thus making the definition circular. So, is EDT really underspecified? Or are you supposed to search for a fixed point of the circular definition, if there is one? Or does it use some method other than Bayes for the hypothetical update? Or does an EDT agent really break if it ever finds out its own decision algorithm? Or did I totally misunderstand?
0Vaniver11y
Note that step 1 is "Assume that I saw myself doing X," not "Assume that EDT outputs X as the optimal action." I believe that excludes any contradictions along those lines. Does logical omniscience preclude imagining counterfactual worlds?
1pengvado11y
If I already know "I am EDT", then "I saw myself doing X" does imply "EDT outputs X as the optimal action". Logical omniscience doesn't preclude imagining counterfactual worlds, but imagining counterfactual worlds is a different operation than performing Bayesian updates. CDT constructs counterfactuals by severing some of the edges in its causal graph and then assuming certain values for the nodes that no longer have any causes. TDT does too, except with a different graph and a different choice of edges to sever.
-2nshepperd11y
I don't know how I can fail to communicate so consistently. Yes, you can technically apply "EDT" to any causal model or (more generally) joint probability distribution containing an "EDT agent decision" node. But in practice this freedom is useless, because to derive an accurate model you generally need to take account of (a) the fact that the agent is using EDT and (b) any observations the agent does or does not make.

To be clear, the input EDT requires is a probabilistic model describing the EDT agent's situation (not describing historical data of "similar" situations). There are people here trying to argue against EDT by taking a model describing historical data (such as people following dumb decision theories jumping into volcanoes) and feeding this model directly into EDT. Which is simply wrong. A model that describes the historical behaviour of agents using some other decision theory does not in general accurately describe an EDT agent in the same situation. The fact that this egregious mistake looks perfectly normal is an artifact of the fact that CDT doesn't care about causal parents of the "CDT decision" node.
-2Vaniver11y
I suspect it's because what you are referring to as "EDT" is not what experts in the field use that technical term to mean. nshepperd-EDT is, as far as I can tell, the second half of CDT: take a causal model and use the do() operator to create the manipulated subgraph that would result from taking a possible action (as an intervention); determine the joint probability distribution from the manipulated subgraph; condition on observing that action with the joint probability distribution; and calculate the probabilistically-weighted mean utility of the possible outcomes. This is isomorphic to CDT, and so referring to it as EDT leads to confusion.
0nshepperd11y
Whatever. I give up.
0Jiro11y
Here's a modified version. Instead of a smoking lesion, there's a "jump into active volcano lesion". Furthermore, the correlation isn't as puny as for the smoking lesion. 100% of people with this lesion jump into active volcanoes and die, and nobody else does. Should you go jump into an active volcano? Using a decision theory to figure out what decision you should make assumes that you're capable of making a decision. "The lesion causes you to jump into an active volcano/smoke" and "you can choose whether to jump into an active volcano/smoke" are contradictory. Even "the lesion is correlated (at less than 100%) with jumping into an active volcano/smoking" and "you can choose whether to jump into an active volcano/smoke" are contradictory unless "is correlated with" involves some correlation for people who don't use decision theory and no correlation for people who do.
0Vaniver11y
Agreed. Doesn't this seem sort of realistic, actually? Decisions made with System 1 and System 2, to use Kahneman's language, might have entirely different underlying algorithms. (There is some philosophical trouble about how far we can push the idea of an 'intervention', but I think for human-scale decisions there is a meaningful difference between interventions and observations, such that CDT distinguishing between them is a feature.)

This maps onto an objection by proponents of EDT that the observational data might not be from people using EDT, and thus the correlation may disappear when EDT comes onto the stage. I think that objection proves too much: suppose all of our observational data on the health effects of jumping off cliffs comes from subjects who were not using EDT (suppose they were drunk). I don't see a reason inside the decision theory for differentiating between the effects of EDT on the correlation between jumping off the cliff and the effects of EDT on the correlation between smoking and having the lesion.

These two situations correspond to two different causal structures, Drunk -> Fall -> Death and Smoke <- Lesion -> Cancer, which could have the same joint probability distribution. The directionality of the arrow is something that CDT can make use of to tell that the two situations will respond differently to interventions at Drunk and Smoke: it is dangerous to be drunk around cliffs, but not to smoke (in this hypothetical world). EDT cannot make use of those arrows. It just has Drunk -- Fall -- Death and Smoke -- Lesion -- Cancer (where it knows that the correlations between Drunk and Death are mediated by Fall, and the correlations between Smoke and Cancer are mediated by Lesion). If we suppose that adding an EDT node might mean that the correlation between Smoke and Lesion (and thus Cancer) might be mediated by EDT, then we must also suppose that adding an EDT node might mean that the correlation between Drunk and Fall (and thus Death) mi
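A small numerical sketch of this asymmetry (the probabilities below are invented for illustration): on Smoke <- Lesion -> Cancer, conditioning on Smoke shifts the posterior over the lesion and hence the cancer probability, while intervening with do(Smoke) cuts the Lesion -> Smoke arrow and leaves the cancer risk untouched. On Drunk -> Fall -> Death the two computations would agree, since Drunk has no causal parents; that difference is exactly what the arrows carry and a bare joint distribution does not.

```python
# Hypothetical numbers for the graph Smoke <- Lesion -> Cancer.
P_LESION = 0.3
P_SMOKE_GIVEN = {True: 0.8, False: 0.2}    # P(smoke | lesion)
P_CANCER_GIVEN = {True: 0.9, False: 0.1}   # P(cancer | lesion); smoking itself is harmless here

def p_cancer_given_smoke():
    """Observational P(cancer | smoke): seeing smoking raises P(lesion)."""
    num = sum(p_l * P_SMOKE_GIVEN[l] * P_CANCER_GIVEN[l]
              for l, p_l in [(True, P_LESION), (False, 1 - P_LESION)])
    den = sum(p_l * P_SMOKE_GIVEN[l]
              for l, p_l in [(True, P_LESION), (False, 1 - P_LESION)])
    return num / den

def p_cancer_do_smoke():
    """Interventional P(cancer | do(smoke)): cutting the Lesion -> Smoke arrow
    leaves the lesion distribution, and hence the cancer risk, unchanged."""
    return sum(p_l * P_CANCER_GIVEN[l]
               for l, p_l in [(True, P_LESION), (False, 1 - P_LESION)])

print(p_cancer_given_smoke())  # ~0.61: smoking is "bad news"
print(p_cancer_do_smoke())     # 0.34:  but smoking does not cause cancer here
```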
0fractalman11y
There's another crucial difference regarding Newcomb's problem: there's always a chance you're in a simulation being run by Omega. I think if you can account for that, it should patch up most decent decision theories. I'm willing to be quite flexible in my understanding of which theories get patched up or not. This has the big advantage of not requiring non-linear causality in the model; it just gives a flow from the simulation to the "real" world.
-2shminux11y
Yes, reflective consistency tends to make things better.
0fractalman11y
Um... that wasn't sarcastic, was it? I just ran low on mental energy, so... Anyway, the downside is that you have to figure out how to dissolve all or most of the anthropic paradoxes when evaluating the simulation chance.

I suspect that it looks like some version of TDT / UDT, where TDT corresponds to something like trying to update on "being the kind of agent who outputs this action in this situation" and UDT corresponds to something more mysterious that I haven't been able to find a good explanation of yet, but I haven't thought about this much.

I can try to explain UDT a bit more if you say what you find mysterious about it. Or if you just want to think about it some more, keep in mind that UDT was designed to solve a bunch of problems at the same time, so if... (read more)

5Qiaochu_Yuan11y
Even more than an explanation, I would appreciate an explanation on the LessWrong Wiki because there currently isn't one! I've just reread through the LW posts I could find about UDT and I guess I should let them stew for awhile. I might also ask people at the current MIRI workshop for their thoughts in person. Only as an intuition pump; when it's time to get down to brass tacks I'm much happier to talk about a well-specified program than a poorly-specified human.
4Tyrrell_McAllister11y
I wrote a brief mathematical write-up of "bare bones" UDT1 and UDT1.1. The write-up describes the version that Wei Dai gave in his original posts. The write-up doesn't get into more advanced versions that invoke proof-length limits, try to "play chicken with the universe", or otherwise develop how the "mathematical intuition module" is supposed to work. Without trying to make too much of the analogy, I think that I would describe TDT as "non-naive" CDT, and UDT as "non-naive" EDT.
2Qiaochu_Yuan11y
In this writeup it really seems like all of the content is in how the mathematical intuition module works.
0Tyrrell_McAllister11y
This is not much of an exaggeration. Still, UDT basically solves many toy problems where we get to declare what the output of the MIM is ("Omega tells you that ...").
0Wei Dai11y
What kind of explanation are you looking for, though? The best explanation of UDT I can currently give, without some sort of additional information about where you find it confusing or how it should be improved, is in my first post about it, Towards a New Decision Theory. Ah, ok. Some people (such as Ilya Shpitser) do seem to be thinking mostly in terms of human application, so it seems a good idea to make the distinction explicit.
0moemurray11y
Are there any problems that (U|T)DT are designed to solve which are not one-shot problems? I apologize if this sounds like a stupid question, but I'm having some difficulty understanding all of the purported problems. Those I understand are one-shot problems like the Prisoner's Dilemma and the Newcomb Problem. Is there anything like the Iterated Prisoner's Dilemma for which (E|C)DT is inadequate, but (U|T)DT solves?

My intuition here is that it should be possible to see causal networks as arising naturally out of Bayesian considerations

You disagree, then, with Pearl's dictum that causality is a primitive concept, not reducible to any statistical construction?

The Smoker's Lesion problem is completely dissolved by using the causal information about the lesion. Without that information it cannot be. The correlations among Smoking, Lesion, and Cancer, on their own, allow of the alternative causal possibilities that Smoking causes Lesion, which causes Cancer, or that ... (read more)

2Qiaochu_Yuan11y
No. For example, AIXI is what I would regard as essentially a Bayesian agent, but it has a notion of causality because it has a notion of the environment taking its actions as an input. What I mean is more like wondering if AIXI would invent causal networks.

I think this is too narrow a way to describe the mistake that naive EDT is making. First, I hope you agree that even naive EDT wouldn't use statistical correlations in a population of agents completely unrelated to it (for example, agents who make their decisions randomly). But naive EDT may be in the position of existing in a world where it is the only naive EDT agent, although there may be many agents which are similar but not completely identical to it. How should it update in this situation? It might try to pick a population of agents sufficiently similar to itself, but then it's unclear how the fact that they're similar but not identical should be taken into account. AIXI, by contrast, would do something more sophisticated. Namely, its observations about the environment, including other agents similar to itself, would all update its model of the environment.

It seems like some variant of the tickle defense covers this. Once the other agent professes their inclination to smoke, that screens off any further information obtained by the other agent smoking or not smoking.

I guess AIXI could do something like start with a prior over possible models of how various actions, including smoking, could affect the other agent, update, then use the posterior distribution over models to predict the effect of interventions like smoking. But this requires a lot more data than is usually given in the smoking lesion problem.
7endoself11y
This looks like a symptom of AIXI's inability to self-model. Of course causality is going to look fundamental when you think you can magically intervene from outside the system. Do you share the intuition I mention in my other comment? I feel that the way this post reframes CDT and TDT as attempts to clarify bad self-modelling by naive EDT is very similar to the way I would reframe Pearl's positions as an attempt to clarify bad self-modelling by naive probability theory a la AIXI.
0Qiaochu_Yuan11y
So your intuition is that causality isn't fundamental but should fall out of correct self-modeling? I guess that's also my intuition, and I also don't know how to make that precise.
0endoself11y
I think this isn't actually compatible with the thought experiment. Our hypothetical agent knows that it is an agent. I can't yet formalize what I mean by this, but I think that it requires probability distributions corresponding to a certain causal structure, which would allow us to distinguish it from the other graphs. I don't know how to write down a probability distribution that contains myself as I write it, but it seems that such a thing would encode the interventional information about the system that I am interacting with on a purely probabilistic level. If this is correct, you wouldn't need a separate representation of causality to decide correctly.
0Richard_Kennaway11y
How about: an agent, relative to a given situation described by a causal graph G, is an entity that can perform do-actions on G.
0endoself11y
No, that's not what I meant at all. In what you said, the agent needs to be separate from the system in order to preform do-actions. I want an agent that knows it's an agent, so it has to have a self-model and, in particular, has to be inside the system that is modelled by our causal graph. One of the guiding heuristics in FAI theory is that an agent should model itself the same way it models other things. Roughly, the agent isn't actually tagged as different from nonagent things in reality, so any desired behaviour that depends on correctly making this distinction cannot be regulated with evidence as to whether it is actually making the distinction the way we want it to. A common example of this is the distinction between self-modification and creating a successor AI; an FAI should not need to distinguish these, since they're functionally the same. These sorts of ideas are why I want the agent to be modelled within its own causal graph.

UDT corresponds to something more mysterious

Don't update at all, but instead optimize yourself, viewed as a function from observations to actions, over all possible worlds.

There are tons of details, but it doesn't seem impossible to summarize in a sentence.

0Manfred11y
Or even simpler: find the optimal strategy, then do that.
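As a rough sketch of that one-sentence summary (an editorial illustration, not Wei Dai's formalism): enumerate whole policies, i.e. maps from observations to actions, score each against the prior over worlds, and commit to the best one. No Bayesian update on the current observation appears anywhere.

```python
# Bare-bones "optimize the observation->action map over all possible worlds".
from itertools import product

def udt_choose_policy(worlds, prior, observations, actions, utility):
    """Return the obs->action map with the highest prior-weighted utility.
    `utility(world, policy)` scores how a given policy plays out in a world."""
    best_policy, best_score = None, float("-inf")
    for choices in product(actions, repeat=len(observations)):
        policy = dict(zip(observations, choices))
        score = sum(prior[w] * utility(w, policy) for w in worlds)
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy
```

The agent then simply plays policy[current_observation]; the contrast with the EDT loop sketched earlier in the thread is that the argmax ranges over whole policies evaluated from the prior, not over actions evaluated from a posterior.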

How useful is it to clarify EDT until it becomes some decision theory with a different, previously determined name?

6Qiaochu_Yuan11y
It would be useful for my mental organization of how decision theory works. I don't know if it would be useful to anyone else though.
3Rob Bensinger11y
I don't much care what we call the thing, but exploring the logical relations between conventional EDT and other anti-CDT options could be extremely useful for persuading EDTists to adopt TDT, UDT, or some other novel theory. Framing matters even for academics.

Lots of interesting points, but on your final paragraph, is a theory that models the agent as part of its environment necessarily possible? Since the model is part of the agent, it would have to include the model as part of the model. I suppose that isn't an outright contradiction, as there are of course mathematical structures with proper parts equivalent to the whole, but does it seem likely that plausible models human agents can construct could be like that?

It seems to me that there are logical constraints on self-knowledge, related to the well-known ... (read more)

3nshepperd11y
Pretty sure humans normally model themselves as part of the environment. Seems a bit excessive to conjecture the impossibility of something humans do every day (even if "approximately") without particularly strong evidence. (Note that quines exist and people are able to understand that brains are made of neurons.)
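As a concrete instance of the "quines exist" point (an editorial aside): a program really can contain a complete description of itself.

```python
# A standard Python quine: running this prints its own source code exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```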
0Protagoras11y
"Approximately" would be important. A lot of the discussions of decision theory seem to be trying to come up with something logically perfect, some theory which in principle could always give the best answer (though obviously no human would ever implement any theory perfectly). It thus seems relevant whether in principle perfection is possible. If it isn't, then the evaluation of decision theories must somehow compare severity of flaws, rather than seeking flawlessness, and the discussions around here don't generally seem to go that way.. That being said, I'm not sure I agree here anyway. It seems that people's minds are sufficiently complicated and disunified that it is certainly possible for part of a person to model another part of the same person. I am not certain that self-modeling ever takes any other form; it is not obvious that it is ever possible for part of a person to successfully model that exact part.
2fractalman11y
I'm a bit tired at the moment, but my more or less cached reply is "use a coarse-grained simulation of yourself."
0Qiaochu_Yuan11y
Who knows? I think this is a really interesting question and hopefully some of the work going on in MIRI workshops will be relevant to answering it.

Approximately this point appears to have been made in the decision theory literature already, in Against causal decision theory by Huw Price.

What does a more sophisticated version of EDT, taking the above observations into account, look like? I don't know. I suspect that it looks like some version of TDT / UDT

When I suggested this in the post of mine that you referenced, benelloitt pointed out that it fails the transparent-box variant of Newcomb's problem, where you can see the contents of the boxes, and Omega makes his decision based on what he predicts you would do if you saw $1 million in box A. I don't see an obvious way to rescue EDT in that scenario.

0Qiaochu_Yuan11y
Again, I think it's difficult to claim that EDT does a particular thing in a particular scenario. An EDT agent who has a prior over causal networks with logical nodes describing the environment (including itself) and who updates this prior by acquiring information may approximate a TDT agent as it collects more information about the environment and its posterior becomes concentrated at the "true" causal network.
0AlexMennen11y
I'm not sure what you mean. Can you give an example of a probability distribution over causal networks that could be believed by an EDT agent in the transparent Newcomb's problem, such that the agent would one-box? Or at least give a plausibility argument for the existence of such a probability distribution?
2Qiaochu_Yuan11y
Maybe it's better not to talk about causal networks. Let's use an AIXI-like setup instead. The EDT agent starts with a Solomonoff prior over all computable functions that Omega could be. Part of the setup of Newcomb's problem is that Omega convinces you that it's a very good predictor, so some series of trials takes place in which the EDT agent updates its prior over what Omega is. The posterior will be concentrated at computable functions that are very good predictors. The EDT agent then reasons that if it two-boxes then Omega will predict this and it won't get a good payoff, so it one-boxes.
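A quick way to see the arithmetic behind this (assuming the standard Newcomb payoffs of $1,000,000 and $1,000, which the comment does not spell out): if the agent's posterior says its action and Omega's prediction match with probability theta, the conditional expected utilities favour one-boxing once theta exceeds roughly 0.5005.

```python
def newcomb_eus(theta, big=1_000_000, small=1_000):
    """Conditional expected payoffs given prediction accuracy theta."""
    eu_one_box = theta * big + (1 - theta) * 0
    eu_two_box = theta * small + (1 - theta) * (big + small)
    return eu_one_box, eu_two_box

for theta in (0.5, 0.501, 0.9, 0.99):
    one, two = newcomb_eus(theta)
    print(f"theta={theta}: one-box={one:,.0f}  two-box={two:,.0f}")
```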
2AlexMennen11y
But in the transparent-box variant, the EDT agent knows exactly how much money is in box A before making its decision, so its beliefs about the contents of box A do not change when it updates on its counterfactual decision.
0Qiaochu_Yuan11y
Ah. I guess we're not allowing EDT to make precommitments?
4endoself11y
If you want to change what you want, then you've decided that your first-order preferences were bad. EDT recognizing that it can replace itself with a better decision theory is not the same as it getting the answer right; the thing that makes the decision is not EDT anymore.
3AlexMennen11y
We don't usually let decision theories make precommitments. That's why CDT fails Newcomb's problem. I think CDT and EDT both converge to something like TDT/UDT when allowed to precommit as far in advance as desirable.

Upvoted for the ad absurdum examples. They highlight the essential bit of information (the common cause) that naive EDT throws out, just as naive CDT throws out the essential bit of information in Newcomb's (Omega is always right, so two-boxing is guaranteed to forfeit the $1,000,000).

As for the reference class: knowing the common cause with certainty means either that you have some metaphysical access to the inside of the smoking lesion problem setup, in which case EDT is the wrong tool to use, or that there have been enough experiments to assign high probability to this common cause, probably through randomized, placebo-controlled, double-blind studies, which would then form your reference class(es).

0Manfred11y
Hmm, I don't think that's quite the key point. For example, what about the absentminded driver problem? My attempt would be: "the process that decides where the money is is the same as the process that produces the choice you make -- you have just one independent decision to choose them with." (Cool trick: you get the correct answer to the absentminded driver problem (mostly), even post-updates, if you make the probability of being at the different intersections depend in the obvious way on your probability of continuing when maximizing expected utility.)
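For reference, here is the planning-stage calculation the parenthetical alludes to, under the standard Piccione-Rubinstein payoffs (assumed here, not given in the comment: exit at the first intersection 0, exit at the second 4, drive past both 1).

```python
def expected_payoff(p):
    """Expected payoff if the driver continues with probability p at any
    intersection (he cannot tell the two intersections apart)."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))   # ~0.667, ~1.333: continue with prob 2/3
```

Maximizing gives p = 2/3 with expected payoff 4/3; the "cool trick" in the comment is about recovering (mostly) this same answer even after updating on being at an intersection.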
0shminux11y
Can't say I follow... As for the absentminded driver, I thought reflective consistency takes care of it (you don't recalculate your probabilities on the fly in absence of any new information).
0Manfred11y
The absentminded driver learns something when they learn they are at an intersection. The bits of information they get from the environment enable them to distinguish between intersection and non-intersection situations, at least :P
1shminux11y
I don't believe the driver learns anything new at an intersection. She knows the map and the payout in advance, there is not a single bit of information at an intersection that requires any decision making not already done before the start. The absentmindedness part means that the calculation is repeated at each intersection, but it's the exact same calculation.
0Jiro11y

Having a physical condition affect whether one smokes, while also posing a problem which implies that you can choose whether to smoke, suggests a variation of the problem: there's a brain lesion which increases your lifespan, but makes you incapable of computing conditional probabilities (plus some other effect that is enough for there to be a genuine question of how you should act). How should you behave in this version?

A "true" EDT agent needs to update on all the evidence they've ever observed, and it's very unclear to me how to do this in practice.

The only way I know how to explore what this means is to use simple toy problems and be very careful about never ever using the concept "reference class." Oh, and writing down algorithms helps stop you from sneaking in extra information.

Example algorithm (basic EDT):

We want to pick one action out of a list of possible actions (provided to us as a1, a2...), which can lead to various outcomes that we hav... (read more)

0[anonymous]11y

I think I prefer the "throwing away expensive things" formulation to the "smoking lesion" formulation.

In the smoking lesion, it's not clear whether the lesion causes smoking by modifying your preferences or modifying your decision algorithm. But if it's the latter, asking "what would decision theory X do?" is pointless since people with the lesion aren't using decision theory X. And if it's due to preferences, you already know you have the lesion when you get to the part of the problem that says you'd prefer to smoke.

So actually it's like your throwing-things-away problem, except that you can look at your bank balance, only obfuscated behind a layer of free-will-like confusion.

I have made similar remarks in a comment here:

I would like to say that I agree with the arguments presented in this post, even though the OP eventually retracted them. I think the arguments for why EDT leads to the wrong decision are themselves wrong.

As mentioned by others, EY referred to this argument as the 'tickle defense' in section 9.1 of his TDT paper. I am not defending the advocates whom EY attacked, since (assuming EY hasn't misrepresented them) they have made some mistakes of their own; in particular, they argue for two-boxing.

I will start by t

... (read more)