I don't really think Newcomb's problem or any of its variations belongs here. Newcomb's problem is not a decision theory problem; the real difficulty is translating the underspecified English into a payoff matrix.
The ambiguity comes from the combination of the two claims, (a) Omega being a perfect predictor and (b) the subject being allowed to choose after Omega has made its prediction. Either these two are inconsistent, or they necessitate further unstated assumptions such as backwards causality.
First, let us assume (a) but not (b), which can be formulated as follows: Omega, a computer engineer, can read your code and test run it as many times as he would like in advance. You must submit (simple, unobfuscated) code which either chooses to one- or two-box. The contents of the boxes will depend on Omega's prediction of your code's choice. Do you submit one- or two-boxing code?
Second, let us assume (b) but not (a), which can be formulated as follows: Omega has subjected you to the Newcomb's setup, but because of a bug in its code, its prediction is based on someone else's choice than yours, which has no correlation with your choice whatsoever. Do you one- or two-box?
Both of the...
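For concreteness, here is a minimal sketch of the two reformulations above, assuming the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff one-boxing is predicted): submitting one-boxing code wins in the first variant, while two-boxing dominates in the second.

```python
import random

# Standard payoffs assumed: the transparent box always holds $1,000 and the
# opaque box holds $1,000,000 iff one-boxing was predicted.
def payoff(predicted_one_box, agent_one_boxes):
    opaque = 1_000_000 if predicted_one_box else 0
    return opaque if agent_one_boxes else opaque + 1_000

# Variant (a)-only: Omega reads the submitted code, so prediction == choice.
def variant_a(agent_one_boxes):
    return payoff(agent_one_boxes, agent_one_boxes)

# Variant (b)-only: the "prediction" tracks someone else's choice, which is
# uncorrelated with the agent's.
def variant_b(agent_one_boxes):
    someone_else = random.random() < 0.5
    return payoff(someone_else, agent_one_boxes)

print(variant_a(True), variant_a(False))   # 1000000 vs 1000: submit one-boxing code
trials = 100_000
for choice in (True, False):
    avg = sum(variant_b(choice) for _ in range(trials)) / trials
    print(choice, round(avg, -2))          # ~500000 vs ~501000: two-boxing dominates
```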
(Thanks for discussing!)
I will address your last paragraph first. The only significant difference between my original example and the proper Newcomb's paradox is that, in Newcomb's paradox, Omega is made a predictor by fiat and without explanation. This allows perfect prediction and choice to sneak into the same paragraph without obvious contradiction. It seems, if I try to make the mode of prediction transparent, you protest there is no choice being made.
From Omega's point of view, its Newcomb subjects are not making choices in any substantial sense; they are just predictably acting out their own personality. That is what allows Omega its predictive power. Choice is not something inherent to a system, but a feature of an outsider's model of a system, in much the same sense that randomness is not something inherent to Eeny, meeny, miny, moe, however much it might seem that way to children.
As for the rest of our disagreement, I am not sure why you insist that CDT must work with a misleading model. The standard formulation of Newcomb's paradox is inconsistent or underspecified. Here are some messy explanations for why, in list form:
Thanks for your post, it was a good summary of decision theory basics. Some corrections:
In the Allais paradox, choice (2A) should be "A 34% chance of 24,000$ and a 66% chance of nothing" (it currently says 27,000$); see the quick check after this list.
A typo in the title of 10.3.1: it should probably be "Why should degrees of belief follow the laws of probability?".
In 11.1.10. Prisoner's dilemma, the Resnik quotation mentions a twenty-five year term, yet the decision matrix has "20 years in jail" as an outcome.
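Regarding the first correction, a quick check (assuming the standard prizes, $24,000 and $27,000): with 24,000$ in (2A), the 2A/2B pair is just the 1A/1B pair run with probability 0.34, so expected utility has to rank both pairs the same way, whatever the utility function.

```python
# Sanity check, assuming the standard Allais prizes (1A: $24,000 for sure;
# 1B: 33/34 chance of $27,000). With 24,000$ in (2A), the difference
# EU(2A) - EU(2B) is exactly 0.34 * (EU(1A) - EU(1B)) for any utility u.
def eu_1a(u): return u(24_000)
def eu_1b(u): return (33 / 34) * u(27_000) + (1 / 34) * u(0)
def eu_2a(u): return 0.34 * u(24_000) + 0.66 * u(0)
def eu_2b(u): return 0.33 * u(27_000) + 0.67 * u(0)

for u in (lambda x: x, lambda x: x ** 0.5):            # two example utility functions
    print(eu_2a(u) - eu_2b(u), 0.34 * (eu_1a(u) - eu_1b(u)))  # the pairs match
```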
Easy explanation for the Ellsberg Paradox: we humans treat the urn as if it were subject to two kinds of uncertainty.
Somehow, we prefer to choose the "truly random" option. I think I can sense why: when it's "truly random", I know no potentially hostile agent messed with me. I mean, I could choose "red" in situation A, but then the organizers could have put in 60 blue balls just to mess with me!
Put simply, choosing "red" opens me up to external sentient influence, and therefore risks my being outsmarted. This particular risk aversion sounds like a pretty sound heuristic.
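For what it's worth, a quick check under an assumed version of the urn (30 red balls, 60 balls that are blue or yellow in unknown proportion): with a uniform prior over the split, betting on red and betting on blue have the same expected chance of winning, so the pull toward red reflects ambiguity aversion rather than expected payoff.

```python
# Assumed setup: 30 red balls plus 60 blue-or-yellow balls in unknown
# proportion, with a uniform prior over the number of blue balls.
p_red = 30 / 90
p_blue = sum(b / 90 for b in range(61)) / 61   # average over 0..60 blue balls
print(round(p_red, 4), round(p_blue, 4))       # 0.3333 and 0.3333: same expected odds
```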
What about mentioning the St. Petersburg paradox? This is a pretty striking issue for EUM, IMHO.
I'm finding the "counterfactual mugging" challenging. At this point, the rules of the game seem to be "design a thoughtless, inert, unthinking algorithm, such as CDT or EDT or BT or TDT, which will always give the winning answer." Fine. But for the entire range of Newcomb's problems, we are pitting this dumb-as-a-rock algo against a super-intelligence. By the time we get to the counterfactual mugging, we seem to have a scenario where Omega is saying "I will reward you only if you are a trusting rube who can be fleeced." N...
VNM utility isn't any of the types you listed. Ratios (a-b)/|c-d| of VNM utilities aren't meaningful, only ratios (a-b)/|c-b|.
I would *really* appreciate any help from lesswrong readers in helping me understand something really basic about the standard money pump argument for transitivity of preferences.
So clearly there can be situations, like in a game of Rock Scissors Paper (or games featuring non-transitive dice, like 'Efron's dice') where faced with pairwise choices it seems rational to have non-transitive preferences. And it could be that these non-transitive games/situations pay out money (or utility or whatever) if you make the right choice.
But so then if ...
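To make the pairwise point concrete, here is one standard set of non-transitive dice (not necessarily Efron's exact set): each die beats the next around the cycle with probability 5/9.

```python
from itertools import product

# One standard non-transitive set: A tends to beat B, B tends to beat C,
# and C tends to beat A, each with probability 5/9.
A = (2, 2, 4, 4, 9, 9)
B = (1, 1, 6, 6, 8, 8)
C = (3, 3, 5, 5, 7, 7)

def p_beats(x, y):
    wins = sum(a > b for a, b in product(x, y))
    return wins / (len(x) * len(y))

print(p_beats(A, B), p_beats(B, C), p_beats(C, A))  # 0.555... for each pair
```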
Presentation of Newcomb's problem in section 11.1.1. seems faulty. What if the human flips a coin to determine whether to one-box or two-box? (or any suitable source of entropy that is beyond the predictive powers of the super-intelligence.) What happens then?
This point is danced around in the next section, but never stated outright: EDT provides exactly the right answer if humans are fully deterministic and predictable by the superintelligence. CDT gives the right answer if the human employs an unpredictable entropy source in their decision-making. It is the entropy source that makes the decision causally independent of the acts of the super-intelligence.
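A toy simulation of that last point (assuming the standard payoffs): once the act comes from an entropy source the predictor cannot see, prediction and act are uncorrelated, and the two-box outcomes average exactly $1,000 more than the one-box outcomes, which is the dominance reasoning behind CDT's answer.

```python
import random

# When the agent's act is an unpredictable coin flip, the prediction carries
# no information about it; conditioning on the act, two-boxing then averages
# exactly $1,000 more than one-boxing.
trials = 200_000
payoffs = {"one-box": [], "two-box": []}
for _ in range(trials):
    prediction = random.choice(["one-box", "two-box"])  # uncorrelated with the act
    act = random.choice(["one-box", "two-box"])         # the agent's coin flip
    opaque = 1_000_000 if prediction == "one-box" else 0
    payoffs[act].append(opaque if act == "one-box" else opaque + 1_000)

for act, vals in payoffs.items():
    print(act, round(sum(vals) / len(vals), -3))   # ~500,000 vs ~501,000
```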
Small correction: Arntzenius's name has a z (that paper is great, by the way; I sent it to Yudkowsky a while ago).
There is a compliment true of both this post and of that paper: they are both very well condensed. Congratulations Luke and crazy88!
In the VNM system, utility is defined via preferences over acts rather than preferences over outcomes. To many, it seems odd to define utility with respect to preferences over risky acts. After all, even an agent who thinks she lives in a world where every act is certain to result in a known outcome could have preferences for some outcomes over others. Many would argue that utility should be defined in relation to preferences over outcomes or world-states, and that's not what the VNM system does. (Also see section 9.)
It's misleading to associate acts wi...
In this case, even if an extremely low value is set for L, it seems that paying this amount to play the game is unreasonable. After all, as Peterson notes, about nine times out of ten an agent that plays this game will win no more than 8 · 10^-100 utility.
It seems there's an error here. Should it be "In this case, even if an extremely high value is set for L, it seems that paying a lot to play the game is unreasonable."?
Typo:
Usually, it is argued that each of the axioms are pragmatically justified because an agent which violates the axioms can face situations in which they are guaranteed end up worse off (from their own perspective).
Should read:
guaranteed to end up worse off
Does the horizontal axis of the decision tree in section 3 represent time? If so, I'd advocate smearing those red triangles out over the whole history of actions and events. Even though, in the particular example, it's unlikely that the agent cares about having been insured as such, apart from the monetary payoffs, in the general case agents care about the whole history. I think that forgetting this point sometimes leads to misapplications of decision theory.
When reading about Transparent Newcomb's problem: Isn't this perfectly general? Suppose Omega says: I give everyone who subscribes to decision theory A $1000, and give those who subscribe to other decision theories nothing. Clearly everyone who subscribes to decision theory A "wins".
It seems that if one lives in the world with many such Omegas, and subscribing to decision theory A (vs subscribing to decision theory B) would otherwise lead to losing at most, say, $100 per day between two successive encounters with such Omegas, then one would wi...
Maybe worth noting that there's recommended reading on decision theory on the "Best textbooks on every subject" post.
On decision theory, lukeprog recommends Peterson's An Introduction to Decision Theory over Resnik's Choices and Luce & Raiffa's Games and Decisions.
In this equation, V(A & O) represents the value to the agent of the combination of an act and an outcome. So this is the utility that the agent will receive if they carry out a certain act and a certain outcome occurs. Further, Pr_A(O) represents the probability of each outcome occurring on the supposition that the agent carries out a certain act. It is in terms of this probability that CDT and EDT differ. EDT uses the conditional probability, Pr(O|A), while CDT uses the probability of subjunctive conditionals, Pr(A □→ O).
Please, don't use the same letters ...
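For concreteness, here is a sketch of how the two probabilities in the quoted passage come apart on Newcomb's problem, assuming a 99%-accurate predictor and the standard payoffs.

```python
# V(A & O): money received for act A when the opaque box is full or empty.
def value(act, box_full):
    return (1_000_000 if box_full else 0) + (0 if act == "one-box" else 1_000)

accuracy = 0.99
pr_full_edt = {"one-box": accuracy, "two-box": 1 - accuracy}  # Pr(O | A)
pr_full_cdt = 0.5   # Pr(A box-arrow O): the box is already full or empty,
                    # so the credence is the same whichever act is chosen

for act in ("one-box", "two-box"):
    edt = pr_full_edt[act] * value(act, True) + (1 - pr_full_edt[act]) * value(act, False)
    cdt = pr_full_cdt * value(act, True) + (1 - pr_full_cdt) * value(act, False)
    print(act, "EDT:", round(edt), "CDT:", round(cdt))
# EDT ranks one-boxing higher; CDT ranks two-boxing higher by exactly $1,000.
```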
...The second problem with using the law of large numbers to justify EUM has to do with a mathematical theorem known as gambler's ruin. Imagine that you and I flip a fair coin, and I pay you $1 every time it comes up heads and you pay me $1 every time it comes up tails. We both start with $100. If we flip the coin enough times, one of us will face a situation in which the sequence of heads or tails is longer than we can afford. If a long-enough sequence of heads comes up, I'll run out of $1 bills with which to pay you. If a long-enough sequence of tails comes
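For what it's worth, a quick simulation of the game described in the quoted passage (two players, $100 each, $1 per fair flip): someone is always ruined eventually, even though every individual bet has expected value zero.

```python
import random

# Two players start with $100 each and bet $1 on a fair coin flip until one
# of them is ruined; for a fair game the expected number of flips is 100*100.
def flips_until_ruin(bankroll=100):
    mine, yours, flips = bankroll, bankroll, 0
    while mine > 0 and yours > 0:
        flips += 1
        if random.random() < 0.5:
            mine, yours = mine - 1, yours + 1
        else:
            mine, yours = mine + 1, yours - 1
    return flips

samples = [flips_until_ruin() for _ in range(200)]
print(min(samples), round(sum(samples) / len(samples)), max(samples))
# the average is around 10,000 flips, but ruin always arrives
```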
I think the example given to show the irrationality of leximin in certain situations doesn't do a good job of distinguishing its failings from maximin. To usefully illustrate the difference between the two, I believe another state is required with even worse outcomes for both acts (e.g. $0). This way the worst outcomes for both acts would be equal, and so the second-worst outcomes (a1: $1, a2: $1.01) would then be compared under the leximin strategy, leading to the choice of a2 as the best act again, with the acknowledgment that you miss out on the opportunity to get $10,001.01.
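A sketch of that comparison (the added $0 state is the one suggested above; the $1.02 best outcome for a2 is an assumed filler, since only the $1 / $1.01 and $10,001.01 figures are given):

```python
# maximin looks only at each act's worst outcome; leximin breaks ties by
# moving to the second-worst outcome, and so on.
def leximin(acts):
    return max(acts, key=lambda a: sorted(acts[a]))  # compare worst-first profiles

acts = {"a1": [0.00, 1.00, 10_001.01],   # $0 state added to both acts
        "a2": [0.00, 1.01, 1.02]}        # 1.02 is an assumed filler value

print(min(acts["a1"]) == min(acts["a2"]))  # True: maximin alone cannot separate them
print(leximin(acts))                       # 'a2': leximin forgoes the $10,001.01 outcome
```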
In the last chapter of his book "Utility Theory for Decision Making," Peter Fishburn published a concise rendering of Leonard Savage's proof that "rational" preferences over events implied that one behaved "as if" he (or she) was obeying Expected Utility Theory. He furthermore proved that following Savage's axioms implied that your utility function is bounded (he attributes this extension of the proof, in its essence, to Savage). So Subjective Expected Utility Theory has an answer to the St. Petersburg Paradox "built in" to its axioms. That seems like a point well worth mentioning in this article.
The image of Ellsberg's Paradox has the picture of the Yellow/Blue bet replaced with a picture of a Yellow/Red bet. Having looked at the picture I was about to claim that it was always rational to take the R/B bet over Y/R before I read the actual description.
Isn't there a typo in "Experiments have shown that many people prefer (1A) to (1B) and (2B) to (2A)." ? Shouldn't it be "(2A) to (2B)" ?
Edit : hrm, no, in fact it's like http://lesswrong.com/lw/gu1/decision_theory_faq/8jav said : it should be 24 000$ instead of 27 000$ in option A, or else it makes no sense.
Thus, the expected utility (EU) of choice A is, for this decision maker, (1)(1000) = 1000. Meanwhile, the EU of choice B is (0.5)(1500) + (0.5)(0) = 750. In this case, the expected utility of choice B is greater than that of choice A, even though choice B has a greater expected monetary value.
Choice A at 1000 is still greater than choice B at 750, so the conclusion should be that the expected utility of choice A is greater than that of choice B.
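For reference, the arithmetic in the quoted passage itself checks out; it is only the comparison sentence that is reversed.

```python
eu_a = 1.0 * 1000                 # choice A: utility 1000 for certain
eu_b = 0.5 * 1500 + 0.5 * 0       # choice B: 50% chance of utility 1500
print(eu_a, eu_b, eu_a > eu_b)    # 1000.0 750.0 True
```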
Minor error: In the prisoner's dilemma example, the decision matrix has twenty years for if you cooperate and your partner defects, while the text quoted right above the matrix claims that that amount is twenty five years.
I find it helpful to use the term "security level" to understand maximin/leximin and "hope level" to understand maximax. "Security level" is the worst case scenario, and under maximin/leximin we want to maximize it. "Hope level" is the best case scenario, and under maximax, we want to maximize it.
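A minimal sketch of that terminology, with made-up payoffs:

```python
# "Security level" = worst case; "hope level" = best case. Maximin maximizes
# the former, maximax the latter.
def security_level(outcomes): return min(outcomes)
def hope_level(outcomes):     return max(outcomes)

acts = {"a1": [1, 100], "a2": [5, 10]}   # made-up payoffs
print(max(acts, key=lambda a: security_level(acts[a])))  # maximin picks 'a2'
print(max(acts, key=lambda a: hope_level(acts[a])))      # maximax picks 'a1'
```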
Concerning the transitivity axiom, what about rational choices in situations involving intransitive preference cycles?
(Well, sort of. The minimax and maximax principles require only that we measure value on an ordinal scale, whereas the optimism-pessimism rule requires that we measure value on an interval scale.)
I'm using this as an introduction to decision theory so I might be wrong, and I've read that 'maximin' and 'minimax' do have different meanings in game theory, but you exclusively use the term 'maximin' up to a certain point and then mention a 'minimax principle' once, so I can only imagine that you meant to write 'maximin principle.' It confused me. It's proba...
...Another objection to the VNM approach (and to expected utility approaches generally), the St. Petersburg paradox, draws on the possibility of infinite utilities. The St. Petersburg paradox is based around a game where a fair coin is tossed until it lands heads up. At this point, the agent receives a prize worth 2^n utility, where n is equal to the number of times the coin was tossed during the game. The so-called paradox occurs because the expected utility of choosing to play this game is infinite and so, according to a standard expected utility approach,
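To see the divergence concretely: each term Pr(first heads on toss n) · 2^n of the expected-utility sum contributes exactly 1, so the partial sums grow without bound, even though a simulated player almost always wins a modest prize. A quick sketch:

```python
import random

# Each term (1/2)^n * 2^n of the expected-utility sum equals 1, so the
# partial sums grow without bound.
def partial_eu(terms):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1))

print(partial_eu(10), partial_eu(100))   # 10.0 and 100.0: no finite limit

def play():
    n = 1
    while random.random() < 0.5:   # keep tossing until the coin lands heads
        n += 1
    return 2 ** n

wins = sorted(play() for _ in range(100_000))
print(wins[len(wins) // 2])              # the median prize is just 2
```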
In section 8.1, your example of the gambler's ruin postulates that both agents have the same starting resources, but this is exactly the case in which the gambler's ruin doesn't apply. That might be worth changing.
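For reference (a standard result, not specific to the FAQ's example): in a fair game at $1 per flip, a player who starts with $a facing an opponent who starts with $b is eventually ruined with probability b / (a + b). With equal $100 stakes each side faces ruin with probability 1/2, whereas the lopsided effect usually meant by "gambler's ruin" requires unequal bankrolls or a biased game.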
- Can decisions under ignorance be transformed into decisions under uncertainty?
I'd add a comment on Jaynes' solution for determining ignorance priors in terms of transformation groups.
I'd say that there's no such thing as an "ignorance" prior - priors are set by information. Setting a prior by symmetry or the more general transformation group is an assertion of information.
There are numerous typos throughout the thing; someone needs to re-read it. The math in "8.6.3. The Allais paradox" is all wrong: as written, option 2A is not actually 34% of 1A and 66% of nothing, etc.
This may not be the best place for this question, but it's something I've been wondering for a while: how does causal decision theory fail us humans in the real world, here and now?
There is one rather annoying subtext that recurs throughout the FAQ: the very casual and carefree use of the words "rational" and "irrational", with the rather flawed idea that following some axiomatic system (e.g. VNM) and Bayes is "rational" and not doing so is "irrational". I think this is a disservice, and, what's more, it fails to look into the effects of intelligence, experience, training and emotion. The Allais paradox scratches the surface, as do various psych experiments. But ...
The real question is "wh...
The conclusion to section "11.1.3. Medical Newcomb problems" begs a question which remains unanswered: -- "So just as CDT “loses” on Newcomb’s problem, EDT will "lose” on Medical Newcomb problems (if the tickle defense fails) or will join CDT and "lose" on Newcomb’s Problem itself (if the tickle defense succeeds)."
If I was designing a self-driving car and had to provide an algorithm for what to do during an emergency, I may choose to hard-code CDT or EDT into the system, as seems appropriate. However, as an intelligen...
But note burger-choosing Jane (6.1) is still irrational - for she has discounted the much stronger preference of a cow not to be harmed. Rationality entails overcoming egocentric bias - and ethnocentric and anthropocentric bias - and adopting a God's eye point-of-view that impartially gives weight to all possible first-person perspectives.
When we say 'rationality', we mean instrumental rationality; getting what you want. Elsewhere, we also refer to epistemic rationality, which is believing true things. In neither case do we say anything about what you should want.
It might be a good thing to care about cows, but it's not rationality as we understand the word. Good you bring this up though, as I can easily imagine others being confused.
See also What Do We Mean by Rationality
Isn't the giant elephant in this room the whole issue of moral realism? I'm a moral cognitivist but not a moral realist. I have laid out what it means for my moral beliefs to be true - the combination of physical fact and logical function against which my moral judgments are being compared. This gives my moral beliefs truth value. And having laid this out, it becomes perfectly obvious that it's possible to build powerful optimizers who are not motivated by what I call moral truths; they are maximizing something other than morality, like paperclips. They will also meta-maximize something other than morality if you ask them to choose between possible utility functions, and will quite predictably go on picking the utility function "maximize paperclips". Just as I correctly know it is better to be moral than to be paperclippy, they accurately evaluate that it is more paperclippy to maximize paperclips than morality. They know damn well that they're making you unhappy and violating your strong preferences by doing so. It's just that all this talk about the preferences that feel so intrinsically motivating to you, is itself of no interest to them because you haven't got...
I'm not sure this taxonomy is helpful from David Pearce's perspective. David Pearce's position is that there are universally motivating facts - facts whose truth, once known, is compelling for every possible sort of mind. This reifies his observation that the desire for happiness feels really, actually compelling to him and this compellingness seems innate to qualia, so anyone who truly knew the facts about the quale would also know that compelling sense and act accordingly. This may not correspond exactly to what SEP says under moral realism and let me know if there's a standard term, but realism seems to describe the Pearcean (or Eliezer circa 1996) feeling about the subject - that happiness is really intrinsically preferable, that this is truth and not opinion.
From my perspective this is a confusion which I claim to fully and exactly understand, which licenses my definite rejection of the hypothesis. (The dawning of this understanding did in fact cause my definite rejection of the hypothesis in 2003.) The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something, so if you try to use your empathy to imagine another...
Eliezer, in my view, we don't need to assume meta-ethical realism to recognise that it's irrational - both epistemically irrational and instrumentally irrational - arbitrarily to privilege a weak preference over a strong preference.
You need some stage at which a fact grabs control of a mind, regardless of any other properties of its construction, and causes its motor output to have a certain value.
Paperclippers? Perhaps let us consider the mechanism by which paperclips can take on supreme value. We understand, in principle at least, how to make paperclips seem intrinsically supremely valuable to biological minds - more valuable than the prospect of happiness in the abstract. [“Happiness is a very pretty thing to feel, but very dry to talk about.” - Jeremy Bentham]. Experimentally, perhaps we might use imprinting (recall Lorenz and his goslings), microelectrodes implanted in the reward and punishment centres, behavioural conditioning and ideological indoctrination - and perhaps the promise of 72 virgins in the afterlife for the faithful paperclipper. The result: a fanatical paperclip fetishist!
As Sarokrae observes, this isn't the idea at all. We construct a paperclip maxim...
Anyone who isn't profoundly disturbed by torture, for instance, or by agony so bad one would end the world to stop the horror, simply hasn't understood it.
Similarly, anyone who doesn't want to maximize paperclips simply hasn't understood the ineffable appeal of paperclipping.
"Aargh!" he said out loud in real life. David, are you disagreeing with me here or do you honestly not understand what I'm getting at?
The whole idea is that an agent can fully understand, model, predict, manipulate, and derive all relevant facts that could affect which actions lead to how many paperclips, regarding happiness, without having a pleasure-pain architecture. I don't have a paperclipping architecture but this doesn't stop me from modeling and understanding paperclipping architectures.
The paperclipper can model and predict an agent (you) that (a) operates on a pleasure-pain architecture and (b) has a self-model consisting of introspectively opaque elements which actually contain internally coded instructions for your brain to experience or want certain things (e.g. happiness). The paperclipper can fully understand how your workspace is modeling happiness and know exactly how much you would want happiness and why you write papers about the apparent ineffability of happiness, without being happy itself or at all sympathetic toward you. It will experience no future surprise on comprehending these things, because it already knows them. It doesn't have any objec...
As Kawoomba colorfully pointed out, clippy's subroutines simulating humans suffering may be fully sentient. However, unless those subroutines have privileged access to clippy's motor outputs or planning algorithms, clippy will go on acting as if he didn't care about suffering. He may even understand that inflicting suffering is morally wrong--but this will not make him avoid suffering, any more than a thrown rock with "suffering is wrong" painted on it will change direction to avoid someone's head. Moral wrongness is simply not a consideration that has the power to move a paperclip maximizer.
To slightly expand, if an intelligence is not prohibited from the following epistemic feats:
1) Be good at predicting which hypothetical actions would lead to how many paperclips, as a question of pure fact.
2) Be good at searching out possible plans which would lead to unusually high numbers of paperclips - answering the purely epistemic search question, "What sort of plan would lead to many paperclips existing, if someone followed it?"
3) Be good at predicting and searching out which possible minds would, if constructed, be good at (1), (2), and (3) as purely epistemic feats.
Then we can hook up this epistemic capability to a motor output and away it goes. You cannot defeat the Orthogonality Thesis without prohibiting superintelligences from accomplishing 1-3 as purely epistemic feats. They must be unable to know the answers to these questions of fact.
...microelectrodes implanted in the reward and punishment centres, behavioural conditioning and ideological indoctrination - and perhaps the promise of 72 virgins in the afterlife for the faithful paperclipper. The result: a fanatical paperclip fetishist!
Have to point out here that the above is emphatically not what Eliezer talks about when he says "maximise paperclips". Your examples above contain in themselves the actual, more intrisics values to which paperclips would be merely instrumental: feelings in your reward and punishment centres, virgins in the afterlife, and so on. You can re-wire the electrodes, or change the promise of what happens in the afterlife, and watch as the paperclip preference fades away.
What Eliezer is talking about is a being for whom "pleasure" and "pain" are not concepts. Paperclips ARE the reward. Lack of paperclips IS the punishment. Even if pleasure and pain are concepts, they are merely instrumental to obtaining more paperclips. Pleasure would be good because it results in paperclips, not vice versa. If you reverse the electrodes so that they stimulate the pain centre when they find paperclips, and the pleasure centr...