All of scmbradley's Comments + Replies

What Is Signaling, Really?

Signals by Brian Skyrms is a great book in this area. It shows how signalling can evolve in even quite simple set-ups.
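(A toy version of the simplest set-up in the book: two states, two signals, two acts, and Roth-Erev reinforcement learners on both sides. The particular weights and trial counts are mine, not Skyrms's; a signalling convention reliably emerges.)

```python
import random

# A minimal Lewis signalling game (2 states, 2 signals, 2 acts), with
# sender and receiver both learning by simple Roth-Erev reinforcement.
sender = {state: {"A": 1.0, "B": 1.0} for state in (0, 1)}      # state -> signal weights
receiver = {signal: {0: 1.0, 1: 1.0} for signal in ("A", "B")}  # signal -> act weights

def draw(weights):
    """Sample a key with probability proportional to its weight."""
    r = random.uniform(0, sum(weights.values()))
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key  # guard against floating-point leftovers

successes = 0
for t in range(1, 100_001):
    state = random.randint(0, 1)
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:  # coordination succeeded: reinforce both choices
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0
        successes += 1
    if t % 20_000 == 0:
        print(f"after {t} plays, cumulative success rate = {successes / t:.3f}")
```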

5 gwern 10y: Also available for a short time at http://libgen.info/view.php?id=416947
Decision Theories: A Less Wrong Primer

So I agree. It's lucky I've never met a game theorist in the desert.

Less flippantly: the logic is pretty much the same, yes. But I don't see that as a problem for the point I'm making, which is that the perfect predictor isn't a thought experiment we should worry about.

Decision Theories: A Less Wrong Primer

Elsewhere on this comment thread I've discussed why I think those "rules" are not interesting. Basically, because they're impossible to implement.

Decision Theories: A Less Wrong Primer

According to what rules? And anyway I have preferences for all kinds of impossible things. For example, I prefer cooperating with copies of myself, even though I know it would never happen, since we'd both accept the dominance reasoning and defect.

2 APMason 10y: I think he meant according to the rules of the thought experiments. In Newcomb's problem, Omega predicts what you do. Whatever you choose to do, that's what Omega predicted you would choose to do. You cannot choose to do something that Omega wouldn't predict - it's impossible. There is no such thing as "the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box".
Decision Theories: A Less Wrong Primer

So these alternative decision theories have relations of dependence going back in time? Are they sort of counterfactual dependences like "If I were to one-box, Omega would have put the million in the box"? That just sounds like the Evidentialist "news value" account. So it must be some other kind of relation of dependence going backwards in time that rules out the dominance reasoning. I guess I need "Other Decision Theories: A Less Wrong Primer".

Decision Theories: A Less Wrong Primer

See my comments and orthonormal's on the PD on this post for my view of that.

The point I'm struggling to express is that I don't think we should worry about the thought experiment, because I have the feeling that Omega is somehow impossible. The suggestion is that Newcomb's problem makes a problem with CDT clearer. But I argue that Newcomb's problem creates the problem. The flaw is not with the decision theory, but with the concept of such a predictor. So you can't use CDT's "failure" in this circumstance as evidence that CDT is wrong.

Here's a re... (read more)

0 MrMind 10y: That line of reasoning is, though, available to Smith as well, so he can choose to one-box because he knows that Omega is a perfect predictor. You're right to say that the interplay between Omega's prediction of Smith and Smith's prediction of Omega is in a meta-stable state, BUT: Smith has to decide; he is going to make a decision, and so whatever algorithm he implements, if it ever goes down this line of meta-stable reasoning, must have a way to get out and choose something, even if it's just bounded computational power (or the limit step of computation in Hamkins' infinite Turing machine). But since Omega is a perfect predictor, it will know that and choose accordingly. I have the feeling that Omega's existence is something like an axiom: you can refuse or accept it, and both stances are coherent.
0 Dmytry 10y: Well, I can implement Omega by scanning your brain and simulating you. The other 'non-implementations' of Omega, though, are IMO best ignored entirely. You can't really blame a decision theory for failure if there's no sensible model of the world for it to use. My decision theory, personally, allows me to ignore the unknown and edit my expected utility formula in an ad-hoc way if I'm sufficiently convinced that Omega will work as described. I think that's practically useful, because effective heuristics often have to be invented on the spot without a sufficient model of the world. Edit: albeit, if I was convinced that Omega works as described, I'd be convinced that it has scanned my brain and is emulating my decision procedure, or is using time travel, or is deciding randomly then destroying the universes where it was wrong... with more time I can probably come up with other implementations; the common thing about the implementations, though, is that I should one-box.
1 [anonymous] 10y: By that logic, you can never win in Kavka's toxin/Parfit's hitchhiker scenario.
Decision Theories: A Less Wrong Primer

Aha. So when agents' actions are probabilistically independent, only then does the dominance reasoning kick in?

So the causal decision theorist will say that the dominance reasoning is applicable whenever the agents' actions are causally independent. So do these other decision theories deny this? That is, do they claim that the dominance reasoning can be unsound even when my choice doesn't causally impact the choice of the other?

1 orthonormal 10y: That's one valid way of looking at the distinction. CDT allows the causal link from its current move in chess to its opponent's next move, so it doesn't view the two as independent. In Newcomb's Problem, traditional CDT doesn't allow a causal link from its decision now to Omega's action before, so it applies the independence assumption to conclude that two-boxing is the dominant strategy. Ditto with playing PD against its clone. (Come to think of it, it's basically a Markov chain formalism.)
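(To put numbers on the independence point, a toy calculation with the standard Newcomb payoffs: two-boxing dominates within each prediction, but once the prediction is correlated with the choice, the expected values come apart. The accuracy figure q is illustrative, not anyone's claim.)

```python
payoff = {  # (choice, prediction) -> dollars
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

# Dominance: whatever Omega predicted, two-boxing pays $1,000 more.
for pred in ("one-box", "two-box"):
    assert payoff[("two-box", pred)] == payoff[("one-box", pred)] + 1_000

# But if the prediction matches the choice with probability q,
# the expected values favour one-boxing.
q = 0.99
ev_one = q * payoff[("one-box", "one-box")] + (1 - q) * payoff[("one-box", "two-box")]
ev_two = q * payoff[("two-box", "two-box")] + (1 - q) * payoff[("two-box", "one-box")]
print(ev_one, ev_two)  # 990000.0 vs 11000.0
```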
Schelling fences on slippery slopes

Given the discussion, strictly speaking the pill reduces Gandhi's reluctance to murder by 1 percentage point, not 1%.
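(Worked numbers, mine rather than the post's:)

```latex
% Start at 95% reluctance.
%   95\% - 1 \text{ percentage point} = 94\%
%   95\% \times (1 - 0.01) = 94.05\%
% The pill as discussed does the former, not the latter.
```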

7 Cyan 10y: Glad I'm not the only person who noticed. Pedantry for the win! (Upvoted.)
Decision Theories: A Less Wrong Primer

Wouldn't you like to be the type of agent who cooperates with near-copies of yourself? Wouldn't you like to be the type of agent who one-boxes?

Yes, but it would be strictly better (for me) to be the kind of agent who defects against near-copies of myself when they co-operate in one-shot games. It would be better to be the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box.

But the point is really that I don't see it as the job of an alternative decision theory to get "the right" answers to these sorts of questions.

0 Jonathan_Graehl 10y: The larger point makes sense. Those two things you prefer are impossible according to the rules, though.
Decision Theories: A Less Wrong Primer

we might ask whether it is preferable to be the type of person who two boxes or the type of person who one boxes. As it turns out it seems to be more preferable to one-box

No. What is preferable is to be the kind of person Omega will predict will one-box, and then actually two-box. As long as you "trick" Omega, you get strictly more money. But I guess your point is you can't trick Omega this way.

Which brings me back to whether Omega is feasible. I just don't share the intuition that Omega could have the sort of predictive capacity required of it.

0 Estarlio 10y: Well, I guess my response to that would be that it's a thought experiment. Omega's really just an extreme - hypothetical - case of a powerful predictor, that makes problems in CDT more easily seen by amplifying them. If we were to talk about the prisoner's dilemma, we could easily have roughly the same underlying discussion.
Decision Theories: A Less Wrong Primer

There are a couple of things I find odd about this. First, it seems to be taken for granted that one-boxing is obviously better than two-boxing, but I'm not sure that's right. J.M. Joyce has an argument (in his Foundations of Causal Decision Theory) that is supposed to convince you that two-boxing is the right solution. Importantly, he accepts that you might still wish you weren't a CDT agent (so that Omega predicted you would one-box). But, he says, in either case, once the boxes are in front of you, whether you are a CDT agent or an EDT agent, you should two-box! The domin... (read more)

3 Estarlio 10y: Your actions have been determined in part by the bet that Omega has made with you - I do not see how that is supposed to make them unpredictable any more than adding any other variable would do so. Remember: you only appear to have free will from within the algorithm; you may decide to think of something you'd never otherwise think about, but Omega is advanced enough to model you down to the most basic level - it can predict your more complex behaviours based upon the combination of far simpler rules. You cannot necessarily just decide to think of something random, which would be required in order to be unpredictable.

Similarly, the whole question of whether you should choose to two-box or one-box is a bit iffy. Strictly speaking there's no SHOULD about it. You will one-box or you will two-box. The question phrased as a should question - as a choice - is meaningless unless you're treating choice as a high-level abstraction of lower-level rules; and if you do that, then the difficulty disappears - just as you don't ask a rock whether it should or shouldn't crush someone when it falls down a hill.

Meaningfully, we might ask whether it is preferable to be the type of person who two-boxes or the type of person who one-boxes. As it turns out it seems to be preferable to one-box and make stinking great piles of dosh. And as it turns out I'm the sort of person who, holding a desire for filthy lucre, will do so.

It's really difficult to sidestep your intuitions - your illusion that you actually get a free choice here. And I think the phrasing of the problem and its answers themselves have a lot to do with that. I think if you think that people get a choice - and the mechanisms of Omega's prediction hinge upon you being strongly determined - then the question just ceases to make sense. And you've got to jettison one of the two: either Omega's prediction ability or your ability to make a choice in the sense conventionally meant.
1 orthonormal 10y: It's not always cooperating - that would be dumb. The claim is that there can be improvements on what a CDT algorithm can achieve: TDT or UDT still defects against an opponent that always defects or always cooperates, but achieves (C,C) in some situations where CDT gets (D,D). The dominance reasoning is only impeccable if agents' decisions really are independent, just like certain theorems in probability only hold when the random variables are independent. (And yes, this is a precisely analogous meaning of "independent".)
0 syllogism 10y: (gah. I wanted to delete this because I decided it was sort of a useless thing to say, but now it's here in distracting retracted form, being even worse) And it's arguably telling that this is the solution evolution found. Humans are actually pretty good at avoiding proper prisoners' dilemmas, due to our somewhat pro-social utility functions.
5 Jonathan_Graehl 10y: I generally share your reservations. But as I understand it, proponents of alternative DTs are talking about a conditional PD where you know you face an opponent executing a particular DT. The fancy-DT-users all defect on PD when the prior of their PD-partner being on CDT or similar is high enough, right? Wouldn't you like to be the type of agent who cooperates with near-copies of yourself? Wouldn't you like to be the type of agent who one-boxes? The trick is to satisfy this desire without using a bunch of stupid special-case rules, and show that it doesn't lead to poor decisions elsewhere.
Satisficers want to become maximisers

As I understand what is meant by satisficing, this misses the mark. A satisficer will search for an action until it finds one that is good enough, then it will do that. A maximiser will search for the best action and then do that. A bounded maximiser will search for the "best" (best according to its bounded utility function) and then do that.

So what the satisficer picks depends on what order the possible actions are presented to it in a way it doesn't for either maximiser. Now, if easier options are presented to it first then I guess your conclusion still follows, as long as we grant the premise that self-transforming will be easy.

But I don't think it's right to identify bounded maximisers and satisficers.
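(A minimal sketch of the difference, with made-up actions and utilities: the maximiser's choice is order-independent, the satisficer's is not.)

```python
def maximise(actions, utility):
    # Looks at every action; the result doesn't depend on presentation order.
    return max(actions, key=utility)

def satisfice(actions, utility, threshold):
    # Takes the first action that's good enough; the result depends on
    # the order in which actions are presented.
    for action in actions:
        if utility(action) >= threshold:
            return action
    return None

utilities = {"self-transform": 5, "modest_plan": 3, "best_plan": 9}
print(maximise(utilities, utilities.get))                            # best_plan, either order
print(satisfice(["self-transform", "best_plan"], utilities.get, 3))  # self-transform
print(satisfice(["best_plan", "self-transform"], utilities.get, 3))  # best_plan
```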

Rationality Quotes February 2012

Any logically coherent body of doctrine is sure to be in part painful and contrary to current prejudices

– Bertrand Russell, History of Western Philosophy p. 98

Bertie is a goldmine of rationality quotes.

Also don't confuse "logically coherent" with "true".

A summary of Savage's foundations for probability and utility.

P6 entails that there are (uncountably) infinitely many events. It is at least compatible with modern physics that the world is fundamentally discrete both spatially and temporally. The visible universe is bounded. So it may be that there are only finitely many possible configurations of the universe. It's a big number, sure, but if it's finite, then Savage's theorem is irrelevant. It doesn't tell us anything about what to believe in our world. This is perhaps a silly point, and there's probably a nearby theorem that works for "appropriately large fini... (read more)
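(The standard argument for that first claim, as I reconstruct it:)

```latex
% P6 lets us split any non-null event into a finite partition of strictly
% smaller non-null events. Iterating:
%
%   \forall \varepsilon > 0 \;\exists E :\; 0 < P(E) < \varepsilon
%
% So the event algebra has no smallest non-null event (no atoms). A finite
% state space does have a smallest non-null event, so it cannot satisfy P6;
% hence P6 forces infinitely (indeed uncountably) many events.
```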

1 fool 10y: But it was a conditional statement. If the universe is discrete and finite, then obviously there are no immortal agents either. Basically I don't see that aspect of P6 as more problematic than the unbounded resource assumption. And when we question that assumption, we'll be questioning a lot more than P6.
Rationality Quotes February 2012

The greatest challenge to any thinker is stating the problem, in a way that will allow a solution

– Bertrand Russell

Rationality Quotes February 2012

Anyone who can handle a needle convincingly can make us see a thread which isn't there

– E.H. Gombrich

2 Will_Newsome 10y: Is that true or is Gombrich just handling a needle convincingly?
Rationality quotes January 2012

Ah I see now. Glad we cleared that up.

Still, I think there's something to the idea that if there is a genuine debate about some claim that lasts a long time, then there might well be some truth on either side. So perhaps Russell was wrong to universally quantify over "debates" (as your counterexamples might show), but I think there is something to the claim.

A summary of Savage's foundations for probability and utility.

But why ought the world be such that such a partition exists for us to name? That doesn't seem normative. I guess there's a minor normative element in that it demands "If the world conspires to allow us to have partitions like the ones needed in P6, then the agent must be able to know of them and reason about them" but that still seems secondary to the demand that the world is thus and so.

0 fool 10y: Agreed, the structural component is not normative. But to me, it is the structural part that seems benign. If we assume the agent lives forever, and there's always some uncertainty, then surely the world is thus and so. If the agent doesn't live forever, then we're into bounded rationality questions, and even transitivity is up in the air.
Rationality quotes January 2012

Er. What? You can call it a false generalisation all you like; that isn't in itself enough to convince me it is false. (It may well be false; that's not what's at stake here.) You seem to be suggesting that merely calling it a generalisation is enough to impugn its status.

And in homage to your unconventional arguing style, here are some non sequiturs: How many angels can dance on the head of a pin? Did Thomas Aquinas prefer red wine or white wine? Was Stalin left-handed? What colour were Sherlock Holmes' eyes?

1 Manfred 10y: Suppose that I wanted to demonstrate conclusively that a generalization was false. I would have to provide one or more counterexamples. What sort of thing would be a counterexample to the claim "each party to all disputes that persist through long periods of time is partly right and partly wrong?" Well, it would have to be a dispute that persisted through long periods of time, but in which there was a party that was not partly right and partly wrong. So in my above reply, I listed some disputes that persisted for long periods of time, but in which there was (or is) a party that was not partly right and partly wrong.
A summary of Savage's foundations for probability and utility.

This thought isn't original to me, but it's probably worth making. It feels like there are two sorts of axioms. I am following tradition in describing them as "rationality axioms" and "structure axioms". The rationality axioms (like the transitivity of the order among acts) are norms on action. The structure axioms (like P6) aren't normative at all. (It's about structure on the world; how bizarre is it to say "The world ought to be such that P6 holds of it"?)

Given this, and given the necessity of the structure axioms for the proof, it feels like Savage's theorem can't serve as a justification of Bayesian epistemology as a norm of rational behaviour.

0 fool 10y: P6 is really both. Structurally, it forces there to be something like a coin that we can flip as many times as we want. But normatively, we can say that if the agent has blah blah blah preference, it shall be able to name a partition such that blah blah blah. See e.g. rule 4 [http://lesswrong.com/lw/9e4/the_savage_theorem_and_the_ellsberg_paradox/]. This of course doesn't address why we think such a thing is normative, but that's another issue.
Dutch Books and Decision Theory: An Introduction to a Long Conversation

What the Dutch book theorem gives you are restrictions on the kinds of will-to-wager numbers you can exhibit and still avoid sure loss. It's a big leap to claim that these numbers perfectly reflect what your degrees of belief ought to be.

But that's not really what's at issue. The point I was making is that even among imperfect reasoners, there are better and worse ways to reason. We've sorted out the perfect case now. It's been done to death. Let's look at what kind of imperfect reasoning is best.
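(To illustrate the sure-loss restriction with toy numbers of my own: an agent whose will-to-wager prices on H and on not-H sum to more than 1 can be booked for a guaranteed loss.)

```python
# Agent's will-to-wager fair prices for $1-stake bets (incoherent: sum > 1).
price = {"H": 0.6, "not-H": 0.5}

# The bookie sells the agent both bets. In each possible world the agent
# collects $1 from exactly one bet, having paid $1.10 in total.
for world in ("H", "not-H"):
    winnings = sum(1.0 if event == world else 0.0 for event in price)
    net = winnings - sum(price.values())
    print(world, round(net, 2))  # -0.1 in both worlds: a Dutch book
```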

1 Jack 10y: Yes. This was the subject of half the post. It actually is what was at issue in this year-old post and ensuing discussion. There is no consensus justification for Bayesian epistemology. If you would rather talk about imperfect reasoning strategies than the philosophical foundations of ideal reasoning, then you should go ahead and write a post about it. It isn't all that relevant as a reply to my comment.
Rationality quotes January 2012

What do you mean "the statement is affected by a generalisation"? What does it mean for something to be "affected by a generalisation"? What does it mean for a statement to be "affected"?

The claim is a general one. Are general claims always false? I highly doubt that. That said, this generalisation might be false, but it seems like establishing that would require more than just pointing out that the claim is general.

0 Manfred 10y: Right. So calling it a "false generalization" needed two words. Anyhow: Where does the sun go at night? How big is the earth? Is it harmful to market cigarettes to teenagers? Is Fermat's last theorem true? Can you square the circle? Will heathens burn in hell for all eternity?
Dutch Books and Decision Theory: An Introduction to a Long Conversation

I think this misses the point, somewhat. There are important norms on rational action that don't apply only in the abstract case of the perfect bayesian reasoner. For example, some kinds of nonprobabilistic "bid/ask" betting strategies can be Dutch-booked and some can't. So even if we don't have point-valued will-to-wager values, there are still sensible and not sensible ways to decide what bets to take.

1 Jack 10y: The question that needs answering isn't "What bets do I take?" but "What is the justification for Bayesian epistemology?".
Dutch Books and Decision Theory: An Introduction to a Long Conversation

If you weaken your will-to-wager assumption and effectively allow your agents to offer bid-ask spreads on bets (I'll buy bets on H for x, but sell them for y), then you get "Dutch book like" arguments that show that your beliefs conform to Dempster-Shafer belief functions, or Choquet capacities, depending on what other constraints you allow.

Or, if you allow that the world is non-classical – that the function that decides which propositions are true is not a classical logic valuation function – then you get similar results.
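(A toy check of the bid-ask point above, with made-up prices and only unit-stake bets on one event and its negation: the spread blocks the usual additivity Dutch book, even though the prices are not point probabilities.)

```python
from itertools import product

bid, ask = 0.4, 0.7   # agent buys $1-stake bets at 0.4, sells at 0.7
events = ("H", "not-H")

def agent_net(position, world):
    # position maps each event to "buy", "sell" or None (no trade).
    net = 0.0
    for event, side in position.items():
        wins = 1.0 if event == world else 0.0
        if side == "buy":     # agent pays the bid, collects if event occurs
            net += wins - bid
        elif side == "sell":  # agent collects the ask, pays out if event occurs
            net += ask - wins
    return net

dutch_booked = False
for sides in product(("buy", "sell", None), repeat=2):
    position = dict(zip(events, sides))
    if all(agent_net(position, w) < 0 for w in events):
        dutch_booked = True
        print("sure loss from", position)
print("Dutch-bookable?", dutch_booked)  # False for this spread
```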

Other arguments for havin... (read more)

Dutch Books and Decision Theory: An Introduction to a Long Conversation

This seems to be orthogonal to the current argument. The Dutch book argument says that your will-to-wager fair betting prices for dollar stakes had better conform to the axioms of probability. Cox's theorem says that your real-valued logic of plausible inference had better conform to the axioms of probability. So you need the extra step of saying that your betting behaviour should match up with your logic of plausible inference before the arguments support each other.

Dutch Books and Decision Theory: An Introduction to a Long Conversation

Savage's representation theorem in Foundations of Statistics starts assuming neither. He just needs some axioms about preference over acts, some independence concepts and some pretty darn strong assumptions about the nature of events.

So it's possible to do it without assuming a utility scale or a probability function.

2 Sniffnoy 10y: I suppose this would be a good time to point anyone stumbling on this thread to my post that I later wrote on that theorem [http://lesswrong.com/lw/5te/a_summary_of_savages_foundations_for_probability/]. :)
Rationality quotes January 2012

I've had rosewater flavoured ice cream.

I bet cabbage ice cream does not taste as nice.

Rationality quotes January 2012

Sorry I'm new. I don't understand. What do you mean?

2 Manfred 10y: Um, so the " 'd " suggests that something has been affected by a noun [http://2.bp.blogspot.com/_CSmsEIWZS8g/SzP2zQHf_2I/AAAAAAAACvE/iJDlnG7sWbQ/s400/TGS+MSG'D.jpg]. In this case, the statement "every disputant is partly right and partly wrong" is affected by generalization. In that it is, er, a false generalization.
Welcome to Less Wrong! (2012)

I have lots of particular views and some general views on decision theory. I picked on decision theory posts because it's something I know something about. I know less about some of the other things that crop up on this site…

Rationality quotes January 2012

it is clear that each party to this dispute – as to all that persist through long periods of time – is partly right and partly wrong

— Bertrand Russell, History of Western Philosophy (from the introduction, again)

4 Manfred 10y: Generalization'd.
Rationality quotes January 2012

Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales

— Bertrand Russell, History of Western Philosophy (from the introduction)

Welcome to Less Wrong! (2012)

Hi. I'll mostly be making snarky comments on decision theory related posts.

0 orthonormal 10y: That's fairly specific. Do you have a particular viewpoint on decision theory?
4 windmil 10y: Hey! If I find the time I'll be making snarky comments on your snarky comment related posts.
So You Want to Save the World

The VNM utility theorem implies there is some good we value highest? Where has this come from? I can't see how this could be true. The utility theorem only applies once you've fixed what your decision problem looks like…
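(For reference, the standard shape of the theorem, which is all this point needs: the axioms deliver a utility representation unique up to positive affine transformation, not a "highest good".)

```latex
% If \succsim over lotteries satisfies completeness, transitivity,
% continuity and independence, then there is a u such that
%
%   L \succsim M \iff \sum_i p_i\,u(x_i) \ge \sum_j q_j\,u(y_j),
%
% with u unique up to u' = a\,u + b, a > 0.
% Nothing here guarantees that \max_x u(x) exists: the outcome set is
% fixed by the decision problem, and if it is infinite, u may have no
% maximiser at all.
```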