All of robzahra's Comments + Replies

The current best answer we know of seems to be to write each consistent hypothesis in a formal language, weight longer explanations inverse-exponentially, and renormalize so that your total probability sums to 1. Look up AIXI and the universal prior.
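A minimal sketch of that weighting, with a made-up hypothesis set and made-up description lengths (purely illustrative; not the actual AIXI/Solomonoff machinery):

```python
# Toy length-weighted prior: weight each hypothesis by 2^-(description length),
# then renormalize so the weights sum to 1.  Names and lengths are hypothetical.
hypotheses = {"H1": 10, "H2": 12, "H3": 25}   # hypothesis -> description length in bits

raw = {h: 2.0 ** -length for h, length in hypotheses.items()}   # inverse-exponential weights
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}                  # renormalized probabilities

print(prior)   # shorter descriptions get exponentially more prior mass
```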

Shutting up and multiplying, the answer is clearly to save Eliezer... and to do so versus a lot more people than just three... The question is more interesting if you ask people what n (probably greater than 3) is their cut-off point.

Due to chaotic/non-linear effects, you're not going to get anywhere near the compression you need for 33 bits to be enough... I'm very confident the answer is much, much higher.

You're right. Speaking more precisely, by "ask yourself what you would do" I mean "engage in the act of reflecting, wherein you realize the symmetry between you and your opponent, which reduces the decision problem to (C,C) and (D,D), so that you choose (C,C)", as you've outlined above. Note, though, that even when the reduction is not complete (for example, because you're facing a similar but inexact clone), there can still be added incentive to cooperate...

Agreed that in general one will have some uncertainty over whether one's opponent is the type of algorithm that one-boxes / cooperates / that one wants to cooperate with, etc. It does look like you need to plug these uncertainties into your expected utility calculation, such that you decide to cooperate or defect based on your degree of uncertainty about your opponent.

However, in some cases at least, you don't need to be Omega-superior to predict whether another agent one-boxes... for example, if you're facing a clone of yourself; you can just ask yoursel... (read more)

0Vladimir_Nesov14y
No, you can't ask yourself what you'll do [http://www.overcomingbias.com/2008/08/no-self-trust.html]. It's like a calculator that seeks the answer to the question "what is 2+2?" in the form "what will I answer to the question "what is 2+2"?", in which case the answer 57 will be perfectly reasonable. If you are cooperating with your copy, you only know that the copy will do the same action, which is a restriction on your joint state space. Given this restriction, the expected utility calculation for your actions will return a result different from what other restrictions may force. In this case, you are left with only 2 options: (C,C) and (D,D), of which (C,C) is better.
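A toy numerical sketch of that restriction (standard illustrative Prisoner's Dilemma payoffs, not taken from the comment): once the joint state space is restricted to (C,C) and (D,D), comparing the two remaining outcomes settles the choice.

```python
# Toy Prisoner's Dilemma payoffs: payoff[(my_move, their_move)] = my utility.
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Unrestricted, defection dominates (5 > 3 and 1 > 0).  Against an exact copy,
# the joint state space collapses to {(C,C), (D,D)}:
restricted = [("C", "C"), ("D", "D")]
best = max(restricted, key=lambda joint: payoff[joint])
print(best)   # ('C', 'C') -- cooperation wins once the symmetry restriction is applied
```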

Agreed with Tarleton, the prisoner's dilemma questions do look under-specified... e.g., Eliezer has said something like "cooperate if he thinks his opponent one-boxes on Newcomb-like problems"... Maybe you could have a write-in box here and figure out how to map the votes to simple categories later, depending on the variety of survey responses you get.

-1cousin_it14y
Going slightly offtopic: Eliezer's answer has irked me for a long time, and only now I got a handle on why. To reliably win by determining whether the opponent one-boxes, we need to be Omega-superior relative to them, almost by the definition of Newcomb's. But such powers would allow us to just use the trivial solution: "cooperate if I think my opponent will cooperate".

On the belief-in-god question, rule out simulation scenarios explicitly... I assume you intend "supernatural" to rule out a simulation creator as a "god"?

On marital status, distinguish "single and looking for a relationship" versus "single and looking for people to casually romantically interact with"

Seems worth mentioning: I think a thorough treatment of what "you" want needs to address extrapolated volition and all the associated issues that raises.
To my knowledge, some of those issues remain unsolved, such as whether different simulations of oneself in different environments necessarily converge (seems to me very unlikely, and this looks provable in a simplified model of the situation), and if not, how to "best" harmonize their differing opinions... similarly, whether a single simulated instance of oneself might itself not conver... (read more)

Wh - I definitely agree with the point you're making about knives etc., though I think one interpretation of the NFL as applying not just to search but also to optimization makes your observation an instance of one type of NFL result. Admittedly, there are some fine-print assumptions that I think go under the term "almost no free lunch" when discussed.

Tim - Good, your distinction sounds correct to me.

Annoyance, I don't disagree. The runaway loop leading to intelligence seems plausible, and it appears to support the idea that partially accurate modeling confers enough advantage to be incrementally selected.

Yes, the golden gate bridge is a special case of deduction in the sense meant here. I have no problem with anything in your comment, I think we agree.

I think we're probably using some words differently, and that's making you think my claim that deductive reasoning is a special case of Bayes is stronger than I mean it to be.

All I mean, approximately, is:

Bayes' theorem: p(B|A) = p(A|B) * p(B) / p(A)

Deduction: Consider a deductive system to be a set of axioms and inference rules. Each inference rule says: "with such and such things proven already, you can then conclude such and such". And deduction in general then consists of recursively turning the crank of the inference rules on the axioms ... (read more)
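A toy numerical illustration of the direction I mean (made-up numbers, only a sketch): with ordinary conditional probabilities, Bayes' theorem gives a graded update, and in the limiting case where the conditionals are 0 or 1 the update behaves exactly like applying an inference rule -- which is also the point about 0/1 conditional probabilities made further down the thread.

```python
def bayes_update(p_B, p_E_given_B, p_E_given_notB):
    """Posterior p(B|E) via Bayes' theorem: p(B|E) = p(E|B) * p(B) / p(E)."""
    p_E = p_E_given_B * p_B + p_E_given_notB * (1 - p_B)
    return p_E_given_B * p_B / p_E

# Ordinary probabilistic case: evidence E is three times likelier under B than under not-B.
print(bayes_update(p_B=0.5, p_E_given_B=0.9, p_E_given_notB=0.3))   # 0.75

# Deductive limiting case (modus tollens): suppose B logically implies A, i.e. p(A|B) = 1.
# Updating on the evidence "not A" uses p(not-A | B) = 0, so the posterior on B is exactly 0,
# just as the rule "B -> A, not A, therefore not B" concludes.
print(bayes_update(p_B=0.5, p_E_given_B=0.0, p_E_given_notB=0.7))   # 0.0
```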

1timtyler14y
I think I would describe what you are talking about as being Bayesian statistics - plus a whole bunch of unspecified rules (the "i" s). What I was saying is that there isn't a standard set of rules of deductive reasoning axioms that is considered to be part of Bayesian statistics. I would not dispute that you can model deductive reasoning using Bayesian statistics.

Ciphergoth, I agree with your points: that if your prior over world-states were not induction-biased to start with, you would not be able to reliably use induction, and that this is a type of circularity. Also, of course, the universe might just be such that the Occam prior doesn't make you win; there is no free lunch, after all.

But I still think induction could meaningfully justify itself, at least in a partial sense. One possible, though speculative, pathway: Suppose Tegmark is right and all possible math structures exist, and that some of these contain c... (read more)

I agree with Jimmy's examples. Tim, the Solomonoff model may have some other fine-print assumptions (see some analysis by Shane Legg here), but "the earth having the same laws as space" or "laws not varying with time" are definitely not needed for the optimality proofs of the universal prior (though of course, to your point, uniformity does make our induction in practice easier, and time and space translation invariance of physical law do appear to be true, AFAIK). Basically, assuming the universe is computable is enough to get the o... (read more)

Tim - To resolve your disagreement: induction is not purely about deduction, but it nevertheless can be completely modelled by a deductive system.

More specifically, I agree with your claim about induction (see point 4 above). However, in defense of Eliezer's claim that induction is a special case of deduction, I think you can model it in a deductive system even though induction might require additional assumptions. For one thing, deduction in practice seems to me to require empirical assumptions as well (i.e., the "axioms" and "inferenc... (read more)

0timtyler14y
If that is a defense of induction being a special case of deduction, then it's a defense of anything being a special case of deduction - since logic can model anything. The golden gate bridge is a special case of deduction, in this sense. I am not impressed by the idea that induction is a special case of deduction - I would describe it as being wrong. You need extra axioms for induction. It is not the same thing at all.

I agree with the spirit of this, though of course we have a long way to go in cognitive neuroscience before we know ourselves anywhere near as well as we know the majority of our current human artifacts. However, it does seem like relatively more accurate models will help us comparatively more, most of the time. Presumably the fact that human intelligence was able to evolve at all is some evidence in favor of this.

0Annoyance14y
I don't disagree strongly with this point, but current understanding suggests that our intelligence developed in a positive feedback process of trying to anticipate others. Those who were best at anticipating and manipulating others then set the new ground competence. The hypothetically-resulting runaway loop may explain a great deal.

It looks to me like those uniformity-of-nature principles would be nice, but induction could still be a smart thing to do despite non-uniformity. We'd need to specify in what sense uniformity was broken to distinguish when induction still holds.

2jimmy14y
Right. We only assume uniformity for the same reason we assume all emeralds are green and not bleen [http://en.wikipedia.org/wiki/Grue_and_Bleen]. It's just the simpler hypothesis. If we had reason to think that the laws of physics alternated like a checkerboard, or that colors magically changed in 2012, then we'd just have to take that into account. This reminds me of the Feynman quote "Philosophers say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong."
0Paul Crowley14y
To a Bayesian, the problem of induction comes down to justifying your priors. If your priors rate an orderly universe as no more likely than a disorderly one, then all the evidence of regularity in the past is no reason to expect regularity in the future - all futures are still equally likely. Only with a prior that weights more orderly universes with a higher probability, as Solomonoff's universal prior does, will you be able to use the past to make predictions.

>> Are you saying that you would modify the first definition of rational to include these other ways of knowing (Occam's Razor and Inductive Bias), and that they can make conclusions about metaphysical things?

Yes, I don't think you can get far at all without an induction principle. We could make a meta-model of ourselves and our situation and prove we need induction in that model, if it helps people, but I think most people already have the intuition that nothing observational can be proven "absolutely", that there are an infinite nu... (read more)

2Eliezer Yudkowsky14y
And induction is a special case of deduction, since probability theory itself is a logic with theorems: what a given prior updates to, on given evidence, is a deductive mathematical fact. Besides, I'm informed that I just use duction.

Why to accept an inductive principle (a toy numerical sketch follows these points):

  1. Finite agents have to accept an "inductive-ish" principle, because they can't even process the infinitely many consistent theories whose lengths exceed the number of computation steps available to them, and therefore they can't even directly consider most of the long theories. Zooming out and viewing from the macro level, this is extremely inductive-ish, though it doesn't decide between two fairly short theories, like Christianity versus string theory.

  2. Probabilities over all your hypotheses have to add t

... (read more)
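A toy numerical sketch of points 1-2 (a made-up encoding, not the actual Solomonoff construction): give every binary-string hypothesis of length k the weight 4^-k; there are 2^k hypotheses of length k, so each length contributes 2^-k total mass, and the sum over the infinitely many hypotheses converges even though a finite agent only ever enumerates a short prefix of the list.

```python
# Toy convergence check for an inverse-exponential prior over all binary-string hypotheses.
# In the real construction, a prefix-free code and Kraft's inequality play this normalizing role.
from itertools import product

def total_mass(max_length):
    total = 0.0
    for k in range(1, max_length + 1):
        for hypothesis in product("01", repeat=k):   # all 2**k hypotheses of length k
            total += 4.0 ** -k                        # weight shrinks faster than the count grows
    return total

for n in (5, 10, 20):
    print(n, total_mass(n))   # 0.96875, 0.99902..., 0.99999... -- approaching 1 from below
```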
6Wei_Dai12y
I agree up to the first half of step 6, but I think the conclusion is wrong (or at least not justified from the argument). There are two different principles involved here: (1) a finite agent must use an "inductive-ish" prior with a finite complexity; (2) one should use the simplest prior (Occam's Razor). If every finite agent must use an "inductive-ish" prior, then there is no need to invoke or appeal to Occam's Razor to explain or justify our own inductive tendencies, so Rob's argument actually undercuts Occam's Razor. If we replace Occam's Razor with the principle that every finite agent must use a prior with finite complexity, then one's prior is just whatever it is, and not necessarily the simplest prior. There is no longer an argument against someone who says their prior assigns a greater weight to Christianity than to string theory. (In the second half of step 6, Rob says that's "contrived", but they could always answer "so what?")
3Eliezer Yudkowsky14y
Rob, just make it a post.

This can be viewed the other way around: deductive reasoning as a special case of Bayes.

0timtyler14y
By "Bayes" I meant this: http://en.wikipedia.org/wiki/Bayes'_theorem [http://en.wikipedia.org/wiki/Bayes'_theorem] - a formalisation of induction. If you think "Bayes" somehow includes deductive reasoning, can you explain whether it supposedly encapsulates first-order logic or second-order logic?
1orthonormal14y
Exactly: the special case where the conditional probabilities are (practically) 0 or 1.

Seconding timtyler and guysrinivasan - I think, but can't prove, that you need an induction principle to reach the anti-religion conclusion. See especially Occam's Razor and Inductive Bias. If someone wants to bullet-point the reasons to accept an induction principle, that would be useful. Maybe I'll take a stab later. It ties into Solomonoff induction among other things.

EDIT: I've put some bullet points below which state the case for induction to the best of my knowledge.

0byrnema14y
The anti-religion conclusion in my post was just an application of the definitions given for religion and rational. "Are you saying that you would modify the first definition of rational to include these other ways of knowing (Occam's Razor and Inductive Bias), and that they can make conclusions about metaphysical things?" Oh, I see, these would be included under "logical reasoning". The part I would modify is (1) whether some metaphysical beliefs are acceptable and (2) that they can be constrained by logical reasoning.


Yes, what to call the chunk is a separate issue... I at least partially agree with you, but I'd want to hear what others have to say. The recent debate over the tone of the Twelve Virtues seems relevant.

This is the Dark Side root link. In my opinion it's a useful chunked concept, though maybe people should be hyperlinking here when they use the term, to be more accessible to people who haven't read every post. At the very least, the FAQ builders should add this, if it's not there already.

6Eliezer Yudkowsky14y
Actually, the term "Dark Side Epistemology" seems to be tending towards over-generalization (being used to describe any persuasive art, say, rather than explicitly defended systematized bad rules of reasoning). "Dark Arts" isn't even a term of my own invention; someone else imported that from Harry Potter. It seems to be trending towards synonymy with "Dark Side". I may have to deprecate both terms as overly poetic and come up with something else - I'm thinking of Anti-Epistemology for systematically bad epistemology.
3AlexU14y
I'm certainly not against using chunked concepts on here per se. But I think associating this community too closely with sci-fi/fantasy tropes could have deleterious consequences in the long run, as far as attracting diverse viewpoints and selling the ideas to people who aren't already pre-disposed to buying them. If Eliezer really wanted to proselytize by poeticizing, he should turn LW into the most hyper-rational, successful PUA community on the Internet, rather than the Star Wars-esque roleplaying game it seems to want to become.

Some examples of what I think you're looking for:

  1. Vassar's proposed shift from saying "this is the best thing you can do" to "this is a cool thing you can do" because people's psychologies respond better to this
  2. Operant conditioning in general
  3. Generally, create a model of the other person, then use standard rationality to explore how to most efficiently change them. Obviously, the Less Wrong and Overcoming Bias knowledge base is very relevant for this.
2cousin_it14y
Standard rationality tells me it's most efficient to lie to them from a young age.
1PhilGoetz14y
Thanks! Those are good examples. Although the fact that they wouldn't make me feel dirty makes me suspect we should go farther.

I mostly agree with your practical conclusion; however, I don't see purchasing fuzzies and utilons separately as an instance of irrationality per se. As a rationalist, you should model the inside of your brain accurately and admit that some things you would like to do might actually be beyond your control to carry out. Purchasing fuzzies would then be rational for agents with certain types of brains. "Oh well, nobody's perfect" is not the right reason to purchase fuzzies; rather, upon reflection, this appears to be the best way for you to maximize utilons long-term. Maybe this is only a language difference (you tell me), but I think it might be more than that.

0[anonymous]14y
I agree. Getting warm fuzzies is not instrumentally irrational. We should just accept that our goal values are not purely altruistic, and that we value a unit of happiness for ourselves more than for strangers. As far as I can tell this is not irrational at all.

If a gun were put to my head and I had to decide right now, I agree with your irritation. However, he did make an interesting point about public disrespect as a means of deterrence which deserves more thinking about. If that method looks promising after further inspection, we'd probably want to reconsider its application to this situation, though it's still unclear to me to what extent it applies in this case.

2Eliezer Yudkowsky14y
There's also the consideration of total time expenditures on my part. Since the main reason I don't respond at length to Goetz is his repeated behaviors that force me to expend large amounts of time or suffer penalties, elaborate time-consuming courtesies aren't a solution either.

OK, I soften my critique given your reply, which made a point I hadn't fully considered.
It sounds like the public disrespect is intentional, and it does have a purpose. To be a good thing to do, you need to believe, among other things:

  1. Publicly doing that is more likely to make him stop relative to privately doing it. (Seems plausible).
  2. You're not losing something greater than the wasted time by other people observing your doing it. (Unclear to me)

It would be better, I think, if you could just privately charge someone for the time wasted; but it does seem... (read more)

Phil, I think you're interpreting his claim too literally (relative to his intent). He is only trying to help people who have a psychological inability to discount small probabilities appropriately. Certainly, if the lottery award grows high enough, standard decision theory implies you play... this is one of the Pascal's mugging variants (similarly, whether to perform hypothetical exotic physics experiments with a small probability of yielding infinite (or just extremely large) utility and a large probability of destroying everything), which is not fully resolved for any of us, I think.
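A quick worked version of the "if the award grows high enough, standard decision theory implies you play" clause (all numbers made up; utility is treated as linear in dollars, which is itself a simplification):

```python
# Naive risk-neutral lottery check: play iff expected winnings exceed the ticket price.
p_win = 1.0 / 175_000_000      # rough jackpot odds (illustrative)
ticket_price = 2.0             # dollars

def should_play(prize_dollars):
    return p_win * prize_dollars > ticket_price

print(should_play(100_000_000))      # False -- expected value is only about $0.57
print(should_play(1_000_000_000))    # True  -- expected value is about $5.71
```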

2Eliezer Yudkowsky14y
Yup.
3PhilGoetz14y
You're probably right. But I'm still irritated that instead of EY saying, "I didn't say exactly what I meant", he is sticking to "Phil is stupid."

Eli tends to say, stylistically, "You will not ___" for what others, when they're thinking formally, express as "You very probably will not ___". This is only a language confusion between speakers. There are other related ones here; I'll link to them later. Telling someone to "win" versus "try to win" is a very similar issue.

2PhilGoetz14y
That's not what's at issue. The statement still says that the chance of winning is so low as not to be worth talking about. That implies that one does not calculate expected utility. My interpretation is correct. Eliezer has written 3 comments in reply, and is still trying to present it as if what is at issue here is that I consistently misrepresent him. I am not misrepresenting him. My interpretation is correct. As has probably often been the case.
4Eliezer Yudkowsky14y
To be exact, I say this when human brains undergo the failure mode of being unable to discount small probabilities. Vide: "But there's still a chance, right?" [http://www.overcomingbias.com/2008/01/still-a-chance.html]

While you appear to be right about Phil's incorrect interpretation, I don't think he meant any malice by it... however, you appear to me to have meant malice in return. So I think your comment borders on unnecessary disrespect, and if it were me who had made the comment, I would edit it to make the same point while sounding less hateful. If people disagree with me, please downvote this comment. (Though admittedly, if you edit your comment now, we won't get good data, so you probably should leave it as is.)

I admit that I'm not factoring in your entire hi... (read more)

5Kaj_Sotala14y
Agreed. Also, saying somebody is wrong and then not bothering to explain how does come across as somewhat rude, as it forces the other person to try to guess what they did wrong instead of providing more constructive feedback.
4Eliezer Yudkowsky14y
Phil does this a lot, usually in ways which present me with the dilemma of spending a lot of time correcting him, or letting others pick up a poor idea of what my positions are (because people have a poor ability to discount this kind of evidence). I've said as much to Phil, and he apparently thinks it's fine to go on doing this - that it's good for him to force me to correct him, even though others don't make similar misinterpretations. Whether or not this is done from conscious malice doesn't change the fact that it's a behavior that forces me to expend resources or suffer a penalty, which is game-theoretically a hostile act. So, to discourage this unpleasant behavior, it seems to me that rather than scratching his itch for his benefit (encouraging repetition), I should make some reply which encourages him not to do it again. I would like to just reply: "Phil Goetz repeatedly misinterprets what I'm saying in an attempt to force me to correct him, which I consider very annoying behavior and have asked him to stop." If that's not what Phil intends.... well, see how it feels to be misinterpreted, Phil? Unfortunately this comes too close to lying for my tastes, so I'll have to figure out some similar standard reply. Maybe even a standard comment to link to each time he does this.

Phil - clever heuristic, canceling idiots... though note that it actually follows directly from a Bayesian expected value calculation in certain scenarios (a toy numerical sketch follows below):

  1. Assume you have no info about the voting issues except who the idiots are and how they vote. Now either your prior is that reversed stupidity is intelligence in this domain, or it's not. If it is, then you have clear Bayesian grounds to vote against the idiots. If it's not, then reversed stupidity either is definite stupidity or it has 0 correlation. In case 1, reason itself does not work (e.g., a situa
... (read more)
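Here is the toy sketch referred to above, for the "reversed stupidity is intelligence in this domain" case (all numbers made up): if the known idiots are more likely to back a measure when it is bad than when it is good, Bayes' theorem says their support should lower your probability that it is good.

```python
# Toy Bayesian version of "vote against the idiots" (illustrative numbers only).
p_good = 0.5                 # prior that a given ballot measure is good
p_yes_if_good = 0.3          # chance the idiots back it if it is good
p_yes_if_bad = 0.6           # chance the idiots back it if it is bad

p_yes = p_yes_if_good * p_good + p_yes_if_bad * (1 - p_good)
p_good_given_yes = p_yes_if_good * p_good / p_yes
print(p_good_given_yes)      # ~0.33, down from the 0.5 prior -- so vote the other way
```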

Just read your last 5 comments and they looked useful to me, including most with 1 karma point. I would keep posting whenever you have information to add, and take actual critiques in replies to your comments much more seriously than lack of karma. Hope this helps. - Rob Zahra

Whpearson - I think I do see some powerful points in your post that aren't getting fully appreciated by the comments so far. It looks to me like you're constructing a situation in which rationality won't help. I think such situations necessarily exist in the realm of platonic possibility. In other words, it appears you provably cannot always win across all possible math structures; that is, I think your observation can be considered one instance of a no-free-lunch theorem.

My advice to you is that No Free Lunch is a fact and thus you must deal with it. ... (read more)

1whpearson14y
My point is slightly different from NFL theorems. They say that if you exhaustively search a problem, then there are problems for the way you search that mean you will find the optimum last. I'm trying to say there are problems where exhaustive search is something you don't want to do: e.g., seeing what happens when you stick a knife into your heart, or jumping into a bonfire. These problems also exist in real life, whereas for the NFL problems it is harder to make the case that they exist in real life for any specific agent.

I'm quite confident there is only a language difference between Eliezer's description and the point a number of you have just made. Winning and trying to win are clearly two different things, and it's also clear that "genuinely trying to win" is the best one can do, based on the definition those in this thread are using. But Eli's point on OB was that telling oneself "I'm genuinely trying to win" often results in less than genuinely trying. It results in "trying to try"... which means being satisfied by a display of effor... (read more)

2timtyler14y
Eliezer seems to be talking about actually winning - e.g.: "Achieving a win is much harder than achieving an expectation of winning". He's been doing this pretty consistently for a while now - including on his administrator's page on the topic: "Instrumental rationality: achieving your values." * http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/ [http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/] That is why this discussion is still happening.

This post is a good idea, but wouldn't it be easier for everyone to join the Less Wrong Facebook group? I'm not positive, but I think the geographical sorting can then be easily viewed automatically. You could then invite the subgroups to their own groups, and easily send group messages.

1Paul Crowley14y
I thought of that, but quite a few folk seemed to object to the idea of using Facebook here, so I thought this was the safest option. I'm in the Facebook group too, as Paul Crowley.

OB has changed people's practical lives in some major ways. Not all of these are mine personally:

"I donated more money to anti aging, risk reduction, etc"

"I signed up for cryonics."

"I wear a seatbelt in a taxi even when no one else does."

"I stopped going to church but started hanging out socially with aspiring rationalists."

"I decided rationality works and started writing down my goals and pathways to them."

"I decided it's important for me to think carefully about what my ultimate values are."

0Bleys14y
Yes! Or even further, "I am now focusing my life on risk reduction and have significantly reduced akrasia in all facets of my life."

Various people on our blogs have talked about how useful a whuffie concept would be (see especially Vassar on reputation markets). I agree that Less Wrong's karma scores encourage an inward focus; however, the general concept seems so useful that we ought to consider finding a way to expand karma scores beyond just this site, as opposed to shelving the idea. Whether that is best implemented through Facebook or some other means is unclear to me. Can anyone link to any analysis on this?

Rob Zahra

Michael: "The closest thing that I have found to a secular church really is probably a gym."

Perhaps in the short run we could just use the gym directly, or analogs. Aristotle's Peripatetic school, and other notable thinkers who walked, suggest that having people walking while talking, thinking, and socializing is worth some experimentation. This could be done by walking outside or on parallel exercise machines in a gym (it would be informative which worked better, to tease out what it is about walking that improves thinking, assuming the hypothesi... (read more)

3[anonymous]13y
I regularly combine thinking and walking as well. I try to walk outside for at least a half hour daily, preferably along an unfamiliar path or in a new pattern. I find that this is a good time to integrate new information via insights. This could be because my mind is at ease, and the novel sequence of environmental stimuli may be conducive to avoiding cached thoughts [http://lesswrong.com/lw/k5/cached_thoughts/].
4AlexU14y
One obvious implication of this is that we should be making our homes in warmer climates. Even if you, personally, have high resistance to foul weather, it's going to be tougher to get people to walk and converse with you year-round in Boston than it would be in Miami. This conflicts with the observation that, at least in modern times, the colder parts of the world have tended to produce the better thinkers. I'm not sure it would be smart to move from Cambridge to South Beach in hopes of leading a more intellectually fruitful life...
3Vladimir_Golovin14y
Interesting. I also combine walking and thinking -- even in the office (thankfully, we have a 'thinking corridor'). My ideal daily dose is about 7 kilometers (4.34 miles), but unfortunately it's difficult to find a good thinking route in a city -- too many cars, too few forests.

Agree with and like the post. Two related avenues for application:

  1. Using this effect to accelerate one's own behavior modification by making commitments in the direction of the type of person one wants to become. (e.g. donating even small amounts to SIAI to view oneself as rationally altruistic, speaking in favor of weight loss as a way to achieve weight loss goals, etc.). Obviously this would need to be used cautiously to avoid cementing sub-optimal goals.

  2. Memetics: Applying these techniques on others may help them adopt your goals without your needing to explicitly push them too hard. Again, caution and foresight advisable.