Shutting up and multiplying, the answer is clearly to save Eliezer...and to do so versus a lot more people than just three...the question is more interesting if you ask people what n (probably greater than 3) is their cutoff point.
Due to chaotic / non-linear effects, you're not going to get anywhere near the compression you need for 33 bits to be enough...I'm very confident the answer is much, much higher...
You're right. Speaking more precisely, by "ask yourself what you would do", I mean "engage in the act of reflecting, wherein you realize the symmetry between you and your opponent, which reduces the decision problem to (C,C) and (D,D), so that you choose (C,C)", as you've outlined above. Note, though, that even when the reduction is not complete (for example, because you're fighting a similar but inexact clone), there can still be added incentive to cooperate...
Agreed that in general one will have some uncertainty over whether one's opponent is the type of algorithm that one-boxes / cooperates / whom one wants to cooperate with, etc. It does look like you need to plug these uncertainties into your expected utility calculation, so that you decide to cooperate or defect based on your degree of uncertainty about your opponent.
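Here's a minimal sketch of that expected utility calculation (Python; the payoff values and the probability that your opponent mirrors your move are invented purely for illustration):

```python
# Standard prisoner's dilemma payoffs for the row player: T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def expected_utility(my_move, p_mirror):
    """Expected payoff if the opponent plays my move with probability
    p_mirror (e.g. an inexact clone) and the opposite move otherwise."""
    if my_move == "C":
        return p_mirror * R + (1 - p_mirror) * S
    return p_mirror * P + (1 - p_mirror) * T

# Cooperate iff EU(C) > EU(D); with these payoffs the crossover is at
# p_mirror = (T - S) / ((T - S) + (R - P)) = 5/7, roughly 0.714.
for p in (0.5, 0.75, 0.9, 1.0):
    print(p, expected_utility("C", p), expected_utility("D", p))
```

So the more confident you are that your opponent's decision is correlated with yours, the more cooperation wins on a straight expected utility basis.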
However, in some cases at least, you don't need to be Omega-superior to predict whether another agent one-boxes...for example, if you're facing a clone of yourself, you can just ask yourself...
Agreed with Tarleton, the prisoner's dilemma questions do look under-specified...e.g., Eliezer has said something like he'd cooperate if he thinks his opponent one-boxes on Newcomb-like problems...maybe you could have some write-in box here and figure out how to map the votes to simple categories later, depending on the variety of survey responses you get.
On the belief-in-god question, rule out simulation scenarios explicitly...I assume you intend "supernatural" to rule out a simulation creator as a "god"?
On marital status, distinguish "single and looking for a relationship" from "single and looking for people to casually romantically interact with".
Seems worth mentioning: I think a thorough treatment of what "you" want needs to address extrapolated volition and all the associated issues that it raises.
To my knowledge, some of those issues remain unsolved, such as whether different simulations of oneself in different environments necessarily converge (this seems to me very unlikely, and looks provable in a simplified model of the situation), and if not, how to "best" harmonize their differing opinions...
Similarly, whether a single simulated instance of oneself might itself not conver...
Wh- I definitely agree with the point you're making about knives etc., though I think one interpretation of the NFL, as applying not just to search but also to optimization, makes your observation an instance of one type of NFL. Admittedly, there are some fine-print assumptions, which I think go under the term "almost no free lunch" when discussed.
Tim - Good, your distinction sounds correct to me.
Annoyance, I don't disagree. The runaway loop leading to intelligence seems plausible, and it appears to support the idea that partially accurate modeling confers enough advantage to be incrementally selected.
Yes, the Golden Gate Bridge is a special case of deduction in the sense meant here. I have no problem with anything in your comment; I think we agree.
I think we're probably using some words differently, and that's making you think my claim that deductive reasoning is a special case of Bayes is stronger than I mean it to be.
All I mean, approximately, is:
Bayes' theorem: p(B|A) = p(A|B) * p(B) / p(A)
Deduction: Consider a deductive system to be a set of axioms and inference rules. Each inference rule says: "with such-and-such things proven already, you can then conclude such-and-such". Deduction in general then consists of recursively turning the crank of the inference rules on the axioms...
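To make "turning the crank" concrete, here's a toy sketch (Python; the one-rule propositional system is invented purely for illustration, not a serious proof system):

```python
# A deductive system as data: a set of proven strings plus inference
# rules. Deduction = repeatedly applying every rule whose premises are
# already proven, until nothing new can be concluded (the closure).
axioms = {"A", "A->B", "B->C"}

def modus_ponens(proven):
    """From X and X->Y already proven, conclude Y."""
    for s in list(proven):
        if "->" in s:
            premise, conclusion = s.split("->", 1)
            if premise in proven:
                yield conclusion

def deductive_closure(axioms, rules):
    proven = set(axioms)
    changed = True
    while changed:  # keep turning the crank until a fixed point
        changed = False
        for rule in rules:
            for conclusion in rule(proven):
                if conclusion not in proven:
                    proven.add(conclusion)
                    changed = True
    return proven

print(sorted(deductive_closure(axioms, [modus_ponens])))
# ['A', 'A->B', 'B', 'B->C', 'C'] -- B and C were derived
```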
Ciphergoth, I agree with your points: that if your prior over world-states were not induction-biased to start with, you would not be able to reliably use induction, and that this is a type of circularity. Also, of course, the universe might just be such that the Occam prior doesn't make you win; there is no free lunch, after all.
But I still think induction could meaningfully justify itself, at least in a partial sense. One possible, though speculative, pathway: Suppose Tegmark is right and all possible math structures exist, and that some of these contain c...
I agree with Jimmy's examples. Tim, the Solomonoff model may have some other fine-print assumptions {see some analysis by Shane Legg here}, but "the earth having the same laws as space" or "laws not varying with time" are definitely not needed for the optimality proofs of the universal prior (though of course, to your point, uniformity does make our induction in practice easier, and time and space translation invariance of physical law do appear to be true, AFAIK). Basically, assuming the universe is computable is enough to get the o...
Tim --- To resolve your disagreement: Induction is not purely about deduction, but it nevertheless can be completely modelled by a deductive system.
More specifically, I agree with your claim about induction (see point 4 above). However, in defense of Eliezer's claim that induction is a special case of deduction, I think you can model it in a deductive system even though induction might require additional assumptions. For one thing, deduction in practice seems to me to require empirical assumptions as well (i.e., the "axioms" and "inferenc...
I agree with the spirit of this, though of course we have a long way to go in cognitive neuroscience before we know ourselves anywhere near as well as we know the majority of our current human artifacts. However, it does seem like relatively more accurate models will help us comparatively more, most of the time. Presumably the fact that human intelligence was able to evolve at all is some evidence in favor of this.
It looks to me like those uniformity of nature principles would be nice but that induction could still be a smart thing to do despite non-uniformity. We'd need to specify in what sense uniformity was broken to distinguish when induction still holds.
Are you saying that you would modify the first definition of rational to include these other ways of knowing (Occam's Razor and Inductive Bias), and that they can make conclusions about metaphysical things?
Yes, I don't think you can get far at all without an induction principle. We could make a meta-model of ourselves and our situation and prove that we need induction in that model, if it helps people, but I think most people already have the intuition that nothing observational can be proven "absolutely", that there are an infinite nu...
yes, exactly
This can be viewed the other way around: deductive reasoning as a special case of Bayes.
Seconding timtyler and guysrinivasan--I think, but can't prove, that you need an induction principle to reach the anti-religion conclusion. See especially Occam's Razor and Inductive Bias. If someone wants to bullet-point the reasons to accept an induction principle, that would be useful. Maybe I'll take a stab later. It ties into Solomonoff induction, among other things.
EDIT --- I've put some bullet points below which state the case for induction to the best of my knowledge.
Why to accept an inductive principle:
Finite agents have to accept an "inductive-ish" principle, because they can't even process the infinitely many consistent theories that are longer than the number of computations they have available, and therefore they can't even directly consider most of the long theories. Zooming out and viewing from the macro level, this is extremely inductive-ish, though it doesn't decide between two fairly short theories, like Christianity versus string theory. (See the toy sketch after these bullets.)
Probabilities over all your hypotheses have to add to 1...
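A toy illustration of the first bullet (Python; the compute budget figure is made up):

```python
import math

# Binary-coded theories of length exactly k number 2**k, so a finite
# agent with a budget of B elementary operations can explicitly examine
# only theories up to roughly log2(B) bits; nearly all longer theories
# can never even be looked at -- an effective bias toward short theories.
B = 10**18  # assumed compute budget, purely for illustration
cutoff = math.log2(B)
print(f"effective length cutoff: ~{cutoff:.0f} bits")
print(f"theories of length 100: {2**100:.2e} vs budget {B:.2e}")
```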
Yes, what to call the chunk is a separate issue...I at least partially agree with you, but I'd want to hear what others have to say. The recent debate over the tone of the Twelve Virtues seems relevant.
This is the Dark Side root link. In my opinion it's a useful (chunked) concept, though maybe people should be hyperlinking here when they use the term, to make it more accessible to people who haven't read every post. At the very least, the FAQ builders should add this, if it's not there already.
Some examples of what I think you're looking for:
I mostly agree with your practical conclusion; however, I don't see purchasing fuzzies and utilons separately as an instance of irrationality per se. As a rationalist, you should model the inside of your brain accurately and admit that some things you would like to do might actually be beyond your control to carry out. Purchasing fuzzies would then be rational for agents with certain types of brains. "Oh well, nobody's perfect" is not the right reason to purchase fuzzies; rather, upon reflection, this appears to be the best way for you to maximize utilons long term. Maybe this is only a language difference (you tell me), but I think it might be more than that.
If a gun were put to my head and I had to decide right now, I'd agree with your irritation. However, he did make an interesting point about public disrespect as a means of deterrence which deserves more thought. If that method looks promising after further inspection, we'd probably want to reconsider its application to this situation, though it's still unclear to me to what extent it applies in this case.
OK, I soften my critique given your reply, which made a point I hadn't fully considered.
It sounds like the public disrespect is intentional, and it does have a purpose...
For this to be a good thing to do, you need to believe, among other things:
It would be better, I think, if you could just privately charge someone for the time wasted; but it does seem...
Phil, I think you're interpreting his claim too literally (relative to his intent). He is only trying to help people who have a psychological inability to discount small probabilities appropriately. Certainly, if the lottery award grows high enough, standard decision theory implies you play...this is one of the Pascal's mugging variants (similar: whether to perform hypothetical exotic physics experiments with a small probability of yielding infinite (or just extremely large) utility and a large probability of destroying everything); none of these is fully resolved for any of us, I think.
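For the lottery point, a toy expected value check (Python; the odds and jackpot figures are invented for illustration):

```python
# Play iff p_win * jackpot > ticket_price (ignoring taxes, split pots,
# and diminishing marginal utility of money -- the standard caveats).
p_win = 1 / 300_000_000   # assumed odds of hitting the jackpot
ticket_price = 1.0
for jackpot in (1e8, 3e8, 1e9):
    ev = p_win * jackpot - ticket_price
    print(f"jackpot ${jackpot:,.0f}: EV per ticket = ${ev:+.2f}")
```

With these made-up numbers, the expected value turns positive somewhere around a $300M jackpot, which is the sense in which a large enough award makes playing the "rational" move on paper.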
Eli tends to say, stylistically, "You will not __" for what others, when they're thinking formally, express as "You very probably will not __". This is only a language confusion between speakers. There are other related ones here; I'll link to them later. Telling someone to "win" versus "try to win" is a very similar issue.
While you appear to be right about Phil's incorrect interpretation, I don't think he meant any malice by it...however, you appear to me to have meant malice in return. So I think your comment borders on unnecessary disrespect, and if it were me who had made the comment, I would edit it to make the same point while sounding less hateful. If people disagree with me, please downvote this comment. (Though admittedly, if you edit your comment now, we won't get good data, so you probably should leave it as is.)
I admit that I'm not factoring in your entire hi...
NYC area: Rob Zahra, AlexU, and Michael Vassar sometimes...
Phil - clever heuristic, canceling idiots...though note that it actually follows directly from a Bayesian expected value calculation in certain scenarios:
Just read your last 5 comments, and they looked useful to me, including most with 1 karma point. I would keep posting whenever you have information to add, and take actual critiques in replies to your comments much more seriously than lack of karma. Hope this helps. Rob Zahra
Whpearson --- I think I do see some powerful points in your post that aren't getting fully appreciated by the comments so far. It looks to me like you're constructing a situation in which rationality won't help. I think such situations necessarily exist in the realm of platonic possibility. In other words, it appears you provably cannot always win across all possible math structures; that is, I think your observation can be considered one instance of a no free lunch theorem.
My advice to you is that No Free Lunch is a fact and thus you must deal with it...
I'm quite confident there is only a language difference between Eliezer's description and the point a number of you have just made. Winning and trying to win are clearly two different things, and it's also clear that "genuinely trying to win" is the best one can do, given the definition those in this thread are using. But Eli's point on OB was that telling oneself "I'm genuinely trying to win" often results in less than genuinely trying. It results in "trying to try"...which means being satisfied by a display of effor...
This post is a good idea, but wouldn't it be easier for everyone to join the Less Wrong Facebook group? I'm not positive, but I think the geographical sorting can then be viewed automatically. You could then invite the subgroups to their own groups, and easily send group messages.
OB has changed people's practical lives in some major ways. Not all of these are mine personally:
"I donated more money to anti aging, risk reduction, etc"
"I signed up for cryonics."
"I wear a seatbelt in a taxi even when no one else does."
"I stopped going to church but started hanging out socially with aspiring rationalists."
"I decided rationality works and started writing down my goals and pathways to them."
"I decided it's important for me to think carefully about what my ultimate values are."
Various people on our blogs have talked about how useful a whuffie concept would be (see especially Vassar on reputation markets). I agree that Less Wrong's karma scores encourage an inward focus; however, the general concept seems so useful that we ought to consider finding a way to expand karma scores beyond just this site, as opposed to shelving the idea. Whether that is best implemented through Facebook or some other means is unclear to me. Can anyone link to any analysis on this?
Michael: "The closest thing that I have found to a secular church really is probably a gym."
Perhaps in the short run we could just use the gym directly, or analogs. Aristotle's Peripatetic school and other notable thinkers who walked suggest that having people walk while talking, thinking, and socializing is worth some experimentation. This could be done by walking outside or on parallel exercise machines in a gym (it would be informative to see which worked better, to tease out what it is about walking that improves thinking, assuming the hypothesi...
Agree with and like the post. Two related avenues for application:
Using this effect to accelerate one's own behavior modification by making commitments in the direction of the type of person one wants to become. (e.g. donating even small amounts to SIAI to view oneself as rationally altruistic, speaking in favor of weight loss as a way to achieve weight loss goals, etc.). Obviously this would need to be used cautiously to avoid cementing sub-optimal goals.
Memetics: Applying these techniques to others may help them adopt your goals without your needing to explicitly push them too hard. Again, caution and foresight are advisable.
The current best answer we know of seems to be to write each consistent hypothesis in a formal language, and to weight longer explanations inverse-exponentially, renormalizing such that your total probability sums to 1. Look up AIXI and the universal prior.
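A minimal sketch of that weighting scheme (Python; the hypotheses and their code lengths are invented for illustration):

```python
# Toy universal-prior-style weighting: each hypothesis gets weight
# 2**(-L), where L is the length in bits of its shortest encoding,
# then everything is renormalized to sum to 1.
code_length_bits = {"H1": 10, "H2": 12, "H3": 20}  # name -> bits

raw = {h: 2.0 ** -L for h, L in code_length_bits.items()}
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}

for h, p in prior.items():
    print(h, round(p, 5))   # H1 ~0.79938, H2 ~0.19984, H3 ~0.00078
assert abs(sum(prior.values()) - 1.0) < 1e-12
```

Note this toy version only works over a countable hypothesis set; getting the unnormalized weights to converge in general requires prefix-free encodings, which is part of the fine print in the Solomonoff/AIXI formalization.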