Simetrical

20

The question required us to provide real numbers, and infinitesimals are not real numbers. Even if you allowed infinitesimals, though, 0 would still be the Nash equilibrium. After all, if 1/∞ is a valid guess, so is (1/∞)*(2/3), etc., so the exact same logic applies: any number larger than 0 is too large. The only value where everyone could know everyone else's choice and still not want to change is 0.
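The iterated reasoning can be sketched in a few lines (the starting guess of 100 is an arbitrary illustration of mine, not anything from the question):

```python
# Iterated best response in a "guess 2/3 of the average" game: whatever
# everyone currently guesses, each player would rather guess 2/3 of it,
# so repeating the reasoning drives every positive guess toward 0.
guess = 100.0  # arbitrary positive starting point
for _ in range(200):
    guess *= 2 / 3  # best response to everyone guessing `guess`
print(guess)  # vanishingly small: the only fixed point is 0
```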

00

Doing this with server-side scripting is crazy. You'd have to submit a zillion forms and wait a second for the answer on each try. This is precisely the sort of thing client-side scripting is meant for.

Of course, if you had JavaScript disabled, the page would explain that it needed JavaScript rather than just showing a blank page.

30

I got the wrong rule, but it said I was right because I made only one mistake. I thought the rule was that a sequence was awesome if it was an increasing arithmetic progression. The only one of your examples at the end that contradicted this was 2, 9, 15. All the other awesome ones were, in fact, increasing arithmetic progressions: five out of the six awesome sequences you gave at the end. You should probably cut that down to two or three, so I'd have lost.
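As a sketch, the rule I thought I'd found is trivial to check mechanically (the other example sequences aren't reproduced here, so only 2, 9, 15 and a made-up progression are tested):

```python
def is_increasing_ap(seq):
    """True if seq is an increasing arithmetic progression."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    return all(d == diffs[0] for d in diffs) and diffs[0] > 0

print(is_increasing_ap([1, 4, 7]))   # True: common difference 3
print(is_increasing_ap([2, 9, 15]))  # False: differences 7 and 6
```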

00

That clears things up a lot. I hadn't really thought about the multiple-models take on it (despite having read the "prior probabilities as mathematical objects" post). Thanks.

-10

Even accepting the premise that voting for the proposition was clearly wrong, that's a single anecdote. It does nothing to demonstrate that Mormons are *overall* worse people than atheists. It is only a single point in the atheists' favor. I could respond with examples of atheists doing terrible things, e.g., the amount of suffering caused by communists.

Anecdotes are not reliable evidence; you need a careful, thorough, and systematic analysis to be able to make confident statements. It's really surprised me how commonly people supply purely anecdotal evidence here and expect it to be accepted (and how often it is accepted!). This is a site all about promoting rationalism, and part of that is reserving judgment unless you have good evidence.

I really don't think a systematic analysis of the morality of Mormons vs. atheists exists, for any utility function you might pick. In fact, that kind of analysis is probably close to impossible, even if you can precisely specify a utility function that a lot of people will agree on. To begin with, it would absolutely have to be controlled to be meaningful: the cultural and other backgrounds of atheists are surely not comparable, on average, to those of Mormons.

I think this is an issue that rationalists just need to admit uncertainty about. That's life, when you're rational. Only religious people get to be certain most of the time about moral issues. A Mormon asked the same question would be able to say with confidence that the atheists caused more evil, since not following Mormonism is so evil that it would clearly outweigh any minor statistical differences between the two groups in terms of things like violent crime. If you believe in utility functions that depend on all sorts of complex empirical questions, you really can't answer most moral questions very confidently.

10

I think this post could have been more formally worded. It draws a distinction between two types of probability assignment, but the only practical difference given is that you'd be surprised if you're wrong in one case but not the other. My initial thought was just that surprise is an irrational thing that should be disregarded ― there's no term for "how surprised I was" in Bayes' Theorem.

But let's rephrase the problem a bit. You've made your probability assignments based on Omega's question: say 1/12 for each color. Now consider another situation where you'd give an identical probability assignment. Say I'm going to roll a demonstrated-fair twelve-sided die, and ask you the probability that it lands on one. Again, you assign 1/12 probability to each possibility.

(Actually, these assignments are spectacularly wrong, since they give a zero probability to all other colors/numbers. *Nothing* deserves a zero probability. But let's assume you gave a negligible but nonzero probability to everything else, and 1/12 is just shorthand for "slightly less than 1/12, but not enough to bother specifying".)

So, as far as everything goes, your probability assignments for the two cases look identical up to this point. Now let's say I offer you a bet: we'll run the event in question (drawing a bead and putting it back, or rolling the die) a million times. If the frequency of red/one in that sample comes out within 1% of your estimated probability, I give you $1000. Otherwise, you give me $1000.

In the case of the die, we would all take the bet in a heartbeat. We're *very sure* that our figures are correct, since the die is demonstrated to be fair, and 1% is a lot of wiggle room for the law of large numbers. But you'd have to be crazy to take the same bet on the jar, despite having assigned a precisely identical chance of winning.
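A quick simulation of the die half of the bet (reading "within 1%" generously, as an absolute error of 0.01, which is an assumption on my part):

```python
import random

# Simulate the die half of the bet: roll a fair twelve-sided die a
# million times and check how far the observed frequency of "one"
# lands from the assigned probability of 1/12.
random.seed(0)  # arbitrary seed, for reproducibility
rolls = 1_000_000
ones = sum(1 for _ in range(rolls) if random.randint(1, 12) == 1)
freq = ones / rolls
print(abs(freq - 1 / 12))  # absolute error: comfortably under 0.01
```

The law of large numbers does all the work here; nothing comparable can be simulated for the jar, because there we don't actually know the distribution we'd have to feed in.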

So what's the difference? Isn't all the information you care about supposed to be encapsulated in your probability distribution? What is the mathematical distinction between these two cases that causes such a clear difference in whether a given bet is rational? Are we supposed to not only assign probabilities to which events will occur, but also to our probabilities themselves, ad infinitum?
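For what it's worth, one standard way to make that last question precise (my own gloss, not something from the post) is to put a distribution on the probability itself, e.g. a Beta distribution over the chance of red/one:

```python
# An assumed formalization (my gloss, not from the post): track a Beta
# distribution over the unknown probability p itself.  The die and the
# jar can share the same mean, 1/12, while differing wildly in how
# concentrated the belief is, which is exactly what the bet exposes.
def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

die_mean, die_var = beta_mean_var(1e6, 11e6)  # sharply peaked near 1/12
jar_mean, jar_var = beta_mean_var(1.0, 11.0)  # same mean, very diffuse
print(die_mean == jar_mean)  # True: identical point estimates
print(die_var < jar_var)     # True: but very different confidence
```

On this picture the two cases really do carry different information: new evidence barely moves the peaked belief but moves the diffuse one a lot, which is why only one of the bets looks sane.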

70

I see this conclusion as a mistake: being surprised is a way of translating between intuition and explicit probability estimates. If you are not surprised, you should assign high enough probability, and otherwise if you assign tiny probability, you should be surprised (modulo known mistakes in either representation).

That's not true at all. Before I'm dealt a bridge hand, my probability assignment for getting the hand J♠, 8♣, 6♠, Q♡, 5♣, Q♢, Q♣, 5♡, 3♡, J♣, J♡, 2♡, 7♢ in that order would be one in 3,954,242,643,911,239,680,000. But I wouldn't be the least bit surprised to get it.
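(That figure is just the number of ordered 13-card deals from a 52-card deck, 52 × 51 × … × 40; a couple of lines of Python confirm it:)

```python
import math

# Number of ordered 13-card hands from a 52-card deck:
# 52 * 51 * ... * 40, i.e. the 13-permutations of 52 cards.
ordered_hands = math.perm(52, 13)
print(ordered_hands)  # 3954242643911239680000
```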

In the terminology of statistical mechanics, I guess surprise isn't caused by low-probability *microstates* ― it's caused by low-probability *macrostates*. (I'd have been *very* surprised if that were a full suit in order, despite the fact that a priori that has the same probability.) What you define as a macrostate is to some extent arbitrary. In the case of bridge, you'd probably divide up hands into classes based on their utility in bridge, and be surprised only if you get an unlikely *type* of hand.

In this case, I'd probably divide the outcomes up into macrostates like "red", "some other bright color like green or blue", "some other common color like brown", "a weird color like grayish-pink", and "something other than a solid-colored ball, or something I failed to even think of". Each macrostate would have a pretty high probability (including the last: who knows what Omega's up to?), so I wouldn't be surprised at any outcome.

This is an off-the-cuff analysis, and maybe I'm missing something, but the idea that any low-probability event should be surprising certainly can't be correct.

50

Huh. Do you need me to post a few dozen links to articles detailing incidents where Mormons did evil acts because of their religious beliefs? I mean, Mormonism isn't as inherently destructive as Islam, but it's not Buddhism either.

Do you have empirical evidence that Mormons are more likely to cause harm than atheists? (Let's say in the clear-cut sense of stabbing people instead of in the sense of spreading irrationality.) Mormons might do more bad things because their god requires it, but atheists might do more bad things because they don't have a god to require otherwise. They might be more likely to become nihilists or solipsists and not care about other people, say, acting purely selfishly. A priori, I have no idea which one is correct.

It seems that as a rationalist, you should be wary of assigning high probabilities here without direct empirical evidence. Especially since you presumably suffer from in-group bias. But perhaps you're aware of studies that support your view that religion is harmful in a simple sense?

(If you consider spreading religion inherently evil, then you have more reason to presume that Mormonism is harmful. You would still have to argue that the harm outweighs any possible benefit, but you'd have a stronger case for assuming that. However, by your comparisons to Islam and Buddhism you seem to mean plain old violence and so forth.)

100

If the question is "Should Wednesday, while not exactly choosing to believe religion, avoid thinking about it too hard because she thinks doing so will make her an atheist?", then she's already an atheist on some level, because she thinks knowing more will make her more atheist, which implies atheism is true. This reduces to the case of deception, which you seem to be against unconditionally.

That's not necessarily true. Perhaps she believes Mormonism is almost certainly right, but acknowledges that she's not fully rational and might be misled if she read too many arguments against it. Most Christians believe in the idea that God (or Satan) tempts people to sin, and that avoiding temptation is a useful tactic to avoid sin. Kind of like avoiding stores where candy is on display if you're trying to lose weight, say. You know what's right in advance, but you're afraid of losing resolve.

Certainly whatever your beliefs, some people who disagree with you are sufficiently charismatic and good at rhetoric that they might persuade you if you give them the chance. (Well, for most of us, anyway.) How many atheist Less Wrongers would be able to withstand lengthy debate with very talented missionaries? Some, certainly. Most, probably. All? I doubt it.

Overall, though, an excellent response, and I agree with almost all the rest of it.

In that case Warrigal would have said "rational" rather than "real". Numbers such as 17π would presumably be fine too, not just fractions. "No funny business" presumably means "I'd better be able to figure out easily whether it's the closest". For instance, the number "S(12)/2^n, where S is the max shifts function and n is the smallest integer such that my number is less than 100" is technically well-defined in a mathematical sense. But if you could actually figure out what it is, you could publish a paper about it in any journal of computer science you liked.