Take heed, for it is a trap

If you have worked your way through most of the sequences you are likely to agree with the majority of these statements:

  • When people die we should cut off their heads so we can preserve those heads and make the person come back to life in the (far far) future.
  • It is possible to run a person on Conway's Game of Life. This would be a person as real as you or me, and wouldn't be able to tell he's in a virtual world because it looks exactly like ours.
  • Right now there exist many copies/clones of you, some of which are blissfully happy and some of which are being tortured and we should not care about this at all.
  • Most scientists disagree with this but that's just because it sounds counter-intuitive and scientists are biased against counterintuitive explanations.
  • Besides, the scientific method is wrong because it is in conflict with probability theory. Oh, and probability is created by humans, it doesn't exist in the universe.
  • Every fraction of a second you split into thousands of copies of yourself. Of course you cannot detect these copies scientifically, but that's because science is wrong and stupid.
  • In fact, it's not just people that split but the entire universe splits over and over.
  • Time isn't real. There is no flow of time from 0 to now. All your future and past selves just exist. 
  • Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that's smarter than any human. When this happens humanity will probably be wiped out.
  • To protect us against computers destroying humanity we must create a super-powerful computer intelligence that won't destroy humanity.
  • Ethics are very important and we must take extreme caution to make sure we do the right thing. Also, we sometimes prefer torture to dust specks.
  • If everything goes to plan a super computer will solve all problems (disease, famine, aging) and turn us into super humans who can then go on to explore the galaxy and have fun.
  • And finally, the truth of all these statements is completely obvious to those who take the time to study the underlying arguments. People who disagree are just dumb, irrational, miseducated or a combination thereof. 
  • I learned this all from this website by these guys who want us to give them our money.

In two words: crackpot beliefs.

These statements cover only a fraction of the sequences, and although they're deliberately phrased to incite kneejerk disagreement and ugh-fields, I think most LW readers will find themselves in agreement with almost all of them. And if not, you can always come up with better examples that illustrate some of your own non-mainstream beliefs.

Think back for a second to your pre-Bayesian days. Think back to the time before your exposure to the sequences. Now the question is: what probability would you have assigned to the claim that some chain of arguments could persuade you the statements above are true? In my case, it would have been near zero.

You can take somebody who likes philosophy and is familiar with the different streams and philosophical dilemmas, who knows computation theory and classical physics, who has a good understanding of probability and math, and who is a naturally curious reductionist. This person will still roll his eyes and sarcastically dismiss the ideas enumerated above. After all, these are crackpot ideas, and people who believe them are so far "out there" they cannot be reasoned with!

That is really the bottom line here. You cannot explain the beliefs that follow from the sequences, because they have too many dependencies. And even if you did have time to go through all the necessary dependencies, explaining a belief is still an order of magnitude more difficult than following an explanation written down by somebody else, because in order to explain something you have to juggle two mental models: your own and that of the listener.

Some of the sequences touch on the concept of the cognitive gap (inferential distance). We have all learned the hard way that we can't expect people to just understand what we say, and we can't expect short inferential distances. In practice there is just no way to bridge the cognitive gap. This isn't a big deal for most educated people, because people don't expect to understand complex arguments in other people's fields, and all educated intellectuals are on the same team anyway (well, most of the time). For crackpot LW beliefs it's a whole different story, though. I suspect most of us have found that out the hard way.

Rational Rian: What do you think is going to happen to the economy?

Bayesian Bob: I'm not sure. I think Krugman believes that a bigger cash injection is needed to prevent a second dip.

Rational Rian: Why do you always say what other people think, what's your opinion?

Bayesian Bob: I can't really distinguish between good economic reasoning and flawed economic reasoning because I'm a layman. So I tend to go with what Krugman writes, unless I have a good reason to believe he is wrong. I don't really have strong opinions about the economy, I just go with the evidence I have.

Rational Rian: Evidence? You mean his opinion.

Bayesian Bob: Yep.

Rational Rian: Eh? Opinions aren't evidence.

Bayesian Bob: (Whoops, now I have to either explain the nature of evidence on the spot or Rian will think I'm an idiot with crazy beliefs. Okay then, here goes.) An opinion reflects the belief of the expert. These beliefs can either be uncorrelated with reality, negatively correlated or positively correlated. If there is absolutely no relation between what an expert believes and what is true then, sure, it wouldn't count as evidence. However, it turns out that experts mostly believe true things (that's why they're called experts) and so the beliefs of an expert are positively correlated with reality and thus his opinion counts as evidence.

Rational Rian: That doesn't make sense. It's still just an opinion. Evidence comes from experiments.

Bayesian Bob: Yep, but experts have either done experiments themselves or read about experiments other people have done. That's what their opinions are based on. Suppose you take a random scientific statement, you have no idea what it is, and the only thing you know is that 80% of the top researchers in that field agree with that statement, would you then assume the statement is probably true? Would the agreement of these scientists be evidence for the truth of the statement?

Rational Rian: That's just an argument ad populus! Truth isn't governed by majority opinion! It is just religious nonsense that if enough people believe something then there must be some truth to it.

Bayesian Bob: (Ad populum! Populum! Ah, crud, I should've phrased that more carefully.) I don't mean that majority opinion proves that the statement is true, it's just evidence in favor of it. If there is counterevidence the scale can tip the other way. In the case of religion there is overwhelming counterevidence. Scientifically speaking religion is clearly false, no disagreement there.

Rational Rian: There's scientific counterevidence for religion? Science can't prove non-existence. You know that!

Bayesian Bob: (Oh god, not this again!) Absence of evidence is evidence of absence.

Rational Rian: Counter-evidence is not the same as absence of evidence! Besides, stay with the point, science can't prove a negative.

Bayesian Bob: The certainty of our beliefs should be proportional to the amount of evidence we have in favor of the belief. Complex beliefs require more evidence than simple beliefs, and the laws of probability, Bayes specifically, tell us how to weigh new evidence. A statement, any statement, starts out with a 50% probability of being true, and then you adjust that percentage based on the evidence you come into contact with. (I shouldn't have said that 50% part. There's no way that's going to go over well. I'm such an idiot.)

Rational Rian: A statement without evidence is 50% likely to be true!? Have you forgotten everything from math class? This doesn't make sense on so many levels, I don't even know where to start!

Bayesian Bob: (There's no way to rescue this. I'm going to cut my losses.) I meant that in a vacuum we should believe it with 50% certainty, not that any arbitrary statement is 50% likely to accurately reflect reality. But no matter. Let's just get something to eat, I'm hungry.

Rational Rian: So we should believe something even if it's unlikely to be true? That's just stupid. Why do I even get into these conversations with you? *sigh* ... So, how about Subway?

 


 

The moral here is that crackpot beliefs are low status. Not just low-status like believing in a deity, but majorly low status. When you believe things that are perceived as crazy and when you can't explain to people why you believe what you believe then the only result is that people will see you as "that crazy guy". They'll wonder, behind your back, why a smart person can have such stupid beliefs. Then they'll conclude that intelligence doesn't protect people against religion either so there's no point in trying to talk about it.

If you fail to conceal your low-status beliefs you'll be punished for it socially. If you think that they're in the wrong and that you're in the right, then you missed the point. This isn't about right and wrong, this is about anticipating the consequences of your behavior. If you choose to talk about outlandish beliefs when you know you cannot convince people that your belief is justified, then you hurt your credibility and get nothing for it in exchange. You cannot repair the damage easily, because even if your friends are patient and willing to listen to your complete reasoning, you'll (accidentally) expose three even crazier beliefs you have.

An important life skill is the ability to get along with other people and not to expose yourself as a weirdo when it isn't in your interest to do so. So take heed and choose your words wisely, lest you fall into the trap.

 


EDIT - Google Survey by Pfft

PS: intended for /main but since this is my first serious post I'll put it in discussion first to see if it's considered sufficiently insightful.


To everyone who just read this and is about to argue with the specific details of the bullet points or the mock argument:

Don't bother, they're (hopefully) not really the point of this.

Focus on the conclusion and the point that LW beliefs have a large inferential distance. The summary of this post which is interesting to talk about is "some (maybe most) LW beliefs will appear to be crackpot beliefs to the general public" and "you can't actually explain them in a short conversation in person because the inferential distance is too large". Therefore, we should be very careful to not get into situations where we might need to explain things in short conversations in person.

Therefore, we should be very careful to not get into situations where we might need to explain things in short conversations in person.

Should I start staying indoors more?

You could. Or you could just refuse to get into arguments about politics/philosophy. Or you could find a social group such that these things aren't problems.

I certainly don't have amazing solutions to this particular problem, but I'm fairly sure they exist.

The solutions that I have so far are just finding groups of people who tend to be open-minded, and then discussing things from the perspective of "this is interesting, and I think somewhat compelling".

When I get back from vacation I intend to do more wandering around and talking to strangers about LWy type stuff until I get the impression that I don't sound like a crackpot. When I get good at talking about it with people with whom social mistakes are relatively cheap, I'll talk about it more with less open-minded friends.

This comment makes the OP's point effectively, in a fraction of its length and without the patronizing attitude. Comment upvoted, OP downvoted.

The way to have these conversations is to try to keep them as narrow as possible. You're not trying to explain your worldview, you're just trying to take the other person one step forward in inferential distance. There should be one point that you're trying to make that you want the other person to take away from the conversation, and you should try to make that point as clearly and simply as possible, in a way that will be understandable to the other person. Maybe you can give them a glimpse that there's more to your thinking than just this one point, but only if it doesn't distract from that point.

Bob doesn't do this. He feels that he needs to explain the nature of evidence, he uses an example which is controversial to Rian (and thus is a distraction from the point that Bob is trying to establish with the example), and he responds to every issue that Rian brings up instead of trying to bring the conversation back to the original point. Bob's problem is not that he has particularly unusual or crazy beliefs, it's that he has various views that are different from Rian's and he lets the conversation bounce from one to another without ever sticking with one point of disagreement long enough to get a clear explanation of his views across.

you are likely to agree with the majority of these statements

If you hadn't amplified the oddness of the beliefs on the list, this would be true. The trouble is, the way you amplified oddness is mostly by changing the substance of what was communicated, not just the style. Like by using over-general words so that people will hear one connotation when you might have been trying to say another. And so, why should we agree with statements that say the wrong thing?

Starting out with an incorrect guess about the reader is really bad for the rest of the post. You should start with your message instead, maybe even use personal experience - "I've had conversations where I brought up beliefs spread on LW, and people thought I was a crackpot."

But I also disagree with the thesis that the solution is to try to "hide the crazy." Bayesian Bob doesn't break things down into small enough parts and tries to use too many "impressive" statements that are actually harmful to communication. So a first action might be to stop digging himself deeper, but ultimately I think Bob should try to get better at explaining.

I think Bob should try to get better at explaining.

Got any tips for Bob and the rest of us?

The only stratagem that occurs to me after reading Zed's dialogue is that Bob should have spent more time motivating his solutions. I notice that Rian is the one asking all the questions, while Bob is the one offering short answers. Perhaps if Bob had been asking Rian why someone would believe in the opinions of experts and allowed him to offer possible solutions, and then guided Rian's own questioning in the right direction with more questions, the exchange would have gone differently.

I'm a bad explainer in this sort of situation, too, but perhaps something like:

Rian: That doesn't make sense. It's still just an opinion. Evidence comes from experiments.

Bob: Hmm... perhaps we think about evidence in slightly different ways. So is evidence only something that comes from an experiment? Do you form your opinions using only evidence, or are there other ingredients, too?

Once I've got a positive position staked out from Rian, I can much more easily show him the reasons that I think they're wrong. I'm no longer at risk of appearing a credulous crackpot, but instead appear to be the level-headed skeptical one.

ETA: One more attempt at summarizing my idea: don't offer your solutions until the problems are understood.

If you fail to conceal your low-status beliefs you'll be punished for it socially.

This shows a lack of understanding of signaling theory.

A poor kid wears middle class clothes so that people will think they're middle class and not poor. A middle class person wears rich clothes so that people will think they're rich and not middle class. A rich person wears whatever they want, because middle class people are already wearing 'rich' clothes and nobody's going to confuse them for being poor while they're matching ripped jeans with Rolex watches. If you and your beliefs are already low status, then having 'crackpot' beliefs will push your status lower. If you are already high status, then eccentric beliefs will increase your status. At the highest levels of status, people will automatically and unconsciously update their beliefs toward yours.

Your story sounds like Rian is much higher status than Bob. Rian's got kung-fu-master-level rationality skills versus low-level Bayesian judo. Rian also sounds more articulate and intelligent than Bob, although that might be the halo effect talking, since we already established he's higher status. Bob is outgunned on every level and isn't smart enough to extricate himself, so of course he's going to be punished for it socially. It could have been an argument between any two ideological positions and Bob would have lost.

It says nothing about how most of us on Less Wrong should display our beliefs.

He may be more familiar with certain other internet communities and assume most LessWrong readers have low status.

http://lesswrong.com/lw/28w/aspergers_poll_results_lw_is_on_the_spectrum/

Only about 1 in 10 people on Less Wrong are "normal" in terms of the empathizing/systematizing scale, perhaps 1 in 10 are far enough out to be full blown Aspergers, and the rest of us sit somewhere in between, with most people being more to the right of the distribution than the average Cambridge mathematics student.

I'd say that's pretty damning.

It is not "damning". The test diagnoses a particular cognitive style, characterised by precision and attention to detail - this is of no great benefit in social settings, and in extreme cases can lead to difficulty in social interaction and peculiar behaviour. On the other hand, in sciences, engineering and probably philosophy, this style brings major benefits. The overall quality of LW site is a reflection of this.

Aspergers and anti-social tendencies are, as far as I can tell, highly correlated with low social status. I agree with you that the test also selects for people who are good at the sciences and engineering. Unfortunately scientists and engineers also have low social status in western society.

First Xachariah suggested I may have misunderstood signaling theory. Then Incorrect said that what I said would be correct assuming LessWrong readers have low status. Then I replied with evidence that I think supports that position. You probably interpreted what I said in a different context.

I classed this as a 'why commonly held LW beliefs are wrong' post when I first saw the list, then skipped to the conclusion (which made a really useful point, for which I upvoted the post.) I'm mentioning this because I think that the post would communicate better if you revealed your main point earlier.

Thank you, I'll bear that in mind next time.

The conversation between Rational Rian and Bayesian Bob is uncannily reminiscent of several conversations I had when I first grew infatuated with some of EY's writings and LessWrong overall. This later led me to very quickly start wondering if the community would be willing to dedicate some intellectual effort and apply rationality to hiding bad signalling.

I think the OP is worth posting in the main section. But someone should write up something, about how to raise the sanity waterline without damaging your own reputation after that. Now I know that when people call on someone to do something, this more or less means no one, and especially not me. This is why I've been doing my own thinking on the matter, but I'd first like to know if people on LW are interested at all in this line of thought.

For an example: a basic stratagem seems to be to successfully diagnose, perhaps even affirm, some of your acquaintances' beliefs, then over time present some simple and powerful (if by now obsolete or superseded) arguments that first set several of LW's more prominent writers (or yourself) on a path to the current set of beliefs. This naturally isn't rationality building (though it might happen in the process), just spreading beliefs, but the objective here is to change the in-group norms of your social circle.

Then, you can start individually building rationality skills.

I would definitely be interested.

I'd also be interested in posts about raising your status (I'm thinking social skills) since status is really useful.

I think that's a great idea, and if you have any ideas please share.

It is not nearly as bad as you make it out. Bayesian Bob just seems really bad at explaining.

Rian seems to not consider detectives investigating a crime to be gathering evidence, but Bob does not seem to notice this. We can come up with examples of socially categorized types of evidence and explain why the categories are socially useful.

"Absence of Evidence is Evidence of Absence" can be explained in scientific terms. If a scientific experiment looking for evidence of a theory produces no results, that is evidence against the theory. This is easier to deal with in a scientific experiment because its controlled nature lets you know how hard the experiment was looking for evidence, and thus calculate how likely it would have been to find that evidence if the theory were correct. Outside that context the principle is harder to apply, because the conditional probability is harder to calculate, but it is still valid.
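As a rough numeric illustration of that conditional-probability point (the probabilities below are invented for the example, not taken from any actual experiment): suppose an experiment would detect the effect 80% of the time if the theory were true, and would produce a spurious "detection" 10% of the time if it were false. A null result then shifts the probability of the theory downward.

    def bayes_update(prior, p_obs_if_true, p_obs_if_false):
        # Posterior probability of the theory after making the observation.
        return (prior * p_obs_if_true) / (
            prior * p_obs_if_true + (1 - prior) * p_obs_if_false)

    # Probability of seeing a *null* result under each hypothesis (illustrative numbers).
    p_null_if_true = 1 - 0.8    # theory true, but the experiment misses the effect 20% of the time
    p_null_if_false = 1 - 0.1   # theory false, and no spurious detection 90% of the time

    posterior = bayes_update(0.5, p_null_if_true, p_null_if_false)
    print(round(posterior, 2))  # 0.18: absence of the expected evidence counted against the theory

The size of the downward update depends entirely on how hard the experiment was looking, which is exactly the conditional probability that is easy to state in a controlled setting and hard to estimate outside of one.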

Not once did Bob bring up such concepts as likelihood ratios or conditional probability.

And plenty of other comments have noted the problem with "starts out with a 50% probability".

As has also been pointed out already, most of the bullet point statements are either not actually controversial, or distorted from the idea they refer to. In particular, though probability theory does not perfectly align with the scientific method, it does explain how the scientific method has been as successful as it is.

I myself have discussed LW ideas with people from skeptics and atheist groups, and not come off as a crackpot.

A statement, any statement, starts out with a 50% probability of being true, and then you adjust that percentage based on the evidence you come into contact with.

Zed, you have earned an upvote (and several more mental ones) from me for this display of understanding on a level of abstraction even beyond what some LW readers are comfortable with, as witnessed by other comments. How prescient indeed was Bayesian Bob's remark:

(I shouldn't have said that 50% part. There's no way that's going to go over well. I'm such an idiot.)

You can be assured that poor Rational Rian has no chance when even Less Wrong has trouble!

But yes, this is of course completely correct. 50% is the probability of total ignorance -- including ignorance of how many possibilities are in the hypothesis space. Probability measures how much information you have, and 50% represents a "score" of zero. (How do you calculate the "score", you ask? It's the logarithm of the odds ratio. Why should that be chosen as the score? Because it makes updating additive: when you see evidence, you update your score by adding to it the number of bits of evidence you see.)
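A minimal sketch of that scoring arithmetic (the helper functions are mine, named only for illustration): a probability of 50% maps to a score of zero bits, and each bit of evidence, i.e. each factor-of-two likelihood ratio, adds one to the score.

    import math

    def prob_to_bits(p):
        # Log base-2 odds: the "score" in bits; p = 0.5 gives exactly 0.
        return math.log2(p / (1 - p))

    def bits_to_prob(bits):
        # Inverse of prob_to_bits.
        return 1 / (1 + 2 ** -bits)

    score = prob_to_bits(0.5)   # 0.0 bits: total ignorance
    score += 3                  # observe 3 bits of evidence (an 8:1 likelihood ratio)
    print(bits_to_prob(score))  # 0.888..., i.e. odds of 8:1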

Of course, we almost never reach this level of ignorance in practice, which makes this the type of abstract academic point that people all-too-characteristically have trouble with. The step of calculating the complexity of a hypothesis seems "automatic", so much so that it's easy to forget that there is a step there.

If P is the probability that an ideal Bayesian would assign to a proposition A on hearing A but having observed no relevant evidence, then you have described the meta expected value of P in logical ignorance before doing any calculations (and assuming an ignorance prior on the distribution of propositions one might hear about). It seems to me that you have made excessively harsh criticism against those who have made correct statements about P itself.

[Y]ou have described the meta expected value of P...It seems to me that you have made excessively harsh criticism against those who have made correct statements about P itself.

See my other comments. In my opinion, the correct point of view is that P is a variable (or, if you prefer, a two-argument function); the "correct" statements are about a different value of P from the relevant one (resp. depend on inappropriately fixing one of the two arguments).

EDIT: Also, I think this is the level on which Bayesian Bob was thinking, and the critical comments weren't taking this into account and were assuming a basic error was being made (just like Rational Rian).

Of course, we almost never reach this level of ignorance in practice,

I think this is actually too weak. Hypothesis specification of any kind requires some kind of working model/theory/map of the external world. Otherwise the hypothesis doesn't have semantic content. And once you have that model, some not-totally-ignorant prior will fall out. You're right that 50% is the probability of total ignorance, but this is something of a conceptual constant that falls out of the math; you can't actually specify a hypothesis with so little information.

You're right that 50% is the probability of total ignorance, but this is something of a conceptual constant that falls out of the math

Yes, that's exactly right! It is a conceptual constant that falls out of the math. It's purely a formality. Integrating this into your conceptual scheme is good for the versatility of your conceptual scheme, but not for much else -- until, later, greater versatility proves to be important.

People have a great deal of trouble accepting formalities that do not appear to have concrete practical relevance. This is why it took so long for the numbers 0 and 1 to be accepted as numbers.

It's purely a formality

I disagree with this bit. It's only purely a formality when you consider a single hypothesis, but when you consider a hypothesis that is composed of several parts, each of which uses the prior of total ignorance, then the 0.5 prior probability shows up in the real math (which in turn affects the decisions you make).

I describe an example of this here: http://lesswrong.com/r/discussion/lw/73g/take_heed_for_it_is_a_trap/4nl8?context=1#4nl8

If you think that the concept of the universal prior of total ignorance is purely a formality, i.e. something that can never affect the decisions you make, then I'd be very interested in your thoughts behind that.

A statement, any statement, starts out with a 50% probability of being true, and then you adjust that percentage based on the evidence you come into contact with.

Is it not propositions that can only be true or false, while statements can be other things?

What's the relevance of this question? Is there a reason "statement" shouldn't be interpreted as "proposition" in the above?

As I see it, statements start with some probability of being true propositions, some probability of being false propositions, and some probability of being neither. So a statement about which I have no information, say a random statement which a random number generator was designed to preface with "Not" half the time, has a less than 50% chance of being true.

This speaks to the intuition that statements fail to be true most of the time. "A proposition, any proposition, starts out with a 50% probability of being true" is only true assuming the given statement is a proposition, and I think knowing that an actual statement is a proposition entails being contaminated by knowledge about the proposition's contents.

As I see it, statements start with some probability of being true propositions, some probability of being false propositions, and some probability of being neither.

Okay. So "a statement, any statement, is as likely to be true as false (under total ignorance)" would be more accurate. The odds ratio remains the same.

The intuition that statements fail to be true most of the time is wrong, however. Because, trivially, for every statement that is true its negation is false and for every statement that is false its negation is true. (Statements that have no negation are neither true nor false)

It's just that (interesting) statements in practice tend to be positive claims (about the world), and it's much harder to make a true positive claim about the world than a true negative one. This is why a long (measured in Kolmogorov complexity) positive claim is very unlikely to be true and an equally long negative claim is very likely to be true. Also, it's why a long conjunction of terms is unlikely to be true and a long disjunction of terms is likely to be true. Again, symmetry.
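A toy numeric version of that symmetry (assuming, purely for illustration, n independent terms each sitting at the 50% ignorance prior): the conjunction's probability shrinks toward 0 exactly as fast as the matching disjunction's grows toward 1.

    n = 10
    p_term = 0.5                            # ignorance prior per term (simplifying assumption)
    p_conjunction = p_term ** n             # all n terms true: ~0.001
    p_disjunction = 1 - (1 - p_term) ** n   # at least one term true: ~0.999
    print(p_conjunction, p_disjunction)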

S=P+N

P=T+F

T=F

S=~T+T

N>0

~~~

~T+T=P+N

~T+T=T+F+N

~T=F+N

~T=T+N

~T>T

Legend:

S -> statements
P -> propositions
N -> non-propositional statements
T -> true propositions
F -> false propositions

I don't agree with condition S = ~T + T.

Because ~T + T is what you would call the set of (true and false) propositions, and I have readily accepted the existence of statements which are neither true nor false. That's N. So you get S = ~T + T + N = T + F + N = P + N

We can just taboo proposition and statement as proposed by komponisto. If you agree with the way he phrased it in terms of hypothesis then we're also in agreement (by transitivity of agreement :)

(This may be redundant, but if your point is that the set of non-true statements is larger than the set of false propositions, then yes, of course, I agree with that. I still don't think the distinction between statement and proposition is that relevant to the underlying point because the odds ratio is not affected by the inclusion or exclusion of non-propositional statements)

Some of those statements in the list are sufficiently unclear that I can't really agree or disagree with them. Others have multiple different claims in them and I agree with some parts and disagree with others. And some are just false.

Most scientists disagree with this but that's just because it sounds counter-intuitive and scientists are biased against counterintuitive explanations.

This one is false, as some other comments have pointed out.

Besides, the scientific method is wrong because it is in conflict with probability theory. Oh, and probability is created by humans, it doesn't exist in the universe.

(Bayesian) Probability theory doesn't say that the scientific method is wrong. It provides a formal specification of why the scientific method (of changing beliefs based on evidence) is correct and how to apply it. The second sentence refers to the true beliefs explained in Probability is in the Mind and Probability is Subjectively Objective, but it mangles them.

Every fraction of a second you split into thousands of copies of yourself. Of course you cannot detect these copies scientifically, but that because science is wrong and stupid.

"Science is wrong and stupid" is just false. It's more like, you can't detect these copies directly but they are implied by math that has been supported by experiment. Unless you want to claim that theoretical physics is unscientific, you have to accept that using math to find out facts about the physical world is possible.

And finally, the truth of all these statements is completely obvious to those who take the time to study the underlying arguments. People who disagree are just dumb, irrational, miseducated or a combination thereof.

This exaggerates the simple (tautological?) true statement that "I have enough evidence to convince a rational person of all the above" until it is not true.

I learned this all from this website by these guys who want us to give them our money.

This is also a misrepresentation. Some of the guys on the website work for a nonprofit and want you to give money to fund their research, which they believe will save many lives. Or if you don't want to do that, they want you give money to some other charity that you believe will save the most possible lives. A majority of the content producers don't ask for money but many of them do give it.

Some of the guys on the website work for a nonprofit and want you to give money to fund their research, which they believe will save many lives. Or if you don't want to do that, they want you give money to some other charity that you believe will save the most possible lives.

If it were a con, it would be a very long con. It wouldn't necessarily look any different from what we see and what you describe, though. It's hard to con this audience, but most of the contributors wouldn't be in on it; in fact it's imperative that they not be.

For future reference, all of my comments and posts are by someone who wants you to give me your money. Likewise for most people, I suspect.

Yes but are my donations to you tax exempt? Can I get a rewards credit card to pay you off? There actually is a difference between "wanting money" and having your livelihood depend on donations to your non-profit foundation.

I do not think there is anything sinister going on at all here, but it is mistaken to think that someone who doesn't nakedly solicit donations and does endorse other charities cannot be running a con (i.e. they have pretenses about the purpose and usage of the money). For some types of fish you can only catch some if you are willing to let most of them go.

Great point.

I will add two levels of nuance.

One is the extent to which individual future donors are necessary.

The other is the divergence between any group's goals and those of its members. A good analogy is heat dissipation: for any one member, one can't predict his or her goals from the group's goals, though in general one can generalize about group members and their goals.

Note that these are matters of extent and not type. Note also how much this is true for other things. :)

I would be much happier with that survey if it used the standard five-degrees-of-belief format rather than a flat agree/disagree. Especially later on, it includes many statements which I believe or disbelieve with low confidence, or which I consider irrelevant or so malformed as to be essentially meaningless.

If you have worked your way through most of the sequences you are likely to agree with the majority of these statements

I realize this is not the main point of the post, but this statement made me curious: what fraction of Less Wrong readers become convinced of these less mainstream beliefs?

To this end I made a Google survey! If you have some spare time, please fill it out. (Obviously, we should overlook the deliberately provocative phrasing when answering).

I'll come back two weeks from now and post a new comment with the results.

Here are the crackpot belief survey results.

All in all, 77 people responded. It seems we do drink the Kool-Aid! Of the substantial questions, the most contentious ones were "many clones" and timeless physics, and even they got over 50%. Thanks to everyone who responded!


I want people to cut off my head when I'm medically dead, so my head can be preserved and I can come back to life in the (far far) future.
Agree 73% Disagree 27%

It is possible to run a person on Conway's Game of Life. This would be a person as real as you or me, and wouldn't be able to tell he's in a virtual world because it looks exactly like ours.
Agree 90% Disagree 10%

Right now there exist many copies/clones of you, some of which are blissfully happy and some of which are being tortured and we should not care about this at all.
Agree 53% Disagree 47%

Most scientists disagree with this but that's just because it sounds counter-intuitive and scientists are biased against counterintuitive explanations.
Agree 32% Disagree 68%

Besides, the scientific method is wrong because it is in conflict with probability theory.
Agree 23% Disagree 77%

Oh, and probability is created by humans, it doesn't exist in the universe.
Agree 77% Disagree 23%

Every fraction of a second you split into thousands of copies of yourself.
Agree 74% Disagree 26%

Of course you cannot detect these copies scientifically, but that's because science is wrong and stupid.
Agree 7% Disagree 93%

In fact, it's not just people that split but the entire universe splits over and over.
Agree 77% Disagree 23%

Time isn't real. There is no flow of time from 0 to now. All your future and past selves just exist.
Agree 53% Disagree 47%

Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that's smarter than any human. When this happens humanity will probably be wiped out.
Agree 68% Disagree 32%

To protect us against computers destroying humanity we must create a super-powerful computer that won't destroy humanity.
Agree 70% Disagree 30%

Ethics are very important and we must take extreme caution to make sure we do the right thing.
Agree 82% Disagree 18%

Also, we sometimes prefer torture to dust specks.
Agree 69% Disagree 31%

If everything goes to plan a super computer will solve all problems (disease, famine, aging) and turn us into super humans who can then go on to explore the galaxy and have fun.
Agree 79% Disagree 21%

the truth of all these statements is completely obvious to those who take the time to study the underlying arguments. People who disagree are just dumb, irrational, miseducated or a combination thereof.
Agree 27% Disagree 73%

I learned this all from this website by these guys who want us to give them our money.
Agree 66% Disagree 34%

I want to fill it out, I really do, but the double statements make me hesitate.

For example I do believe that there are ~lots of "clones of me" around, but I disagree that we shouldn't care about this. It has significant meaning when you're an average utilitarian, or something approaching one.

I think this survey is a really good illustration of why degrees of belief are so helpful.

It is possible to run a person on Conway's Game of Life. This would be a person as real as you or me, and wouldn't be able to tell he's in a virtual world because it looks exactly like ours.

I don't see why you call this a "crackpot belief". The (extended) Church-Turing thesis has near-universal acceptance and implies that humans can be simulated by Turing machines. Similarly, it is widely accepted that Conway's Game of Life can run Turing machines. Physicists who don't believe this are widely regarded as controversial.

near-universal acceptance

among the extremely small subset of mankind who have studied it.

Exactly.

And cryonics is based on the idea that medical death and information death are distinct. This isn't a crackpot belief either.

And many worlds? Feynman and Hawking and many other well known theoretical physicists have supported it.

And that probability theory only exists "in the mind" isn't that controversial either.

So, ummm .... these beliefs are not controversial but they are low-status?

Feynman and Hawking and many other well known theoretical physicists have supported it.

And that is evidence that MW is not a low-status, or crackpot, belief. Certainly not among physicists. Just like "you can run people on game of life" is not a low-status belief, certainly not among computer scientists.

Sure, these beliefs are low-status in communities that are low-status by Less Wrong standards (e.g. various kinds of non-reductionists). And this seems quite unavoidable given some of LW's goals:

"Less Wrong has at least two goals. One goal is to raise the sanity waterline "...

..."the one community on earth ... where the highest-status figures are signed up for cryonics "...

Right, so whether a belief is low status is (among other things) a property of the audience.

But even if the audience consists of people who like philosophy and are familiar with the different streams and philosophical dilemmas, who know computation theory and classical physics, who have a good understanding of probability and math, and who are naturally curious reductionists (a very educated audience), the cognitive gap is still so large that it cannot be bridged in casual conversation.

I think it's fair to say a highly educated reductionist audience is considered high status by almost any standard[1]. And my claim, and my experience, is that if you casually slip in an LW-style argument, then because of the cognitive gap you won't be able to explain exactly what you mean; it's extraordinarily difficult to fall back on arguments that don't depend on the sequences or any other prerequisites.

If you have a belief that you can't explain coherently then I think people will assume that's because your understanding of the subject matter is bad, even though that's not the problem at all. So if you try to explain your beliefs but fail to do so in a manner that makes sense (to the audience) then you face a social penalty.

[1] we can't get away with defining every group that doesn't reason like we do as low-status

I think it's fair to say a highly educated reductionist audience is considered high status by almost any standard[1].

Extreme non-reductionists tend to form communities with inverted status-ladders (relative to ours) where the high-status members constantly signal adherence to certain baseless assertions.

But even if the audience consists of (LW target audience) ... then the cognitive gap is still so large that it cannot be bridged in casual conversation.

A: Hi! Have you ever heard of cellular automata?

B: No. What is it?

A: Well basically you take a large Cartesian grid and every cell can have 2 values: "alive" or "dead". And you modify it using these simple rules ... and you can get all kinds of neat patterns.

B: Ah, I might have read something like that somewhere.

A: Did you know it's Turing-complete?

B: What?

A: Yes, you can run any computer on such a grid! Neat, huh.

B: One learns a new thing every day... (Note: I have gotten this exact response when I told a friend, a mathematician, about the Turing-completeness of the Game of Life.)

A: So, you're a reductionist, right? No magical stuff inside the brain?

B: Yes, of course.

A: So in principle, we could simulate a human on a computer, right?

B: For sufficiently large values of "in principle", yes.

A: So we can run a human on game of life!

B: Oh right. "In principle". Why should I care, again?

OK, fictional evidence, I have only tried the first half of this conversation in reality.
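For the curious, a minimal sketch of the rules A is describing above (the standard Conway rules on an unbounded grid; the glider pattern and the number of steps are arbitrary choices for the example):

    from collections import Counter

    def step(alive):
        # One generation of Conway's Game of Life; `alive` is a set of (x, y) cells.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in alive
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # A cell is alive next generation if it has exactly 3 live neighbours,
        # or exactly 2 live neighbours and is alive already.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in alive)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(glider)   # the same glider shifted by (1, 1): {(2, 1), (3, 2), (1, 3), (2, 3), (3, 3)}

Patterns like this are of course a long way from Turing-completeness, but gliders, glider guns and the logic gates built out of them all follow from nothing more than these two rules.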

This conversation starts from the non-controversial side, slowly building the infrastructure for the final declaration. If you have friends tolerant enough for you to introduce the LW sequences conversation by conversation in a "had you ever heard" type of way, and you have a lot of time, this will work fine.

However, the OP seems to be about the situation where you start by underestimating the inferential gap and saying something as if it should be obvious, while it still sounds crazy to your audience. How do you rescue yourself from that without a status hit, and without being dishonest?

argument ad populum

I think a more correct term in this context would be argumentum ad verecundiam. It's about arguing based on the opinion of a small number of authoritative people, not the general public.

This reminds me of some old OB posts, I think, on non-conformity - the upshot being that you can't get away with being public about all the ways you are a maverick, and that to do so is self-sabotaging.

Related: On interpreting maverick beliefs as signals indicating rationality:
Undiscriminating Skepticism

Only related, though; I take Eliezer as pointing out that individual beliefs are rational but beliefs are highly correlated with other beliefs, so any one position doesn't allow much inference. The OP and Hanson are discussing more practical signaling issues unrelated to epistemic inferences.

If you have worked your way through most of the sequences you are likely to agree with the majority of these statements:

I have, but I don't. A couple I agree with and there are some others about which I can at least see how they could be used as a straw man. Then there are some which are just way off.

Then there is:

It is possible to run a person on Conway's Game of Life. This would be a person as real as you or me, and wouldn't be able to tell he's in a virtual world because it looks exactly like ours.

That I agree with. And it doesn't qualify as a 'crackpot belief'.

The moral here is that crackpot beliefs are low status

You probably should have opened with that. It's true and basically universally accepted here already.

I think Bayesian Bob should just get better at arguing. It's the same thing I tell students when they complain that they can't explain their paper properly in a 4 sentence abstract: The number of possible sentences you might write is very very large. You're going to have to work a lot harder before I'm convinced that no sequence of 4 sentences will suffice.

My experience has been that if I'm arguing about something I know well and I'm very confident about, it never feels like I'm in danger of "losing status".

Sure, but you pick your arguments, right? If you are in a social situation that won't permit more than a few sentences to be exchanged on a topic then you certainly can't expect to take people through more than one level of inference. If you have no idea how many levels of inference are required, it would be quite a risky undertaking to explain why everyone should sign up for cryonics, for example.

That's true. I do avoid "biting off more than I can chew". But then, I wouldn't even challenge someone on religion if the context was wrong. I'm not sure the loss of status would come from arguing for "crackpot beliefs". Rather, if I'm not talking to people who would want to go 10 levels deep with me on an abstract discussion, it's impolite to put the conversation on that track.

I'm trying to think of arguments I've made that have left people a bit horrified. The sort of thing where people have brought it up later and said, "Yeah, but you believe X".

Once I was talking to some friends about capital punishment, and I suggested that capital punishment would be much better applied to white collar crimes, because those crimes likely involve a more explicit cost/benefit analysis, and they tend to have worse social impacts than a single murder anyway. The inferential distance here is high because it relies on a consequentialist view of the purpose of criminal punishments. I was also being a bit contrarian here just for the sake of it. I'm not at all confident about whether this would be helpful or harmful.

In another similar context, I was explaining how I viewed punishments as strictly deterrents, and didn't view "justice" as an intrinsic good. The thought experiment I put forward was, if it were all the same to everyone else in the world, and nobody ever knew about it, I would prefer that Hitler had escaped from the bunker and lived out his life happily in isolation somewhere. Or that he died and went to the best heaven imaginable. I guess this is the "Hitler doesn't deserve so much as a stubbed toe" idea.

I've also horrified people with views on child pornography. Arguing that fictive images (cartoons etc) of children shouldn't be illegal makes people uncomfortable, and questioning the role of child pornography in motivating offenders is also dangerous. I've had good and bad discussions about this. Sometimes I've also been contrarian about this, too.

These are all similar examples, because they're the ones that started to come to mind. There may be other cases on different topics, I don't remember.

Overall I don't regret talking about these things at all, and I think mostly people find me more interesting for my willingness to "go there". Hm, I should point out that I believed all these things before reading LessWrong. So maybe the inferential distance isn't as high anyway.

I agree with everything you said (including the grandparent). Some of the examples you named are primarily difficult because of the ugh-field and not because of inferential distance, though.

One of the problems is that it's strictly more difficult to explain something than to understand it. To understand something you can just go through the literature at your own pace, look up everything you're not certain about, and continue studying until all your questions are answered. When you want to explain something you have to understand it, but you also have to be able to find the right words to bridge the inferential gap, to figure out where the other person's model differs from yours, and so on.

So there will always be a set of problems you understand well enough to be confident they're true but not well enough to explain them to others.

Anthropogenic global warming is a belief that falls into this category for most of us. It's easy to follow the arguments and to look at the data and conclude that yes, humans are the cause of global warming. But to argue for it successfully? Nearly impossible (unless you have studied the subject for years).

Cryonics is also a topic that's notoriously difficult to discuss. If you can argue for that effectively my hat's off to you. (Argue for it effectively => they sign up)

I think you should have introduced your point much earlier on (perhaps at the beginning).

Concealing unconventional beliefs with high inferential distance to those you are speaking with makes sense. Dismissing those beliefs with the absurdity heuristic does not.

Also, I think you underestimate the utility of rhetorical strategies. For example, you could:

  • Talk about these weird beliefs in a hypothetical, facetious manner (or claim you had been).
  • Close the inferential distance gradually using the Socratic method.
  • Introduce them to the belief indirectly. For example, you could link them to a more conventional LessWrong sequence post and let them investigate the others on their own.
  • Ask them for help finding what is objectively and specifically wrong with the weird belief.

I think the Sequences paint a picture of scientists on Many Worlds that is just wrong. Sure, if you count all scientists. But if you just look at the ones whose opinions matter: 58 percent think it's true, and 18 percent disagree.

"Bayesian Bob: ... I meant that in a vacuum we should believe it with 50% certainty..."

No we shouldn't: http://lesswrong.com/lw/jp/occams_razor/

As for proving a negative, I've got two words: Modus Tollens.
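For reference, the schema being invoked, in plain propositional logic (nothing here is specific to this thread):

    (T \rightarrow E) \land \neg E \;\vdash\; \neg T

i.e. if theory T predicts evidence E and E is not observed, then T is refuted. That is one perfectly ordinary way an experiment establishes a negative.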

Bob does need to go back to math class! ;)