Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The "Outside The Box" Box

WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky

Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool?

As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen coined the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who, unlike the established money of his day, took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money were so secure in their position that it never even occurred to them that they might be confused with the poor, whereas the new money, lacking aristocratic breeding, worried they might be mistaken for poor unless they made it blatantly obvious that they could afford expensive things.

The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects became a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche.

This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, attend to her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal that they are so high status they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice.

In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.[1]

If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well.

Pretending To Be Wise

Let's go back to Less Wrong's long-running discussion on death. Ask any five year old child, and ey can tell you that death is bad. Death is bad because it kills you. There is nothing subtle about it, and there does not need to be. Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than the cost, they are hard to notice. It takes a particularly subtle and clever mind to think them up. Any idiot can tell you why death is bad, but it takes a very particular sort of idiot to believe that death might be good.

So pointing out this contrarian position, that death has some benefits, is potentially a signal of high intelligence. It is not a very reliable signal, because once the first person brings it up everyone can just copy it, but it is a cheap signal. And to the sort of person who might not be clever enough to come up with the benefits of death themselves, and only notices that wise people seem to mention that death can have benefits, it might seem super extra wise to say death has lots and lots of great benefits, and is really quite a good thing, and if other people should protest that death is bad - well, that's an opinion a five year old child could come up with, so clearly that person is no smarter than a five year old child. Thus Eliezer's title for this mentality: "Pretending To Be Wise".

If dwelling on the benefits of a great evil is not your thing, you can also pretend to be wise by dwelling on the costs of a great good. All things considered, modern industrial civilization - with its advanced technology, its high standard of living, and its lack of typhoid fever - is pretty neat. But modern industrial civilization also has many costs: alienation from nature, strains on the traditional family, the anonymity of big city life, pollution and overcrowding. These are real costs, and they are certainly worth taking seriously; nevertheless, the crowds of emigrants trying to get from the Third World to the First, and the lack of any crowd in the opposite direction, suggest the benefits outweigh the costs. But in my estimation - and speak up if you disagree - people spend a lot more time dwelling on the negatives than on the positives, and most people I meet coming back from a Third World country have to talk about how much more authentic its way of life is and how much we could learn from it. This sort of talk sounds Wise, whereas talk about how nice it is to have buses that don't break down every half mile sounds trivial and selfish.

So my hypothesis is that if a certain side of an issue has very obvious points in support of it, and the other side of an issue relies on much more subtle points that the average person might not be expected to grasp, then adopting the second side of the issue will become a signal for intelligence, even if that side of the argument is wrong.

This only works in issues which are so muddled to begin with that there is no fact of the matter, or where the fact of the matter is difficult to tease out: so no one tries to signal intelligence by saying that 1+1 equals 3 (although it would not surprise me to find a philosopher who says truth is relative and this equation is a legitimate form of discourse).

Meta-Contrarians Are Intellectual Hipsters

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145[2]. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death. Instead, they are at the level where they want to differentiate themselves from the somewhat smarter people who think the side benefits of death are great. They are, basically, meta-contrarians, who counter-signal by holding opinions contrary to the ones the contrarians use as signals. And in the case of death, this can only be a good thing.

But just as contrarians risk becoming too contrary, moving from "actually, death has a few side benefits" to "DEATH IS GREAT!", meta-contrarians are at risk of becoming too meta-contrary.

All the possible examples here are controversial, so I will just take the least controversial one I can think of and beg forgiveness. A naive person might think that industrial production is an absolute good thing. Someone smarter than that naive person might realize that global warming is a strong negative to industrial production and desperately needs to be stopped. Someone even smarter than that, to differentiate emself from the second person, might decide global warming wasn't such a big deal after all, or doesn't exist, or isn't man-made.

In this case, the contrarian position happened to be right (well, maybe), and the third person's meta-contrariness took em further from the truth. I do feel like there are more global warming skeptics among what Eliezer called "the atheist/libertarian/technophile/sf-fan/early-adopter/programmer empirical cluster in personspace" than among, say, college professors.

In fact, very often, the uneducated position of the five year old child may be deeply flawed and the contrarian position a necessary correction to those flaws. This makes meta-contrarianism a very dangerous business.

Remember, most everyone hates hipsters.

Without meaning to imply anything about whether any of these positions are correct[3], the following triads come to mind as fitting an uneducated/contrarian/meta-contrarian divide:

- KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"
- misogyny / women's rights movement / men's rights movement
- conservative / liberal / libertarian[4]
- herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
- don't care about Africa / give aid to Africa / don't give aid to Africa
- Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman[5]

What is interesting about these triads is not that people hold the positions (which could be expected by chance) but that people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy[6] - and that people identify with these positions to the point where arguments about them can become personal.

If meta-contrarianism is a real tendency in over-intelligent people, it doesn't mean they should immediately abandon their beliefs; that would just be meta-meta-contrarianism. It means that they need to recognize the meta-contrarian tendency within themselves and so be extra suspicious and careful about a desire to believe something contrary to the prevailing contrarian wisdom, especially if they really enjoy doing so.


1) But what's really interesting here is that people at each level of the pyramid don't just follow the customs of their level. They enjoy following the customs, it makes them feel good to talk about how they follow the customs, and they devote quite a bit of energy to insulting the people on the other levels. For example, old money call the nouveau riche "crass", and men who don't need to pursue women call those who do "chumps". Whenever holding a position makes you feel superior and is fun to talk about, that's a good sign that the position is not just practical, but signaling related.

2) There is no need to point out just how unlikely it is that such a number is correct, nor how unscientific the survey was.

3) One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, "5 year old child / pro-death / transhumanist" is a triad, and "warming denier / warming believer / warming skeptic" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

4) This is my solution to the eternal question of why libertarians are always more hostile toward liberals than toward conservatives, even though they have just about as many points of real disagreement with each.

5) To be fair to Patri, he admitted that those two posts were "trolling", but I think the fact that he derived so much enjoyment from trolling in that particular way is significant.

6) Worth a footnote: I think in a lot of issues, the original uneducated position has disappeared, or been relegated to a few rednecks in some remote corner of the world, and so meta-contrarians simply look like contrarians. I think it's important to keep the terminology, because most contrarians retain a psychology of feeling like they are being contrarian, even after they are the new norm. But my only evidence for this is introspection, so it might be false.

369 comments

I also recently noticed this triad:

Seek sex + money / pursue only pure truth and virtue / seek sex + money

To be fair, I think that this triad is largely a function of the sort of society one lives in. It could be summarized as "submit to virtuous social orders, seek to dominate non-virtuous ones if you have the ability to discern between them"

I think it’s more along the lines of: people in the third stage have acquired and digested all the low-hanging and medium-hanging fruit that those in the second stage are struggling to acquire, that advancing further is now really hard. So they now seek sex and money/power partly because acquiring those will (in the long run) help them further advance in the areas that they have currently put on hold. And partly because of course it’s also nice to have them.
Could anyone elaborate on this? All the triads listed in the article seem fairly obvious or well-explained, but nothing jumps out at me on this one. I think the problem is that I don't see what positions these are occupying or signaling: the clothing examples are about wealth, while all the political ones are about (apparent) intelligence. My guess is that the first position belongs to someone with very little money and the last to someone with a lot, but then I'm not sure where the middle one fits. It also doesn't help that Yvain didn't list any distinguishing features between the first and last positions.

I'm noticing now that all the counter-signaling positions tend to be slightly different from the originals. I'm sure the old rich didn't wear exactly the same things as the poor, but rather nicer, less showy clothes. In the political examples, the third-stage people usually acknowledge the existence of and problems with the lowest stage, often with significant differences. Likewise, hipsters have a lot of distinctly hipster traits that don't make them look like any particular non-mainstream group - although my knowledge of hipsters comes almost entirely from jokes about them rather than from having seen the phenomenon much.

Implementing your suggestion is easy. Just keep going "meta" until your opinions become stupid, then set meta = meta - 1.

There's an art to knowing when;
Never try to guess.
Toast until it smokes & then
20 seconds less.

I'm reminded of some "advice" I read about making money in the stock market:

Buy a stock, wait until it goes up, and then sell it. If it doesn't go up, then don't have bought it.


That strategy requires an impossible action in the case that the stock does not go up.


That comment made me smile. I didn't upvote it, but I just hid a paperclip, making the moment when I'll have to buy another box that much closer.

edit: actually, I wrote the above before I actually did it. But when I looked in the place where I expected to find paperclips, I didn't find any, which makes the probability that I'll buy paperclips in the near future somewhat higher. So it's all good.


One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, "5 year old child / pro-death / transhumanist" is a triad, and "warming denier / warming believer / warming skeptic" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

Well worth stressing.

It's possible to go meta on nearly any issue, and there are a lot of meta-level arguments - group affiliation, signaling, rationalization, ulterior motives, whether a position is contrarian or supported by the majority, who the experts are and how much we should trust them, which group is persecuted the most, straw man positions and whether anybody really holds them, slippery slopes, different ways to interpret statements, who is working under which cognitive bias ...

Which is why I prefer discussions to stick to the object level rather than go meta. It's just too easy to rationalize a position in meta, and to find convincing-sounding arguments as to why the other side mistakenly disagrees with you. And meta-level disagreements are more likely to persist in the long run, because they are hard to verify.

Sure, meta-level arguments are very valuable in many cases, we shouldn't drop them altogether. But we should be very cautious while using them.

That's a triad too: naive instinctive signaling / signaling-aware people disliking signaling / signaling is actually a useful and necessary thing.

Going meta often introduces burdensome details. This will only lead you closer to truth when your epistemic rationality is strong enough to shoulder the weight.

One element of meta-contrarian reasoning is as follows. Consider a proposition P that is hard for a layperson to assess. Because of this difficulty, an individual must rely on others for information. Now, a reasonable layperson might look around, listen with an open mind to all the arguments, and weigh whichever seem most plausible in order to assign a probability to P.

The problem is that certain propositions have large corps of people whose professions depend on the proposition being true, but no counterforce of professional critics. So there is a large group of people (priests) who are professionally committed to the proposition "God exists". The existence of this group causes an obvious bias in the layperson's decision algorithm. Other groups, like doctors, economists, soldiers, and public school teachers, have similar commitments. Consider the proposition "public education improves national academic achievement." It could be true, it could be false - it's an empirical question. But all public school teachers are committed to this proposition, and there are very few people committed to the opposite.

So meta-contrarians explicitly correct for this kind of bias. I don't necessarily think that the public school proposition is false, but it should be thoroughly examined. I don't necessarily think that the nation would be safer if we abolished the Army and Marine Corps, but it might be.
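The correction the comment describes - pool the testimony you hear, but discount sources professionally committed to one answer - can be sketched in a few lines of Python. This is only an illustration: the log-odds pooling scheme and the 0.2 discount factor are my own assumptions, not anything proposed in the comment.

```python
import math

def pooled_log_odds(testimonies):
    """Pool testimony about proposition P in log-odds form.
    Each testimony is (log_odds_reported, professionally_committed).
    Committed sources are discounted, since their reports correlate
    with their incentives rather than purely with the truth."""
    total = 0.0
    for log_odds, committed in testimonies:
        weight = 0.2 if committed else 1.0  # discount factor is an arbitrary assumption
        total += weight * log_odds
    return total

def prob(log_odds):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-log_odds))

# Ten committed advocates each reporting 2:1 evidence for P,
# one disinterested critic reporting 2:1 evidence against.
testimonies = [(math.log(2), True)] * 10 + [(-math.log(2), False)]
print(round(prob(pooled_log_odds(testimonies)), 3))  # → 0.667
```

Under these made-up numbers, the discount turns what naively looks like overwhelming support for P into only mild evidence for it.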

The problem is that certain propositions have large corps of people whose professions depend on the proposition being true, but no counterforce of professional critics.

This really is a very good point.

I think this post speaks of an interesting signaling element in societal dialectics. Let's call your hypothesis the "contrarian signaling" hypothesis. But to me, your post also hints at a couple other hypotheses behind this behavior.

The first hypothesis is the mundane one: people end up in these groups, with positions contrary to other positions, because those are simply the positions that seem most plausible to them, and they choose their subcultures according to their actual tastes. The reason people divide themselves into groups with contrary views is that people have different phenotypes. I'm sure you've already thought of this, but I want to say a little more about it.

Under this hypothesis, hipsters are hipsters primarily because they like retro clothes (and other aspects of the culture). They would have worn these same clothes back when they were in fashion, whereas true contrarians wouldn't have. This might be easier to imagine with another overlapping subculture: hippies. Hippies don't idolize the 60's to be contrarian; they idolize the 60's because they like the ideals of the 60's and feel nostalgic for them.

Now, you may say that the 60's were a contrarian time (w...

But to me, your post also hints at a couple other hypotheses behind this behavior.

My reading of the post was not so much that it proposed contrarianism as an explanation for other cultural divisions, but that people's inclination toward a given level of contrarianism is itself a cultural division. We don't need to hypothesize about why people are meta-contrarians; we're defining them by the habit of being meta-contrary.

However, your hypotheses are still interesting in their own right. I predict that, were we to run your experiments, the first hypothesis would tend to describe the early adopters of a given subculture - the first hipster actually liked those dumb glasses, etc. - and later members would increasingly be described by the second.

This is roughly what Gladwell's Tipping Point is about, actually.

check out this other cool belief Y!

I think that this is how all debates (and evangelism) should sound.

The account of the nouveau riche's ostentatious behavior and appearance, compared to the relatively subtle expressions of the old-money generation, has causes and explanations far beyond "counter-signaling". I do not mean to say that counter-signaling doesn't play a part; however, it's a small facet and not nearly as important as other factors. (I realize this may come off as overly nit-picky or outright derailing, but since the bit I am critiquing is one of the foundational points of your article, I feel there is value in calling attention to it.)

You did not account for the nouveau riche generation's updated social conditioning, such as the increase in the volume and effectiveness of mass marketing. It's important to know what sorts of films, books, and advertising trends were prevalent and popular during the nouveau riche's formative years, and what values became most important in society. So much changed in people psychologically with the rise of consumer culture that it is impossible to track human behavior unless we take that rather sudden cultural evolution into account.

A person does not need to be counter-signaling when she or he identifies with a particular demographic. A very simple example: the child with enormous wealth watches the same cartoons as the middle-class child and learns a similar set of social standards and values, and both children remain in a similar marketing demographic as they age. When the wealthy child becomes an adolescent, she or he will still attribute value to certain types of behaviors and appearances.
I have a strong urge to signal my difference from the LessWrong crowd. Should I be worried?

Here's a different hypothesis that also accounts for opinions reverting in the direction of the original uneducated position. Suppose "uneducated" and "contrarian" opinion are two independent random (e.g. normal) variables with the same mean representing the truth (but maybe higher variance for "uneducated"); and suppose what you call "meta-contrarian" opinion is just the truth. Then if you start from "contrarian" it's more likely that "meta-contrarian" opinion will be in the direction of "uneducated" than in the opposite direction, simply because "uneducated" contains nonzero information about where the truth is. I think you can also see this as a kind of regression to the mean.
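This regression-to-the-mean claim can be checked with a quick Monte Carlo sketch. The particular distributions and variances below are illustrative assumptions; following the comment, the "meta-contrarian" opinion is fixed at the truth itself.

```python
import random

random.seed(0)
truth = 0.0
trials = 100_000
same_direction = 0
for _ in range(trials):
    uneducated = random.gauss(truth, 2.0)  # noisier estimate of the truth
    contrarian = random.gauss(truth, 1.0)  # sharper estimate of the truth
    # Does moving from the "contrarian" opinion toward the truth
    # also move toward the "uneducated" opinion?
    if (truth - contrarian) * (uneducated - contrarian) > 0:
        same_direction += 1
print(same_direction / trials)  # noticeably above 0.5
```

Under these assumptions, a step from the contrarian opinion toward the truth is a step back toward the uneducated opinion well over half the time, just as the comment predicts.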

I don't see why we should expect the random variables to be based around "truth". I'd believe in a common centerpoint, but I think it would be more usefully labeled "human-intuitive position" than "truth".
It seems to me that the uneducated-person opinion would be the "human-intuitive position", and the educated-person opinion would be shifted off from that, with a tendency to land somewhere in the direction of truth. The uneducated opinion won't always be constant across different times and cultures (Death Is Bad is probably universal among five-year-olds, while racism might be universal only outside isolated groups with populations too small to have any racial diversity), so I don't think it will usually be an inherent position.

I think steven0461's statement makes some sense, though, if you talk about the average position across many different issues, and if you only look at issues where things like evidence and human reasoning can tell you about the truth. I expect that the uneducated opinion will be distributed roughly randomly around the truth (though not absurdly far from it, when you consider the entirety of possibility-space), and that the educated opinion will diverge from it in a direction that is usually toward what the evidence supports, but often overshooting, undershooting, or going far off to the side. Likewise, the third-stage opinion should diverge from the educated opinion in a similar manner, except that by definition it will be in the rough direction of the original position - otherwise we'd just call it a more radical version of the educated position.

However, there seems to be a MAJOR potential pitfall in reasoning about where these positions are located, since all the examples listed tend to align politically (roughly conservative/liberal/LessWrong-type). So looking at those examples, deciding which one is true, and then deriving a theory of the tendencies involved from that will tend to give you a theory which supports your position being right.
Paul Crowley:
I think that this only works if positions are in one dimension. If they are in many dimensions then I suspect that the truth and the uneducated opinion are on the same side of the contrarian opinion as often as they are on opposite sides. EDIT: I no longer think the above makes any sense. I'm tired, sorry!
I'm a little sad that, even though this is a very good point and I've integrated it pretty thoroughly into my epistemology, most people probably missed this comment.
Thank you for commenting and bringing this to my attention. This also makes for a fantastic "shut down the contrarian" response when your meta-contrarianism is questioned.

Yet another thought-provoking post from Yvain.

I've implicitly noticed the meta-contrarian trend on Less Wrong and to a lesser extent in SIAI before, and I think it's led me to taking my meta-meta-contrarianism a little far sometimes. I get a little too much enjoyment out of trolling cryonicists and libertarians: indeed, I get a feeling of self-righteousness because it seems that I'm doing a public service by pointing out what appears to be a systematic bias and flaw of group epistemology in the Less Wrong belief cluster. This feeling is completely disproportionate to the extent that I'm actually helping: in general, the best way to emphasize the weaker points of an appealing argument isn't to directly troll the person who holds it. Steve Rayhawk is significantly better than me in this regard. So thanks, Yvain, for pointing out these different levels of meta and how the sense of superiority they give can lead to bad epistemic practice. I'll definitely check for signs of this next time I'm feeling epistemically self-righteous.

A friend of mine likes to say that, if you find that your personal opinion happens to align perfectly with what popular culture tells you to think, you should examine that opinion really closely to make sure it's really yours. It's a similar heuristic to the self-righteousness one, applied specifically to the first-level or "uninformed" position (since "uninformed" is really a lot closer to "only informed subconsciously, by local culture and media").

Belatedly, a quotation to hang at the top of the post:

There is a great difference between still believing something and believing it again. Still to believe that the moon affects the plants reveals stupidity and superstition, but to believe it again is a sign of philosophy and reflection.

- Georg Christoph Lichtenberg, 1775


Whenever holding a position makes you feel superior and is fun to talk about, that's a good sign that the position is not just practical, but signaling related.

Readers be warned: Internalizing this insight may result in catastrophic loss of interest in politics.

Perhaps for some people -- but on the other hand, it creates an even higher intellectual challenge to achieve accurate understanding. Understanding hard and complicated things in math and science is extremely challenging, but ultimately, you still have fully reliable trusted authorities to turn to when you're lost, and you know they won't lie and bullshit you. In politics and heavily politicized fields in general, there is no such safety net; you are completely on your own.

I've known that politics is largely about status signaling since I started reading LW (which hasn't reduced my interest in the issues our society politicizes, just in elections and the like). But I just realized that reading LessWrong makes me feel superior (I've noticed this before, but it seems hard to avoid) and is fun to talk about. That's horrifying.

Here's my alternative explanation for your triads which, while obviously a caricature, is no more so than yours and I think is more accurate: un-educated / academic / educated non-academic.

Essentially, your "contrarian" positions are the mainstream positions you are more or less required to hold to build a successful academic (or media) career. Some academics can get away with deviation in some areas (at some cost to their career prospects), but relatively few are willing to risk it. Intelligent, educated individuals who have not been subject to excessive exposure to academic groupthink are more likely to take your meta-contrarian positions.

See also Moldbug's thoughts on the University.

Seems to me like a (hopefully Friendly) seed AI is more likely to provide the "Schelling point" that'd provide an alternative to the modern US government than any sort of reactionary "antiversity". EDIT: Come to think of it, a libertarian space society could probably do it, too, much the same way as the Soviet Union always had "surrender to the US" as an eject button.

A while back the "Steveosphere" had a list of items for which "the masses display more common sense than the smarties do". These suggest that they think they have located Yvain-clusters of the following type:

  1. Troglodyte position.
  2. Liberal position.
  3. Troglodyte position held for sophisticated reasons.
...many of those questions are rather odd. I went in expecting things like "are the tides controlled by the oceans" (or the "is it scientific to say") - questions phrased in ways that sound stupid but are actually correct - which would have shown that deliberately avoiding stupid-sounding statements can lead smart people to make incorrect ones. And some, like "Genes play a major role in determining personality" and "Things for blacks in the US have improved over time", do fall into that category. I particularly liked "Whites are hurt by affirmative action policies that favor blacks". However, many of the questions outright contain "should" statements, where you actually cannot say that the answers the "smarties" gave were factually incorrect, because answering them requires getting into our goals and morality, or at least talking about very complicated things from that angle.

Global warming was the least controversial example you could think of? Seriously?

Well, the example was to show that there are certain meta-contrarian views held by a big part of this community which are trivially wrong and proof that they have gone too far. Given that restriction, what less controversial example would you have preferred?

I really would have liked to use the racism example, because it's most elegant. The in-group bias means people will naturally like their own race more than others. Some very intelligent and moral people come up with the opposing position that all races are equal; overcoming one's biases enough to believe this becomes (rightly) correlated with high intelligence and morality. This contrarian idea spreads until practically everyone believes it and signals it so much as to become annoying and inane. This creates a niche for certain people to signal their difference to the majority by becoming pro-racial differences. But taken too far, this meta-contrarian position could easily lead to racism.

But any post that includes a whole paragraph on racism automatically ends up with the comments entirely devoted to discussing racism, and the rest of the post completely ignored. Feminism would also have worked, but I would have to be dumb as a rock to voluntarily bring up gender issues on this blog. Global warming seemed like something that Less Wrong is generally willing to admit is correct and doesn't care that much about, while still having enough of an anti-global-warming faction to work as an example.

What less controversial example should have been used instead?

I've been lurking and reading for a few days--interested in a few things, thinking about a few things, but not quite ready to jump in and start participating yet. This comment cracked me up enough to make an account and upvote it.

conservative / liberal / libertarian

No way, I don't buy this one at all. I find that most little kids are essentially naive liberals. We should give poor sick people free medicine! We should stop bad polluters from hurting birds and trees! Conservatism/libertarianism is the contrarian position. Everything has a cost! There are no free lunches! Managerial-technocratic liberals are the meta-contrarians. So what about the costs? We've got 800 of the smartest guys from Yarvard and Oxbridge to do cost-benefit analyses for us!

Of course there are meta-meta-contrarians as well: reactionaries, meta-libertarians (Patri Friedman is a good example of a metalibertarian IMO), anarchists, etc.

It's contrarians all the way down.

I was thinking more in terms of conservative values like "My country is the best" and "Our enemies are bad people who hate our freedom", but your way makes a lot of sense too.

Although it's worth noting that the things you say are obvious even to little kids are all things no one had even thought of a hundred years ago. Rachel Carson and Silent Spring are remembered as iconic because they kick-started an environmentalist movement that just didn't really exist before the second half of the 20th century (although Thoreau and people like him get an honorable mention). The idea of rich people paying to give poor sick people free medicine would have gotten you laughed out of most socially stratified civilizations on the wrong side of about 1850.

But I don't want to get too bogged down in which side is more contrarian, because it sounds too close to arguing whether liberalism or conservatism is better, which of course would be a terribly low status thing to do on a site like this :)

I think it was probably a mistake to include such large-scale politics on there at all. Whether a political position seems natural or contrarian depends on what social context someone's in, wha...

I think you're right about the chronological sequence -- kids as "naive liberals", adults as conservative (more so than the kids, anyway) -- but not about the rationale. Positioning oneself on the contrarian hierarchy is about showing off that your intellect is greater than that of the people below you on it. It's the rare adult who feels a need to explicitly demonstrate their intellectual superiority to children -- but the common adult who has a job and pays taxes and actually ever thinks about the cost of things, as opposed to the kids, who don't need to.

In short, adults don't oppose free medicine etc. to be contrary to the position of naive children; they oppose it because they're the ones who'd have to pay for it.

I think the takeaway from this is just that classification of phenomena into these triads is a very subjective business. That's not necessarily a bad thing, since the point of this (if I'm reading Yvain correctly) is not to determine the correctness of a position by its position in a triad, but simply to encourage people to notice when their own thinking is motivated by a desire to climb the triad, rather than pursue truth, and to be skeptical of yourself when you detect yourself trying to triad-climb.
Ah, thanks, that position makes more sense to me now: you mean what most people call social democracy, not liberalism as it is understood outside the US? Because at least in Britain, libertarians align with liberals/conservatives against socialists and social democrats. But to be honest, they are a good example of a flaw in the setup, which is that people tend to define themselves against imaginary enemies who believe everything they do, only backwards, rather than naively disputing everything their enemies say. So libertarians are more likely to complain about "statists" than to come out in favour of taxes or wars just because socialists are against them.

I think it's worth noting explicitly (though you certainly noted it implicitly) that meta-contrarianism does not simply agree with the original, non-contrarian opinion. Meta-contrarianism usually goes to great lengths to signal that it is indeed the level above, and absolutely not the level below, the default position.

An example, from a guy who lives in a local hipster capital:

People not interested in (or just unskilled at) looking cool will mostly buy their clothing at places like Wal-Mart. The "contrarian" cluster differentiates itself by shopping at very expensive, high-status stores (dropping $150 on a pair of jeans, say). Your hipster crowd does not respond to this by returning to Wal-Mart. Instead, they get very distinct retro or otherwise unusual clothing from thrift stores and the like -- places that someone who genuinely didn't care about signaling would never bother to seek out.

The counter-counter culture often cares just as much about differentiating itself from the culture as it does from the counter-culture. The nouveau riche may not have to worry about this, if in their case it comes automatically, but other groups do.

Of course they do, otherwise their signalling would be indistinguishable from the culture's, and thus useless.
There are other dimensions in which the counter-counter people can signal their difference from the non-counters (e.g. hipsters are already living in upper class neighborhoods, have upper class mannerisms etc.). This makes it possible for a simple reversal in the uninformed/contrary/meta-contrary dimension to differentiate them from the counters.
Absolutely. This is the sort of thing I was referring to in the last sentence. The point being, just because they don't seem to go to great pains to distinguish themselves from the non-counters, doesn't mean they're only trying to differentiate from one group: status above both is still the goal, even if they don't have to actively "seek" it.
No problem. I was just providing a live example of metacontrarianism ;)

Could it be that the entire history of philosophy and its "thesis, antithesis, synthesis" recurring structure is an instance of this? Not to mention other liberal arts, and the development of the cycles of fashion.

According to the survey, the average IQ on this site is around 145^2

I can't possibly have been the only one to have been amused by this.

(Well, doesn't Clippy claim to be a superintelligence?)

According to the survey, the average IQ on this site is around 145

I can't possibly have been the only one to have been amused by this.

The really disturbing possibility is that average people hanging out here might actually be of the sort that solves IQ tests extremely successfully, with scores over 140, but whose real-life accomplishments are far below what these scores might suggest. In other words, that there might be a selection effect for the sort of people that Scott Adams encountered when he joined Mensa:

I decided to take an I.Q. test administered by Mensa, the organization of geniuses. If you score in the top 2% of people who take that same test, you get to call yourself a “genius” and optionally join the group. I squeaked in and immediately joined so I could hang out with the other geniuses and do genius things. I even volunteered to host some meetings at my apartment.

Then, the horror.

It turns out that the people who join Mensa and attend meetings are, on average, not successful titans of industry. They are instead – and I say this with great affection – huge losers. I was making $735 per month and I was like frickin’ Goldfinger in this crowd. We had a guy who was

...

I should clarify that I was specifically referring to the interesting placement of that superscript 2. :-)

EDIT: Though actually, this is probably the perfect opportunity to wonder if the reason people join this community is that it's probably the easiest high-IQ group to join in the world: you don't have to pass a test or earn a degree; all you have to do is write intelligent blog comments.

Oh, then it was a misunderstanding. I thought you were (like me) amused by the poll result suggesting that the intelligence of the average person here is in the upper 99.865-th percentile.

(Just to get the feel for that number, belonging to the same percentile of income distribution in the U.S. would mean roughly a million dollars a year.)

Hmm.. Isn't the intelligence distribution more like a bell curve and the distribution of income more like a power law?
Both can be power-law or Gaussian depending on your "perspective". There are roughly as many people with an IQ over 190 as there are people with an income over 1 billion USD per annum -- by roughly I mean within an order of magnitude. Generally IQ is graphed as a Gaussian distribution because of the way it's measured: the middle of the distribution is defined as 100. Income is raw numbers. (edited to move a scare quote)
Upvoted for the quality of the analogy, although I also agree with you.
Well I'm also amused by that, to be sure.
And since the correlation between the two is about 0.4, that would suggest an income of 1.2 standard deviations above the mean, or about $80,000 a year in the US, not controlling for age. Controlling for age, I suspect LWers have approximately average income for their level of intelligence (and because regression to the mean is not intuitive, it feels like we should be doing far better than that).
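The arithmetic in the comments above can be checked in a few lines of Python, assuming the standard IQ convention (Gaussian, mean 100, SD 15) and the 0.4 IQ-income correlation quoted in the parent comment -- a sketch, not a claim about the real distributions:

```python
from math import erf, sqrt

def normal_cdf(z):
    # Standard normal CDF, expressed via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

iq_z = (145 - 100) / 15          # IQ 145 on a mean-100, SD-15 scale: z = 3
percentile = 100 * normal_cdf(iq_z)

r = 0.4                          # assumed IQ-income correlation from the comment
predicted_income_z = r * iq_z    # regression to the mean: z_income = r * z_iq

print(f"IQ 145 percentile: {percentile:.3f}")           # ~99.865
print(f"predicted income z-score: {predicted_income_z:.2f}")  # ~1.20
```

This reproduces both numbers upthread: the 99.865th percentile for IQ 145, and the 1.2-standard-deviation income prediction, which is well below the "top 0.135%" one might naively expect before accounting for regression to the mean.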
I find this sort of puzzling. There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one). Why is that? Does anybody here specifically seek out high-IQ friends? Do you feel like trying to explain the appeal to me? Intelligence is one of my criteria for my companions, to be sure, but I'm not sure it's in the top three, and I certainly wouldn't settle for it alone. Also, I'm not sure that earning a degree is harder than writing an intelligent blog post. Not for everyone, anyway.

There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one)

That's not the sense of IQ that I mean; rather, I mean the underlying thing which that ability is supposed to be an indicator of.

(My guess would be that this underlying thing is probably something like "richness of mental life".)

Does anybody here specifically seek out high-IQ friends? Do you feel like trying to explain the appeal to me?

My experience suggests that it makes a significant difference to one's quality of life whether the people in one's social circle are close to one's own intelligence level.

Not too long ago I spent some time at the SIAI house; and even though I was probably doing more "work" than usual while I was there, it felt like vacation, simply because the everyday task of communicating with people was so much easier and more efficient than in my normal life.

See my response to cata. I suppose it's possible that I'm merely spoiled in this regard, but I'm not sure. Yes, most of the people I've spent a lot of time with in my life have been some kind of intelligent -- my parents are very smart, and I was taught to value intellect highly growing up. But some of the folks who've really made me glad to have them around have been less educated and less well-read than I am, which isn't trivial (I'm a high school dropout, albeit one who likes to do some learning on her own time).

I'm thinking particularly of my coworkers at my last job. We worked behind the counter at a dry cleaner. These were not people with college educations, or who had learned much about critical thinking or logic or debate. This is not to say they had below average intelligence -- just not particularly higher, either. They were confused as to why I was working this dead-end job with them instead of going to college and making something of myself; I was clearly capable of it. But those people made the job worthwhile. They were thoughtful, respectful, often funny, and supportive. They were good at their jobs -- on a busy day, it felt like being part of a well-oiled machine. There isn't one quality in that list you could have traded for outstanding intelligence and made them better people, nor made me happier to be around them.

If your point is right, maybe all that means is that my brain is nothing to write home about. But I'm fonder of the theory that there are other qualities that have at least as much value in terms of quality of life. Would you be happy living in a house of smart people who were all jerks?
Of course not. What caused your probability of my saying "yes" to be high enough to make this question worth asking? I could with more genuine curiosity ask you the following: would you be happy spending your life surrounded by nice people who understood maybe 20% of your thoughts?
It was rhetorical, and meant to support the point that intelligence alone does not make a person worthwhile. I'd rather have more kindness and less intelligence than the reverse. I think it's clear we'd both prefer a balance, though, and that's really all my point was: intelligence is not enough to qualify a person as worthwhile. Which is why social groups with that as the only criterion confuse me. :)

Here I go, speaking for other people, but I'm guessing that the people at the LessWrong meetup at least met some baseline of all those other qualities, by komponisto's estimation, and that the difference in intelligence allowed for such a massive increase in ability to communicate that talking became much more enjoyable, given that ey was talking to decent people.

Each quality may not be linear. If someone is "half as nice" as another person, I don't want to talk to them at half the frequency, or bet that I'll fully enjoy conversation half of the time. A certain threshold of most qualities makes a person totally not worth talking to. But at the same time, a person can only be so much more thoughtful, respectful, funny, supportive, before you lose your ability to identify with them again! That's my experience anyhow - if I admire a person too much, I have difficulty imagining that they identify with me as I do with them. Trust needs some symmetry. And so there are probably optimal levels of friendship-worthy qualities (very roughly by any measure), a minimum threshold, and a region where a little difference makes a big difference. The left-bounded S-curves of friendship.

Then there ...

I think this is a really excellent analysis and I agree with just about all of it. I suspect that the difference in our initial reactions had to do with your premise that intelligent people are easier to communicate with. This hasn't been true in my experience, but I'd bet that the difference is the topics of conversation. If you want to talk to people about AI, someone with more education and intellect is going to suit you better than someone with less, even if they're also really nice. I've definitely also had conversations where the guy in the room who was the most confused and having the least fun was the one with the most book smarts. I'm trying to remember what they were about ... off the top of my head, I think it tended to be social situations or issues which he had not encountered. Empathy would have done him more good than education in that instance (given that his education was not in the social sciences).
Your suspicion rings true. Having more intelligence won't make you more enjoyable to talk to on a subject you don't care about! It also may not make a difference if the topic is simple to understand but still feels worth talking about (personal conversations on all sorts of things). Education isn't the same as intelligence, of course. Intelligence will help you gain and retain an education faster, through books or conversation, in anything that interests you.

Most of my high school friends were extremely intelligent, and mostly applied themselves to art and writing. A few mostly applied themselves to programming and tesla coils. I think a common characteristic they held was genuine curiosity in exploring new domains, and they could enjoy conversations with people of many different interests. The same was true for most of my college friends. I would say I selected for good, intelligent people with unusually broad interests.

I still care a great deal for my specialist friends, and friends of varying intelligence. It's easy for me to enjoy a conversation with almost anyone genuinely interested in communicating, because I'll probably share the person's interest to some degree. Roughly: curiosity overlap lays the ground for topical conversation, education determines the launching point on a topic, and intelligence determines the speed.
Isn't that what you would expect for most conversations, all else being equal? This is an effect I expect to see in general, and I attribute it both to self-selection and to causation.
... well, it isn't what I do expect, so I guess I wouldn't. The thought never crossed my mind, so I don't really have anything more insightful to say about it yet. Let me chew on it. I suspect that I mostly socialize with people I consider equals.
Actually, I was talking about my two-week stay as an SIAI Visiting Fellow. (Which is kind of like a Less Wrong meetup...) But, yeah.
I'm quite curious about what benefits you experienced from your two-week visit... anything you can share, or is it all secret and mysterious? Not that I am considering applying. If I were, I would have to refrain from telling Eliezer (and probably Alicorn) whenever they are being silly. The freedom to speak one's mind without the need for securing approval is just too attractive to pass up! :)
Neither of these should stop you. Alicorn lives on the other side of the country from the house, and Eliezer is pretty lax about criticism (and isn't around much, anyway).
Oh, there's the thing with being on the other side of the world too. ;)
They pay for airfare, you know...
Damn you and your shooting down all my excuses! ;) Not that I'd let them pay for my airfare anyway. I would only do it if I could pay them for the experience.
Fortunately, you appear to be able to rationalize more quite easily. ;)
Perhaps the most publicly noticeable result was that I had the opportunity to write this post (and also this wiki entry) in an environment where writing Less Wrong posts was socially reinforced as a worthwhile use of one's time. Then, of course, are the benefits discussed above -- those that one would automatically get from spending time living in a high-IQ environment. In some ways, in fact, it was indeed like a two-week-long Less Wrong meetup.

I had the opportunity to learn specific information about subjects relating to artificial intelligence and existential risk (and the beliefs of certain people about these subjects), which resulted in some updating of my beliefs about these subjects; as well as the opportunity to participate in rationality training exercises. It was also nice to become personally acquainted with some of the "important people" on LW, such as Anna Salamon, Kaj Sotala, Nick Tarleton, Mike Blume, and Alicorn (who did indeed go by that name around SIAI!); as well as a number of other folks at SIAI who do very important work but don't post as much here. Conversations were frequent and very stimulating. (Kaj Sotala wasn't lying about Michael Vassar.)

As a result of having done this, I am now "in the network", which will tend to facilitate any specific contributions to existential risk reduction that I might be able to make apart from my basic strategy of "become as high-status/high-value as possible in the field(s) I most enjoy working in, and transfer some of that value via money to existential risk reduction".

Eliezer is uninvolved with the Visiting Fellows program, and I doubt he even had any idea that I was there. Nor is Alicorn currently there, as I understand.
I hear that the secret to being a fellow is to show rigorously that the probability that one of them is being silly is greater than 1/2. Just a silly math test.
Ah, you lucky fellow!

There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one).

Really? I don't think that's true; I think people just tend to assume that IQ is a good proxy for general intellectualism (e.g. highbrow tastes, willingness to talk and debate a lot, being well-read.) Since it's easier to score an IQ test than a test judging political literacy, education, and favorite novels, that's what organizations like Mensa use, and that's the measuring stick everyone trots out. Needless to say, it's not a very good one, but it's made its way into the culture.

I mean, even in casual usage, when most people talk about someone's high IQ, they probably aren't talking about focus, memory, or pattern recognition. They're likely actually talking about education and interests.

That's precisely what troubles me. I don't like that we use a term which actually only means the former to refer to how "smart" someone is in a vague, visceral sense -- nor the implied equation of either IQ or smartness with utility. I'm not accusing you of that necessarily; it's just a pattern I see in the world and fret about. Actually, it reminds me of something which might make a good article in its own right; I'll ruminate on it for a bit while I'm still getting used to article etiquette.
I definitely agree on this. It's an abused and conflated word, though I don't know if that's more of a cause than an effect of problems society has with thinking about intelligence. I wonder how we could best get people to casually use a wider array of words and associations to distinguish the many different things we mean by "smart".
You've hit an important point here, and not just about the topic in question. Consider body image (we want to see people on TV we think are pretty, but we get our ideas of what's pretty in part from TV) and media violence (we want to depict the world as it really is, but we also want to impart values that will change the world for the better rather than glorifying people and events which change it for the worse). How, in general, do we break these loops? So far, I haven't thought of anything better than choosing to be precise when I'm talking about somebody's talents and weaknesses, so I try to do that.
Well, me neither; I think it's a reflection of how people would like to imagine other humans as being much simpler and more homogeneous than they actually are. I look forward to your forthcoming post.
That's reassuring. :) Me too. I don't have a post's worth of idea yet. But there's cud yet to chew. (Ruminate has one of my favorite etymologies.)
This surprises me. One explanation for the mismatch between my experience with Mensa and Adams' is that local groups vary a lot. Another is that he's making up a bunch of insults based on a cliche. What I've seen of Mensa is people who seemed socially ordinary (bear in mind, my reference group is sf fandom), but not as intelligent as I hoped.

I went to a couple of gatherings -- one had pretty ordinary discussion of Star Trek. Another was basically alright, but had one annoying person who'd been in the group so long that the other members didn't notice how annoying he was -- hardly a problem unique to Mensa.

Kate Jones, President of Kadon Games, is a Mensan and one of the more intelligent people I know. I know one other Mensan I consider intelligent, and there's no reason to think I have a complete list of the Mensans in my social circle.

I was in Mensa for a while -- I hoped it would be useful for networking, but I didn't get any good out of it. The publications were generally underwhelming -- there were a lot of articles which would start with more or less arbitrary definitions for words, and then an effort to build an argument from the definitions. This was in the 80s, and I don't know whether the organization has changed.

Still, if I'd lived in a small town with no access to sf fandom, Mensa might have been the best available choice for me. These days, I'd say there are a lot of online communities for smart people.

All this being said, I suspect that IQ tests and the like select for people with mild ADD (look! another question! no need to stay focused on a project!) and against people who want to do things which are directly connected to their goals.

I'd say that the problem is the selection effect for intelligent underachievers. People who are in the top 2% of the population by some widely recognized measure of intellectual accomplishment presumably already have affiliations, titles, and positions far more prestigious than the membership in an organization where the only qualification is passing a written test could ever be. Also, their everyday social circles are likely to consist of other individuals of the same caliber, so they have no need to seek them out actively.

Therefore, in an organization like Mensa, I would expect a strong selection effect for people who have the ability to achieve high IQ scores (whatever that might specifically imply, considering the controversies in IQ research), but who lack other abilities necessary to translate that into actual accomplishment and acquire recognition and connections among high-achieving people. Needless to say, such people are unlikely to end up as high-status individuals in our culture (or any other, for that matter). People of the sort you mention, smart enough to have flashes of extraordinary insight but unable to stay focused long enough to get anything done, likely account for some non-trivial subset of those.

That said, in such a decentralized organization, I would expect that the quality of local chapters and the sort of people they attract depends greatly on the ability and attitudes of the local leadership. There are probably places both significantly better and worse than what you describe.

I'm not sure about this. I doubt I would do all that well on a Mensa-type IQ test, and I suspect ADD may be part of the reason. (Though SarahC has raised the possibility of motivated cognition interfering with mathematical problem solving, which I hadn't really considered.) This, however, I do believe. Despite Richard Feynman's supposedly low IQ score, and Albert Einstein's status as the popular exemplar of high IQ, my impression (prejudice?) regarding traditional "IQ tests" is that they would in fact tend to select for people like Feynman (clever tinkerers) at the expense of people like Einstein (imaginative ponderers).
While I'm passing through looking for something else:
I was generalizing from one example-- it's easier for me to focus on a series of little problems. If I have ADD, it's quite mild as such things go.
That's fairly analogous to my worries about joining LW. I was afraid it would be full of extremely intelligent, very dumb people. ;)
How do you know this isn't the case?
Intelligence is but one measure of mental ability. One of the critical ones for modern life goes by "Executive Function"; it seems to be moderately independent of IQ. It could also be called "Self Discipline". It is why really bright kids get lousy grades, why kids who do well in high school but never seem to study tank when they hit college, or flounder when they get out of college and actually have to show up for work clean, neat, and on time. I don't CARE if you can solve a Rubik's cube in 38 seconds -- I need those TPS reports NOW.
It's correlated with self discipline but it is actually a different ability. In fact, some with problems with executive function compensate by developing excessive self discipline. (Having a #@$%ed up system for dealing with prioritisation makes anxiety based perfectionism more adaptive.)

herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson

Can you link to a Robin Hanson article on this topic so that people who aren't already familiar with his opinions on this subject (read: LW newbies like me) know what this is about?

Or alternately, I propose this sequence:

regular medical care by default / alt-med / regular medical care because alt-med is unscientific

regular medical care by default / alt-med / regular medical care because alt-med is unscientific

This is more in line with the other examples. I second the request for an edit. Yvain, you could add "Robin Hanson" to the fourth slot: it would kinda mess up your triplets, but with the justification that it'd be a funny example of just how awesomely contrarian Robin Hanson is. :D

Also, Yvain, you happen to list what people here would deem more-or-less correct contrarian clusters in your triplet examples. But I have no idea how often the meta-level contrarian position is actually correct, and I fear that I might get too much of a kick out of the positions you list in your triplets simply because my position is more meta and I associate metaness with truth when in reality it might be negatively correlated. Perhaps you could think of a few more-wrong meta-contrarian positions to balance what may be a small affective bias?

Huh? In all of those examples the unmentioned fourth level is correct, and the second and third levels are both about equally useless.
Half-agree with you, as none of the 18 positions are 'correct', but I don't know what you mean by 'useless'. Instead of generalizing I'll list my personal positions:

If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it'd result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the 'let's not promote racism' dilemma. I'm not a social scientist, so I get to defect and study some of the more interesting aspects of human evolutionary biology.

No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with the third level, but hey, I'm pretty ignorant here.

What can I say, it's politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the 'Yeah yeah Singularity AI transhumanism wooooo!' meme would get bigger, which would increase existential risk. So uh... never mind, I dunno.)

Too ignorant to comment.

My oxycodone and antibiotics sure did me good when I got an infection a week ago. My dermatologist drugs didn't help much with my acne. I've gotten a few small surgeries which made me better. Overall conventional medicine seems to have helped me a fair bit and costs me little. I don't even know what Robin Hanson's claims are, though. A link would be great.

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities.

My comment was largely tongue in cheek, but:

  • KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"

If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it'd result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the 'let's not promote racism' dilemma. I'm not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.

Awareness of genetic differences between races constitutes negative knowledge in many cases; that is, it leads to anticipations that match the outcomes worse than they otherwise would. If everyone suspects that genetically blue-haired people are slightly less intelligent on average for genetic reasons, and you want to hire the most intelligent person for a job, then after a very long selection process (th... (read more)
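The screening-off claim here can be made concrete with a toy simulation. All the numbers are made up for illustration, and the "blue-haired" group follows the comment's own hypothetical:

```python
import random

random.seed(0)

# Illustrative, made-up numbers: a 1-point average group gap, a 15-point
# individual spread, and a long selection process that measures ability
# almost perfectly (noise sd = 1).
def candidate(blue):
    ability = random.gauss(99.0 if blue else 100.0, 15.0)
    measured = ability + random.gauss(0.0, 1.0)
    return blue, ability, measured

def hire(pool, penalty):
    # Hire the top candidate by measured score, docking blue-haired
    # candidates an extra `penalty` points for their group membership.
    return max(pool, key=lambda c: c[2] - (penalty if c[0] else 0.0))

trials = 2000
loss = 0.0
for _ in range(trials):
    pool = [candidate(i % 2 == 0) for i in range(20)]
    # True ability of the hire without vs. with the group penalty.
    loss += hire(pool, penalty=0.0)[1] - hire(pool, penalty=1.0)[1]

avg_loss = loss / trials
# Positive on average: once ability is measured directly, penalizing the
# group average again makes anticipations match outcomes *worse*.
print(avg_loss > 0.0)
```

The intuition: with an accurate individual measurement, the optimal extra group adjustment is tiny (roughly the measurement-noise variance over the total variance), so re-applying the full group gap over-corrects and, on average, costs you the better hire.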

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa.

But... but... but saving the world doesn't signal the same affiliations as saving Africa!


On LW, it signals better affiliations!

My impression is that Hanson's take on conventional medicine is that half the money spent is wasted. However, I don't know if he's been very specific about which half.
The RAND Health Insurance Experiment, the study he frequently cites, didn't investigate the benefits of catastrophic medical insurance or of care people pay for out of their own pockets, and found the rest useless.
Why is giving money to x-risk charities conducive to saving the world? (I don't necessarily disagree, but I want to see what you have to say to substantiate your claim.) In particular, what's your response to Holden's comment #12 at the GiveWell Singularity Summit thread?

Sorry, I didn't mean to assume the conclusion. Rather than do a disservice to the arguments with a hastily written reply, I'm going to cop out of the responsibility of providing a rigorous technical analysis and just share some thoughts. From what I've seen of your posts, your argument was that the current nominally x-risk-reducing organizations (primarily FHI and SIAI) aren't up to snuff when it comes to actually saving the world (in the case of SIAI, perhaps even being actively harmful). Despite and because of being involved with SIAI, I share some of your misgivings. That said, I personally think that SIAI is net-beneficial for its cause of promoting clear and accurate thinking about the Singularity, and that the PR issues you cite regarding Eliezer will be negligible in 5-10 years when more academics start speaking out publicly about Singularity issues, which will only happen if SIAI stays around, gets funding, keeps on writing papers, and promotes the pretty-successful Singularity Summits. Also, I never saw you mention that SIAI is actively working on the research problems of building a Friendly artificial intelligence. Indeed, in a few years, SIAI will have begun the ende... (read more)

Reasonable response, upvoted :-).

  • As I said, I cut my planned sequence of postings on SIAI short. There's more that I would have liked to say and more that I hope to say in the future. For now I'm focusing on finishing my thesis.
  • An important point that did not come across in my postings is that I'm skeptical of philanthropic projects having a positive impact on what they're trying to do in general (independently of any relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog; see for example GiveWell's page on Social Programs That Just Don't Work. At present I think it's a highly nonobvious but important fact that projects which superficially look promising, and which are not well-grounded by constant feedback from outsiders, almost always fail to have any nontrivial impact on the relevant cause. See the comment here by prase, which I agree with.
  • On the subject of a proposed project inadvertently doing more harm than good, see the last few paragraphs of the GiveWell post titled Against Promise Neighborhoods. Consideration of counterfactuals is very tricky, and very smart people often get it wrong.
  • Quite possibly SIAI is having a positive holistic impact. I don't have confidence that this is so; the situation is just that I don't have enough information to judge from the outside.
  • Regarding the timeline for AGI and the feasibility of FAI research, see my back and forth with Tim Tyler here.
  • My thinking as to which causes are most important to focus on at present is very much in flux. I welcome any information that you or others can point me to.
  • My reasons for supporting developing-world aid in particular at present are various and nuanced, and I haven't yet had the time to write out a detailed explanation that's ready for public consumption. Feel free to PM me with your email address if you'd like to correspond.

Thanks again for
If you had a post on this specifically planned then I would be interested in reading it!
Is that what they are doing?!? They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk. One might naively expect such an organisation would typically act so as to exaggerate the risks - so as to increase the flow of donations. That seems pretty consistent with their actions to me. From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter - since they have glaringly-obvious vested interests.


They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.

Or, more realistically, the idea that DOOM has a CHANCE of happening any time between NOW and ONE HUNDRED YEARS FROM NOW but that small CHANCE has a large enough impact in EXPECTED UTILITY that we should really figure out more about the problem because someone, not necessarily SIAI might have to deal with the problem EVENTUALLY.

One might naively expect such an organization would typically act so as to exaggerate the risks -- but SIAI doesn't seem to be doing that so one's naive expectations would be wrong. It's amazing how people associate an aura of overconfidence coming from the philosophical positions of Eliezer with the actual confidence levels of the thinkers of SIAI. Seriously, where are these crazy claims about DOOM being SOON and that ELIEZER YUDKOWSKY is the MESSIAH? From something Eliezer wrote 10 years ago? The Singularity Institute is pretty damn reasonable. The journal and conference papers they write are pretty well grounded in sound and careful reasoning. But ha, who would read tho... (read more)

[This comment is no longer endorsed by its author]
That was quite a rant! I hope I don't come across as thinking "the worst" about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.

Really? Really? You actually think the level of DOOM is cold realism - and not a ploy to attract funding? Why do you think that? De Garis and Warwick were doing much the same kind of attention-seeking before the SIAI came along - DOOM is an old school of marketing in the field.

You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower - to help it meet its aims.

FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean.

Anyway, the basic point is that if you are interested in DOOM, or p(DOOM), consulting a DOOM-mongering organisation that wants your dollars to help them SAVE THE WORLD may not be your best move. The "follow the money" principle is simple - and often produces good results.

FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean.

Right, I said metacontrarian. Although most LW people seem SIAI-agnostic, a lot of the most vocal and most experienced posters are pro-SIAI or SIAI-related, so LW comes across as having a generally pro-SIAI attitude, which is a traditionally contrarian attitude. Thus going against the contrarian status quo is metacontrarian.

You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower - to help it meet its aims.

I'm confused. Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I'm confused as to how this is relevant to the merit of SIAI's purpose. SIAI's never claimed to be fundamentally opposed to having resources.... (read more)


Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.

I don't know about anybody else, but I am somewhat disturbed by Eliezer's persistent use of hyphens in place of em dashes, and am very concerned that it could be hurting SIAI's image.

And I say the same about his use of double spacing. It's an outdated and unprofessional practice. In fact, Anna Salamon and Louie Helm are two other SIAI folk who engage in this abysmal writing style, and for that reason I've often been tempted to write them off entirely. They're obviously not cognizant of the writing style of modern academic thinkers. The implications are obvious.

Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer's mistakes could destroy the world (for example).
To recap, the SIAI is funded by donations from those who think that they will help prevent the end of the world at the hands of intelligent machines. For this pitch to work, the world must be at risk - in order for them to be able to save it. The SIAI face some resistance over this point, and these days, much of their output is oriented towards convincing others that these may be the end days. Also there will be a selection bias, with those most convinced of a high p(DOOM) most likely to be involved. Like I said, not necessarily the type of organisation one would want to approach if seeking the facts of the matter. You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult - but it isn't a terribly convincing act. For the connections, see here. For protesting too much, see You're calling who a cult leader?
No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations. Hmuh, I guess we won't be able to make progress, 'cuz I pretty much wholeheartedly agree with Vladimir when he says: and Nick Tarleton when he says:
"This one is right" for example. ;)
I didn't say anyone was "racing to be first to establish their non-cult-victim status" - but it is certainly a curious image! [deleted parent comment was a dupe].
Oops, connection troubles then missed.
Tim, do you think that nuclear-disarmament organizations were inherently flawed from the start because their aim was to prevent a catastrophic global nuclear war? Would you hold their claims to a much higher standard than the claims of organizations that looked to help smaller numbers of people here and now? I recognize that there are relevant differences, but merely pattern-matching an organization's conclusion about the scope of their problem, without addressing the quality of their intermediate reasoning, isn't sufficient reason to discount their rationality.
Will said "meta-contrarian," which refers to the recent meta-contrarians are intellectual hipsters post. I also think you see yourself as trying to help SIAI see how they look to "average joe" potential collaborators or contributors, while Will sees your criticisms as actually calling into question the motives, competence, and ingenuity of SIAI's staff. If I'm right, you're talking at cross-purposes.
Reforming the SIAI is a possibility - but not a terribly realistic one, IMO. So, my intended audience here is less that organisation, and more some of the individuals here who I share interests with.
Oh, that might be. Other comments by timtyler seemed really vague but generally anti-SIAI (I hate to set it up as if you could be for or against a set of related propositions in memespace, but it's natural to do here, meh), so I assumed he was expressing his own beliefs, and not a hypothetical average joe's.
This is an incredibly anti-name-calling community. People ascribe a lot of value to having "good" discussions (disagreement is common, but not adversarialism or ad hominems.) LW folks really don't like being called a cult. SIAI isn't a cult, and Eliezer isn't a cult leader, and I'm sure you know that your insinuations don't correspond to literal fact, and that this organization is no more a scam than a variety of other charitable and advocacy organizations. I do think that folks around here are over-sensitive to normal levels of name-calling and ad hominems. It's odd. Holding yourself above the fray comes across as a little snobbish. There's a whole world of discourse out there, people gathering evidence and exchanging opinions, and the vast majority of them are doing it like this: UR A FASCIST. But do you think there's therefore nothing to learn from them?
I think the reasoning goes something like:

  • Existential risks are things that could destroy the world as we know it.
  • Existential risk charities work to reduce such risks.
  • Existential risk charities use donations to perform said task.
  • Giving to x-risk charities is conducive to saving the world.

Before looking at evidence for or against the effectiveness of particular x-risk charities, our prior expectation should be that people who dedicate themselves to doing something are more likely to contribute progress towards that goal than to sabotage it.
This is only true if the first-order effect of legalizing drugs (legality would encourage more people to take them) outweighs the second-order effects. An example of a second-order effect is that the higher price caused by illegality encourages production and distribution. Or the fact that illegality allows drugs to be used as signals of rebellion. Legalizing drugs would potentially put distribution in the hands of more responsible people. And so forth. As the evidence-based altruism people have found, improving the world is a lot harder than it looks.
I actually disagree with this statement outright. First of all, ignoring the existence of a specific piece of evidence is not the same as being wholly ignorant of the workings of evolution. Second, I think that the use or abuse of data (false or true) leading to the mistreatment of humans is a worse outcome than the ignorance of said data.

Science isn't a goal in and of itself--it's a tool, a process invented for the betterment of humanity. It accomplishes that admirably, better than any other tool we've applied to the same problems. If the use of the tool, or in this case one particular end of the tool, causes harm, perhaps it's better to use another end (a different area of science than genetics), or the same one in a different environment (in a time and place where racial inequality and bias are not so heated and widespread--our future, if we're lucky). Otherwise, we're making the purpose of the tool subservient to the use of the tool for its own sake--pounding nails into the coffee table.

Besides--anecdotally, people who think that the genetic differences between races are important incite less violence than people who think that not being a bigot is important. If, as you posited, one had to choose. ;)

I have a couple other objections (really? sex discrimination is over? where was I?) but other people have covered them satisfactorily.

New here; can I get a brief definition of this term? I've gotten the gist of what it means by following a couple of links, I just want to know where the x bit comes from. Didn't find it on the site's wiki or the internet at large.
X-risk stands for existential risk. It's about possible events that risk ending the existence of the human race.
Got it; thank you.
What do you have in mind?
I'm not sure what "what" would refer to here. I didn't have an incident in mind, I'm just giving my impression of public perception (the first person gets called racist, and the second one gets called, well, normal, one hopes). It wasn't meant to be taken very seriously.

Noticing a social cluster takes social savvy and intelligence.

Therefore, showing that you can see a social cluster makes you look good.

Maybe going up a level in one of Yvain's hierarchies is showing off that you've discovered a social cluster? It goes together with distancing yourself from that cluster, but I don't know why.

I would like to announce that I have discovered the social cluster that has discovered the method of discovering all social clusters, and am now a postmodernist. Seriously guys, postmodernism is pretty meta. Update on expected metaness.

I'm confused. What point are you trying to make about postmodernism?

None, really. I just like how its proponents can always win arguments by claiming to be more meta than their opponents. ("Sure, everything you made sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.")

(I don't take postmodernism seriously, but some of the ideas are philosophically elegant.)


I can't prove anything I just said, which proves my point, depending on whether you think it did or not.

I would like this on a t-shirt.

Mmm, but isn't it true that "proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act."
I think there's a missing "you said" here: "everything you made sense".

I have to admit, this has definitely been a hazard for me. As I said to simplicio a few months ago, I've had a sort of tendency to be "too clever" by taking the "clever contrarian" position. This gets to the point where I'm fascinated by those who can write up defenses of ridiculous positions and significantly increase my exposure to them.

I think part of what made me stray from "the path" was a tendency to root for the rhetorical "underdog" and be intrigued -- excessively -- with brilliant arguments that could defend ridiculous positions

I have to wonder if I'm falling into the same trap with my "Most scientists only complain about how hard it is to explain their field because their understanding is so poor to begin with." (i.e., below Level 2, the level at which you can trace out the implications between your field and numerous others in both directions, possibly knowing how to trace back the basis of all specialized knowledge to arbitrary levels)


Does making fun of hipsters to seem cool make you a meta-hipster?

Very much related to The Correct Contrarian Cluster.

Also, we had a post specifically on countersignaling: Things You Can't Countersignal.

One more cluster I can think of is attitude to copyright law. Something like:

  1. Huh? It's illegal for me to copy that song? How totally stupid, I'm not harming anyone.
  2. Strong intellectual property law is necessary to encourage innovation and protect artists.
  3. Copyright law does more harm than good and needs to be reformed or abolished.

This is actually an interesting example, because I think if you look at the patterns of contrarian and meta-contrarian groups--that is, the people who tend to prefer those attitudes--you actually flip the second two, which breaks the pattern of contradiction and counter-contradiction. That is to say,

  1. (ordinary people who don't worry too much about this) Huh? It's illegal for me to copy that song? How totally stupid, I'm not harming anyone.
  2. (people who are really into the torrent community) It's not/shouldn't be illegal! Information wants to be free!
  3. (meta) If nobody paid for music, no one could live off being a musician. Torrenters are just making excuses for the convenience of breaking the law.
  4. (approaching sense) We need to reform or abolish copyright law and replace it with a system that pays artists fairly while working with, not against, new technology.

At least, that's my experience; take it with a grain of bias in favor of position four.

That's an interesting example because 1 and 2 arrive at the same conclusion, but 2 might still want to signal themselves as being contrary to 1 (i.e. "It's not just that it's not harming anybody, but sharing information around freely is actually helping everybody!")
I agree with that, and you make a good point--it suggests that being contrarian doesn't require disagreeing with the position so much as disagreeing with the reasoning. In a lot of cases it'll amount to the same thing, or at least come off as the same thing, but the above is one where it doesn't.
I basically object to copyright law because of 1. Clearly my opinion is transcendent, a least fixed point of meta-contrarianism.

Not everything is signaling.

The intellectually compulsive are natural critics. You see something wrong in an argument, and you argue against it. The natural stopping point in this process is when you don't find significant problems with the theory, and that is more likely for a fringe theory that others don't bother to critique. When no one is helping you find the flaws, it's less likely you'll find them. You'll win arguments, at least by your own evaluation, because you are familiar with their arguments and can show flaws, but your argument is unfamiliar to ... (read more)

doesn't follow politics / political junkie / avoids talking about politics due to mind-killing

This suggests that a common tactic (deliberate or otherwise) would be to represent your opponents as being the level below you, rather than the level above. For example this article, which treats Singularitarians as at level 1, rather than level 3, on

technology is great! -> but it has costs, like to the environment, and making social control easier -> Actually, the benefits vastly outweigh those.

Ironically, it's not that far off for SIAI, which is at level 4: 'certain technologies are existentially dangerous'.

This seems to hold true for all the triads ... (read more)

Existentially dangerous doesn't mean the benefits still don't outweigh the costs. If there's a 95% chance that uFAI kills us all, that's still a whopping 5% chance at unfathomably large amounts of utility. Technology still ends up having been a good idea after all. Each level adds necessary nuance. Unfortunately, at each level is a new chance for unnecessary nuance. Strong epistemic rationality is the only thing that can shoulder the weight of the burdensome details. Added: Your epistemic rationality is limited by your epistemology. There's a whole bunch of pretty and convincing mathematics that says Bayesian epistemology is the Way. We trust in Bayes because we trust in that math: the math shoulders the weight. A question, then. When is Bayesianism not the ideal epistemology? As humans the answer is 'limited resources'. But what if you had unlimited resources? At the limit, where doesn't Bayes hold?
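The expected-utility arithmetic in the first paragraph above can be sketched with placeholder numbers. None of these utilities are estimates from the thread; they're chosen only to show the shape of the argument:

```python
# Placeholder utilities, chosen only to illustrate the argument's shape:
# a small chance of an astronomically good outcome can dominate.
p_doom = 0.95
u_doom = 0.0            # uFAI kills us all
u_win = 1e30            # "unfathomably large amounts of utility"
u_no_technology = 1e10  # forgo transformative technology entirely

ev_pursue = p_doom * u_doom + (1 - p_doom) * u_win
# The 5% tail carries the whole sum, so pursuing technology still wins.
print(ev_pursue > u_no_technology)  # prints True
```

Of course, the conclusion is only as good as the made-up numbers; that is exactly the "burdensome details" point the comment goes on to make.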
I've noticed that quite often long before seeing this article. There seems to be a strong tendency for people to try to present themselves as breaking old, established stereotypes even when the person they're arguing against says exactly the same thing, and in some cases where the stereotype has only been around for a very short time (I recall one article arguing against the idea of Afghanistan being "the graveyard of empires", which in my understanding was an idea that had surfaced around 6 months prior to that article with the publication of a specific book). However, this does add an interesting dimension to it, with the fact that Type 2 positions actually were founded on a rejection of old, untrue beliefs of Type 1s, and Type 3s often resemble Type 1s. In fact I'd say that in every listed political example, the Type 2s who know about Type 3s will usually lump them in with Type 1s. This is, IMO, good in a way because it limits us from massive proliferation of levels over and over again and the resulting complications; instead we just get added nuance into the Type 2 and 3 positions.

The pleasure I get out of trolling atheists definitely has a meta-contrarian component to it. When I was a teenager I would troll Christians but I've long since stopped finding that even slightly challenging or fun.

Scott Alexander:
Yes, I often find myself tempted to do that too. Although I understand on an intellectual level that creationism is stupid, it is hard for me to get worked up about it and I certainly don't have the energy to argue with creationists ad nauseam. I do find myself angry whenever an atheist makes a superficial or stupid point in defense of atheism, or when they get too smug about how much smarter they are than creationists. My guess is that I have a sufficiently inflated view of my own intelligence that I feel no need to differentiate myself intellectually from creationists, but I do feel a need to differentiate myself intellectually from the less intelligent sort of atheist.

As a mathematician, I offer my services for anybody who wants arguments (mathematical arguments, not philosophical ones) that 1+1 = 3. But beware: as a meta-contrarian mathematician, I will also explain why these arguments, though valid in their own way, are silly.

1.3 + 1.4 = 2.7, which when reported to one significant figure...
As the "old" computer science joke goes, 2 + 2 = 5 (for extremely large values of 2).
The physicist-typical version is that 3=4, if you take lim(3->4).
This reminds me that the difference between a physicist and astronomer is that a physicist uses π ≈ 3 while an astronomer uses π ≈ 1.
I remember someone in a newsgroup saying the average person is about one metre tall and weighs about 100 kilos, and when asked whether maybe they were approximating a bit too roughly, they answered "I'm an astronomer, not a jeweller." (And physicists sometimes use π ≈ 1 too -- that's called dimensional analysis. :-) The problem is when the constant factor dimensional analysis can't tell you turns out to be 1/(2π)^4 ≈ 6.4e-4 or stuff like that.)
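The significant-figures joke above actually checks out mechanically; a quick sketch using Python's built-in rounding:

```python
# Reported to one significant figure, 1.3 and 1.4 each become 1,
# while their sum 2.7 becomes 3 -- hence "1 + 1 = 3".
assert round(1.3) == 1 and round(1.4) == 1
assert round(1.3 + 1.4) == 3

# Likewise 2 + 2 = 5 "for extremely large values of 2".
assert round(2.3) == 2 and round(2.4) == 2
assert round(2.3 + 2.4) == 5

print("both jokes verified")
```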
You know, I am seized with a sudden curiosity. You have arguments such that 1 is still the successor of 0 and 3 is still the successor of the successor of 1, where 0 is the additive identity?
Ah, now I have to remember what I was thinking of back in September! Well, let's see what I can come up with now.

One thing that I could do is to redefine every term in the expression. You tried to forestall this by insisting [Note: I originally interpreted this as "3 is still the successor of 2" for some dumb reason.] But you never insisted that 2 is the successor of 1, so I'll redefine 2 to be 1 and redefine 3 to be 2, and your conditions are met, while my theorem holds. (I could also, or instead, redefine equality.) But this is silly; nobody uses the terms in this way.

For another method, I'll be a little more precise. Since you mentioned the successor of 0, let's work in Peano Arithmetic (first-order, classical logic, starting at zero), supplemented with the axiom that 0 = 1. Then 1 + 1 = 3 can be proved as follows:

  • 1 + 1 = 1 + S(0) by definition of 1;
  • 1 + S(0) = 1 + S(1) by substitution of equality;
  • 1 + S(1) = 3 by any ordinary proof in PA;
  • 1 + 1 = 3 by transitivity of equality (twice).

Of course, this is also silly, since PA with my new axiom is inconsistent. Anything in the language can be proved (by going through the axiom that 0 = S(n) is always false, combining this with my new axiom, and using ex contradictione quodlibet).

Here is a slightly less silly way: modular arithmetic is very useful, not silly at all, and in arithmetic modulo 1, 1 + 1 = 3 is true. But however useful modular arithmetic in general may be, arithmetic modulo 1 is silly (for roughly the same reasons that an inconsistent set of axioms is silly); everything is equal to everything else, so any equation at all is true. In other words, arithmetic modulo 1 is trivial. You can get arithmetic modulo b by replacing the Peano axiom that 0 = S(n) is always false with the axiom that 0 = b, and b − 1 additional axioms stating (altogether) that a = b is false whenever (in ordinary arithmetic) 0 < a < b. But you could instead add an arbitrary axiom of the form b = c (and another
I wrote "successor of the successor of" - 3 is the successor of 2, which is the successor of 1. But I understand that this was a typo. :P But yes, I enjoyed that. Thank you.
Ha, that would be a reado! But seriously, I should have read that again. I got it in my head that you had done this while I spent time planning my response and forgot to verify.
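The arithmetic-modulo-1 route in the comment above is just as easy to check mechanically; every equation between integers holds there, which is exactly the claimed triviality:

```python
# Modulo 1, every integer is congruent to 0, so "1 + 1 = 3" is true --
# along with every other equation, which is what makes it silly.
assert (1 + 1) % 1 == 3 % 1 == 0
assert all(a % 1 == b % 1 for a in range(5) for b in range(5))
print("1 + 1 = 3 (mod 1)")
```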

I have a strong urge to signal my difference to the Lesswrong crowd. Should I be worried that all my positions may be just meta^2 contrarianism?

people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy

I very much wish that intellectual debate was more effectiveness-oriented in general. I myself try to refrain from arguing about things that don't actually matter or that I can't hope to change (not always successfully).

I have a style question. Are there less grating ways to write gender neutral texts?

I, to my great surprise, was irritated to no end by "ey" and "eir". I always stumbled when reading it. I dislike it and think "he/she" or "they" may be more natural and cause less stumbling when reading the article.

So far, I am against all the invented gender-neutral pronouns. Most of them sound strange ("ey" and "eir" look like a typo or phonetic imitation of deep southern accent, "xe" and "xir" use "x" sound and are simply painful to pronounce)

As of now, I am willing to sacrifice gender neutrality in texts in favor of readability.

Technically, "he" is perfectly acceptable for gender-neutral texts. Merriam-Webster states that "he" can be "used in a generic sense or when the sex of the person is unspecified". However, to avoid the appearance of non-neutral text, I usually use "he/she", "his/her", etc. "They" or "their" can be used, but these are not really appropriate when referring to a singular antecedent, so I quite often use "his/her" rather than "their". Another technique that you see frequently, and that I sometimes use, is to use "he" sometimes and "she" other times. As long as these more or less balance out in your text, you should be OK from a neutrality standpoint. Any of these alternatives is preferable IMO to "ey" and "eir".
The Eir of Slytherin has opened the Chamber of Socrates...
If you dislike zes, xes and eys and find them horrible little abominations that have no place among good and decent words, and suspect they were meant to trick us into unknowingly saying things that count as worship of Cthulhu... ...oh, wait, that's me. Let's try again. If you dislike zes, xes and eys, then using "they" seems to me the best solution if you care about being gender-neutral.

That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

This is an interesting hypothesis, but applying it to LessWrong requires that the LW community have a consensus on how people rank by intelligence, that that consensus be correct, and that people believe it is correct. My impression is that everybody thinks they're the smartest person in the room, and judges ...

I think this is generalizing from one example; I've certainly met people who didn't think they were the smartest person in the room, either because they're below median intelligence and reasonably expect that most people are smarter than them or because even though they're above median they've met enough people visibly smarter than them. (I've been in rooms where I wasn't the smartest person.)

I suspect that people may not be very good at ranking, and are mostly able to put people in buckets of "probably smarter than me," "about as smart as me," and "probably less smart than me" (that is, I think the 'levels below mine' blur together similarly to how the 'levels above mine' do).

I also suspect that a lot of very clever people think that they're the best at their particular brand of intelligence, but then it's just a question of self-awareness as to whether or not they see the reason they're picking that particular measure. I can recall, as a high schooler, telling someone at one point "I'm the smartest person at my high school" and then having to immediately revise that statement to clarify 'smartest' in a way that excluded a friend of mine who definitely had more subject matter expertise in several fields and probably had higher g but had (I thought, at least) a narrower intellectual focus.

Ebola has offered a nice recent example of the triad. Mainstream: "be afraid, be very afraid"; contrarian: "don't be so gullible, why, hardly any more people have died from Ebola than have died from flu/traffic accidents/smoking/etc"; meta-contrarian: "what is to be feared is a super-lethal disease escaping containment & killing many more millions than the normal flu or traffic death toll".

Meh. Mankind survived mad cow disease, SARS, bird flu, and swine flu hardly scathed; why should it be different this time around?
1. Human deaths are not irrelevant; a million deaths != no deaths. 2. Pandemics are existential threats, which can drive species extinct; I trust you understand why mad cow disease, SARS, bird flu, and swine flu are not counter-arguments to this point.
No, I don't. EDIT: Anthropics?
In that triad the meta-contrarian is broadening the scope of the discussion. They address what actually matters, but that doesn't change that the contrarian is correct (well, a better contrarian would point out that the number of deaths due to Ebola is far less than any of those examples and that Ebola doesn't seem a likely candidate to evolve into something causing an epidemic) and that the meta-contrarian has basically changed the subject.

BTW, I'm not actually that intelligent (IQ about 92 or 96 if I remember right) but pretending to adopt a meta-contrarian position might be a useful social tactic for me. Any advice from those who know the area on how to use it?

Advocate for the obvious position using the language and catchphrases of its opponents. I remember once saying, "Well, have we ever tried blindly throwing lots of money at the educational system?" Everyone agreed that this was a wise and sophisticated thing to say, even though I was by far the least knowledgeable person in the room on the subject and was just advocating the default strategy for improving public schools. Other examples:

"Greed is good."

"The chief virtue of a $professional is $vice."

"I'm a tax-and-spend liberal, and I think there should be much more government regulation. For example, the sad truth is that the realities of medical care require the existence of death panels, and I'd rather have them run by government bureaucrats than corporate accountants."

Kansas City was one of the more notable examples of having tried that; it didn't work out well:
This seems quite unlikely given your reasonably high-quality posting history. Is this number from a professionally administered test? Do you have a condition like dyslexia or dyscalculia that impairs specific abilities but not others?
I have Asperger's Syndrome, which affects things like this. It probably has something to do with it.

I wonder if this means we should place more weight on opinions that don't easily compress onto this contrarianism axis, since they're less likely to be rooted in signalling/group affiliations, and more likely to have a non-trivial amount of thought put into them.

Another thing to take away from this is that we should be wary of any system that categorizes opinions based on sociology rather than direct measures of their actual truth. Contrarians and Meta-Contrarians both have similar explanations of why they go for their levels, by pointing out the flaws with the lower level.

I think about the counter-signaling game a bit differently. Consider some question that has a binary answer, e.g. a yes/no question. Natural prejudices or upbringing might cause most people to pick, say, yes. Then someone thinks about the question and for reason r1 switches to no. Someone else who agrees with r1 then comes up with reason r2, and switches back to yes. Then r3 causes a switch back to no, ad infinitum.

Even though the conclusion at each point in the hierarchy is indistinguishable from a conclusion somewhere else in the hierarchy, th...

Nassim Taleb makes an argument that he believes in God by default, and he is widely seen as a rational person. I don't think it makes sense to see his position as lower in the hierarchy than that of people who believe based on the design argument.
His belief by default is based on some sort of argument, not unthinking acceptance of whatever his parents told him. In other words, his "default belief" is not the same as my hierarchy's "default belief."
So, where would "Believe because of generalised Pascal's Wager" be on your hierarchy? ;)
I'm not sure what the "generalized" is doing, but the normal Pascal's wager would probably be right before or right after the design argument.

6) Worth a footnote: I think in a lot of issues, the original uneducated position has disappeared, or been relegated to a few rednecks in some remote corner of the world, and so meta-contrarians simply look like contrarians. I think it's important to keep the terminology, because most contrarians retain a psychology of feeling like they are being contrarian, even after they are the new norm. But my only evidence for this is introspection, so it might be false.

Deserves MORE than a footnote.

conservative / liberal / libertarian

Liberal and libertarian don't mean the same thing in Europe as in America; keep that in mind when writing for international audiences. (Very roughly speaking, a European liberal is a moderate version of an American libertarian, and an American liberal is a moderate version of a European libertarian.)

Do European and American liberals advocate different policies, or is it just that the political spectrum in both places is different so the same policies appear at different relative positions? As far as I can tell, while this was true in the 19th century, Europe has almost completely adopted the American use of the word. Here are some examples (is there a way to get markdown to work with links that end in parentheses?): * * * *
If I understand correctly, in America liberal is essentially a synonym of ‘(moderate) left-winger’, and hence the antonym of conservative or ‘(moderate) right-winger’ wrt social values, though they often are in favour of greater economic regulation (e.g. the US Democratic Party), whereas in Europe liberals are those who favour greater economic freedom, though they are often conservative wrt social values (e.g. the Italian centre-right). Among mainstream parties, it appears to me to be the case both in Europe and in America that the political spectrum concentrates along a line with positive slope in the Political Compass, i.e. those who value free capitalism also value traditional social values and those who value economic equality also value social freedom (and liberal appear to refer to different directions along that line in Europe and in America), but Europe has (or should I say “used to have”?, I was not aware of the parties you mentioned) fewer extremists east of that line and America has fewer extremists west of it, AFAICT. (28 and 29 being the hex codes for ( and ) respectively).
David Althaus:
FWIW I'm from Germany and tend to agree with the above comment.
Would have upvoted just for this.
Alternately, note that the escape character in markdown is "\". Putting that before the (first) closing parenthesis works fine.
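For anyone hitting the same problem, here's a sketch of both workarounds side by side (the Wikipedia URL is just an illustrative example of a link ending in a parenthesis):

```
[Sting (musician)](https://en.wikipedia.org/wiki/Sting_\(musician\))
[Sting (musician)](https://en.wikipedia.org/wiki/Sting_%28musician%29)
```

Both lines render as the same link. The first backslash-escapes the parentheses so markdown doesn't treat the first ")" as the end of the link destination; the second percent-encodes them, which the server decodes (28 and 29 being the hex codes for "(" and ")").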
In short, 'liberal' in the US is merely the opposite of 'conservative', matching the usage of "He was liberal with his praise"; 'liberal' in Europe for the most part retains the meaning specified by 'classical liberal' in the US - "in favor of individual liberty".
It looks to me like it means something more specific than just the opposite of "conservative". For example, this article has a header "opposition to socialism". I'm aware that US liberals are less conservative than the US spectrum and that European liberals are more in favor of individual liberty than the European spectrum, but before concluding they're different, you'd first need to rule out the hypothesis that it's because the US spectrum is more conservative and the European spectrum is less in favor of individual liberty. ETA: I don't think this is the whole explanation, but I think it's a large part of it.
The thing is, "less conservative" doesn't actually mean anything in the US. "Conservative" and "liberal" are just pointers to the Republican and Democratic parties, respectively, which in turn are semi-permanent coalitions of people with vastly different (and often incompatible) ideologies, that end up being used for color politics. There isn't really a spectrum, but you can pretend there is - if you have 390 Green beliefs and 100 Blue beliefs, then you're clearly a Moderate Green (aquamarine?). Whereas in most of Europe, parties actually represent ideologies to some extent, and so ideological terms don't get corrupted so much in favor of talking about the platform of a party. This is often because temporary coalitions happen between political parties, instead of within them. England is a notable exception to this - it has more-or-less two parties, and they pretend to fall on a "political spectrum" like the US parties; thus, they even tend to echo US meaningless political rhetoric.
At least in Italy, “It's a complete mess” would be a more accurate (though less precise) description than that.
FWIW, in Australia, there are two main political parties, Liberal and Labor. The Liberals are reasonably close to the Republicans (from what I can glean of US politics), and "liberals" (US meaning) seem to align with Labor or one of the other parties.

A backslash in front of the offending punctuation should fix it.

Thus Eliezer's title for this mentality, "Pretending To Be Wise".

Have we broadened that term to refer to... well, lowercase pretending-to-be-wise in general? In the original post, he used it specifically to refer to those who try to signal wisdom by neutrality. (Though I did notice he used it in the broader sense in HPMoR. Is it thus officially redefined?)

Scott Alexander:
Yeah, I was thinking of the HPatMOR usage. It's a good phrase, and it would be a shame not to use it.

Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is.

How can you know? Have you ever tried living a thousand years? Has anybody? If you had a choice between death and infinite life, where infinite does mean infinite, so that your one-billion-year birthday is only the sweet beginning of it, would you find this an easy choice to make? I think that's a big part of the point of people who argue that no, death is not necessarily a bad thing.

To be clear, and because this is not about signalling: I'm not saying I would immediately choose death. I'm just saying: it would be an extraordinarily difficult choice to make.

Funnily enough, triads became a meme format I've seen around recently.

This triad was missed:

"Muslims are terrorists!" / "Islam is a religion of peace." / "Religion is problematic in general but Islam is the worst and I can back that claim up with statistics I read on Sam Harris' blog."

Gwern's TERRORISM IS NOT ABOUT TERROR seems to me like a better candidate for the third.
For the third slot I'd say "religious squabbles are the wrong problem to be thinking about".

It's always a bit of a shock when you're the contrarian and you discover someone meta-contrarianizing you on the outside lane. For example, here's an interesting triad I just recently became aware of:

Base: monogamy is assumed without discussion, cheating is the end of a relationship unless maybe if you confess and swear to never do it again.

Contrarian: open/poly relationship is agreed upon after discussion, it's not cheating if there's no lying.

Meta-con: non-exclusivity is assumed, no discussion. Cheating is whatever, just don't tell me about it.

I held the...

There is a neat paper on this by Feltovich, Harbaugh, and To called "Too Cool for School? Signaling and Countersignaling."

A little more from Harbaugh's home page. Includes the Economist puff piece and an unedited version with some fun-but-unconvincing examples. Fun fact: Harbaugh also made a searchable Chinese dictionary.

I'm a little confused, what purpose does this distinction serve? That people like to define their opinions as a rebellion against received opinion isn't novel. What you seem to be saying is: defining yourself against an opinion which is seen as contrarian sends a reliably different social signal to defining yourself against an opinion which is mainstream, is that a fair assessment? Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

Moreover, if order-of-contrariness doe...


The number of global warming skeptics who jumped straight from "it's not happening" to "well we didn't do it" to "well we can't do anything about it without doing more harm than good" should also...give us a bit of pause.

Actually, that move is perfectly consistent with real skepticism applied to a complex assertion.

To see why, let's consider a different argument. Suppose a True Believer says we should punish gays or disallow gay marriage "because God hates homosexuality". You and I are skeptical that this assertion is rationally defensible so we attack it at what seems like the obvious first link in the logical chain. We say "I doubt that god exists. Prove to me that god exists, and then maybe we'll consider your argument." At this point you can divide the positions into:

"god hates X"/god doesn't exist

Now let us suppose TB actually does it. He does prove that god exists. Does this mean that we skeptics immediately have to accept his entire chain of reasoning? Of course not! We jump to the next weak link. To establish the original claim, one would need to prove god exists and is benevolent and wrote the bible and meant t...

This is a great point that's making me revise my position on some right wing commentators. Still, I'm struggling to think of any actual examples of this behavior in action: we don't actually tell religious people who believe wrong things "well god ain't real deal with it". We point out how their assertions are incompatible with their own teachings, and with the legal system, and scientific findings etc. We don't keep all the flaws we see in their position back in reserve. Moreover most of the serious commentators on the skeptical side of the issue argued only one of the points in question, whether it was the statistics showing warming or the economics implied by it or (cue rim-shot) sunspots, it's only journalists and politicians who skipped from one to the other, which is where I got the impression they'd only looked at the issue long enough to find a contrarian position.
If you've ever said or thought "Okay, just for the sake of argument, I'll assume your point X is correct..." you were holding a position back in reserve. One typical example is arguing with a religious nut that what he's saying is incompatible with the teachings in his own holy book. Suppose he wins this argument (unlikely, I know, but bear with me...) and demonstrates that you were mistaken and no, his holy book really does teach that we should burn scientists as witches. Do you immediately conclude that yes, we should burn scientists as witches? No, because you don't actually hold in high esteem the teachings in his holy book.


Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

However, it seems to me that such a mainstream does exist. Compared to the overall range of ideas that have been held throughout the history of humanity, and even the overall range of ideas that I believe people could hold without being crazy or monstrous, the range acceptable in today's mainstream discourse looks awfully narrow to me. It also seems to me very narrow by historical standards -- for example, when I look at the 19th century books I've read, I see an immensely greater diversity of ideas than one can see from the modern authors that occupy a comparable mainstream range. (This of course doesn't apply to hard sciences, in which the accumulation of knowledge has a monotonic upward trend.)

Of course, like every human society, ours is also shaken by passionate controversies. However, most of those that I observe in practice are between currents that are overall very similar from a broader perspective.

Well I can see that in certain areas, but it depends on where you look. The range of held opinions on the construction of gender, criminal punishment and both the nature and the contents of history is much broader than one hundred years ago. The range of opinions on the morality of war is far narrower. In any case, I meant mainstream in the sense that top 40 is mainstream, not in the sense that music is mainstream. Perhaps orthodoxy would be a better word? In fashion there is usually a single current orthodoxy about how people should dress, so it's easy to identify these circles of heterodoxy and reactionism. Other issues show multiple competing orthodoxies, each of which appears contrary to the other.
Mercy: Frankly, I disagree with that statement so deeply that I'm at a loss how to even begin my response to it. Either we're using radically different measures of breadth, or one (or both?) of us has had a grossly inadequate and unrepresentative exposure to the thought of each of these epochs.

Yes, certain ideas that were in the minority back then have been greatly popularized and elaborated in the meantime, and one could arguably even find an occasional original perspective developed since then. However, it seems evident to me that by any reasonable measure, this effect has been completely overshadowed by the sheer range of perspectives that have been ostracized from the respectable mainstream during the same period, or even vanished altogether.

But in the matters of opinion, there is also a clearly defined -- and, as I've argued, nowadays quite narrow -- range of orthodoxy, and it's common knowledge which opinions will be perceived as contrarian and controversial (if they push the envelope) or extremist and altogether disreputable (if they reach completely outside of it). I honestly don't see on what basis you could possibly argue that the orthodoxy of fashion is nowadays stricter and tighter than the orthodoxy of opinion.
Two hundred years ago, then?
Two hundred years ago, the institutions were very different, and there was much less total intellectual output than a century ago, so it's much harder to do a fair comparison because it's less clear what counts as mainstream and significant.

However, the claim is still flat false at least when it comes to criminal punishment. In fact, in the history of the Western world, the period of roughly two hundred years ago was probably the very pinnacle of the diversity of views on legal punishment. On the one extreme, one could still find prominent advocates of brutal torturous execution methods like the breaking wheel (which were occasionally used in some parts of Europe well into the 19th century), and on the other, out-and-out death penalty abolitionists. (For example, the Grand Duchy of Tuscany abolished the death penalty altogether in 1786, and it was abolished almost completely in Russia around the mid-18th century.) One could also find all sorts of in-between views on all sides, of course. Admittedly, one would be hard-pressed to find someone advocating a prison system of the sort that exists nowadays, but that would have been economically impossible back in those far poorer times (modern prisons cost tens of thousands of dollars per prisoner-year, not even counting the cost of building them).

Depending on what exactly is meant by "the nature and the contents of history," one could certainly point out many interesting perspectives that could be found 200 years ago, but not today anymore. That, however, is a very complex question.

As for gender, well, I'd better not go into that topic. I'll just point out that people have been writing about these matters since the dawn of history, and it's very naive (though sadly common nowadays) to believe that only our modern age has managed to achieve accurate insight and non-evil attitudes about them.
Dawn of history? Now I'm imagining uncovering writing on the wall of caves: "Why women make better hunters" and expressing indignation at under-representation of females in cave paintings of battles.
What Constant said. I meant "history" in the narrow technical sense of the word, i.e. the period since the invention of writing.
You're mixing up history with prehistory.
No I'm not. The counterfactual referred to writing, writing which incidentally happened to be a commentary on the quality of the historical record keeping. (It is not my position that the counterfactual is particularly likely - if anything the reverse.)
People still argue those things nowadays though. Any remotely salacious criminal story has hacks crawling out of the woodwork to gloat about how the perpetrators will be raped, and the current Attorney General has deliberately delayed introduction of mechanisms to clamp down on the practice. For a long time one of the most popular proposals out of Britain's "let the public suggest policies" initiative was to send paedophiles to Iraq as human mine detectors. And you're missing the major reason for the increase in variety of criminal punishments, which is the increase in the number of non-violent crimes. I don't think I'll run too much risk of embarrassing myself if I suggest that mephedrone clinics weren't considered an alternative to jail time 100 years ago.

As to gender, I was under the impression that radically post- and anti-gender views like those expressed by Julie Bindel and Donna Haraway were novel; if there are 19th-century authors with similar viewpoints I'd be happy to hear of them. Again, this is an issue where I don't see any dead viewpoints, so even small increases in radicalness increase the general width of ideas held.

It strikes me, though, from the prison issue that our differences are mostly over what qualifies a belief as respectable. There are many beliefs that are no longer taken seriously by liberal academics; if that's what you mean by mainstream, then I agree the 19th century showed a much broader range of opinion than ours. Getting back to my original point, just about everything in the OP is within the range of orthodoxy of public opinion, and everything except "Obama is a Muslim" within the academic one, and yet they can be modeled as contrary to one another.