Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The "Outside The Box" Box

WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky

Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool?

As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen coined the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who, unlike the established money of his day, took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money was so secure in its position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for poor people if they didn't make it blatantly obvious that they had expensive things.

The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects became a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche.

This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, wait on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice.

In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.[1]

If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well.


Pretending To Be Wise

Let's go back to Less Wrong's long-running discussion on death. Ask any five-year-old child, and ey can tell you that death is bad. Death is bad because it kills you. There is nothing subtle about it, and there does not need to be. Death seems bad to pretty much everyone on first analysis, and what it seems, it is.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than the cost, they are hard to notice. It takes a particularly subtle and clever mind to think them up. Any idiot can tell you why death is bad, but it takes a very particular sort of idiot to believe that death might be good.

So pointing out this contrarian position, that death has some benefits, is potentially a signal of high intelligence. It is not a very reliable signal, because once the first person brings it up everyone can just copy it, but it is a cheap signal. And to the sort of person who might not be clever enough to come up with the benefits of death themselves, and who only notices that wise people seem to mention that death can have benefits, it might seem super extra wise to say death has lots and lots of great benefits, and is really quite a good thing, and if other people should protest that death is bad, well, that's an opinion a five-year-old child could come up with, and so clearly that person is no smarter than a five-year-old child. Thus Eliezer's title for this mentality: "Pretending To Be Wise".

If dwelling on the benefits of a great evil is not your thing, you can also pretend to be wise by dwelling on the costs of a great good. All things considered, modern industrial civilization - with its advanced technology, its high standard of living, and its lack of typhoid fever - is pretty neat. But modern industrial civilization also has many costs: alienation from nature, strains on the traditional family, the anonymity of big-city life, pollution and overcrowding. These are real costs, and they are certainly worth taking seriously; nevertheless, the crowds of emigrants trying to get from the Third World to the First, and the lack of any crowd in the opposite direction, suggest the benefits outweigh the costs. But in my estimation - and speak up if you disagree - people spend a lot more time dwelling on the negatives than on the positives, and most people I meet coming back from a Third World country have to talk about how much more authentic its way of life is and how much we could learn from it. This sort of talk sounds Wise, whereas talk about how nice it is to have buses that don't break down every half mile sounds trivial and selfish.

So my hypothesis is that if a certain side of an issue has very obvious points in support of it, and the other side of an issue relies on much more subtle points that the average person might not be expected to grasp, then adopting the second side of the issue will become a signal for intelligence, even if that side of the argument is wrong.

This only works in issues which are so muddled to begin with that there is no fact of the matter, or where the fact of the matter is difficult to tease out: so no one tries to signal intelligence by saying that 1+1 equals 3 (although it would not surprise me to find a philosopher who says truth is relative and this equation is a legitimate form of discourse).

Meta-Contrarians Are Intellectual Hipsters

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145.[2] People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death. Instead, they are at the level where they want to differentiate themselves from the somewhat smarter people who think the side benefits to death are great. They are, basically, meta-contrarians, who countersignal by holding opinions contrary to the contrarians'. And in the case of death, this cannot but be a good thing.

But just as contrarians risk becoming too contrary, moving from "actually, death has a few side benefits" to "DEATH IS GREAT!", meta-contrarians are at risk of becoming too meta-contrary.

All the possible examples here are controversial, so I will just take the least controversial one I can think of and beg forgiveness. A naive person might think that industrial production is an absolute good thing. Someone smarter than that naive person might realize that global warming is a strong negative to industrial production and desperately needs to be stopped. Someone even smarter than that, to differentiate emself from the second person, might decide global warming wasn't such a big deal after all, or doesn't exist, or isn't man-made.

In this case, the contrarian position happened to be right (well, maybe), and the third person's meta-contrariness took em further from the truth. I do feel like there are more global warming skeptics among what Eliezer called "the atheist/libertarian/technophile/sf-fan/early-adopter/programmer empirical cluster in personspace" than among, say, college professors.

In fact, very often, the uneducated position of the five-year-old child may be deeply flawed and the contrarian position a necessary correction to those flaws. This makes meta-contrarianism a very dangerous business.

Remember, most everyone hates hipsters.

Without meaning to imply anything about whether any of these positions are correct,[3] the following triads come to mind as connected to an uneducated/contrarian/meta-contrarian divide:

- KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"
- misogyny / women's rights movement / men's rights movement
- conservative / liberal / libertarian[4]
- herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson
- don't care about Africa / give aid to Africa / don't give aid to Africa
- Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman[5]

What is interesting about these triads is not that people hold the positions (which could be expected by chance) but that people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy[6] - and that people identify with these positions to the point where arguments about them can become personal.

If meta-contrarianism is a real tendency in over-intelligent people, it doesn't mean they should immediately abandon their beliefs; that would just be meta-meta-contrarianism. It means that they need to recognize the meta-contrarian tendency within themselves and so be extra suspicious and careful about a desire to believe something contrary to the prevailing contrarian wisdom, especially if they really enjoy doing so.


Footnotes

1) But what's really interesting here is that people at each level of the pyramid don't just follow the customs of their level. They enjoy following the customs, it makes them feel good to talk about how they follow the customs, and they devote quite a bit of energy to insulting the people on the other levels. For example, old money call the nouveau riche "crass", and men who don't need to pursue women call those who do "chumps". Whenever holding a position makes you feel superior and is fun to talk about, that's a good sign that the position is not just practical but signaling-related.

2) There is no need to point out just how unlikely it is that such a number is correct, nor how unscientific the survey was.

3) One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, "5 year old child / pro-death / transhumanist" is a triad, and "warming denier / warming believer / warming skeptic" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

4) This is my solution to the eternal question of why libertarians are always more hostile toward liberals than toward conservatives, even though they have just about as many points of real disagreement with the conservatives.

5) To be fair to Patri, he admitted that those two posts were "trolling", but I think the fact that he derived so much enjoyment from trolling in that particular way is significant.

6) Worth a footnote: I think in a lot of issues, the original uneducated position has disappeared, or been relegated to a few rednecks in some remote corner of the world, and so meta-contrarians simply look like contrarians. I think it's important to keep the terminology, because most contrarians retain a psychology of feeling like they are being contrarian, even after they are the new norm. But my only evidence for this is introspection, so it might be false.


Comments (359)

I also recently noticed this triad:

Seek sex + money / pursue only pure truth and virtue / seek sex + money

To be fair, I think that this triad is largely a function of the sort of society one lives in. It could be summarized as "submit to virtuous social orders, seek to dominate non-virtuous ones if you have the ability to discern between them"

EditedToAdd: I think it's more along the lines of: people in the third stage have acquired and digested all the low-hanging and medium-hanging fruit that those in the second stage are struggling to acquire, so that advancing further is now really hard. So they now seek sex and money/power partly because acquiring those will (in the long run) help them further advance in the areas they have currently put on hold, and partly because of course it's also nice to have them.
AlexanderRM: Could anyone elaborate on this? All the ones listed in the article seem fairly obvious or well-explained, but nothing jumps out at me on this one. I think the problem is that I don't see what positions these are occupying or signaling: the clothing stuff is about wealth, while all the political ones are about intelligence (apparent intelligence, specifically). My assumption is that the first is someone who has very little money and the last is someone who has a lot, but then I'm not sure where the middle one would be. That, and perhaps that Yvain didn't list any distinguishing features between the first and last ones.

I'm noticing now that all the countersignaling ones tend to be slightly different - I'm sure the old rich didn't wear the exact same things as the poor, but rather nicer but less showy clothes. All the political examples have the third-stage ones usually acknowledging the existence of and problems with the lowest stage, often with significant differences. Likewise, hipsters have a lot of distinctly hipster traits that don't make them look like any particular non-mainstream group, although my knowledge of hipsters comes almost entirely from jokes about hipsters rather than having seen the phenomenon much.

Implementing your suggestion is easy. Just keep going "meta" until your opinions become stupid, then set meta = meta - 1.

There's an art to knowing when;

Never try to guess.

Toast until it smokes & then

20 seconds less.

I'm reminded of some "advice" I read about making money in the stock market:

Buy a stock, wait until it goes up, and then sell it. If it doesn't go up, then don't have bought it.

That strategy requires an impossible action in the case that the stock does not go up.

That comment made me smile. I didn't upvote it, but I just hid a paperclip, making the moment when I'll have to buy another box that much closer.

edit: actually, I wrote the above before I actually did it. But when I looked in the place I expected to find paperclips, I didn't find any, making the probability that I'll buy paperclips in the near future somewhat higher. So it's all good.


One more time: the fact that those beliefs are in an order does not mean some of them are good and others are bad. For example, "5 year old child / pro-death / transhumanist" is a triad, and "warming denier / warming believer / warming skeptic" is a triad, but I personally support 1+3 in the first triad and 2 in the second. You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

Well worth stressing.

It's possible to go meta on nearly any issue, and there are a lot of meta-level arguments - group affiliation, signaling, rationalization, ulterior motives, whether a position is contrarian or supported by the majority, who the experts are and how much we should trust them, which group is persecuted the most, straw man positions and whether anybody really holds them, slippery slopes, different ways to interpret statements, who is working under which cognitive bias ...

Which is why I prefer discussions to stick to the object level rather than go meta. It's just too easy to rationalize a position in meta, and to find convincing-sounding arguments as to why the other side mistakenly disagrees with you. And meta-level disagreements are more likely to persist in the long run, because they are hard to verify.

Sure, meta-level arguments are very valuable in many cases; we shouldn't drop them altogether. But we should be very cautious while using them.

Going meta often introduces burdensome details. This will only lead you closer to truth when your epistemic rationality is strong enough to shoulder the weight.

minusdash: That's a triad too: naive instinctive signaling / signaling-aware people disliking signaling / signaling is actually a useful and necessary thing.

I think this post speaks of an interesting signaling element in societal dialectics. Let's call your hypothesis the "contrarian signaling" hypothesis. But to me, your post also hints at a couple other hypotheses behind this behavior.

The first hypothesis is the mundane one: that people end up in these groups with positions contrary to other positions because those are just the positions that are more plausible to them, and they choose their subcultures out of their actual tastes. The reason people divide themselves into groups with contrary views is that people have different phenotypes. I'm sure you've already thought of it, but I want to say a little more about it.

Under this hypothesis, hipsters are hipsters primarily because they like retro clothes (and other aspects of the culture). They would have worn these same clothes back when they were in fashion, whereas true contrarians wouldn't have. This might be easier to imagine with another overlapping subculture: hippies. Hippies don't idolize the 60's to be contrarian; they idolize the 60's because they like the ideals of the 60's and feel nostalgic for them.

Now, you may say that the 60's were a contrarian time (w...

But to me, your post also hints at a couple other hypotheses behind this behavior.

My reading of the post was not so much that it proposed contrarianism as an explanation for other cultural divisions, but that people's inclination toward a given level of contrarianism is itself a cultural division. We don't need to hypothesize about why people are meta-contrarians; we're defining them by the habit of being meta-contrary.

However, your hypotheses are still interesting in their own right. I predict that, were we to run your experiments, the first one would tend to describe the early adopters of a given subculture--the first hipster actually liked those dumb glasses, etc.--and later members would increasingly be described by the latter.

This is roughly what Gladwell's Tipping Point is about, actually.

check out this other cool belief Y!

I think that this is how all debates (and evangelism) should sound.

FrogSaga: The account of the nouveau riche's ostentatious behavior and appearance, compared to the relatively subtle expressions exhibited by the old-money generation, has causes and explanations far beyond "countersignaling". I do not mean to say that countersignaling doesn't play a part; however, it's a small facet and not nearly as important as other factors. (I realize that this may come off as overly nit-picky or outright derailing. However, as the bit I am critiquing is one of the foundational points of your article, I feel there is value in calling attention to it.)

You did not account for the nouveau riche generation's updated social conditioning, such as the increase in the volume and effectiveness of mass marketing. It's important to know what sort of films, books, advertising trends, etc. were prevalent and popular during the nouveau riche's formative years. What sort of values became most important in society? So much changed in people psychologically with the rise of consumer culture that it is impossible to track human behavior unless we take that rather sudden cultural evolution into account.

A person does not need to be countersignaling when she or he identifies with a particular demographic. A very simple example: the child with enormous wealth watches the same cartoons as the middle-class child and learns a similar set of social standards and values, and both children remain in a similar marketing demographic as they age. When the wealthy child becomes an adolescent, she or he will still attribute value to certain types of behaviors and appearances.
[anonymous]: I have a strong urge to signal my difference from the Less Wrong crowd. Should I be worried?

Here's a different hypothesis that also accounts for opinions reverting in the direction of the original uneducated position. Suppose "uneducated" and "contrarian" opinion are two independent random (e.g. normal) variables with the same mean representing the truth (but maybe higher variance for "uneducated"); and suppose what you call "meta-contrarian" opinion is just the truth. Then if you start from "contrarian" it's more likely that "meta-contrarian" opinion will be in the direction of "uneducated" than in the opposite direction, simply because "uneducated" contains nonzero information about where the truth is. I think you can also see this as a kind of regression to the mean.
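This geometric argument can be checked with a quick Monte Carlo sketch (my own illustration, not from the comment; the spread values are arbitrary assumptions). Put the truth at 0, draw "uneducated" and "contrarian" opinions as independent normals centered on the truth with the uneducated one noisier, and count how often a step from the contrarian position toward the uneducated one is also a step toward the truth:

```python
import random

def simulate(trials=100_000, sigma_uneducated=2.0, sigma_contrarian=1.0, seed=0):
    """Fraction of trials where moving from the 'contrarian' opinion toward the
    'uneducated' opinion also moves toward the truth (located at 0)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        u = rng.gauss(0.0, sigma_uneducated)   # uneducated opinion
        c = rng.gauss(0.0, sigma_contrarian)   # contrarian opinion
        # Same sign means the step toward u and the step toward 0 point the same way.
        if (u - c) * (0.0 - c) > 0:
            hits += 1
    return hits / trials

frac = simulate()
print(f"step toward 'uneducated' is also a step toward truth in {frac:.1%} of trials")
```

The fraction comes out above one half, as the comment predicts: given a contrarian draw c > 0, the truth lies in the negative direction, and any mean-zero uneducated draw satisfies P(U < c) > 1/2.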

VAuroch: I don't see why we should expect the random variables to be based around "truth". I'd believe in a common centerpoint, but I think it would be more usefully labeled "human-intuitive position" than "truth".
AlexanderRM: It seems to me that the uneducated-person opinion would be the "human-intuitive position", and the educated-person opinion would be shifted off from that, with a tendency to be somewhere in the direction of truth. The uneducated-person opinion won't always be constant across different times and cultures (Death is Bad is probably a universal for 5-year-olds; racism might be pretty universal outside of isolated groups with populations too small to have any racial diversity), so I don't think it will usually be an inherent position.

I think steven0461's statement makes some sense, though, if you talk about the average position among many different issues, and if you only look at issues where things like evidence and human reasoning can tell you about the truth. I expect that the uneducated opinion will appear distributed randomly around the truth (although not absurdly far from it, when you consider the entirety of possibility-space), and the educated opinion will diverge from it in a way that will usually be toward what the evidence supports, but often overshooting, undershooting, or going far to the side. Likewise the third-stage opinion should diverge from the educated opinion in a similar manner, except that by definition it will be in the rough direction of the original position, or we'd just call it a more radical version of the educated position.

However, there seems to be a MAJOR potential pitfall in reasoning about where they're located, since all the examples listed tend to align politically (roughly conservative/liberal/LessWrong-type). So trying to reason by looking at those examples, seeing which one is true, and then trying to derive a theory on the tendencies involved will tend to give you a theory which supports your position being right.
Paul Crowley: I think that this only works if positions are in one dimension. If they are in many dimensions, then I suspect that the truth and the uneducated opinion are on the same side of the contrarian opinion as often as they are on opposite sides. EDIT: I no longer think the above makes any sense. I'm tired, sorry!
Will_Newsome: I'm a little sad that I've integrated this pretty thoroughly into my epistemology, because it's a very good point and yet most people probably missed this comment.
shokwave: Thank you for commenting and bringing this to my attention. This also makes for a fantastic "shut down the contrarian" response when your meta-contrarianism is questioned.

One element of meta-contrarian reasoning is as follows. Consider a proposition P that is hard for a layperson to assess. Because of this difficulty, an individual must rely on others for information. Now, a reasonable layperson might look around, listen with an open mind to all the arguments, and weigh them to assign a probability to P.

The problem is that certain propositions have large corps of people whose professions depend on the proposition being true, but no counterforce of professional critics. So there is a large group of people (priests) who are professionally committed to the proposition "God exists". The existence of this group causes an obvious bias in the layperson's decision algorithm. Other groups, like doctors, economists, soldiers, and public school teachers, have similar commitments. Consider the proposition "public education improves national academic achievement." It could be true, it could be false - it's an empirical question. But all public school teachers are committed to this proposition, and there are very few people committed to the opposite.

So meta-contrarians explicitly correct for this kind of bias. I don't necessarily think that the public school proposition is false, but it should be thoroughly examined. I don't necessarily think that the nation would be safer if we abolished the Army and Marine Corps, but it might be.

The problem is that certain propositions have large corps of people whose professions depend on the proposition being true, but no counterforce of professional critics.

This really is a very good point.

Yet another thought-provoking post from Yvain.

I've implicitly noticed the meta-contrarian trend on Less Wrong and to a lesser extent in SIAI before, and I think it's led me to taking my meta-meta-contrarianism a little far sometimes. I get a little too much enjoyment out of trolling cryonicists and libertarians: indeed, I get a feeling of self-righteousness because it seems that I'm doing a public service by pointing out what appears to be a systematic bias and flaw of group epistemology in the Less Wrong belief cluster. This feeling is completely disproportionate to the extent that I'm actually helping: in general, the best way to emphasize the weaker points of an appealing argument isn't to directly troll the person who holds it. Steve Rayhawk is significantly better than me in this regard. So thanks, Yvain, for pointing out these different levels of meta and how the sense of superiority they give can lead to bad epistemic practice. I'll definitely check for signs of this next time I'm feeling epistemically self-righteous.

Relsqui: A friend of mine likes to say that if you find that your personal opinion happens to align perfectly with what popular culture tells you to think, you should examine that opinion really closely to make sure it's really yours. It's a similar heuristic to the self-righteousness one, applied specifically to the first-level or "uninformed" position (since "uninformed" is really a lot closer to "only informed subconsciously, by local culture and media").

Belatedly, a quotation to hang at the top of the post:

There is a great difference between still believing something and believing it again. Still to believe that the moon affects the plants reveals stupidity and superstition, but to believe it again is a sign of philosophy and reflection.

-- Georg Christoph Lichtenberg, 1775

Here's my alternative explanation for your triads which, while obviously a caricature, is no more so than yours and I think is more accurate: uneducated / academic / educated non-academic.

Essentially your 'contrarian' positions are the mainstream positions you are more or less required to hold to build a successful academic (or media) career. Some academics can get away with deviation in some areas (at some cost to their career prospects) but relatively few are willing to risk it. Intelligent, educated individuals who have not been subject to excessive exposure to academic groupthink are more likely to take your meta-contrarian positions.

See also Moldbug's thoughts on the University.

nick012000: Seems to me like a (hopefully Friendly) seed AI is more likely to provide the "Schelling point" that'd provide an alternative to the modern US government than any sort of reactionary "antiversity". EDIT: Come to think of it, a libertarian space society could probably do it, too, much the same way as the Soviet Union always had "surrender to the US" as an eject button.

A while back the "Steveosphere" had a list of items for which "the masses display more common sense than the smarties do". These suggest that they think they have located Yvain-clusters of the following type:

  1. Troglodyte position.
  2. Liberal position.
  3. Troglodyte position held for sophisticated reasons.
AlexanderRM: ...many of those questions are rather odd. I went in expecting things like "are the tides controlled by the oceans" - questions phrased in a way that sounds stupid but are actually correct (or phrased as "is it scientific to say...") - which would have shown that deliberately avoiding stupid-sounding statements can lead smart people to make incorrect statements. And some, like "Genes play a major role in determining personality" and "Things for blacks in the US have improved over time", fall into that category. I particularly liked "Whites are hurt by affirmative action policies that favor blacks".

However, many of the questions outright contain "should" statements, where you actually cannot say that the answers the "smarties" gave were factually incorrect, because that requires getting into our goals and morality, or at least talking about very complicated things from that angle.

Global warming was the least controversial example you could think of? Seriously?

Well, the example was to show that there are certain meta-contrarian views held by a big part of this community which are trivially wrong and proof that they have gone too far. Given that restriction, what less controversial example would you have preferred?

I really would have liked to use the racism example, because it's most elegant. The in-group bias means people will naturally like their own race more than others. Some very intelligent and moral people come up with the opposing position that all races are equal; overcoming one's biases enough to believe this becomes (rightly) correlated with high intelligence and morality. This contrarian idea spreads until practically everyone believes it and signals it so much as to become annoying and inane. This creates a niche for certain people to signal their difference to the majority by becoming pro-racial differences. But taken too far, this meta-contrarian position could easily lead to racism.

But any post that includes a whole paragraph on racism automatically ends up with the comments entirely devoted to discussing racism, and the rest of the post completely ignored. Feminism would also have worked, but I would have to be dumb as a rock to voluntarily bring up gender issues on this blog. Global warming seemed like something that Less Wrong is generally willing to admit is correct and doesn't care that much about, while still having enough of an anti-global-warming faction to work as an example.

What less controversial example should have been used instead?

5Relsqui11yI've been lurking and reading for a few days--interested in a few things, thinking about a few things, but not quite ready to jump in and start participating yet. This comment cracked me up enough to make an account and upvote it.

Whenever holding a position makes you feel superior and is fun to talk about, that's a good sign that the position is not just practical, but signaling related.

Readers be warned: Internalizing this insight may result in catastrophic loss of interest in politics.

Perhaps for some people -- but on the other hand, it creates an even higher intellectual challenge to achieve accurate understanding. Understanding hard and complicated things in math and science is extremely challenging, but ultimately, you still have fully reliable trusted authorities to turn to when you're lost, and you know they won't lie and bullshit you. In politics and heavily politicized fields in general, there is no such safety net; you are completely on your own.

1AlexanderRM6yI've known politics is largely about status signaling (which hasn't caused any reduction of interest in issues which our society politicizes, however, just in elections and the like) since I started reading LW, but I just realized that reading LessWrong makes me feel superior (I've noticed this before, but it seems hard to avoid) and is fun to talk about. That's horrifying.

conservative / liberal / libertarian

No way, I don't buy this one at all. I find that most little kids are essentially naive liberals. We should give poor sick people free medicine! We should stop bad polluters from hurting birds and trees! Conservatism/libertarianism is the contrarian position. Everything has a cost! There are no free lunches! Managerial-technocratic liberals are the meta-contrarians. So what about the costs? We've got 800 of the smartest guys from Yarvard and Oxbridge to do cost-benefit analyses for us!

Of course there are meta-meta-contrarians as well: reactionaries, meta-libertarians (Patri Friedman is a good example of a metalibertarian IMO), anarchists, etc.

It's contrarians all the way down.

I was thinking more in terms of conservative values like "My country is the best" and "Our enemies are bad people who hate our freedom", but your way makes a lot of sense too.

Although it's worth noting that the things you say are obvious even to little kids are all things no one had even thought of a hundred years ago. Rachel Carson and Silent Spring are remembered as iconic because they kick-started an environmentalist movement that just didn't really exist before the second half of the 20th century (although Thoreau and people like him get honorable mention). The idea of rich people paying to give poor sick people free medicine would have gotten you laughed out of most socially stratified civilizations on the wrong side of about 1850.

But I don't want to get too bogged down in which side is more contrarian, because it sounds too close to arguing whether liberalism or conservatism is better, which of course would be a terribly low status thing to do on a site like this :)

I think it was probably a mistake to include such large-scale politics on there at all. Whether a political position seems natural or contrarian depends on what social context someone's in ...

I think you're right about the chronological sequence of kids as "naive liberals" to adults as conservative (more so than the kids, anyway), but not about the rationale. Positioning oneself on the contrarian hierarchy is about showing off that your intellect is greater than the people below you on it. It's the rare adult who feels a need to explicitly demonstrate their intellectual superiority to children--but the common adult who has a job and pays taxes and actually ever thinks about the cost of things, as opposed to the kids, who don't need to.

In short, adults don't oppose free medicine etc. to be contrary to the position of naive children; they oppose it because they're the ones who'd have to pay for it.

9kodos9611yI think the takeaway from this is just that classification of phenomena into these triads is a very subjective business. That's not necessarily a bad thing, since the point of this (if I'm reading Yvain correctly) is not to determine the correctness of a position by its position in a triad, but simply to encourage people to notice when their own thinking is motivated by a desire to climb the triad, rather than pursue truth, and to be skeptical of yourself when you detect yourself trying to triad-climb.
5Mercy11yAh, thanks, that position makes more sense to me now. You mean what most people call social democracy, not liberalism as it is understood outside the US? Because at least in Britain, libertarians align with liberals/conservatives against socialists and social democrats. But to be honest, they are a good example of a flaw in the setup, which is that people tend to define themselves against imaginary enemies that believe everything they do only backwards, rather than naively dispute everything their enemy says. So libertarians are more likely to complain about "statists" than come out in favour of taxes or wars because socialists are against them.

I think it's worth noting explicitly (though you certainly noted it implicitly) that meta-contrarianism does not simply agree with the original, non-contrarian opinion. Meta-contrarianism usually goes to great lengths to signal that it is indeed the level above, and absolutely not the level below, the default position.

An example, from a guy who lives in a local hipster capital:

People not interested in (or just unskilled at) looking cool will mostly buy their clothing at places like Wal-Mart. The "contrarian" cluster differentiates itself by shopping at very expensive, high-status stores (dropping $150 on a pair of jeans, say). Your hipster crowd does not respond to this by returning to Wal-Mart. Instead, they get very distinct retro or otherwise unusual clothing from thrift stores and the like, places that someone who simply, actually didn't care about signaling would never bother to seek out.

The counter-counter culture often cares just as much about differentiating itself from the culture as it does from the counter-culture. The nouveau riche may not have to worry about this, if in their case it comes automatically, but other groups do.

3evgenit11yOf course they do, otherwise their signalling would be indistinguishable from the culture's, and thus useless.
1sark11yThere are other dimensions in which the counter-counter people can signal their difference from the non-counters (e.g. hipsters are already living in upper class neighborhoods, have upper class mannerisms etc.). This makes it possible for a simple reversal in the uninformed/contrary/meta-contrary dimension to differentiate them from the counters.
4Spurlock11yAbsolutely. This is the sort of thing I was referring to in the last sentence. The point being, just because they don't seem to go to great pains to distinguish themselves from the non-counters, doesn't mean they're only trying to differentiate from one group: status above both is still the goal, even if they don't have to actively "seek" it.
1sark11yNo problem. I was just providing a live example of metacontrarianism ;)

Could it be that the entire history of philosophy and its "thesis, antithesis, synthesis" recurring structure is an instance of this? Not to mention other liberal arts, and the development of the cycles of fashion.

According to the survey, the average IQ on this site is around 145^2

I can't possibly have been the only one to have been amused by this.

(Well, doesn't Clippy claim to be a superintelligence?)

According to the survey, the average IQ on this site is around 145

I can't possibly have been the only one to have been amused by this.

The really disturbing possibility is that average people hanging out here might actually be of the sort that solves IQ tests extremely successfully, with scores over 140, but whose real-life accomplishments are far below what these scores might suggest. In other words, that there might be a selection effect for the sort of people that Scott Adams encountered when he joined Mensa:

I decided to take an I.Q. test administered by Mensa, the organization of geniuses. If you score in the top 2% of people who take that same test, you get to call yourself a “genius” and optionally join the group. I squeaked in and immediately joined so I could hang out with the other geniuses and do genius things. I even volunteered to host some meetings at my apartment.

Then, the horror.

It turns out that the people who join Mensa and attend meetings are, on average, not successful titans of industry. They are instead – and I say this with great affection – huge losers. I was making $735 per month and I was like frickin’ Goldfinger in this crowd. We had a guy who was

...

I should clarify that I was specifically referring to the interesting placement of that superscript 2. :-)

EDIT: Though actually, this is probably the perfect opportunity to wonder if the reason people join this community is that it's probably the easiest high-IQ group to join in the world: you don't have to pass a test or earn a degree; all you have to do is write intelligent blog comments.

Oh, then it was a misunderstanding. I thought you were (like me) amused by the poll result suggesting that the intelligence of the average person here is at the 99.865th percentile.

(Just to get the feel for that number, belonging to the same percentile of income distribution in the U.S. would mean roughly a million dollars a year.)

5blogospheroid11yHmm.. Isn't the intelligence distribution more like a bell curve and the distribution of income more like a power law?
7BillyOblivion11yBoth can be power-law or Gaussian depending on your "perspective". There are roughly as many people with an IQ over 190 as there are people with an income over 1 billion USD per annum. By roughly I mean an order of magnitude. Generally IQ is graphed as a Gaussian distribution because of the way it's measured--the middle of the distribution is defined as 100. Income is raw numbers. (edited to move a scare quote)
3Relsqui11yUpvoted for the quality of the analogy, although I also agree with you.
1komponisto11yWell I'm also amused by that, to be sure.
0faul_sname8yAnd since the correlation between the two is about 0.4, that would suggest an income of 1.2 standard deviations above the mean, or about $80,000 a year in the US, not controlling for age. Controlling for age, I suspect LWers have approximately average income for their level of intelligence (and because regression to the mean is not intuitive, it feels like we should be doing far better than that).
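The arithmetic in the parent comments can be checked directly: an IQ of 145 on an SD-15 scale sits at the 99.865th percentile, i.e. about three standard deviations above the mean, and with an IQ-income correlation of 0.4, the regression-to-the-mean estimate for expected income is 0.4 × 3 ≈ 1.2 standard deviations above the mean. A minimal Python sketch (the bisection-based inverse normal CDF is just to avoid external dependencies; `scipy.stats.norm.ppf` would do the same job):

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_from_percentile(p: float) -> float:
    """Invert the standard normal CDF by bisection (no scipy needed)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

iq_z = z_from_percentile(0.99865)  # IQ 145 on an SD-15 scale -> about +3 SD
income_z = 0.4 * iq_z              # regression toward the mean with r = 0.4

print(f"IQ z-score: {iq_z:.2f}")           # about 3.00
print(f"Expected income z: {income_z:.2f}")  # about 1.20
```

This is only the naive linear-regression estimate, of course; it treats both distributions as normal, which (as the thread notes) is a much worse assumption for income than for IQ scores.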
0Relsqui11yI find this sort of puzzling. There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one). Why is that? Does anybody here specifically seek out high-IQ friends? Do you feel like trying to explain the appeal to me? Intelligence is one of my criteria for my companions, to be sure, but I'm not sure it's in the top three, and I certainly wouldn't settle for it alone. Also, I'm not sure that earning a degree is harder than writing an intelligent blog post. Not for everyone, anyway.

There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one)

That's not the sense of IQ that I mean; rather, I mean the underlying thing which that ability is supposed to be an indicator of.

(My guess would be that this underlying thing is probably something like "richness of mental life".)

Does anybody here specifically seek out high-IQ friends? Do you feel like trying to explain the appeal to me?

My experience suggests that it makes a significant difference to one's quality of life whether the people in one's social circle are close to one's own intelligence level.

Not too long ago I spent some time at the SIAI house; and even though I was probably doing more "work" than usual while I was there, it felt like vacation, simply because the everyday task of communicating with people was so much easier and more efficient than in my normal life.

3Relsqui11ySee my response to cata. I suppose it's possible that I'm merely spoiled in this regard, but I'm not sure. Yes, most of the people I've spent a lot of time with in my life have been some kind of intelligent--my parents are very smart, and I was taught to value intellect highly growing up. But some of the folks who've really made me glad to have them around have been less educated and less well-read than I am, which isn't trivial (I'm a high school dropout, albeit one who likes to do some learning on her own time). I'm thinking particularly of my coworkers at my last job. We worked behind the counter at a dry cleaner. These were not people with college educations, or who had learned much about critical thinking or logic or debate. This is not to say they had below average intelligence--just not particularly higher, either. They were confused as to why I was working this dead-end job with them instead of going to college and making something of myself; I was clearly capable of it. But those people made the job worthwhile. They were thoughtful, respectful, often funny, and supportive. They were good at their jobs--on a busy day, it felt like being part of a well-oiled machine. There isn't one quality in that list you could have traded for outstanding intelligence and made them better people, nor made me happier to be around them. If your point is right, maybe all that means is that my brain is nothing to write home about. But I'm fonder of the theory that there are other qualities that have at least as much value in terms of quality of life. Would you be happy living in a house of smart people who were all jerks?
4komponisto11yOf course not. What caused your probability of my saying "yes" to be high enough to make this question worth asking? I could with more genuine curiosity ask you the following: would you be happy spending your life surrounded by nice people who understood maybe 20% of your thoughts?
1Relsqui11yIt was rhetorical, and meant to support the point that intelligence alone does not make a person worthwhile. I'd rather have more kindness and less intelligence than the reverse. I think it's clear we'd both prefer a balance, though, and that's really all my point was: intelligence is not enough to qualify a person as worthwhile. Which is why social groups with that as the only criterion confuse me. :)
[anonymous]11y 11

Here I go, speaking for other people, but I'm guessing that people at the LessWrong meetup at least met some baseline of all those other qualities, by komponisto's estimation, and that the difference in intelligence allowed for such a massive increase in ability to communicate that it made talking so much more enjoyable, given that ey was talking to decent people.

Each quality may not be linear. If someone is "half as nice" as another person, I don't want to talk to them at half the frequency, or bet that I'll fully enjoy conversation half of the time. A certain threshold of most qualities makes a person totally not worth talking to. But at the same time, a person can only be so much more thoughtful, respectful, funny, supportive, before you lose your ability to identify with them again! That's my experience anyhow - if I admire a person too much, I have difficulty imagining that they identify with me as I do with them. Trust needs some symmetry. And so there are probably optimal levels of friendship-worthy qualities (very roughly by any measure), a minimum threshold, and a region where a little difference makes a big difference. The left-bounded S-curves of friendship.

Then there ...

3Relsqui11yI think this is a really excellent analysis and I agree with just about all of it. I suspect that the difference in our initial reactions had to do with your premise that intelligent people are easier to communicate with. This hasn't been true in my experience, but I'd bet that the difference is the topics of conversation. If you want to talk to people about AI, someone with more education and intellect is going to suit you better than someone with less, even if they're also really nice. I've definitely also had conversations where the guy in the room who was the most confused and having the least fun was the one with the most book smarts. I'm trying to remember what they were about ... off the top of my head, I think it tended to be social situations or issues which he had not encountered. Empathy would have done him more good than education in that instance (given that his education was not in the social sciences).
5[anonymous]11yYour suspicion rings true. Having more intelligence won't make you more enjoyable to talk to on a subject you don't care about! It also may not make a difference if the topic is simple to understand, but still feels worth talking about (personal conversations on all sorts of things). Education isn't the same as intelligence of course. Intelligence will help you gain and retain an education faster, through books or conversation, in anything that interests you. Most of my high school friends were extremely intelligent, and mostly applied themselves to art and writing. A few mostly applied themselves to programming and tesla coils. I think a common characteristic that they held was genuine curiosity in exploring new domains, and could enjoy conversations with people of many different interests. The same was true for most of my college friends. I would say I selected for good intelligent people with unusually broad interests. I still care a great deal for my specialist friends, and friends of varying intelligence. It's easy for me to enjoy a conversation with almost anyone genuinely interested in communicating, because I'll probably share the person's interest to some degree. Roughly, curiosity overlap lays the ground for topical conversation, education determines the launching point on a topic, and intelligence determines the speed.
2wedrifid11yIsn't that what you would expect for most conversations, when all else is equal? This is an effect I expect to see in general, and I attribute it both to self-selection and to causation.
2Relsqui11y... well, it isn't what I do expect, so I guess I wouldn't. The thought never crossed my mind, so I don't really have anything more insightful to say about it yet. Let me chew on it. I suspect that I mostly socialize with people I consider equals.
2komponisto11yActually, I was talking about my two-week stay as an SIAI Visiting Fellow. (Which is kind of like a Less Wrong meetup...) But, yeah.
3wedrifid11yI'm quite curious about what benefits you experienced from your two week visit... anything you can share or is it all secret and mysterious? Not that I am considering applying. If I was I would have had to refrain from telling Eliezer (and probably Alicorn) whenever they are being silly. The freedom to speak one's mind without the need for securing approval is just too attractive to pass up! :)
3LucasSloan11yNeither of these should stop you. Alicorn lives on the other side of the country from the house, and Eliezer is pretty lax about criticism (and isn't around much, anyway).
0wedrifid11yOh, there's the thing with being on the other side of the world too. ;)
2LucasSloan11yThey pay for airfare, you know...
2wedrifid11yDamn you and your shooting down all my excuses! ;) Not that I'd let them pay for my airfare anyway. I would only do it if I could pay them for the experience.
4randallsquared11yFortunately, you appear to be able to rationalize more quite easily. ;)
2komponisto11yPerhaps the most publicly noticeable result was that I had the opportunity to write this post [http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/] (and also this wiki entry [http://wiki.lesswrong.com/wiki/Screening_off]) in an environment where writing Less Wrong posts was socially reinforced as a worthwhile use of one's time. Then, of course, are the benefits discussed above -- those that one would automatically get from spending time living in a high-IQ environment. In some ways, in fact, it was indeed like a two-week-long Less Wrong meetup. I had the opportunity to learn specific information about subjects relating to artificial intelligence and existential risk (and the beliefs of certain people about these subjects), which resulted in some updating of my beliefs about these subjects; as well as the opportunity to participate in rationality training exercises. It was also nice to become personally acquainted with some of the "important people" on LW, such as Anna Salamon, Kaj Sotala, Nick Tarleton, Mike Blume, and Alicorn (who did indeed go by that name around SIAI!); as well as a number of other folks at SIAI who do very important work but don't post as much here. Conversations were frequent and very stimulating. (Kaj Sotala wasn't lying [http://lesswrong.com/lw/2co/how_to_always_have_interesting_conversations/] about Michael Vassar.) As a result of having done this, I am now "in the network", which will tend to facilitate any specific contributions to existential risk reduction that I might be able to make apart from my basic strategy of "become as high-status/high-value as possible in the field(s) I most enjoy working in, and transfer some of that value via money to existential risk reduction". Eliezer is uninvolved with the Visiting Fellows program, and I doubt he even had any idea that I was there. Nor is Alicorn currently there, as I understand.
1[anonymous]11yI hear that the secret to being a fellow is show rigorously that the probability that one of them is being silly is greater than 1/2. Just a silly math test.
1[anonymous]11yAh, you lucky fellow!

There is clearly a demand for organizations which provide opportunities to interact and socialize with people carefully selected for their ability to solve clever puzzles (and whatever else is on the IQ test--I haven't taken a real one).

Really? I don't think that's true; I think people just tend to assume that IQ is a good proxy for general intellectualism (e.g. highbrow tastes, willingness to talk and debate a lot, being well-read.) Since it's easier to score an IQ test than a test judging political literacy, education, and favorite novels, that's what organizations like Mensa use, and that's the measuring stick everyone trots out. Needless to say, it's not a very good one, but it's made its way into the culture.

I mean, even in casual usage, when most people talk about someone's high IQ, they probably aren't talking about focus, memory, or pattern recognition. They're likely actually talking about education and interests.

4Relsqui11yThat's precisely what troubles me. I don't like that we use a term which actually only means the former to refer to how "smart" someone is in a vague, visceral sense--nor the implied equation of either IQ or smartness with utility. I'm not accusing you of that necessarily, it's just a pattern I see in the world and fret about. Actually, it reminds me of something which might make a good article in its own right; I'll ruminate on it for a bit while I'm still getting used to article etiquette.
1[anonymous]11yI definitely agree on this. It's an abused and conflated word, though I don't know if that's more of a cause than an effect of problems society has with thinking about intelligence. I wonder how we could best get people to casually use a wider array of words and associations to distinguish the many different things we mean by "smart".
1Relsqui11yYou've hit an important point here, and not just about the topic in question. Consider body image (we want to see people on TV we think are pretty, but we get our ideas of what's pretty in part from TV) and media violence (we want to depict the world as it really is, but we also want to impart values that will change the world for the better rather than glorifying people and events which change it for the worse). How, in general, do we break these loops? So far, I haven't thought of anything better than choosing to be precise when I'm talking about somebody's talents and weaknesses, so I try to do that.
1cata11yWell, me neither; I think it's a reflection of how people would like to imagine other humans as being much simpler and more homogeneous than they actually are. I look forward to your forthcoming post.
0Relsqui11yThat's reassuring. :) Me too. I don't have a post's worth of idea yet. But there's cud yet to chew. (Ruminate [http://www.etymonline.com/index.php?term=ruminate] has one of my favorite etymologies.)
4NancyLebovitz11yThis surprises me. One explanation for the mismatch between my experience with Mensa and Adams' is that local groups vary a lot. Another is that he's making up a bunch of insults based on a cliche. What I've seen of Mensa is people who seemed socially ordinary (bear in mind, my reference group is sf fandom), but not as intelligent as I hoped. I went to a couple of gatherings-- one had pretty ordinary discussion of Star Trek. Another was basically alright, but had one annoying person who'd been in the group so long that the other members didn't notice how annoying he was-- hardly a problem unique to Mensa. Kate Jones, President of Kadon Games [http://www.gamepuzzles.com/kadon.htm], is a Mensan and one of the more intelligent people I know. I know one other Mensan I consider intelligent, and there's no reason to think I have a complete list of the Mensans in my social circle. I was in Mensa for a while-- I hoped it would be useful for networking, but I didn't get any good out of it. The publications were generally underwhelming-- there were a lot of articles which would start with more or less arbitrary definitions for words, and then an effort to build an argument from the definitions. This was in the 80s, and I don't know whether the organization has changed. Still, if I'd lived in a small town with no access to sf fandom, Mensa might have been the best available choice for me. These days, I'd say there are a lot of online communities for smart people. All this being said, I suspect that IQ tests and the like select for people with mild ADD (look! another question! no need to stay focused on a project!) and against people who want to do things which are directly connected to their goals.

I'd say that the problem is the selection effect for intelligent underachievers. People who are in the top 2% of the population by some widely recognized measure of intellectual accomplishment presumably already have affiliations, titles, and positions far more prestigious than the membership in an organization where the only qualification is passing a written test could ever be. Also, their everyday social circles are likely to consist of other individuals of the same caliber, so they have no need to seek them out actively.

Therefore, in an organization like Mensa, I would expect a strong selection effect for people who have the ability to achieve high IQ scores (whatever that might specifically imply, considering the controversies in IQ research), but who lack other abilities necessary to translate that into actual accomplishment and acquire recognition and connections among high-achieving people. Needless to say, such people are unlikely to end up as high-status individuals in our culture (or any other, for that matter). People of the sort you mention, smart enough to have flashes of extraordinary insight but unable to stay focused long enough to get anything done, likely account for some non-trivial subset of those.

That said, in such a decentralized organization, I would expect that the quality of local chapters and the sort of people they attract depends greatly on the ability and attitudes of the local leadership. There are probably places both significantly better and worse than what you describe.

1komponisto11yI'm not sure about this. I doubt I would do all that well on a Mensa-type IQ test, and I suspect ADD may be part of the reason. (Though SarahC has raised the possibility of motivated cognition interfering with mathematical problem solving [http://lesswrong.com/lw/2q6/compartmentalization_in_epistemic_and/2msz?c=1], which I hadn't really considered.) This, however, I do believe. Despite Richard Feynman's supposedly low IQ score, and Albert Einstein's status as the popular exemplar of high-IQ, my impression (prejudice?) regarding traditional "IQ tests" is that they would in fact tend to select for people like Feynman (clever tinkerers) at the expense of people like Einstein (imaginative ponderers).
3gwern9yWhile I'm passing through looking for something else: http://news.ycombinator.com/item?id=1159719 [http://news.ycombinator.com/item?id=1159719]
0NancyLebovitz11yI was generalizing from one example-- it's easier for me to focus on a series of little problems. If I have ADD, it's quite mild as such things go.
2Relsqui11yThat's fairly analogous to my worries about joining LW [http://lesswrong.com/lw/2pw/the_affect_heuristic_sentiment_and_art/2m0w?c=1]. I was afraid it would be full of extremely intelligent, very dumb people. ;)
3Raw_Power10yHow do you know this isn't the case?
1BillyOblivion11yIntelligence is but one measure of mental ability. One of the critical ones for modern life goes by "Executive Function" http://en.wikipedia.org/wiki/Executive_functions [http://en.wikipedia.org/wiki/Executive_functions]; it seems to be moderately independent of IQ. It could also be called "Self Discipline". It is why really bright kids get lousy grades. Why kids who do well in High School, but never seem to study, tank when they hit college, or when they get out of college and actually have to show up for work clean, neat and on time. I don't CARE if you can solve a Rubik's cube in 38 seconds, I need those TPS reports NOW.
0wedrifid11yIt's correlated with self discipline but it is actually a different ability. In fact, some with problems with executive function compensate by developing excessive self discipline. (Having a #@$%ed up system for dealing with prioritisation makes anxiety based perfectionism more adaptive.)

herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson

Can you link to a Robin Hanson article on this topic so that people who aren't already familiar with his opinions on this subject (read: LW newbies like me) know what this is about?

Or alternately, I propose this sequence:

regular medical care by default / alt-med / regular medical care because alt-med is unscientific

regular medical care by default / alt-med / regular medical care because alt-med is unscientific

This is more in line with the other examples. I second the request for an edit. Yvain, you could add "Robin Hanson" to the fourth slot: it would kinda mess up your triplets, but with the justification that it'd be a funny example of just how awesomely contrarian Robin Hanson is. :D

Also, Yvain, you happen to list what people here would deem more-or-less correct contrarian clusters in your triplet examples. But I have no idea how often the meta-level contrarian position is actually correct, and I fear that I might get too much of a kick out of the positions you list in your triplets simply because my position is more meta and I associate metaness with truth when in reality it might be negatively correlated. Perhaps you could think of a few more-wrong meta-contrarian positions to balance what may be a small affective bias?

4FAWS11yHuh? In all of those examples the unmentioned fourth level is correct and the second and third level both about equally useless.
3Will_Newsome11yHalf-agree with you, as none of the 18 positions are 'correct', but I don't know what you mean by 'useless'. Instead of generalizing I'll list my personal positions:

  • If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it'd result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the 'let's not promote racism' dilemma. I'm not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.
  • No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with third level but hey, I'm pretty ignorant here.
  • What can I say, it's politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the 'Yeah yeah Singularity AI transhumanism wooooo!' meme would get bigger, which would increase existential risk. So uh... never mind, I dunno.)
  • Too ignorant to comment.
  • My oxycodone and antibiotics sure did me good when I got an infection a week ago. My dermatologist drugs didn't help much with my acne. I've gotten a few small surgeries which made me better. Overall conventional medicine seems to have helped me a fair bit and costs me little. I don't even know what Robin Hanson's claims are, though. A link would be great.

My comment was largely tongue in cheek, but:

  • KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"

If I failed to notice that there are scientifically proven genetic differences I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it'd result in a horrible tragedy of the commons situation, which is why most social scientists cooperate on the 'let's not promote racism' dilemma. I'm not a social scientist so I get to defect and study some of the more interesting aspects of human evolutionary biology.

Awareness of genetic differences between races constitutes negative knowledge in many cases; that is, it leads to anticipations that match the outcomes more poorly than they would have otherwise. If everyone suspects that genetically blue-haired people are slightly less intelligent on average for genetic reasons, you want to hire the most intelligent person for a job and after a very long selection process (th... (read more)

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa.

But... but... but saving the world doesn't signal the same affiliations as saving Africa!

On LW, it signals better affiliations!

8NancyLebovitz11yMy impression is that Hanson's take on conventional medicine is that half the money spent is wasted. However, I don't know if he's been very specific about which half.
5Larks11yThe RAND Health Experiment [http://www.overcomingbias.com/2007/05/rand_health_ins.html], a study which he frequently cited, didn't investigate the benefits of catastrophic medical insurance or of that which people pay for from their own pocket, and found the rest useless.
8multifoliaterose11yWhy is giving money to x-risk charities conducive to saving the world? (I don't necessarily disagree, but want to see what you have to say to substantiate your claim.) In particular, what's your response to Holden's comment #12 [http://blog.givewell.org/2010/06/29/singularity-summit/#comment-155806] at the GiveWell Singularity Summit thread [http://blog.givewell.org/2010/06/29/singularity-summit/] ?

Sorry, I didn't mean to assume the conclusion. Rather than do a disservice to the arguments with a hastily written reply, I'm going to cop out of the responsibility of providing a rigorous technical analysis and just share some thoughts. From what I've seen of your posts, your arguments were that the current nominally x-risk-reducing organizations (primarily FHI and SIAI) aren't up to snuff when it comes to actually saving the world (in the case of SIAI perhaps even being actively harmful). Despite and because of being involved with SIAI I share some of your misgivings. That said, I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity, and that the PR issues you cite regarding Eliezer will be negligible in 5-10 years when more academics start speaking out publicly about Singularity issues, which will only happen if SIAI stays around, gets funding, keeps on writing papers, and promotes the pretty-successful Singularity Summits. Also, I never saw you mention that SIAI is actively working on the research problems of building a Friendly artificial intelligence. Indeed, in a few years, SIAI will have begun the ende... (read more)

3multifoliaterose11yReasonable response, upvoted :-).

  • As I said, I cut my planned sequence of postings on SIAI short. There's more that I would have liked to say and more that I hope to say in the future. For now I'm focusing on finishing my thesis.
  • An important point that did not come across in my postings is that I'm skeptical of philanthropic projects having a positive impact on what they're trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell's page on Social Programs That Just Don't Work [http://www.givewell.org/giving101/Social-Programs-That-Just-Dont-Work]. At present I think that it's a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause. See the comment here by prase [http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/2h6x?c=1] which I agree with.
  • On the subject of a proposed project inadvertently doing more harm than good, see the last few paragraphs of the GiveWell post titled Against Promise Neighborhoods [http://blog.givewell.org/2010/07/07/against-promise-neighborhoods/]. Consideration of counterfactuals is very tricky and very smart people often get it wrong.
  • Quite possibly SIAI is having a positive holistic impact - I don't have confidence that this is so, the situation is just that I don't have enough information to judge from the outside.
  • Regarding the time line for AGI and the feasibility of FAI research, see my back and forth with Tim Tyler here [http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/2lf3?c=1].
  • My thinking as to what the most important causes to focus at present are is very much in flux. I welcome any information that you or others can point me to.
•My reasons f
4wedrifid11yIf you had a post on this specifically planned then I would be interested in reading it!
-3timtyler11yIs that what they are doing?!? They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk. One might naively expect such an organisation would typically act so as to exaggerate the risks - so as to increase the flow of donations. That seems pretty consistent with their actions to me. From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter - since they have glaringly-obvious vested interests.

/startrant

They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.

Or, more realistically, the idea that DOOM has a CHANCE of happening any time between NOW and ONE HUNDRED YEARS FROM NOW but that small CHANCE has a large enough impact in EXPECTED UTILITY that we should really figure out more about the problem because someone, not necessarily SIAI might have to deal with the problem EVENTUALLY.

One might naively expect such an organization would typically act so as to exaggerate the risks -- but SIAI doesn't seem to be doing that so one's naive expectations would be wrong. It's amazing how people associate an aura of overconfidence coming from the philosophical positions of Eliezer with the actual confidence levels of the thinkers of SIAI. Seriously, where are these crazy claims about DOOM being SOON and that ELIEZER YUDKOWSKY is the MESSIAH? From something Eliezer wrote 10 years ago? The Singularity Institute is pretty damn reasonable. The journal and conference papers they write are pretty well grounded in sound and careful reasoning. But ha, who would read tho... (read more)

[This comment is no longer endorsed by its author]Reply
-4timtyler11yThat was quite a rant! I hope I don't come across as thinking "the worst" about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders. Really? Really? You actually think the level of DOOM is cold realism - and not a ploy to attract funding? Why do you think that? De Garis and Warwick were doing much the same kind of attention-seeking before the SIAI came along - DOOM is an old school of marketing in the field. You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower - to help it meet its aims. FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean. Anyway, the basic point is that if you are interested in DOOM, or p(DOOM), consulting a DOOM-mongering organisation, that wants your dollars to help them SAVE THE WORLD may not be your best move. The "follow the money" principle is simple - and often produces good results.

FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean.

Right, I said metacontrarian. Although most LW people seem SIAI-agnostic, a lot of the most vocal and most experienced posters are pro-SIAI or SIAI-related, so LW comes across as having a generally pro-SIAI attitude, which is a traditionally contrarian attitude. Thus going against the contrarian status quo is metacontrarian.

You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower - to help it meet its aims.

I'm confused. Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I'm confused as to how this is relevant to the merit of SIAI's purpose. SIAI's never claimed to be fundamentally opposed to having resources.... (read more)

Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.

I don't know about anybody else, but I am somewhat disturbed by Eliezer's persistent use of hyphens in place of em dashes, and am very concerned that it could be hurting SIAI's image.

9Will_Newsome11yAnd I say the same about his use of double spacing. It's an outdated and unprofessional practice. In fact, Anna Salamon and Louie Helm are 2 other SIAI folk that engage in this abysmal writing style, and for that reason I've often been tempted to write them off entirely. They're obviously not cognizant of the writing style of modern academic thinkers. The implications are obvious.
5wedrifid11yAnother reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer's mistakes could destroy the world (for example).
-2timtyler11yTo recap, the SIAI is funded by donations from those who think that they will help prevent the end of the world at the hands of intelligent machines. For this pitch to work, the world must be at risk - in order for them to be able to save it. The SIAI face some resistance over this point, and these days, much of their output is oriented towards convincing others that these may be the end days. Also there will be a selection bias, with those most convinced of a high p(DOOM) most likely to be involved. Like I said, not necessarily the type of organisation one would want to approach if seeking the facts of the matter. You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult - but it isn't a terribly convincing act. For the connections, see here [http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/2h99?c=1]. For protesting too much, see You're calling who a cult leader? [http://lesswrong.com/lw/4d/youre_calling_who_a_cult_leader/]
8Will_Newsome11yNo, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations. Hmuh, I guess we won't be able to make progress, 'cuz I pretty much wholeheartedly agree with Vladimir when he says: and Nick Tarleton when he says:
2wedrifid11y"This one is right" for example. ;)
-7timtyler11y
0[anonymous]11yAnother reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer's mistakes could destroy the world (for example).
0timtyler11yI didn't say anyone was "racing to be first to establish their non-cult-victim status" - but it is certainly a curious image! [deleted parent comment was a dupe].
0wedrifid11yOops, connection troubles then missed.
6orthonormal11yTim, do you think that nuclear-disarmament organizations were inherently flawed from the start because their aim was to prevent a catastrophic global nuclear war? Would you hold their claims to a much higher standard than the claims of organizations that looked to help smaller numbers of people here and now? I recognize that there are relevant differences, but merely pattern-matching an organization's conclusion about the scope of their problem, without addressing the quality of their intermediate reasoning, isn't sufficient reason to discount their rationality.
2khafra11yWill said "meta-contrarian," which refers to the recent meta-contrarians are intellectual hipsters [http://lesswrong.com/lw/2pv/intellectual_hipsters_and_metacontrarianism/] post. I also think you see yourself as trying to help SIAI see how they look to "average joe" potential collaborators or contributors, while Will sees your criticisms as actually calling into question the motives, competence, and ingenuity of SIAI's staff. If I'm right, you're talking at cross-purposes.
0timtyler11yReforming the SIAI is a possibility - but not a terribly realistic one, IMO. So, my intended audience here is less that organisation, and more some of the individuals here who I share interests with.
0Will_Newsome11yOh, that might be. Other comments by timtyler seemed really vague but generally anti-SIAI (I hate to set it up as if you could be for or against a set of related propositions in memespace, but it's natural to do here, meh), so I assumed he was expressing his own beliefs, and not a hypothetical average joe's.
7[anonymous]11yThis is an incredibly anti-name-calling community. People ascribe a lot of value to having "good" discussions (disagreement is common, but not adversarialism or ad hominems.) LW folks really don't like being called a cult. SIAI isn't a cult, and Eliezer isn't a cult leader, and I'm sure you know that your insinuations don't correspond to literal fact, and that this organization is no more a scam than a variety of other charitable and advocacy organizations. I do think that folks around here are over-sensitive to normal levels of name-calling and ad hominems. It's odd. Holding yourself above the fray comes across as a little snobbish. There's a whole world of discourse out there, people gathering evidence and exchanging opinions, and the vast majority of them are doing it like this: UR A FASCIST. But do you think there's therefore nothing to learn from them?
0[anonymous]11yI think the reasoning goes something like:

  • Existential risks are things that could destroy the world as we know it.
  • Existential risk charities work to reduce such risks.
  • Existential risk charities use donations to perform said task.
  • Giving to x-risk charities is conducive to saving the world.

Before looking at evidence for or against the effectiveness of particular x-risk charities our prior expectation should be that people who dedicate themselves to doing something are more likely to contribute progress towards that goal than to sabotage it.
2waveman7yThis is only true if the first-order effect of legalizing drugs (legality would encourage more people to take them) outweighs the second-order effects. An example of the second-order effects is the fact that the higher price encourages production and distribution. Or the fact that illegality allows them to be used as signals of rebellion. Legalizing drugs would potentially put distribution in the hands of more responsible people. And so forth. As the evidence-based altruism people have found, improving the world is a lot harder than it looks.
1Relsqui11yI actually disagree with this statement outright. First of all, ignoring the existence of a specific piece of evidence is not the same as being wholly ignorant of the workings of evolution. Second, I think that the use or abuse of data (false or true) leading to the mistreatment of humans is a worse outcome than the ignorance of said data. Science isn't a goal in and of itself--it's a tool, a process invented for the betterment of humanity. It accomplishes that admirably, better than any other tool we've applied to the same problems. If the use of the tool, or in this case one particular end of the tool, causes harm, perhaps it's better to use another end (a different area of science than genetics), or the same one in a different environment (in a time and place where racial inequality and bias are not so heated and widespread--our future, if we're lucky). Otherwise, we're making the purpose of the tool subservient to the use of the tool for its own sake--pounding nails into the coffee table. Besides--anecdotally, people who think that the genetic differences between races are important incite less violence than people who think that not being a bigot is important. If, as you posited, one had to choose. ;) I have a couple other objections (really? sex discrimination is over? where was I?) but other people have covered them satisfactorily. New here; can I get a brief definition of this term? I've gotten the gist of what it means by following a couple of links, I just want to know where the x bit comes from. Didn't find it on the site's wiki or the internet at large.
3ChristianKl11yX-risk stands for existential risk. It's about possible events that risk ending the existence of the human race.
0Relsqui11yGot it; thank you.
0NancyLebovitz11yWhat do you have in mind?
0Relsqui11yI'm not sure what "what" would refer to here. I didn't have an incident in mind, I'm just giving my impression of public perception (the first person gets called racist, and the second one gets called, well, normal, one hopes). It wasn't meant to be taken very seriously.

Noticing a social cluster takes social savvy and intelligence.

Therefore, showing that you can see a social cluster makes you look good.

Maybe going up a level in one of Yvain's hierarchies is showing off that you've discovered a social cluster? It goes together with distancing yourself from that cluster, but I don't know why.

I would like to announce that I have discovered the social cluster that has discovered the method of discovering all social clusters, and am now a postmodernist. Seriously guys, postmodernism is pretty meta. Update on expected metaness.

0Spurlock11yI'm confused. What point are you trying to make about postmodernism?

None, really. I just like how its proponents can always win arguments by claiming to be more meta than their opponents. ("Sure, everything you made sense within your frame of reference, but there are no privileged frames of reference. Indeed, proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act. I can't prove anything I just said, which proves my point, depending on whether you think it did or not.")

(I don't take postmodernism seriously, but some of the ideas are philosophically elegant.)

I can't prove anything I just said, which proves my point, depending on whether you think it did or not.

I would like this on a t-shirt.

0[anonymous]7yMmm, but isn't it true that "proving that there are privileged frames of reference requires a privileged frame of reference and is thus an impossible philosophical act."
0[anonymous]7yI think "you said" is missing here: "everything you made sense".

I have to admit, this has definitely been a hazard for me. As I said to simplicio a few months ago, I've had a sort of tendency to be "too clever" by taking the "clever contrarian" position. This gets to the point where I'm fascinated by those who can write up defenses of ridiculous positions and significantly increase my exposure to them.

I think part of what made me stray from "the path" was a tendency to root for the rhetorical "underdog" and be intrigued -- excessively -- with brilliant arguments that could defend ridiculous positions.

I have to wonder if I'm falling into the same trap with my "Most scientists only complain about how hard it is to explain their field because their understanding is so poor to begin with." (i.e., below Level 2, the level at which you can trace out the implications between your field and numerous others in both directions, possibly knowing how to trace back the basis of all specialized knowledge to arbitrary levels)

Does making fun of hipsters to seem cool make you a meta-hipster?

Very much related to The Correct Contrarian Cluster.

Also, we had a post specifically on countersignaling: Things You Can't Countersignal.

One more cluster I can think of is attitude to copyright law. Something like:

  1. Huh? It's illegal for me to copy that song? How totally stupid, I'm not harming anyone.
  2. Strong intellectual property law is necessary to encourage innovation and protect artists.
  3. Copyright law does more harm than good and needs to be reformed or abolished.

This is actually an interesting example, because I think if you look at the patterns of contrarian and meta-contrarian groups--that is, the people who tend to prefer those attitudes--you actually flip the second two, which breaks the pattern of contradiction and counter-contradiction. That is to say,

  1. (ordinary people who don't worry too much about this) Huh? It's illegal for me to copy that song? How totally stupid, I'm not harming anyone.
  2. (people who are really into the torrent community) It's not/shouldn't be illegal! Information wants to be free!
  3. (meta) If nobody paid for music, no one could live off being a musician. Torrenters are just making excuses for the convenience of breaking the law.
  4. (approaching sense) We need to reform or abolish copyright law and replace it with a system that pays artists fairly while working with, not against, new technology.

At least, that's my experience; take it with a grain of bias in favor of position four.

3DSimon11yThat's an interesting example because 1 and 2 arrive at the same conclusion, but 2 might still want to signal themselves as being contrary to 1 (i.e. "It's not just that it's not harming anybody, but sharing information around freely is actually helping everybody!")
4Relsqui11yI agree with that, and you make a good point--it suggests that being contrarian doesn't require disagreeing with the position as disagreeing with the reasoning. In a lot of cases it'll amount to the same thing, or at least come off as the same thing, but the above is one where it doesn't.
3loqi11yI basically object to copyright law because of 1. Clearly my opinion is transcendent, a least fixed point of meta-contrarianism.

Not everything is signaling.

The intellectually compulsive are natural critics. You see something wrong in an argument, and you argue against it. The natural stopping point in this process is when you don't find significant problems with the theory, and that is more likely for a fringe theory that other's don't bother to critique. When no one is helping you find the flaws, it's less likely you'll find them. You'll win arguments, at least by your evaluation, because you are familiar with their arguments and can show flaws, but your argument is unfamiliar to ... (read more)

doesn't follow politics / political junkie / avoids talking about politics due to mind-killing

This suggests that a common tactic (deliberate or otherwise) would be to represent your opponents as being the level below you, rather than the level above. For example this article, which treats Singularitarians as at level 1, rather than level 3, on

technology is great! -> but it has costs, like to the environment, and making social control easier -> Actually, the benefits vastly outweigh those.

Ironically, it's not that far off for SIAI, which is at level 4, 'certain technologies are existentially dangerous'

This seems to hold true for all the triads ... (read more)

3Will_Newsome11yExistentially dangerous doesn't mean the benefits still don't outweigh the costs. If there's a 95% chance that uFAI kills us all, that's still a whopping 5% chance at unfathomably large amounts of utility. Technology still ends up having been a good idea after all. Each level adds necessary nuance. Unfortunately, at each level is a new chance for unnecessary nuance. Strong epistemic rationality is the only thing that can shoulder the weight of the burdensome details [http://wiki.lesswrong.com/wiki/Burdensome_details]. Added: Your epistemic rationality is limited by your epistemology. There's a whole bunch of pretty and convincing mathematics that says Bayesian epistemology is the Way. We trust in Bayes because we trust in that math: the math shoulders the weight. A question, then. When is Bayesianism not the ideal epistemology? As humans the answer is 'limited resources'. But what if you had unlimited resources? At the limit, where doesn't Bayes hold?
0AlexanderRM6yI've noticed that quite often long before seeing this article. There seems to be a strong tendency for people to try to present themselves as breaking old, established stereotypes even when the person they're arguing against says exactly the same thing, and in some cases where the stereotype has only been around for a very short time (I recall one article arguing against the idea of Afghanistan being "the graveyard of empires", which in my understanding was an idea that had surfaced around 6 months prior to that article with the publication of a specific book). However, this does add an interesting dimension to it, with the fact that Type 2 positions actually were founded on a rejection of old, untrue beliefs of Type 1s, and Type 3s often resemble Type 1s. In fact I'd say that in every listed political example, the Type 2s who know about Type 3s will usually lump them in with Type 1s. This is, IMO, good in a way because it limits us from massive proliferation of levels over and over again and the resulting complications; instead we just get added nuance into the Type 2 and 3 positions.

The pleasure I get out of trolling atheists definitely has a meta-contrarian component to it. When I was a teenager I would troll Christians but I've long since stopped finding that even slightly challenging or fun.

7Scott Alexander11yYes, I often find myself tempted to do that too. Although I understand on an intellectual level that creationism is stupid, it is hard for me to get worked up about it and I certainly don't have the energy to argue with creationists ad nauseam. I do find myself angry whenever an atheist makes a superficial or stupid point in defense of atheism, or when they get too smug about how much smarter they are than creationists. My guess is that I have a sufficiently inflated view of my intelligence to be high enough that I have no need to differentiate myself intellectually from creationists, but I do feel a need to differentiate myself intellectually from the less intelligent sort of atheist.

As a mathematician, I offer my services for anybody who wants arguments (mathematical arguments, not philosophical ones) that 1+1 = 3. But beware: as a meta-contrarian mathematician, I will also explain why these arguments, though valid in their own way, are silly.

8[anonymous]9y1.3 + 1.4 = 2.7, which when reported to one significant figure...
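(The joke checks out mechanically; this is just a minimal illustration using Python's built-in rounding, not part of the original thread:)

```python
# Report each number to zero decimal places: both addends round to 1,
# but their sum rounds to 3 -- hence "1 + 1 = 3" via significant figures.
a, b = 1.3, 1.4
assert round(a) == 1
assert round(b) == 1
assert round(a + b) == 3  # 2.7 rounds up to 3
```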
3CronoDAS9yAs the "old" computer science joke goes, 2 + 2 = 5 (for extremely large values of 2).
1Manfred9yThe physicist-typical version is that 3=4, if you take lim(3->4).
4TobyBartels9yThis reminds me that the difference between a physicist and astronomer is that a physicist uses π ≈ 3 while an astronomer uses π ≈ 1.
7[anonymous]9yI remember someone in a newsgroup saying the average person is about one metre tall and weighs about 100 kilos, and when asked whether maybe they were approximately a bit too roughly, they answered “I'm an astronomer, not a jeweller.” (And physicists sometimes use π ≈ 1 too -- that's called dimensional analysis. :-) The problem is when the constant factor dimensional analysis can't tell you turns out to be 1/(2π)^4 ≈ 6.4e-4 or stuff like that.)
1RobinZ10yYou know, I am seized with a sudden curiosity. You have arguments such that 1 is still the successor of 0 and 3 is still the successor of the successor of 1, where 0 is the additive identity?
4TobyBartels10yAh, now I have to remember what I was thinking of back in September! Well, let's see what I can come up with now.

One thing that I could do is to redefine every term in the expression. You tried to forestall this by insisting [Note: I originally interpreted this as "3 is still the successor of 2" for some dumb reason.] But you never insisted that 2 is the successor of 1, so I'll redefine 2 to be 1 and redefine 3 to be 2, and your conditions are met, while my theorem holds. (I could also, or instead, redefine equality.) But this is silly; nobody uses the terms in this way.

For another method, I'll be a little more precise. Since you mentioned the successor of 0, let's work in Peano Arithmetic (first-order, classical logic, starting at zero), supplemented with the axiom that 0 = 1. Then 1 + 1 = 3 can be proved as follows:

  • 1 + 1 = 1 + S(0) by definition of 1;
  • 1 + S(0) = 1 + S(1) by substitution of equality;
  • 1 + S(1) = 3 by any ordinary proof in PA;
  • 1 + 1 = 3 by transitivity of equality (twice).

Of course, this is also silly, since PA with my new axiom is inconsistent. Anything in the language can be proved (by going through the axiom that 0 = S(n) is always false, combining this with my new axiom, and using ex contradictione quodlibet).

Here is a slightly less silly way: modular arithmetic is very useful, not silly at all, and in arithmetic modulo 1, 1 + 1 = 3 is true. But however useful modular arithmetic in general may be, arithmetic modulo 1 is silly (for roughly the same reasons that an inconsistent set of axioms is silly); everything is equal to everything else, so any equation at all is true. In other words, arithmetic modulo 1 is trivial. You can get arithmetic modulo b by replacing the Peano axiom that 0 = S(n) is always false with the axiom that 0 = b and b − 1 additional axioms stating (altogether) that a = b is false whenever (in ordinary arithmetic) 0 < a < b.
But you could instead add an arbitrary axiom of the form b = c (and another
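The modulo-1 point above is easy to check mechanically. Here is a minimal sketch in Python; the `eq_mod` helper and its name are mine, not from the comment:

```python
def eq_mod(a, b, m):
    """'Equality' in arithmetic modulo m: a and b count as equal
    iff they are congruent modulo m."""
    return (a - b) % m == 0

# In arithmetic modulo 1 every equation holds -- the structure is trivial:
print(eq_mod(1 + 1, 3, 1))   # True: "1 + 1 = 3" holds mod 1
print(eq_mod(0, 42, 1))      # True: so does any other equation

# For any modulus b > 1 the triviality disappears:
print(eq_mod(1 + 1, 3, 2))   # False: 1 + 1 is congruent to 0, but 3 to 1, mod 2
```

Every residue is 0 modulo 1, which is exactly the "everything is equal to everything else" triviality described above.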
3RobinZ10yI wrote "successor of the successor of" - 3 is the successor of 2, which is the successor of 1. But I understand that this was a typo. :P But yes, I enjoyed that. Thank you.
2TobyBartels10yHa, that would be a reado! But seriously, I should have read that again. I got it in my head that you had done this while I spent time planning my response and forgot to verify.

I have a strong urge to signal my difference from the LessWrong crowd. Should I be worried that all my positions may be just meta^2 contrarianism?

people get deep personal satisfaction from arguing the positions even when their arguments are unlikely to change policy

I very much wish that intellectual debate were more effectiveness-oriented in general. I myself try to refrain from arguing about things that don't actually matter or that I can't hope to change (not always successfully).

I have a style question. Are there less grating ways to write gender neutral texts?

I, to my great surprise, was irritated to no end by "ey" and "eir". I always stumbled when reading it. I dislike it and think "he/she" or "they" may be more natural and cause less stumbling when reading the article.

So far, I am against all the invented gender-neutral pronouns. Most of them sound strange ("ey" and "eir" look like a typo or a phonetic imitation of a deep Southern accent; "xe" and "xir" use an "x" sound and are simply painful to pronounce).

As of now, I am willing to sacrifice gender neutrality in texts in favor of readability.

3g_pepper6yTechnically, "he" is perfectly acceptable for gender-neutral texts. Merriam-Webster states that "he" can be "used in a generic sense or when the sex of the person is unspecified". However, to avoid the appearance of non-neutral text, I usually use "he/she", "his/her", etc. "They" or "their" can be used, but these are not really appropriate when referring to a singular antecedent, so I quite often use "his/her" rather than "their". Another technique that you see frequently and that I sometimes use is to alternate between "he" and "she". As long as these more or less balance out in your text, you should be OK from a neutrality standpoint. Any of these alternatives is preferable IMO to "ey" and "eir".
0[anonymous]6yThe Eir of Slytherin has opened the Chamber of Socrates...
2MarkusRamikin6yIf you dislike zes, xes and eys and find them horrible little abominations that have no place among good and decent words, and suspect they were meant to trick us into unknowingly saying things that count as worship of Cthulhu... ...oh, wait, that's me. Let's try again. If you dislike zes, xes and eys, then using "they" seems to me the best solution if you care about being gender-neutral.

That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

This is an interesting hypothesis, but applying it to LessWrong requires that the LW community has a consensus on how people rank by intelligence, that that consensus be correct, and that people believe it is correct. My impression is that everybody thinks they're the smartest person in the room, and judges ... (read more)

3Vaniver6yI think this is generalizing from one example; I've certainly met people who didn't think they were the smartest person in the room, either because they're below median intelligence and reasonably expect that most people are smarter than them or because even though they're above median they've met enough people visibly smarter than them. (I've been in rooms where I wasn't the smartest person.) I suspect that people may not be very good at ranking, and are mostly able to put people in buckets of "probably smarter than me," "about as smart as me," and "probably less smart than me" (that is, I think the 'levels below mine' blur together similarly to how the 'levels above mine' do). I also suspect that a lot of very clever people think that they're the best at their particular brand of intelligence, but then it's just a question of self-awareness as to whether or not they see the reason they're picking that particular measure. I can recall, as a high schooler, telling someone at one point "I'm the smartest person at my high school" and then having to immediately revise that statement to clarify 'smartest' in a way that excluded a friend of mine who definitely had more subject matter expertise in several fields and probably had higher g but had (I thought, at least) a narrower intellectual focus.

Ebola has offered a recent nice example of the triad. Mainstream: "be afraid, be very afraid"; contrarian: "don't be so gullible, why, hardly any more people have died from Ebola than have died from flu/traffic accidents/smoking/etc"; meta-contrarian: "what is to be feared is a super-lethal disease escaping containment & killing many more millions than the normal flu or traffic death toll".

4[anonymous]6yMeh. Mankind survived the mad cow, the SARS, the bird flu and the swine flu hardly scathed; why should it be different this time around?
6gwern6y1. human deaths are not irrelevant; a million deaths != no deaths. 2. pandemics are existential threats, which can drive species extinct; I trust you understand why 'mad cow, the SARS, the bird flu and the swine flu' are not counter-arguments to this point.
0[anonymous]6yNo, I don't. EDIT: Anthropics?
1Confusion3yIn that triad the meta-contrarian is broadening the scope of the discussion. They address what actually matters, but that doesn’t change that the contrarian is correct (well, a better contrarian would point out that the number of deaths due to Ebola is far less than any of those examples and that Ebola doesn’t seem a likely candidate to evolve into something causing an epidemic) and that the meta-contrarian has basically changed the subject.

BTW, I'm not actually that intelligent (IQ about 92 or 96 if I remember right) but pretending to adopt a meta-contrarian position might be a useful social tactic for me. Any advice from those who know the area on how to use it?

Advocate for the obvious position using the language and catchphrases of its opponents. I remember once saying, "Well, have we ever tried blindly throwing lots of money at the educational system?" Everyone agreed that this was a wise and sophisticated thing to say, even though I was by far the least knowledgeable person in the room on the subject and was just advocating the default strategy for improving public schools. Other examples:

"Greed is good."

"The chief virtue of a $professional is $vice."

"I'm a tax-and-spend liberal, and I think there should be much more government regulation. For example, the sad truth is that the realities of medical care require the existence of death panels, and I'd rather have them run by government bureaucrats than corporate accountants."

2glenra9yKansas City was one of the more notable examples of having tried that; it didn't work out well: http://www.cato.org/pubs/pas/pa-298.html
1epursimuove7yThis seems quite unlikely given your reasonably high-quality posting history. Is this number from a professionally administered test? Do you have a condition like dyslexia or dyscalculia that impairs specific abilities but not others?
1Carinthium7yI have Aspergers Syndrome, which affects things like this. Probably has something to do with it.
[-][anonymous]11y 6

I wonder if this means we should place more weight on opinions that don't easily compress onto this contrarianism axis, since they're less likely to be rooted in signalling/group affiliations, and more likely to have a non-trivial amount of thought put into them.

0AlexanderRM6yAnother thing to take away from this is that we should be wary of any system that categorizes opinions based on sociology rather than direct measures of their actual truth. Contrarians and Meta-Contrarians both have similar explanations of why they go for their levels, by pointing out the flaws with the lower level.

I think about the counter-signaling game a bit differently. Consider some question that has a binary answer, e.g. a yes/no question. Natural prejudices or upbringing might cause most people to pick, say, yes. Then someone thinks about the question and for reason r1 switches to no. Someone else who agrees with r1 then comes up with reason r2, and switches back to yes. Then r3 causes a switch back to no, ad infinitum.

Even though the conclusion at each point in the hierarchy is indistinguishable from a conclusion somewhere else in the hierarchy, th... (read more)

1ChristianKl11yNassim Taleb makes an argument that he believes in God by default, and he is widely seen as a rational person. I don't think it makes sense to see his position as lower in the hierarchy than people who believe based on the design argument.
3Matt_Simpson11yHis belief by default is based on some sort of argument, not unthinking acceptance of whatever his parents told him. In other words, his "default belief" is not the same as my hierarchy's "default belief."
0nick01200011ySo, where would "Believe because of generalised Pascal's Wager" be on your hierarchy? ;)
0Matt_Simpson11yI'm not sure what the "generalized" is doing, but normal pascal's wager would probably be right before or right after the design argument.

6) Worth a footnote: I think in a lot of issues, the original uneducated position has disappeared, or been relegated to a few rednecks in some remote corner of the world, and so meta-contrarians simply look like contrarians. I think it's important to keep the terminology, because most contrarians retain a psychology of feeling like they are being contrarian, even after they are the new norm. But my only evidence for this is introspection, so it might be false.

Deserves MORE than a footnote.

[-][anonymous]9y 4

conservative / liberal / libertarian

Liberal and libertarian don't mean the same thing in Europe as in America; keep that in mind when writing for international audiences. (Very roughly speaking, a European liberal is a moderate version of an American libertarian, and an American liberal is a moderate version of a European libertarian.)

7steven04619yDo European and American liberals advocate different policies, or is it just that the political spectrum in both places is different so the same policies appear at different relative positions? As far as I can tell, while this was true in the 19th century, Europe has almost completely adopted the American use of the word. Here are some examples (is there a way to get markdown to work with links that end in parentheses?): * http://libertaer.wordpress.com/themen/liberalismus/libertarismus/ * http://en.wikipedia.org/wiki/United_Kingdom_Libertarian_Party * http://en.wikipedia.org/wiki/Libertarian_Party_(Netherlands) * http://en.wikipedia.org/wiki/Libertarian_Movement_(Italy)
7[anonymous]9yIf I understand correctly, in America liberal is essentially a synonym of ‘(moderate) left-winger’, and hence the antonym of conservative or ‘(moderate) right-winger’ wrt social values, though they often are in favour of greater economic regulation (e.g. the US Democratic Party), whereas in Europe liberals are those who favour greater economic freedom, though they are often conservative wrt social values (e.g. the Italian centre-right). Among mainstream parties, it appears to me to be the case both in Europe and in America that the political spectrum concentrates along a line with positive slope in the Political [http://www.politicalcompass.org/euchart] Compass [http://www.politicalcompass.org/usstates?ak=on&az=on&il=on&ny=on], i.e. those who value free capitalism also value traditional social values and those who value economic equality also value social freedom (and liberal appear to refer to different directions along that line in Europe and in America), but Europe has (or should I say “used to have”?, I was not aware of the parties you mentioned) fewer extremists east of that line and America has fewer extremists west of it, AFAICT. http://en.wikipedia.org/wiki/Libertarian_Party_%28Netherlands%29 (28 and 29 being the hex codes for ( and ) respectively).
2David Althaus9yFWIW I'm from Germany and tend to agree with the above comment.
-2MugaSofer8yWould have upvoted just for this.
2wedrifid8yAlternately, note that the escape character in markdown is "\". Putting that before the (first) closing parenthesis works fine.
4thomblake9yIn short, 'liberal' in the US is merely the opposite of 'conservative', matching the usage of "He was liberal with his praise"; 'liberal' in Europe for the most part retains the meaning specified by 'classical liberal' in the US - "in favor of individual liberty".
5steven04619yIt looks to me like it means something more specific than just the opposite of "conservative". For example, this article [http://en.wikipedia.org/wiki/Modern_liberalism_in_the_United_States] has a header "opposition to socialism". I'm aware that US liberals are less conservative than the US spectrum and that European liberals are more in favor of individual liberty than the European spectrum, but before concluding they're different, you'd first need to rule out the hypothesis that it's because the US spectrum is more conservative and the European spectrum is less in favor of individual liberty. ETA: I don't think this is the whole explanation, but I think it's a large part of the explanation.
3thomblake9yThe thing is, "less conservative" doesn't actually mean anything in the US. "Conservative" and "liberal" are just pointers to the Republican and Democratic parties, respectively, which in turn are semi-permanent coalitions of people with vastly different (and often incompatible) ideologies, that end up being used for color politics [http://wiki.lesswrong.com/wiki/Color_politics]. There isn't really a spectrum, but you can pretend there is - if you have 390 Green beliefs and 100 Blue beliefs, then you're clearly a Moderate Green (aquamarine?). Whereas in most of Europe, parties actually represent ideologies to some extent, and so ideological terms don't get corrupted so much in favor of talking about the platform of a party. This is often because temporary coalitions happen between political parties, instead of within them. England is a notable exception to this - it has more-or-less two parties, and they pretend to fall on a "political spectrum" like the US parties; thus, they even tend to echo US meaningless political rhetoric.
5[anonymous]9yAt least in Italy, “It's a complete mess” would be a more accurate (though less precise) description than that.
0thomblake9yAgreed.
1dbaupp9yFWIW, in Australia, there are two main political parties, Liberal and Labor. The Liberals are reasonably close to the Republicans (from what I can glean of US politics), and "liberals" (US meaning) seem to align with Labor or one of the other parties. A backslash in front of the offending punctuation should fix it.
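The percent-encoding workaround mentioned in this subthread (hex 28 and 29 for the parentheses) can be reproduced with Python's standard library; the `markdown_link` helper name is my own, used here only for illustration:

```python
from urllib.parse import quote

def markdown_link(text, url):
    """Build a markdown link, percent-encoding parentheses in the URL
    so the closing ')' can't be mistaken for the end of the link."""
    return "[%s](%s)" % (text, quote(url, safe=":/"))

link = markdown_link(
    "Libertarian Party (Netherlands)",
    "http://en.wikipedia.org/wiki/Libertarian_Party_(Netherlands)")
print(link)
# -> [Libertarian Party (Netherlands)](http://en.wikipedia.org/wiki/Libertarian_Party_%28Netherlands%29)
```

`quote` leaves letters, digits, and `_.-~` alone, so only the parentheses change; the backslash-escape fix mentioned above is the other option, depending on the markdown dialect.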

Thus Eliezer's title for this mentality, "Pretending To Be Wise".

Have we broadened that term to refer to... well, lowercase pretending-to-be-wise in general? In the original post, he used it specifically to refer to those who try to signal wisdom by neutrality. (Though I did notice he used it in the broader sense in HPMoR. Is it thus officially redefined?)

0Scott Alexander11yYeah, I was thinking of the HPatMOR usage. It's a good phrase, and it would be a shame not to use it.

This triad was missed:

"Muslims are terrorists!" / "Islam is a religion of peace." / "Religion is problematic in general but Islam is the worst and I can back that claim up with statistics I read on Sam Harris' blog."

4ChristianKl4yGwern's TERRORISM IS NOT ABOUT TERROR [https://www.gwern.net/Terrorism%20is%20not%20about%20Terror] seems to me like a better candidate for the third.
0username25yFor the third slot I'd say "religious squabbles are the wrong problem to be thinking about".

It's always a bit of a shock when you're the contrarian and you discover someone meta-contrarianizing you on the outside lane. For example, here's an interesting triad I just recently became aware of:

Base: monogamy is assumed without discussion, cheating is the end of a relationship unless maybe if you confess and swear to never do it again.

Contrarian: open/poly relationship is agreed upon after discussion, it's not cheating if there's no lying.

Meta-con: non-exclusivity is assumed, no discussion. Cheating is whatever, just don't tell me about it.

I held the... (read more)

There is a neat paper on this by Feltovich, Harbaugh, and To called "Too Cool for School? Signaling and Countersignaling."

http://www.jstor.org/pss/3087478

1toro10yA little more from Harbaugh's home page: http://www.bus.indiana.edu/riharbau#CS. Includes the Economist puff piece and an unedited version with some fun-but-unconvincing examples. Fun fact: Harbaugh also made a searchable Chinese dictionary. (zhongwen.com)

I'm a little confused, what purpose does this distinction serve? That people like to define their opinions as a rebellion against received opinion isn't novel. What you seem to be saying is: defining yourself against an opinion which is seen as contrarian sends a reliably different social signal to defining yourself against an opinion which is mainstream, is that a fair assessment? Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

Moreover, if order-of-contrariness doe... (read more)

The number of global warming skeptics who jumped straight from "it's not happening" to "well we didn't do it" to "well we can't do anything about it without doing more harm than good" should also...give us a bit of pause.

Actually, that move is perfectly consistent with real skepticism applied to a complex assertion.

To see why, let's consider a different argument. Suppose a True Believer says we should punish gays or disallow gay marriage "because God hates homosexuality". You and I are skeptical that this assertion is rationally defensible so we attack it at what seems like the obvious first link in the logical chain. We say "I doubt that god exists. Prove to me that god exists, and then maybe we'll consider your argument." At this point you can divide the positions into:

"god hates X"/god doesn't exist

Now let us suppose TB actually does it. He does prove that god exists. Does this mean that we skeptics immediately have to accept his entire chain of reasoning? Of course not! We jump to the next weak link. To establish the original claim, one would need to prove god exists and is benevolent and wrote the bible and meant t... (read more)

4Mercy11yThis is a great point that's making me revise my position on some right wing commentators. Still, I'm struggling to think of any actual examples of this behavior in action: we don't actually tell religious people who believe wrong things "well god ain't real deal with it". We point out how their assertions are incompatible with their own teachings, and with the legal system, and scientific findings etc. We don't keep all the flaws we see in their position back in reserve. Moreover most of the serious commentators on the skeptical side of the issue argued only one of the points in question, whether it was the statistics showing warming or the economics implied by it or (cue rim-shot) sunspots, it's only journalists and politicians who skipped from one to the other, which is where I got the impression they'd only looked at the issue long enough to find a contrarian position.
3glenra9yIf you've ever said or thought "Okay, just for the sake of argument, I'll assume your point X is correct..." you were holding a position back in reserve. One typical example is arguing with a religious nut that what he's saying is incompatible with the teachings in his own holy book. Suppose he wins this argument (unlikely, I know, but bear with me...) and demonstrates that you were mistaken and no, his holy book really does teach that we should burn scientists as witches. Do you immediately conclude that yes, we should burn scientists as witches? No, because you don't actually hold in high esteem the teachings in his holy book.

Mercy:

Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

However, it seems to me that such a mainstream does exist. Compared to the overall range of ideas that have been held throughout the history of humanity, and even the overall range of ideas that I believe people could hold without being crazy or monstrous, the range acceptable in today's mainstream discourse looks awfully narrow to me. It also seems to me very narrow by historical standards -- for example, when I look at the 19th century books I've read, I see an immensely greater diversity of ideas than one can see from the modern authors that occupy a comparable mainstream range. (This of course doesn't apply to hard sciences, in which the accumulation of knowledge has a monotonic upward trend.)

Of course, like every human society, ours is also shaken by passionate controversies. However, most of those that I observe in practice are between currents that are overall very similar from a broader perspective.

0Mercy11yWell I can see that in certain areas, but it depends on where you look. The range of held opinions on the construction of gender, criminal punishment and both the nature and the contents of history is much broader than one hundred years ago. The range of opinions on the morality of war is far narrower. In any case, I meant mainstream in the sense that top 40 is mainstream, not in the sense that music is mainstream. Perhaps orthodoxy would be a better word? In fashion there is usually a single current orthodoxy about how people should dress, so it's easy to identify these circles of heterodoxy and reactionism. Other issues show multiple competing orthodoxies, each of which appears contrary to the other.
8Vladimir_M11yMercy: Frankly, I disagree with that statement so deeply that I'm at a loss how to even begin my response to it. Either we're using radically different measures of breadth, or one (or both?) of us has had a grossly inadequate and unrepresentative exposure to the thought of each of these epochs. Yes, certain ideas that were in the minority back then have been greatly popularized and elaborated in the meantime, and one could arguably even find an occasional original perspective developed since then. However, it seems evident to me that by any reasonable measure, this effect has been completely overshadowed by the sheer range of perspectives that have been ostracized from the respectable mainstream during the same period, or even vanished altogether. But in the matters of opinion, there is also a clearly defined -- and, as I've argued, nowadays quite narrow -- range of orthodoxy, and it's common knowledge which opinions will be perceived as contrarian and controversial (if they push the envelope) or extremist and altogether disreputable (if they reach completely outside of it). I honestly don't see on what basis you could possibly argue that the orthodoxy of fashion is nowadays stricter and tighter than the orthodoxy of opinion.
0CronoDAS11yTwo hundred years ago, then?
3Vladimir_M11yTwo hundred years ago, the institutions were very different, and there was much less total intellectual output than a century ago, so it's much harder to do a fair comparison because it's less clear what counts as mainstream and significant. However, the claim is still flat false at least when it comes to criminal punishment. In fact, in the history of the Western world, the period of roughly two hundred years ago was probably the very pinnacle of the diversity of views on legal punishment. On the one extreme, one could still find prominent advocates of brutal torturous execution methods like the breaking wheel (which were occasionally used in some parts of Europe well into the 19th century), and on the other, out-and-out death penalty abolitionists. (For example, the Grand Duchy of Tuscany abolished the death penalty altogether in 1786, and it was abolished almost completely in Russia around the mid-18th century.) One could also find all sorts of in-between views on all sides, of course. Admittedly, one would be hard-pressed to find someone advocating a prison system of the sort that exists nowadays, but that would have been economically impossible back in those far poorer times (modern prisons cost tens of thousands of dollars per prisoner-year, not even counting the cost of building them). Depending on what exactly is meant by "the nature and the contents of history," one could certainly point out many interesting perspectives that could be found 200 years ago, but not today anymore. That, however, is a very complex question. As for gender, well, I'd better not go into that topic. I'll just point out that people have been writing about these matters since the dawn of history, and it's very naive (though sadly common nowadays) to believe that only our modern age has managed to achieve accurate insight and non-evil attitudes about them.
1wedrifid11yDawn of history? Now I'm imagining uncovering writing on the wall of caves: "Why women make better hunters" and expressing indignation at under-representation of females in cave paintings of battles.
0Vladimir_M11yWhat Constant said. I meant "history" in the narrow technical sense of the word, i.e. the period since the invention of writing.
-1[anonymous]11yYou're mixing up history with prehistory.
0[anonymous]11yNo I'm not. The counterfactual referred to writing, writing which incidentally happened to be a commentary on the quality of the historical record keeping. (It is not my position that the counterfactual is particularly likely - if anything the reverse.)
0Mercy11yPeople still argue those things nowadays though. Any remotely salacious criminal story has hacks crawling out of the woodwork to gloat about how the perpetrators will be raped, and the current Attorney General has deliberately delayed introduction of mechanisms to clamp down on the practice. For a long time one of the most popular proposals out of Britain's "let the public suggest policies" initiative was to send paedophiles to Iraq as human mine detectors. And you're missing the major reason for the increase in variety of criminal punishments, which is the increase in the number of non-violent crimes. I don't think I'll run too much risk of embarrassing myself if I suggest that mephedrone clinics weren't considered an alternative to jail time 100 years ago. As to gender, I was under the impression that radically post- and anti- gender views like those expressed by Julie Bindel and Donna Haraway were novel; if there are 19th-century authors with similar viewpoints I'd be happy to hear them. Again this is an issue where I don't see any dead viewpoints, so even small increases in radical-ness increase the general width of ideas held. It strikes me though from the prison issue that our differences are mostly over what qualifies a belief as respectable. There are many beliefs that are no longer taken seriously by liberal academics; if that's what you mean by mainstream then I agree the 19th century showed a much broader range of opinion than ours. Getting back to my original point, just about everything in the OP is within the range of orthodoxy of public opinion, and everything except "obama is a muslim" within the academic one, and yet they can be modeled as contrary to one another.
1HumanFlesh11yMephedrone clinics? Do you mean methadone clinics?
4Scott Alexander11yYou're right, the examples were pretty cherry-picked. My point was to show that, although we tend to celebrate our failure to be lured into holding contrarian positions for the sake of contrarianism, this can itself be a trap that we need to watch out for. I think the idea of meta-contrarian-ness is novel in a way the idea of contrarian-ness is not.
3cousin_it11yWhy do you want to define "genuine meta-contrarianness" based on correctness/merit? It will cause endless flamewars. Yvain's recipe, on the other hand, is relatively uncontroversial.
2Mercy11yAs far as I can see, it's uncontroversial because it doesn't add any information in the first place, compared to just including the norm in question when describing something as contrarian, which takes a similar number of words, less effort and is less subjective. But I'm not suggesting double contrarian opinions must be better than unrecontructed ones, rather that if they are distinguishable they should have different bottom lines: they shouldn't just be better arguments for the same thing. We see this in the race example: modern genetics recognises very different ethnic distributions to those of classical racialist science, or modern derivations thereof.
4cousin_it11yI think the post was a guideline to help you catch yourself when you write the bottom line of your position for signaling reasons (contrarian or meta-contrarian). If you never experience that problem, more power to you. I do have it and the post was helpful to me.
0Mercy11yHah, I'm sure I do, I guess the point then is that just because your position is counter-revolutionary, doesn't mean you haven't adopted it out of rebelliousness. Um, assuming that revolutionary zeal as a potential source of bottom lines was taken for granted. I think I knew that already, if only through hatred of South Park style antagonistic third way-ism, and so have spent these last few responses training on straw.

Are there also meta-meta-contrarians?

6Scott Alexander11yMaybe it's context dependent. If I am hanging around a lot of contrarians, I usually end up looking for a meta-contrarian position. If I'm hanging around a lot of meta-contrarians who I think aren't as smart as me, and those meta-contrarians are being really smug and annoying, I become meta-meta-contrarian. I fondly remember a period of my life when I went to my college's Objectivist club every week to argue vehemently against everything they said. I think that qualifies as meta-meta-contrarian if anything does.
4multifoliaterose11yThe game theory goes as deep as people are inclined to take it. In practice, I'm not sure.
2whpearson11yI'm somewhat of one for peak oil. What oil problem? > peak oil "we are doomed" > synthetic fuel created using nukes/humanity will find a way > likely to be a bumpy ride as alternatives take a while to ramp up in scale
2steven046111yI was wondering the same thing. Scott Aaronson and Cosma Shalizi come to mind.
2Liron11ySince he doesn't present sophisticated meta-meta-arguments, to me it just seems like Scott's beliefs are harder to shift from contrarian to meta-contrarian.
3steven046111yThat sounds right, though maybe as you go more meta it just gets harder to distinguish between any level and the level two levels down.
0Will_Newsome11yI don't mean to toot my own meta (especially as metaness isn't directly correlated with truth), but me with respect to cryonics [http://lesswrong.com/lw/2a8/abnormal_cryonics/]. Carl Shulman? Michael Vassar? Most people who think about things? In general, people who think about any given topic more than the average LW poster are likely to be meta-LW-contrarian on that topic, for better or worse.

A triad I just thought of today which seems definitely true:

Doesn't care to make small mistakes that might make him/her look somewhat silly / Cares to avoid every level of mistake to acquire the cleanest reputation possible / Doesn't care to make small mistakes that might make him/her look somewhat silly.

A follow-up thought: This pattern seems to also work for life decisions, and not only for positions in debates or fashion choices. For example: A few years ago, I almost didn't take the offer to do a PhD at an Ivy League school, as opposed to a less highly ranked school, and to live in a mainstream popular city, as opposed to the middle of nowhere, because of my contrarianism. And then my meta-contrarianism kicked in, and I took the offer. I'm happy with the choice, but I do every once in a while have to remind myself of the fact that I consciously decided... (read more)

I think the whole thing about dwelling on the negatives of our society is because there's a deeper level to the concerns. Like a sort of collective lack of something, a lack of Romantic relations to nature and society and such things, but without knowing where such things can be found. Just a basic yearning that shows up in the extremes of our modern society, which manifests in media and over-romanticized movies about how 'spiritually connected' Indians were, or the peace of Buddhists, or the good old ways; you know what I mean. The movie Avatar is basical... (read more)

This is exactly how history is studied.

Historiography is the study of how historical opinions have changed over time. It begins with the Orthodox viewpoint, which is the first generally accepted account of the events. It is generally very biased because it comes about directly after the event has occurred, when feelings still run strong.

This Orthodox viewpoint is then contested by several Revisionist viewpoints, which tend to reach wildly different conclusions based upon new evidence in order to sell books (historical scandals are quite good for that)... (read more)

I know this is an old post, but I wanted to ask a couple questions.

Can you clarify if this meta-contrarian hypothesis of human psychology makes predictions that distinguish it from other explanations for holding an idea to be true or communicating it to be true? I ask since from reading some of the comments, the classification of these triads seems like a fluid thing, and I can't think of anything offhand that might be used to constrain them. If you want to use your hypothesis merely to talk about the reasons for why confidence is assigned, do you think ... (read more)

Great post. I've had a similar idea for a while but didn't realize just how far it could be generalized.

I especially noticed this idea while reading C.S. Lewis' The Screwtape Letters, which seems to posit the hierarchy as being something like "Belief in Christianity because of social pressures / Disbelief in Christianity because who needs social pressures / Belief in Christianity because of comprehension of its 'true meaning' (or something)".

I guess when there are potentially a lot of layers of meta-contrarianism like in Matt_Simpson's example, t... (read more)

Great article. However, why do you call them "meta-contrarian", instead of "anti-contrarian"? I would not call something "meta-" unless it adds additional dimensions to the given context. For example, "meta-theory" is not about disputing particular theories but something totally different.

1Scott Alexander11yI interpret meta- to mean "one level above"; thus for example Douglas Hofstadter's "meta-agnostic", someone who is agnostic about agnosticism, and your own mention of "meta-theory", a theory about a theory. I use "meta-contrary" because it's a position deliberately taken to be contrary to a position deliberately taken to be contrary.

I advocate majoritarianism on most topics related to science.

Including nutrition.

I'll take Gary Taubes seriously when the NIH does.

3Kutta11yAll I can say is that actual studies' results, and science generally, are what make it rationally possible to discern whether someone is pulling a Semmelweis or is just a quack. You can definitely do better than the outside view if you're willing to expend at least some personal effort to investigate. Especially if a quick meta-glance (you can think of Hanson here, among others) at medicine suggests that governmental medical institutes and guidelines are a lot less trustworthy than what is usual in non-medical domains.
0CronoDAS11yYeah, they also tend to be inconsistent over time. Consider: butter or margarine? The mainstream view isn't very solid, but the non-mainstream views don't seem like they're any better either. (If they were better, then why aren't they mainstream yet?)
4Relsqui11yI think the butter thing, like a lot of very specific dietary concerns, is hard to settle popularly because the answer may not be the same for everyone. Carbohydrate intake is another good example of that phenomenon (I hesitate to even call it a "problem"--the problem is the alleged need for a universal answer). A lot of people who live relatively sedentary lifestyles take in a lot more carbs than they use, and might reasonably be advised to cut back. That does not make it good advice for, say, a bike commuter who's actually getting a reasonable amount of cardiovascular exercise.
2gwern6yMainstream: 'correlation=causation' (almost all of nutrition research); contrarian: 'correlation!=causation' (Taubes); meta-contrarian: 'ah, but really, correlation~=causation!'
6gwern4yComputer chess: 'AIs will never master tasks like chess because they lack a soul / the creative spark / understanding of analogies' (laymen, Hofstadter etc); 'AIs don't need any of that to master tasks like chess but computing power and well-tuned search' (most AI researchers); 'but a human-computer combination will always be the best at task X because the human is more flexible and better at meta-cognition!' (Kasparov, Tyler Cowen).
1Vaniver4y3 has been empirically disproven at this point, I believe?
2arundelo4ygwern on "centaurs" (humans playing chess with computer assistance): [https://news.ycombinator.com/item?id=14616478]
0Decius4yThere will always be tasks at which better (Meta-)*Cognition is superior to the available amounts of computing power and tuned search protocols. It becomes irrelevant if either humans aren't better than easily created AIs at that level of meta, or AIs go enough levels up that it becomes a failure mode.

At the university I have met two types of professors. The first type explains a new topic in a very academic and strictly professional way. The second type explains the topic in plain language, in a very easy-to-understand way. If I look at it from the perspective of countersignaling, I would describe the first type as contrarian and the second type as meta-contrarian. My observation is also that the contrarian is usually a younger professor in his late thirties or forties, while the meta-contrarian is usually a professor in his sixties and up.

The third "related to" link is a bit broken: points to a Google redirect instead of the article itself.

What about the following triad?

life's good / nihilistic approach to life / I want to become a god on earth.

A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

What does it mean when people hold difficult-to-understand moral opinions?

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly.

Right - and let's not forget that it takes out of circulation a load of persistent parasites which have evolved over hundreds of generations to exploit your genome, which might otherwise find and attack your relatives and descendants.

This post inspired me to write an article on Feminism, criticism of feminism, and contrarianism over at FeministCritics.org.

2fburnaby8yI identified very strongly with your article. I feel exactly the same way and suspect the same things are going on in my brain when I hear really bad feminist arguments. They're somehow more annoying than really bad (even worse!) gender regressive arguments. This has led me to question whether I should indulge myself in making my contrarian, actually-gender-progressive, arguments against what I perceive as mainstream opinion (feminism). Feminism really isn't nearly as mainstream as it feels to me. I'm just privileged as a member of the intellectual progressive elite - I got to go to good schools, I'm a professional, I select progressive friends and grew up with somewhat progressive parents. Yes, it was a revelation when I realized how many problems there are with mainstream feminism, but I'm also a product of a pretty rare selection bias in a society that's actually still sexist. I actually buy the feminist narrative that there is still a lot of (level 1) sexism in our society, even though I tend to only see the problems with (level 2) mainstream feminism. But there is a problem here for a consequentialist. No matter how clearly I put my criticisms, they're only understood as "some reactionary rationalization". People don't grasp the nuance and count one more head on the wrong side. It seems like it will lead to better consequences if I spend a majority of time "me too"ing mainstream feminism and biting my tongue about most of the issues in it. Or at least building more explicit feminist cred before pointing out some of the problems. So this leads me to a question for you: why do you think that, in the face of your realization about why you criticize what you criticize, continuing to do it is the right thing to do?
-2MugaSofer8yAt risk of sounding tautological, that depends on whether it's the right thing to do. If you have identified a systematic bias, try to remove it, then reevaluate your choices. You may still make he same ones; you cannot deduce reality from your bias. But you cannot know that if you're still biased.
-1NancyLebovitz9yI agree with roshni-- it would be better if you made your criticisms as you see them rather than as levels of a signalling game. From my point of view, the PUA believers have the advantage at LW, and being gently told, no it's wonderful, and the non-wonderful bits (the worst of which I'd never heard of until you brought them up, something I'm never sure you quite believed) don't matter when so much of it is different and being in the brainfog business is best for everyone even though there's no careful way for you to check on the effects on people you're taking charge of for your own good, just leaves me feeling rather hopeless about that part of LW. A specific example: I think you're one of the people who says that some men in PUA start out misogynistic, but become less so after they've had some success with attracting women. I wonder how they treat the women they're with before they've recovered from misogyny. Those women don't seem to be there in your calculus.

Nancy, I'm a bit confused by your comment.

From my point of view, the PUA believers have the advantage at LW

What does "PUA believer" mean? Out of the folks who discuss pickup positively on LessWrong, I doubt any of them "believe" in it uncritically. However, they may feel motivated to defend pickup from inaccurate characterizations.

I do not see people who want to discuss pickup in a not-completely-negative way on LW as having an obvious advantage. The debate is not symmetrical. Anyone who can be painted as a defender of pickup is vulnerable to all sorts of stigma. Yet the worst they can say in their defense is to call the attackers close-minded or uneducated about pickup.

and being gently told, no it's wonderful, and the non-wonderful bits (the worst of which I'd never heard of until you brought them up, something I'm never sure you quite believed) don't matter when so much of it is different

Yes, different parts of pickup are different. No, the good parts don't necessarily justify the bad parts, but the presence of good parts means that pickup shouldn't be unequivocally dismissed.

being in the brainfog business is best for everyone even though there's no

... (read more)

How can we reduce this polarization?

Maybe by moderates coming out of the closet, so to speak?

Hi, my name is Daenerys, and I have ambiguous views about PUA. My initial reaction was "Ew! Bad!" but after reading the debates here, talking with a friend, and learning more elsewhere, my views towards it have softened. I still do not think that all of it is 100% ok though. It is a complicated issue with many facets.

Mainly I wish it wouldn't hijack non-PUA discussions. I am seriously close to just starting a PUA discussion to keep all this stuff in one place, but I guess I feel if anyone should do it, it should be the mods.

PUA Moderates of the World, Unite!

7TheOtherDave9ySpeaking as an indifferent moderate, I suspect that well over 90% of the value extractable from discussions of applications of evidence-based reasoning to dating is extractable with significantly less effort from discussions of applications of evidence-based reasoning to job interviews, used car purchases, getting along with parents and children and neighbors and classmates and coworkers, and other social negotiations. That said, I also suspect that the far greater fascination the dating-related threads have for this site than the other stuff has more to do with various people's interests in dating than with their interest in evidence-based reasoning, so I expect we will continue to have the dating-related threads.
6wedrifid9yModerators moving into a role of actively constructing official topics like that would be somewhat awkward. Moderation being damn near invisible for the most part is a feature.
4HughRistik9yHi daenerys! Welcome to the PUA Moderates club.
8NancyLebovitz9yThis is very much a first attempt at answering these matters. I think more honesty on both sides (and you've made a good start) will help. Part of what's been going on is that your advocacy has left me feeling as though my fears about PUA were being completely dismissed. On the other hand, when you've occasionally mentioned some doubts about aspects of PUA, I've felt better, but generally not posted anything about it. I may have said something in favor when the idea of "atypical women" (more straightforward than the average and tending to be geeky) was floated. I'm pretty sure I didn't when someone (probably you) said something about some PUA techniques being unfair (certainly not the word used, but I don't have a better substitute handy) to women who aren't very self-assured, even though that's the sort of thing I'm concerned about. Thanks for posting more about what's going on at your end. As for stigma, I actually think it's funny that both of us feel sufficiently like underdogs that we're defensive. From my point of view, posting against PUA here leads to stigma not just for being close-minded and opposed to rational efforts to improve one's life (rather heavier stigmas here than in most places), but also for unkindness to men who would otherwise be suffering because they don't know how to attract women. I don't know if it was unfair of me to assume that you hadn't performed a moral calculus-- from my point of view, the interests of women were being pretty much dismissed, or being assumed (by much lower standards of proof) to be adequately served by what was more convenient for men. Part of what squicks me about PUA is that it seems as though there's very careful checking about its effects (at least in the short term) on men, but, in the nature of things, much less information about its effects on women.
9HughRistik9yOn LW in general I've spilled gallons of ink engaging in moral analyses of pickup, and of potential objections to pickup techniques. In my PUA FAQ [http://lesswrong.com/lw/44r/post_proposal_attraction_and_seduction_for/3hn5], I made a whole section on ethics. In general, I have trouble reconciling your above perceptions with my participation in pickup discussions on LW. But my memory of those discussions isn't perfect, so it's possible that I've been lax in replying to you personally. If you raised an issue that I didn't satisfactorily respond to, that's probably because I missed it, or left the thread, or had already talked about it elsewhere on LW, not because I didn't think it was important. I'm glad that you noticed, even if you didn't comment much. Perhaps I'll talk more about those doubts when people engage me more about them. Yes, I believe that pickup can be harsh towards women who aren't very self-assured, and who don't have good boundaries. Yet that fact has to be taken in context. Particular sexual norms and sexual cultures (e.g. high status, extraverted, and/or gender-traditional cultures) are harsh towards people of both sexes who aren't very self-assured, and who don't have good boundaries. Pickup is merely one example. I have a shortlist of particular behaviors and mindsets that I find especially objectionable about pickup. Yet when trying to assess PUAs, who is the control group? Who are we comparing them to? Over the years, my ethical opinion of PUAs (on average) has fallen, but my ethical opinion of non-PUAs has been falling perhaps even faster. Criticizing PUAs for doing what everyone else is doing turns PUAs into scapegoats, and lets the rest of the culture off the hook. Thanks for filling me in on some of the stigmas on your end... I hadn't thought of the "unkind to men" one. Still, do you think those stigmas are symmetrical in impact to charges of misogyny and not caring about women?
I am skeptical that you have sufficient data about peo
3[anonymous]7yThat's one of the best sentences I've read today, especially given what the title of this website is.
1[anonymous]9yI think I agree with this. We are already supposed to be honest here most of the time. I think something needs to be changed to facilitate such a debate, if we wish to have it. I just think that while there are hopeful signs that we will chew through this with our usual set of tools and norms, those hopeful signs have been around for years, and the situation doesn't seem to be improving. Honestly I think our only hope of addressing this is having a far more robust debating style, far more limited in scope than we are used to, since tangents often peter out without follow-up or any kind of synthesis or even a clear idea of what is and what isn't agreed upon in these debates.
2TheOtherDave9yMy $0.02: It might help to state clearly what "addressing this" would actually comprise... that is, how could you tell if a discussion had done so successfully? It might also help if everyone involved in that discussion (should such a discussion occur) agreed to some or all of the following guidelines: * I will, when I reject or challenge a conclusion, state clearly why I'm doing so. E.g.: is it incoherent? Is it dangerous? Is it hurtful? Is it ambiguous? Is it unsupported? Does it conflict with my experience? Etc. * I will "taboo" terms where I suspect people in the conversation have significantly different understandings of those terms (for example, "pickup"), and will instead unpack my understanding. * I will acknowledge out loud when a line of reasoning supports a conclusion I disagree with. This does not mean I agree with the conclusion. * I will, insofar as I can, interpret all comments without reference to my prior beliefs about what the individual speaker (as opposed to a generic person) probably meant. Where I can't do that, and my prior beliefs about the speaker are relevantly different from my beliefs about a generic person, I will explicitly summarize those beliefs before articulating conclusions based on them.
1NancyLebovitz9yI don't know what you mean by that-- could you expand on the details or supply an example of a place that has the sort of style you have in mind? My instincts are to go for something less robust. I know that part of what drives my handling of the subject is a good bit of fear, and I suspect there was something of the sort going on for HughRistik. I'm not sure what would need to change at LW to make people more comfortable with talking about their less respectable emotions. I'm contemplating using a pseudonym, but that might not be useful-- a number of people have told me that I write the way I talk. You've probably got a point about synthesis. It might help if people wrote summaries of where various debates stand. I bet that such summaries would get upvoted.
0[anonymous]9yI doubt talking about the emotions, specifically about individuals' emotions, or even how each "side" (ugh tribalism) may feel about the matter, will improve the situation. If anything I suspect it will result in status games around signalling tactically useful emotions and people resenting others for their emotions. Perhaps this should be a start.
1NancyLebovitz9yI think the last clause of the first sentence is missing some words. Emotions are part of what's going on, and it's at least plausible that respect for truth includes talking about them. Discussion which includes talk about emotions can blow up, but it doesn't have to. I suggest that there are specific premises that make talk about emotion go bad-- the idea that emotions don't change, that some people's emotions should trump other people's emotions, and that some emotions should trump other emotions. This list is probably not complete. The challenge would be to allow territorial emotions to be mentioned, but not letting them take charge. I think the crucial thing is to maintain an attitude of "What's going on here?" rather than "This is an emergency-- the other person must be changed or silenced".
1[anonymous]9yCorrect, I was writing at a late hour. I've fixed the missing bits now. This has shifted my opinion more in favour of such a debate, I remain sceptical however. First identifying what exactly are the preconditions for such a debate (completing that list in other words) and second the sheer logistics of making it happen that way seem to me daunting challenges.
4NancyLebovitz9yMore for the list, based on your point about groups: It's important to label speculations about the ill effects of actions based on stated emotions as speculations, and likewise for speculations about the emotions of people who aren't in the discussion. Part of what makes all this hard is that people have to make guesses (on rather little evidence, really) about the trustworthiness of other people. If the assumption of good will is gone, it's hard to get it back. If someone gives a signal which seems to indicate that they shouldn't be trusted, all hell can break loose very quickly, and at that point a lesswrongian cure might be to identify the stakes, which I think are pretty low for the blog. The issues might be different for people who are actually working on FAI.
0NancyLebovitz9yAs for whether this kind of thing can be managed at LW, my answer is maybe tending towards yes. I think the social pressure which can be applied to get people to choose a far view and/or curiosity about the present is pretty strong, but I don't know if it's strong enough. The paradox is that people who insist on naive territorial/status fights have to be changed or silenced.
0lessdazed9yWe could have a pidgin language pseudonym thread.
0lessdazed9yWhat exactly do you mean? If the situation is getting no worse, notice the population is expanding.
6[anonymous]9yIt is not improving. This is up for debate. Vladimir_M and others have argued [http://lesswrong.com/lw/7e6/rationality_and_relationships_september_2011/4qrh] that precisely the fact that blow-ups are rarer means more uninterrupted happy death spirals are occurring and we are in the process of evaporative cooling of group beliefs [http://lesswrong.com/lw/lr/evaporative_cooling_of_group_beliefs/] on the subject. I think they are right. LessWrong actually needs either better standards of rationality or better mechanisms to sort through the ever growing number of responses as it grows in order to keep the signal to noise ratio close to something worth our time. Also I'm confused as to why a larger population of LWers would translate into this being something LWers can more easily make progress on.
1NancyLebovitz9yAs for contrarianism, I think of myself as a second-order curmudgeon. When people talk about how things are getting worse, I push for specific examples rather than just a claim that things are bad. People rarely have anything specific in mind.
-3NancyLebovitz10yYou might be interested in this [http://hugoschwyzer.net/2010/11/08/for-pleasure-for-justice-and-against-shame-on-acceptance-as-a-prerequisite-for-growth/] -- it's by a male feminist who's working on how to have feminism which is genuinely friendly to heterosexual men. I suspect the problem goes deeper than the specifics of feminism, though those are worth addressing. A lot of people interpret moral advice in self-damaging ways, and I'm not sure what's going on there. It seems like a taught vulnerability.
6MileyCyrus9yIf by "make feminism genuinely friendly to men" you mean "defend paternity fraud" [http://www.hugoschwyzer.net/2011/07/13/cuckolding-is-the-worst-thing-that-can-happen-to-a-man/] and compare the "men's right movement to the KKK [http://toysoldier.wordpress.com/2011/03/14/i-know-you-are-but-what-am-i/]"...
2NancyLebovitz9yFor what it's worth, I never got around to reading much of Schwyzer's blog. These days, I read No Seriously, What About Teh Menz? [http://noseriouslywhatabouttehmenz.wordpress.com/], and they're none too fond of Schwyzer either. I hate that there was a felt need to give it a jokey title more than I can say.
0MileyCyrus9yI like NSWATM too. I'm glad it's becoming more popular.
0waveman7yRead the update here. Truth is stranger than fiction. http://en.wikipedia.org/wiki/Hugo_Schwyzer [http://en.wikipedia.org/wiki/Hugo_Schwyzer]
0HughRistik10yThanks, Nancy. I do find Hugo's blog interesting, and I post there sometimes.
0wedrifid10yYou are saying, I take it, that the guy in question was mistaken in believing the advice prohibited the use of pornography? It isn't quite clear to me whether you were saying that he correctly understood the pornography related implications but ought not have considered it self-damaging. I have of course seen both, as well as those (not you) who suggest that following the ideals is actually beneficial to the individual as well, almost by definition.
1NancyLebovitz10yYes, your first suggestion.

To be contrarian, I think you're only portraying a subset of possible outcomes. We might say the following fits:

Kantian Deontological Ethics (all men are obliged to) > Positivist Ethics (ethics don't exist as anything more than preference) > Modern Liberal Ethics (ethics exist as preference but preferences are important survival tools that can lead us to objective ethics),

But the truth is I don't see a necessary triad in any of this because there is no original position. In my example, we would find that Kantian Dialectical Ethics consumed prior t... (read more)

0blacktrance7yFor ethics, I think it's more like (Divine Command/intuitionism)/(subjectivism/nihilism)/(other systems of objective morality).
0Raw_Power10yCould it be that the entire history of philosophy and its "thesis, antithesis, synthesis" recurring structure is an instance of this? Not to mention other liberal arts, and the development of the cycles of fashion.

The bigger issue to me is the value system that makes this phenomenon exist in the first place. It essentially requires people to care more about signaling than seeking truth. Of course this makes sense for many (perhaps most) people since signaling can get you all sorts of other things you want, whereas finding the truth could happen in a vacuum/near vacuum (you could find out some fundamental truth and then die immediately, forget about it, tell it to people and have no one believe you, etc.)

It bothers me that extremely narrow self-interest (as indicated... (read more)

from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream.

Not to argue over definitions, but your use of "hipster" seems overly-narrow. As I understand it, it refers to those who deliberately appropriate styles used by old / other subcultures with concern for aesthetics rather than signaling (or, if you prefer, complex signaling rather than mere group-membership). Obviously some of those folks are doing it to try ... (read more)

[-][anonymous]11y 1

The contrarian treadmill for medicine is more like "conventional / alternative / conventional again / Robin Hanson"

You can't evaluate the truth of a statement by its position in a signaling game; otherwise you could use human psychology to figure out if global warming is real!

Agree to some extent but not fully. In particular, I think that the well-documented phenomenon of illusory superiority gives rational grounds for skepticism when somebody claims that his or her abilities are greater than he or she is able to effectively signal. In situations where there's great disparity between claimed ability and signaled ability, very high levels of skepticism are warranted.

Stupid question for the guys here, but how long is optimal to counter-signal to a woman. i.e., how long do you pretend not to be interested in her, whether she is interested in you or not. Based on my non-trivial romantic experience, I have two theories.

1. Wait until she makes unusually long eye contact with you. It should be pretty noticeable, like ~5 seconds or longer, such that it would otherwise be unusual. Use Bayes' theorem. THEN WAIT ANOTHER WEEK to stop countersignalling.

2. Three weeks. IDK it just seems to work that way.

3. You do not stop counter-si... (read more)

I can think of clear examples where a particular ideological foundation allows for death to be good, without requiring a contrarian or meta-contrarian position. One thought along such lines is whether religion would fall into the contrarian, or meta-contrarian view.

If you ask most 5 year olds, they believe in the metaphysical.

So could a triad be religious/atheist/religious? Or is there an extra level, where the first kind of atheist is the fedora-tipping teenager on reddit, and then there's a meta-meta atheist above that? Or would perhaps the meta-meta position be agnostic?

Is religion too complex for such a simplification?

Great post, obviously.

You argue that signaling often leads to a distribution of intellectual positions following this pattern: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with simple arguments

I think it’s worth noting that the pattern of positions often looks different. For example, there is: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with surprising and even more sophisticated and hard-to-understand arguments

In fact, I think many of yo... (read more)

The Patri Friedman links are dead, and blocked from archive.org. Anyone have access to another archive, so I can see what he's talking about? There has got to be a better way to link. Has no one come up with a distributed archive of linked material yet?

arundelo (4y): archive.is has both things from Patri's LiveJournal:

* "Obama is a Muslim :). Seriously!" [https://archive.fo/CM2Ez]
* "Yer buttons, they got pressed!" [https://archive.fo/DPHFg]

(Unlike archive.org, archive.is does not, IIRC, respect robots.txt [http://www.robotstxt.org/robotstxt.html].) Gwern Branwen has a page on link rot and URL archiving [https://www.gwern.net/Archiving%20URLs].

arundelo (4y): See also the archive.is FAQ [https://archive.fo/faq] and the archive.org blog post of 2017-04-17 [https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/].
[-][anonymous]11y 0

To be contrarian, I think you're only portraying a subset of possible outcomes. We might say the following fits:

Kantian Deontological Ethics (all men are obliged to) > Positivist Ethics (ethics don't exist as anything more than preference) > Modern Liberal Ethics (ethics exist as preference, but preferences are important survival tools that can lead us to objective ethics),

But the truth is I don't see a necessary triad in any of this because there is no original position. In my example, we would find that Kantian Dialectical Ethics consumed prior t...

libertarians are always more hostile toward liberals, even though they have just about as many points of real disagreement with the conservatives.

I think it's because libertarians care a lot more about the points on which they disagree with the liberals. Issues like gay marriage and abortion don't seem to matter as much as economic rights.

I don't think this is the case for most libertarians, especially the younger, internet-based, Ron Paul-oriented kind of libertarians -- many of them are primarily motivated by the social issues... and yet they still seem to prefer arguing with liberals rather than conservatives. I think it has more to do with the fact that they view liberals as smart people who believe stupid things, while they view conservatives as stupid troglodytes not worth wasting time on.

[-][anonymous]11y 0

I'm a little confused: what purpose does this distinction serve? That people like to define their opinions as a rebellion against received opinion isn't novel. What you seem to be saying is: defining yourself against an opinion which is seen as contrarian sends a reliably different social signal than defining yourself against an opinion which is mainstream. Is that a fair assessment? Because this only works if there is a singular, visible mainstream, which is obviously available in fashion but rare in the realm of ideas.

Moreover, if order-of-contrariness doe...

[-][anonymous]11y 0

atheist/libertarian/technophile/sf-fan/early-adopter/programmer = very large self-selected group

college professor = very small highly filtered group

I believe this exemplifies a major weakness of this article, and I'd actually like to hear a better analogy, since I'm somewhat skeptical of a populist takedown of a poorly defined subculture -- one which I presume varies greatly from the author's tribe(s). More vexing, but also harder to describe, is its vaguely moralistic tone, but perhaps that's just the absence of evidence talking.

(Would like to be shown t...

[anonymous] (11y): i.e. needs extra study controls for group-think.

Word!

I went through a few of these on my way through idea-space, and then it took a while to recognize the systematic pattern.

I wonder if conspiracy theories could be a "middle-band" position? Any fool can see the WTC was destroyed by a plane...

[-][anonymous]7y -2
#normcore
[This comment is no longer endorsed by its author]

1. The average IQ of visitors to this site is 145 squared? Impressive!

2. Are you trying to be subtly meta-contrarian with your idiosyncratic orthography, or are you just really glad to see me?

> so no one tries to signal intelligence by saying that 1+1 equals 3

oh, you are so asking for it, no matter how old this topic is...

There IS a sense in which 1+1=3. It is not particularly deep, or philosophical, or even particularly useful mathematically, except possibly to demonstrate a simple result of playing around with unusual axioms.

See, when one man and one woman....

snickers
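(As an aside: one minimal formal sense in which "1+1=3" can literally hold -- my own toy illustration, assuming the "unusual axioms" are arithmetic modulo 1, which is not necessarily what the commenter had in mind:)

```python
# Arithmetic modulo 1 (equivalently, the zero ring): every integer
# reduces to 0, so 1 + 1 and 3 denote the same element, and the
# equation "1 + 1 = 3" holds trivially.
M = 1  # hypothetical choice of modulus for this toy example
lhs = (1 + 1) % M
rhs = 3 % M
print(lhs == rhs)  # prints True, since both sides reduce to 0
```

Of course, such an equation forces the modulus to divide (1+1) - 3 = -1, so M = 1 is the only modulus that works -- which is exactly why the "sense" is as shallow as the commenter says.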
