I've heard of the concept of "weirdness points" many times before, but after a bit of searching I can't find a definitive post describing the concept, so I've decided to make one.  As a disclaimer, I don't think the evidence backing this post is all that strong and I am skeptical, but I do think it's strong enough to be worth considering, and I'm probably going to make some minor life changes based on it.


Chances are, if you're reading this post, you're a bit weird in some way.

No offense, of course.  In fact, I actually mean it as a compliment.  Weirdness is incredibly important.  If people weren't willing to deviate from society and hold weird beliefs, we wouldn't have had the important social movements that ended slavery and pushed back against racism, that created democracy, that expanded social roles for women, and that made the world a better place in numerous other ways.

Many things we now take for granted as part of what makes our society great were once... weird.


Joseph Overton theorized that policy develops through six stages: unthinkable, then radical, then acceptable, then sensible, then popular, then actual policy.  We can see this happen with many policies -- currently same-sex marriage is making its way from popular to actual policy, but not too long ago it was merely acceptable, and not too long before that it was pretty radical.

Some good ideas are currently in the radical range.  Effective altruism itself is a collection of beliefs that typical people would consider pretty radical.  Many people think donating 3% of their income is a lot, let alone the 10% that Giving What We Can asks for, or the 50%+ that some people in the community give.

And that's not all.  Others suggest that everyone become vegetarian, advocate for open borders and/or a universal basic income, call for the abolition of gendered language, want more resources put into mitigating existential risk, focus on research into Friendly AI, pursue cryonics and curing death, etc.

While many of these ideas might make the world a better place if made into policy, all of these ideas are pretty weird.


Weirdness, of course, is a drawback.  People take weird opinions less seriously.

The absurdity heuristic is a real bias that people -- even you -- have.  If an idea sounds weird to you, you're less likely to try to believe it, even if there's overwhelming evidence.  And social proof matters -- if fewer people believe something, others will be less likely to believe it.  Lastly, don't forget the halo effect -- if one part of you seems weird, the rest of you will seem weird too!

(Update: apparently this concept is, itself, already known to social psychology as idiosyncrasy credits.  Thanks, Mr. Commenter!)

...But we can use this knowledge to our advantage.  The halo effect can work in reverse -- if we're normal in many ways, our weird beliefs will seem more normal too.  If we have a notion of weirdness as a kind of currency that we have a limited supply of, we can spend it wisely, without looking like a crank.


All of this leads to the following actionable principles:

Recognize you only have a few "weirdness points" to spend.  Trying to convince all your friends to donate 50% of their income to MIRI, become vegan, get a cryonics plan, and demand open borders will be met with a lot of resistance.  But -- I hypothesize -- if you pick one of these ideas and push it, you'll have a lot more success.

Spend your weirdness points effectively.  Perhaps it's really important that people advocate for open borders.  But, perhaps, getting people to donate to developing world health would overall do more good.  In that case, I'd focus on moving donations to the developing world and leave open borders alone, even though it is really important.  You should triage your weirdness effectively the same way you would triage your donations.

Clean up and look good.  Lookism is a problem in society, and I wish people could look "weird" and still be socially acceptable.  But if you're a guy wearing a dress in public, or some punk rocker vegan advocate, recognize that you're spending your weirdness points fighting lookism, which means fewer weirdness points left to spend promoting veganism or something else.

Advocate for more "normal" policies that are almost as good.   Of course, allocating your "weirdness points" on a few issues doesn't mean you have to stop advocating for other important issues -- just consider being less weird about it.  Perhaps universal basic income truly would be a very effective policy to help the poor in the United States.  But reforming the earned income tax credit and relaxing zoning laws would also both do a lot to help the poor in the US, and such suggestions aren't weird.

Use the foot-in-door technique and the door-in-face technique.  The foot-in-door technique involves starting with a small ask and gradually building up the ask, such as suggesting people donate a little bit effectively, and then gradually get them to take the Giving What We Can Pledge.  The door-in-face technique involves making a big ask (e.g., join Giving What We Can) and then substituting it for a smaller ask, like the Life You Can Save pledge or Try Out Giving.

Reconsider effective altruism's clustering of beliefs.  Right now, effective altruism is associated strongly with donating a lot of money and donating effectively, and less strongly with impact in career choice, veganism, and existential risk.  Of course, I'm not saying that we should drop some of these memes completely.  But maybe EA should disconnect a bit more and compartmentalize -- for example, leaving AI risk to MIRI and not talking about it much on, say, 80,000 Hours.  And maybe instead of asking people to both give more AND give more effectively, we could focus more exclusively on asking people to donate what they already do more effectively.

Evaluate the above with more research.  While I think the evidence base behind this is decent, it's not great and I haven't spent that much time developing it.  I think we should look into this more with a review of the relevant literature and some careful, targeted market research on the individual beliefs within effective altruism (how weird are they?) and how they should be connected or left disconnected.  Maybe some of this has already been done?


Also discussed on the EA Forum and EA Facebook group.


after a bit of searching I can't find a definitive post describing the concept

The idiom used to describe that concept in social psychology is "idiosyncrasy credits", so searching for that phrase produces more relevant material (though as far as I can tell nothing on Less Wrong specifically).

Peter Wildeford
Wow, that's amazing! Thanks!

This post makes some great points. As G.K. Chesterton said:

A man must be orthodox upon most things, or he will never even have time to preach his own heresy.

Fundamentally, other people's attention is a scarce resource, and you have to optimise whatever use of it you can get. Dealing with someone with a large inferential gap can be exhausting and you are liable to be tuned out if you make too many different radical points.

I would also add that part of being persuasive is being persuadable. People do not want to be lectured, and will quickly pick up if you see them as just an audience to be manipulated rather than as equals.

Personally, I find that people who do plenty of things weirdly come off as trying way too hard.  Depending on your environment, choose one thing that you care about and do that.  I wouldn't be found dead at a formal dinner in washed-out jeans and a dirty t-shirt, but I would be willing to experiment with neck pieces different from formal ties.

I agree with the general gist of the post, but I would point out that different groups consider different things weird, and have differing opinions about what weirdness is a bad thing.

To use your "a guy wearing a dress in public" example - I do this occasionally, and gauging from the reactions I've seen so far, it seems to earn me points among the liberal, socially progressive crowd. My general opinions and values are such that this is the group that would already be the most likely to listen to me, while the people who are turned off by such a thing would be disinclined to listen to me anyway.

I would thus suggest, not trying to limit your weirdness, but rather choosing a target audience and only limiting the kind of weirdness that this group would consider freakish or negative, while being less concerned by the kind of weirdness that your target audience considers positive. Weirdness that's considered positive by your target audience may even help your case.

I think I might have been a datapoint in your assessment here, so I feel the need to share my thoughts on this. I would consider myself socially progressive and liberal, and I would hate not being included in your target audience, but for me your wearing cat ears to the CFAR workshop cost you weirdness points that you later earned back by appearing smart and sane in conversations, by acceptance by the peer group, acclimatisation, etc.

I responded positively because it fell within the 'quirky and interesting' range, but I don't think I would have taken you as seriously on subjectively weird political or social opinions. It is true that the cat ears are probably a lot less expensive for me than cultural/political out-group weirdness signals, like a military haircut. It might be a good way to buy other points, so positive overall, but that depends on the circumstances.

Thank you! I appreciate the datapoint.

To make this picture a bit more colourful: I love suits; they look great on me.  But I will be damned if I wear suits to university, for people will laugh at me and not take me seriously, because to the untrained eye all suits are business suits.  On the other hand, hanging around in a coffee place at any odd time of the day is completely normal to the same group.

Contrast this with the average person working in an environment where they wear a suit: there, the suit could help me signal that I am on their side, while being in a coffee place at any odd time of the day would become the weird thing I'd need them to accept.

The lesson, then, is to pick the tribe you are in -- you will know their norms best and adhere to them anyhow -- and then a cause that will produce the most utility within that tribe.  It just so happens that there is the extremely large tribe "the public", which sometimes leads people to ignore that they can influence other really big tribes: Europeans, British, Londoners, and then the members of their boroughs, to divide by region.

This carries the slight problem that people tend to get offended when they realize you're explicitly catering to an audience. If I talked about the plight of the poor and meritocracy to liberals and about responsibility and family to conservatives, advocating the exact same position to each, and then each group found out about the speech I gave to the other, they would both start thinking of me as a duplicitous snake. They might start yelling about "Eli Sennesh's conspiracy to pass a basic income guarantee" or something like that: my policy would seem "eviler" for being able to be upheld from seemingly disjoint perspectives.
Right, I wouldn't advocate having contradictory presentations, but rather choosing a target audience that best fits your personality and strengths, and then sticking to that.

I believe the effect you describe exists, but I think there are two effects which make it unclear that implementing your suggestions is an overall benefit to the average reader. Firstly, to summarize your position:

Each extra weird belief you have detracts from your ability to spread other, perhaps more important, weird memes.  Therefore normal beliefs should be preferred to some extent, even when you expect them to be less correct or less locally useful on an issue, in order to improve your overall effectiveness at spreading your most highly valued memes.

  1. If you have a cluster of beliefs which seem odd in general, then you are more likely to share a "bridge" belief with someone.  When you meet someone who shares at least one strange belief with you, you are much more likely to seriously consider their other beliefs, because you share some common ground and are aware of their ability to find truth against social pressure.  For example, an EA vegan may be vastly more able to introduce the other EA memes to a non-EA vegan than an EA non-vegan.  Since almost all people have at least some weird beliefs, and those who have weird beliefs with literally no overlap with yours are li


Regarding point 2, while it would be epistemologically risky and borderline dark arts, I think the idea is more about what to emphasize and openly signal, not what to actually believe.

True, perhaps I should have been more clear in my dealing with the two, and explained how I think the they can blur across unintentionally. I do think being selective with signals can be instrumentally effective, but I think it's important to be intentionally aware when you're doing that and not allow your current mask to bleed over and influence your true beliefs unduly. Essentially I'd like this post to come with a "Do this sometimes, but be careful and mindful of the possible changes to your beliefs caused by signaling as if you have different beliefs." warning.
There is a definite likelihood that acting out a belief will cause you to believe it, due to your brain poorly distinguishing signalling from true beliefs.  That can be advantageous at times.  Some beliefs may be less important to you, and worthy of being sacrificed for the greater good.  If you, say, believe that forcing people to wear suits is immoral and that eating animals is immoral, then it may be worth sacrificing your belief in the unethical nature of suits so you can better stop people eating animals.  A willingness to do this is beneficial for most people who want to join organizations.  They normally have a set of arbitrary rules on social conduct, dress, who to respect and who to respect less, how to deal with sickness and weakness, what media to watch, and who to escalate issues to in the event of a conflict.  If you don't go along with these, you'll find it tricky gaining much power, because people can spot those who fake these things.
No. I will make concessions about which beliefs to act on in order to optimize for "Goodness", but I'm highly concerned about sacrificing beliefs about the world themselves. Doing this may be beneficial in specific situation, but at a cost to your overall effectiveness in other situations across domains. Since the range of possible situations that you might find yourself in is infinite, there is no way to know whether you've made a change to your model with catastrophic consequences down the line. Furthermore, we evaluate the effectiveness of strategies on the basis of the model we have, so every time your model becomes less accurate, your estimate of what is the best option in a given situation becomes less accurate. (Note that your confidence in your estimate may rise, fall, or stay the same, but I would doubt that having a less accurate model is going to lead to better credence calibration) Allowing your beliefs to change for any reason other than to better reflect the world, only serves to make you worse at knowing how best to deal with the world. Now, changing your values - that's another story.
You can easily model beliefs and work out whether they're likely to have good or bad results.  They could theoretically have any of an infinite variety of impacts, but most probably have a fairly small and limited effect.  Humans have lots of beliefs; they can't all have a major impact.  For the catastrophic consequences issue, have you read this?  http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/  The slippery slope issue of potentially catastrophic consequences from a model can be limited by establishing arbitrary lines beforehand that you refuse to cross.  Whether you should sacrifice your beliefs, like with Gandhi, depends on what the value given for said sacrifice is, how valuable your sacrifice is to your models, and what the likelihood of catastrophic failure is.  You can swear an oath not to cross those lines, or give valuable possessions to people to destroy if you cross them, so you can heavily limit the chance of catastrophic failure.  Yeah, your success rate drops, but your ability to socialize can rise, since irrational beliefs are how many people think.  If your irrational beliefs are of low importance, not likely to cause major issues, and unlikely to cause catastrophic failure, they could be helpful.

Interesting post (upvoted), but I would add one "correction": the amount of "weirdness points" isn't completely fixed; there are ways to get more of them, especially by being famous, doing something positive, or helping people.  For example, by writing a very popular fanfiction (HPMOR), Eliezer earned additional weirdness points to spend.

Or on my own level: I noticed that by being efficient in my job and helpful with my workmates, I'm allowed a higher number of "weirdness points" before my workmates start considering me a loonie.  But then you have to be very careful, because weirdness points earned within a group (say, my workmates) don't extend outside of the group.

For anyone who hasn't read HP and thinks fantasy is weird, he lost points for that. One way to get more points is to listen to other people's weird ideas. In fact, if someone else proposes a weird idea that you already agree with, it may be a good idea not to let on, but publicly "get convinced", to gain points. (Does that count as Dark Arts?)
I have actually thought of that, but in relation to a different problem: not that of seeming less "weird", but that of convincing someone of an unpopular idea. It seems like the best way to convince people of something is to act like you're still in the process of being convinced yourself; for instance, I don't remember where, but I do remember reading an anecdote on how someone was able to convince his girlfriend of atheism while in a genuine crisis of faith himself. Incidentally, I should emphasize that his crisis of faith was genuine at the time--but it should work even if it's not genuine, as long as the facade is convincing. I theorize that this may be due to in-group affiliation, i.e. if you're already sure of something and trying to convince me, then you're an outsider pushing an agenda, but if you yourself are unsure and are coming to me for advice, you're on "my side", etc. It's easy to become entangled in just-so stories, so obviously take all of this speculation with a generous helping of salt, but it seems at least worth a try. (I do agree, however, that this seems borderline Dark Arts, so maybe not that great of an idea, especially if you value your relationship with that person enough to care if you're found out.)
This is called "concern trolling". It isn't "borderline Dark Arts", it's straight-out lying. This imagines the plan working, and uses that as argument for the plan working.
I was not aware that it had a name; thank you for telling me. Agreed. The question, however, is whether or not this is sometimes justified. Well, no. It assumes that the plan doesn't fall prey to an obvious failure mode, and suggests that if it does not, it has a high likelihood of success. (The idea being that if failure mode X is avoided, then the plan should work, so we should be careful to avoid failure mode X when/if enacting the plan.)
The failure mode (people detecting the lie) is what it would be for this plan to fail. It's like the empty sort of sports commentary that says "if our opponents don't get any more goals than us, we can't lose", or the marketing plan that amounts to "if we get just 0.001% of this huge market, we'll be rich." See also. Lying is hard, and likely beyond the capability of anyone who has just discovered the idea "I know, why not just lie!"
That the plan would fail if the lie is detected is not under contest, I think. However, it is, in my opinion, a relatively trivial failure mode, where "trivial" is meant to be taken in the sense that it is obvious, not that it is necessarily easy to avoid. For instance, equations of the form a^n + b^n = c^n have trivial solutions in the form (a,b,c) = (0,0,0), but those are not interesting. My original statement was meant to be applied more as a disclaimer than anything else, i.e. "Well obviously this is an easy way for the plan to fail, but getting past that..." The reason for this was because there might be more intricate/subtle failure modes that I've not yet thought of, and my statement was intended more as an invitation to think of some of these less trivial failure modes than as an argument for the plan's success. This, incidentally, is why I think your analogies don't apply; the failure modes that you mention in those cases are so broad as to be considered blanket statements, which prevents the existence of more interesting failure modes. A better statement in your sports analogy, for example, might be, "Well, if our star player isn't sick, we stand a decent chance of winning," with the unstated implication being that of course there might be other complications independent of the star player being sick. (Unless, of course, you think the possibility of the lie being detected is the only failure mode, in which case I'd say you're being unrealistically optimistic.) Also, it tends to be my experience that lies of omission are much easier to cover up than explicit lies, and the sort suggested in the original scenario seem to be closer to the former than to the latter. Any comments here? (I also think that the main problem with lying from a moral perspective is that not just that it causes epistemic inaccuracy on the part of the person being lied to, but that it causes inaccuracies in such a way that it interferes with them instrumentally. Lying omissively about
Lying also does heavy damage to one's credibility. The binary classification of other people into "honest folk" and "liars" is quite widespread in the real world. You get classified into "liars", pretty hard to get out of there.
Well, you never actually say anything untrue; you're just acting uncertain in order to have a better chance of getting through to the other person. It seems intuitively plausible that the reputational effects from that might not be as bad as the reputational effects that would come from, say, straight-out lying; I accept that this may be untrue, but if it is, I'd want to know why. Moreover, all of this is contingent upon you being found out. In a scenario like this, is that really that likely? How is the other person going to confirm your mental state?
YMMV, of course, but I think what matters is the intent to deceive. Once it manifests itself, the specific forms the deception takes do not matter much (though their "level" or magnitude does). This is not a court of law, no proof required -- "it looks like" is often sufficient, if only for direct questions which will put you on the spot.
Well, yes, but are they really going to jump right to "it looks like" without any prior evidence? That seems like major privileging the hypothesis. I mean, if you weren't already primed by this conversation, would you automatically think "They might be lying about being unconvinced" if someone starts saying something skeptical about, say, cryonics? The only way I could see that happening is if the other person lets something slip, and when the topic in question is your own mental state, it doesn't sound too hard to keep the fact that you already believe something concealed. It's just like passing the Ideological Turing Test, in a way.
Humans, in particular neurotypical humans, are pretty good at picking up clues (e.g. nonverbal) that something in a social situation is not quite on the up-and-up. That doesn't necessarily rise to the conscious level of a verbalized thought "They might be lying...", but manifests itself as a discomfort and unease. It's certainly possible and is easy for a certain type of people. I expect it to be not so easy for a different type of people, like ones who tend to hang out at LW... You need not just conceal your mental state, you need to actively pretend to have a different mental state.
Fair enough. How about online discourse, then? I doubt you'd be able to pick up much nonverbal content there.
It is much easier to pretend online, but it's also harder to convince somebody of something.
Would you say the difficulty of convincing someone scales proportionally with the ease of pretending?
Hm. I don't know. I think it's true when comparing a face-to-face conversation with an online one, but I have no idea whether that can be extended to a general rule.
Yes. It is.
That's not very helpful, though. Could you go into specifics?
In general, any argument for the success of a plan that sounds like "how likely is it that it could go wrong?" is a planning fallacy waiting to bite you. Specifically, people can be quite good at detecting lies. On one theory, that's what we've evolved these huge brains for: an arms race of lying vs. detecting lies. If you lie as well as you possibly can, you're only keeping up with everyone else detecting lies as well as they can. On internet forums, I see concern trolls and fake friends being unmasked pretty quickly. Face to face, when person A tells me something about person B not present, I have sometimes had occasion to think, "ok, that's your story, but just how much do I actually believe it?", or "that was the most inept attempt to plant a rumour I've ever heard; I shall be sure to do exactly what you ask and not breathe a word of this to anyone, especially not to the people you're probably hoping I'll pass this on to." If it's a matter that does not much concern me, I won't even let person A know they've been rumbled. In the present case, the result of being found out is not only that your relationship ends with the person whose religion you were trying to undermine, but they will think that an atheist tried to subvert their religion with lies, and they will be completely right. "As do all atheists", their co-religionists will be happy to tell them afterwards, in conversations you will not be present at.
In what manner do you think it is most likely for this to occur? If possible, could you outline some contributing factors that led to you spotting the lie?
That's a bit like asking how I recognise someone's face, or how I manage to walk in a straight line.  Sometimes things just "sound a bit off", as one says, which of course is not an explanation, just a description of what it feels like.  That brings to my attention the distinction between what has been said and whether it is true, and then I can consider what other ways there are of joining up the dots.  Of course, that possibility is always present when one person speaks to another, and having cultivated consciousness of abstraction, it requires little activation energy to engage.  In fact, that's my default attitude whenever person A tells me anything negatively charged about B: not to immediately think "what a bad person B is!", although they may be, but "this is the story that A has told me; how likely does it seem to be true?"
Well, based on that description, would I be accurate in saying that it seems as though your "method" would generate a lot of false positives?
You can always trade off specificity for sensitivity.  It is also possible to ask additional questions when you are suspicious.
Suspending judgement is not a false positive.  And even from such a limited interaction as seeing the name and subject line of an email, I am almost never wrong in detecting spam, and that's the spam that got past the automatic filters.  I don't think I'm exceptional; people are good at this sort of thing.  My hobby: looking at the section of the sidebar called "Recent on rationality blogs", and predicting before mousing over the links whether the source is SlateStarCodex, Overcoming Bias, an EA blog, or other.  I get above 90% there, and while "Donor coordination" is obviously an EA subject, I can't explain what makes "One in a Billion?" and "On Stossel Tonight" clearly OB titles, while "Framing for Light Instead of Heat" could only be SSC.
Deliberately uninformative title. Robin Hanson does this fairly often, Scott much less so. Very short, which is highly characteristic of OB. Very large number is suggestive of "large-scale" concerns, more characteristic of OB than of Scott. Nothing that obviously suggests EAism. Self-promoting (RH frequently puts up things about his public appearances; other sidebarry folks don't). Very short. Assumes you know what "Stossel" is; if you don't this reads as "deliberately uninformative" (somewhat typical of OB), and if you do it reads as "right-wing and businessy connections" (very typical of OB). (As you may gather, I share your hobby.)
Huh. I must just be unusually stupid with respect to "this sort of thing", then, as I'm rarely able to discern a plausible-sounding lie from the truth based on nonverbal cues. (As a result, my compensation heuristic is "ignore any and all rumors, especially negative ones".) Ah, well. It looks like I implicitly committed the typical mind fallacy in assuming that everyone would have a similar level of difficulty as I do when detecting "off-ness". That sounds like an awesome hobby, and one that I feel like I should start trying. Would you say you've improved at doing this over time, or do you think your level of skill has remained relatively constant?
I couldn't really say. Back when I read OB, I'd often think, "Yes, that's a typical OB title", but of course I knew I was looking at OB. When the sidebar blogroll was introduced here, I realised that I could still tell the OB titles from the rest. The "X is not about Y" template is a giveaway, of course, but Hanson hasn't used that for some time. SSC tends to use more auxiliary words, OB leaves them out. Where Scott writes "Framing For Light Instead Of Heat", Hanson would have written "Light Not Heat", or perhaps "Light Or Heat?".
It sounds like you're implying that most lies are easily found, and consequently, most unchallenged statements are truths.  That's really, really stretching my capacity to believe.  Either you're unique with this ability, or you're also committing the typical mind fallacy, w.r.t. thinking all people are only as good at lying (at max) as you are at sniffing them out.
Emphasis added: In a scenario like this, i.e. pretending to be undergoing a deep crisis of faith in order to undermine someone else's. My observation is that in practice, concern trolling is rapidly found out, and the bigger the audience, the shorter the time to being nailed. On the whole, people are as good at lying as, on the whole, people are at finding them out, because it's an arms race. Some will do better, some worse; anyone to whom the idea, "why not just lie!" has only just occurred is unlikely to be in the former class.
Most people are not able to produce, via conscious choice, the kind of strong emotions that come with a genuine crisis of faith.  Pretending to have them might come off as creepy even if the other person can't exactly pinpoint what's wrong.
Fair enough. Are there any subjects about which there might not be as high an emotional backlash? Cryonics, maybe? Start off acting unconvinced and then visibly think about it over a period of time, coming to accept it later on. That doesn't seem like a lot of emotion is involved; it seems entirely intellectual, and the main factor against cryonics is the "weirdness factor", so if there's someone alongside you getting convinced, it might make it easier, especially due to conformity effects.
The topic of cryonics is about dealing with death. There's a lot of emotion involved for most people.
It's true that cryonics is about death, but I don't think that necessarily means there's "a lot of emotion involved". Most forms of rejection to cryonics that I've seen seem to be pretty intellectual, actually; there's a bunch of things like cost-benefit analysis and probability estimates going on, etc. I personally think it's likely that there is some motivated cognition going on, but I don't think it's due to heavy emotions. As I said in my earlier comment, I think that the main factor against cryonics is the fact that it seems "weird", and therefore the people who are signed up for it also seem "weird". If that's the case, then it may be to the advantage of cryonics advocates to place themselves in the "normal" category first by acting skeptical of a crankish-sounding idea, before slowly getting "convinced". Compare that approach to the usual approach: "Hey, death sucks, wanna sign up to get your head frozen so you'll have a chance at getting thawed in the future?" Comparatively speaking, I think that the "usual" approach is significantly more likely to get you landed in the "crackpot" category.
That's really not how most people make their decisions. There are plenty of ways to tell someone about cryonics that don't involve a direct plea for them to take action.
Maybe it's not how most people make their decisions, but I have seen a significant number of people who do reject cryonics on a firmly intellectual basis, both online and in real life. I suppose you could argue that it's not their true rejection (in fact, it almost certainly isn't), but even so, that's evidence against heavy emotions playing a significant part in their decision process. Yes, but most of them still suffer from the "weirdness factor".
Peter Wildeford:
Seems like another way you're taking advantage of a positive halo effect!

Nerds are very often too shy. They are not willing to go to the extreme. Radical feminism has a lot of influence on our society and plenty of members of that community don't hold back at all.

Bending your own views to avoid offending other people leads to being perceived as unconfident. It's not authentic. That's bad for movement building.

I think you are making a mistake if you treat the goal of every project as being about affecting public policy. Quite often you don't need a majority. It's much better to have a small group of strongly committed people than a large group that's only lukewarmly committed.

Mormons who spent 2 years doing their mission are extreme. Mormonism is growing really fast while less extreme Christian groups don't grow. Groups that advocate extreme positions give their members a feeling that they are special. They are not boring.

In the scarce attention economy of the 21st century being boring is one of the worst things you can do if you want to speak to a lot of people.


Mormon missions are not primarily there to gain converts. They are there to force the Mormon to make a commitment of time and resources to Mormonism, so that the sunk costs psychologically tie him to the religion.

(Of course, it wasn't necessarily consciously designed for this purpose, but that doesn't prevent the purpose from being served.)

That's part of the point. If you want strong changes in society, you need to do movement building. That means you don't focus on outsiders but on strengthening the commitment inside the movement.
Peter Wildeford:
Though they're only really extreme about a few things -- their Mormonism and some personal restraint (e.g., no alcohol, etc.) that serves religious purposes. They're otherwise quite normal people. And I think religious weirdness is one of the kinds of weirdness that people see past the most easily. I'm not saying that one shouldn't try to be extreme, but that one should (if one aims at public advocacy) try to be extreme in only a few things.
It seems borderline-literally insane to me that "personal restraint" is "extreme" and marks one as a radical.
It's pretty common for groups to treat individual restraint in the context of group lack-of-restraint as a violation of group norms, though "radical" is rarely the word used. Does that seem insane to you more generally (and if so, can you say more about why)? If not, I suspect "extreme" has multiple definitions in this discussion and would be best dropped in favor of more precise phrases.
Yes. That seems insane to me. Self-restraint is applied self-control. It is a virtue and is something to be admired, so long as one is restraining oneself for some benefit, not needlessly (though personally, I have respect for all forms of restraint, even needless ones, e.g. religiously motivated celibacy, in the same way I have respect for the courage of suicide bombers). Is restraint from alcohol consumption without benefit? No. Alcohol is a poison that limits one's faculties in small amounts and has detrimental health effects in large doses.

A friend was sharing with me the other day that he doesn't like the culture of... I'm not sure what to call it... knowing overindulgence? He gave the example of the half-joking veneration of bacon as something that everyone loves and always wants more of, as if to say, "I know it's unhealthy, but that's why we love it so much." I hear people say, "I don't eat healthy food," and in the culture we live in, that is an acceptable thing to say, where to me it sounds like an admission that you lack self-control, but instead of acknowledging it as a problem and working on it, glossing over it with a laugh.

I am a vegetarian. I once sat down for a meal with a friend and my sister. The friend asked my sister if she was a vegetarian. My sister said she wasn't. The friend said (again, half joking), "Good", as if vegetarianism is a character flaw: real people love meat. I confronted her about it later, and said that that bothered me. I know not everyone is a vegetarian, and it is each person's own choice to weigh the costs and benefits and decide for themselves, but there are many, many good reasons to practice some kind of meat restriction, from the ecological, to the moral, to simple health. I won't tolerate my friend acting as if not eating meat means there is something wrong with you.

It feels to me, and maybe I'm projecting, that not everyone is up for making hard choices*, but instead of owning up to that, we have bui…
(nods) Which is exactly what I asked for; thank you. I think you're using a non-standard definition of "insane," but not an indefensible one.
Depends on what kind. The one that runs counter to the prevailing social norms does mark one as a radical. You can treat incluses as people who practice "personal restraint" :-/
I think these fall under the group that I admire the way I admire the courage of suicide bombers. I admire the dedication, but I think they are insane for other reasons.
Mormon polygamy is not normal. Mormons donating 10% of their income also isn't normal. Mormonism has enough impact on a person that some Mormons can identify other Mormons. The thing that distinguishes religious weirdness is that it comes from a highly motivated place and isn't a random whim. I'm not exactly sure what you mean by "public advocacy".
Mormons don't practice polygamy anymore, and they haven't for a long time (except for small 'unofficial' groups). Most Mormons I know feel pretty weird about it themselves.
Peter Wildeford:
Good point. But, if I recall correctly, don't they go to some lengths to not talk about these things much?

- I don't think it's just a highly motivated place, but rather a highly motivated place that other people can easily verify as highly motivated and relate to.

- By "public advocacy" I mean bringing up an ingroup idea with people outside your ingroup. For example, I'd love it if people ate less meat. So I might bring that up with people, as the topic arises, and advocate for it (i.e., tell them why I think not eating meat is better). I still envision it as a two-way discussion where I'm open to the idea of being wrong, but I'd like them to be less affected by certain biases (like weirdness) if possible.
I don't think a conversation at a friend's birthday qualifies as "public" in the traditional sense. I think that's seldom the most straightforward way of changing people through personal conversation; it makes much more sense to ask a lot of questions and target your communication at the other person. Status also matters: sometimes doing something weird lowers your status, other times it raises it. It always makes sense to look at the individual situation.
Peter Wildeford:
What did you have in mind? I think this advice applies even more so to "public" venues in the traditional sense (e.g., blogging for general audiences).

Thanks, Peter. :) I agree about appearing normal when the issue is trivial. I'm not convinced about minimizing weirdness on important topics. Some counter-considerations:

  • People like Nick Bostrom seem to acquire prestige by taking on many controversial ideas at once. If Bostrom's only schtick were anthropic bias, he probably wouldn't have reached FP's top 100 thinkers.
  • Focusing on only one controversial issue may make you appear single-minded, like "Oh, that guy only cares about X and can't see that Y and Z are also important topics."
  • If you advocate many things, people can choose the one they agree with most or find easiest to do.

Clean up and look good. Lookism is a problem in society, and I wish people could look "weird" and still be socially acceptable. But if you're a guy wearing a dress in public, or some punk rocker vegan advocate, recognize that you're spending your weirdness points fighting lookism, which means fewer weirdness points to spend promoting veganism or something else.

Caveat - if people already know you are well liked and popular, the weirdness actually functions as counter-signalling which makes you more popular - similar to how teasing strengthens ...

Notions of weirdness vary a lot. Also, individual instances of weirdness will be visible to different people. Both these challenge the idea we should bother having aggregated measurements of weirdness at all. People's sensitivity to weirdness also varies, sometimes in complicated ways. Some people are actually more receptive to ideas that sound weird. Other people will believe if someone is both successful and weird they must know something others don't. Others are willing to ignore weirdness if allied with it. This is all very complex.

I think our social b...

It would be helpful to point out that your post is within the context of trying to convince other people, aka memetic warfare. Your "actionable principles" serve a specific goal which you do not identify.

Peter Wildeford:
Fair point. I concede I'm only writing here in the context of public advocacy.

This reminds me of a quote by George Bernard Shaw:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

I think it's important to consider the varying exchange rates, as well as the possible exchange options, when choosing how to spend your weirdness points.

Real example: Like enough other people on this website to make it an even less representative sample of the general population, I'm autistic, so spending a weirdness point on being openly such is useful, not because it's the best possible way to promote disability rights, but rather because I can save a lot of willpower that I need for other tasks that way.

Fake example: The Exemplars, a band popular with ...

This seems like a subset of point #7 here (https://slatestarcodex.com/2016/02/20/writing-advice/)

7. Figure out who you’re trying to convince, then use the right tribal signals

I would define weirdness as emitting signals that the tribe recognizes as "other" but not "enemy".  Emitting enough of the in-group signals may counteract that.

This is also reminiscent of John Gottman's empirical research on married couples where he found they were much more likely to split if the ratio of positive to negative interactions was less than 5 to 1.

Have we worked through the game theory here? It feels like negotiating with terrorists.

My objection is to the 'set amount.' What about the Bunny Ears Lawyer trope, where someone purchases additional weirdness points with a track record of outstanding competence?
The Bunny Ears Lawyer trope is one of those that never shows up in real life.
Sure they do, you just need to look in the right places. Speaking of lawyers, one place would be the tax department of a top-tier New York law firm. Another place would be sysadmins of small companies.
Not to the same extent, maybe, but I was under the impression that it does occur. I'm not willing to go check right now due to memetic hazard, but doesn't TV Tropes have a page full of real life examples?
Nope. No Real Life entry for that trope exists.
It appears it used to, as it is referenced in the text, and it can be found on a fork of tvtropes here.
My favorite example from that page is Paul Erdős, who spent his life couch-surfing from one mathematical collaborator to the next.

Ozy Frantz wrote a thoughtful response to the idea of weirdness points. Not necessarily disagreeing, but pointing out serious limitations in the idea. Peter Hurford, I think you'll appreciate their insights whether you agree or not.



Weirdness is incredibly important. If people weren't willing to deviate from society and hold weird beliefs, we wouldn't have had the important social movements that ended slavery and pushed back against racism, that created democracy, that expanded social roles for women, and that made the world a better place in numerous other ways.

Well, there is that. But there's also just the fact that being weird is what makes people interesting and fun, the sort of people I want to hang out with.

While many of these ideas might make the world a better place if

...

It might be worth emphasizing the difference between persuading people and being right. The kind of people who care about weirdness points are seldom the ones contributing good new data to any question of fact, nor those posing the best reasoning for judgments of value. I appreciate the impulse to try to convince people of things, but convincing people is extremely hard. I'm not Noam Chomsky; therefore, I have other things to do aside from thinking and arguing with people. And if I have to do one of those two worse in order to save time, I choose to dump the 'convince people' stat and load up on clear thinking.

Weirdness points are evidence of being wrong, since someone who holds positions different from everyone else on almost every point is probably willfully contrarian. So people who care about truth will also care about weirdness points; if someone is too weird (e.g. timecube), it is probably not worth your time listening to them.
I at least somewhat disagree with this. Weirdness is not a reliable measure of truth; in fact, I'd argue that it may even slightly anti-correlate with truth (but only slightly--it's not like it anti-correlates so well that you'd be able to get a good picture of reality out of it by reversing it, mind you). After all, not every change is an improvement, but every improvement is a change. Every position that seems like common sense to us nowadays was once considered "weird" or "unusual". So yeah, dismissing positions on the basis of weirdness alone doesn't seem like that great of an idea; see the absurdity heuristic for further details.

Also, people often have reasons for discrediting things outside of striving for epistemic accuracy. Good people/causes can often be cast in a bad light by anyone who doesn't like them; for instance*, RationalWiki's article on Eliezer makes him so weird-sounding as to be absolutely cringeworthy to anyone who actually knows him, and yet plenty of people might read it and be turned off by the claims, just like they're turned off from stuff like Time Cube.

*It is not my intention to start a flame-war or to cause a thread derailment by bringing up RW. I am aware that this sort of thing happens semi-frequently on LW, which is why I am stating my intentions here in advance. I would ask that anyone replying to this comment not stray too far from the main point, and in particular please do not bring up any RW vendettas. I am not a moderator, so obviously I have no power to enforce this request, but I do think that my request would prevent any derailment or hostility if acceded to. Thank you.
I think all three of us are right and secretly all agree:

(1) Weirdness points are Bayesian evidence of being wrong (surely Time Cube doesn't seem more accurate because no one believes it). Normal stuff is wrong quite a lot, but not more wrong than guessing.

(2) Weirdness points can never give you enough certainty to dismiss an issue completely. Time Cube is wrong because it is Time Cube (read: insane ramblings), not because it's unpopular. Of course we don't have a duty to research all unlikely things, but if we already are thinking about it, "it's weird" isn't a good/rational place to stop, unless you want to just do something else, like eat a banana or go to the park or something.

And, critically, (3) if you don't have evidence enough to completely swamp and replace the Bayesian update from weirdness points, you really don't have enough evidence to contribute a whole lot to any search for truth. That's what I was getting at.

It's also pretty unlikely that the weirdness that "weirdness points" refer to would be unknown to someone you're talking with.
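The odds form of Bayes' rule makes points (1) through (3) concrete. A minimal sketch, where every number is an illustrative assumption rather than a measurement:

```python
# Toy Bayesian update, treating "this sounds weird" as weak evidence
# against a claim. All likelihood ratios below are made-up assumptions
# chosen only to illustrate the shape of the argument.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Suppose a claim starts at even odds (1:1) of being true.
prior = 1.0

# Assume weird-sounding true claims are half as common as weird-sounding
# false ones, so "sounds weird" carries a likelihood ratio of 0.5.
after_weirdness = posterior_odds(prior, 0.5)        # 1:2 against

# Strong direct evidence (assumed likelihood ratio of 20) swamps that
# update entirely, which is point (3): if your evidence can't dominate
# the weirdness penalty, it wasn't much evidence to begin with.
after_evidence = posterior_odds(after_weirdness, 20.0)  # 10:1 in favor

print(after_weirdness, after_evidence)
```

The design point is just that the weirdness update is multiplicative and modest, so any substantial likelihood ratio from object-level evidence dwarfs it; weirdness alone never drives the odds to zero, matching point (2).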

Weirdness is a scarce resource with respect to ourselves? Great! Does that mean that we'd benefit from cooperating such that we all take on different facets of the weirder whole, like different faces of a PR operation?

People tend to model organizations as agents, and I expect weirdness in an org's public-facing representatives would be more salient than normality. That implies that representatives' weirdness would be taken as cumulative rather than exclusive. So, no.

Lots I agree with here. I was surprised to see basic income in your clustering above. As much as I think Cubans are the ones doing socialism wrong, and everyone doing socialism less, like Venezuela, isn't socialist enough, I'm right-wing and mindkilled enough to have rejected basic income using general right-wing arguments and assumptions, until I read the consistency of positive examples on the Wikipedia page. The straw that broke the camel's back was that there is right-wing support for basic income. That being said, I'm confident that I would pass ideological Turing tests.

It is generally a bad idea to change your views based on a Wikipedia page, particularly a Wikipedia page on a politically charged subject. What you see may only mean that nobody happened to stop by the page who was willing to add the negative examples. Also, be careful that you don't read the article as saying more than it actually says. It says that "several people" on the right supported it. Great, at least two, and both of them from far enough in the past that "right wing" doesn't mean what it means today.
Depends on how much you knew about the topic to begin with.
Cool! You can try taking them here: http://blacker.caltech.edu/itt/

I agree that people who want to influence others should avoid having others discount their opinions. I don't see what your analysis here offers beyond noticing that and the simple fact that people generally discount the opinions of weirdos, though. Notions of weirdness vary a lot. Also, individual instances of weirdness will be visible to different people. Both these strain the idea of having any aggregated measurement of weirdness at all. People's sensitivity to weirdness also varies, sometimes in complicated ways. Some people are actually more receptive ...

[This comment is no longer endorsed by its author]

Treviño's degrees of acceptance, not Overton's window.


Use the foot-in-door technique and the door-in-face technique

Using tactics intentionally designed to appeal to people's biases is dark arts. If you try these, you completely deserve to have rationalists tell you, "Sorry, I've been trying to remove my biases, not encourage them. Go away until you can be more honest."