by [anonymous]

Related: Politics is the Mind-Killer, Other-Optimizing

When someone says something stupid, I get an urge to correct them. Based on the stories I hear from others, I'm not the only one.

For example, some of my friends are into this rationality thing, and they've learned about all these biases and correct ways to get things done. Naturally, they get irritated with people who haven't learned this stuff. They complain about how their family members or coworkers aren't rational, and they ask what is the best way to correct them.

I could get into the details of the optimal set of arguments to turn someone into a rationalist, or I could go a bit meta and ask: "Why would you want to do that?"

Why should you spend your time correcting someone else's reasoning?

One reason that comes up is that it's somehow valuable to change their reasoning. OK, when is that actually possible?

  1. You actually know better than them.

  2. You know how to patch their reasoning.

  3. They will be receptive to said patching.

  4. They will actually change their behavior if they accept the patch.

It seems like it should be rather rare for those conditions to all be true, or even to be likely enough for the expected gain to be worth the cost, and yet I feel the urge quite often. And I'm not thinking it through and deciding, I'm just feeling an urge; humans are adaptation executors, and this one seems like an adaptation. For some reason "correcting" people's reasoning was important enough in the ancestral environment to be special-cased in motivation hardware.

I could try to spin an ev-psych just-so story about tribal status, intellectual dominance hierarchies, ingroup-outgroup signaling, and whatnot, but I'm not an evolutionary psychologist, so I wouldn't actually know what I was doing, and the details don't matter anyway. What matters is that this urge seems to be hardware, and it probably has nothing to do with actual truth or your strategic concerns.

It seems to happen to everyone who has ideas. Social justice types get frustrated with people who seem unable to acknowledge their own privilege. The epistemological flamewar between atheists and theists rages continually across the internet. Tech-savvy folk get frustrated with others' total inability to explore and use Google. Some aspiring rationalists get annoyed with people who refuse to decompartmentalize or who claim that something is in a separate magisterium.

Some of those border on being just classic blue vs. green thinking, but from the outside, the rationality example isn't all that different. They all seem to be motivated mostly by "This person fails to display the complex habits of thought that I think are fashionable; I should {make fun of them | correct them | call them out}."

I'm now quite skeptical that my urge to correct reflects an actual opportunity to win by improving someone's thinking, given that I'd feel it whether or not I could actually help, and that it seems to be caused by something else.

The value of attempting a rationality-intervention has gone back down towards baseline, but it's not obvious that the baseline value of rationality interventions is all that low. Maybe it's a good idea, even if there is a possible bias supporting it. We can't win just by reversing our biases; reversed stupidity is not intelligence.

The best reason I can think of to correct flawed thinking is if your ability to accomplish your goals directly depends on their rationality. Maybe they are your business partner, or your spouse. Someone specific and close who you can cooperate with a lot. If this is the case, it's near the same level of urgency as correcting your own.

Another good reason (to discuss the subject at least) is that discussing your ideas with smart people is a good way to make your ideas better. I often get my dad to poke holes in my current craziness, because he is smarter and wiser than me. If this is your angle, keep in mind that if you expect someone else to correct you, it's probably not best to go in making bold claims and implicitly claiming intellectual dominance.

An OK reason is that creating more rationalists is valuable in general. This one is less good than it first appears. Do you really think your comparative advantage right now is in converting this person to your way of thinking? Is that really worth the risk of social friction and expenditure of time and mental energy? Is this the best method you can think of for creating more rationalists?

I think it is valuable to raise the sanity waterline when you can, but using methods of mass instruction like writing blog posts, administering a meetup, or launching a whole rationality movement is a lot more effective than arguing with your mom. Those options aren't for everybody of course, but if you're into waterline-manipulation, you should at least be considering strategies like them. At least consider picking a better time.

Another reason that gets brought up is that turning people around you into rationalists is instrumental in a selfish way, because it makes life easier for you. This one is suspect to me, even without the incentive to rationalize. Did you also seriously consider sabotaging people's rationality to take advantage of them? Surely that's nearly as plausible a priori. For what specific reason did your search process rank cooperation over predation?

I'm sure there are plenty of good reasons to prefer cooperation, but of course no search process was ever run. All of these reasons that come to mind when I think of why I might want to fix someone's reasoning are just post-hoc rationalizations of an automatic behavior. The true chain of cause-and-effect is observe->feel->act; no planning or thinking involved, except where it is necessary for the act. And that feeling isn't specific to rationality; it affects all mental habits, even stupid ones.

Rationality isn't just a new memetic orthodoxy for the cool kids, it's about actually winning. Every improvement requires a change. Rationalizing strategic reasons for instinctual behavior isn't change, it's spending your resources answering questions with zero value of information. Rationality isn't about what other people are doing wrong; it's about what you are doing wrong.

I used to call this practice of modeling other people's thoughts to enforce orthodoxy on them "incorrect use of empathy", but in terms of ev-psych, it may be exactly the correct use of empathy. We can call it Memetic Tribalism instead.

(I've ignored the other reason to correct people's reasoning, which is that it's fun and status-increasing. When I reflect on my reasons for writing posts like this, it turns out I do it largely for the fun and internet status points, but I try to at least be aware of that.)

62 comments
[anonymous]

I was going to link that, but somehow I forgot. Thanks!

You can make the alt-text with ![alt-text](url), I think. Oh wait, it looks like you already did, but with no alt-text. What's up with that?

Because it really is alt text, which is properly displayed as an alternative to the image when it is unavailable.

Microsoft Internet Explorer has the particular behavior of displaying the alt attribute as a tooltip, but most other browsers do not do this.

[anonymous]

Is the tooltip ![alt text](url "tooltip"), then? Or does markdown not do it?

Ah, that worked! Thanks!

If that's not the tooltip, I've been terribly mistaken for a while.
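To spell out the syntax (the URL here is just a placeholder, not one from this thread):

```markdown
![alt text](http://example.com/image.png "tooltip")
```

In standard Markdown this renders to <img src="http://example.com/image.png" alt="alt text" title="tooltip" />, where the alt attribute is the fallback text shown when the image doesn't load and the title attribute is what most browsers display as the hover tooltip.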

The most immediate reason I want people around me to become more rational is that irrationality (in some specific forms) repels me. I admit this is how being a member of a tribe can feel from inside. (In a parallel branch of the multiverse I could be a theist, repelled by atheists. I mean, how could you not dislike people who throw away infinities of utilons, only because they are so overconfident about their human reasoning abilities, which the outside view suggests are pretty pathetic.)

But I also believe that having a higher sanity waterline is a good thing. With a specific person, sabotaging their rationality to exploit them may sometimes bring me more utilons than cooperating with them. But what about the population as a whole? I enjoy having a higher standard of living, I enjoy having the internet, I enjoy being able to hear different opinions and not having to follow religious leaders. I would enjoy it even more if driverless cars became commonplace, if medicine could make us live better and longer, and if psychology could help significantly beyond the placebo effect. All these things require some general standard of rationality. -- We often complain how low that level is, so for the sake of fairness I would like to note that it could be much lower. Imagine a society where every problem is solved by asking a local shaman, and a typical answer is that the problem was caused by a witch, and you must kill the witch to fix it. And if you somehow step out of line, you become the best candidate for a witch. Some humans live like this, too. -- If all the money and energy spent on horoscopes during the last century had been spent on medicine instead, maybe the average lifespan would now be 100 years, with 150 rather likely for those who take care to exercise and avoid sugar. Think about all the other improvements we could get if only people became more rational. (We would get some new harmful things, too.)

I agree that even if I feel that people should become more rational, trying to correct them is probably not the best way, and quite often it does more harm than good. (I mean harm to the person who wastes their time trying to correct others. Waste of time, and frustration.) I used to spend a lot of time correcting people online. Finding LessWrong helped me a lot; now that I know there is one website where people can discuss rationally, the existence of the others feels less painful. It also helped to realize that inferential distances are too big to be overcome by a comment in a discussion. In many situations I feel certain that I know better than other people, but I have updated my estimate of the chance of fixing their reasoning to near epsilon. (Unless the other person specifically asks to be fixed, which almost never happens.) Writing a blog or starting a local rationalist group would be better. (I just need to overcome my akrasia.)

So, instead of doing stupid stuff that feels good, if we agree that having more rationalists on this planet is a good idea, what next? I know that CFAR is doing workshops for a few dozen participants. The LessWrong blog is here, available to everyone. That is already pretty awesome, but it is unlikely to be the best thing that could be done. What else could have a higher impact?

My ideas from five minutes of thinking -- write a book about rationality (books have higher status than blogs, and can be read by people who don't procrastinate online); create a "LessWrong for dummies" website (obviously with a different name) explaining the uncontroversial LW/CFAR topics to the public in a simplified form. Actually, we could start with the website and then publish it as a book. But it needs a lot of time and talent. Alternative idea: do something to impress the general population and make rationality more fashionable (moderate use of Dark Arts allowed); for example, organize a discussion about rationality at a university with rationalists who also happen to be millionaires (or otherwise high status), with minicamp-style exercises for participants as a follow-up. Requires the rationalist celebrities and someone to run the exercises.

If all the money and energy spent on horoscopes during the last century had been spent on medicine instead, maybe the average lifespan would now be 100 years, with 150 rather likely for those who take care to exercise and avoid sugar.

I don't think enough has been spent on horoscopes to do that much good. On the other hand, if people gave up on lotteries, that might have some impact.

I agree that figuring out how to teach rationality to people with average intelligence is an important goal, even if "Thinking Clearly for Dummies" is an amusing title.

What else could have a higher impact?

One idea is to produce entertainment with rationalist themes (books, movies, TV shows). Methods of Rationality is a good start, but much more could be done here. Not sure if anyone's working on stuff like this, though. Hopefully a past or future workshop participant will get on it.

"More rationalists" seems like slightly the wrong way to think about the goal here, which to me should really be to increase the power of the rationalist community. This doesn't imply going after large gains in numbers so much as going after powerful people. For example, in a conversation I had at the workshop it was pointed out to me that some governmental agency that determines funding for medical research doesn't classify aging as a disease, so researchers can't get certain kinds of funds for aging research (I may be misremembering the details here but that was the gist of it). The easiest way I can think of to fix this problem is for the rationalist community to have friends in politics. I don't know if that's currently the case.

So we should focus on increasing our social skills, with the specific goal of befriending influential people and influencing politics. Without officially becoming politicians ourselves, because that messes with one's brain. Unless we consciously decide to sacrifice a few of us.

Can we agree on which political goals would be desirable? Funding for aging research seems like a good candidate. (Even a libertarian opposed to any kind of taxation and government spending could agree that, assuming the government already takes the money and spends it anyway, it is better if the money is spent on aging research than on research into rare diseases of cute puppies.) Opposing obvious stupidities could be another thing. Unfortunately, no politician can become popular by doing nothing but opposing obvious stupidities, although I personally would love to see more such politicians.

Then we would need a proper protocol for sacrificing rationalists to politics. A rationalist who becomes a politician could fix a lot of things, but inevitably they would stop being a rationalist. I guess it is impossible to keep a functioning mind... and even if by some miracle one could do it, they could not discuss their opinions and goals on LW openly anyway.

Actually, the LW community could easily ruin a rational politician's career by asking them questions where an honest answer means political suicide, but a less-than-honest answer is easily disproved by the community. Imagine a Prisoner's Dilemma among politicians, where two politicians agree to support each other's ideas for mutual benefit. Each of them dislikes the other's idea, but considers the world with both ideas better than the world with neither. But for the plan to work, both politicians must pretend to support both ideas wholeheartedly. And now the LW community would openly ask the former rationalist politician about the other idea, and present their own speculations about the motives; saying "please, for greater utility, let's not discuss this" would probably have the opposite effect.

So there would need to be some firewall between the politician and the community. For example, the politician discusses things with the community only in specific articles, where only explicitly allowed topics may be discussed. (You could send the politician a PM suggesting a new topic, but you would be forbidden to say publicly that you did so.)

So we should focus on increasing our social skills, with the specific goal of befriending influential people and influencing politics. Without officially becoming politicians ourselves, because that messes with one's brain. Unless we consciously decide to sacrifice a few of us.

I cannot determine whether this is presented ironically.

Completely seriously.

Politics is the mindkiller. But if rational people refuse to participate in politics, then all policy will be decided by irrational people, which is not good.

As the linked article says, Bayesians should not lose against Barbarians. Rationalists should win, not invent clever rationalizations for losing. We should one-box in Newcomb's Problem, instead of complaining that the choice is unfair to our preconceptions of rationality.

I don't want to ever hear this: "Eliezer told me that politics is the mindkiller, so I refused to participate in politics, and now my children learn mandatory religion and creationism at school, cryonics and polyamory are illegal, the AI research is focused on creating a supermachine that believes in god and democracy... and it all sucks, but my duty as a rationalist was to avoid politics, and I followed my duty."

So what is the solution?

Learn to influence politics while protecting yourself from most of the mindkilling. If that turns out to be impossible or very ineffective, then select a group of people who will use their rationality to become skilled politicians and shape society towards greater utility, even if they lose their rationality in the process... and be prepared to deal with that loss. Be prepared for the moment when you have to say to the given person "we don't consider you rational anymore" or even "supporting you now would make the world worse". The idea is that the person should make the world better (compared with someone else getting the office) before this happens. We should evaluate carefully how likely that is for the specific person; perhaps make some preparations to increase the likelihood.

It is also perhaps useful to distinguish between "talk about politics in unfocused gatherings with large undifferentiated groups of people," "talk about politics in focused gatherings with selected groups of people," and "take steps to affect policy." It might turn out that there are good reasons to avoid politics in the first case while not avoiding it all in the latter two.

It's probably not so much the mandatory tribalism that makes people apathetic about working in politics, but more like the thing in this Moldbug quote via patrissimo:

You're trying to replace Windows with Linux. Great.

Your way of replacing Windows with Linux: install Linux as a set of Word macros, one macro at a time. (You'd need something like Emscripten for Word macro.) Oh, also - Linux doesn't exist. So you're actually building Linux as a set of Word macros, one macro at a time. Oh, and you have no distribution mechanism. Your users need to type in the macros themselves.

Are the Word users fed up with Word? Oh, man. They've had it up to here with Word. So what?

Tech-minded people want to solve problems. They look at politics and see a lifetime of staring at countless problems while stuck in a system that will let them solve almost none of them and being barraged with an endless stream of trivial annoyances.

Wouldn't it be easier to use your rationality to amass huge amounts of wealth, then simply buy whatever politicians you need, just like other rich people do?

I don't know how much control rich people really have over politicians.

When someone becomes a successful politician, they have means to get money. The more money they have, the more it costs to buy them. And they probably get different offers from different rich people, sometimes wanting them to do contradictory things, so they can choose to accept bribes compatible with their own opinions.

Also I suspect that when you have enough money, more money becomes meaningless, and the real currency is the power to influence each other. For example, if you already have 10 billion dollars, instead of another 50 billion dollars you would prefer a "friend" who can get you out of jail free if that ever becomes necessary. So maybe above some level, you have to hold an office yourself to be able to provide something valuable to others who hold an office.

But if having enough money really is enough to improve the world, then certainly, we should do that.

Well, firstly, you don't need to buy a whole politician (though it doesn't hurt); you only need to buy the legislation you need. Thus you don't care how your politician votes on gay marriage or veterans' benefits or whatever, as long as he is voting for Bill #1234567.8, which you sponsored, and which deals with protecting squirrel habitats (because you really like squirrels, just for example). This is good, because it's not enough to have just one politician, you need a bunch of them, and it's cheaper to buy their votes piecemeal.

Secondly, you are of course correct about politicians getting money from different sources, but hey, that's the free market for you. On the other hand, politicians aren't really all that rich. Sure, they may be millionaires, and a few might be billionaires, but the $50e9 figure that you mentioned would be unimaginable to any of them. If you really had that much money (and were smart about using it), you would be able to buy not merely a single politician but entire committees, wholesale.

[jimmy]

What matters is that this urge seems to be hardware, and it probably has nothing to do with actual truth or your strategic concerns.

You're wrong dammit! Er... :P

In all seriousness though, it's absolutely software. I have really toned down my drive to correct people and have taken big chunks out of it for others as well.

It comes from trying to enforce "shoulds". People shouldn't be so "irrational"! Gah! Can't they see how costly it is!? I can't even imagine why they'd be so stuuuupid! When you're so focused on how "not-okay" it all is, you tend to knee jerk into fighting it. No time for planning - it must stop NOW!

When you put that aside and acknowledge it as something that is, then you can get into it and figure out why it is and why it grinds your gears. Okay, so they are irrational. I really wish they weren't, who wouldn't? And yet they're irrational. Why is that? Okay, so they're irrational because of X. Why X? Okay, so of course X, and so of course they're irrational. What's so infuriating now? Uh, nothing, I guess. I just have to go change X or get on with my life.

When you put the judgement aside and really dig into why it is the way it is (and associate into it), you come out of it with a new clarity of understanding, no "urges" to resist, and much more effective ways of improving people's rationality with your newfound empathy.

It's really really cool stuff.

Nyan, how about we have a conversation on Skype sometime and you can write part 2 of this afterwards ;)

[anonymous]

It comes from trying to enforce "shoulds".

Plausible. Seems like hardware to me, but introspection is a rather crappy lens...

I have really toned down my drive to correct people and have taken big chunks out of it for others as well.

...

Totally. My attitude with irrational people now is a deliberate "whatevs, not my problem". Of course things still rustle my jimmies. I just try to keep it contained.

I was describing parts of me that seem to be mostly under control, but I'd noticed they were not under control in some other aspiring rationalists I know.

Nyan, how about we have a conversation on Skype sometime and you can write part 2 of this afterwards ;)

The skype sounds wonderful, but I dunno about part 2. This one was just something I want to get off my plate as fast as possible. There's way better stuff in the works than memetic tribalism part 2.

I'll PM you if you haven't already.

Plausible. Seems like hardware to me, but introspection is a rather crappy lens...

I'm curious, what exactly are you picking up on that makes it feel like hardware? How would it feel if it were software?

I'm guessing it's the fact that feeling the urge feels like it's outside your locus of control so that it feels like something "your brain" is doing and not something "you" are doing, and that you can't see any underlying reasoning. Is that close?

The skype sounds wonderful, but I dunno about part 2. This one was just something I want to get off my plate as fast as possible. There's way better stuff in the works than memetic tribalism part 2.

Aww, bummer. I was hoping I could work my magic on you and have you post on LW about how it's totally software and that this is how to change your thinking. That way I wouldn't have to write it myself and it'd be more easily believable :p

[Shmi]

How do you guys define hardware vs software as applied to brain functions?

Not in any rigorous or clean way. I basically just mean that it's "software like" in that it is changeable with interventions like "talking" rather than only interventions like "ice pick".

[Shmi]

You really ought to write a sequence based on your blog, unless you don't care about people being wrong :)

I'm interested in the idea and feel like I should share more with the LW crowd. I do still value other people's instrumental rationality, even though irrationality is less irksome these days :P

I'm not sure how to cross the inferential distance. I'm starting to lose track of what it's like to not know everything I've learned in the past couple years, and the feedback I'm getting on my blog is that it's pretty hard to follow. I'm not sure exactly what I'd have to say to explain it to the average LWer.

I'm busy enough and out-of-touch-with-my-target-audience enough that I can't really crank out a sequence on my own right now, however, if you (or anyone else) want to collaborate, I'd be very interested in doing that.

[Shmi]

I'm busy enough and out-of-touch-with-my-target-audience enough that I can't really crank out a sequence on my own right now, however, if you (or anyone else) want to collaborate, I'd be very interested in doing that.

Well, I am probably not the most suitable collaborator, but I'm certainly fascinated enough by what you are doing to give it a try, if you feel like it. Feel free to email me at this nick at gmail, if you like.

In all seriousness though, it's absolutely software. I have really toned down my drive to correct people and have taken big chunks out of it for others as well.

It's not like the hardware is unchangeable.

I'm now quite skeptical that my urge to correct reflects an actual opportunity to win by improving someone's thinking,

Shouldn't you be applying this logic to your own motivations to be a rationalist as well? "Oh, so you've found this blog on the internet and now you know the real truth? Now you can think better than other people?" You can see how it can look from the outside. What would the implication for yourself be?

[anonymous]

This comment is thoroughly discouraging to me as it pokes at some open wounds that I'm working on.

Therefore I'm quitting LW for Lent.

Kick my ass if I'm back before April Fools' Day.

(except for meetup posts).

Well, shit. Now I feel bad, I liked your recent posts.

[anonymous]

I'll make a quick exception to cut off your anxiety.

Don't feel bad, I need a break from LW; your comment and Lent just gave me a good excuse. I'm still writing though. I'll be back in April with a good stack of posts.

We should measure our winning, somehow, and see whether reading LW increases it.

Sure, this answer just brings a new set of questions. Such as: what exactly should we measure? If we use something as an approximation, what if it becomes a lost purpose? If we change our method of measuring later, what if we are just rationalizing conveniently? (We can create an illusion of infinite growth just by measuring two complementary values X and Y, always focusing on the one which grows at the given moment.)

I would say that a person reading LW for longer time should be able to list specific improvements in their life. Improvements visible from outside; that is, what they do differently, not how they think or speak differently. That is the difference from the outside. If there is no such improvement, that would suggest it is time to stop reading; or at least stop reading the general discussion, and focus on stuff like Group Rationality Diary.

(My personal excuse is that reading LW reduces the time spent reading other websites. Debates on other websites suddenly feel silly. And the improvement is that reading other websites often made me angry, but reading LW does not mess with my emotions. -- I wish I could say something better, but even this is better than nothing. Of course it does not explain why reading LW would be better than abstaining from internet. Except that abstaining from internet seems unlikely; if I stopped reading LW, I would probably return to the websites I used to read previously.)

We could measure the change in several objective metrics, such as:

  • Annual income (higher is better)
  • Time spent at work to achieve the same or greater level of income (lower is better, though of course this does not apply if one's work is also one's hobby)
  • Weight to Height ratio (closer to doctor-recommended values is better)
  • Number of non-preventative doctor visits per year (lower is better)
  • Number of scientific articles accepted for publication in major peer-reviewed journals (higher is better)
  • Number of satisfactory romantic relationships (higher is arguably better, unless one is not interested in romance at all)
  • Hours spent browsing the Web per week, excluding time spent on research, reading documentation, etc. (lower is better)

The list is not exhaustive, obviously.

Well put! Have some internet status points!

I agree that trying to turn people around you into rationalists is probably a bad idea. First, it is dangerous to be half a rationalist, so if you get started doing this you're committing to either finishing it or potentially harming the person you're trying to turn. Second, most people are not doing interesting enough things for rationality to be all that useful to them; cached thoughts are more than enough for living a cached life. Probably the only people worth trying to turn are people with something to protect.

If your goal is to spend more time around rationalists, you could just... spend more time around rationalists. Moving to Berkeley is not a bad idea.

It is a lot less dangerous to be half a rationalist if you don't think you're completely a rationalist.

Does that still hold up if they believe that they don't think they're completely rationalist, but fail to notice that they haven't propagated and updated on this at all? My first guess would be that this is only more dangerous.

[anonymous]

This thread is confusing me. Can we be specific with some examples?

What does half a rationalist look like and what specific example can we come up with to demonstrate their precarious position?

A non-rationalist lectures me about conspiracy theories and secret soviet alchemy research and cold fusion.

A half-rationalist acts on those beliefs and invests all their money in an alchemy quack project.

[TimS]

What does half a rationalist look like and what specific example can we come up with to demonstrate their precarious position?

For example, a half-rationalist understands bias but doesn't internalize it in his thinking. Thus, he might say to a political opponent "Your beliefs about the world have been corrupted by your mindkilled commitment to particular political outcomes" - without realizing that his own beliefs were also corrupted by mindkiller issues.

In short, knowing about bias lets one deploy concepts that effectively act as fully general counter-arguments. This can occur without improving the quality of one's own beliefs, and can even produce an unjustified increase in confidence, because one falsely believes one has avoided a bias.

Are there any full rationalists, by this definition?

No, but keep in mind Fallacy of Grey considerations here. Plausibly humans may range all the way from a fifth of a rationalist to a quarter of one.

I don't really consider "rationalist" to mean "a person who is rational," but rather "a person who studies in the methods of rationality." My question was intended to demonstrate the silliness in breaking rationalists up into fractional classes by pointing out that there's no actual reference class to compare them to.

More rational, less rational, yes. More of a rationalist, less of a rationalist, no. The idea is as silly to me as "half a biologist." A rationalist is a qualitative, not quantitative, descriptor.

[Edited to eliminate some redundant redundancy.]

[anonymous]

Someone should write a near-mode description of a full rationalist (besides Harry James Potter-Evans-Verres, who is more intelligent than rational, IMO).

The short answer is that none of us are anywhere near a full human rationalist.

A half-rationalist who believes they're not a complete rationalist holds off on investing in the alchemy project, and attempts to figure out whether their mind is doing everything correctly.

Preferably, a good half-rationalist is prioritizing becoming a complete one before having to make any important decisions or ending up with a corrupt anti-epistemological belief network. They would be aware that they can't fully trust their brain.

A half-rationalist who believes they believe the above, but doesn't actually update on this...

Well, let's just say my brain isn't quite focusing the picture all too clearly, and I can't tell what would happen in this set of examples. My question in the grandparent was mostly equivalent to "Is this last example going to be worse than the one who invests it all in alchemy, or slightly better? My first guess is it's going to go even more horribly wrong."

[CCC]

I agree that trying to turn people around you into rationalists is probably a bad idea.

I'm going to disagree with that sentiment. In general, the people around me have goals that are compatible with mine, so helping them to be more effective at accomplishing their goals helps me as well. (If nothing else, it makes them less likely to sabotage me by accident.) Secondly, if a person recognises that I have been helpful, then they are more likely to be helpful in return; purposely sabotaging someone's rationality could lead to enmity if they find out, which would be bad.

Under normal social circumstances, I no longer attempt to correct another person's belief by telling them how it is wrong and stating mine. If somebody makes a statement of questionable accuracy, I ask questions to determine how they came to the conclusion. This not only forces the person to consciously justify themselves and perhaps change their mind on their own, but also allows me to collect potential good arguments against my contrary belief. Conversations in general become more interesting and less hostile while following this protocol.

For some reason "correcting" people's reasoning was important enough in the ancestral environment to be special-cased in motivation hardware.

It feels instinctual to you and many others alive today including myself, but I'm not sure that's evidence enough that it was common in the ancestral environment. Isn't "people are not supposed to disagree with each other on factual matters because anything worth knowing is common knowledge in the ancestral environment" also an ev-psych proposition?

[anonymous]

It feels instinctual to you and many others alive today including myself, but I'm not sure that's evidence enough that it was common in the ancestral environment.

Do you mean that it could just be a learned thing from today's culture? Or that it is a side-effect of some other adaptation?

Isn't "people are not supposed to disagree with each other on factual matters because anything worth knowing is common knowledge in the ancestral environment" also an ev-pysch proposition?

Yes I suppose it is. Is this a proposed source of the frustration-with-unorthodoxy instinct?

Whatever the cause is, "What matters is that this urge seems to be hardware, and it probably has nothing to do with actual truth or your strategic concerns."

I'm thinking it would best be described as "cultural". Some level of taboo against correcting others unless you're in a socially-approved position to do so (teacher, elder, etc.) is, to my understanding, fairly common among humans, even if it's weaker in our society and time. I brought up the common knowledge thing just because it seems to contradict the idea that a strong urge to correct others could have been particularly adaptive.

I think it's a selection effect on the kind of people who wind up on LW.

[anonymous]

I brought up the common knowledge thing just because it seems to contradict the idea that a strong urge to correct others could have been particularly adaptive.

Not all beliefs would be direct common knowledge.

People still had gods and group identities and fashions to disagree about.

On the other hand, I actually can't see any quick reason it would be adaptive either.

I am surprised to see a lack of attention to parallels between empathy in a tribal setting and conscientiousness in an individual tribesman in discussions here. It is a scary setting we find ourselves in when we all can be agreeable through sites like Kiva and Kickstarter, neurotic through sites like 4chan and reddit, extroverted through services like Facebook and flickr, open through services like Google; and yet there is not one place that translates in a tribal way to experiencing any form of conscientiousness online today.

I am driven to understand this to mean that there is a very limited scope of conscientiousness in the physical world at large today. Is this lack of accountability and self-discipline not what is being addressed when we thumb up a post like this? I want these parallels to be discussed here, but my wherewithal in forums such as this is limited.

Where can we go to share, develop, and experience a sense of Jiminy Cricket today in either world actual or the nets? If there is not a place left where this sense of organized social self is available in a rich format, I can then only encourage people to continue to try and change others' ways of thinking, if only to be sure conscientiousness' utility survives our tribes' rapid-fire cultural shifts to one day prove more effective and less invasive than the current orthodoxy nyan describes.

and yet there is not one place that translates in a tribal way to experiencing any form of conscientiousness online today.

How about Beeminder? I haven't used it, so I'm not sure how public (if at all) are the commitments one makes there. If everything one does there is between oneself and Beeminder only, maybe there's scope for starting up a communal version. Although I can imagine the sort of flame wars that could develop if one person is making a public commitment that another person thinks evil. It would probably have to be specialised to communities having some general commonality of goals.

I just wrote a suggestion to Beeminder authors to add group goals. Two options:

Competitive mode: All people make the same commitment. The only difference from individual commitments is that all lines are displayed on the same graph. (So in addition to avoiding losing, you have a motivation to get more points than your friends.) If a person loses, the others continue. If only one player who hasn't lost remains, they get the option to cancel the commitment after one week (because the motivation of competition is gone, and maybe the rest of the group wants to start the game again).

Cooperative mode: A commitment fulfilled by the group together; a sum of individual contributions must reach a given limit. The system does not care about who did what, they either win as a group, or lose as a group. However, the individual contributions are displayed, so they can translate to group status.

This could translate conscientiousness into a group status, thereby encourage conscientiousness. (Well, assuming that no one cheats when entering data, etc.)

The group goals could work also without the Beeminder system. Just make a graph where all group members can report their progress, without any time limits. In the cooperative mode, the game ends when the sum of contributions reaches a specified value. In competitive mode, when the first player reaches the specified value, others get an additional week to keep up or lose.

(In a competitive mode, players could specify handicaps, for example that one point by person A is equal to two points by person B. Maybe it would make some sense in a cooperative mode, too.)
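To make the two modes concrete, here is a minimal sketch of the bookkeeping they would need. This is a hypothetical illustration of the proposal above, not Beeminder's actual API; all names and numbers are made up.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class GroupGoal:
    """Hypothetical model of a shared commitment in the two proposed modes."""
    target: float                  # points each member (competitive) or the whole group (cooperative) must reach
    cooperative: bool              # True: sum of contributions counts; False: each member races to the target
    contributions: Dict[str, float] = field(default_factory=dict)  # member -> self-reported points

    def report(self, member: str, points: float) -> None:
        """Record a member's self-reported progress (honest data entry assumed, as noted above)."""
        self.contributions[member] = self.contributions.get(member, 0.0) + points

    def finished(self) -> bool:
        if self.cooperative:
            # Cooperative mode: the group wins or loses together on the summed total.
            return sum(self.contributions.values()) >= self.target
        # Competitive mode: the goal ends once the first member crosses the line;
        # the others would then get the extra week to keep up or lose.
        return any(points >= self.target for points in self.contributions.values())

# Example: a cooperative goal of 100 points shared by two people.
goal = GroupGoal(target=100, cooperative=True)
goal.report("Alice", 60)
goal.report("Bob", 45)
print(goal.finished())  # True, because 60 + 45 >= 100
```

Individual contributions stay visible either way, so the translation into group status described above still works; handicaps would just be a per-member multiplier applied when reporting.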

For what specific reason did your search process rank cooperation over predation?

I don't think it's really relevant to your post, but I've actually experimented both ways, and there are a lot of predators out there. You just tend not to see them on LessWrong, because our community has a high Save Against Troll bonus :)

[Shmi]

For some reason "correcting" people's reasoning was important enough in the ancestral environment to be special-cased in motivation hardware.

I agree that this is a "natural" urge, but only in-tribe. It can be conditioned to be arbitrarily weak, for example by changing how much we care about these people. In other words, if you can imagine that their being wrong does not affect your well-being or the well-being of your group, you usually don't have nearly as much urge to correct them. If you imagine that their errors are to your benefit, you will want them to stay wrong.

[anonymous]

Oh yeah, it can definitely be turned off, which is the desired consequence of this post.

using methods of mass instruction like writing blog posts, administering a meetup, or launching a whole rationality movement is a lot more effective than arguing with your mom

Might be useful/instructive to think of actual best strategies for how your average person can increase the world's rationality. ("Start a rationality movement" is taken, writing blog posts doesn't actually expand the audience of rationalists unless people link to them from elsewhere, and maybe your local meetup already has an administrator.) I think this post has some good ideas, but I am a little worried that people will stop telling their friends about LW because of it, or something.

One idea might be to integrate rational thinking into your everyday small talk? I'd guess there'd be less resistance then than if you were trying to win an argument.

I think a main reason why I try to correct friends' thought patterns is practice. With friends I get a certain amount of wiggle room: if I accidentally say something that insults them, or turns them off of rationality, or would cause some form of social friction, they would be inclined to tell me before it got between us. I can learn what I did wrong and don't have to keep bothering the same friend to the point of it actually hampering our friendship.

Lessons learned from this can be used to correct someone's thought patterns when it is much more imperative for you to do so, as in cases where

your ability to accomplish your goals directly depends on their rationality.

and allows you to teach people with whom social conflict would be very costly, since they are typically people you have to cooperate with a lot.

Well, you've convinced me that I shouldn't correct people for their own sakes (most of the time), but you haven't convinced me that I shouldn't do it for the hypothesised ev-psych reasons like status gaining, group signalling, and just generally getting bonus hedons if I can convince someone of something.

(And the reason I wouldn't want to convince people of silly things as much is that I get fewer hedons that way, and generally think that, ceteris paribus, it's easier to convince someone of something closer to the truth than of something further away from it.)

Maybe it's a good idea, even if there is a possible bias supporting it. Reversed stupidity is not intelligence.

Italics mine. I am somewhat tired at the moment, but it was (and is) not immediately apparent to me why the italic'd phrase was used in that particular place. Bringing this up as a data point re: clarity.

[anonymous]

changed it:

Maybe it's a good idea, even if there is a possible bias supporting it. We can't win just by reversing our biases; reversed stupidity is not intelligence.

Is that clearer?

Thanks for the feedback!

That seems better. No problem. :)