This post is going in Discussion until I get it edited enough that I feel like it's post-worthy, or until it does well.


Core Post:

Rationality has helped me do a lot of things in the past year: getting elected President of my robotics team, getting a girlfriend, writing good college apps (and getting into a bunch of good schools), and so on. I feel somewhat guilty for not helping other people use it.

I had made progress on a lot of those fronts before, but a bunch of things fell into place in a relatively short period after I started deliberately trying to optimize them. Some of my friends have problems that look fairly easy to solve, but unsolicited, risky, counterintuitive advice is both uncouth and unhelpful.

More pressingly, I want to pass on a lot of rationality knowledge to people I know before I graduate high school. Being in a fairly good Math/Science/Computer Science Magnet Program, I have access to a lot of smart, driven people with a lot of flexibility in their lives, and it would be a shame to graduate without telling them things that could help them do a lot better. On top of that, I want to pass this knowledge on within my robotics team so that they continue doing well.

Basically, I want to learn how to explain useful rationality concepts to other people effectively and without being annoying. As far as I can tell, many people want to do similar things and find it difficult.

I suspect that this topic is broad enough that it would be hard for a single person to tackle in one post. So that people don't need enough material for an entire post (which would be awesome, by the way) before they talk about it, here's a thread to respond to.

I'd particularly like to encourage people who have successfully bridged inferential distances to reply with where people started and how the conversation went. Please. An example:

In my Origins of Science class (basically a philosophy class), a conversation like this took place a few days ago (paraphrased). I'm not sure where the other people in the class started, but it got them to the point of understanding that in your thoughts you model reality, that beliefs are supposed to reflect reality, and that you can't just make things up entirely.

W: "I feel like if people want to think God exists, then God exists for them, but if they want to ignore him then he won't."

me: "But that's not how existing works. In our thoughts and opinions, we make a map of how the world exists. But the map is not the territory."

W: "But it will still seem real to you..."

me: "Like, you can put whatever you want in your map like dragons or whatever, but that doesn't actually put dragons in the territory. And now its a failure of your map to reflect the territory, not of the territory to reflect your map"

I could have said the last part better, but I definitely remember saying the last sentence.

The map vs. territory example seems to be really effective; a few people complimented it (and I admitted that I had read it somewhere else). I'm not sure how much it propagates into other beliefs; I'll update later with how much it seems to affect later conversations in the class.

Questions:
What basic rationality ideas are the most helpful to the most people?

Would it be helpful to try to categorize where people are inferentially? Is it possible?

Observations:

  • Inferential distance is a big deal; hence the first part of the title. I was able to explain transhumanism to someone in three minutes and have them totally agree. Other people don't even accept the possibility of AI, let alone that morality can exist without God.
  • It's much easier to convince people who know and like you.
  • There's a difference between getting someone to ostensibly agree with something, and getting it to propagate through their beliefs.
  • People remember rationality best when they benefit from learning it, and it applies to what they're specifically trying to do.
  • It's difficult to give someone specific advice and have them pick up on the thought process that you used to come up with it.
  • Atheists seem to be pretty inferentially close to Singularity-cluster ideas.
  • From an earlier post I got a bunch of helpful feedback, particularly from comments by Nornagest and TheOtherDave. The short versions:
    • Asking people to do specific things is creepy; teaching someone is much more effective if you just tell them the facts and let them do whatever they want with them.
    • People need specifics to actually do something, and it's hard to make them decide to do something substantially different from what they're already doing.
  • And from a comment by David Gerard: people need to want to learn or do something; it's hard to push them into it.
  • A lot of people are already doing useful things (research, building businesses), so it might be more helpful to make a bunch of them better at what they do than to get a few of them to do something entirely different.

Comments:

I'd definitely recommend teaching holding off on proposing solutions as soon as they have the basic background knowledge to understand it. Maps and territory, as you mentioned above, is also a good foundational topic.

  • I use holding off on solutions many times each day, whenever I'm thinking about any of life's little puzzles. It's one of the most useful lessons I've ever learned.
  • Making beliefs pay rent is something I would teach very early on.
  • Mysterious answers to mysterious questions
  • The general idea of reductionism. The world, and most problems, can be broken down into smaller and smaller parts, which is often a useful problem solving tool.
  • Another very useful concept is positive bias. Teaching your brain to look for counterexamples as well as examples is an extremely important tool for determining the truth.

I think these, in general, are some of the most important topics to teach if you want people to start becoming rationalists. In terms of how to teach them, I would say that encouraging curiosity and supporting a questioning mindset is fundamental. I also learned most of the techniques of rationality in terms of the problem I was working on at the time: I'd read something on Less Wrong or in a book and see an immediate, specific application for the general technique I'd just learned. If you're teaching people in a robotics club, I'd say you shouldn't necessarily make a syllabus or anything like that; just wait until you see them working on something where a certain lesson in rationality might be applicable.

On a complete side note, in your introduction you mentioned that you'd used rationality to get a girlfriend. I'm actually planning to ask out a girl I know in the next day or two, so that caught my attention. I'm curious what you did and how you went about it.

Thanks for the input; good suggestions on starting points. Particularly positive bias: I remembered what it was but had forgotten how important it is.

What background knowledge do you think is necessary for holding off on proposing solutions? Everyone I've explained it to understood it without any problems, though I didn't bring cognitive biases into the explanation.

So about the girlfriend... A large part of the results came from just asking her out. Being in high school, I had a habit of "liking" someone for a long time, maybe telling her, but then never doing anything in particular about it. Rationality made me notice that that's pretty much guaranteed to result in nothing. So you're already a good part of the way there.

As for actually asking someone out, I've found that unless the other person is already interested in you, opening the conversation with the question doesn't really work. When she's in a good mood (I normally gauge this by laughing, and I'm assuming you know the difference between "haha, you're hilarious and awesome" laughing and "eww/awkward" laughing), she's more likely to say yes. Saying things in the wrong part of the conversation can cause awkwardness.

Also, don't act particularly tense about it.

People on my robotics team seemed to understand holding off on proposing solutions pretty quickly, to the point of near-universal agreement almost immediately. It might be because engineers are used to coming up with solutions based on problems.

They also agreed that testing the robot (read: empirical verification) was important before it was even brought up.