Comments

In order to justify discarding the Noble Lie, your Third Option needs to be mutually exclusive with the Noble Lie; otherwise you're discarding a utilitarian gain for nothing. If your five minutes of wild thought brings up ideas that only work if you also discard the Noble Lie, that's probably motivated reasoning as well.

I don't see upvote or downvote buttons anywhere. Did LW remove this feature, or is it something that's only happening to me specifically?

Edit: Also, the sort function is static in most but not all comment sections for me. I'm running Chrome.

"Intelligence is an emergent phenomenon!" means that intelligence didn't happen on purpose, or that intelligence doesn't need to be intentional in order to happen. Emergence as a term doesn't add a reason for a thing, but it does rule some out.

The benefit of morality comes from the fact that brains are slow to come up with new ideas but quick to recall stored generalizations. If you can make useful rules and accurate generalizations by taking your time and considering hypotheticals ahead of time, then your behavior when you don't have time to be thoughtful will be based on what you want it to be based on, instead of on television and things you've seen other monkeys doing.
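That "stored generalizations" point is essentially caching. Here's a minimal sketch of the analogy in Python; the function names and the half-second "thinking" delay are invented purely for illustration:

```python
# Analogy only: a moral rule as a cached answer. Deriving the right
# response from scratch is slow; recalling a precomputed generalization
# is fast.
from functools import lru_cache
import time

def derive_response_from_scratch(situation: str) -> str:
    """Stand-in for slow, deliberate ethical reasoning."""
    time.sleep(0.5)  # simulate the cost of thinking it through in the moment
    return f"considered response to {situation!r}"

@lru_cache(maxsize=None)
def cached_rule(situation: str) -> str:
    """The same reasoning, worked out once and stored as a rule."""
    return derive_response_from_scratch(situation)

# Work the rule out in advance, while there is time to be thoughtful...
cached_rule("stranger asks for help")

# ...so that under time pressure the answer is an instant lookup,
# not whatever default behavior happens to be lying around.
start = time.perf_counter()
cached_rule("stranger asks for help")
print(f"lookup took {time.perf_counter() - start:.4f}s")
```

The design point is just that the expensive reasoning happens once, ahead of time, and the moment of decision becomes a lookup.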

Objective morality is a trick that people who devise cooperation-dependent moralities play on people who can't be bothered to come up with their own codes. If I learned, suddenly and definitively, that nothing is moral and nothing is right, I wouldn't change anything, except to be more secretive about my own morality in order to keep everyone else from finding out that they don't need to follow theirs.

Sorry to answer a 5-year-old post, but apparently people read these things. You asked "Why should rationalists necessarily care a lot about other people?" but all the post said was that they should be able to.

If parental health plays a role in this I would be interested in seeing if there's a correlation between parental vaccination and obesity.

Why aren't "rationalists" surrounded by a visible aura of formidability? Why aren't they found at the top level of every elite selected on any basis that has anything to do with thought? Why do most "rationalists" just seem like ordinary people, perhaps of moderately above-average intelligence, with one more hobbyhorse to ride?

I'm relatively new to rationality, but I've been a nihilist for nearly a decade. Since I started taking the development of my own morality seriously, I've put about 3500 hours of work into developing and strengthening my ethical framework. Looking back at myself when nihilism was just a hobbyhorse, I wasn't noticeably moral, and I certainly wasn't happy. I was a guy who knew things, but the things I knew never got put into practice. Five years later, I'm a completely different person than I was when I started. I've made a few discoveries, but not nearly enough to account for the radical shifts in my behavior. My behavior is different because I practice.

I know a few other nihilists. They post pictures of Nietzsche on Facebook, come up with clever arguments against religion, and have read "The Anti-Christ." They aren't more moral just because they subscribe to an ethos that requires them to develop their own morality, and from that evidence I can assume that rationalists won't be more rational just because they subscribe to an ethos that demands they think more rationally. Changing your mind requires more than just reading smart things and agreeing with them. It requires practice.

In the spirit of put up or shut up, I'm going to make a prediction. My prediction is that if we keep track of how often we use a rationalist technique in the real world, we will find that frequency of use correlates with the frequency at which we visualize and act out using that technique. Once we start quantifying frequency of use, we'll be able to better understand how rationalism impacts our ability to reach our goals. Until we differentiate between enthusiasts and practitioners, we might as well be tracking whether liking a clever article on Facebook correlates with success.
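To make "quantifying frequency of use" concrete, here is a minimal sketch in Python with made-up numbers (it assumes Python 3.10+ for statistics.correlation): log how often each technique was rehearsed and how often it was actually used, then check whether the two columns track each other.

```python
# A minimal sketch of the proposed tracking, with invented numbers.
# Each entry is (times a technique was rehearsed/visualized this week,
# times it was actually used in the real world). The prediction is
# that the two columns correlate.
from statistics import correlation  # Python 3.10+

practice_log = [
    # (rehearsals, real-world uses) -- hypothetical data
    (0, 1),
    (2, 2),
    (5, 4),
    (7, 6),
    (10, 9),
]

rehearsed = [r for r, _ in practice_log]
used = [u for _, u in practice_log]

print(f"Pearson correlation: {correlation(rehearsed, used):.2f}")
```

A sustained positive correlation would support the prediction; a flat one would suggest that rehearsal isn't what drives real-world use.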

I noticed that I was confused by your dragon analogy. 1) Why did this guy believe in this dragon when there was absolutely no evidence that it existed? 2) Why do I find the analogy so satisfying when its premise is so absurd?

Observation 1) Religious people have evidence:

The thing about religion is that a given religion's effects on people tend to be predictable. When Christians tell you to accept Jesus into your heart, some of the less effective missionaries talk about heaven, but the better ones talk about positive changes to their emotional states. Often, they will imply that those positive life changes will happen for you if you join, and as a prediction that tends to be a very good one.

As a rationalist, I know the emotional benefits of paying attention when something nice happens, and I recognize that feeling gratitude boosts my altruism. I know I can get high on hypoxia if I ever want to see visions or speak in tongues. I know that spending at least an hour every week building ethical responses into my cached behavior is a good practice for keeping positive people in my life. I recognize the historical edifice of morality that allowed us to build the society we currently live in. This whole suite of tools is built into religion, and the means of achieving the benefits it provides is non-obvious enough that a mystical explanation makes sense. Questioning those beliefs without that additional knowledge means you lose access to the benefits of the beliefs.

Observation 2) We expect people to discard falsifiable parts of their beliefs without discarding all of that belief.

The dragon analogy is nice and uncomplicated. There are no benefits to believing in the dragon, so the person in the analogy can make no predictions with it. I've never seen that happen in the real world. Usually religious people have tested their beliefs and found that the predictions they've made come true. The fact that those beliefs can't predict things in certain areas doesn't change the fact that they do work in others, and most people don't expect generality from their beliefs. When that guy says that the dragon is permeable to flour, that isn't him making an excuse for the lack of a dragon. That's him indicating a section of reality where he doesn't use the dragon to inform his decisions. Religious people don't apply their belief in their dragon in categories where believing has not provided them with positive results. Disproved hypotheses don't disprove the belief; they disprove the belief for that category of experience. And that's pretty normal. The fact that I don't know everything, and the fact that I can be right about some things and wrong about others, mean that I pretty much have to be categorizing my knowledge.

Thinking about this article has led me to the conclusion that "belief in belief" is more accurately visualized as compartmentalization of belief, that it's common to everyone, and that it indicates a belief of mine is providing the right answer for the wrong reasons. I predict that if I train myself to say out loud "this belief is not fully general" whenever I catch myself expecting the world to behave strangely so as not to violate my hypothesis, I will find that more often than not the statement is correct.

Check out Erfworld. It starts off as a webcomic, moves to narrative for the parts that are best done in a narrative style, and then jumps back to webcomic format for battles. It hits all of those marks you mention, as well as a standard that I hold personally. I think that rationalist fiction is at its most compelling when it creates a world with new rules and sends the empiricist out to learn them. It's easier to show how to learn things empirically when there's still low-hanging fruit to be plucked, and there have to be people in the story who don't know things in order to show the advantages of knowledge.

One of the catchphrases that develops is, "We try things. Sometimes they even work."