If you are already an atheist who does not believe in ghosts, what can you learn from rationality? I'd love to be wrong about lots of things, but my problem is that I think I'm right.

As far as I can tell, none of this reflective thinking has led to a deeper understanding of consciousness. (A subject I wish I weren't so interested in, because its study seems so futile.)

If you feel like it, please tell me about any particular instances where actively working on your own thought processes has led you to realize you were wrong about something (other than blatantly false things like those I mentioned above), or if the same program has led to any new understanding of consciousness.

Today I was late coming home, and realized that my aversion to asking people for reasonable things was preventing me from acquiring useful information about the fastest route home. So I went up and asked the person in the metro station for help, and I got it, finding out that my original plan depended on a bus that stopped running much earlier than I expected on Sundays.

Rejection Therapy FTW!

Basically, to the extent that there are useful things that other people do habitually and that are fairly simple for me to do, I can benefit from learning rationality.

And that's a lower bound.

I think I'm right.

Don't be so quick to assume you are correct or rational just because your beliefs fall within a particular cluster, no matter how accurate those beliefs seem at first. After all, one of the virtues of rationality is to continue to be curious even after you think you've found the answer. You might even be wrong about ghosts (although that seems pretty unlikely). The point is, not all of the things that "skeptics," or "traditional rationalists" as they are often called here, believe are necessarily true. In fact, some of their epistemic standards are incorrect or too weak. Continue to question what you believe--if you put forth a genuine effort, I think you'll find that you still have some lingering irrational beliefs that can be corrected.

I've gained an understanding of the motivations behind some of my behavior, and learned to detect and unravel the flattering narratives my brain creates to explain my actions. I've also learned how other people think. As well as being self-knowledge that I want for its own sake, learning how I and others think has made social interaction a lot easier.

If you are already an atheist that does not believe in ghosts, what can you learn from rationality?

You will know why you should be an atheist that does not believe in ghosts.

You'll also know under which circumstances you should stop being an atheist and start believing in ghosts.

You forgot to ask: what can we gain from irrationality?

Also, if you're willing to admit you don't understand consciousness without also claiming it must be some ineffable eternal mystery, you're already using an important rationalist skill.

I felt much as you do, but then I stumbled upon an upvoted comment by a LWer (I can't remember who, or in which thread) who recommended this essay.

It's Paul Graham's "What You Can't Say":

The Conformist Test

Let's start with a test: Do you have any opinions that you would be reluctant to express in front of a group of your peers?

If the answer is no, you might want to stop and think about that. If everything you believe is something you're supposed to believe, could that possibly be a coincidence? Odds are it isn't. Odds are you just think whatever you're told.

...

Trouble

What can't we say? One way to find these ideas is simply to look at things people do say, and get in trouble for. [2]

Of course, we're not just looking for things we can't say. We're looking for things we can't say that are true, or at least have enough chance of being true that the question should remain open. But many of the things people get in trouble for saying probably do make it over this second, lower threshold. No one gets in trouble for saying that 2 + 2 is 5, or that people in Pittsburgh are ten feet tall. Such obviously false statements might be treated as jokes, or at worst as evidence of insanity, but they are not likely to make anyone mad. The statements that make people mad are the ones they worry might be believed. I suspect the statements that make people maddest are those they worry might be true.

This sentence in particular helped me overcome inappropriate emotional reactions when thinking about things. It made it much easier to recognize a few well-sheltered, hidden beliefs which weren't paying rent or were free-floating. I've since changed some parts of my worldview considerably as a consequence.

Rationality is a method for answering questions, not an answer itself. If you don't have any pressing questions - in other words, you're happy and content - you may not see much use for it yet.

When I first finished reading the sequences, I thought, "Great! Now I'll go through my beliefs and fix all the stupid ones! Okay, what do I believe that's wrong?" My reply: "..." Obviously, it's not that simple - if I knew it was wrong, I wouldn't have believed it in the first place. I could have tried to reevaluate everything I believe from the ground up, but that sounded like a poor effort:reward task. I suspect you feel the same way.

So what am I getting out of Bayesian rationality, the study of biases, and the Less Wrong community?

  • A better understanding of my own motivations. For example: My job hunt post, Motivated Stopping.
  • A collection of effective life-hacks and a community dedicated to finding and sharing more. Examples: Learn from Textbooks, rejection therapy, Defeating Ugh fields.
  • A strategy for attacking questions that I really don't know the answer to. Examples: What can my parents do to take care of their surviving elders without totally sacrificing their financial and mental health? What can I do to help my autistic, college drop-out younger brother? What should my wife and I do about her house in Florida that's been on the market for nearly a year?

In addition to all that, I'm updating my beliefs in place. When I learn something that surprises me, I take a closer look at why I believe what I believe, looking for an unfounded assumption that led to the current error. That's what I suggest for you: don't expect what you've learned here to rewrite your entire worldview, but keep it handy for the next time life asks a Hard Question or throws you an utterly unanticipated datum.

One thing to gain from rationality, apart from what you mentioned, is a firmer understanding of (and real belief in) reductionism. This can apply to pretty much anything, and I think it contributed to a new understanding of consciousness for me. As for how it has led me to realize I was wrong about something, it's helped tons of times. Sometimes I'll find myself in an argument resisting what the other person has to say, and then I notice how silly that is and consider whether I need to update. I've changed my mind about a number of mostly trivial things in this manner. Rationality (and Less Wrong in particular) helped me understand that winning an argument by stubbornly remaining wrong isn't really winning, and that has benefited me greatly. It can also help us be more strategic, and I think I've become a bit better at planning. I know some of this might not answer your question exactly, but it's mainly a response to your title.

Rationality by itself is too far removed from your life to benefit from directly. The most direct application would be learning Bayes' Theorem, which tells you how to update on new evidence (a minimal sketch of such an update follows the list below). In fact, the very concept of 'updating' is very useful: being more rational means understanding when to change your mind, why, and how. But you get a lot from rationality when you apply it to particular fields and then see what those fields yield:

  • Applying rationality to the science of desire allows you to learn how to modify your desires, which is essentially modifying your very self. You can break your wants down, analyze what goals you are trying to fulfill, and come up with better ways to fulfill those goals. Purchase Fuzzies and Utilons Separately

  • Applying rationality to the science of emotions allows you to understand emotion in others (how it controls them) and, more importantly, to notice these emotions in yourself, understand what they communicate to you, and learn how to deal with them. The most important application is detecting Ugh fields.

  • Applying rationality to making decisions allows you to understand how to make the best decision, when you need more information, and how much that information is worth to you. (Example: consider how much saving 10 minutes on your trip to work would be worth to you. Would that be enough to justify moving closer to work, perhaps to a more expensive place? A back-of-the-envelope version of this calculation appears after the list.)

  • Applying rationality to biases allows you to understand various biases that almost everyone succumbs to, detect when they occur in your own life, and systematically correct them. (Example: overconfidence bias. It's well known in the computer science field that when you need to estimate how long a program will take to write, you make your best guess and multiply it by 2. You might need to make that kind of correction in a lot of the other estimates in your life.)

  • Applying rationality to beliefs allows you to know when you actually understand something, rather than just feeling like you do. You can make predictions, you know what evidence would falsify your beliefs, and you know where your beliefs come from. This means you are never stuck in a dead end, and your beliefs aren't floating in space.

  • Applying rationality to arguments allows you to argue with better results. You'll find the truth faster. You know when you need to give examples. You understand the concept of inferential distance and know when you need to unpack various concepts.
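
To make the "updating" point above concrete, here is a minimal sketch of a single Bayesian update. The function and the haunted-house numbers are made-up illustrations, not anyone's actual measurements.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given a prior P(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Toy example: H = "this house is haunted", E = "a strange noise at night".
# Strange noises are common in old houses whether or not ghosts exist,
# so the two likelihoods are close together and the update is small.
posterior = bayes_update(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.6)
print(round(posterior, 3))  # 0.015 -- barely moved from the 0.01 prior
```

The takeaway: evidence that is nearly as likely under "no ghosts" as under "ghosts" should barely move you, and the theorem quantifies "barely".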
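
And here is the back-of-the-envelope commute calculation promised in the decision-making bullet. Every number (value of an hour, minutes saved, rent difference) is an assumption chosen for illustration.

```python
# Rough "is moving closer to work worth it?" calculation.
# All inputs are illustrative assumptions.
hourly_value = 30.0      # dollars per hour you value your time at
minutes_saved = 10 * 2   # 10 minutes each way per workday
workdays_per_month = 21
extra_rent = 150.0       # extra monthly rent for the closer place

time_value = hourly_value * (minutes_saved / 60) * workdays_per_month
print(f"Commute time saved is worth about ${time_value:.0f}/month")  # ~$210
print("Worth moving." if time_value > extra_rent else "Stay put.")
```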

There is a lot, lot more; this is just a bit of what I got from the Anki cards I've made so far (based on information presented at the rationality minicamp). You only have to look at all the posts this community has created to see the benefits of being rational.

I wouldn't say consciousness is such a hard problem. Though getting people to accept that would probably require coding a sentient AI, since the supermajority thinks it's a really hard problem. "How an algorithm feels from inside" is the key post here, maybe. What sort of algorithm thinks it has free will?

I lack a definition of consciousness, any way to measure it (for example, to find that a cat is and a rock isn't) and any understanding of how it arises. I'm struggling to see why you would say it's not a hard problem without having answers to these three problems. (On the other hand, maybe you do? That would be wonderful)

Well, of course, since it's a human natural-language definition, it's fuzzy (fuzzy meaning overloaded with several meanings, since we tend to encounter them all at once), and something can fulfill some parts but not others. But a rock is definitely not conscious, because it doesn't have a mental object that it labels "itself" and that includes the system doing its computations, it can't examine or manipulate its thoughts, it's not roughly modelable as a human (not the most egalitarian part of the definition, but it's there, I think), and all that good stuff.

A cat probably does, probably can't, and is. So it's in the fuzzy zone of some people being able to go "look it's conscious, it clearly exhibits living thing social responses like pain and love, therefore it's conscious," and other people going "But cats can't do complicated things with their own thoughts because they're not capable of representing things with language, so they're not conscious." It's the Standard Definitional Dispute, caused not because consciousness is undefined but because it's overdefined.

In order to make an AI that humans accepted as conscious you would probably have to get past the fuzzy zone and fulfill lots of properties, maybe even making it roughly modelable as a human to satisfy people (emotion, a little selfishness, those sorts of things). It would understand language, be able to examine and manipulate its own thoughts, have a "me" mental object, and generally try to have a human-like mental structure.

So yeah, I can't measure consciousness, but I can measure the sub-definitions I know about and then say "this fulfills most of the conditions" or "this fulfills none of the conditions."
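
For what it's worth, here is a toy sketch of what "measure the sub-definitions" could look like as a checklist. The property list and the per-entity verdicts are my own illustrative assumptions, not a serious metric.

```python
# Score entities against sub-definitions of "conscious".
# Properties and verdicts are illustrative assumptions only.
PROPERTIES = ("has a 'me' mental object", "can examine its own thoughts",
              "uses language", "roughly modelable as a human")

ENTITIES = {
    "rock": {p: False for p in PROPERTIES},
    "cat": {"has a 'me' mental object": True,
            "can examine its own thoughts": False,
            "uses language": False,
            "roughly modelable as a human": True},
}

for name, verdicts in ENTITIES.items():
    score = sum(verdicts.values())  # count of fulfilled conditions
    print(f"{name}: fulfills {score}/{len(PROPERTIES)} conditions")
# rock: fulfills 0/4 conditions -> clearly not conscious
# cat: fulfills 2/4 conditions  -> the fuzzy zone
```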

EDIT: Also, I should note that actually building an AI with human emotion equivalents sounds like a bad idea, at least without a bigger AI to keep things safe.

getting people to accept that would probably require coding a sentient AI,

And finding a way to persuade the doubters that it isn't a zombie, as sci-fi AIs are standardly portrayed.

Well that's easy, you just don't make it out of dead flesh :D

This whole post of yours is so fucked up I don't even know where to begin. It has some fundamental errors in logic and English.

"If you are an atheist that does not believe in ghosts, what can you learn from rationality?" WTF?! What does being an atheist and/or not believing in ghosts have to do with learning from rationality? Is this your thesis, or is it an attention-grabbing statement?

"I'd love to be wrong about lots of things but my problem is, I think I'm right." Of course you think you're right! If you thought you were wrong, you wouldn't believe yourself. There are two clauses here that don't belong together: "I'd love to be wrong" and "I think I'm right". If your point is "I think I'm right," then there's no use in saying "I'd love to be wrong"; if your point is "I'd love to be wrong," then there's no use in saying "I think I'm right". Additionally, what is the point of this statement? Is THIS your thesis? Is it somehow related to the first statement? Why is it relevant?

"As far as I can tell, none of this reflective thinking has led to deeper understanding of consciousness." Are you trying to associate reflective thinking with learning from rationality? If so, you have to be very specific. Also, what is your definition of "consciousness"? Is it the Freudian definition? Is it from some kind of meditation lexicon? Define your terms; that's Philosophy 101! Another thing: are you conflating "reflective thinking" with "critical thinking"? I ask because, since it seems you want to discuss rationality, why don't you talk about critical thinking? Lastly, what does a deeper understanding of consciousness have to do with what we can gain from rationality? One subject is more subjective than the other.

"If you feel like it, please tell me about any particular instances where actively working on your own thought processes has led you to realize you were wrong about something... or if the same program led to any new understanding of consciousness." You have finally reached your fork in the road here. First you were discussing rationality, and now you're discussing personal reflection. Once again, you have not clearly defined your terms. What does "actively working on your own thought processes" mean? Are you suggesting I work on someone else's thought processes, or do you mean I should reflect on my own? It seems as if you started with critical thinking and ended in reflective thinking.

If you want an example, I'll give you one: I figured out I was wrong on my own when I asked myself, "Given what I already know, is there such a thing as free will?" I didn't do any research online; it was a totally internal journey, so to speak. I started by believing in free will; then, after asking myself the question and thinking about it for a few days, I arrived at the conclusion that I was wrong.

In conclusion, the main question, "What can we gain from rationality?", was never clearly answered or articulated in your post. You have MUCH more to learn about logic and English. I'm not saying you're stupid; I used to do the exact same things. You just need to recognize your mistakes from an objective perspective. You know what you're talking about (I hope), and I want you to get the point across without confusion. If you ever repost this same question and follow my advice, I'll give you a clear and concise answer.