A model of approaches to truth-seeking
There are two kinds of approaches to truth-seeking that people tend to have. Most people in Western society seem to sit somewhere between the two. Rationalists aim to be on the side of reason. I am trying hard to be, too.

The intuition-based system can lead to a wider range of possible worldviews, as it does not aim to produce worldviews that are isomorphic to one objective reality. In the following, I list exemplary, stereotypical views of the intuition-based approach to illustrate my points.

Each system is a meta-system of the other, in the sense that each can ‘explain away’ the truth and importance of the other’s conclusions.

Both are internally consistent.

| Systems | Reason | Intuition |
| --- | --- | --- |
| Definition of truth | Truth is when the map reflects the territory (i.e. correspondence of your system of claims with reality). | Truth is that which feels true. Importance is that which feels important. Good is that which feels good. (etc.) |
| Axioms and tools of truth-seeking | Reason, logic, empirical observation, language | Emotions |
| Explanation for the other system’s inferiority | Feeling is flawed for truth-seeking because our feelings are shaped by Darwinian, evolutionary pressure to maximize fitness, not to maximize map-territory correspondence. | Thinking is flawed for truth-seeking because it feels false to use empirical observation and logical deduction/induction when they are not accompanied by feelings of truth/falseness. These days, science is completely detached from feeling and thus does not produce any truths. |
| Free will? | Malformed question | There is free will. |
| Concepts of meaning, beauty, good | Meaning, beauty, and good are subjective, human concepts that you will fail to explicitly locate on the territory and will thus fail to map. | There is meaning, beauty, and good in the world. |
| Explanation for the other system sometimes being successful | Sometimes, though rarely, what people feel to be true actually turns out to be true because of good instincts, chance, etc., but if so, often for the wrong reasons (at least explicitly). | Because many people have been raised to value the methods of science, empiricism, and logic, science sometimes produces truth, as its findings will feel true to people. If so, they feel it to be true, but for the wrong reasons (at least explicitly). |
| Utility of the systems | The conclusions of science produce real-world outcomes (technology, etc.), proving the utility of science. | The conclusions of feeling-based systems about the structure of reality, the meaning of things, and everything else feel important, true, and good, and are thus highly useful. |
| Utility of the systems: technology and the utility of technology | Feeling-based systems fail to show correspondence with empirical observations of the territory. They especially fail to produce novel insights into the territory that can be translated into technologies or courses of action useful for further advancing progress (i.e. science itself) or, arguably, any other end. | The conclusions of science often feel false (e.g. quantum physics), and its technologies often feel mundane and unimportant (when contrasted with the eternal, transcendent experiences we can find in spirituality or drugs). The technology it produces feels useless and is at most good for doing more (useless) science (‘progress’). |
| Utility of the systems: feelings of truth, good, beauty | Even if your goal is to maximize the feeling of truth, goodness, and beauty, the science-based method will be better at allowing you to do this (e.g. evolutionary psychology, wire-heading technology). | This feels false; the choice to sit in an experience machine could never be good, as it feels like the experience would only be ‘fake’. |
| Utility of the systems: reproductive fitness | Eventually, natural selection will prove who has the more useful system when it comes to surviving. | This feels false; besides, Darwinian fitness feels unimportant. (And even if true, the ‘souls’ of our experience will live on forever.) |
| Resilience of thinking-based systems to transformative experiences | People turning into hippies after too much LSD, traumatic experiences, or brain injury only proves the inherent irrationality of the human animal and the fragility of the hard-earned rationality that we have been privileged (by nature and nurture) to attain. We can only try our hardest to avoid the damage that such experiences can inflict on our neural machinery (by abstaining from drugs, wearing helmets, and looking after our mental health) and hope never to lose this gift. | If you go beyond the shallowness of science and reason and are sufficiently exposed to the truth, you will understand. Look at all the people whose lives were changed and who found truth, meaning, and beauty after truly deep experiences such as ancient drugs, traumatic experiences, or near-death experiences. |

Questions and Hypotheses

Question I: Am I right that both systems are internally consistent? 

Question II: What would you say or do to someone in either of the two systems to convince them to switch sides?

Hypothesis I: If you are sufficiently deep in one of these two local optima, you’ll likely never switch sides without extraordinary, transformative experiences.

My guess is that having sufficiently transformative experiences, e.g. sufficient exposure to psychedelics, will make most ‘rational’ humans switch sides. It is probably harder to leave the valley of the feeling-based optimum than the ‘rational’ one. I am not sure there is anything you can say or do to rescue someone beyond the ‘event horizon’ of the feeling-based local optimum.

Hypothesis II: The ‘rational system’ is actually a sub-system of the feeling system because you’ll only find it compelling if its core assumptions feel true - e.g. taking empirical evidence and reason as the axioms of truth-seeking feels true because we have been (implicitly) taught to do so by growing up in a specific bubble of society (through upbringing, formal education, exposure to academia/rationality).

Hypothesis III: Eventually, natural selection would mean that feeling-based systems (or the species/civilizations that least evolve away from feeling towards thinking) die out as their form of truth-seeking has less relevance for fitness due to being more detached from actual reality. However, this argument carries no substance or importance when viewed from within the feeling-based optimum.

Question III: Besides natural selection, is there any other universal argument to be made for the 'superiority' of one of the two valleys?  

Comments

Good topics for thinking about, but I think this misses a LOT of nuance about how and when to use each system, and how they interact. This is related to, or perhaps the same as, what Kahneman calls "system 2" (thinking, slower and careful) and "system 1" (intuiting, fast and computationally cheap). It hasn't done well in the replication crisis, but https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow remains a good high-level abstraction of these ideas.

Fundamentally, humans have neither enough data nor anywhere near enough compute capability to use reason for very much of our understanding and decisions. We can focus our attention on various things for some time, and that's very useful for aligning with reality. It's also one very good way to hone your intuitions - examining a topic logically absolutely changes the way you'll react in the future. And the reverse is true as well - your intuitions are going to drive the topics you look into, and how deeply. These things are so deeply entwined that it's perhaps better to think of them as simultaneous parallel modes of modeling, rather than strict alternatives.

Answers to your questions:

Q1. In fact, neither system is internally consistent. Reasoning is susceptible to partitioning errors, where you focus on some things in one domain and other things in other domains, and come up with incompatible results. Intuition is even more susceptible, as it's difficult even to identify the partitioning or conflicts. Note that consistency does not imply correctness, but correctness does imply consistency. Assuming there IS a consistent objective reality, making any predictive system better at "truth" will make it more consistent.

Q2. I don't think these are sides. They are complementary modeling systems in your brain. Use them together, and work to strengthen them both. On actual topics of belief, behavior, or policy, rationality and reason are FAR easier to communicate, and are really your primary channel. Of course, intuition is a stronger motivator for many people on some topics, so if you are willing to manipulate rather than cooperate in truth-seeking, it can be more effective.

H1-H3. I don't know how to test any of that, and it doesn't match my model that they're both just part of human belief-formation and decision-making.  I do suspect that fast/cheap processing is going to have a TON of evolutionary support for a long long time, and the domains where slow/detailed processing is worth the time and energy will be a minority.

Q3. They are each valuable for different things, and failing to use them together is strictly worse than having either one dominate.  Intuition integrates far more data, and makes faster decisions.  Reasoning only uses a tiny subset of available data (those parts that you can identify and focus on), and is extremely slow and often draining.

I agree with you: both reason and intuition are used, and are very useful, in day-to-day decision-making, where in many cases it is simply not efficient to ‘compute’. This makes intuition more efficient and interaction between the two modes necessary, which blurs the line between the two sides.

However, I intended this post to consider the sides specifically in non-routine matters such as thinking about ontology, metaphysics, or fundamental ethical/moral questions. (I see how I should have made this context more explicit.) I think that in that context, the sides do become more apparent and distinct.

In that sense, when I talk about the consistency of a system, I mean 'a system's ability to fully justify its own use', for which 'self-justifying' might actually be a better term than consistency.

I think this also implies that once you're beyond the 'event horizon' and fully committed to one system (while talking about fundamental, non-routine topics), you will not have any reason to leave it for the other one, even if confronted with the best self-justifying reasons of the other system! 

It applies even more strongly to those topics - intuition is the root of truth for these things, and is the guide to what's worth exploring rationally. In fact, it's not clear how those topics are tied to reality in the first place without intuition.

To me, "intuition" means using the parts of my brain that are not open to introspection. Emotion is how the conclusion made by some other part of my brain is communicated to my awareness.

Instead of comparing whether reason or intuition "is better", let's frame it as a question about their specific strengths and weaknesses. The greatest strength of intuition is noticing things I otherwise would not have noticed. (Noticing that I am confused. Noticing that I feel afraid, despite having no obvious reason for that.)

The argument in favor of intuition would be like this: "Thinking is limited to processing evidence I can express verbally. But compared to all the evidence that is available to my brain, that's merely the tip of the iceberg. Under most circumstances, this is okay, because generally either both point in a similar direction, or the intuition points in a certain direction and reason does not care either way. Two major exceptions are: (a) new situations that require an immediate response, such as 'you almost stepped on a snake'; and (b) adversarial situations where someone provides you lots of verbal arguments, filtered to make you act in a certain way, persuading you to do something irreversible. In both cases, the idea is that by the time your thinking would reach the right conclusion, it would already be too late."

(And then of course, reason also has its strengths, but we all already know that.)

Most importantly, reason vs. intuition is not a zero-sum game. For example, you can improve your intuition in a certain area by getting more experience in it; that does not diminish your reason in any way.


Am I right that both systems are internally consistent?

Reason has an elephant in the room: as soon as you define truth as correspondence to reality, you place it outside empiricism, because degrees of correspondence can't be measured. This has led to a split between different styles of reasoning -- reason is more than one thing -- the empirical, instrumentalist style and the aprioristic style (rationalism as traditionally defined). There need not be just two self-consistent systems, because there could be any number.

Eventually, natural selection would mean that feeling-based systems (or the species/civilizations that least evolve away from feeling towards thinking) die out as their form of truth-seeking has less relevance for fitness due to being more detached from actual reality.

Why didn't that happen a long time ago? The empirical, pragmatic approach is good enough for purposes of survival. You don't need to know ultimate metaphysical truths in order to survive. If you eat a plant, and it makes you sick, don't eat it again...But you don't need to know what a plant is ontologically. Our ancestors survived, so they must have been doing that.

So maybe there aren't two isolated bubbles. Maybe it's like a bell curve, where most people are in a big bulge in the middle, using a mixture of unambitious, pragmatic reasoning and non-extreme intuition. Then the extreme rationalists are one tail, and the full-on mystics are the other.

The ‘rational system’ is actually a sub-system of the feeling system because you’ll only find it compelling if its core assumptions feel true—e.g. taking empirical evidence and reason as the axioms of truth-seeking feels true because we have been (implicitly) taught to do so by growing up in a specific bubble of society (through upbringing, formal education, exposure to academia/rationality).

Yes, there's another elephant, which is whether it is even possible for extreme rationalists to eschew intuitions entirely.

Issues that are sufficiently deep, or which cut across cultural boundaries, run into a problem where not only do the parties disagree about the object-level issue, they also disagree about underlying questions of what constitutes truth, proof, evidence, etc. "Satan created the fossils to mislead people" is an example of one side rejecting the other side's evidence as even being evidence. It's a silly example, but there are much more robust ones.

Aumann's theorem explicitly assumes that the two debaters agree on their priors, including on what counts as evidence. The Bay Area rationalist version implicitly assumes it. Real life is not so convenient.
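
For reference, here is a compact paraphrase of the theorem being invoked (my wording, not the commenter's; the common-prior assumption is exactly the one said to fail in real life):

```latex
% Paraphrase of Aumann (1976), "Agreeing to Disagree".
% P is the agents' common prior -- the assumption at issue above;
% \mathcal{I}_1, \mathcal{I}_2 are the agents' private information.
\[
  q_i = P(A \mid \mathcal{I}_i), \qquad i \in \{1, 2\}
\]
\[
  \text{If } q_1 \text{ and } q_2 \text{ are common knowledge, then } q_1 = q_2.
\]
```

Without the shared prior P, the theorem simply does not apply, which is the point being made here.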

Can't you just agree on an epistemology, and then resolve the object level issue? No, because it takes an epistemology to come to conclusions about epistemology. Two parties with epistemological differences at the object level will also have them at the meta level.

Once this problem, sometimes called "the epistemological circle" or "the problem of the criterion", is understood, it will be seen that the ability to agree on or settle issues is the exception, not the norm. The tendency to agree, where it is apparent, does not show that anyone has escaped the epistemological circle, since it can also be explained by culture giving people shared beliefs. Only the convergence of agents who are out of contact with each other is strong evidence for objectivism.

Philosophers appeal to intuitions because they can't see how to avoid them... whatever a line of thought grounds out in is, by definition, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven't seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for their foundational assumptions.

Scientists are typically taught that the basic principles of maths, logic, and empiricism are their foundations, and they take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods... somehow. Their subculture encourages using basic principles to move forward, not turning backwards to critically reflect on the validity of those principles. That does not mean the foundational principles are not "there". Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like one, in the sense that it consists of problems that have been open for a long time and which do not have straightforward empirical solutions.

Why didn't that happen a long time ago?

Because our intuitions have been shaped by Darwinian forces to, as you say, work great in the ancestral environment and still work well enough in today's society. 

What happens if we consider the long-term future, though? 
Structuring society or civilizations in a way that is ‘moral’ in any common sense of the word is meaningful only from the intuition perspective. E.g. a society that aims to abolish suffering and maximize good qualia does so because it feels right/meaningful/good to do so, but you cannot prove by reason alone that this is objectively good/meaningful.

Now contrast this with a hypothetical society whose decision-making is based on the tail end of ‘reason’. They would realize that our subjective moral intuitions have been shaped by evolutionary, Darwinian forces, i.e. to maximize reproductive fitness, and that there might not be any objective ‘morality’ to be located on the territory of reality that they are mapping. They might then reason about possible ways to structure society and advance civilization, and see that if they keep the Darwinian way of optimizing for fitness (rather than for morality or good qualia), they will in expectation outlive any civilization optimizing for anything else.

This assumes that it is even possible to effectively optimize fitness through deliberate consideration and ‘do the work for evolution’ without the invisible hand of natural selection. However, even if that is not possible in a deliberate, planned way, natural selection would lead to the same outcome: societies with the greatest fitness, rather than those with the most happy people or the most morality, would survive the longest.
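
As a toy illustration of this selection dynamic, here is a minimal sketch (my own construction, not part of the original discussion; the population size, the 5% fitness cost, and the number of generations are all illustrative assumptions). Under fitness-proportional reproduction, the lineages that optimize fitness directly are expected to take over:

```python
import random

# Toy model with illustrative assumptions: a population of "civilizations",
# half optimizing reproductive fitness directly, half optimizing something
# else (e.g. happy qualia) at a small per-generation fitness cost.

def simulate(generations=200, n=1000, cost=0.05, seed=0):
    rng = random.Random(seed)
    # True = fitness-optimizer, False = qualia-optimizer
    pop = [True] * (n // 2) + [False] * (n // 2)
    for _ in range(generations):
        # Fitness-proportional reproduction: qualia-optimizers pay `cost`.
        weights = [1.0 if is_fitness_optimizer else 1.0 - cost
                   for is_fitness_optimizer in pop]
        pop = rng.choices(pop, weights=weights, k=n)
    return sum(pop) / n  # final share of fitness-optimizers

print(f"Share of fitness-optimizers after selection: {simulate():.2f}")
# Typically prints a share near 1.00: the fitness-optimizers fixate.
```

Nothing in this sketch says the outcome is good or meaningful; it only illustrates the differential-survival claim.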

I suspect the dichotomy may be slightly misapportioned here, because I sometimes find that ideas which are presented on the right side end up intersecting back with the logical extremes of methods from the left side. For example, the extent to which I push my own rationality practice is effectively what has convinced me that there's a lot of ecological validity to classical free will. The conclusion that self-directed cognitive modification has no limits, which implies conceptually unbounded internal authority, is not something that I would imagine one could come to just by feeling it out; in fact, it seems to me like most non-rationalists would find this highly unintuitive. On the other hand, most non-rationalists do assume free will for much less solid reasons. So how does your formulation account for a crossover or "full circle" effect like this?

On a related note, I'm curious whether LWers generally believe that rationality can be extended to arbitrary levels of optimization by pure intent, or whether there are cases where one cannot be perfectly rational given the available information, no matter how much effort is given. I place myself in the former camp.