Abram Demski and Grognor

Much of rationality is pattern-matching. An article on LessWrong might point out a thing to look for; noticing that thing changes your reasoning in some way. This essay is a list of things to look for. These things are all associated, but the reader should take care not to lump them together. Each dichotomy is distinct, and although the brain will tend to abstract them into some sort of correlated yin/yang mush, in reality they have a more complicated structure; some of them are similar, but try, if possible, to focus on the complex interrelationships.

  1. Map vs. Territory
    1. Eliezer’s sequences use this as a jumping-off point for discussion of rationality.
    2. Many thinking mistakes are map vs. territory confusions.
      1. A map and territory mistake is a mix-up of seeming vs being.
      2. Humans need frequent reminders that we are not omniscient.
  2. Cached Thoughts vs. Thinking
    1. This document is a list of cached thoughts.
  3. Clusters vs. Properties
    1. These words could be used in different ways, but the distinction I want to point at is that of labels we put on things vs actual differences in things.
    2. The mind projection fallacy is the fallacy of thinking a mental category (a “cluster”) is an actual property things have.
      1. If we see something as good for one reason, we are likely to attribute other good properties to it, as if it had inherent goodness. This is called the halo effect. (If we see something as bad and infer other bad properties as a result, it is referred to as the reverse-halo effect.)
    3. Categories are inference applicability heuristics; ruling X an instance of Y without expecting novel inferences is cargo cult classification.
  4. Syntax vs. Semantics
    1. The syntax is the physical instantiation of the map. The semantics is the way we are meant to read the map; that is, the intended relationship to the territory.
  5. Semantics vs. Pragmatics
    1. The semantics is the literal contents of a message, whereas the pragmatics is the intended result of conveying the message.
      1. An example of a message with no semantics and only pragmatics is a command, such as “Stop!”.
      2. Almost no messages lack pragmatics, and for good reason. However, if you seek truth in a discussion, it is important to foster a willingness to say things with less pragmatic baggage.
      3. Usually when we say things, we do so with some “point” which is beyond the semantics of our statement. The point is usually to build up or knock down some larger item of discussion. This is not inherently a bad thing, but has a failure mode where arguments are battles and statements are weapons, and the cleverer arguer wins.
    2. The meaning of a thing is the way you should be influenced by it.
  6. Object-level vs. Meta-level
    1. The difference between making a map and writing a book about map-making.
    2. A good meta-level theory helps get things right at the object level, but it is usually impossible to get things right at the meta level before you’ve made significant progress at the object level.
  7. Seeming vs. Being
    1. We can only deal with how things seem, not how they are. Yet, we must strive to deal with things as they are, not as they seem.
      1. This is yet another reminder that we are not omniscient.
    2. If we optimize too hard for things which seem good rather than things which are good, we will get things which seem very good but which may only be somewhat good, or even bad.
    3. The dangerous cases are the cases where you do not notice there is a distinction.
      1. This is why humans need constant reminders that we are not omniscient.
    4. We must take care to notice the difference between how things seem to seem, and how they actually seem.
  8. Signal vs. Noise
    1. Not all information is equal. It is often the case that we desire certain sorts of information and desire to ignore other sorts.
    2. In a technical setting, this has to do with the error rate present in a communication channel; imperfections in the channel will corrupt some bits, creating a need for redundancy in the message being sent.
    3. In a social setting, this is often used to refer to the amount of good information vs irrelevant information in a discussion. For example, letting a mediocre writer add material to a group blog might increase the absolute amount of good information, yet worsen the signal-to-noise ratio.
    4. Attention is a scarce resource; yes, everyone has something to teach you, but some people are much more efficient sources of wisdom than others.
  9. Selection Effects
    1. Filtered evidence.
      1. In many situations, if we can present evidence to a Bayesian agent without the agent knowing that we are being selective, we can convince the agent of anything we like. For example, if I want to convince you that smoking causes obesity, I could find many people who became obese after they started smoking.
      2. The solution to this is for the Bayesian agent to model where the information is coming from. If you know I am selecting people based on this criteria, then you will not take it as evidence of anything, because the evidence has been cherry-picked.
      3. Most of the information you receive is intensely filtered. Nothing comes to your attention with a good conscience.
    2. The silent evidence problem.
      1. Selection bias need not be the result of purposeful interference, as in cherry-picking. Often an unrelated process hides some of the evidence we need. For example, we hear far more about successful people than unsuccessful ones. It is tempting to look at successful people and attempt to draw conclusions about what it takes to be successful. This approach suffers from the silent evidence problem: we also need to look at the unsuccessful people and examine what is different about the two groups.
    3. Observer selection effects.
  10. What You Mean vs. What You Think You Mean
    1. Very often, people will say something and then that thing will be refuted. The common response to this is to claim you meant something slightly different, which is more easily defended.
      1. We often do this without noticing, making it dangerous for thinking. It is an automatic response generated by our brains, not a conscious decision to defend ourselves from being discredited. You do this far more often than you notice. The brain fills in a false memory of what you meant without asking for permission.
  11. What You Mean vs. What the Others Think You Mean
    1. The illusion of transparency.
    2. The double illusion of transparency.
    3. Wiio’s Laws
  12. What You Optimize vs. What You Think You Optimize
    1. Evolution optimizes for reproduction but in doing so creates animals with a variety of goals which are correlated with reproduction.
    2. Extrinsic motivation is weaker than intrinsic motivation.
    3. The people who value practice for its own sake do better than the people who only value being good at what they’re practicing.
    4. “Consequentialism is true, but virtue ethics is what works.”
  13. Stated Preferences vs. Revealed Preferences
    1. Revealed preferences are the preferences we can infer from your actions. These are usually different from your stated preferences.
      1. X is not about Y:
        1. Food isn’t about nutrition.
        2. Clothes aren’t about comfort.
        3. Bedrooms aren’t about sleep.
        4. Marriage isn’t about love.
        5. Talk isn’t about information.
        6. Laughter isn’t about humour.
        7. Charity isn’t about helping.
        8. Church isn’t about God.
        9. Art isn’t about insight.
        10. Medicine isn’t about health.
        11. Consulting isn’t about advice.
        12. School isn’t about learning.
        13. Research isn’t about progress.
        14. Politics isn’t about policy.
        15. Going meta isn’t about the object level.
        16. Language isn’t about communication.
        17. The rationality movement isn’t about epistemology.
      2. Everything is actually about signalling.
    2. Humans Are Not Automatically Strategic
      1. Never attribute to malice that which can be adequately explained by stupidity. The difference between stated preferences and revealed preferences does not indicate dishonest intent. We should expect the two to differ in the absence of a mechanism to align them.
      2. Hidden Motives vs. Innocent Failure
    3. People, ideas, and organizations respond to incentives.
      1. Evolution selects humans who have reproductively selfish behavioral tendencies, but prosocial and idealistic stated preferences.
        1. Near vs. Far
      2. Social forces select ideas for virality and comprehensibility as opposed to truth or even usefulness.
        1. Motte-and-bailey fallacy
      3. Organizations are by default bad at being strategic about their own survival, but the ones that survive are the ones you see.
  14. What You Achieve vs. What You Think You Achieve
    1. Most of the consequences of our actions are totally unknown to us.
    2. It is impossible to optimize without proper feedback.
  15. What You Optimize vs. What You Actually Achieve
    1. Consequentialism is more about expected consequences than actual consequences.
  16. What You Seem Like vs. What You Are
    1. You can try to imagine yourself from the outside, but no one has the full picture.
  17. What Other People Seem Like vs. What They Are
    1. When people assume that they understand others, they are wrong.
  18. What People Look Like vs. What They Think They Look Like
    1. People underestimate the gap between stated preferences and revealed preferences.
  19. What Your Brain Does vs. What You Think It Does
    1. You are running on corrupted hardware.
      1. The brain’s machinations are fundamentally social; it automatically does things like signal, save face, etc., which distort the truth.
    2. The reverse of stupidity is not intelligence.
      1. Knowing that you are running on corrupted hardware should cause skepticism about the outputs of your thought-processes. Yet, too much skepticism will cause you to stumble, particularly when fast thinking is needed.
        1. Producing a correct result plus justification is harder than producing only the correct result.
        2. Justifications are important, but the correct result is more important.
        3. Much of our apparent self-reflection is confabulation, generating plausible explanations after the brain spits out an answer.
        4. Example: doing quick mental math. If you are good at this, attempting to explicitly justify every step as you go would likely slow you down.
        5. Example: impressions formed over a long period of time. Wrong or right, it is unlikely that you can explicitly give all your reasons for the impression. Requiring your own beliefs to be justifiable would preempt impressions that require lots of experience and/or many non-obvious chains of subconscious inference.
        6. Impressions are not beliefs and they are always useful data.
  20. Clever Argument vs. Truth-seeking; The Bottom Line
    1. People believe what they want to believe.
      1. Believing X for some reason unrelated to X being true is referred to as motivated cognition.
      2. Giving a smart person more information and more methods of argument may actually make their beliefs less accurate, because you are giving them more tools to construct clever arguments for what they want to believe.
    2. Your actual reason for believing X determines how well your belief correlates with the truth.
      1. If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.
    3. If you believe true things when doing so improves your life, that is no credit to you at all. Everyone does that.
  21. Lumpers vs. Splitters
    1. A lumper is a thinker who attempts to fit things into overarching patterns. A splitter is a thinker who makes as many distinctions as possible, recognizing the importance of being specific and getting the details right.
    2. Specifically, some people want big Wikipedia and TVTropes articles that discuss many things, and others want smaller articles that discuss fewer things.
    3. This list of nuances is a lumper attempting to think more like a splitter.
  22. Fox vs. Hedgehog
    1. “A fox knows many things, but a hedgehog knows One Big Thing.” Closely related to a splitter, a fox is a thinker whose strength is in a broad array of knowledge. A hedgehog is a thinker who, in contrast, has one big idea and applies it everywhere.
    2. The fox mindset is better for making accurate judgements, according to Tetlock.
  23. Traps vs. Gardens
    1. Well-kept gardens die by pacifism.
      1. Conversations tend to slide toward contentious and useless topics.
      2. Societies tend to decay.
      3. Systems in general work poorly or not at all.
      4. Thermodynamic equilibrium is entropic.
      5. Without proper institutions being already in place, it takes large amounts of constant effort and vigilance to stay out of traps.
    2. From the outside of a broken Molochian system, it is easy to see how to fix it; from the inside, it cannot be fixed.
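The filtered-evidence point in section 9 can be illustrated with a small simulation. This is only a sketch with invented rates and population sizes: in a toy world where smoking and obesity are independent, a cherry-picking presenter can still produce a long list of smokers who became obese, and a naive agent that treats the shown cases as a random sample is badly misled, while an agent that models the selection process recovers the base rate.

```python
import random

random.seed(0)

# Toy world: smoking and obesity are generated independently,
# so in this world smoking genuinely tells you nothing about obesity.
population = [
    {"smokes": random.random() < 0.3, "obese": random.random() < 0.2}
    for _ in range(10_000)
]

# A cherry-picking presenter shows only smokers who became obese.
shown = [p for p in population if p["smokes"] and p["obese"]][:50]

# A naive agent treats the shown cases as a random sample of smokers.
naive_rate = sum(p["obese"] for p in shown) / len(shown)

# An agent that models the filter instead asks: among *all* smokers,
# what fraction are obese?
smokers = [p for p in population if p["smokes"]]
true_rate = sum(p["obese"] for p in smokers) / len(smokers)

print(f"obesity rate among presented cases: {naive_rate:.2f}")  # 1.00
print(f"obesity rate among all smokers:     {true_rate:.2f}")   # near the 0.20 base rate
```

The naive agent concludes that smoking all but guarantees obesity; conditioning on where the data came from, rather than on the data alone, dissolves the illusion.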

24 comments

Everything is actually about signalling.

Counterclaim: Not everything is actually about signalling.

Almost everything can be pressed into use as a signal in some way. You can conspicuously overpay for things to signal affluence or good taste or whatever. Or you can put excessive amounts of effort into something to signal commitment or the right stuff or whatever. That almost everything can be used as a signal does not mean that almost everything is being used primarily as a signal all of the time.

Signalling only makes sense in a social environment, so things that you would do or benefit from even if you were in a nonsocial environment are good candidates for things that are not primarily about signalling. Things like eating, wearing clothes, sleeping areas, medical attention and learning.

Some of the items from the list of X is not about Y:

"Food isn’t about nutrition. Clothes aren’t about comfort. Bedrooms aren’t about sleep. Laughter isn’t about humour. Charity isn’t about helping. Medicine isn’t about health. Consulting isn’t about advice. School isn’t about learning. Research isn’t about progress. Language isn’t about communication."

All these are primarily about something other than signalling. Yes they can be "about" signalling some of the time to varying degrees but not as their primary purpose. (At least not without becoming dysfunctional.)

People underestimate the gap between stated preferences and revealed preferences.

Everything is actually about signalling.

These two put together invite in me a sort of dysfunction. I have a stated preference for my stated preferences matching my revealed ones, i.e. genuine honesty over stated-preference-as-signalling. Yet it is highly likely that this stated preference itself is 1. inaccurate, and 2. signalling. And I treat both consistency and honesty as something like terminal values, so I find this situation unacceptable. That seems to leave me four options:

  1. Adjust my stated preferences to match my revealed ones. Abandon my ideas of what's good and right in favor of whatever the monkey brain likes.
  2. Rigidly adhere to my stated preferences, even when that leaves me unhappy due to not satisfying what (would have been) my revealed ones.
  3. Stop valuing intellectual integrity; accept hypocrisy and doublethink. Be happy.
  4. Morbidly reflect on how fucked I am.

All of these alternatives seem horrible to me!

The brain fills in a false memory of what you meant without asking for permission.

Reference? This terrifies me if true.


(2) and (4) are the correct approaches. "Revealed preferences" are, by and large, just the balance of the monkey-brain's incentives, and scarcely yield any useful information or ordering about the choice you were originally trying to make anyway. Throw them out. You're allowed to be stressed-out about how "inhuman" it feels to throw them out, but throw them the hell out! Your conscious self will thank you later.

You are also allowed to optimize your life for taking care of the monkey-brain's wants and needs without impacting the goals of the conscious self.

You are also allowed to deliberatively choose which desires and goals get classified as "monkey brain" and which ones as "the real me". After all, in truth, everything comes at least partially from the monkey-brain and everything goes, at least at the last step before action, through the conscious self. Any apparent "division" into "several people" is just your model of what your brain is doing. The real you can eat cookies, wear leather jackets, and have sex sometimes -- oy gevalt, being a good person does not mean being a robot.

I advise something between path 1 and path 2. You fool yourself, saying one thing and doing another; but you legitimately want to be consistent (because it is more convincing if you are). So, once you observe the inconsistency, you react to it. In the objectivist crowd, this has resulted in honesty about selfish behavior. In the LessWrong crowd, this has more often resulted in the dominance of the idealistic goals which previously served only as signalling.

Actually, in practice, 2 is fairly good signalling! It's a costly signal of commitment to altruism. This is basically the only reason the rationalist community can socially survive, I guess. :p

3 is also perfectly valid in some sense, although it's much further from the lesswrong aesthetic. But, see A Dialog On Doublethink. And remember the Litany of Gendlin.

4 is also a necessary step I think, to see the magnitude of the problem. :)

The brain fills in a false memory of what you meant without asking for permission.

Reference? This terrifies me if true.

Again: good terror, justified terror.

I don't have a reference, just an observation. I think if you observe you will see that this is true. It also fits with what we hear from stuff like The Apologist and the Revolutionary and prettyrational memes. It makes social sense that we would do this: the best way to fool others into thinking we meant X is to believe it ourselves. This helps us appear to win arguments (or at least save face with a less severe loss) and, even more importantly, helps us to appear to have the best of intentions behind our actions.

People who seem not to do it are mostly just more clever about it. However, the more everyone is aware of this, the less people can get away with it. If you want to climb out of the gutter, you have to get your friends interested in climbing out too -- or find friends who already are trying.

(Once you've convinced yourself it's worth doing!)

People who seem not to do it are mostly just more clever about it.

Hmm. This statement is troublesome because it falls into the category of "I expect you not to see evidence for X in case Y, so here's an excuse ahead of time!" type arguments.

And the rest of the paragraph is an argument that you should not only believe my claim, but convince your friends, too!

How convenient. :p

I would expect a witch to deny that they were signaling "not-witchness"

I would expect a witch to preemptively accuse herself so that no one else can gain status by doing so.

All of these alternatives seem horrible to me!

The good news is that there are others. Stated and "revealed" preferences don't come out of nowhere, take it or leave it, choose one or the other. I use the scare quotes because the very name "revealed preference" embeds into the vocabulary an assumption, a whole story, that the "revealed" preference is in fact a revelation of a deeper truth. Cue another riff on this.

No, call revealed preferences merely what they visibly are: your actions. When there is a conflict between what you (this is the impersonal "you") want to do and what you do, the thing to do is to find the roots of the conflict. What is actually happening when you do the thing you would not, and not the thing that you would?

Some will answer with this again, but real answers to questions about specific instances are not to be found in any story. Something happened when you acted the way you did not want to. There are techniques for getting at real answers to such questions, involving various processes of introspection and questioning ... which I'm not going to try to expound, as I don't think I can do the subject justice.

If rationality means winning, you should probably choose option 3.

Unless you have something to protect, in which case either 1 or 2 (probably 2) might serve you better.

Motte-and-bailey seems like it should be under What You Mean vs. What You Think You Mean

I agree that it makes sense there.

The reason I put it where it is: belief-edifice-memeplex-paradigm-framework-system-movement-whatevers have members who say different things. Some members say things that are more like a motte and others say things that are more like a bailey. Even if the individual members consistently claim one or the other, this looks suspiciously like a group responding to incentives by committing the fallacy.

Excellent post! This is some good old-fashioned rationality-type stuff right here.

One nanoquibble:

If you know I am selecting people based on this criteria

should be "criterion".

I find this outline helpful. I do however have a quibble.

If you believe X because you want to, any arguments you make for X, no matter how strong they sound, are devoid of informational content about X and should properly be ignored by a truth-seeker.

This seems slightly inaccurate. It would imply that a truth-seeking judge would decide cases just as well (or better) without hearing from the lawyers as with, because lawyers are paid to advocate for their clients. More accurate would be:

If you believe X because you want to, your belief in X is devoid of informational content about X and should properly be ignored by a truth-seeker.

If you believe X for reasons unrelated to X being true, your testimony becomes worthless because your belief in X is not correlated with X. But arguments for X are another matter.

Example: Alice says, "There is no largest prime number," and backs it up with an argument. You are now in possession of two pieces of evidence for Alice's claim C:

(1) Alice's argument. Call this "Argument." It is evidence in the sense that p(C|Argument) > p(C).

(2) Alice's own apparent belief that C. Call this "Alice." It is evidence in the sense that p(C|Alice) > p(C).

Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to. If the claim in the post is correct, then both items of evidence are zeroed out, such that:

(3) p(C) = p(C|Argument) = p(C|Alice)

Whereas the correct thing to do is to zero out "Alice" but not "Argument" thus:

(4) p(C|Alice) = p(C)

(5) p(C|Argument) > p(C)

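The distinction can be made numeric with a toy Bayesian update in odds form. All the numbers here are invented for illustration, and the `update` helper is just a sketch: each evidence source is treated as a likelihood ratio on the odds of C, and "zeroing out" a source means setting its ratio to 1.

```python
def update(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

prior = 0.5          # p(C) before hearing from Alice
lr_argument = 9.0    # the argument itself, checkable on its own merits
lr_testimony = 3.0   # Alice's apparent belief, if she were an unbiased source

# Before learning Alice is paid: both sources count.
p_both = update(update(prior, lr_argument), lr_testimony)

# After learning she is paid: her testimony is screened off (ratio -> 1),
# but the checkable argument still stands.
p_argument_only = update(prior, lr_argument)

print(round(p_both, 3))           # 0.964
print(round(p_argument_only, 3))  # 0.9
```

Zeroing out "Alice" corresponds to setting her likelihood ratio to 1 (an update that changes nothing), while the claim criticized in the post would amount to setting both ratios to 1 and returning to the prior.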

I think this is an interesting question. If the arguer is cherry-picking evidence, we should ignore that to a large degree. We are often even justified in updating in the opposite direction of a motivated argument. In the pure mathematical case, it doesn't matter anymore, so long as we are prepared to check the proof thoroughly. It seems to break down very quickly for any other situation, though.

In principle, the Bayesian answer is that we need to account for the filtering process when updating on filtered evidence. This collides with logical uncertainty when "evidence" includes logical/mathematical arguments. But there is a largely separate question of what we should do in practice when we encounter motivated arguments. It would be nice to have more tools for dealing with this!

Yes, this in an interesting issue. One unusual (at least, I have not seen anyone advocate it seriously elsewhere) perspective is that mentioned by Tyler Cowen here. The gist is that in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.

The gist is that in Bayesian terms, the fact that someone thought an issue was important enough to lie about is evidence that their claim is correct.

Or their position on the issue could be motivated by some other issue you don't even know is on their agenda.

Or...pretty much anything.

Hmmm. It's better evidence that they want you to believe the claim is correct.

For example, I might cherry-pick evidence to suggest that anyone who gives me $1 is significantly less likely to be killed by a crocodile. I don't believe that myself, but it is to my advantage that you believe it, because then I am likely to get $1.

Someone points out in the comments to that:

The Bayesian point only stands if the P(ClimateGate | AGW) > P(ClimateGate | ~AGW). That is the only way you can revise your prior upwards in light of ClimateGate

Now suppose you discover that Alice has been paid handsomely to make this statement, and that she would gladly have made the opposite claim had her boss wanted her to.

Are we to assume that Alice would have presented an equally convincing-sounding argument for the opposite side had that been her boss' demand, or would she just have asserted the statement "There is a largest prime number" without an accompanying argument?

Hmm... I am not sure. Because the value of her testimony (as distinguished from her argument) is null whichever side she supports, I am not sure the answer matters. But I could be wrong. Does it matter?

Well, I agree that the value of Alice's testimony is null. However, depending on the answer to my original question, the value of her argument may also become null. More specifically, if we assume that Alice would have made an argument of similar quality for the opposing side had it been requested of her by her boss, then her argument, like her testimony, is not dependent upon the truth condition of the statement "There is no largest prime number", but rather upon her boss' request. Assuming that Alice is a skilled enough arguer that you cannot easily distinguish any flaws in her argument, you would be wise to disregard her argument the moment you figure out that it was motivated by something other than truth.

Note that for a statement like "There is no largest prime number", Alice probably would not be able to construct a convincing argument both for and against, simply due to the fact that it's a fairly easy claim to prove as far as claims go. However, for a more ambiguous claim like "The education system in America is less effective than the education system is in China", it's very possible for Alice's argument to sound convincing and yet be motivated by something other than truth, e.g. perhaps Alice harbors heavily anti-American sentiments. In this case, Alice's argument can and should be ignored because it is not entangled with reality, but rather with Alice's own disposition.

This advice does not apply to those who happen to be logically omniscient.

Now you need to subdivide the nuances into categories of nuances, because that makes them easier to mentally manipulate and remember.