I recently proposed to a friend that we apply Bayes' rule to making friends, to observe which strategies make friends faster. My friend's response was as follows (quoted to avoid strawmanning):

"You’re trying to use one single type of intelligence to analyze and find patterns in human behavior driven by other types of intelligence. It’s like if I was trying to use my musical intuition to understand math. They’re orthogonal domains. (...)

"Emotional logic is orthogonal to formal logic, I’ve said it for ages and I’ll keep saying it forever because I know beyond a doubt that it’s true. And in my opinion we can train our intelligence in various domains but we can’t always use one type of intelligence to make decisions related to another type of intelligence. Let’s say that the field of economics is 60% logical-mathematical, 20% interpersonal, ... , 0% musical-rhythmic and harmonic. Then those are the proportions in which those intelligences should be applied to that field"

I was not compelled by my friend's argument. Say that listening to someone makes a friend 80% of the time and talking at them makes a friend 20% of the time. Bayes' rule is still an efficient way to notice that pattern, even though an emotionally intelligent person might have guessed it.
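To make the point concrete, here is a minimal sketch of that kind of updating, assuming a simple Beta-Bernoulli model. The 80%/20% success rates are the illustrative numbers from above, and the strategy names and sample sizes are made up for the example:

```python
import random

random.seed(0)

# Hypothetical "true" success rates for two friend-making strategies,
# using the 80%/20% numbers from the post.
TRUE_RATES = {"listening": 0.8, "talking_at": 0.2}

# Beta(1, 1) = uniform prior over each strategy's success probability.
posteriors = {name: [1, 1] for name in TRUE_RATES}

for _ in range(50):  # 50 attempts per strategy
    for name, rate in TRUE_RATES.items():
        made_friend = random.random() < rate
        a, b = posteriors[name]
        # Conjugate Bayesian update: a success bumps alpha, a failure bumps beta.
        posteriors[name] = [a + made_friend, b + (not made_friend)]

for name, (a, b) in posteriors.items():
    print(f"{name}: posterior mean success rate = {a / (a + b):.2f}")
```

After a few dozen attempts the posterior means separate cleanly, which is the sense in which the pattern gets "noticed" even without any interpersonal intuition.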

I want a way to describe our ways of thinking. This great LW post uses the phrases Toolboxism and Single-Magisterium Bayes to describe the two ways of thinking (the Sequences are clearly in the Single-Magisterium Bayes camp). The problem is that my friends find Toolboxism an offensive descriptor. Any thoughts on a word they would prefer?

edit: added link


I too find your friend's statement uncompelling. There's no reason to limit yourself like that; even assuming that the multiple-intelligence premise is true, the correct solution is to apply 100% of your logical-mathematical and interpersonal abilities. You might want to practice the logical-mathematical part first, to achieve a better ROI, but that's not the same thing as saying you should apply 60% of your ability just because logical-mathematical contributes to 60% of the result.

Unfortunately people usually find it distasteful to apply S2 to things that they're used to using S1 for. I've found that S2 is associated with negative feelings of coldness, calculation, and inauthenticity, so I avoid talking about rationality (when memes like "love isn't rational" dominate, good luck bridging the inferential distance). Instead, I solve their problem myself, and frame the result in their language. Your friend would probably accept advice of the form "do X, don't do Y, here's why [touchy-feely explanation]".

Some examples, just off the top of my head:

  • Don't use words like S2, inferential distance, math, utility, pattern, strategy, intelligence, signalling, or efficiency. This is not an exhaustive list.
  • Don't bring up probabilities, odds, or frequencies.
  • In fact, don't mention any numbers. Numbers are an automatic fail unless your friend brings them up first, and even then, be careful not to take that as permission to go full-bore mathematician.
  • Equations count as numbers. So do theorems, proofs, and anything that even vaguely pattern-matches to mathematics.
  • Pretend the words Bayes and rationality are unspeakable curse words.
  • Any time you feel the urge to say "optimal", say "good" instead.
  • Don't accuse your friend of being stupid or toolboxing, no matter how dumb or crazy they get.
  • Replace S1 with gut or heart, and S2 with head.
  • Don't talk about near/far... in fact, if you read about it in the Sequences or on LessWrong, you'll probably lose points for talking about it (but you can still use the techniques and skills behind the scenes, just not openly).

Yes, this is hard. It'll get easier as you practice and it becomes an S1 process.

Thank you for this answer. It honestly deals with my core problem. I suspect it will be useful for me.

Say that listening to someone makes a friend 80% of the time and talking at them makes a friend 20% of that time. 

The kind of mental model of friendship in which that sentence makes sense might not be conducive to winning friends.

The problem is that my friends find Toolboxism an offensive descriptor. 

Toolboxism isn't an inherently offensive descriptor. However, it's not a term that describes the way of thinking your friend expresses in your quote. It may be that your friend doesn't find it offensive so much as wrong.

It's problematic to have a mental model where you expect people to be either blues or greens and aren't open to someone holding a different position.

The idea of orthogonality isn't part of toolbox thinking as it was previously described. When David Chapman transferred what he learned from a religious ritual to his DARPA AI research, he transferred knowledge across domains.

The kind of mental model of friendship in which that sentence makes sense might not be conducive to winning friends.

For me personally, Bayesian thinking is useful when I have some model that is wrong but that I refuse to let go of. In this case, I did not like "vibing" with people. I wanted all interactions to be problem-solving-y in some way. Because I wanted the world to be like that, I didn't accept evidence that people do not prefer it. But I could look at attempts to make friends and notice "oh, vibing is..."

If there are blues, greens, reds, and oranges, and when you are dealing with an orange you insist on labeling them as either blue or green, and you label them green because you are blue, they are not going to be happy with the label. And if you override emotions as the motivating factor for actions with intellectual guidelines, that can be harmful even if the intellectual guidelines are based on patterns that really exist. If you look at the people in EY's post about toolbox thinking and lawful thinking, EY uses David Chapman as an example of toolbox thinking and Julia Galef as an example of lawful thinking, and as far as I remember Julia Galef doesn't believe that people should override their social habits with intellectual models. (Flag: my view of Galef is second-hand information from maybe 2016.)
This statement affirms the consequent. If I saw SMB and toolboxism as a blue-vs-green conflict, then yes, I would force every position into one of the two categories. However, there are other reasons I might categorize my friend as a toolboxist. In this case, his beliefs match the definition of toolboxism given in the linked post: "There is no one correct way to arrive at the truth (...) The only way to get better at finding the correct answer is through experience and wisdom, with a lot of insight and luck, just as one would master a trade such as woodworking." His interpretation of multiple intelligences, as each adapted to its own field of endeavor and non-transferable, matches that definition. (Tristanm has an SMB interpretation [https://www.lesswrong.com/posts/GTAFKjdQoSa9smKmj/one-magisterium-bayes] of multiple intelligences which I prefer.)

I disagree that overriding emotions is harmful. People "override emotions" with intellectual guidelines all the time. For example, bankers discount exponentially when they would naturally discount hyperbolically. I might want to buy a girl a drink, but realize that doing so would offend her, so I don't.

I see that Tristanm and EY use different definitions of Toolboxism [https://www.lesswrong.com/posts/CPP2uLcaywEokFKQG/toolbox-thinking-and-law-thinking], which might explain some of the confusion.
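As an aside, the exponential-vs-hyperbolic discounting contrast mentioned above can be shown with a short sketch. The rate parameters here are arbitrary, chosen only to illustrate the shapes of the two curves:

```python
def exponential_discount(value, delay, rate=0.1):
    # Time-consistent: each extra period multiplies value by the same factor.
    return value / (1 + rate) ** delay

def hyperbolic_discount(value, delay, k=0.1):
    # Time-inconsistent: steep discounting for short delays, shallow later.
    return value / (1 + k * delay)

for d in (0, 1, 10, 50):
    print(d,
          round(exponential_discount(100, d), 2),
          round(hyperbolic_discount(100, d), 2))
```

At long delays the hyperbolic curve values the payoff far more than the exponential one, which is why someone's gut (hyperbolic) and a banker's spreadsheet (exponential) can disagree about the same future reward.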

I believe the term you are looking for is a fox, in the sense of Tetlock. But honestly, as someone who is generally pro-toolboxism, I don't understand why that's offensive. The whole point is that you have a whole toolbox of different approaches.

Fox: that sounds like a good word. Can you link me to the Tetlock book it comes from?

Yeah, I agree that toolbox shouldn't be offensive. I guess something in my tone offended the person, rather than the word itself.

The classic Expert Political Judgment: How Good Is It? How Can We Know? [https://www.goodreads.com/book/show/89158.Expert_Political_Judgment?ac=1&from_search=true&qid=biP2PyuoeF&rank=3] The cover even has adorable foxes and hedgehogs on it.

Imagine that you are likely to make huge mistakes when trying to think rationally, but you usually get good results when you follow your instincts. Wouldn't it make sense to ignore rational arguments and just follow your instincts? I suspect that many neurotypical people are like that.

It is not about applying some Platonic "logical-mathematical intelligence". It is about your logical and mathematical skills. Maybe they suck. It is a fact about you, not about math per se. But it can be a true fact.

I agree with everything you said. Great brevity and clarity!

Drop generic Bayes' rule recommendations in favor of applications:

  • Results oriented: what works works, theory or no theory. Example: making a good first impression. Formally, it may be obvious that some things to say and some methods of delivery are better than others. Past a certain point this might not be super useful, but the basics matter.
  • Types, and balance. Some people like talking. Some people like being around lots of people. If someone likes talking, maybe listening more helps in that situation. But tendencies aren't the be-all, end-all, any more than moods are. Things that are generally true of a person matter, but current circumstances (in the moment, thinking fast rather than working things out ahead of time) can be a big deal as well, especially when they differ.
  • Noticing mistakes, or benefits just from thinking about things more.

Your friend might disagree because the idea of general methods is counterintuitive (different types of people, different things that appeal, etc.). People do generally exhibit patterns (like their interests) which are important, and important to them (shared interests can bring people together).

Emotional logic is orthogonal to formal logic,

Not completely.

we can’t always use one type of intelligence to make decisions related to another type of intelligence.

Indeed. Here the relation might be visible (and useful) around basic stuff. (Being nicer is more effective, etc.)

While there might not be a lot of overlap, maybe someday a computer will be able to infer 'frustration' from someone punching it - without otherwise being filled to the brim with emotional intelligence. (If only as a result of hardcoding.)

I like your top comments a lot. Thanks for the answer!

Why do you want to put a label on their belief? There are adverse effects: they can get offended, or being a Toolboxist becomes part of their identity, and then it's even harder to change their mind.

It helps withdraw from the conversation. "You believe in <belief> and I believe in One-Magisterium Bayes" is a script that people use to abandon disagreements, like saying "You are a Muslim and I am a Christian, so we should change topics".

This great LW post uses the phrases Toolboxism and Single-Magisterium Bayes to describe the two ways of thinking

Was this meant to include a link to that post?

Perhaps OP meant to simultaneously establish the usage and praise their own post :P