Follow-up to: The Intelligent Social Web
The human mind evolved under pressure to solve two kinds of problems:
- How to physically move
- What to do about other people
I don’t mean that list to be exhaustive. It doesn’t include maintaining homeostasis, for instance. But in practice I think it hits everything we might want to call “thinking”.
Mechanical reasoning is where our intuitions about “truth” ground out. You throw a ball in the air, your brain makes a prediction about how it’ll move and how to catch it, and either you catch it as expected or you don’t. We can imagine how to build an engine, and then build it, and then we can find out whether it works. You can try a handstand, notice how it fails, and try again… and after a while you’ll probably figure it out. It means something for our brains’ predictions to be right or wrong (or somewhere in between).
I recommend this TED Talk for a great overview of this point.
The fact that we can plan movements lets us do abstract truth-based reasoning. The book Where Mathematics Comes From digs into how this works in mathematics. But for just one example, notice how set theory almost always uses container metaphors. E.g., we say elements are in sets like pebbles are in buckets. That physical intuition lets us use things like Venn diagrams to reason about sets and logic.
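As a toy illustration of that container metaphor (my example, not one from the book), note how set operations in a language like Python read almost directly as statements about buckets of pebbles:

```python
# The container metaphor made literal: membership is "pebble in bucket",
# subset is "one bucket fits inside another", intersection is the
# overlapping region of a Venn diagram.
evens = {0, 2, 4, 6, 8}
small = {0, 1, 2, 3}

print(2 in evens)      # is this pebble in the bucket? -> True
print(small <= evens)  # does this bucket fit entirely inside that one? -> False
print(small & evens)   # the Venn-diagram overlap -> {0, 2}
```

The point is just that the notation we find natural here is borrowed straight from physical containment.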
…well, at least until our intuitions are wrong. Then we get surprised. And then, like in learning to catch a ball, we change our anticipations. We update.
Mechanical reasoning seems to already obey Bayes’ Theorem for updating. This seems plausible from my read of Scott’s review of Surfing Uncertainty, and in the TED Talk I mentioned earlier Daniel Wolpert claims this has been measured. And it makes sense: evolution would have put a lot of pressure on our ancestors to get movement right.
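To make "obeying Bayes’ Theorem" concrete, here is a minimal sketch of the kind of update predictive-processing accounts describe, with all numbers invented for illustration: the brain's prior prediction of a ball's position gets combined with a noisy sensory observation, each weighted by its precision (inverse variance). This is the standard Gaussian/scalar-Kalman form of a Bayesian update, not a model of any actual neural circuit:

```python
def bayes_update(prior_mean, prior_var, obs_mean, obs_var):
    """Combine a Gaussian prior prediction with a Gaussian observation.
    The posterior mean weights each source by its precision (1/variance);
    this is Bayes' rule specialized to Gaussians."""
    gain = prior_var / (prior_var + obs_var)  # how much to trust the observation
    post_mean = prior_mean + gain * (obs_mean - prior_mean)
    post_var = (1 - gain) * prior_var         # posterior is tighter than either source
    return post_mean, post_var

# Predicted ball position: 2.0 m, fairly uncertain (var 0.5).
# The eyes report 2.4 m with a sharper estimate (var 0.1).
mean, var = bayes_update(2.0, 0.5, 2.4, 0.1)
# The posterior lands near the more precise source, and is more
# confident than either the prediction or the observation alone.
```

Catching a ball, on this picture, is running updates like this continuously as new sensory evidence streams in.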
Why, then, is there systematic bias? Why do the Sequences help at all with thinking?
Sometimes it’s because of something structural — like how we systematically feel someone’s blow as harder than they felt they hit us. It just falls out of how our brains make physical predictions. If we know about this, we can try to correct for it when it matters.
But the rest of the time?
It’s because we predict it’s socially helpful to be biased that way.
When it comes to surviving and finding mates, having a place in the social web matters a lot more than being right, nearly always. If your access to food, sex, and others’ protection depends on your agreeing with others that the sky is green, you either find ways to conclude that the sky is green, or you don’t have many kids. If the social web puts a lot of effort into figuring out what you really think, then you’d better find some way to really think the sky is green, regardless of what your eyes tell you.
The thing is, “clear thinking” here mostly points at mechanical reasoning. If we were to create a mechanical model of social dynamics… well, it might start looking like a recursively generated social web, and then mechanical reasoning would mostly derive the same thing the social mind already does.
…because that’s how the social mind evolved.
And once it evolved, it became overwhelmingly more important than everything else. Because a strong, healthy, physically coordinated, skilled warrior has almost no hope of defeating a weakling who can inspire many, many others to fight for them.
You might hope that that “almost” includes things like engineering and hard science. But really, for the most part, we just figured out how to align social incentives with truth-seeking. And that’s important! We figured out that if we tie social standing to whether your rocket actually works, then being right socially matters, and now culture can care about truth.
But once there’s the slightest gap between cultural incentives and making physical things work, social forces take over.
This means that in any human interaction, if you don’t see how the social web causes each person’s actions, then you’re probably missing most of what’s going on — at least consciously.
And there’s probably a reason you’re missing it.