Follow-up to: The Intelligent Social Web

The human mind evolved under pressure to solve two kinds of problems:

  • How to physically move
  • What to do about other people

I don’t mean that list to be exhaustive. It doesn’t include maintaining homeostasis, for instance. But in practice I think it hits everything we might want to call “thinking”.

…which means we can think of the mind as having two types of reasoning: mechanical and social.

Mechanical reasoning is where our intuitions about “truth” ground out. You throw a ball in the air, your brain makes a prediction about how it’ll move and how to catch it, and either you catch it as expected or you don’t. We can imagine how to build an engine, and then build it, and then we can find out whether it works. You can try a handstand, notice how it fails, and try again… and after a while you’ll probably figure it out. It means something for our brains’ predictions to be right or wrong (or somewhere in between).

I recommend Daniel Wolpert’s TED Talk for a great overview of this point.

The fact that we can plan movements lets us do abstract truth-based reasoning. The book Where Mathematics Comes From digs into how this works in mathematics. But for just one example, notice how set theory almost always uses container metaphors. E.g., we say elements are in sets like pebbles are in buckets. That physical intuition lets us use things like Venn diagrams to reason about sets and logic.
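To make the container reading concrete (this is my own minimal sketch, not an example taken from the book), the notation itself invites the bucket picture, and the spatial inference matches the logical one:

% "Pebbles in buckets" reading of basic set notation:
%   x \in A          the pebble x sits inside bucket A
%   A \subseteq B    bucket A sits entirely inside bucket B
% The spatial inference "a pebble in the inner bucket is also in the
% outer bucket" is exactly the inference a Venn diagram depicts:
\[
  (x \in A) \land (A \subseteq B) \;\Rightarrow\; (x \in B)
\]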

…well, at least until our intuitions are wrong. Then we get surprised. And then, like in learning to catch a ball, we change our anticipations. We update.

Mechanical reasoning seems to already obey Bayes’ Theorem for updating. This seems plausible from my read of Scott’s review of Surfing Uncertainty, and in the TED Talk I mentioned earlier, Daniel Wolpert claims this has been measured. And it makes sense: evolution would have put a lot of pressure on our ancestors to get movement right.
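For concreteness, here is the standard textbook form of the update such predictive-processing accounts attribute to the motor system; the Gaussian example is my own illustration of the claim, not a figure from the talk or the review:

% Bayes' Theorem applied to a motor prediction: combine the prior
% prediction about a state x (say, where the ball will be) with noisy
% sensory evidence s to get an updated estimate.
\[
  p(x \mid s) \;\propto\; p(s \mid x)\, p(x)
\]
% With a Gaussian prior N(\mu_p, \sigma_p^2) and a Gaussian sensory
% likelihood N(\mu_s, \sigma_s^2), the posterior mean is a
% precision-weighted average of prediction and sensation:
\[
  \hat{x} \;=\; \frac{\sigma_s^{2}\,\mu_p + \sigma_p^{2}\,\mu_s}{\sigma_p^{2} + \sigma_s^{2}}
\]

The precision weighting is the sense in which catching a ball already looks Bayesian: the estimate shifts toward whichever source of information is currently less noisy.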

Why, then, is there systematic bias? Why do the Sequences help at all with thinking?

Sometimes, occasionally, it’s because of something structural — like how we systematically feel a blow from someone else as harder than they felt it to be when they struck us. It just falls out of how our brains make physical predictions. If we know about this, we can try to correct for it when it matters.

But the rest of the time?

It’s because we predict it’s socially helpful to be biased that way.

When it comes to surviving and finding mates, having a place in the social web matters a lot more than being right, nearly always. If your access to food, sex, and others’ protection depends on your agreeing with others that the sky is green, you either find ways to conclude that the sky is green, or you don’t have many kids. If the social web puts a lot of effort into figuring out what you really think, then you’d better find some way to really think the sky is green, regardless of what your eyes tell you.

Is it any wonder that so many deviations from clear thinking are about social signaling?

The thing is, “clear thinking” here mostly points at mechanical reasoning. If we were to create a mechanical model of social dynamics… well, it might start looking like a recursively generated social web, and then mechanical reasoning would mostly derive the same thing the social mind already does.

…because that’s how the social mind evolved.

And once it evolved, it became overwhelmingly more important than everything else. Because a strong, healthy, physically coordinated, skilled warrior has almost no hope of defeating a weakling who can inspire many, many others to fight for them.

Thus whenever people’s social and mechanical minds disagree, the social mind almost always wins, even if it kills them.

You might hope that that “almost” includes things like engineering and hard science. But really, for the most part, we just figured out how to align social incentives with truth-seeking. And that’s important! We figured out that if we tie social standing to whether your rocket actually works, then being right starts to matter socially, and now culture can care about truth.

But once there’s the slightest gap between cultural incentives and making physical things work, social forces take over.

This means that in any human interaction, if you don’t see how the social web causes each person’s actions, then you’re probably missing most of what’s going on — at least consciously.

And there’s probably a reason you’re missing it.

12 comments

Related: Robin Hanson's A Tale of Two Tradeoffs.

The design of social minds involves two key tradeoffs, which interact in an important way.
The first tradeoff is that social minds must both make good decisions, and present good images to others.  Our thoughts influence both our actions and what others think of us.  It would be expensive to maintain two separate minds for these two purposes, and even then we would have to maintain enough consistency to convince outsiders a good-image mind was in control. It is cheaper and simpler to just have one integrated mind whose thoughts are a compromise between these two ends.
When possible, mind designers should want to adjust this decision-image tradeoff by context, depending on the relative importance of decisions versus images in each context.  But it might be hard to find cheap effective heuristics saying when images or decisions matter more.
The second key tradeoff is that minds must often think about the same sorts of things using different amounts of detail.  Detailed representations tend to give more insight, but require more mental resources.  In contrast, sparse representations require fewer resources, and make it easier to abstractly compare things to each other.  For example, when reasoning about a room a photo takes more work to study but allows more attention to detail; a word description contains less info but can be processed more quickly, and allows more comparisons to similar rooms.
It makes sense to have your mental models use more detail when what they model is closer to you in space and time, and closer to you in your social world; such things tend to be more important to you.  It also makes sense to use more detail for real events over hypothetical ones, for high over low probability events, for trend deviations over trend following, and for thinking about how to do something over why to do it.  So it makes sense to use detail thinking for "near", and sparse thinking for "far", in these ways.  [...]
The important interaction between these two key tradeoffs is this: near versus far seems to correlate reasonably well with when good decisions matter more, relative to good images.  Decision consequences matter less for hypothetical, fictional, and low probability events.  Social image matters more, relative to decision consequences, for opinions about what I should do in the distant future, or for what they or "we" should do now.  Others care more about my basic goals than about how exactly I achieve them, and they care especially about my attitudes toward those people.  Also, widely shared topics are better places to demonstrate mental abilities.
Thus a good cheap heuristic seems to be that image matters more for "far" thoughts, relative to decisions mattering more for "near" thoughts.  And so it makes sense for social minds to allow inconsistencies between near and far thinking systems.  Instead of having both systems produce the same average estimates, it can make sense for sparse estimates to better achieve a good image, while detail estimates better achieve good decisions. 

(And obviously, "The Elephant in the Brain" is basically an extended survey of the empirical evidence for these kinds of theses.)

Curated.

I've been interested for a while in a more explicit "social reality" theory and sequence. The original sequences certainly explore this, as does a lot of Robin Hanson's work, but I feel like I had to learn a lot about how to meaningfully apply it to myself via in-person conversations.

I think this is one of the more helpful posts I've read for orienting around why social reasoning might be different from analytic reasoning – it feels like it takes a good stab at figuring out where to carve reality at the joints.

I do think that in the long term, as curated posts get cached into something more like Site Canon, I'd want to see some follow-up posts (whether by Val or others) that take the "mechanical vs social thinking" frame from the hypothesis stage to the "formulate it into something that makes concrete predictions, and do some literature reviews that check how those predictions have borne out so far" stage.

I agree that lots of biases have their roots in social benefits, but I'm unsure whether they're really here now "because we predict it’s socially helpful to be biased that way" or whether they're here because it was socially helpful to be biased that way. Humans are adaptation executers, not fitness maximizers, so the question is whether we adapted to the ancestral environment by producing a mind that could predict what biases were useful, or by producing a mind with hardcoded biases. The answer is probably some combination of the two.

Yep, that seems like a correct nuance to add. I meant "predict" in a functional sense, rather than in a thought-based one, but that wasn't at all clear. I appreciate you adding this correction.

You might have gone too far with speculation - your theory can be tested. If your model was true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems. It is not immediately obvious how to run such an experiment, though.

Sports/math is an obvious thing to check, but I'm not sure whether it quite gets at the thing Val is pointing at.

I'd guess that there are a few clusters of behaviors and adaptations for different type of movement. I think predicting where a ball will end up doesn't require... I'm not sure I have a better word than "reasoning".

In the Distinctions in Types of Thought sense, my guess is that for babies first learning how to move, their brain is doing something Effortful, which hasn't been cached down to the level of S1 intuition. But they're probably not doing something sequential. You can get better at it just by throwing more data at the learning algorithm. Things like math have more to do with the skill of carving up surprising data into new chunks, and the ability to make new predictions with sequential reasoning.

My understanding is that "everything good-associated tends to be correlated with everything else good", a la wealth/height/g-factor, so I think I expect sports/math to be at least somewhat correlated. But I think especially good ball players are probably maxed out on a different adaptation-to-execute than especially good math-problem-solvers.

I do agree that it'd be really good to formulate the movement/social distinction hypothesis into something that made some concrete predictions, and/or delve into some of the surrounding literature a bit more. (I'd be interested in a review of Where Mathematics Comes From)

You might have gone too far with speculation - your theory can be tested.

I think that's good, isn't it? :-D

If your model was true, I would expect a correlation between, say, the ability to learn ball sports and the ability to solve mathematical problems.

Maybe…? I think it's more complicated than I read this implying. But yes, I expect the abilities to learn to be somewhat correlated, even if the actualized skills aren't.

Part of the challenge is that math reasoning seems to coopt parts of the mind that normally get used for other things. So instead of mentally rehearsing a physical movement in a way that's connected to how your body can actually move and feel, the mind mentally rehearses the behavior (!) of some abstract mathematical object in ways that don't necessarily map onto anything your physical body can do.

I suspect that closeness to physical doability is one of the main differences between "pure" mathematical thinking and engineering-style thinking, especially engineering that's involved with physical materials (e.g., mechanical, electrical — as opposed to software). And yes, this is testable, because it suggests that engineers will tend to have developed more physical coordination than mathematicians relative to their starting points. (This is still tricky to test, because people aren't randomly sorted into mathematicians vs. engineers, so their starting abilities with learning physical coordination might be different. But if we can figure out a way to test this claim, I'd be delighted to look at what the truth has to say about this!)

I'm surprised by your post and would have expected a different post based on how it started.

One ability our social mind evolved is the ability to care. Caring produces a certain kind of ability to pattern-match that's often superior to straight mechanical reasoning. David Chapman wrote the great post Going down on the phenomenon to dive deeper into that dynamic as it applies to science.

Curiosity is also a very important mental move that doesn't come out of the need to coordinate physical movement but is more social in nature.

I mostly agree. I had, like, four major topics like this that I was tempted to cram into this essay. I decided to keep it to one message and leave things like this for later.

But yes, totally, nearly everything we actually care about comes from the social mind doing its thing.

I disagree about curiosity though. I think that cuts across the two minds. "Oh, huh, I wonder what would happen if I connected this wire to that glowing thing…."

I don't think there's any rat out there that thinks "huh, I wonder what would happen if I connected this wire to that glowing thing…" and I don't think the basic principles about movement coordination changed that much on that evolutionary time-scale.

I could imagine a chimpanzee wondering about what will happen, but then chimpanzees also have a strong social mind.

There may be more than one form of curiosity; this discussion suggests that humans, monkeys and rats differ in the kinds of curiosity that they exhibit (emphasis added):

The history of studies of animal curiosity is nearly as long as the history of the study of human curiosity. Ivan Pavlov, for example, wrote about the spontaneous orienting behavior in dogs to novel stimuli (which he called the “What-is-it?” reflex) as a form of curiosity (Pavlov, 1927). In the mid 20th century, exploratory behavior in animals began to fascinate psychologists, in part because of the challenge of integrating it into strict behaviorist approaches (e.g. Tolman, 1948). Some behaviorists counted curiosity as a basic drive, effectively giving up on providing a direct cause (e.g. Pavlov, 1927). This stratagem proved useful even as behaviorism declined in popularity. For example, this view was held by Harry Harlow—the psychologist best known for demonstrating that infant rhesus monkeys prefer the company of a soft, surrogate mother over a bare wire mother. Harlow referred to curiosity as a basic drive in and of itself—a “manipulatory motive”—that drives organisms to engage in puzzle-solving behavior that involved no tangible reward (e.g., Harlow, Blazek, & McClearn, 1956; Harlow, Harlow, & Meyer, 1950; Harlow & McClearn, 1954).
Psychologist Daniel Berlyne is among the most important figures in the 20th century study of curiosity. He distinguished between the types of curiosity most commonly exhibited by human and non-humans along two dimensions: perceptual versus epistemic, and specific versus diversive (Berlyne, 1954). Perceptual curiosity refers to the driving force that motivates organisms to seek out novel stimuli, which diminishes with continued exposure. It is the primary driver of exploratory behavior in non-human animals and potentially also human infants, as well as a possible driving force of human adults’ exploration. Opposite perceptual curiosity was epistemic curiosity, which Berlyne described as a drive aimed “not only at obtaining access to information-bearing stimulation, capable of dispelling uncertainties of the moment, but also at acquiring knowledge”. He described epistemic curiosity as applying predominantly to humans, thus distinguishing the curiosity of humans from that of other species (Berlyne, 1966).
The second dimension of curiosity that Berlyne described was informational specificity. Specific curiosity referred to desire for a particular piece of information, while diversive curiosity referred to a general desire for perceptual or cognitive stimulation (e.g., in the case of boredom). For example, monkeys robustly exhibit specific curiosity when solving mechanical puzzles, even without food or any other extrinsic incentive (e.g., Davis, Settlage, & Harlow, 1950; Harlow, Harlow, & Meyer, 1950; Harlow, 1950). However, rats exhibit diversive curiosity when, devoid of any explicit task, they robustly prefer to explore unfamiliar sections of a maze (e.g., Dember, 1956; Hughes, 1968; Kivy, Earl, & Walker, 1956). Both specific and diversive curiosity were described as species-general information-seeking behaviors.

Later in the paper, when trying to establish a more up-to-date framework for thinking about curiosity, they suggest that its evolutionary pathway can be traced to behaviors which are already present in roundworms:

Even very simple organisms trade off information for rewards. While their information-seeking behavior is not typically categorized as curiosity, the simplicity of their neural systems makes them ideally suited for studies that may provide its foundation. For example, C. elegans is a roundworm whose nervous system contains 302 neurons and that actively forages for food, mostly bacteria. When placed on a new patch (such as a petri dish in a lab), it first explores locally (for about 15 minutes), then abruptly adjusts strategies and makes large, directed movements in a new direction (Calhoun, Chalasani, & Sharpee, 2014). This search strategy is more sophisticated and beneficial than simply moving towards food scents (or guesses about where food may be); instead, it provides better long-term payoff because it provides information as well. It maximizes a conjoint variable that includes both expected reward and information about the reward. This behavior, while computationally difficult, is not too difficult for worms. A small network of three neurons can plausibly implement it. Other organisms that have simple information-seeking behavior include crabs (Zeil, 1998), bees (Gould 1986; Dyer, 1991) ants (Wehner et al., 2002), and moths (Vergasolla et al., 2007).

"Why, then, is there systematic bias?... But the rest of the time? It’s because we predict it’s socially helpful to be biased that way."

Similar thoughts led me to write De-Centering Bias. We have a bias towards biases (yes, I know the irony). True rationality isn't just eliminating biases, but also realising that they are often functional.