Sometimes, we assign desires and emotions to cars. We say that the car wants to drive, or we say that a strange vibration is the car expressing its displeasure at a too-long gap between oil changes. That’s anthropomorphization: we imagine that non-human objects have desires and emotions driving their behavior. We model the car as having desires (e.g. wanting regular oil changes), emotions in response to those desires being met or not met, and observable behavior corresponding to those emotions. Of course, the actual cause of the car’s behavior is much more mechanical - low oil or coolant, built-up sludge, etc.

Now, consider hangriness.

I don’t notice that I’m hungry. I notice that the trash is overflowing and hasn’t been taken out. I feel angry about the trash, so I model myself as angry because of the trash. If someone says “why are you angry?”, I talk about how I want a clean house, and how annoying it is that the trash has not been taken out. But the actual cause is simply low blood sugar, or something like that.

This is anthropomorphization of myself: I imagine that my behavior is driven by some desire (e.g. wanting a clean house) and the frustration of that desire not being met. Yet the actual cause is much more mechanical, and unrelated to the supposed desire.

Likewise, we often anthropomorphize other humans. If someone else is hangry, I might notice their anger and ask them what they’re angry about, without realizing that they just haven’t eaten in a while. In general, if I ask someone why they think X or why they decided Y, they’ll come up with a whole explanation for why X or Y makes sense, which may or may not have anything at all to do with the actual causes of their belief/decision - i.e. they rationalize post-hoc. Mistaking that post-hoc justification for the actual cause of the belief/action would be anthropomorphization.

Empathic Reasoning

Empathic reasoning is especially prone to the anthropomorphization failure mode in general, and to anthropomorphization of humans in particular.

Empathic reasoning is all about putting yourself in someone else’s shoes, asking “What do I want? What do I feel?”, and explaining behavior in terms of those wants and feelings. Essentially, empathic reasoning assumes the anthropomorphic hypothesis - it assumes that behavior is a result of desires and emotions - and tries to back out those desires and emotions by simulating oneself in the same situation.

In cases like hangriness, where the real cause diverges heavily from the first-person experience, that’s going to be highly misleading. Empathy may yield a good idea of what the situation feels like or looks like to another person, but the other person’s experience includes a wildly inaccurate model of the underlying causes. If we’re going to leverage empathic reasoning successfully, we need to be very careful to separate what the person perceives from reality - and in particular, to separate what the person perceives as causing their beliefs and behavior from what actually causes their beliefs and behavior.


Comments

Realising that my anger/grumpiness was caused almost exclusively by me being tired rather than the thing I thought was annoying me was one of my formative moments.

Pointing this kind of thing out to people at the time is rarely helpful, but, depending on the person, mentioning it later on can give big mutual wins.

Scott Adams convinced me of this idea with his writing about us being "moist robots". It simplifies a lot of things in day-to-day life through the understanding it gives you of both yourself and others.

This sounds right to me, and seems to be a natural consequence of a generalized no free lunch in inference. If you don't make some normative assumptions, you can't infer anything.


I agree that this misattribution is a very real and common thing, but I question whether your meta description is a useful one. Yes, we misdiagnose crabbiness when it's caused by hunger or tiredness. But then we constantly misdiagnose every complex system, not just humans.

The way I interpret the post is that it's pointing to a specific type of misattribution: the type where we attribute a "high-level cause" when in actuality it's a "low-level cause". Perhaps there is a better analogy for this than anthropomorphizing, but none comes to my mind right now.
