Epistemic Status: I endorse this strongly but don’t think I’m being original or clever at all.
Until recently — yesterday, in fact — I was seriously wrong about something.
I thought it was silly when I saw people spend lots of energy arguing with their closest friends, who almost completely agreed with them, but not quite.
That’s some People’s Front Of Judaea shit, I thought. Don’t you know that guy you’re arguing with so vehemently is your friend? He likes you! He’s a pretty good guy! He even shares your values and models, almost completely! He’s only wrong about this one, itty bitty, relatively abstract thing!
Meanwhile, there are people out there in the world who don’t share your values. And there are people out there who are actually evil and do awful things.
It’s like “ok, saying mean things about Muslims can be bad, but being a Muslim terrorist is a hell of a lot worse! Why do the people who are so quick to penalize Islamophobic speech never have anything bad to say about actual mass murder? C’mon, get a sense of proportion!”
I still think, obviously, that really bad actions are worse than slightly bad actions.
But I was seriously misunderstanding why people argue with their close friends.
Have you noticed my mistake yet? Give it a moment.
. . .
. . .
. . .
Ok, here it is.
Arguing is not a punishment.
Arguing is not a punishment.
Sure, serious wrongdoing should be penalized, and socially disapproved of, more than mild wrongdoing. (Murder is worse than prejudiced speech.)
Also, fixing big problems should take priority over fixing little problems. (Saving money on rent is worth more of your attention than saving money on apples.)
But let’s frame it differently.
Cooperation is really valuable. Stable cooperation, that is: the kind where, even in the future, when you know each other better and you’ve had more time to think, you’ll still want to cooperate.
Trust is really valuable, and scarce. Justified trust, that is: the kind where you can rely on what somebody says to be true and base your decisions on the information you get from them.
Having “true friends” — people you can cooperate with and trust, stably, to a high degree — is valuable.
Yeah, you can get along and even thrive in a low-trust environment if you have the right skills for it. Havamal, the medieval Icelandic wisdom literature, attributed to the god Odin, is my favorite advice for how to be a savvy customer in a low-trust world. (Exercise for the reader: think about how it applies to the replication crisis in science.) But especially in a low-trust world, true friends are valuable, as Havamal will remind you again and again.
How do you get more trust and cooperation with your friends?
It’s a hard problem; I haven’t solved it, or even really started trying. The following are just ideas at the conceptual level, not things I’ve found successful in practice.
But communicating with them to get on the same page is clearly part of the puzzle. Cooperation means “you and I agree to do X, and then we follow through and actually do X.” The part about willingness to follow through is about loyalty, conscientiousness, motivation, integrity, all those kinds of virtues. The part about agreeing to do X, though? That’s not possible unless you both clearly understand what X is, which is much harder than it sounds! It takes a lot of discussion, in my experience and from what I’ve heard, to get people on the same page about what exactly they’ve committed to doing.
Moreover, if I don’t understand why X is so important to you, and I say “yeah, ok, sure, X”, and then I go home and back to my life, but X still seems pointless to me, then I’m going to be less motivated to do X.
Because we didn’t have the argument about “is X pointless or not?”
We didn’t resolve it. We let it drop, to be nice, because we’re friends and we like each other. But we didn’t get on the same page, and now a ball got dropped and you’re unhappy with me.
That getting-on-the-same-page process is not a punishment.
It’s something you’d only do with a friend close enough that you really might cooperate on work that you care about getting done. (Mundane example: household chores. Gotta get on the same page about who’s responsible for what! Negotiating for fewer/different responsibilities is better than shirking! That can be a really hard thing to internalize, though.)
“I spend more time communicating and getting on the same page with my friends than I do on having discussions with people I hate” — frame it that way, and suddenly that doesn’t sound like pointless infighting, it sounds mature and practical, right?
Of course you’d focus most on clarifying communication with your closest friends! They’re the people you’re most likely to be able to cooperate with!
Ok, so what kind of agreement is most valuable and attainable? After all, nobody, not even your closest friends, agrees with you on everything.
Short term, the answer is obvious: agreement on the details that are practical and relevant to the tasks you share. Share an apartment? Gotta come to agreement on chores, and share world-models relevant for those. (It’s no good if I agree to sweep but I don’t know where we keep the broom.)
But how about the long-run and more meta problem of living in a low-cooperation world itself?
Here’s one example: we’re in a real trade war with China now. Chinese investment in the US dropped 92 percent in the first half of 2018! I’ve tuned out financial markets for most of my life, but I’m essentially a professional fundraiser now, and let me tell you, a drop in Chinese-US investment that drastic affects a US organization’s ability to raise capital. Trade wars, like real wars, can come along all of a sudden and destroy value. Cooperation in this sense is less about singing kumbaya and more about not taking a wrecking ball to your own house. The Hobbesian war of all against all ruins things that people were trying to build.
You want collaborators on fixing that kind of a problem?
The relevant things to agree (and disagree!) on are about the nature of cooperation and trust themselves. How are alliances and coalitions formed and maintained and broken? How, and how well, do enforcement mechanisms and incentive strategies work? You can think of these questions through the lenses of a number of fields:
- game theory
- evolutionary psychology
- some branches of economics (mechanism design, public choice, price theory in general)
- international relations (I know none of this)
- Marxism (I haven’t read Marx either, but I’ve heard that his class analysis can be seen as applied iterated game theory, where a “class” refers to a coalition)
In all cases, the things to get on the same page about are positive, not normative; and they concern fundamental theory, not immediate policy.
We want long-term cooperation, right? That means getting the fundamentals right. Why? If you focus on object-level policy, it’s too easy for your friend to concur without agreeing (“I agree we should do X, but not with your reason for doing X”), which means that on the next policy question that comes up, your friend might not even concur!
(I have a friend — a good guy! a smart guy! — who concurs with me on 100% of object-level political controversies, and in every case, he concurs for a reason I think is dumb. You may know someone like that too. For the purposes of building long-term cooperation, your friend Mr. Concur is harder to get on the same page with, and thus lower priority to have discussions with, than your friend Ms. Dissent, who starts with the same premises as you but takes them in a totally different direction. This is counterintuitive, because often you will initially get along better with Mr. Concur! That is because the mechanism that produces “getting along with” and makes friendships closer or weaker is itself a short-term, object-level policy! For instance, people in the same political tribe are nicer to each other.)
So, that’s why fundamental principles, not immediate policy.
Why positive and not normative? So you’ll avoid unnecessary hostility.
Hostility, after all, in game-theory-land, is what it feels like from the inside to decide that your interests are opposed to someone else’s. You can come to this conclusion mistakenly. To avoid becoming hostile by mistake, first try to clearly understand and communicate what the landscape of interests and incentives even looks like. That’s what professional negotiators harp on all the time — more often than most people assume, it’s in your interests to keep asking clarifying questions until you understand wtf is going on, and stay cordial enough to keep talking until you understand wtf is going on, because that increases the odds you’ll find a mutually agreeable deal, should one exist. (Notwithstanding this, there are cases in which obfuscating your negotiating position is in your interest. That’s less true, I expect, the more meta you go. Another reason to start with foundations rather than policies.)
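To make that concrete, here’s a toy sketch of why clarifying the landscape of interests can reveal a deal both sides prefer to the “obvious” compromise. The scenario, issue names, and numbers are all made up for illustration; the underlying idea is the standard negotiation-theory point about trading across issues the parties value differently (sometimes called logrolling or integrative bargaining):

```python
# Hypothetical two-issue negotiation. Each party assigns a value (out of 10)
# to winning each issue outright; the numbers are invented for this sketch.
a_values = {"schedule": 8, "budget": 2}   # A mostly cares about the schedule
b_values = {"schedule": 2, "budget": 8}   # B mostly cares about the budget

# Naive compromise: split every issue down the middle.
naive_a = sum(v / 2 for v in a_values.values())
naive_b = sum(v / 2 for v in b_values.values())

# Trading across issues: give each issue to whoever values it more.
trade_a = sum(v for issue, v in a_values.items() if v >= b_values[issue])
trade_b = sum(v for issue, v in b_values.items() if v > a_values[issue])

# Both parties do strictly better under the trade (8 each) than under
# the naive 50/50 split (5 each) -- but only if they've communicated
# enough to learn how much each side actually values each issue.
```

Neither side could find that trade without first staying cordial long enough to map out who values what, which is exactly the negotiators’ point.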
Sticking around for a technical discussion is, itself, a gesture of trust. It invests resources.
That’s why it’s hard to get this stuff started. As I write this, I haven’t washed up yet, I’m not cleaning the house or reading science papers or adding stuff to the LRI blog, and I’m ignoring my baby (who, luckily, is happily playing with his toys and smiling at me every so often.) I’m of the opinion that laying these things out in writing is one of the better ways I have to start coordinated conversations, but, let’s be real, it does involve being a little…spendthrift. Feeling like “sure, I can afford to do this.” I’m also reading Law’s Order, currently. That’s also a resource investment into this whole maybe-doomed “understand the micro-foundations of politics” goal, and it also looks kinda like goofing off, and lookit, aren’t there already economists for this who do it better? I’m in a remarkably privileged position at the moment when I have a bunch of time flexibility, and something tells me that this is one of the ways I want to be using it. It is kind of the future of humanity, after all. But actually spending hours chatting merrily — or furiously — with a friend about what is effectively politics for nerds — well, that’s what people usually call “wasting time”, isn’t it?
It’s not a waste if you do it well. But I get that there are a lot of incentives pushing against it.
What friendly theory talk has going for it is the very long term — getting to be the future’s equivalent of Confucius or Boethius and their friends, or maybe even the Amoraim — and the very short term, in which it’s fun to hang out with your friends and talk about interesting things and have some sense that you’re getting somewhere.
Example question to explore:
The nitty-gritty of the “forgiveness” part of “tit-for-tat-with-forgiveness” in iterated games. There are a lot of slightly different variants, I know, all viable enough to see real play. Algorithms for recovering cooperation after a defection: how do the different ones work? What are their advantages and disadvantages? Do any of them correspond to known human behaviors, or to historical or current institutions? As a practical matter, what heuristics do people use to decide whether and how to revive relationships with friends who have grown distant, pitch to leads that have gone cold, collect debts that have gone unpaid for a long time, and so on?
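As a toy starting point on that question, here’s a sketch of one such variant: “generous” tit-for-tat, which forgives an observed defection some fraction of the time. This is my own illustrative simulation, not drawn from any particular paper; the forgiveness rate, noise level, and round count are arbitrary. The point it demonstrates is that in a noisy iterated prisoner’s dilemma, plain tit-for-tat playing against itself gets trapped echoing a single accidental defection back and forth, while a little forgiveness damps the echo and restores mutual cooperation:

```python
import random

# Standard prisoner's dilemma payoffs for the row player: T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history, forgiveness, rng):
    """Cooperate first; otherwise copy the opponent's last observed move,
    except forgive an observed defection with probability `forgiveness`."""
    if not opponent_history:
        return "C"
    last = opponent_history[-1]
    if last == "D" and rng.random() < forgiveness:
        return "C"
    return last

def play(forgive_a, forgive_b, rounds=500, noise=0.05, seed=0):
    """Two tit-for-tat players with execution noise: each intended move
    flips with probability `noise`. Returns (score_a, score_b)."""
    rng = random.Random(seed)
    seen_by_a, seen_by_b = [], []   # what each player has observed the other do
    score_a = score_b = 0
    for _ in range(rounds):
        a = tit_for_tat(seen_by_a, forgive_a, rng)
        b = tit_for_tat(seen_by_b, forgive_b, rng)
        if rng.random() < noise:
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

strict = play(0.0, 0.0)    # plain tit-for-tat: one noisy flip echoes for a long time
generous = play(0.3, 0.3)  # generous tit-for-tat: forgiveness damps the echo
```

With these (arbitrary) parameters, the generous pair should earn a higher combined score than the strict pair, because the strict pair spends long stretches in alternating retaliation after every accidental defection. Varying the forgiveness rate, or pitting different recovery rules against each other, is exactly the kind of comparison the question above is asking for.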