(Cross-posted from my blog)
Imagine coming home from work (back when that was a thing). As you walk in the door, your roommate says “Hey! I bought you a burrito for dinner, mind paying me back?” Some possible responses:
“Thanks! That’s exactly what I wanted for dinner, I will gladly pay you back.”
“What?! I hate burritos and you know that, I’m not paying you.”
“I normally love burritos, but I don’t want one right now. You couldn’t have known that though, so I will pay you for it anyways.”
The first two responses seem sensible (except for the not-liking-burritos part), but what’s up with the last one? Why buy a burrito you don’t want? And why is your roommate buying you things without asking?
Let’s move to a simpler example. What if a strange man approached you and said “If you give me $1, I will give your friend Alice $3”? It seems like a good deal, so you call Alice and she agrees to split the money she receives with you. Now, you can both make a profit.
What if the strange man tells you that you can’t call Alice before accepting the deal? Now, you have to infer whether or not Alice wants $3 and make a decision on her behalf.
I see several possible outcomes here.
If you decide to give $1, Alice might be happy with your decision after-the-fact. She might even agree to pay you $1 to compensate you for what you paid. Even better, she might split the money with you so that you can both make a profit.
Alternatively, if you give $1, Alice might be mad at you for making assumptions about what she wants.
If you decide not to give $1, Alice might be disappointed that you didn’t take the deal.
If you decide not to give $1, Alice might be glad that you didn’t take the deal because she does not like people making assumptions about what she wants.
This problem of making decisions on someone else’s behalf without being able to communicate with them is what I call a counterfactual contract. Essentially, you are trying to figure out what they would agree to if you actually could talk to them.
These counterfactual deals can be quite complicated, which is why I refer to them as “contracts”. To make the right choice, you not only need to know what they want, but also how they would respond to the fact that you made a choice for them.
The burrito example is just another version of the problem I just presented. Your roommate tried to decide whether or not to buy you a burrito, based on what they know about you, and without communicating beforehand. They guessed that you would appreciate receiving a burrito (and pay them back), so they bought one on your behalf.
What is the right thing to do in these situations?
Being the person acting on Alice’s behalf is pretty easy. If you have a good understanding of her needs, just choose what you think she would prefer; otherwise, don’t do anything on her behalf.
In the burrito scenario, the statement of the counterfactual contract looks like this: “If I predict Alice will appreciate me buying her a burrito, I will buy her a burrito.”
You could also be a little more selective, requiring that Alice pay you back for your efforts: “If I predict Alice will appreciate me buying her a burrito and will pay me back, I will buy her a burrito.”
In other words, when acting on Alice’s behalf, you draft up a contract you think she would agree to, and you find out later if she actually appreciates it. If Alice agrees to the contract after-the-fact then both parties follow its terms. If she disagrees with it, then she is under no obligation to participate (e.g. pay you back).
What kinds of contracts should Alice agree to?
I think it’s clear that Alice should reward people for making good choices on her behalf. That is, if she appreciates the contract after-the-fact, she should stick to it. This establishes Alice’s reputation as someone who is willing to honor these kinds of beneficial deals, opening up opportunities for her.
Conversely, she should punish people for making bad choices on her behalf. Alice has no obligation to accept any counterfactual contracts, and she certainly shouldn’t agree to contracts which are net negatives for her. If she agreed to all sorts of bad deals, people would quickly take advantage of this.
This thinking extends to gambles as well. If someone bought a good (but risky) investment on Alice’s behalf, she should reward that regardless of whether it worked out; if they bought a bad investment (e.g. a lottery ticket), she should punish it.
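To make this concrete, here is a toy sketch of the evaluate-the-decision-not-the-outcome rule. The numbers and probabilities are invented for illustration; the point is only that Alice judges the gamble by its expected value at decision time:

```python
# Toy illustration (numbers invented): Alice rewards choices made on her
# behalf based on their expected value when the choice was made, not on
# how the gamble happened to turn out.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# A risky but positive-EV investment: 60% chance of +$50, 40% chance of -$30.
investment = [(0.6, 50), (0.4, -30)]
# A lottery ticket: 1-in-a-million chance of $100,000, costs $2.
lottery = [(1e-6, 100_000), (1 - 1e-6, -2)]

for name, gamble in [("investment", investment), ("lottery", lottery)]:
    ev = expected_value(gamble)
    verdict = "reward" if ev > 0 else "punish"
    print(f"{name}: EV = {ev:+.2f} -> {verdict}")
```

The investment has positive expected value, so Alice rewards it even if it loses money this time; the lottery ticket has negative expected value, so she punishes it even if it happens to win.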
What if someone is mistaken about your needs? For example, someone might know you like ice cream, but they buy you strawberry ice cream when you would have preferred chocolate. I would argue that even in this case you should generally reward people’s good intentions. Note that there is a principal-agent problem here: you want to incentivize people to make good choices on your behalf while not being too lenient.
Overall, you want to be the kind of person who rewards people trying to benefit you, even if they are somewhat mistaken about your needs. If you didn’t do this, you would miss out on opportunities where someone wants to help you but doesn’t because they don’t think you will reward them after-the-fact. Essentially, you are making the act of helping you less risky for other people. This corresponds to the last response in the burrito scenario.
Why does this matter? When do counterfactual contracts actually come up? Admittedly, a lot of the examples I have used so far are contrived or unrealistic, but there are some important situations where counterfactual contracts appear in practice. Even better, counterfactual contracts can neatly connect many problems in decision theory and ethics.
Street performers are a perfect example of a counterfactual contract in action. They perform in public, for free, and collect tips. No one promised they would pay before seeing the show, but some people pay anyways, even though they have already gotten the enjoyment from the show. Though not everyone gives, it is interesting that some people have an instinct to pay the performer. Here, the counterfactual contract reads “I will perform for you if I predict you will pay me for it afterwards”.
Like with the street performer, joining a union looks a lot like a counterfactual contract. When unions negotiate for all workers, new workers can enter the job with higher pay. How can the union convince these new workers to join the union? The union’s best argument for joining is: if they had not negotiated beforehand, the new worker would not be receiving higher pay. The new worker needs to be the kind of person who pays in order to receive these benefits in the future. The counterfactual contract looks like this: “I, the union worker, will fight to get new workers a raise if I predict they will pay me once they start working here.”
These examples suggest a new system for funding things like public goods. Simply build a public good and then ask people to pay you afterwards! Essentially, you are offering everyone a counterfactual contract “I will build this public good if I predict enough people will pay me afterwards”. The public good can be financed by selling future shares of the after-the-fact payments (the price of these futures also provides information on which projects are worthwhile).
Why would anyone pay after-the-fact? Because establishing a reputation as someone willing to pay for these kinds of goods means that you will get more of them! In a world where cyclists are willing to pay someone after-the-fact for the construction of a new public bike path, that bike path will get built! Alternatively, if the cyclists say “we’re not paying for the bike path, it’s already there” you can expect that nobody will bother to build another public bike path in the future.
I wonder to what degree this kind of thing is happening today. For example, plenty of writers publish their work for free, hoping for support via donations (ahem). Do the interests of people who are willing to pay after-the-fact for writing influence the kind of writing that gets published?
Stepping back, the most important counterfactual contract we must consider involves making decisions on behalf of future generations. We cannot literally ask future generations what they want, but that doesn’t stop us from making choices for them.
Though we don’t entirely know what future generations will want, we can be pretty sure that they would like to live in a world of peace, prosperity, and ecological balance. Ideally, we would make policy decisions which ensured a better world for all subsequent generations.
But we are not limited to doing charitable things for future generations; with counterfactual contracts, we can actually trade with them! Debt finance is an example of this. “We, the citizens of the present, incur debt today to build a park because we predict the citizens of tomorrow are willing to pay part of it.”
Elder care and the U.S. Social Security system can be framed in this way as well. These institutions can be viewed as a system for younger generations to support older generations in return for the good choices of the older generations. “We, the citizens of the present, will make good choices on behalf of other generations in return for support in our later years”. This raises some uncomfortable questions. Under this framing, is Social Security underfunded or overfunded? Were the choices that older generations made on behalf of younger generations wise? Note that elder care should not depend entirely upon past decisions, since there are important welfare and reciprocation components to these institutions as well.
The decision to have children is a small-scale example of making choices on behalf of future generations. You have to choose whether or not to bring a person into the world, without being able to ask. You have to infer whether or not your child would look back and appreciate that you chose to have them. The counterfactual contract reads “I will have a child if I predict that my child will appreciate existing”.
If it becomes possible to create digital minds, the decision to create a new digital mind is analogous to deciding to have a child. However, we will likely know more about the digital mind’s opinions than our future child’s. For example, you might consider whether or not to make a copy of yourself. You might think, “if I came into being like this, I would appreciate that a person took the time to create me”. In this scenario (without major resource constraints), it would be a good idea to create a copy of yourself. But these deals can get more complicated: the contract “if I came into being, I would be willing to do 1 hour of reasonable work to thank the person who created me” suggests that you could ask your copies for something in exchange for creating them. Deals like this may be essential to the Age of Em. There are many nuances to creating these kinds of deals (what happens if an em is not happy about being created or the work they are asked to do?), but that is a discussion for another time.
So far, we have talked about counterfactual contracts which are a good deal for both parties. But this framework can also handle situations where you are offered a bad contract. As discussed before, it is important to establish a reputation of saying “no” to these offers. Mugging is one example of a bad offer. We can think of the mugger as reasoning: “I will threaten my victim if I believe they would prefer paying me over getting a beating”. I admit that this method of framing extortion is strange, but the conclusion is quite natural. Since the mugger is offering you a contract that you would not have agreed to before-the-fact, you are under no obligation to follow its terms. In the real world this means that agents should establish a reputation of not giving in to extortion. This is why governments “don’t negotiate with terrorists” and Gwern publishes accounts of failed blackmail attempts.
You have gotten this far into the post, so perhaps you are of a more “theoretical” bent. As promised, counterfactual contracts can unite several problems in decision theory and ethics.
Counterfactual mugging is a problem in decision theory which can be reformulated into a counterfactual contract. In the original scenario, you are approached by a predictor who claims: “I flipped a coin. If it comes up tails, I ask you for $100; if it comes up heads, I give you $10,000, but only if I predicted that you would pay me on tails.” If the mugger had been able to talk to you beforehand, they could have asked “will you agree to pay me $100 if a coin flip comes up tails, in exchange for me paying you $10,000 if it comes up heads?” Instead, they are offering the contract after the coin flip. This reads exactly like the other scenarios above. The mugger reasons that you would be willing to agree to a fair coin flip where you make money on average, so they flip the coin and ask for the money if it is tails.
There is a lot of disagreement about whether or not it is rational to pay the mugger, but I would argue that if the mugger is true to his word, you should pay him, since you make money on net. There is one caveat which counterfactual contracts cover neatly: in situations where you don’t have much money, you can reject the offer. For example, if your life savings is $100, it would be a bad idea to pay the mugger; you should reject his counterfactual contract. In the counterfactual contracts framing, the mugger is mistaken about the coin flip being a good deal for you, since you are close to broke, so you have no obligation to accept the contract and pay after-the-fact.
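The arithmetic behind this can be sketched in a few lines. This is a toy model: the “savings must exceed the cost” cutoff is a crude stand-in for the real point, which is that $100 matters far more to someone who is nearly broke:

```python
# Expected value of the counterfactual mugging, using the amounts from the
# scenario: a fair coin, winning $10,000 on heads, paying $100 on tails.

def mugging_ev(win=10_000, cost=100, p_heads=0.5):
    return p_heads * win - (1 - p_heads) * cost

print(mugging_ev())  # 4950.0: positive on average, so the contract is worth accepting

# The caveat: if paying $100 would wipe you out, the deal is no longer good
# *for you*, so you can reject the contract. (A crude threshold; the real
# justification is the diminishing marginal utility of money.)
def should_accept(savings, cost=100):
    return savings > cost and mugging_ev() > 0

print(should_accept(savings=100))     # False: near-broke, reject
print(should_accept(savings=10_000))  # True
```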
Newcomb’s paradox is another problem in decision theory which can be re-framed as a counterfactual contract. Essentially it is a counterfactual contract between the player and the Predictor. If they could speak beforehand, they could sign a contract where the player promises to only take box B if the Predictor promises to put $1 million in box B. In the actual game, the player and the Predictor don’t talk beforehand. Instead, the Predictor reasons “If I predict that the player would promise to take only box B if I put $1 million in box B, then I will put $1 million in box B.” The player then accepts the contract after-the-fact by taking only box B, or rejects it by taking both boxes. If the Predictor puts money in box B, they are trusting you to only take box B. Like before, you want to be the kind of person who accepts deals like this, because, if you do, you will walk away with $1 million. Similarly, you can reformulate Parfit’s Hitchhiker and Kavka’s Toxin as counterfactual contracts.
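The payoff structure makes the contract framing easy to see. Below is the standard Newcomb payoff table ($1,000 visible in box A, $1 million possibly in box B), assuming a perfectly accurate Predictor:

```python
# Payoff table for Newcomb's problem. One-boxing corresponds to accepting
# the counterfactual contract; two-boxing corresponds to rejecting it.

PAYOFFS = {
    # (Predictor's prediction, your actual choice) -> payoff
    ("one-box", "one-box"): 1_000_000,  # $1M placed in B, you take only B
    ("one-box", "two-box"): 1_001_000,  # $1M placed in B, you take both
    ("two-box", "one-box"): 0,          # B left empty, you take only B
    ("two-box", "two-box"): 1_000,      # B left empty, you take both
}

# With a perfect Predictor, the prediction always matches your choice, so
# only the diagonal entries are reachable:
print(PAYOFFS[("one-box", "one-box")])  # 1000000
print(PAYOFFS[("two-box", "two-box")])  # 1000
```

The contract-accepter reliably walks away with $1,000,000 and the contract-rejecter with $1,000, even though the off-diagonal cells make two-boxing look dominant.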
Turning to ethics, Rawls’ Original Position can be reformulated as a counterfactual contract as well. The contract reads, “If I predict you would have signed a contract to spread our wealth more evenly before-the-fact, I will share my wealth after-the-fact”. Welfare, insurance, and income share agreements can all be thought of in this way.
Though it is pretty clear how a person should respond to these scenarios, people often do not act this way in real life. Many people don’t pay street performers, or unions, or support welfare, or advocate for future generations. Though it is easy to say “I don’t owe them anything, they didn’t ask me first”, the world would be a better place if we all were the kinds of people who agreed to these counterfactual contracts. This isn’t just wishful thinking: societies and individuals who engage in counterfactual contracts actually are better off. Encouraging this kind of thinking at the government level would, at minimum, improve long-term decision-making.
Counterfactual contracts are not really a new insight; they are just a new (and sometimes awkward) way to frame many well-understood problems. Nevertheless, I still think this approach is a useful way to think about these topics because it connects so many different questions and suggests new answers. Counterfactual contracts shape my thinking in a lot of matters and will help inform my reasoning in future posts.
I was with you for most of the post, though I don't think "counterfactual contract" really fits. It's more like "speculative agent" where someone decides whether to act on another person's behalf, and later presents that person with a choice of whether to accept or reject their actions (possibly with some associated price).
The street performer one seems slightly different in being collective rather than individual: a street performer doesn't really act on behalf of any one audience member, but performs speculatively for the audience as a group. Any of them can choose to listen/watch or not, pay or not, complain to local authorities or not.
Further out on the fringes of this concept are some forms of promotions, such as "we think you would like X so here's a discount for it".
I don't think that any of the decision theory problems are examples of the same sort of thing, though.
I like the promotions example. It can be framed as a company "sticking their neck out" for a particular customer, and the customer can reciprocate by buying the item if they actually like it, and not buying it if the ad/promotion is a waste of their time.
Yeah, framing the decision theory problems as counterfactual contracts is more of a stretch, not sure if it’s actually useful.
If you want to have public goods funded by the users, why not ask them explicitly before you build the public good? This is usually called "crowdfunding". It works pretty well on relatively small scale projects already, and should really be scaled up.
I agree, I want to see a lot more crowdfunding projects for global problems especially. I think things like crowdfunding are great, and if a public good can be provided this way, it’s usually easier/simpler than retroactive funding.
That being said, I still think there are a few cases where retroactive funding and prizes can be useful. For example, these mechanisms can allow people to finance the creation of music they like before hearing the song itself!
I talk about this in more depth here and it seems that Vitalik Buterin is working on an implementation of something similar.
I don't expect retroactive funding/prizes to be as important as crowdfunding, but I think having a diverse set of goods-funding mechanisms is important.