This post examines the virtue of fairness. It is meant mostly as an exploration of what others have learned about this virtue, rather than as me expressing my own opinions about it, though I’ve been selective about what I found interesting or credible, according to my own inclinations. I wrote this not as an expert on the topic, but as someone who wants to learn more about it. I hope it will be helpful to people who want to know more about this virtue and how to nurture it.
Fairness may seem like a things-I-learned-in-kindergarten virtue, but experimental evidence shows that it can be difficult for mature adults to wrap their heads around. To summarize: you can set up an experiment in such a way that a subject declares that being fair is very important, expresses that A is more fair than B, does B anyway, and then congratulates themself for their fairness, all seemingly with eyes wide open. More about that later…
What is fairness?
I struggled to define fairness. It seems to be different things, or at least to emphasize different things, in different contexts. (See also Eliezer Yudkowsky’s The Bedrock of Fairness and Is Fairness Arbitrary, and the comments to those.) I’ll stick to being descriptive rather than prescriptive:
One aspect of fairness is impartiality. A process tainted by favoritism, self-dealing, invidious discrimination, or nepotism is unfair. (Why am I still wearing these rags while my wicked stepsisters get all gussied up for the ball?)
A decision can be unfair if it was based on things that are not justly relevant, for instance if prejudice was involved. A decision can also be unfair if it is not based on anything at all: if it is arbitrary.
Fairness can be connected to merit. It is fair to give the trophy to the winner. It would be unfair to give it to the second-place finisher instead. (Such a travesty would probably be a case of prejudice, partiality, or arbitrariness, so maybe it’s subsumed by the previous cases.)
That said, the trophy might fairly be withheld from the winner if it was not won by means of fair play. Fair in that sense means by-the-rules (or sometimes by the unwritten rules of sportsmanship). Being unjust in general is sometimes also called unfair: it’s unfair to go back on your word, or to counterfeit money, or to defraud your customers, for example.
You are supposed to be best able to make fair decisions if you can adopt an objective and impartial perspective, unaffected by irrelevancies. Political philosopher John Rawls created a thought experiment to this end with his “veil of ignorance” behind which you could make your decisions about what would be fair. As a simplified example, imagine that there is going to be a robbery but you don’t know whether you are going to be the robber or the person being robbed (because you are behind the veil of ignorance). You can decide: would you rather have this take place under a system in which robbers are caught and forced to relinquish their ill-gotten gains to their victims, or one in which the robbers get away with it and can brazenly keep what they have stolen, or some other alternative? Because you do not know whether you would personally gain or lose by your decision, you can decide based on fairness rather than on partiality.
How is fairness to be distinguished from justice? Is fairness just a component of justice? Or maybe a primitive form of justice? Can you be unfair without also being unjust? Maybe so. For example: If I’m handing out bonuses at the end of the year, and I give higher bonuses to men than to women, or to relatives than to non-relatives, I’m arguably being unfair even if justice did not obligate me to give bonuses of any sort to anyone.
Fairness can be in tension with the virtue of loyalty, which can come packaged with an expectation of partiality.
One way that fairness has been studied rigorously has been through cake-cutting algorithms. These involve dividing a cake among multiple people. The challenge is to prove that some algorithm exists by which the people involved can divide the cake such that a chosen criterion of fairness is respected. There are various criteria of fairness that you might choose, such as:
- equality: everybody gets a slice of cake with the same value (size, in a simple model)
- proportionality: everybody gets a slice of cake that they value at least as much as the slice they would have gotten if the cake had been equally divided among everybody by some omniscient authority
- contentment: nobody has reason to wish they had someone else’s slice of cake instead of their own
A simple case is dividing a cake between two people. If person A slices the cake in two, and person B makes the selection of which slice goes to which person, this incentivizes A to slice the cake such that they would not prefer either slice over the other, as presumably if one slice is better, B will choose it. This algorithm guarantees proportionality and contentment. Note that this works even if one person would most prefer the bigger slice of cake, and the other person the slice with more cherries on top. They don’t have to use the same criterion for valuing the slices. Proportionality- and contentment-guaranteeing algorithms of this sort have also been proven for dividing the cake among any number of people.
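The two-person divide-and-choose procedure can be sketched in a few lines. This is a toy model of my own making (the cake is a list of discrete pieces, and each player supplies their own valuation function), so with chunky pieces the cutter can only approximate an even split by her own lights:

```python
# Divide-and-choose for two people, on a cake modeled as discrete pieces.
# Each player values pieces by their own criterion (size, cherries, ...).

def divide_and_choose(cake, value_a, value_b):
    """A cuts the cake into two slices she values as equally as possible;
    B then picks whichever slice B values more. Returns (a_slice, b_slice)."""
    best_cut, best_gap = 1, float("inf")
    for cut in range(1, len(cake)):
        left = sum(value_a(p) for p in cake[:cut])
        right = sum(value_a(p) for p in cake[cut:])
        if abs(left - right) < best_gap:
            best_cut, best_gap = cut, abs(left - right)
    slices = [cake[:best_cut], cake[best_cut:]]
    # B chooses the slice B values more; A takes the remainder.
    b_slice = max(slices, key=lambda s: sum(value_b(p) for p in s))
    a_slice = slices[0] if b_slice is slices[1] else slices[1]
    return a_slice, b_slice

# A cares only about size; B cares only about cherries.
cake = [{"size": 3, "cherries": 0}, {"size": 1, "cherries": 2},
        {"size": 2, "cherries": 1}, {"size": 2, "cherries": 0}]
a_slice, b_slice = divide_and_choose(cake,
                                     value_a=lambda p: p["size"],
                                     value_b=lambda p: p["cherries"])
```

In this run A ends up with half the total size and B with most of the cherries, so each is content by their own measure of value, even though their measures differ.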
For something more practical than cake-dividing, imagine this scenario: A set of roommates is moving into a rental house. They need to decide how to divide up the rent, and who gets which room. The rooms are different: some are larger than others, have better window views, are nearer or farther from the noisy neighbor, etc. Is there a fair way to distribute the rooms and divide up the rent? Yes: it’s an application of Sperner’s Lemma and it comes out of the same branch of mathematics as the cake-dividing stuff.
It’s nice to know that at least in some cases, you don’t have to eyeball it, but instead there is a proven method for arriving at a fair result — at least among people who can agree on which criterion of fairness to use.
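To make the criterion behind the rent-division result concrete, here is a small check of my own devising (not the Sperner’s-Lemma procedure itself, which finds such a split rather than merely verifying one): an assignment of rooms and rents is envy-free if no roommate would rather take someone else’s room at someone else’s rent.

```python
# Check whether a proposed room/rent assignment is envy-free.
# valuations[i][r] = what roommate i thinks room r is worth per month.
# Roommate i is assigned room i and pays rents[i].

def is_envy_free(valuations, rents):
    n = len(rents)
    for i in range(n):
        my_surplus = valuations[i][i] - rents[i]
        for r in range(n):
            if valuations[i][r] - rents[r] > my_surplus:
                return False  # i would rather swap into room r at its rent
    return True

# Total rent $3000; three roommates value the three rooms differently.
valuations = [[1200, 1000,  800],   # roommate 0's values for rooms 0, 1, 2
              [1100, 1150,  750],
              [ 900,  950, 1150]]
print(is_envy_free(valuations, rents=[1150, 1000, 850]))  # True
```

Under this split, each roommate’s own room-minus-rent surplus is at least as large as what they would get from any swap, so nobody has grounds for complaint by their own valuation.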
Ultimatum and dictator games
An ultimatum game is a variety of game theory scenario. In its basic form (there are many variations), player A is given money to split with the other player, B. A decides how much of the money each player gets. B can decide either to accept the portion A has granted them, or to reject it, in which case both player A and player B get nothing at all. (Dictator games are severe variants of the ultimatum game in which B is reduced to helplessness. Player A divides the pot, and player B gets what proportion player A decides to give them, without any opportunity to reject this.)
Naively, it is always in B’s utility-maximizing interest to accept any non-zero portion, as this is better than the zero portion B would get by rejecting it. And correspondingly it is always in A’s utility-maximizing interest to offer B a tiny portion. (If the game is repeated, or under certain other assumptions, this calculus can change.) However, experimentally, people do not utility-maximize in this way: It is most common for A to offer an even 50/50 split, and B will often reject an offer if it is low but non-zero. This may suggest that people are biased towards fairness at the expense of naive utility-maximization. (In other games, people will go out of their way to punish unfairness in ways that also would not be predicted from naive utility-maximization; see this example.)
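The payoff logic above can be captured in a tiny model (the fairness-threshold responder is my own simplification of the experimental findings, not a standard named model):

```python
# One round of the ultimatum game: the proposer offers a share of the pot,
# and the responder accepts only if the offer meets a personal threshold.

def ultimatum_round(pot, offer, rejection_threshold):
    """Return (proposer_payoff, responder_payoff)."""
    if offer >= rejection_threshold:
        return pot - offer, offer   # offer accepted
    return 0, 0                     # offer rejected: both get nothing

# A purely utility-maximizing responder accepts any positive offer...
print(ultimatum_round(100, 1, rejection_threshold=1))    # (99, 1)
# ...but real responders often reject lowball offers, punishing the
# proposer at a cost to themselves:
print(ultimatum_round(100, 10, rejection_threshold=30))  # (0, 0)
print(ultimatum_round(100, 50, rejection_threshold=30))  # (50, 50)
```

The middle case is the interesting one: the responder walks away from $10 to deny the proposer $90, which is irrational by naive utility-maximization but consistent with a taste for punishing unfairness.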
However, a number of modifications to the experiment have been tried that change the results in revealing ways. I get nearly ten thousand results for “ultimatum games” on Google Scholar, and ten thousand more for “dictator games”. Researchers have delved into all sorts of interpretations of these games, and have designed a multitude of variants meant to clarify various nuances.
For the purposes of this discussion of fairness-as-a-virtue I’m most interested in the results from the varieties of the dictator game that were developed by Daniel Batson and extended by those who followed his lead. There’s a good summary of these at the Stanford Encyclopedia of Philosophy entry on “Distributive Justice and Empirical Moral Psychology.”
Participants in these experiments had the dictator-game-like role of assigning themselves and another subject to two tasks, one of which was described as being clearly more favorable than the other. “After making the assignment privately and anonymously, participants were asked about what was the morally right way to assign the task consequences, and to rate on a 9-point scale whether they thought the way they had actually made the task assignment was morally right.” Almost none of the participants said that simply assigning the better of the tasks to themselves was the morally right thing to do, but a large majority of them did assign the tasks such that they got the better of the two.
In another variation, the participants were pointedly given a coin that they could flip to make the task assignment randomly should they so choose. 70% of participants agreed that assigning the tasks by using the results of a coin flip was the correct thing to do. Half of the participants followed through on this and actually flipped the coin. Of those who did not flip the coin, 90% gave the favorable task to themselves. Of those who did flip the coin, 90% gave the favorable task to themselves as well. When they were later asked to rate the fairness of their decision-making process, those who had flipped the coin rated themselves as having been significantly more fair than those who hadn’t.
This (and other permutations of Batson-style games) suggests to me that people are very vulnerable to self-perceptions of fairness that do not match their actual behavior, and that inclinations to fairness are not very strong or widely-held. If I want to have the habitual characteristic — the virtue — of fairness, therefore, I should expect that I will have to be extraordinarily vigilant and skeptical about my behavior.
Political systems, whether or not they like to admit it, depend on the consent of the governed. If enough of the governed come to feel that the system is unfair, particularly if they are on the raw end of the deal, this is a threat to political stability. Those who have a stake in the political status quo will therefore try to see to it that the state is portrayed as a fair one, and may go so far as to support reforms that make the system more fair.
How a political system can demonstrate that it is a fair one, however, is a matter of much debate and little consensus.
“Equality” is a perennially popular variety of political fairness, though equality-how-exactly can be a wiggly issue. Equality before the law (“The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal loaves of bread.” ―Anatole France)? Abolishing officially-enforced castes? Distributing equal ownership shares of everything?
“From each according to their ability, to each according to their needs,” is an aphoristic formula that caught on in some circles.
John Rawls made a name for himself with an ingenious defense of a sort of maximin version of fairness. An outcome is the fairest one, he thought, if in no other outcome would the worst-off people in it be better off than the worst-off people in this one.
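The maximin rule is simple enough to state as code. A toy illustration (the candidate distributions are made up for the example): among alternative distributions, pick the one whose worst-off member is best off, ignoring totals and averages.

```python
# Rawls-style maximin choice over candidate distributions of some good.

def maximin_choice(distributions):
    # The "best" distribution is the one maximizing the minimum share.
    return max(distributions, key=min)

options = [
    [10, 10, 10],   # equal, but modest
    [30, 12,  5],   # largest total, worst floor
    [18, 15, 11],   # best floor: this one wins under maximin
]
print(maximin_choice(options))  # [18, 15, 11]
```

Note that maximin rejects the highest-total option because its worst-off member does worse, which is exactly the feature critics like Nozick push back on.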
Libertarians tend to prefer Robert Nozick’s rejoinder. In his reckoning, if you start with a fair system, any other system that can be reached from that starting point through intermediate steps that are themselves fair (no force or fraud was involved, for instance) is also fair. In other words: Fairness is not so much about where you’re at, but how you got there.
I race through these shameless oversimplifications of political philosophy just to give some idea of the breadth of notions about what’s “fair” that are out there. We do not agree about what is fair. We don’t even agree about what fair is. It is enough to make one suspect that highfalutin theories of distributive justice have mostly to do with coming up with impressive reasons why the outcome you would find preferable is also the fair one.
How to develop fairness
I didn’t find much about how to become more fair. There is an intervention called “transactive discussion” in which two people with slightly different ideas of fairness discuss a moral dilemma together (e.g. the Heinz dilemma). In the results of some studies, this appears to help the person with a “lower” level of fairness sophistication raise the quality of their fairness evaluation in a measurable and lasting way. So it might be helpful to discuss moral dilemmas (or just ordinary moral quandaries) with others.
John Rawls, A Theory of Justice (1971)
Ariel D. Procaccia, “Cake Cutting Algorithms” Handbook of Computational Social Choice (2016)
Albert Sun, “To Divide the Rent, Start With a Triangle” New York Times (28 April 2014)
Anatole France, The Red Lily (1894)
Robert Nozick, Anarchy, State, and Utopia (1974)
Ben Franklin, Autobiography (1791)
Marvin W. Berkowitz, “The Role of Transactive Discussion in Moral Development” Sum (1980)
Marvin W. Berkowitz & John Gibbs, “Measuring the Developmental Features of Moral Discussion” Merrill-Palmer Quarterly (1983)