In economics, the ideal, or first-best, outcome for an economy is a Pareto-efficient one, meaning one in which no market participant can be made better off without someone else being made worse off. But such an outcome can only occur under the conditions of "Perfect Competition" in all markets, which never holds in reality. And when it is impossible to achieve Perfect Competition due to some unavoidable market failures, obtaining the second-best (i.e., best given the constraints) outcome may involve further distorting markets away from Perfect Competition.
To me, perhaps because it was the first such result that I learned, “second best” has come to stand generally for the yawning gap between individual rationality and group rationality. But similar results abound. For example, in Social Choice Theory, Arrow's Impossibility Theorem states that there is no voting method that satisfies a certain set of axioms, which are usually called fairness axioms, but can perhaps be better viewed as group rationality axioms. In Industrial Organization, a duopoly can best maximize profits by colluding to raise prices. In Contract Theory, rational individuals use up resources to send signals that do not contribute to social welfare. In Public Choice Theory, special interest groups successfully lobby the government to implement inefficient policies that benefit them at the expense of the general public (and each other).
On an individual level, the fact that individual and group rationality rarely coincide means that often, to pursue one is to give up the other. For example, if you’ve never cheated on your taxes, or slacked off at work, or lost a mutually beneficial deal because you bargained too hard, or failed to inform yourself about a political candidate before you voted, or tried to monopolize a market, or annoyed your spouse, or annoyed your neighbor, or gossiped maliciously about a rival, or sounded more confident about an argument than you were, or taken offense at a truth, or [insert your own here], then you probably haven't been individually rational.
"But, I'm an altruist," you might claim, "my only goal is societal well-being." Well, unless everyone you deal with is also an altruist, and with the exact same utility function, the above still applies, although perhaps to a lesser extent. You should still cheat on your taxes because the government won't spend your money as effectively as you can. You should still bargain hard enough to risk losing deals occasionally because the money you save will do more good for society (by your values) if left in your own hands.
What is the point of all this? It's that group rationality is damn hard, and we should have realistic expectations about what's possible. (Maybe then we won't be so easily disappointed.) I don't know if you noticed, but Pareto efficiency, that so-called optimality criterion, is actually incredibly weak. It says nothing about how conflicts between individual values must be adjudicated, just that if there is a way to get a better result for some with others no worse off, we'll do that. In individual rationality, its analog would be something like, "given two choices where the first better satisfies every value you have, you won't choose the second," which is so trivial that we never bother to state it explicitly. But we don't know how to achieve even this weak form of group rationality in most settings.
In a way, the difficulty of group rationality makes sense. After all, rationality (or the potential for it) is almost a defining characteristic of individuality. If individuals from a certain group always acted for the good of the group, then what makes them individuals, rather than interchangeable parts of a single entity? For example, don't we see a Borg cube as one individual precisely because it is too rational as a group? Since achieving perfect Borg-like group rationality presumably isn't what we want anyway, maybe settling for second best isn't so bad.
"In a way, the difficulty of group rationality makes sense. After all, rationality (or the potential for it) is almost a defining characteristic of individuality. If individuals from a certain group always acted for the good of the group, then what makes them individuals, rather than interchangeable parts of a single entity? For example, in Star Trek, don't we see a Borg cube as one individual precisely because it is too rational as a group? Since achieving perfect Borg-like group rationality presumably isn't what we want anyway, maybe settling for second best isn't so bad."
An intriguing statement. However, you can extend it in the other direction, inside a person. A group is made up of different people with different values, and therefore fails to achieve optimal satisfaction of everyone's values. An "individual" is composed of different subsystems trying to optimize different things, and the individual can't optimize them all. This is an intrinsic property of life / optimizers / intelligence. I don't think you can use it to define the level at which individuality exists. (In fact, I think trying to define a single such level is hopelessly wrongheaded.) If you did, I would not be an individual.
I don't think you can deny that there is indeed a huge gap between group rationality and individual rationality. As individuals, we're trying to better approximate Bayesian rationality and expected utility maximization, whereas as groups, we're still struggling to get closer to Pareto-efficiency.
An interesting question is why this gap exists, given that an individual is also composed of different subsystems trying to optimize different things. I can see at least three reasons:
Also, most subsystems are not just boundedly rational; they have fairly easily characterized hard bounds on their rationality. Boundedly rational individuals have aids like paper and pencil that enable them to think more steps ahead, albeit at a cost, while the boundedly rational agents of which I am composed, at least most of them, simply can't trade off resources for deeper analysis at all, making their behavior relatively predictable to one another.
Pareto efficiency isn't the gold standard of fairness or efficiency; it's the gold standard of, "You'd have to be a little bit crazy to oppose this."
By way of clarification: it is easy to oppose individual Pareto-efficient distributions... it's more difficult to oppose every Pareto-efficient distribution.
E.g. if the possible distributions are (10,0), (9,9) and (9,10), it's pretty easy to oppose (10,0) even though it's Pareto-efficient. Indeed, many people would rank (9,9) above (10,0) even though (9,9) is Pareto-inefficient. But it's tougher to prefer (9,9) to (9,10).
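A quick sketch (in Python, purely illustrative) of the Pareto-dominance check behind these comparisons:

```python
# An allocation is Pareto-dominated if some other allocation makes at
# least one person better off and no one worse off.  The allocations
# below are the ones from the example above.

def dominates(a, b):
    """True if allocation a Pareto-dominates allocation b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_efficient(allocations):
    """Keep only allocations not dominated by any other in the list."""
    return [a for a in allocations
            if not any(dominates(b, a) for b in allocations)]

allocations = [(10, 0), (9, 9), (9, 10)]
print(pareto_efficient(allocations))  # (9, 9) is dominated by (9, 10)
```

Note that (10, 0) survives the check even though many would rank it below the dominated (9, 9), which is exactly the point: efficiency and fairness come apart.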
Of course, there are probably strong egalitarians who would prefer (9,9) to (10,9). Are such people necessarily crazy?
Libertarian answer: "Crazy or evil, yes."
Crazy, evil, or just not understanding (at the instinctive level) that the figures in question are intended to represent absolute utility, with both social-emotional consequences and future implications already taken into account.
For many practical situations for which (10,9) may be used as a simplified model, that extra 1 gives an actual loss in utility.
I would only use the description 'crazy' once it had been explained in detail that:
No, we don't mean you get 9 resources, your rival gets 10 and so you get laid less.
No, we don't mean that your rival has greater resources now, and so will be able to capitalise on that difference to further increase the discrepancy until he makes himself your feudal lord.
While I acknowledge ignorance is a form of 'crazy', it would not be crazy to support (9,9) until such time as it can be demonstrated that these utility functions are actually the abstract ideals implied.
When someone says, "OK, the rich are getting richer and the poor are staying the same. This is not PE," the problem is not solved by responding, "Well, just assume the numbers are utility values, and the problem disappears!" You cannot measure the utility (or especially the counterfactual utility) with any precision. So "They're utilities!" as I've heard (and used) it, tends to be a hand-wavy manner of dismissing a potentially serious problem by assumption.
I think a lot of people stubbornly refuse to accept that such values represent utilities because that assumption requires a rather violent departure from reality and realistic measures. Nothing is ever measured or calculated in utilities, so if your model of PE denominates values in them, that model may be shiny and interesting and have lots of cool mathematical properties, but it ain't very useful when we're applying it to, say, income disparity.
Crazy, evil, or the second player.
Fred has a 'Jesus' machine. It is a machine that can take one fish and turn it into three units of foodstuff, where a fish usually has one unit.
Fred starts with three fish. I start with 9. It costs a fixed 0.5 units of food to transport between me and Fred, payable at the end of the month.
Sally the Senator is neither crazy nor evil, and she's also good at basic arithmetic. She proposes a law that says I must give one fish to Fred for him to manufacture into three units of food. Fred is to split the produce between the two of us evenly.
Sally can see that this outcome will give 10 and 9 to Fred and me respectively, where without Sally's coercion we would have got 9 and 9.
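The accounting here can be checked with a short sketch; the transport-cost split is my assumption (0.5 per leg, each sender paying for their own leg), since the setup leaves it ambiguous:

```python
# A sketch of the arithmetic behind Sally's proposal.  The transport
# accounting is assumed: 0.5 units per leg (fish out, food back), with
# each sender paying for their own leg.

MACHINE_YIELD = 3   # units of food per fish run through Fred's machine
FISH_VALUE = 1      # units of food per ordinary fish
TRANSPORT = 0.5     # cost per transport leg

# Without Sally's law: Fred machines his own 3 fish, I keep my 9.
fred_baseline = 3 * MACHINE_YIELD   # 9
me_baseline = 9 * FISH_VALUE        # 9

# With Sally's law: I send 1 fish; Fred machines it and returns half the produce.
produce = 1 * MACHINE_YIELD                           # 3 units, split 1.5 / 1.5
fred = 3 * MACHINE_YIELD + produce / 2 - TRANSPORT    # 9 + 1.5 - 0.5 = 10
me = 8 * FISH_VALUE + produce / 2 - TRANSPORT         # 8 + 1.5 - 0.5 = 9

print((fred, me))   # (10.0, 9.0), versus a baseline of (9, 9)
```

Under this reading, Sally's coercion is a Pareto improvement: Fred gains a unit and I end up no worse off.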
I think the libertarian answer is "No comment".
I don't think libertarians have nearly as much to say about optimization as they do about regulation. The libertarian answer would be, If you and Fred want to work something out, fine, but Sally has no business telling either of you what to do with your fish.
That was my impression.
My libertarian answer is that you've just convinced the future Freds of the world to keep quiet about any Jesus capabilities they discover.
And if my illustration didn't, then this one might!
I guess that's the commie answer too. The relevant comparison is between (9,9) and (11,8), or maybe (100,1) depending on your rhetorical temperature.
I'd think the more realistic egalitarian opposition would be between, say, (100, 35) and (50,34), i.e. the very rich getting even richer while the poor stay still. There are probably a few who would hold the (10,9) < (9,9), but that's much less realistic.
The real problem with PE is that it specifically determines the "fairness" of a marginal transaction, not the fairness of the actual distribution.
Not true. Perfect competition => Pareto Efficiency. !Perfect Competition !=> !Pareto Efficiency.
NB: IMHO this post on the theory of the second best is slightly better than the wikipedia one.
Unfortunately, I think this is one of those instances where wikipedia can lead one (slightly) astray. Greenwald-Stiglitz is not quite as far-reaching as all that. (Though it is pretty far-reaching, hence my initial comment being a nitpick.) Contra wikipedia, Greenwald-Stiglitz applies to two specific violations of perfect competition: information asymmetry and incomplete risk markets. These do not exhaust the space of possible violations of perfect competition, hence, there may be violations of perfect competition that nonetheless allow Pareto efficiency (at least in theory; in practice, information asymmetry and incomplete risk markets are pretty pervasive).
One (unrealistic) example of a non-perfectly competitive economy that is nonetheless pareto efficient is a centrally-planned economy where the government (magically) imposes exactly the same set of prices/quantities as would naturally arise in the perfectly competitive economy. Another is if two externalities (magically) exactly offset each other. Another is if a government imposes a tax that exactly offsets an externality.
Again, I do not claim that these are especially empirically relevant. My point was a fairly pedantic technical one.
ETA: your wikipedia link has a colon at the end that shouldn't be there.
Do you happen to have any references to back up your claims?
Not that I particularly care about Greenwald-Stiglitz. But in the time taken to make your point and dismiss it as irrelevant you could prevent some future helpless sap from the misfortune of being led slightly astray!
Come to think of it, I'm going to have to use this retort some day:
Well, the paper itself (referenced in the wikipedia page Wei referred to) is obviously the definitive source. The abstract reads:
All the other summaries I've ever seen also describe the result in similarly narrow terms, e.g.:
the wikipedia entry on Joe Stiglitz, which states that
the paper by Dixit that comes up as the first google hit for "Greenwald Stiglitz", which states that:
I expect that the statement Wei linked to is just a typo where someone accidentally substituted "perfect competition" for "perfect information".
ETA: I actually would have edited the wiki entry myself; but I didn't want to create the impression I'd done so just to back up my claims.
I don't know the literature, but I thought the generic-violations theorems covered more ground than that. Can you give an example that is generically Pareto-efficient? Your cancelling-externalities example is not generic. The other example doesn't seem well-posed enough to talk about genericity.
Why does my original point require genericity?
Logic appears to side with you on this one.
I'm afraid I'm not sure what you mean by generic, nor why it's especially relevant to my original point. Could you explain?
"Generic" is in the statement of the Greenwald-Stiglitz theorem, as quoted by Wei Dai. It means, roughly, probability 1. The theorem does not say that information asymmetry leads to Pareto inefficiency, only that it does unless there is a numerical coincidence.
I thought you were saying that the GS theorem becomes false if you weaken the hypothesis to allow other kinds of violations. But your examples seemed to also strengthen the conclusion from generic efficiency to efficiency for all parameter values. If you strengthen the conclusion without weakening the hypothesis, it's already false.
Sorry about the deletion. I thought I'd got in quick enough. Clearly not!
I was saying that as far as I knew, the quotation misrepresented the scope of the GS theorem, which did not make claims about other types of violations. You are right that my offsetting externalities counter-example did not rely on this though.
The counter-example I had always been given as evidence that a non-perfectly competitive economy could theoretically achieve Pareto efficiency was that of a perfectly informed, benevolent central planner. However, I readily confess that this does seem something of a cheat. In any event, whether it's technically correct or not, the point is practically irrelevant, and probably not worth wasting any more time on.
I apologise for the diversion.
This is weird. I have always thought that the rational thing to do would be something like doing your very best for the prosperity of the society you live in, abiding by every norm and law you can, etc. I regarded the categorical imperative as an obvious result of rational and selfish decision making.
So I was wrong, huh?
The most charitable thing that categorical imperatives can be called is arational. The most accurate thing they can be called is unintelligible. The statement "You should do X" is meaningless without an "if you want to accomplish Y," because otherwise it can't answer the question, "Why?" More importantly, there is no way to determine which of two contradictory CIs should be followed.
No moral rule can be derived via any rational decision-making process alone. Morality requires arational axioms or values. The litany of things you "should" have done if you were individually rational does not actually follow. "Rational" gets used to mean "strictly selfish utility maximizer" a bit more often than it should be, which is never. There may be people for whom it is indeed individually irrational not to do those things, but as we all have different values, that does not mean we all are such people.
I'm using "categorical imperative" as distinct from "hypothetical imperative": "Don't lie" vs. "Don't lie if you want people to trust you." There can be some confusion over what people mean by CI, from what I've seen written on this site.
Categorical imperatives that result in persistence will accumulate.
Why should any lifeform preserve its own existence? There's no reason. But those that do eventually dominate existence. Those that do not, are not.
Nah, that's what they want you to think. (Which seems to be more or less literally how norms apply in reference to altruism.)
I thought I addressed this issue in the paragraph starting "But, I'm an altruist." Is there something about my argument that you find unclear or unsatisfactory?
Argue this point in more detail, it isn't obvious.
It's not obvious, yeah. My failure of communication in the original comment. My point, as I intended it, was that I mixed my intuitive feeling ("a rationalist should follow the categorical imperative because it feels sensible") with an obvious fact. My reasoning was based on a simplistic model of the PD where punishing norm violations, and trusting and abiding otherwise, works. So I was basically asking for clarification in the guise of a statement :)
I think my earlier response to you (now deleted) misunderstood your comment. I'm still not sure I understand you now, but I'll give it another shot.
All of the things I listed are commonly accepted within the relevant fields as individually rational. It boils down to the idea that it is individually rational to defect in a one-shot PD where you'll never see the other player again and the result will never be made public. Yes, we have lots of mechanisms to improve group rationality, like laws, institutions, social norms, etc., but all of that just shows how hard group rationality is.
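The one-shot PD logic can be made concrete with a standard (illustrative) payoff matrix, in which defection strictly dominates for each player even though mutual cooperation is better for the pair:

```python
# A standard one-shot Prisoner's Dilemma with illustrative payoff numbers.
# payoffs[(my_move, their_move)] = my payoff; "C" = cooperate, "D" = defect.
payoffs = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Defect is a strictly dominant strategy: better whatever the other does.
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)] > payoffs[("C", their_move)]

# Yet mutual cooperation yields a higher total than mutual defection.
assert 2 * payoffs[("C", "C")] > 2 * payoffs[("D", "D")]
print("Defection dominates individually; cooperation wins collectively.")
```

This is the individually-rational defection in a one-shot PD referred to above: each player's dominant move leads the group to its worse outcome.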
Here's another example that might help make my point. How much "CPU time" does an average person's brain spend to play status games instead of doing something socially productive? That is hardly rational on a group level, but we have little hope of reducing it by any significant amount.
Eliezer's solution to Newcomb's problem doesn't apply to human cooperation.
Sorry for being dim, but I'm struggling to see what many of your examples have to do with second-best theory (as opposed to just being kind of bad things). Could you maybe expand a bit on what you mean?
E.g. how do the "yawning gap between individual rationality and group rationality" or Arrow's impossibility theorem reflect the idea that if you constrain one variable in your optimization problem, other variables need not take their first-best values? (Or are you just using second-best to mean "you can't always get what you want"? If so, I guess that's fine, but I think you're missing the distinguishing feature of the theory!)
FWIW, to me, the most obvious potential applications of second-best theory to rationality are that, given that we have limited processing capacity, and are subject to self-serving biases, getting more information and learning about biases need not improve our decision-making. More info can overwhelm our processing capacity, and learning about individual biases can, if we're not careful, lead us to discount others' opinions as biased, while ignoring our own failings.
Yeah, I'm not really using the distinguishing feature of the Theory of the Second Best in this post. Eliezer had made the same point as your paragraph starting "FWIW" in a post and I pointed out the connection to the Theory of the Second Best in a comment. Now I'm just using "second best" to refer generically to any situation where group rationality conflicts with individual rationality, and we have to settle for something less than optimal.
"In economics, the ideal, or first-best, outcome for an economy is a Pareto-efficient one, meaning one in which no market participant can be made better off without someone else being made worse off."
Nitpick - Pareto-efficient outcomes are, in real social systems, horrible, horrible outcomes, very far down the scale in terms of overall utility. They are by nature Utopian, and they fail the way Utopias fail. In a Pareto society, you can't do anything productive, because everything you do makes someone worse off.
Pareto-efficient outcomes are used in economics only because they are mathematically convenient. It's like looking under a streetlamp for your keys because the light is better there.
A much better form of "optimal" outcome would be one cast in dynamic terms: instead of saying "No transaction is allowed if there exists Y such that d(utility(Y))/dt < 0", say "No transaction is allowed such that the sum over all Y of d(u(Y))/dt < 0".
Your second condition is analogous to Marshall efficiency, or the closely-related (same?) Kaldor-Hicks efficiency
There is a large difference between "there are no more 'freebies' where we can make someone better off without hurting someone else" and "we will not allow a change if it hurts anyone at all".
The first is Pareto-efficient, the latter is a horrible idea.
Right, to demand a Pareto-efficient outcome is not to demand that all changes are Pareto-improvements.
PG is right to say that the society he describes is Pareto-efficient and awful, but it's not the only Pareto-efficient society.
Regarding Pareto-efficient outcomes, what do you think would happen if Omega came down and allocated all goods in a pareto-efficient way, and then left? Assume he did this simply via pareto-improving trades, not by messing with distributions or anything. Sure, maybe for a little while there would be very few economic transactions. The only trades that could happen would be ones with negative externalities because otherwise you wouldn't be able to find one that made both parties better off. However, around dinner time people's preferences would start changing such that they would prefer some food to some of their money and all of a sudden there would be a ton of pareto-improving trades available.
My point is that everyone's utility function is a function of time. Therefore any static allocation of goods would be pareto-efficient for a very short time, and then start to become pareto-inefficient very quickly, unless there was a constant stream of transactions pushing it back out onto the efficient frontier.
I sense phantom opportunity cost argument. Pareto efficiency is to be found among the available options, not among the unattainable ones.
I must be dense today, but I don't see the "phantom opportunity cost" connection.
Pareto efficiency is more reachable in game theory, where every agent is a party to every transaction; but not in real life, where you are not a party to, nor even aware of, most transactions that affect you. And a good thing, too; otherwise, we would live in a Pareto-optimal dystopia.
Imagine a world where every decision anyone made was subject to veto by anyone else. That would be a Pareto-optimal society.
Upvoted for insight, but this is wrong. Forbidding Pareto-bad transactions isn't enough to bring you to an optimum, you also need to make Pareto-good transactions happen.
The topic certainly deserves discussion. Sometime ago I was annoyed at everyone's lets-all-do-this exhortations for fixing the problem of offense, and wanted to write a post about individually rational behavior in flamewars. Maybe it'll surface eventually.
If the Borg were rational, they'd build more aerodynamic ships.
Aerodynamism IN SPACE? Whatever for? The cubes are almost certainly constructed in a vacuum and not designed to land, nor operate in atmo. The cube configuration allows for very efficient layout on the interior, very tidy formations in fleets, and lots of nice flat surface area to put exterior equipment.
Well the surface is hardly flat - it's at least not smooth. It's all knobbly and stuff. And I find it hard to believe that cubes are better than spheres for efficiency of interior layout, though perhaps their 'artificial gravity' makes a difference in a mysterious way.
...I really need to read more SF.
For hard vacuum?
Uh huh. They'd also improve the sound proofing. Damn those space battles get noisy some days!
'Perfect competition' is utter nonsense. Not only is it impossible, there is also nothing intrinsically desirable about it.
And Pareto-Superior conditions are also nonsense. There is no non-arbitrary way to compare utilities of separate actors. What makes someone 'better' or 'worse' off is entirely subjective, and not at all subject to arithmetic comparison or external validation/invalidation.
Perfectly elastic collisions and point masses are also impossible, but that doesn't stop physicists from using them in their models sometimes. A simplification can be theoretically useful even if it can't exist in reality, especially when you're studying something as complicated as markets.
And perfect competition does have desirable qualities, it (along with some other conditions) allows for maximum allocative efficiency, meaning that all goods and services are held by the people who value them the most.
And utility incomparability is not a big problem for Pareto efficiency, as it's not that hard (at least conceptually) to work out whether someone is better or worse off. The incomparability of utility functions is a problem for Kaldor-Hicks efficiency, but that's not what we're talking about here.
I reject the coherence of neoclassical modeling. I am a definite Misesian in this vein. Predictability and meaningless non-economic situations have nothing to do with the real economy, and have no impact on helping us to understand the real economy (except as counter-factuals, to isolate certain elements, but then they are counter-factuals and ONLY counter-factuals).
A pretty key aspect of pareto-efficiency is that there are no interpersonal utility comparisons. A pareto-improvement is an improvement that makes at least one person better off (by their own standards) while making no one worse off (by their own standards). Even if a trade makes one person much, much better off and another person only a tiny bit worse off, that is not a pareto-improvement. Any situation like that can usually be made into a pareto-improvement by having the person who is made much better off give enough money to the person who is made worse off that they are no longer made worse off.
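A minimal sketch of that side-payment argument, treating welfare changes as plain numbers purely for illustration (the figures are made up):

```python
# A trade that helps A a lot and hurts B slightly is not a Pareto
# improvement on its own, but a side payment from A to B can make it one.

trade_gain = {"A": 10, "B": -1}   # change in each party's welfare from the trade

def pareto_improvement(changes):
    """True if no one is worse off and at least one person is better off."""
    return (all(v >= 0 for v in changes.values())
            and any(v > 0 for v in changes.values()))

print(pareto_improvement(trade_gain))      # False: B is made worse off

side_payment = 2                           # A transfers 2 units to B
with_transfer = {"A": trade_gain["A"] - side_payment,
                 "B": trade_gain["B"] + side_payment}
print(pareto_improvement(with_transfer))   # True: A is +8, B is +1
```

The caveat, as noted above, is that this treats compensation as directly commensurable with welfare, which is exactly the kind of assumption the interpersonal-comparison objection targets.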
Whether something is a 'cost' or a 'benefit' is itself entirely subjective.