Here’s an insight I had about how incentives work in practice that I’ve not seen explained in an econ textbook or course.
There are at least three ways in which incentives affect behaviour: 1) via consciously motivating agents, 2) via unconsciously reinforcing certain behaviour, and 3) via selection effects. I think perhaps 2) and probably 3) are more important, but much less talked about.
Examples of 1) are the following:
- When content creators get paid for the number of views their videos have... they will deliberately try to maximise view-count, for example by crafting vague, clickbaity titles that many people will click on.
- When salespeople get paid a commission based on how many sales they make, but do not lose any salary due to poor customer reviews... they will selectively boast about and exaggerate the good aspects of a product, and downplay or sneakily circumvent discussion of the downsides.
- When college admissions are partly based on grades, students will work really hard to find the teacher’s password and get good grades, instead of doing things like being independently curious, exploring, and trying to deeply understand the subject.
One objection you might have to this is something like:
Look at those people without integrity, just trying so hard to optimise whatever their incentives tell them to! I myself, and indeed most people, wouldn’t behave that way.
On the one hand, I would make videos I think are good, and honestly sell products the way I would sell something to a friend, and make sure I understand my textbook instead of just memorising things. I’m not some kind of microeconomic robot!
And on the other hand, even if things were not like this… it’s just really hard to creatively find ways of maximising a target. I don’t know what appeals to ‘the kids’ on YouTube, and I don’t know how to find out except by paying for some huge survey or something... human brains aren’t really designed for doing maximising like that. I couldn’t optimise in all these clever ways even if I wanted to.
One response to this is:
Without engaging with your particular arguments, we know empirically that the conclusion is false. There’s a wealth of econometrics and micro papers showing how demand shifts in response to price changes. I could dig out plenty of references for you… but heck, just look around.
There’s a $10,000/year daycare close to where I live, and when the moms there take their kids to the cinema, they’ll tell them to pretend they’re 6 and not 7 years old just to get a $3 discount on the tickets.
And I’m pretty confident you’ve had persuasive salespeople peddle you something, and then went home with a lingering sense of regret in your belly…
Or have you ever seen your friend in a queue somewhere and casually slid in right behind them, just to get into the venue 5 minutes earlier?
All in all, if you give people an opportunity to earn some money or time… they’ll tend to take it!
This might or might not be a good reply.
However, by appealing to 2) and 3), we don’t have to make this response at all. The effects of incentives on behaviour don’t have to be consciously mediated. Rather...
- When content creators get paid for the number of views their videos have, those whose natural way of writing titles is a bit more clickbait-y will tend to get more views, and so over time accumulate more influence and social capital in the YouTube community, which makes it harder for less clickbait-y content producers to compete. No one has to change their behaviour or their strategies that much -- rather, when you change incentives you change the rules of the game, and so the winners will be different. Even for the less fortunate producers, those of their videos which are on the clickbait end of things will tend to bring in more views and money, and insofar as they just “try to make videos they like, see what happens, and then do more of what worked”, they will be pushed in this direction.
- When salespeople get paid a commission based on how many sales they make, but do not lose any salary due to poor customer reviews… employees of a more Machiavellian character will tend to perform better, which will give them more money and social capital at work… and this will give Machiavellian characteristics more influence over that workplace (before even taking into account returns to scale of capital). They will then be in positions of power to decide which new policies get implemented, and might choose those that they genuinely think sound most reasonable and well-evidenced. They certainly don’t have to mercilessly optimise for a Machiavellian culture, yet because they have all been pre-selected for such personality traits, they’ll tend to be biased in the direction of choosing such policies. As for their more “noble” colleagues, they’ll find that out of all the tactics they’re comfortable with and able to execute, the more sales-y ones will lead to more high-fives from the high-status people in the office, more room in the budget at the end of the month, and so forth.
- When college admissions are partly based on grades… the case is left as an exercise for the reader.
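The selection story above can be made concrete with a toy simulation (all numbers here are made-up assumptions, not empirical estimates): every creator keeps a fixed, innate level of clickbaitiness, yet because clickbaitier titles draw more views and audiences compound, the audience-weighted average clickbaitiness of the platform drifts upward with zero behaviour change by anyone.

```python
import random

random.seed(0)

# Each creator has a fixed, innate "clickbaitiness" in [0, 1]; nobody ever changes it.
creators = [{"clickbait": random.random(), "audience": 1.0} for _ in range(100)]

def platform_average(creators):
    """Audience-weighted average clickbaitiness -- what a typical viewer actually sees."""
    total = sum(c["audience"] for c in creators)
    return sum(c["clickbait"] * c["audience"] for c in creators) / total

before = platform_average(creators)

for year in range(20):
    for c in creators:
        # Clickbaitier titles get more views, and views compound into a bigger audience.
        c["audience"] *= 1.0 + 0.5 * c["clickbait"]

after = platform_average(creators)
assert after > before  # the platform got more clickbaity, though no individual changed strategy
```

The incentive never passes through anyone’s conscious planning; it only reweights who gets heard.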
If this is true and important, why don’t standard econ textbooks and courses explain it?
I have some hypotheses which seem plausible, but I don’t think they are exhaustive.
1. Selection pressure for explanations requiring the fewest inferential steps
Microeconomics is pretty counterintuitive (for more on the importance of this, see e.g. this post by Scott Sumner). Writing textbooks that explain it to hundreds of thousands of undergrads, even just using consciously scheming agents, is hard. Now both “selection effects” and “reinforcement learning” are independently difficult concepts, which the majority of students will not have been exposed to, and which aren’t the explanatory path of least resistance (even if they might be really important to a small subset of people who want to use econ insights to build new organisations that, for example, do better than the dire state of the attention economy, such as LessWrong).
2. Focus on mathematical modelling
I did half an MSc degree in economics. The focus was not on intuition, but rather on something like “acquire mathematical tools enabling you to do a PhD”. There was a lot of focus on not messing up the multivariable calculus when solving strange optimisation problems with solutions at the boundary or involving utility functions with awkward kinks.
The extent of this mathematisation was sometimes scary. In a finance class I asked the tutor what practical uses there were for some obscure derivative, which we had spent 45 minutes and several pages of stochastic calculus proving theorems about. “Oh,” he said, “I guess a few years ago it was used to scheme Italian grandmas out of their pensions”.
In classes when I didn’t bother asking, I mostly didn’t find out what things were used for.
3. Focus on the properties of equilibria, rather than the processes whereby systems move to equilibria
Classic econ joke:
There is a story that has been going around about a physicist, a chemist, and an economist who were stranded on a desert island with no implements and a can of food. The physicist and the chemist each devised an ingenious mechanism for getting the can open; the economist merely said, "Assume we have a can opener"!
Standard micro deals with unbounded rational agents, and its arsenal of fixed point theorems and what-not reveals the state of affairs after all maximally rational actions have already been taken. When asked how equilibria manifest themselves, and emerge, in practice, one of my tutors helplessly threw her hands in the air and laughed “that’s for the macroeconomists to work out!”
There seem to be few attempts to teach students how the solutions to these unbounded-rationality theorems are approximated in practice, whether via conscious decision-making, selection effects, reinforcement learning, memetics, or some other mechanism.
Thanks to Niki Shams and Ben Pace for reading drafts of this.
David Friedman is awesome. I came to the comments to give a different Friedman explanation for one generator of economic rationality from a different Friedman book than "strangepoop" did :-)
In "Law's Order" (which sort of explores how laws that ignore incentives or produce bad incentives tend to be predictably suboptimal) Friedman points out that much of how people decide what to do is based on people finding someone who seems to be "winning" at something and copy them.
(This take is sort of friendly to your "selectionist #3" option but explored in more detail, and applied in more contexts than to simply explain "bad things".)
Friedman doesn't use the term "mimesis", but this is an extremely long-lived academic keyword with many people who have embellished and refined related theories. For example, Peter Thiel has a mild obsession with Rene Girard who was obsessed with a specific theory of mimesis and how it causes human communities to work in predictable ways. If you want the extremely pragmatic layman's version of the basic mimetic theory, it is simply "monkey see, monkey do" :-P
If you adopt mimesis as THE core process which causes human rationality (which it might well not be, but it is interesting to think of a generator of pragmatically correct beliefs in isolation, to see what its weaknesses are and then look for those weaknesses as signatures of the generator in action), it predicts that no new things in the human behavioral range become seriously optimized in a widespread way until AFTER at least one (maybe many) rounds of behavioral mimetic selection on less optimized random human behavioral exploration, where an audience can watch who succeeds and who fails and copy the winners over and over.
The very strong form of this theory (that it is the ONLY thing) is quite bleak and probably false in general, however some locally applied "strong mimesis" theories might be accurate descriptions of how SOME humans select from among various options in SOME parts of real life where optimized behavior is seen but hard to mechanistically explain in other ways.
Friedman pretty much needed to bring up a form of "economic rationality" in his book because a common debating point regarding criminal law in modern times is that incentives have nothing to do with, for example, criminal law, because criminals are mostly not very book smart, and often haven't even looked up (much less remembered) the number of years of punishment that any given crime might carry, and so "can't be affected by such numbers".
(Note the contrast to LW's standard inspirational theorizing about a theoretically derived life plan... around here actively encouraging people to look up numbers before making major life decisions is common.)
Friedman's larger point is that, for example, if burglary is profitable (perhaps punished by a $50 fine, even when the burglar has already sold their loot for $1500), then a child who has an uncle who has figured out this weird/rare trick and makes a living burgling homes will see an uncle who is rich and has a nice life and gives lavish presents at Christmas and donates a lot to the church and is friends with the pastor... That kid will be likely to mimic that uncle without looking up any laws or anything.
Over a long period of time (assuming no change to the laws), the same dynamic in the minds of many children could lead to perhaps 5% of the economy becoming semi-respected burglars, though it would be easy to imagine another 30% of the private economy ending up focused on mitigating the harms burglary causes its victims?
(Friedman does not apply the mimesis model to financial crimes, or risky banking practices. However that's definitely something this theory of behavioral causation leads me to think about. Also, advertising seems to me like it might be a situation where harming random strangers in a specific way counts as technically legal, where the perpetration and harm mitigation of the act have both become huge parts of our economy.)
This theory probably under-determines the precise punishments that should be applied for a given crime, but as a heuristic it probably helps constrain punishment sizes, ruling out punishments that are hilariously too small. It suggests that any punishment is too small which allows there to exist a “viable life strategy” of committing a crime over and over and then treating the punishment as a mere cost of business.
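That heuristic amounts to a back-of-the-envelope expected-value check. Here is a minimal sketch using the $1500-loot/$50-fine burglary example from above (the probability-of-being-caught figures are my own made-up assumptions for illustration):

```python
def crime_is_viable(profit_per_crime, punishment_cost, p_caught):
    """A crime is a 'viable life strategy' if its expected value per attempt
    is positive -- i.e. the punishment reads as a mere cost of business."""
    expected_value = profit_per_crime - p_caught * punishment_cost
    return expected_value > 0

# Friedman's example: $1500 of loot vs a $50 fine. Even if every single
# burglary were caught and fined, burgling stays comfortably profitable.
assert crime_is_viable(profit_per_crime=1500, punishment_cost=50, p_caught=1.0)

# To kill the strategy, the expected punishment must exceed the loot:
# at a (hypothetical) 20% chance of being caught, the fine has to top $7500.
assert not crime_is_viable(profit_per_crime=1500, punishment_cost=8000, p_caught=0.2)
```

On the mimetic view, nobody in the story runs this calculation consciously; the inequality just determines which uncles end up looking rich enough to copy.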
If you sent burglars to prison for "life without parole" on first offenses, mimesis theory predicts that it would put an end to burglary within a generation or four, but the costs of such a policy might well be higher than the benefits.
(Also, as Friedman himself pointed out over and over in various ways, incentives matter! If, hypothetically, burglary and murder are BOTH punished with “life without parole on first offense” AND murdering someone makes you less likely to be caught as a burglar, then murder-plus-burglary might be the pair of crimes that gets mimetically generated: the combination can be mimetically viable even when burglary alone is not... If someone were trying to use data science to tune punishments to suppress anti-social mimesis, they should really be tuning ALL the punishments and keeping careful and accurate track of the social costs of every anti-social act as part of the larger model.)
In reality, it does seem to me that mimesis is a BIG source of valid and useful rationality for getting along in life, especially for humans who never enter Piaget's "Stage 4" and start applying formal operational reasoning to some things. It works "good enough" a lot of the time that I could imagine it being a core part of any organism's epistemic repertoire?
Indeed, entire cultures seem to exist where the bulk of humans lack formal operational reasoning. For example, anthropologists who study such things often find that traditional farmers (which was basically ALL farmers, prior to the enlightenment) with very clever farming practices don't actually know how or why their farming practices work. They just "do what everyone has always done", and it basically works...
One keyword that offers another path here is one Piaget himself coined: "genetic epistemology". This wasn't meant in the sense of DNA, but rather in the sense of "generative", like "where and how is knowledge generated". I think stage 4 reasoning might be one real kind of generator (see: science and technology), but I think it is not anything like the most common generator, neither among humans nor among other animals.