The following text first summarizes the standard moral-hazard model. Afterwards, I point out that it implies that you always get punished for bad luck. The third part is entirely speculative: I ask how you should behave towards yourself in light of this.

A brief summary of a moral-hazard setting

A moral-hazard situation occurs when someone takes too much risk, or does not reduce it enough, because someone else bears the cost.

The following situation is a typical textbook example. A worker works for a firm, and her effort influences the probability that the firm has high revenue. The worker can exert high or low effort, and the firm's revenue can be high or low. Low revenue is more likely when effort is low, but it can also occur when effort is high. Moreover, the worker has to get a wage that compensates her for forgoing whatever else she would do with her time.

Suppose the firm would, in principle, be willing to compensate the worker for high effort (which means that we assume that the additional expected revenue gained from high effort is at least as high as the additional wage needed to make the worker willing to exert high effort). Because workers are usually assumed to be risk-averse, the firm would bear the risk of low revenue, and the worker would get a wage that is constant in all states of the world.
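To fix ideas, here is one standard textbook way to write this benchmark down; the symbols are introduced purely for illustration and are not essential to the argument. Let effort be $e \in \{L, H\}$, let $p_e$ denote the probability of high revenue given effort $e$ (with $p_H > p_L$), let the worker's utility be $u(w) - c(e)$ with $u$ concave (risk aversion) and $c(H) > c(L)$, and let $\bar{u}$ be her reservation utility. With observable effort, the firm demands high effort and pays a constant wage $w^*$ that just satisfies the participation constraint:

$$u(w^*) - c(H) = \bar{u} \quad\Longleftrightarrow\quad w^* = u^{-1}\big(\bar{u} + c(H)\big).$$

The wage does not vary with revenue, so the firm bears all of the risk.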

However, now also suppose the firm cannot directly observe the effort - this constitutes a situation of asymmetric information, because the worker can observe her own effort and the firm cannot. Then the firm cannot condition the payment on the worker's effort. It also cannot just conclude that the worker exerted low effort by observing low revenue, because we assumed that low revenue can also occur when the worker exerted high effort.

The second-best optimal solution (that is, the best solution given this information problem) is to condition payments on the revenue - and thus, on the result instead of the effort to get it. The worker gets a higher wage when the firm has high revenue. Thereby, the firm can design the contract such that the worker will choose to exert high effort.
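In the illustrative notation from above, the firm offers $w_H$ after high revenue and $w_L$ after low revenue, and the worker chooses high effort whenever the incentive-compatibility constraint holds:

$$p_H u(w_H) + (1 - p_H)\, u(w_L) - c(H) \;\ge\; p_L u(w_H) + (1 - p_L)\, u(w_L) - c(L),$$

which rearranges to $(p_H - p_L)\,[u(w_H) - u(w_L)] \ge c(H) - c(L)$. A wage gap $w_H > w_L$ is required precisely because revenue only statistically distinguishes the two effort levels.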

In this setting of asymmetric information, the worker gets the same expected utility as in the setting with symmetric information (in which effort could be observed), because the firm still has to compensate her for not doing something else. But because the risk-averse worker now faces an uncertain income stream, the expected wage must be higher than if the wage were constant. (Thus, the firm has a lower expected profit. If the loss to the firm due to the high-revenue wage premium is severe enough, the firm may not even try to enforce high effort.) The asymmetric information precludes the optimal allocation of risk between the two parties.
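To see why the expected wage must rise, note that the participation constraint now reads

$$p_H u(w_H) + (1 - p_H)\, u(w_L) - c(H) = \bar{u},$$

so the risky wage delivers the same expected utility as the constant wage $w^*$ from the observable-effort benchmark; with $u$ concave, Jensen's inequality implies $p_H w_H + (1 - p_H)\, w_L > w^*$. As a made-up numerical illustration: with $u(w) = \sqrt{w}$ and $\bar{u} + c(H) = 10$, the constant wage is $w^* = 100$, while a risky wage of $w_L = 64$ and $w_H = 144$ with $p_H = 1/2$ delivers the same expected utility of $10$ but costs an expected $104$.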


You'll get punished for bad luck


At this point, note that how the firm presents the arrangement - a higher wage when it has high revenue, a lower one when it has low revenue - is a matter of framing. The firm may, for example, say that it wants its workers to participate in its success, and therefore pay a premium.

Vocabulary of "punishment", by contrast, may not be popular. Also, it seems wrong to call the low wage a punishment wage. Why? Because the optimal contract makes the worker exert high effort, and a low revenue will NOT indicate that the worker idled.

So that is the irony of the situation: An optimal contract punishes you for bad luck, and for nothing else. At the same time, the worker would be more likely to get "punished" if she idled, because low revenue would then be more likely. The threat of punishment for a bad result is exactly what makes the worker exert high effort, to at least make the bad result less likely.

Optimal contracts in your brain?

Suppose you feel a bit split between two "agents" in your brain. One part of you would like to avoid working. The other part would like you to exert high effort to have a good chance of reaching your goals.

You cannot pay yourself a wage for high effort, but you can feel good or bad. Yet the kind-of-metaphorical implication of the moral-hazard optimal-contract model is that you should not punish yourself for bad luck. There are random influences in the world, but if you can see (or remember) how much effort you exerted, it does not make sense to give yourself a hard time because you were unlucky.

On the other hand, maybe you punish yourself because you lie to yourself about your effort? If you have created such an asymmetric-information situation within yourself, punishing yourself for bad luck is a seemingly rational idea. But keep in mind that it is only second-best optimal, under the assumption that this asymmetric information indeed exists. If so, think of ways to measure your effort instead of only your outcome. If you cannot do it, consider whether forcing yourself to exert high effort is really worth it. Solve the problem that actually needs to be solved, and respect the constraints that exist, and none that do not.

Comments

I agree with much of your reasoning, but come to the opposite conclusion. For _many many_ things, you can't distinguish between luck, bad modeling (incorrect desires for outcome), or bad behavior (incorrect actions toward the desired outcome). Rewarding effort makes up for luck, but magnifies other failures.

So don't try. Reward on good outcome, punish on bad outcome. Sure, the agents will "learn" incorrectly on topics where luck dominates. Make up for it with repetition - figure out good reference classes so you can learn from more outcomes. Enough instances will smooth out the luck, leaving the other factors.

Or maybe you'll actually learn to be luckier. We could surely use a Teela Brown to protect us right about now...

To nitpick on your throwaway Ringworld reference, that's exactly the opposite of the point. Other humans don't benefit from the fact that the Ringworld is going to shield Teela Brown from the core explosion. She would be the person who accidentally bought Zoom stock in January because it sounded like a cool company name, or the immortal baby from an unreproducible biomedical research accident prompted by post-COVID-19 research funding, probably extra lucky to be living in a mostly-depopulated high-technology world due to massive death tolls from some other disaster.

I agree that you can't distinguish between those things. But I wonder if it could be argued that as long as someone is putting in effort and deliberately reflecting and improving after each outcome, you can't fault them, since they are doing everything in their power. Even if they are modeling incorrectly or behaving badly, if they did not have opportunities to learn to do otherwise beforehand, is it still reasonable to fault them for acting that way? The pragmatic part of me says that everyone has "opportunities to learn to do otherwise" with the knowledge on the internet, so we can in fact fault people for modeling poorly. But I'm not sure if this line of reasoning is correct.

I disagree, and that's my central issue with the post.

"So that is the irony of the situation: An optimal contract punishes you for bad luck, and for nothing else."

The post gets this exactly backwards - the optimal contract exactly balances punishing lack of effort and bad luck, in such a way that the employer is willing to pay as much as the market dictates for that effort, given the uncertainty that exists.

Maybe I will have to edit the text to make that clearer, but: the optimal contract in the situation I described (moral hazard with binary states and effort levels) punishes only for bad luck, exactly because it makes the worker choose high effort. In this sense, once revenue is known, you also know that low revenue is not the worker's fault. From an ex-ante perspective, however, it offers conditional wages that "would" punish laziness.

You're right - but the basic literature on principal-agent dynamics corrects this simple model to properly account for non-binary effort and luck, and I think that is the better model for looking at luck and effort.

I just want to mention that this is an example of the credit assignment problem. Broadly punishing/rewarding every thought process when something happens is policy-gradient learning, which is going to be relatively slow because (1) you get irrelevant punishments and rewards due to noise, so you're "learning" when you shouldn't be; (2) you can't zero in on the source of problems/successes, so you have to learn through the accumulation of the weak and noisy signal.
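As a toy illustration of that point (the "habits", numbers, and code below are made up purely for this sketch): only one habit out of five actually affects the outcome, yet every habit that fired is credited with the full, noisy episode reward, REINFORCE-style, so the relevant habit learns only slowly while the irrelevant ones wander.

```python
# Toy sketch: five independent "habits", only habit 0 affects the outcome,
# but every habit shares the whole (noisy) episode reward.
import math
import random

N_HABITS = 5
LEARNING_RATE = 0.05
prefs = [0.0] * N_HABITS  # preference (logit) for performing each habit

def prob(x):
    """Squash a preference into a probability of acting."""
    return 1.0 / (1.0 + math.exp(-x))

for episode in range(5000):
    actions = [random.random() < prob(p) for p in prefs]
    # The outcome depends only on habit 0, plus a lot of luck.
    reward = (1.0 if actions[0] else 0.0) + random.gauss(0.0, 2.0)
    # Policy-gradient-style credit assignment: every habit is nudged
    # by the same episode reward, whether or not it was relevant.
    for i in range(N_HABITS):
        grad = (1.0 - prob(prefs[i])) if actions[i] else -prob(prefs[i])
        prefs[i] += LEARNING_RATE * reward * grad

print("learned preferences:", [round(p, 2) for p in prefs])
# Habit 0's preference tends to drift upward, but only slowly, and the
# irrelevant habits also move because the noisy reward reaches them too.
```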

So, model-based learning is extremely important. In practice, if you lose a game of magic (or any game with hidden information and/or randomness), I think you should rely almost entirely on model-based updates. Don't denigrate strategies only because you lost; check only whether you could have done something better given the information you had. Plan at the policy level.

OTOH, model-based learning is full of problems, too. If your models are wrong, you'll identify the wrong sub-systems to reward/punish. I've also argued that if your model-based learning is applied to itself, i.e., applied to the problem of correcting the models themselves, then you get loopy self-reinforcing memes which take over the credit-assignment system and employ rent-seeking strategies.

I currently see two opposite ways out of this dilemma.

1. Always use model-free learning as a backstop for model-based learning. No matter how true a model seems, ditch it if you keep losing when you use it.

2. Keep your epistemics uncontaminated by instrumental concerns. Only ever do model-based learning; but don't let your instrumental credit-assignment system touch your beliefs. Keep your beliefs subservient entirely to predictive accuracy.

Both of these have some distasteful aspects for rationalists. Maybe there is a third way which puts instrumental and epistemic rationality in perfect harmony.

PS: I really like this post for relating a simple (but important) result in mechanism design (/theory-of-the-firm) with a simple (but important) introspective rationality problem.

Thanks for this comment - it highlights that the post _is_ an attempt in the right direction (model-based learning, rather than pure outcome learning). And that it's possibly the wrong model (effort level is an insufficient causal factor).

Ah yeah, I didn't mean to be pointing that out, but that's an excellent point -- "effort" doesn't necessarily have anything to do with it. You were using "effort" as a handle for whether or not the agent is really trying, which under a perfect rationality assumption (plus an assumption of sufficient knowledge of the situation) would entail employing the best strategy. But in real life conflating effort with credit-worthiness could be a big mistake.