Follow-up / Related to: Scott Alexander's Schelling Fences on Slippery Slopes, Sunk Cost Fallacy, Gwern's Are Sunk Costs Fallacies?, and Unenumerated's Proxy Measures, Sunk Costs, and Chesterton's Fence

I was recently reading an essay by Clayton Christensen, in HBR's (fairly worthwhile) "Must Reads" boxed set, where he recommends that people "Avoid the Marginal Cost Mistake". In short, he suggests that Schelling fences are sometimes ignored, or never constructed, because of a somewhat fallacious application of marginal-cost thinking. For example, my Schelling fence for work is that I stop when it is time to pick up my kids. On the other hand, occasionally I'm in the middle of something - coding, or writing this LessWrong post - where being interrupted is fairly costly. I can usually ask someone else to pick them up instead, and given how much I see them, the marginal value of any single afternoon with my kids is low.

Christensen suggests that this analysis is incorrect, largely because it is myopic. It ignores the longer-term benefits of family dinners: coming home today is an investment in the norm of being home for dinner every night, and that payoff is too distant to show up in a marginal calculation. The future is full of extenuating circumstances, and only a fairly strong Schelling fence will let me insist that my kids stay home for dinner once they are teenagers.

I'd apply the point more broadly, but his claim was that this is especially critical in matters of morality. Cheating once changes everything: the simple fact that you cheated weakens your resolve not to cheat again. The spiral from a single lapse leads easily down the path of infinite-money and invulnerability cheat codes, until the video game offers no further challenge or enjoyment - or, in the context Christensen discusses, it led to jail time for two of the people from his graduating class back in college.

Conclusions?

The critical question is: where do we want to use marginal cost analysis, and where do we want to stick to our sunk costs and Schelling fences?

Based on Christensen's analysis, I would suggest that Schelling fences, rather than sunk costs, are particularly valuable for reinforcing values that are hard to measure, that are too long-term to get routine feedback on, or that involve specific commitments to other people. On the other hand, based on Gwern's work, I think there are places where marginal costs are under-appreciated, especially in relation to other people. Below, I lay out some settings and examples on each side.


Some examples of where to consider reinforcing fences and avoiding simplistic marginal cost thinking might include:

  • Going to a weekly meet-up that reinforces your connections to a good epistemic community and/or effective altruist values. Value drift is a long-term concern that needs short-term reinforcement.
  • Anything involving family or long-term relationships. Marginal cost thinking is poisonous for relationships, since the benefits of investing in a relationship are not very visible and accrue over the long term.
  • Moral rules. It is easy to use utilitarian and consequentialist reasoning to make yourself stupider. At the very least, you should be asking others - just as consulting others helps avoid the unilateralist's curse, it helps avoid self-deception and convenient excuses.
  • Where there are switching costs or longer-term goals. Learning to play guitar instead of continuing to practice piano (or moving from C++ to Python) is easy to justify in the short term, but expensive in terms of the changes needed and the progress reset.
  • When goals are unknown. As Unenumerated put it, "cases where substantial evidence or shared preferences that motivated the original investment decision have been forgotten or have not been communicated, or otherwise where the quality of evidence that led to that decision may outweigh the quality of evidence that is motivating one to change one's mind."

Some examples of where it seems useful to avoid constructing Schelling fences, and to try paying more attention to marginal cost:

  • When constructing rules for other people, or in organizations. Schelling fences are useful for self-commitment; applied to others, they are rules and formal structures rather than norm-based fences. As Gwern noted, "Whatever pressures and feedback loops cause sunk cost fallacy in organizations may be completely different from the causes in individuals."
  • When the environment is very volatile, and non-terminal goals change. It's easy to get stuck in a mode where the justification is "this is what I do," rather than a re-commitment to the longer term goal. If you are unsure, try revisiting why the fence was put there. (But if you don't know, be careful of removing Chesterton's Fence! See "When goals are unknown", above.)
  • When the fence is based on a measurable output, rather than an input. In such a case, the goal has been reified and is subject to Goodhart effects. Schelling fences are not appropriate for outcomes, since the outcome isn't controlled directly. (Bounds on outcomes also implicitly discourage further investment - see Shorrock's Law of Limits. If necessary, the outcome itself should be rewarded, rather than fenced in.)

Comments

My principles in these situations are:

1. Part of choosing a rule is choosing what the threshold is for breaking it.

2. You don't break rules, you change rules and then follow the new rule. So before you 'break' your current rule, make sure you know what the new rule will be and that you like the new results better.

3. Cultivate the rule that rules are followed, that on the margin you are always going to underestimate the value of long term investment in habits and virtue cultivation, and that you don't change them in the moment without thinking about it carefully first.

See this old comment of mine for a very similar perspective.

Cultivate [...] that on the margin you are always going to underestimate the value of long term investment in habits and virtue cultivation

Why, though? Shouldn't you recalibrate immediately to make this no longer predictable? Or is such recalibration the meaning of the quoted sentence? In that case, why phrase it that way? It seems to risk overcorrection, or not noticing when the opposite advice becomes relevant; or else it requires undue caution in following your own advice, at which point it becomes a self-fulfilling flaw/advice combo. (Following the advice cautiously ensures that the flaw is not fully removed, and so the advice remains relevant.)

This runs the risk of denying that value drift has taken place instead of preventing value drift, creating ammunition for a conflict with your future self or future others instead of ensuring that your current self is in harmony with them. Some of the examples you cite and list seem to actually be making this error.

Yes, that does seem to be a risk. I would think that applying Schelling fences to reinforce current values reduces the amount of expected drift in the future, and I'm unclear whether you are claiming that using Schelling fences will do the opposite, or claiming that they are imperfect.

I'd also like to better understand what specifically you think is making this error - making it difficult to re-align with current values rather than reducing the degree of drift - and how it could be handled differently.

I would think that applying Schelling fences to reinforce current values reduces the amount of expected drift in the future

It reinforces the position endorsed by current values, not the current values themselves. (I'm not saying this about Schelling fences in general, which have their uses, but rather about leveraging status quo and commitment norms via the reliable application of simple rules chosen to signal current (past, idealized) values.) This hurts people with future changed values without preventing the change in values.

what specifically you think is making this error - making it difficult to re-align with current values

The effect on preventing change in values is negative only in the sense of opportunity cost, and because of the possibility of mistaking this activity for something useful, which inhibits seeking something actually useful. It's analogous to the issues caused by homeopathy. (Though I'm skeptical about value drift being harmful for humans.)

I think it is an important fact about how humans work that reinforcing Schelling fences and following them does in fact reinforce the values involved, whereas ignoring the fences does weaken them. Virtues and habits are something you cultivate through repeated action. This isn't simple signaling of values; it impacts those values for real.

I agree; it's just not the primary thing that's happening. The coercion of the discipline (the conflict between values and behavior) is more prominent than the reinforcement of values in the cases where the discipline becomes necessary (partially by definition, since if the reinforcement works well, the discipline is not necessary after all). For this reason, it's misleading to characterize the effect of this policy as reinforcement of current values, though that probably happens as well. I'm not sure how that's balanced by rebellious urges.

(I disagree with my statements above in this thread in contexts where preventing value drift is much more important than preventing the suffering caused by coercing behavior toward unaligned values.)