Enormous credit goes to johnswentworth, who made this post possible.


This is a framing practicum post. We’ll talk about what selection incentives are, how to recognize them in the wild, and what questions to ask when you find them. Then, we’ll have a challenge to apply the idea.

Today’s challenge: come up with 3 examples of selection incentives which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you). 

Expected time: ~15-30 minutes at most, including the Bonus Exercise.

What Are Selection Incentives?

Imagine trying to find great or popular posts on LessWrong. We look for signals like high karma, many comments, or a well-known author. We don’t (yet) look at the contents of individual posts; we just look at an overall “score” that helps us choose. This scoring mechanism encourages writers to write posts that could achieve high scores, for instance broad-interest, thought-provoking, or controversial posts, regardless of the writer’s actual purpose.

This is a selection incentive: something is chosen based on some criteria or a known process. Posts, for instance, are chosen based on an overall “score”: high karma, many comments, and so on. What the writers actually want, on the other hand, might be to present ideas or transfer knowledge, not to chase karma. But the readers’ selection criteria apply regardless of what the writers wanted in the first place.
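The dynamic above can be sketched in a toy simulation (the scoring model and all numbers here are invented for illustration, not a claim about how LessWrong karma actually works): each post gets a “true value” component (what the writer cares about) and an “appeal” component, karma rewards appeal more heavily, and readers pick the top posts by karma. The posts that get selected end up selected for appeal even more strongly than for value.

```python
import random

random.seed(0)

# Each hypothetical post has a "true value" (what the writer cares about,
# e.g. knowledge transferred) and an "appeal" component (breadth,
# controversy, name recognition). Karma rewards appeal more heavily.
posts = []
for _ in range(1000):
    true_value = random.gauss(0, 1)
    appeal = random.gauss(0, 1)
    karma = true_value + 2 * appeal  # toy scoring model, invented weights
    posts.append((karma, true_value, appeal))

# Readers select the top 5% of posts by karma.
selected = sorted(posts, reverse=True)[:50]

def avg(xs):
    return sum(xs) / len(xs)

avg_value = avg([p[1] for p in selected])
avg_appeal = avg([p[2] for p in selected])
print(avg_value, avg_appeal)  # appeal is selected for more strongly than value
```

Selected posts do tend to be above average in value (the criterion correlates with it), but the pressure on appeal is stronger, which is the divergence the frame asks us to notice.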

Another example is corporations maximizing profit. The founder of a corporation has something in mind, for instance sending humans to space or producing affordable cars for the mass market, and may or may not be trying to maximize profit. What happens in the real business world, however, is that businesses live or die by how well they maximize profit. Businesses are selected on how well they maximize profit, regardless of what the founders actually want.
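A toy survival model makes the same point for firms (the profit and survival numbers are invented for illustration): whatever founders want, firms whose choices happen to yield higher profit survive more years, so the surviving population looks like profit maximizers.

```python
import random

random.seed(1)

# Hypothetical firms: each founder cares about a mission to some degree
# (mission_focus in [0, 1]); in this toy model, profit falls as mission
# focus rises.
firms = [{"mission_focus": random.random()} for _ in range(10_000)]

# Each year, firms with lower profit are more likely to go under.
for year in range(20):
    survivors = []
    for f in firms:
        profit = 1.0 - f["mission_focus"]         # invented profit model
        if random.random() < 0.5 + 0.5 * profit:  # survival chance this year
            survivors.append(f)
    firms = survivors

avg_focus = sum(f["mission_focus"] for f in firms) / len(firms)
print(len(firms), avg_focus)  # survivors skew heavily toward low mission focus
```

Founders start out uniformly spread over mission focus (average 0.5), but after twenty rounds of selection the survivors’ average focus is far lower: the market “selected for” profit maximization no matter what anyone intended.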

What To Look For

In general, selection incentives should spring to mind whenever something is chosen based on some criteria or a known process. We want to know what factors make something more or less likely to be chosen. A few ways this can apply:

  • We are selecting/choosing something based on some criteria.
  • We see systems which grow or die, and we want to know what causes the system to grow/die faster or slower.

Useful Questions To Ask

In the post-selection example, posts with high karma or many comments are more likely to be chosen than posts with low karma or few comments. But the writer imagines a post that presents ideas, transfers knowledge, or starts a conversation. High karma may correlate with great posts, but it may not align with what the writer actually wants, i.e., transferring knowledge and/or starting a conversation between writer and reader. What the writer actually wants diverges from what the selection criterion selects for and incentivizes.

In general, if an agent is involved, we want to know how the things the agent wants diverge from what the selection criteria “want”.

The Challenge

Come up with 3 examples of selection incentives which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you).

Any answer must include at least 3 to count, and they must be novel to you. That’s the challenge. We’re here to challenge ourselves, not just review examples we already know.

However, they don’t have to be very good answers or even correct answers. Posting wrong things on the internet is scary, but a very fast way to learn, and I will enforce a high bar for kindness in response-comments. I will personally default to upvoting every complete answer, even if parts of it are wrong, and I encourage others to do the same.

Post your answers inside of spoiler tags. (How do I do that?)

Celebrate others’ answers. This is really important, especially for tougher questions. Sharing exercises in public is a scary experience. I don’t want people to leave this having back-chained the experience “If I go outside my comfort zone, people will look down on me”. So be generous with those upvotes. I certainly will be.

If you comment on someone else’s answers, focus on making exciting, novel ideas work — instead of tearing apart worse ideas. Yes, And is encouraged.

I will remove comments which I deem insufficiently kind, even if I believe they are valuable comments. I want people to feel encouraged to try and fail here, and that means enforcing nicer norms than usual.

If you get stuck, look for:

  • Systems in which something is chosen based on some criteria or a known process.
  • Systems which grow/die and what makes them grow/die faster or slower.

Bonus Exercise: for each of your three examples from the challenge, explain:

  • What strategies do the selection criteria incentivize in the things being selected?
  • If an agent is being selected, how do the things the agent wants diverge from what the selection criteria select for?
  • If an agent is making the selection, how does what that agent wants differ from what the selection criteria select for?

This bonus exercise is great blog-post fodder!


Using a framing tool is sort of like using a trigger-action pattern: the hard part is to notice a pattern, a place where a particular tool can apply (the “trigger”). Once we notice the pattern, it suggests certain questions or approximations (the “action”). This challenge is meant to train the trigger-step: we look for novel examples to ingrain the abstract trigger pattern (separate from examples/contexts we already know).

The Bonus Exercise is meant to train the action-step: apply whatever questions/approximations the frame suggests, in order to build the reflex of applying them when we notice selection incentives.

Hopefully, this will make it easier to notice when a selection incentive frame can be applied to a new problem you don’t understand in the wild, and to actually use it.


2 Answers

Conforming To Bias. If people know about status quo bias, the planning fallacy, or the endowment effect, they may feel the need to play into them in order to accomplish their goals. Planners will deliberately make optimistic predictions, even when they know better, in order to appear competitive, even though the customer might prefer planners who make more realistic predictions. Product designers may deliberately sacrifice utility for familiarity, even if the unfamiliar product would actually be easier to use, even for a beginner, than the familiar one. My guess is that the design of textbooks is an example here.

This suggests that building products and services that don't conform to biases is a positive externality, and a proper target for regulation or subsidy. For example, governments could require major construction projects to submit a time and cost estimate when the contract is signed, and give a tax credit to companies that an external auditor assesses to have achieved above-average accuracy in their estimate.

Government could offer similar subsidies to combat the endowment effect. It could offer a tax credit for selling your house, moving out of an apartment, or changing your job, perhaps after you've owned the house or worked the job for a reasonable length of time. I'm skeptical of these interventions - just brainstorming to illustrate an idea.

Teaching Styles. Teachers can't get much done if kids are being disruptive. Schools have varying populations of kids. They therefore "select" for teachers capable of managing the type and amount of disruption at their particular school. A tough teacher might be perfect for a rowdy school, but harmfully harsh in a more placid environment. A teacher who focuses on positive reinforcement but can't dish out discipline might get steamrolled by the students in a rowdy school, but do well in an elite prep academy. If the teaching styles exhibited at the best-performing schools (i.e. the elite prep academies) become exemplars for teacher training, then we risk attributing to a teaching style alone what is actually a teaching-style × school-culture interaction effect.

Self-Editing. I write in ways that are legible to me, because during the writing process I have access only to feedback provided by the editor in my mind. Its feedback, particularly in the very beginning stages when the general tone, topic, and form of a piece is being established, is crucial in dictating the direction the post will take. Over time, the partially-written piece becomes more powerful than the editor, but in the beginning the editor is more powerful than the writing. This causes me to select for writing approaches that my internal editor is comfortable with. If I had other external standards or influences - perhaps prompts, a particular audience, or a process involving seeking external feedback on a few very brief possible approaches to an article - I might be able to achieve more variety in my writing.

Can you elaborate on your first example? How do selection incentives come into play in those situations?

One way of looking at biases is that each bias is a heuristic with its own selection criteria. For example, people decide whom to trust with authority based on how tall they are. The tall-bias is a heuristic with its own selection criterion (tallness) that doesn't perfectly match what it's supposed to be optimizing for (trustworthiness). You might predict that people would take steps to create the appearance of tallness in order to manipulate this form of selection. Hillary Clinton apparently requested that her podium for a debate with Donald Trump be modified so that both candidates appeared to be the same height relative to the podium when standing in front of it, and asked for a step stool so that she'd appear to be the same height as Trump when they stood behind the podiums.

One way of looking at the rationality project is that our social systems have optimized themselves to exploit common biases in the human mind. That intersection will feel "normal." Pointing out these biases isn't just about moving from less truth to more truth. It's also about moving from more commonly exploited heuristics to less commonly exploited heuristics. It may be that the new heuristics also have serious failure modes. But if society isn't set up to systematically take advantage of them, being divergent might still be beneficial, even if it's not fundamentally any more secure. It's sort of like choosing an obscure operating system or program, because less attention is devoted to hacking it.
If you know how the selection incentive works for a particular situation, you can exploit it to your benefit, or at least prevent yourself from being in an unfavorable position.
  1. Sales jobs tend to bring in a lot of young people, and a high proportion of them aren't able to sell much and leave - i.e. there's high selection pressure on salespeople. There's a selection incentive to engage in sketchy sales practices, e.g. lying about the product. Individual salespeople may intend to be honest, and even believe what they're saying, but they'll still be selected for saying the product is better than it is. So, the salespeople who stick around will be those who mislead customers about the product, even if they do so accidentally.
  2. Competing products are selected by customers; they face different selection incentives depending on whether customers are mostly one-time or repeat users. If customers are mostly one time, then there's a selection incentive for the product to look good before purchase, but less incentive to actually be good. If customers are mostly repeat users, then there's more incentive to actually be good.
  3. I own a few pairs of flip-flops, so I sometimes have to choose which pair to wear; the criteria by which I make that choice create selection incentives for the worn flip-flops. (I.e. if I look at which flip-flops I actually end up wearing, I'll find that they're selected according to those incentives.) In particular, I usually choose whichever flip-flops are most easily available at the time, which is usually whatever pair I wore most recently. (So the selection incentives change sometimes when I wear different flip-flops; they're bistable.)

The first example explains why salespeople have such a bad reputation. For the second example, it seems products can't actually be good or bad by themselves; it's the producer who makes the product. Are we assuming a situation where the producer is able to distinguish one-time users from repeat users?