Upside decay - why some people never get lucky

Great comment; you're right that in addition to avoiding upside decay, we can also try to increase the positive tail!

Also, good spot that virtue as defined here is relative (e.g. during the Cold War, the USA would be considered "virtuous" by dint of being less mean than the USSR).

Upside decay - why some people never get lucky

Thanks for your thoughtful response! I chose China as an example because it concisely illustrates the contrast: China dominates the segments that don't need upside, but struggles in those that do. It also explains the narrow definition of virtue I use. The example assumes a moderate knowledge of current affairs, which is admittedly boring for a lot of people, but I'm personally more familiar with China than with other possible examples like cryptocurrencies or AGI.

Upside decay - why some people never get lucky

I think upside decay is most applicable to venturesome things. So for example, a plastic chair factory is not very venturesome because the technology and processes are well established. The factory manager can be a real jerk and people will still buy his chairs. On the other hand, things like creating a startup, making smaller semiconductors, or new energy technologies are much more affected by upside decay.

Using the example of scientific discovery, it feels like a major country could have assets such as high investment into education and R&D that helped it have lots of rare discoveries, even if its foreign policy didn't do it any favors and lost it weak ties.

I think this country would be good at incremental discoveries, improvements, some types of development, and commercialization. But it wouldn't be good at generating rare discoveries. Using your example of a board game, weak ties can be translated into "Victory Points" at different ratios depending on the type of activity you're looking at.

Dimensional decoupling

The latter one. It follows the same pattern as this diagram.

Dimensional decoupling

Yes, the failure modes are mentioned in the last part of the post: trying to decouple identical things, and trying to decouple unrelated things.

Dimensional decoupling

How do we identify situations where we are using some concept which may be usefully decoupled? Or, alternatively: which of our concepts in fact constitute couplings of two (or more?!) orthogonal[2] concepts—and how do we tell?

Great catch. This is something I didn't mention in the article because I typical-minded. Here's a description, which I'll probably add to the article later:

1. Whenever you come across something that seems logical but violates your intuitions, there's a high chance this technique can help. This is an easy situation in which to use dimensional decoupling, and it comes naturally because we're already in 'interrogative' mode.

2. When you're stuck on a problem, go through your assumptions and try to decouple them one by one. Often you'll find that some assumptions can be decoupled and then one of the resulting parts can be relaxed. This is harder and needs practice, because examining our assumptions like this doesn't come naturally.

Having identified a concept which is, in fact, a coupling, just how do we decouple it? Ok, so we have some concept X which we have (somehow) decided may be decoupled into two (or more!) orthogonal concepts Y and Z. Now, how do we identify Y and Z? (And how do we verify that Y plus Z is, in fact, what we originally thought of as X?)

I believe the hard work is in identifying the object that needs decoupling. Once it's identified, the decoupling method itself is relatively simple.

1. The easiest case is opposites: happy vs sad, masculine vs feminine, straight vs gay. These are really easy to decouple. To verify them, we just see whether the two new "corners" make sense. E.g. is it possible for someone to be interested in same-sex and opposite-sex people simultaneously? Is it possible for someone to be interested in neither? Y and Z are just the two poles of the spectrum.

2. For non-opposites, make them into poles. Bias vs accuracy: bias is one pole, so the other pole is "unbiased"; accurate is one pole, so the other pole is "inaccurate". To verify them, again we see whether the two new "corners" make sense.
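The corner check above can be sketched mechanically: treat each pole as its own yes/no axis and enumerate the resulting 2x2 grid. A minimal Python sketch, where the function name `decouple` and its interface are my own invention for illustration:

```python
from itertools import product

def decouple(pole_a, pole_b):
    """Enumerate the four 'corners' of two decoupled yes/no axes.

    Hypothetical helper: each pole of the original opposition becomes
    an independent axis, labeled 'X' or 'not X'.
    """
    label = lambda pole, present: pole if present else f"not {pole}"
    return [(label(pole_a, a), label(pole_b, b))
            for a, b in product([True, False], repeat=2)]

# "Happy vs sad" becomes a 2x2 grid. The verification step is asking
# whether the non-obvious corners ("happy and sad", "not happy and
# not sad") describe states someone could actually be in.
for corner in decouple("happy", "sad"):
    print(corner)
```

The interesting corners are exactly the ones a single happy-to-sad spectrum can't represent, which is the point of the decoupling.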

Note that some pairs { Y, Z } in such situations may not be entirely orthogonal, i.e. there may be a systematic (perhaps because causal) correlation between them.

Yes! They are almost certainly correlated - that's the entire reason that they are so often seen as entwined. Counterintuitively, higher correlations are often more valuable to decouple. On the last graph, we can also think of it in terms of correlations:

1.0 correlation - this is the 'red zone' where we say crazy things like "loud isn't high volume". The correlation is so high that they shouldn't be decoupled.

0.5-0.9 correlation (roughly) - this is the valuable area. The high correlation means the two concepts frequently go together, but in the situations where they differ, the difference is super easy to miss.

0.0-0.5 correlation (roughly) - this is not as valuable. Because the correlation is low, we wouldn't naturally think of the concepts as going together, so there's little risk that we're incorrectly coupling them.
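The three zones above can be written down as a toy classifier. This is only a sketch of the heuristic; the cutoffs are the rough ones from the comment, not precise thresholds, and the function name is my own:

```python
def decoupling_value(correlation):
    """Classify how valuable decoupling two concepts is,
    given the (rough) correlation between them."""
    r = abs(correlation)
    if r >= 1.0:
        # red zone: "loud isn't high volume" territory
        return "red zone: effectively the same thing, don't decouple"
    if r >= 0.5:
        # often entwined, so the cases where they differ get missed
        return "valuable: frequently go together, differences easy to miss"
    # low correlation: we'd never have coupled them in the first place
    return "low value: little risk of incorrect coupling"

print(decoupling_value(0.8))
```

The counterintuitive bit from the comment is baked in: a *higher* (but imperfect) correlation lands in the more valuable branch.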

Dimensional decoupling

You're right, bucket errors are the result of entwining things. Dimensional decoupling is a way of reducing bucket errors. In my personal experience, once I used dimensional decoupling regularly, it became second nature and automatic. I think it's important to have low-friction ways of reducing bucket errors.

And yes, the most valuable decouplings are ones where they aren't identical but we think they are. But until we try to decouple them, we don't know whether they are or not!

Dimensional decoupling

Which diagrams disagree?

The pattern is an application of dimensional decoupling - the dimensions are in the headers of the diagrams.

Top left: sad and not happy.

Top right: sad and happy.

Bottom left: not happy and not sad.

Bottom right: happy and not sad.