Nassim Taleb's concept of "antifragility" describes strategies in which a short-term cost, harm or failure enables a long-term gain. This might be because the short-term cost conveys information that allows you to improve future decision-making. Or maybe it provokes a useful stress response, as in the honeymoon period after a disaster, or a "warning shot" that helps alert the world to an existential risk.

Perhaps in the world of investments, the value of antifragility is already priced in. In other areas where we have less reason to presuppose civilizational adequacy, perhaps Taleb is right that antifragility is neglected. Perhaps we ignore, or fail to anticipate, the value of the information we'd gain from failure. Or we choose strategies that do not provide failure-value, when strategies that do promise failure-value exist.

Here are a few examples of antifragility.

Games of chance

In a bag are nine white marbles and one red marble. You draw up to five times, and if you draw the red marble, you win $10 and the game ends. There are two versions of this game. In one, you draw with replacement: every white marble you draw goes back in the bag. In the other, you draw without replacement: each white marble you draw stays out of the bag. In the version without replacement, your chances improve with each draw.

Let's imagine that each draw costs you 50 cents. In the game with replacement, each draw has a cost of 50 cents, and a 10% chance of winning $10. The expected value of each draw, then, is 50 cents.

In the game without replacement (the antifragile game), the first draw comes with a 10% chance of winning $10, and has an expected value of 50 cents. Failing that, you have a 1/9 chance on the second draw (one less white marble), with an expected value of 1/9 x $10 - $0.50 = $0.61. The rest of the draws have expected values of $0.75, $0.93, and $1.17. It's clear that the antifragile game is superior.
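If you'd like to check the arithmetic, here's a minimal sketch in Python (my addition, not part of the original game description) that reproduces the per-draw expected values above. Each figure is the expected net gain of a draw, given that every earlier draw came up white.

```python
# Marble game from above: 9 white marbles, 1 red, up to 5 draws,
# $0.50 per draw, $10 prize for drawing the red marble.
COST = 0.50
PRIZE = 10.00

# With replacement: the bag always holds 10 marbles, so every draw
# has the same 1/10 chance of hitting the red marble.
with_replacement = [PRIZE / 10 - COST for _ in range(5)]

# Without replacement: each failed draw removes one white marble,
# so the k-th draw (k = 0..4) is made from a bag of 10 - k marbles.
without_replacement = [PRIZE / (10 - k) - COST for k in range(5)]

print([round(v, 2) for v in with_replacement])     # [0.5, 0.5, 0.5, 0.5, 0.5]
print([round(v, 2) for v in without_replacement])  # [0.5, 0.61, 0.75, 0.93, 1.17]
```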

In this game, it's not hard to see that the game without replacement is a better deal than the game with replacement. Perhaps it's harder to perceive such structural differences in real-world scenarios.

Experimental design

The Avery, MacLeod, and McCarty experiment showed that DNA is the molecule that transmits genetic information. They took a pathogenic Pneumococcus, isolated and purified its DNA, and transferred it to a nonpathogenic Pneumococcus. They found that the nonpathogenic Pneumococcus became pathogenic, and that its descendants inherited the pathogenic trait.

That experiment would have provided a great deal of value even if it had gone the other way, by suggesting that DNA was not involved in genetic transfer. This research was antifragile, because it provided useful information no matter the outcome.

Paul Christiano says of his research methodology for AI alignment,

I’d guess there’s a 25–50% chance that we can find an alignment strategy that looks like it works, in the sense that we can’t come up with a plausible story about how it leads to egregious misalignment. That’s a high enough probability that I’m very excited to gamble on it. Moreover, if it fails I think we’re likely to identify some possible “hard cases” for alignment — simple situations where egregious misalignment feels inevitable. (Emphasis mine.)

Eliezer Yudkowsky tried stringing up 130 lightbulbs around his house to treat his wife Brienne's seasonal affective disorder (SAD). It worked. If it hadn't, though, Eliezer and Brienne would still have gained information on whether or not it was a good idea for her to visit Chile every winter.

These too are antifragile strategies.

As a counterexample, consider classic experiments from social psychology that have failed to replicate, such as the marshmallow test or power poses. If no significant correlation had been found between children's ability to wait for a second marshmallow and their later life outcomes, or between power posing and testosterone/cortisol levels, would we have found those results interesting? Probably not. We'd have laughed, said, "oh, it was a wacky idea anyway," and moved on. This was fragile research.

One way to evaluate whether a lab is doing antifragile work is by asking whether their results would still be interesting if they'd failed to reject the null hypothesis. In scientific research, where failure is to be expected, it's useful to design antifragile experiments.

Scout mindset

If you argue with an open mind, you might be right or wrong. If you're right, then you stand a chance of helping your debate partner come to a more accurate worldview. If you're wrong, then it is you who will benefit. When you prioritize accuracy over victory, your argument is antifragile.

Your argument is fragile when you argue in soldier mindset, aiming for victory. A bad-faith argument risks defeat even if you win. Even if you persuade your opponent, you risk transmitting a bad idea to them. If you care at all about accuracy or making the world a better place, this should scare you.

Note that it's not necessary to disown your desire to win, or to gain status, in order to be antifragile. You can hope to win your argument, believe in your argument, and still argue in good faith. You just need to desire an accurate worldview even more than you want to win.

This can also help in matters of the heart. Obsessing over a crush, trying to make them like you, and not taking "no" for an answer is a fragile strategy. There's a brittle security in brooding over your own emotions, rather than gathering information on theirs.

By contrast, simply asking someone out on a date might make you nervous. If they accept, you'll be happy. If they decline, you can let it go, and you might feel more brave the next time you have a crush. Scouts want to go on a date with their crush, but they want even more to know whether or not their crush feels the same way. It's an antifragile strategy.

Antifragility also helps with learning to tell jokes. The most important skill for being funny in conversation is knowing how to stay light when a joke flops. Personally, I poke fun at myself and tease my own bad joke-telling. That way, failure is softened enough that I stay confident enough to keep telling jokes, and I can try to understand why the joke fell flat so that I don't make the same mistake next time. It's a strategy that pays off over time.

Implications for agency

First, you need to decide to be any kind of deliberate agent at all. Don't just go along with whatever kludge of behaviors that evolution and your social environment cobbled together. Instead, make conscious choices about your goals and decision procedures that you reflectively endorse. -- Raemon, Being a Robust Agent

If we can reflectively endorse antifragility, then it becomes part of our decision procedure. In making plans, predictions, and decisions, we consider not only what we hope to achieve through success, but also what we'd learn from failure. Considering how you'd find value in failure might help you find extra value in what seemed at first like a bad bet.

Exercises

  1. Identify a decision or plan in your own life where you haven't made a clear prediction about what you'd learn from failure. What sorts of failures might you risk, and what sort of information would you get out of that failure that would be difficult to get any other way?
  2. Look up a scientific experiment in a field that you're curious about. Would its findings have still been interesting if the experiment had found a null result, or simply a different one?
  3. Consider an area of your life where failure seems not to generate much useful information - a project that "fails silently." Try to identify how you could alter the activity so that the failure gives you more information.