I'm reading Antifragile, and I don't have much relevant background, so it's hard for me to evaluate what he's saying.  If anyone has relevant background/expertise, I'd like to hear it. 

 

I can certainly see how the author's tone could annoy a lot of readers, but so far I've found his style entertaining and quite obviously (to me at least) a part of his "shtick", so it comes across as clever and funny instead of arrogant. 

 

I guess this could also evolve into a meta-discussion of how to evaluate books when you have little frame of reference, but I would imagine that has been discussed in other posts on this site. (Please link to a post on that topic if you can.)

 

 

 


Antifragile meanders too much, and like most books it's too long. I feel like Antifragile could have been handled really well (95% or more of the value of the book) in maybe 30-40 pages. I'm normally someone who gets impatient with self-indulgent writing, but since most publishers demand at least a few hundred pages for a serious "Idea Book", I'm not sure how much of this is Taleb's fault.

This is why I prefer to work in media more suited to concision like blogging and youtube videos rather than full-length books.

Also no one has offered to publish me.

I feel like Antifragile could have been handled really well (95% or more of the value of the book) in maybe 30-40 pages.

This. I liked it well enough, but would have preferred it at Kindle Single length. And it felt like the kind of book that was discussed so much that I didn't get that much more out of actually reading it.

I tend to agree with you both on this point. I haven't finished the book yet and he's defined antifragile as something that has a small, defined downside and a large, undefined upside probably six times.

But hey, at least I can provide his definition, so his strategy seems to be working for me as a reader?
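To make that oft-repeated definition concrete: it's a claim about payoff shape. Here is a minimal sketch in Python (the shock distribution and the cap values are invented for illustration, not taken from the book) contrasting a capped-downside, open-upside exposure with its mirror image under heavy-tailed shocks:

```python
import random

random.seed(0)

def shock():
    # Heavy-tailed shock with a random sign (Pareto magnitude; parameters invented).
    magnitude = random.paretovariate(1.5) - 1.0
    return magnitude if random.random() < 0.5 else -magnitude

def antifragile(x):
    # Small, defined downside: losses capped at -1; upside left open-ended.
    return max(x, -1.0)

def fragile(x):
    # The mirror image: gains capped at +1, losses open-ended.
    return min(x, 1.0)

shocks = [shock() for _ in range(100_000)]
for name, exposure in (("antifragile", antifragile), ("fragile", fragile)):
    p = [exposure(x) for x in shocks]
    print(f"{name:>11}: mean={sum(p)/len(p):+.3f} worst={min(p):+.1f} best={max(p):+.1f}")
```

The asymmetry does all the work: the antifragile exposure's worst case is pinned at the cap while its best case tracks the tail, so rare large shocks help it and hurt its mirror image.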

I buy the thesis and I enjoy Taleb's schtick.

Eric Falkenstein wrote a good review here.

Thanks for pointing this out.

Admittedly, it contained more jargon than the book itself and was a bit tough for me to follow, but I think I can take something away from it. Once I finish Antifragile I'll read that post again.

I guess what's happening is that I'm rather convinced by each author as I'm reading him. I don't know how to step back and look at the evidence for their claims, as I'm unfamiliar with finance, and I don't really understand the evidence they do present.

I am quite confident, after reading the book and the blog post, that both Taleb and Falkenstein are very confident about their positions and their intellect.

Thanks. There's certainly a problem with Taleb's lack of specificity. He says the world is heading for trouble because high-status people don't get negative reinforcement, but his advice is "have courage".

He's moderately wrong with his theory that there should be more aggressive treatments used on sicker people. This is reasonable within some range, but it also leads to torturing the dying.

On the other hand, I believe he's right that higher populations lead to less predictable behavior.

I am a great admirer of Taleb, I've read most of his books, and I consider him one of the most important intellectuals of our time. That said, AF is very uneven: it feels more like a rant than a detached, objective meditation on statistical philosophy.

I'll mention two ideas that I think are relevant to LW. The first is the concept of robustness in the face of theory failure. Taleb believes that history is dominated by Black Swans: events that shatter our best theories of the world. Therefore the naively plausible rationalist strategy of "Figure out the best theory, and act to optimize utility based on this theory" is a recipe for disaster. Systems, people, and organizations achieve a superficial form of efficiency: they seem to do well in the short run, but go bust (=die/collapse/fail) when a Black Swan hits.

Taleb proposes a different strategy: "Compose an ensemble of theories, and pick an action that will do well under every theory in the ensemble." This action will probably seem to underperform in the short run, but it is much more likely to survive in the long run.
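As a toy illustration of the difference (the theories, actions, payoffs, and credences below are all invented), the naive rule maximizes a credence-weighted average while Taleb's ensemble rule is essentially maximin:

```python
payoffs = {
    # action: payoff under (theory_A, theory_B, theory_C) -- all numbers invented
    "leverage_up":   (10, 8, -50),   # great unless theory_C turns out to be true
    "stay_balanced": (3, 3, 1),      # unspectacular, never disastrous
    "hold_cash":     (1, 1, 0),
}
credences = (0.6, 0.3, 0.1)          # subjective weight on each theory

# Naive rationalist: maximize the credence-weighted expected payoff.
expected = {a: sum(w * p for w, p in zip(credences, ps)) for a, ps in payoffs.items()}
print("expected-value choice:", max(expected, key=expected.get))       # leverage_up

# Talebian ensemble rule: maximize the worst case across every theory.
print("maximin choice:", max(payoffs, key=lambda a: min(payoffs[a])))  # stay_balanced
```

The expected-value rule happily accepts the -50 outcome because it discounts theory_C, which is exactly the move Taleb thinks history punishes.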

This concept actually has implications for XRisk prevention. Instead of using argumentation and theorizing to pick the most serious XRisk, and then acting to reduce the likelihood of that risk, you should devise a strategy that simultaneously protects against multiple forms of XRisk (the most obvious candidate in my mind is the construction of lunar or Mars colonies).

The second idea is about the ethics of iconoclasm. Taleb believes that in order to thrive, collectives (e.g. societies) must encourage their members to take risks. If many individuals take on risks, many will fail, but those that succeed will contribute to the health and vitality of the collective, thereby enabling it to become antifragile. The ethical tension comes from the fact that risk-taking often seems quite unappealing from the perspective of the individual, compared to the option of staying safe, thinking the same way everyone else thinks, and so on (risk-taking has lower expected utility). So the Talebian hero is the entrepreneur, the artist, the real philosopher: the person who takes a risk by stepping outside the normal ways of thinking and living, and if successful, shares his success with the collective.

Furthermore, in some cases individuals can do the opposite of risk-taking: they can actually secure themselves against risk at the expense of adding risk to the collective - they robustify themselves by fragilizing the collective. Taleb believes that people who do this have a special place in Hell, and he indicts a wide-ranging group of professional archetypes for this crime: academics, journalists, bankers, policy wonks, pundits, and so on. These are people who have no "skin in the game" - they sell their ideas with slick marketing and prestigious credentials, but at the end of the day they have nothing to lose if it turns out the ideas were wrong.

"Compose an ensemble of theories, and pick an action that will do well under every theory in the ensemble." This action will probably seem to underperform in the short run, but it is much more likely to survive in the long run.

The problem with this strategy is that it may not only underperform in the short term, it may not survive in the short term. If you have competing strategies, the optimized-for-short-term ones might destroy you before you get to demonstrate your robustness.
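A toy simulation (all parameters invented) makes the worry concrete: a strategy that earns steady returns by quietly carrying tail risk leads the robust one for a long time, and that is exactly the window in which it can drive the robust player out of the game:

```python
import random

random.seed(1)

def tail_risk_seller(rounds, gain=0.10, p_blowup=0.01):
    # Steady gains that come from hidden tail exposure; rare total wipeout.
    wealth = 1.0
    for _ in range(rounds):
        if random.random() < p_blowup:
            return 0.0
        wealth *= 1.0 + gain
    return wealth

def robust(rounds, gain=0.03):
    # Lower but blowup-free compounding.
    return (1.0 + gain) ** rounds

TRIALS = 10_000
for horizon in (10, 50, 200):
    ahead = sum(tail_risk_seller(horizon) > robust(horizon) for _ in range(TRIALS))
    print(f"after {horizon:>3} rounds, the tail-risk seller leads in {ahead / TRIALS:.0%} of runs")
```

The robust strategy is only vindicated on horizons long enough for the blowup to arrive; a competitor can be eliminated well before then.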

Taleb believes that in order to thrive, collectives (e.g. societies) must encourage their members to take risks.

I am not sure there is much historical support for this idea.

Obviously we are talking about degrees of risk -- both a completely stagnant society and a wildly risky one will fail. I don't see a pronounced historical trend of iconoclast-friendly societies triumphing over conformist ones. Certainly, some risk-taking is needed, but "more" is not always the right answer.

I found The Black Swan well worth the time, but I started AF, got a couple chapters in, lost interest and still can't work up much enthusiasm to pick it up again.