Taleb compares systems that are fragile (easily broken by changes in circumstances), resilient (retaining stability in the face of change), or anti-fragile (thriving on variation).
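One way to make the triad concrete is Taleb's own later formalization: fragility as concavity to perturbations and anti-fragility as convexity, so that zero-mean volatility hurts the former and helps the latter on average. A minimal Python sketch, with made-up payoff functions:

```python
import random

# Hypothetical payoffs as a function of a zero-mean shock x (illustrative only).
payoffs = {
    "fragile":     lambda x: -x**2,  # concave: volatility hurts on average
    "resilient":   lambda x: 0.0,    # flat: indifferent to volatility
    "antifragile": lambda x: x**2,   # convex: volatility helps on average
}

def average_payoff(f, volatility=1.0, trials=100_000):
    """Mean payoff under zero-mean Gaussian perturbations."""
    return sum(f(random.gauss(0, volatility)) for _ in range(trials)) / trials

for name, f in payoffs.items():
    print(f"{name}: {average_payoff(f):+.3f}")
# fragile: about -1, resilient: 0, antifragile: about +1.
# The same shocks, opposite expected effects (Jensen's inequality).
```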

There isn't a standard term for anti-fragility, but it seems like a trait which might keep an FAI from wanting to tile the universe.


> There isn't a standard term for anti-fragility, but it seems like a trait which might keep an FAI from wanting to tile the universe.

"Ok, I've been the guardian angel for humans for ages now. Boring. Lets add some variety. Oooh! I know, I'll introduce some BabyEaters into the mix."

I don't think boredom or the lack of boredom is the issue.

It's more like "People want to explore the space of possible minds. Cool! We can handle that."

My caution is that if you choose 'anti-fragility' over 'resilience', you don't reduce the chance of the uFAI wanting to tile the universe; you just make it tile the universe with variety. Except, perhaps, for one galaxy left entirely untouched to spice things up a bit, and maybe a world entirely of shrimp. (It would tire of that one quickly!)


"Can handle variety" =/= "compulsively seeks variety".

If it's variety, can you call it tiling?

"Thrives on variety" might or might not be equivalent to "compulsively imposes variety".

> If it's variety, can you call it tiling?

I think they are called 'mosaics'. ;)

This isn't really possible in general. Improving something requires you to fit it into a smaller part of phase space, which requires an optimization process like intelligence or natural selection and which releases waste heat. The only time something would improve from random variations alone is if it started in an exceptionally bad state.
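A toy simulation of that last claim (the fitness landscape and numbers here are invented for illustration): random perturbations rarely improve a near-optimal state, but improve an exceptionally bad one about half the time.

```python
import random

def fitness(state):
    """Toy objective: closer to 0 is better (hypothetical landscape)."""
    return -abs(state)

def improvement_rate(start, noise=1.0, trials=100_000):
    """Fraction of random perturbations that strictly improve fitness."""
    wins = sum(
        fitness(start + random.gauss(0, noise)) > fitness(start)
        for _ in range(trials)
    )
    return wins / trials

print(improvement_rate(start=0.1))   # near-optimal state: ~8% of variations help
print(improvement_rate(start=10.0))  # exceptionally bad state: ~50% help
```

Variation alone only looks like improvement when almost any move is uphill; past that, something has to keep the rare wins and discard the rest, which is the optimization process mentioned above.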

Yeah, there's a difference between being optimized to cope with true unexpected events, and being optimized to cope with a range of possible expected events.

For example, I wouldn't benefit from being thrown into a fire, or having a ton of bricks dropped on me, or being devoured by a tiger, but I would benefit from having to cope with a range of temperatures, the need to move some unusually heavy objects, or the need to run.

My brain won't be improved by an incomputable problem, but it will be improved by small changes to my routine and new solvable problems.

By definition you really can't design a system that is robust to unknown unknowns.

I agree with this. It seems like it could be more interesting if we found a way to make it more precise, but I guess that was the point of this post.

> By definition you really can't design a system that is robust to unknown unknowns.

You can design one that is provably maximally robust given your knowledge.
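A sketch of what that could look like, with an invented table of designs and known contingencies: enumerate the scenarios you can model and pick the design whose worst case is best (the maximin choice).

```python
# Hypothetical performance of each design under each *known* contingency.
# Unknown unknowns are, by definition, absent from this table.
performance = {
    "design_a": {"heat": 0.9, "flood": 0.2, "quake": 0.7},
    "design_b": {"heat": 0.6, "flood": 0.6, "quake": 0.6},
    "design_c": {"heat": 0.8, "flood": 0.5, "quake": 0.3},
}

def maximin(table):
    """Design whose worst-case score over the known scenarios is highest."""
    return max(table, key=lambda d: min(table[d].values()))

print(maximin(performance))  # design_b: mediocre everywhere, catastrophic nowhere
```

The guarantee only extends to the scenarios in the table; contingencies outside it stay unaddressed, which is the point of the quoted comment.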

I think the term you are looking for is "robust".

The article has that explicitly as a third category.