2018 Review Discussion

Here’s a pattern I’d like to be able to talk about. It might be known under a certain name somewhere, but if it is, I don’t know it. I call it a Spaghetti Tower. It shows up in large complex systems that are built haphazardly.

Someone or something builds the first Part A.

Later, someone wants to put a second Part B on top of Part A, either out of convenience (a common function, just somewhere to put it) or as a refinement to Part A.

Now, suppose you want to tweak Part A. If you do that, you might break Part B, since it interacts with bits of Part A. So you might instead build Part C on top of the previous ones.

And by the time your...
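
A minimal sketch of the pattern in code (Parts A, B, and C and their quirks here are hypothetical, purely for illustration): each layer comes to depend on incidental details of the layer below it, so tweaking Part A risks breaking Part B, and the path of least resistance is to pile Part C on top.

```python
# Hypothetical spaghetti tower: each later part reaches into incidental
# details of the parts below it, so no layer can be changed in isolation.

class PartA:
    def __init__(self):
        self.cache = {}            # an incidental detail, not a promised interface

    def compute(self, x):
        self.cache[x] = x * 2      # Part B will come to rely on this side effect
        return self.cache[x]


class PartB:
    """Built on top of Part A out of convenience."""
    def __init__(self, a):
        self.a = a

    def compute_twice(self, x):
        self.a.compute(x)
        # Reads A's internal cache directly instead of using A's return value,
        # so any change to how A caches now breaks B.
        return self.a.cache[x] * 2


class PartC:
    """Rather than tweaking A (and risking B), a new layer is added on top."""
    def __init__(self, b):
        self.b = b

    def compute_once(self, x):
        # Compensates for the doubling done in the layers below
        # instead of fixing it there.
        return self.b.compute_twice(x) // 4


tower = PartC(PartB(PartA()))
print(tower.compute_once(3))  # 3 -- but the answer now depends on all three layers
```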

David James
First, thank you for writing this. I would ask that you continue to think and refine, and share back what you discover, prove, or disprove. I'm interested to see if we can (i) do more than claim this is likely and (ii) unpack reasons that might require it to be the case.

One argument for (ii) would go like this. Assume the generating process for A has a preference for shorter programs, so we can think of A as tending to find shorter description lengths that match task T. Claim: shorter (and correct) descriptions reflect some combination of environmental structure and compression.

* By 'environmental structure' I mean the laws underlying the task.
* By 'compression' I mean using information theory, embodied in algorithms, to make the program smaller.

I think this claim is true, but let's not answer it too quickly. I'd like to probe the question more deeply:

1. Are there more than two factors (environmental structure and compression)?
2. Is it possible that the description gets the structure wrong but makes up for it with great compression? I think so. One can imagine a clever trick by which a small program expands itself into something like a big ball of mud that solves the task well.
3. Any expansion process takes time and space. This makes me wonder whether we should care not only about description length but also about run time and space. If we pay attention to both, it might be possible to penalize programs that expand into a big ball of mud.
4. However, penalizing run time and space might be unwise, depending on what we care about. One could imagine a program that starts with first principles and derives higher-level approximations that are good enough to model the domain. It might be worth paying the cost of setting up the approximations because they are used frequently. (In other words, the amortized cost of the expansion is low.)
5. Broadly, what mathematical tools can we use on this problem? (One candidate is sketched just below.)
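
One candidate tool for point 5 (offered as a possibility, not a settled answer) is Levin's time-bounded variant of Kolmogorov complexity, which charges a program for its running time as well as its length:

$$
Kt(x) = \min_{p \,:\, U(p) = x} \big( |p| + \log_2 t(p) \big)
$$

Here $U$ is a universal machine, $|p|$ is the length of program $p$ in bits, and $t(p)$ is its running time. A prior built from $Kt$ penalizes exactly the kind of short program that only looks simple because it expands into a big ball of mud at run time, though, as point 4 notes, that penalty may be too blunt when an expensive setup step is amortized over many uses.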
David James
See also Nomic, a game by Peter Suber in which a move is a proposal to change the rules of the game.
David James
I grant that legalese increases the total page count, but I don't think it necessarily changes the depth of the tree very much (by depth I mean how many documents refer back to other documents). I've seen spaghetti towers written in very concise computer languages (such as Ruby) that nevertheless involve perhaps 50+ levels (in this context, a level is a function call).

Agree that it's possible to have small amounts of code describing very complex things, and as I said originally, it's certainly partly spaghetti towers. However, to expand on my example: for something like a down-and-in European call option, I can give you a two-line equation for the payout, or a couple of lines of easily understood Python code with three arguments (strike price, min price, final price) to define the payout, but it takes dozens of pages of legalese instead.
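
For concreteness, a minimal sketch of the kind of payout function I mean (the barrier level is an extra contract term, included here as an assumed fourth argument):

```python
def down_and_in_call_payout(strike, min_price, final_price, barrier):
    """Payout at expiry of a down-and-in European call.

    The option 'knocks in' only if the underlying's running minimum
    (min_price) touched the barrier at some point during the option's
    life; otherwise it expires worthless.
    """
    knocked_in = min_price <= barrier
    return max(final_price - strike, 0.0) if knocked_in else 0.0
```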

My point was that the legal system contains a lot of that kind of fake complexity, in addition to the real complexity from references and complex requirements.


This is an excerpt from the draft of my upcoming book on great founder theory. It was originally published on SamoBurja.com. You can access the original here.

Let’s say you are designing a research program, and you’re realizing that the topic you’re hoping to understand is too big to cover in your lifetime. How do you make sure that people continue your work after you’re gone? Or say you are trying to understand what Aristotle would think about artificial intelligence. Should you spend time reading and trying to understand Aristotle’s works, or can you talk to modern Aristotelian scholars and defer to their opinion? How can you make this decision? Both of these goals require an understanding of traditions of knowledge — in particular, an understanding of whether...

I feel like this post misses one of the most important ways in which a tradition stays alive: through contact with the world.

The knowledge in a tradition of knowledge is clearly about something, and the test of that knowledge is to bring it into contact with the thing it is about.

As an example, a tradition of knowledge about effective farming can stay alive without the institutions discussed in the post through the action of individual farmers. If a farmer has failed to correctly learn the knowledge of the tradition, he'll fail to efficiently...

Disclaimers:

  • Epistemic status: trying to share a simplified model of a thing to make it easier to talk about; confident there’s something there, but not confident that my read of it or this attempt at simplification is good.
  • This post is a rewrite of a talk I gave at a CFAR event that seemed well-received; a couple of people who weren’t there heard about it and asked if I’d explain the thing. I tried to write this relatively quickly and keep it relatively short, which may mean it’s less clear than ideal - happy to hash things out in the comments if so.
  • The thing is much easier to describe if I occasionally use some woo-y language like “aura” and “energy” to gesture in the direction of what I
...

Just wanted to flag that this post was the most helpful single thing I read about social status in the course of writing my own recent posts on that topic (part 1, part 2). Thanks!!
