by curi


Replies to comments on my _Chains, Bottlenecks and Optimization_:

## abramdemski and Hypothesis Generation

> Following the venerated method of multiple working hypotheses, then, we are well-advised to come up with as many hypotheses as we can to explain the data.

I think "come up with as many hypotheses as we can" is intended within the context of some background knowledge (some of which you and I don’t share). There are infinitely many hypotheses that we could come up with. We’d die of old age while brainstorming about just one issue that way. We must consider which hypotheses to consider. I think you have background knowledge filtering out most hypotheses.

Rather than consider as many ideas as we can, we have to focus our limited attention. I propose that this is a major epistemological problem meriting attention and discussion, and that thinking about bottlenecks and excess capacity can help with focusing.

> Now we've got it: we see the need to enumerate every hypothesis we can in order to test even one hypothesis properly. […]

> It's like... optimizing is always about evaluating more and more alternatives so that you can find better and better things.

Maybe we have a major disagreement here?

## abramdemski and Disjunction

> The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true. But some things are disjunctive: some one thing needs to be true. […]

> A conjunction of a number of statements is -- at most -- as strong as its weakest element, as you suggest. However, a disjunction of a number of statements is -- at worst -- as strong as its strongest element.

Yes, introducing optional parts to a system (they can fail, but it still succeeds overall) adds complexity to the analysis. I think we can, should and generally do limit their use.

(BTW, disjunction is conjunction with some inversions thrown in, not something fundamentally different.)

Consider a case where we need to combine 3 components to reach our goal and they all have to work. That’s:

A & B & C -> G

And we can calculate whether it works with multiplication: ABC.

What if there are two other ways to accomplish the same sub-goal that C accomplishes? Then we have:

A & B & (C | D | E) -> G

Using a binary pass/fail model, what’s the result for G? It passes if A, B and at least one of {C, D, E} pass.
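In the binary model the goal is just a boolean formula. A minimal sketch, with made-up pass/fail values for illustration:

```python
def goal_passes(a: bool, b: bool, c: bool, d: bool, e: bool) -> bool:
    """G passes iff A and B pass and at least one of {C, D, E} passes."""
    return a and b and (c or d or e)

# B fails, so the system fails no matter how many alternatives to C exist:
print(goal_passes(True, False, True, True, True))    # False
# All of C, D, E fail, so the disjunction fails:
print(goal_passes(True, True, False, False, False))  # False
# One surviving alternative in the disjunction is enough:
print(goal_passes(True, True, False, True, False))   # True
```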

What about using a probability model? Problematically assuming independent probabilities, G is:

AB(1 - (1-C)(1-D)(1-E))

Or more conveniently:

AB!(!C!D!E)

Or a different way to conceptualize it:

AB(C + D(1 - C) + E(1 - C - D(1 - C)))

Or simplified in a different way:

ABC + ABD + ABE - ABCD - ABCE - ABDE + ABCDE
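These expressions are algebraically equivalent. A quick numeric spot-check, using arbitrary example probabilities and the same independence assumption:

```python
# Arbitrary example success probabilities (independence assumed, as noted above).
A, B, C, D, E = 0.9, 0.7, 0.5, 0.4, 0.3

# Complement form: succeed unless all of C, D, E fail.
g1 = A * B * (1 - (1 - C) * (1 - D) * (1 - E))
# Sequential form: C, else D, else E.
g2 = A * B * (C + D * (1 - C) + E * (1 - C - D * (1 - C)))
# Inclusion-exclusion expansion.
g3 = (A*B*C + A*B*D + A*B*E
      - A*B*C*D - A*B*C*E - A*B*D*E
      + A*B*C*D*E)

assert abs(g1 - g2) < 1e-12 and abs(g1 - g3) < 1e-12
print(round(g1, 6))  # 0.4977
```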

None of this analysis stops e.g. B from being the bottleneck. It does give some indication of greater complexity that comes from using disjunctions.

There are infinitely many hypotheses available to generate about how to accomplish the same sub-goal that C accomplishes. Should we OR together all of them and infinitely increase complexity, or should we focus our attention on a few key areas? This gets into the same issue as the previous section about which hypotheses merit attention.

## Donald Hobson and Disjunction

> Disjunctive arguments are stronger than the strongest link.

> On the other hand, [conjunctive] arguments are weaker than the weakest link.

I don’t think this is problematic for my claims regarding looking at bottlenecks and excess capacity to help us focus our attention where it’ll do the most good.

You can imagine a chain with backup links that can only replace a particular link. So e.g. link1 has 3 backups: if it fails, it’ll be instantly replaced with one of its backups, until they run out. Link2 doesn’t have any backups. Link3 has 8 backups. Backups are disjunctions.

Then we can consider the weakest link-and-backups group and focus our attention there. And we’ll often find it isn’t close: we’re very unevenly concerned about the different groups failing. This unevenness is important for designing systems in the first place (don’t try to design a balanced chain; those are bad) and for focusing our attention.
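A small sketch of that comparison, with made-up per-link failure probabilities (independence assumed, as in the probability model above):

```python
# Each group is (failure probability of a single link, total links incl. backups).
# A group fails only if the link and every one of its backups fail.
groups = {
    "link1": (0.10, 1 + 3),  # 3 backups
    "link2": (0.10, 1 + 0),  # no backups
    "link3": (0.10, 1 + 8),  # 8 backups
}

group_failure = {name: p ** n for name, (p, n) in groups.items()}
weakest = max(group_failure, key=group_failure.get)
print(weakest, group_failure[weakest])
# link2 dominates the risk: 0.1 vs 1e-4 (link1) and 1e-9 (link3) -- it isn't close.
```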

Structures can also be considerably more complicated than this expanded chain model, but I don’t see that that should change my conclusions.

## Dagon and Feasibility

> I think I've given away over 20 copies of _The Goal_ by Goldratt, and recommended it to coworkers hundreds of times.

> The limit is on feasibility of mapping to most real-world situations, and complexity of calculation to determine how big a bottleneck in what conditions something is.

Optimizing software by finding bottlenecks is a counterexample to this feasibility claim. We do that successfully, routinely.
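For example, Python's standard-library profiler makes locating a hot spot routine. A sketch with an invented workload:

```python
import cProfile
import io
import pstats

def cheap():
    return sum(range(100))

def expensive():
    # Deliberately dominates the runtime.
    return sum(i * i for i in range(200_000))

def work():
    for _ in range(50):
        cheap()
    expensive()

pr = cProfile.Profile()
pr.enable()
work()
pr.disable()

# Report the top functions by cumulative time; `expensive` shows up as the bottleneck.
s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```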

Since you’re a Goldratt fan too, I’ll quote a little of what he said about whether the world is too complex to deal with using his methods. From _The Choice_:

> Inherent Simplicity. In a nutshell, it is at the foundation of all modern science as put by Newton: "Natura valde simplex est et sibi consona." And, in understandable language, it means, "nature is exceedingly simple and harmonious with itself."
>
> What Newton tells us is that […] the system converges; common causes appear as we dive down. If we dive deep enough we'll find that there are very few elements at the base—the root causes—which through cause-and-effect connections are governing the whole system. The result of systematically applying the question "why" is not enormous complexity, but rather wonderful simplicity. Newton had the intuition and the conviction to make the leap of faith that convergence happens, not just for the section of nature he examined in depth, but for any section of nature. Reality is built in wonderful simplicity.

# abramdemski's Reply

> Yes, introducing optional parts to a system (they can fail, but it still succeeds overall) adds complexity to the analysis. I think we can, should and generally do limit their use.

There is a big difference between adding unnecessary parts which just complicate things, vs well-laid disjunctions.

For an engineering example, consider backup systems. You can have one power source (e.g. a connection to the power grid) and then another, totally different power source (e.g. a gasoline generator).

Aside from that, it makes sense to build in a lot of redundancy in particular parts of a machine (depending, of course, on application).

But I'm worried that the analogy between engineering and epistemology isn't perfect, here. Having a backup generator is in some ways much more expensive than having a backup theory. And in many cases it will make sense to treat multiple theories as roughly equal alternatives (rather than having a main and a backup).

> (BTW, disjunction is conjunction with some inversions thrown in, not something fundamentally different.)

Sure, it's the De Morgan dual, which means it is closely related but it is opposite in some ways -- particularly, for the discussion at hand.

> None of this analysis stops e.g. B from being the bottleneck. It does give some indication of greater complexity that comes from using disjunctions.
>
> There are infinitely many hypotheses available to generate about how to accomplish the same sub-goal that C accomplishes. Should we OR together all of them and infinitely increase complexity, or should we focus our attention on a few key areas? This gets into the same issue as the previous section about which hypotheses merit attention.

A straightforward engineering answer to this example is that we should focus on adding more alternatives to B, rather than adding more to C. In a situation where there is a top-level conjunction, we can focus on bottlenecks by adding alternatives to those.
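That choice is easy to quantify under the same independence assumption as before. With illustrative numbers, adding a second alternative for the bottleneck B raises P(G) far more than adding another alternative for C would:

```python
def success(a, b_alts, c_alts):
    """P(goal) for A & (any of b_alts) & (any of c_alts), independence assumed."""
    def any_of(ps):
        fail = 1.0
        for p in ps:
            fail *= 1 - p
        return 1 - fail
    return a * any_of(b_alts) * any_of(c_alts)

base     = success(0.9, [0.5], [0.9])        # B is the bottleneck
add_to_b = success(0.9, [0.5, 0.5], [0.9])   # add an alternative to B
add_to_c = success(0.9, [0.5], [0.9, 0.9])   # add an alternative to C instead

print(base, add_to_b, add_to_c)  # ~0.405, ~0.6075, ~0.4455
```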

This is similar to theorists noticing that quantum gravity is a trouble spot in our understanding of the world, and as a consequence, creating many competing theories of quantum gravity.

I'm not sure we have any disagreement with respect to this part, I'm just responding to your remarks.

> There are infinitely many hypotheses that we could come up with. We’d die of old age while brainstorming about just one issue that way. We must consider which hypotheses to consider. I think you have background knowledge filtering out most hypotheses.

Right, sure. I should have said something more like: we are well-advised to avoid the pitfall of latching onto one hypothesis or a small number of overly similar hypotheses.

Here's a concrete piece of advice which I endorse: whenever you notice that your analysis of something feels finished, as a matter of principle, consider thinking up a very different hypothesis. You don't necessarily have to do it, since you'd get stuck in an infinite loop of analysis if you enforced it as a rule. But it's important to do it often.

> Rather than consider as many ideas as we can, we have to focus our limited attention. I propose that this is a major epistemological problem meriting attention and discussion, and that thinking about bottlenecks and excess capacity can help with focusing.

I think that, due to working memory constraints, our innate evolved heuristics tend toward seeing the world in just one way. So, for scientific thinking, it's particularly important to pull in the other direction.

People (I believe) don't naturally fall into a trap of being lost in a sea of too many hypotheses, because people have to generate each hypothesis and explicitly consider it -- which takes time. It's not like you come upon a haystack and are looking for a needle. It's more like trying to guess Nature's password. You have to explicitly construct more guesses.

So if you are explicitly setting out to spend more time thinking (in order to reach better conclusions), then coming up with more guesses is going to tend to be a good use of time.

As such, I still mostly stand by this statement you quoted:

> It's like... optimizing is always about evaluating more and more alternatives so that you can find better and better things.

(With the exception of cases resembling calculus, where you can optimize without trying any options, because you can solve for the optimum based on your prior information -- i.e., cases where logic is enough to narrow down the right answer.)

Granted, you absolutely can fall into a trap of not focusing on any one hypothesis for long enough.