http://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/

A long post; here's the key thesis:

I believe our approach is justified, and in order to explain why – consistent with the project of laying out the basic worldview and epistemology behind our research – I find myself continually returning to the distinction between what I call “sequence thinking” and “cluster thinking.” Very briefly (more elaboration below),

  • Sequence thinking involves making a decision based on a single model of the world: breaking down the decision into a set of key questions, taking one’s best guess on each question, and accepting the conclusion that is implied by the set of best guesses (an excellent example of this sort of thinking is Robin Hanson’s discussion of cryonics). It has the form: “A, and B, and C … and N; therefore X.” Sequence thinking has the advantage of making one’s assumptions and beliefs highly transparent, and as such it is often associated with finding ways to make counterintuitive comparisons.
  • Cluster thinking – generally the more common kind of thinking – involves approaching a decision from multiple perspectives (which might also be called “mental models”), observing which decision would be implied by each perspective, and weighing the perspectives in order to arrive at a final decision. Cluster thinking has the form: “Perspective 1 implies X; perspective 2 implies not-X; perspective 3 implies X; … therefore, weighing these different perspectives and taking into account how much uncertainty I have about each, X.” Each perspective might represent a relatively crude or limited pattern-match (e.g., “This plan seems similar to other plans that have had bad results”), or a highly complex model; the different perspectives are combined by weighing their conclusions against each other, rather than by constructing a single unified model that tries to account for all available information.

A key difference with “sequence thinking” is the handling of certainty/robustness (by which I mean the opposite of Knightian uncertainty) associated with each perspective. Perspectives associated with high uncertainty are in some sense “sandboxed” in cluster thinking: they are stopped from carrying strong weight in the final decision, even when such perspectives involve extreme claims (e.g., a low-certainty argument that “animal welfare is 100,000x as promising a cause as global poverty” receives no more weight than if it were an argument that “animal welfare is 10x as promising a cause as global poverty”).

Finally, cluster thinking is often (though not necessarily) associated with what I call “regression to normality”: the stranger and more unusual the action-relevant implications of a perspective, the higher the bar for taking it seriously (“extraordinary claims require extraordinary evidence”).
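
The two forms are concrete enough to sketch in code. Below is a minimal, illustrative formalization in Python; every function name, number, and weight here is my own assumption, not Karnofsky's actual method. It only shows the structural difference: sequence thinking chains best guesses through one model, while cluster thinking weighs perspectives and "sandboxes" low-robustness extreme claims (the 100,000x-vs-10x cap from the quote).

```python
def sequence_thinking(best_guesses):
    """Single-model reasoning: "A, and B, and C ... and N; therefore X."

    Multiplies a chain of best-guess probabilities (Hanson's cryonics
    estimate has this shape); an error at any step propagates through
    to the conclusion.
    """
    result = 1.0
    for guess in best_guesses:
        result *= guess
    return result


def cluster_thinking(perspectives, low_robustness=0.5, cap=10.0):
    """Weigh many perspectives, sandboxing the low-certainty ones.

    Each perspective is (verdict, robustness), where robustness is the
    opposite of Knightian uncertainty. Extreme verdicts from shaky
    perspectives are capped before being weighed, so a "100,000x" claim
    counts for no more than a "10x" claim.
    """
    weighted_sum = 0.0
    total_weight = 0.0
    for verdict, robustness in perspectives:
        if robustness < low_robustness:
            verdict = min(verdict, cap)  # the "sandbox"
        weighted_sum += robustness * verdict
        total_weight += robustness
    return weighted_sum / total_weight


# Sequence thinking: one model, four best guesses, one bottom line.
print(sequence_thinking([0.8, 0.5, 0.5, 0.2]))  # 0.04

# Cluster thinking: a shaky explicit model claims cause A is 100,000x
# better; two robust perspectives say it's roughly even.
perspectives = [
    (100_000.0, 0.1),  # explicit expected-value model: extreme, low robustness
    (1.0, 0.9),        # track record of similar interventions
    (0.8, 0.8),        # expert opinion
]
print(cluster_thinking(perspectives))  # ~1.4: the extreme claim is sandboxed
```

Note how the final figure barely moves whether the capped perspective claims 10x or 100,000x; that is the sense in which, per the quote, low-certainty arguments are stopped from carrying strong weight no matter how extreme their claims.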

I'm skeptical of any process where, as appears to be the case here, calculated expected values are demoted to a weak tiebreaker. His description implies (though he never says so explicitly) that expected values carry heavy weight only when comparing options within a single domain.

I've only breezed through Holden Karnofsky's full article, but the demotion of calculated expected values seems to apply only where Knightian uncertainty is considerable.

That's his assertion, but his examples don't really seem to support it. Hence my skepticism.

Cluster thinking vs. Pascal's mugger. Thoughts?

[anonymous]

Depends on your clusters...