I recently came across the posts that underpin the usage of "decoupling" within the rationalist community. While useful, I think they conflate Decoupling with a distinct approach to how people think and communicate, which I call Onticism.

ontic /ˈɒntɪk/ adjective PHILOSOPHY

relating to entities and the facts about them;
relating to real as opposed to phenomenal existence.

The two posts are Deep Dive into the Harris-Klein Controversy by John Nerst, and Decoupling vs Contextualising Norms by Chris Leong.

Here are the descriptions of two styles of conversation from Chris' piece:

  • Decoupling norms: It is considered eminently reasonable to require the truth of your claims to be considered in isolation - free of any potential implications. An insistence on raising these issues despite a decoupling request is often seen as sloppy thinking or an attempt to deflect.
  • Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy or an intentional evasion.

Revisiting Bayes

The pithiest expression I've heard of what Bayes' Theorem means is: Evidence should not determine beliefs, only update them. And Bayes' Theorem shows you how. When I first heard it expressed this way, I found it truly beautiful - it captured what I understand to be the core of rational and scientific thinking, i.e. decoupling.

And so, three approaches emerge:

  • Contextualizers: Prior beliefs should determine how we interpret new evidence. If new evidence disagrees with our priors, we should inherently be skeptical of this evidence.
  • Ontics: Evidence tells us what is true. We can make factual statements based on evidence. Our prior beliefs should not be relevant when faced with new evidence.
  • Decouplers: New evidence should update our prior beliefs, even when it disagrees with them. We must temporarily and willfully suspend our prior beliefs when interpreting new evidence so that we can perform this update accurately.

Contextualizers give primacy to Beliefs, Ontics to Evidence, and Decouplers to P(Belief|Evidence) = [P(Belief) * P(Evidence|Belief)] / P(Evidence).

Or: Contextualizers are weak updaters, Ontics are strong updaters and Decouplers are ...? I'm not sure "accurate" is the appropriate word here, but maybe they're just more comfortable with maintaining uncertain beliefs.
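To make the contrast concrete, here's a minimal sketch - my own caricature, not anything from the original posts - rendering the three styles as different update rules applied to the same prior and the same likelihood ratio:

```python
def posterior(prior, likelihood_ratio):
    """Exact Bayesian update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

def contextualizer(prior, likelihood_ratio, trust=0.2):
    """Weak updater: discounts the evidence before applying it."""
    return posterior(prior, likelihood_ratio ** trust)

def ontic(prior, likelihood_ratio):
    """Strong updater: lets the evidence speak, discarding the prior."""
    return posterior(0.5, likelihood_ratio)

def decoupler(prior, likelihood_ratio):
    """Suspends the prior while weighing the evidence, then updates exactly."""
    return posterior(prior, likelihood_ratio)

# Prior of 0.1; evidence 8x likelier if the belief is true than if false.
for rule in (contextualizer, ontic, decoupler):
    print(f"{rule.__name__}: {rule(0.1, 8.0):.2f}")
# contextualizer: 0.14, ontic: 0.89, decoupler: 0.47
```

The decoupler ends up neither dismissing the evidence nor treating it as the whole story - which is perhaps why "comfortable with maintaining uncertain beliefs" fits.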

Non-Contextualizers Unite

The need for "Non-Contextual" thinking is one of the great attractions of the Rationalist Community. Most discourse about how the world works has been dominated by contextual thinking. And contextualizers will often dismiss new evidence that they consider to be heterodox or dangerous.

The rationalist and scientific communities have provided spaces for those who don't find the contextualist approach appealing, useful or accurate. And this means that both Ontics and Decouplers can be found in these spaces. Both are open to discussing and reasoning through new evidence with each other - regardless of beliefs.

But, and this is an important difference, Ontics and Decouplers will not end up believing the same thing after this process. This regularly leads to misunderstandings, and even disillusionment. It's also what I think drives many decouplers to become post-rats.

Non-Contextualizers Disagree

Decouplers expect differences in beliefs to remain even after evaluating the same evidence. This is perfectly reasonable. The suspension of priors while evaluating the evidence is a temporary part of the process of accurately updating beliefs.

Decouplers, though, are uncomfortable with, and skeptical of, many of the conclusions and facts claimed by the Ontics. Ontics, on the other hand, consider it naked bias or cowardice to not accept what the evidence says is the truth.

Relevant[1] to Ontics is Scott's piece about Aristotelians and Bayesianists. Ontics definitely seem to have Aristotelian tendencies, though they also rely heavily on empiricism. It isn't just true/false statements based on logic and deduction (which is restrictive) - it's what they consider a reasonable extrapolation into true/false statements based on what is observed (which is permissive).

Decouplers' discomfort is NOT because they fear the "consequences" of the evidence. The discomfort is because they have a different relationship with calling something the truth. Updating a belief to be close to Probability=1 requires extraordinary precision and evidence. Truths also need to have time permanence: this was true, is true, and will remain true for a reasonably long period of time.
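To put a number on "extraordinary": in odds form, each independent piece of evidence multiplies your odds by its likelihood ratio, so the distance from 50% to 99.9% can be measured in bits. A small sketch, with illustrative numbers of my own:

```python
import math

# Illustrative numbers only: how much evidence it takes to push a belief
# from a 50% prior to "effectively true" (99.9%).
prior, target = 0.5, 0.999

def log_odds_bits(p):
    """Log-odds of p, measured in bits."""
    return math.log2(p / (1 - p))

bits_needed = log_odds_bits(target) - log_odds_bits(prior)
print(f"{bits_needed:.1f} bits of evidence needed")  # ~10.0 bits

# A study whose findings are 4x likelier if the belief is true than if it
# is false contributes log2(4) = 2 bits - so you'd need ~5 such studies,
# all independent, all pointing the same way.
print(f"~{bits_needed / math.log2(4):.0f} independent 4:1 studies")
```

And that's before the permanence question: those ten bits only establish the belief on the data you have, not that it will stay true.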

Ontics are willing to go beyond these constraints. Here Chris describes an example of decoupling norms, which I think aligns more with onticism:

Let's suppose that blue-eyed people commit murders at twice the rate of the rest of the population. With decoupling norms, it would be considered churlish to object to such direct statements of facts. Sure it's unfortunate for anyone who is blue-eyed, but the truth is the truth.

First, are the concepts used in this statement precise enough? For decouplers to consider a statement "true" is an extraordinary step - it therefore needs to withstand nitpicking. Is the meaning and evidence easily derived? Precise color categories and color perception, the system and rules for calling something murder, variations in measurement and outcomes between one group and the other, etc.

Ontics could respond by saying the evidence controls for all these factors. But concepts like eye color, murder and guilt don't come attached with objective definitions. You'd have to define them explicitly and precisely, especially if a concept is central to the statement. In making it precise, the statement will no longer remain in its simple and short form.

And even after this is done, a major question remains: Permanence. Has this statement always been true? Will this statement remain true? Over what time period?
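As a toy illustration of how much definitional baggage the short statement actually carries - every field below is hypothetical, invented for this sketch - a precise version stops being simple:

```python
from dataclasses import dataclass

# Each definition below is invented for illustration; the point is that
# each one is a choice the short statement silently makes for you.
@dataclass
class PreciseClaim:
    eye_color: str    # what counts as "blue-eyed"?
    murder: str       # which legal/statistical definition of "murder"?
    population: str   # measured over whom?
    period: str       # asserted to hold over what time span?

claim = PreciseClaim(
    eye_color="iris hue 190-250 on a calibrated photograph",
    murder="final conviction for murder, excluding manslaughter",
    population="adults in jurisdiction Y, 2000-2020",
    period="2000-2020 only; no claim made about the future",
)
```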

Finally, consider the difference between "blue-eyed people commit" and "blue-eyed people are convicted of". Contextualizers and Ontics might show a preference for one vs. the other. Can two statements of the same fact carry two different meanings? Decouplers would likely opt out altogether.

Facts should be able to withstand nitpicking. Decouplers don't avoid statements like "blue-eyed people commit murders at twice the rate of the rest of the population" because of their consequences. They avoid them because they are imprecise and vague.

Implications and Relevance

Discussing the consequences and responsibilities associated with making certain statements is emotionally fraught. I think this is a worthy conversation in and of itself; but instead, I want to focus on implications, i.e. implications as a logical construct when dealing with facts.

From Chris Leong's post (note that Chris' post summarizes many of the points made in the John Nerst post using the less emotionally charged example of blue eyes instead of race. This assists us in decoupling, which I appreciate, and is why I'm primarily quoting from there):

Sure it's unfortunate for anyone who is blue-eyed, but the truth is the truth. With contextualising norms, you could potentially be criticised for reinforcing the stigma around blue-eyed people. At the very least, you would be expected to have issued a disclaimer to make it clear that you don't think blue-eyed people should be stereotyped as criminals.

If "blue-eyed people commit murders at twice the rate of the rest of the population" is a true statement, it might not imply the stereotype that ALL blue-eyed people are criminals. But it certainly does imply that you should be more vigilant around a blue-eyed person than a brown-eyed one.

Ontics will often be faced with the logical implications of their claimed truth statements, and might have to issue such disclaimers. But the logical implications remain.

Decouplers are not interested in making such statements of fact in the first place - their relationship is with the evidence. And the evidence comes along with all the context and definitions surrounding its measurement and permanence. The evidence only comes into play when it is relevant to a belief.

Chris brings up Zack Davis' piece regarding "relevance norms":

That talking about the higher rate of blue-eyed people who are murderers is relevant when discussing the higher number of blue-eyed people in prison, but irrelevant when someone mentions they are going to date someone with blue eyes, because the base rate is too low.

This is a good point, but it's always the case that we can shift from viewing a phenomenon as a binary at the lowest resolution, then a spectrum, then contextual. Zack worries that a spectrum wouldn't be a useful model as there isn't a general factor of contextualising. I disagree with this - it seems that social scientists lean very heavily toward contextualisation norms and mathematicians towards decoupling norms.

I think this struggle to reconcile contextualisation and decoupling is actually a struggle between contextualisation and onticism. Not decoupling. The difference in relevance between "higher number of blue-eyed people in prison" versus "dating a blue-eyed person" does not present a problem of spectrums or devolving into contextualism for decouplers.

The former is directly relevant to the evidence - it contains questions about what has happened in the past, which directly resulted in blue-eyed people ending up in jail. It doesn't require distinguishing between "commit" and "are convicted of". It makes no claim as to whether the higher numbers will remain long into the future.

And it makes no claims as to the nature of blue-eyed people. Therefore it is irrelevant to whether one should date a blue-eyed person.

Similar Differences

This distinction between ontics and decouplers is similar to what Scott Aaronson describes as the difference between Bullet-Swallowers and Bullet-Dodgers.

Here’s a favorite analogy. The world is a real-valued function that’s almost completely unknown to us, and that we only observe in the vicinity of a single point x0. To our surprise, we find that, within that tiny vicinity, we can approximate the function extremely well by a Taylor series.

“Aha!” exclaim the bullet-swallowers. “So then the function must be the infinite series, neither more nor less.”

“Not so fast,” reply the bullet-dodgers. “All we know is that we can approximate the function in a small open interval around x0. Who knows what unsuspected phenomena might be lurking beyond it?”

“Intellectual cowardice!” the first group snorts. “You’re just like the Jesuit schoolmen, who dismissed the Copernican system as a mere calculational device! Why can’t you accept what our best theory is clearly telling us?”

So who’s right: the bullet-swallowing libertarian Many-Worlders, or the bullet-dodging intellectual kibitzers? Well, that depends on whether the function is sin(x) or log(x).
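A quick numerical gloss on the analogy - my own sketch, with x0 = 1 and degree-11 expansions: sin is entire, so its Taylor series keeps tracking it far from the expansion point, while the series for log has a radius of convergence of 1 and falls apart just outside it.

```python
import math

def taylor_sin(x, x0=1.0, n=11):
    """Degree-n Taylor expansion of sin around x0.
    The k-th derivative of sin at x0 is sin(x0 + k*pi/2)."""
    return sum(math.sin(x0 + k * math.pi / 2) * (x - x0) ** k / math.factorial(k)
               for k in range(n + 1))

def taylor_log(x, x0=1.0, n=11):
    """Degree-n Taylor expansion of log around x0 > 0.
    log(x) = log(x0) + sum_{k>=1} (-1)^(k+1) * (x-x0)^k / (k * x0^k)."""
    return math.log(x0) + sum((-1) ** (k + 1) * (x - x0) ** k / (k * x0 ** k)
                              for k in range(1, n + 1))

for x in (1.5, 3.0):
    print(f"x={x}: sin error={abs(taylor_sin(x) - math.sin(x)):.5f}, "
          f"log error={abs(taylor_log(x) - math.log(x)):.5f}")
# Near x0 both approximations look equally trustworthy; at x=3.0 the sin
# error is still tiny while the log error is enormous.
```

Inside the small interval the two expansions are indistinguishable in quality; the difference only shows up when you extrapolate.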

It is also related to the differences between Subjective and Objective Bayesian Epistemology, specifically in Bayesian Confirmation Theory.

1. u/PolymorphicWetware mentioned this on Reddit.


Comments

Strong agree, I've always been annoyed with how the decoupling vs. contextualizing dichotomy is conflated with the descriptive vs. enactive dichotomy in a way that overlooks, for example, the way that strict Bayesianism would be "contextualizing".